Database monitoring, SQL optimization & transactions
What is a query execution plan?
A query plan, also known as an execution plan or query execution plan, is a detailed, step-by-step blueprint that the database management system (DBMS) uses to execute a specific SQL query. The query plan is generated by the query optimizer, a component of the DBMS responsible for determining the most efficient way to execute a query based on the database schema, indexes, statistics, and other factors.
The query plan provides insights into how the DBMS will retrieve and process the data to satisfy the query, including details on which indexes will be used, the order of table access, and the algorithms employed for sorting and joining data. Understanding and analyzing the query plan can be crucial for optimizing the performance of SQL queries.
How to find a query execution plan?
Many relational database systems support the EXPLAIN statement, which provides information about how a query will be executed without actually executing it.
For example:
EXPLAIN SELECT column1, column2 FROM my_table WHERE column1 = 'empty' AND column2 = 'non_empty';
Different database systems have specific commands to obtain query plans. For example:
- PostgreSQL: use EXPLAIN or EXPLAIN ANALYZE
- MySQL: use EXPLAIN (EXPLAIN ANALYZE since 8.0.18; the older EXPLAIN EXTENDED is deprecated in modern versions)
- SQL Server: use SET SHOWPLAN_XML ON
- MariaDB: EXPLAIN with many more options…
Feel free to check the documentation or manual for your SQL flavour; almost everything is there!
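For a quick orientation, here is a rough sketch of the vendor-specific variants, reusing the my_table example from above (table and column names are illustrative):
-- PostgreSQL: EXPLAIN ANALYZE actually executes the query and reports real timings
EXPLAIN ANALYZE SELECT column1, column2 FROM my_table WHERE column1 = 'empty';
-- MySQL / MariaDB (newer versions): structured plan output
EXPLAIN FORMAT=JSON SELECT column1, column2 FROM my_table WHERE column1 = 'empty';
-- SQL Server: return the estimated plan as XML instead of executing the statement
SET SHOWPLAN_XML ON;
SELECT column1, column2 FROM my_table WHERE column1 = 'empty';
SET SHOWPLAN_XML OFF;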
Some database management tools provide graphical representations of query plans, for example, DBeaver.
IntelliJ IDEA supports two types of execution plans:
- Explain Plan: the result is shown in a mixed tree and table format on a dedicated Plan tab.
- Explain Plan (Raw): the result is shown in a table format.
How to interpret a query execution plan?
Table access: look for information on how tables are accessed, including whether full table scans or index scans are used. Consider whether indexes are being utilized effectively.
Joins: check how joins between tables are executed. Different join algorithms (nested loops, hash joins, merge joins) have different performance characteristics. The choice of the join algorithm depends on the size of the tables and the available indexes.
Filter predicates: examine the conditions used to filter rows. Ensure that indexes are used for selective conditions and that the query is leveraging the available statistics.
Sorting and group operations: check for any sorting or grouping operations. Determine if the query plan is using indexes or other methods to satisfy these operations.
Index usage: verify that indexes are being used efficiently. Check if the indexes cover the columns needed for the query and if they are selective.
Parallel execution: some query plans may involve parallel execution, where multiple processes are used to speed up the query. Understand if and how parallelism is being employed.
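As a simple illustration of the table access and index usage points above, one can compare the plan before and after adding an index (a sketch reusing the my_table example; the index name is made up):
-- Without an index on column1, expect a sequential / full table scan
EXPLAIN SELECT column1, column2 FROM my_table WHERE column1 = 'empty';
-- Add an index on the filtered column
CREATE INDEX idx_my_table_column1 ON my_table (column1);
-- Re-run: for a selective predicate the plan should now show an index scan
EXPLAIN SELECT column1, column2 FROM my_table WHERE column1 = 'empty';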
Use flame graph!
In IntelliJ IDEA, the Flame Graph is part of the Query Execution Plan feature. A flame graph in the context of SQL typically refers to a visualization technique used for profiling and analyzing the performance of SQL queries or database operations. While flame graphs, in the broader sense, are often associated with the visualization of stack traces in programming, a flame graph in the SQL domain focuses on representing the execution flow and time distribution of SQL queries. Sometimes flame graphs represent not only SQL queries but also the REST API requests associated with them, along with the microservices that handle the complete execution flow (for example, in Datadog).
Get familiar with database profiler
A database profiler is a tool or feature provided by database management systems (DBMS) to capture and analyze information about the execution of SQL queries and operations against the database. Profilers are valuable for performance tuning, optimization, and troubleshooting, as they allow database administrators and developers to identify bottlenecks, inefficient queries, and areas for improvement in the database system.
Profilers provide access to query execution plans, but they offer many more features.
Profilers capture detailed statistics about the execution of SQL queries, including the time taken for execution, resource usage (CPU, memory, disk I/O), and the number of rows affected.
They also report on:
- locking and blocking issues, helping to identify situations where transactions are contending for the same resources and causing delays
- the start and end of transactions, as well as their duration and resource consumption, which is essential for understanding the impact of transactions on overall system performance
- active database sessions and connections, including the users accessing the database, the duration of their sessions, and the resources they are consuming
- errors and exceptions encountered during query execution
Some profilers also offer real-time monitoring capabilities.
Finally, profilers may support the creation of triggers or events that automatically capture information when specific conditions are met. For example, a profiler might capture information whenever a query takes longer than a defined threshold.
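As a concrete example, in PostgreSQL the pg_stat_statements extension provides basic profiling data; a minimal sketch (the extension must be listed in shared_preload_libraries, and the timing column names vary slightly between PostgreSQL versions):
-- Enable the extension once per database
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Find the queries with the highest cumulative execution time
SELECT query, calls, total_exec_time, mean_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;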
How to optimize a database?
Use indexes wisely
Ensure that tables are appropriately indexed based on the queries being executed. Analyze if the existing indexes are being utilized effectively.
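A minimal sketch (the orders table and index names are illustrative):
-- Index a column that appears in selective WHERE clauses
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
-- A composite index can serve queries filtering on both columns
CREATE INDEX idx_orders_customer_status ON orders (customer_id, status);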
Update statistics
Regularly update table statistics to provide the query optimizer with accurate information about the distribution of data in tables.
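The exact command depends on the database; for example (the table name is illustrative):
-- PostgreSQL: refresh planner statistics for one table (or run ANALYZE without arguments for all tables)
ANALYZE orders;
-- MySQL / MariaDB
ANALYZE TABLE orders;
-- SQL Server
UPDATE STATISTICS orders;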
Consider query rewriting
In some cases, rewriting the query or restructuring the schema can lead to more efficient query plans.
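A typical example is replacing an IN subquery with an equivalent join or EXISTS; many modern optimizers perform this transformation automatically, but it is a useful pattern to recognize (table names are illustrative):
-- Subquery form
SELECT o.* FROM orders o
WHERE o.customer_id IN (SELECT c.id FROM customers c WHERE c.country = 'PL');
-- Equivalent join, assuming customers.id is unique
SELECT o.* FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE c.country = 'PL';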
Avoid functions on indexed columns
Avoid wrapping indexed columns in functions or expressions within WHERE clauses, as this may prevent the use of indexes.
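For instance (a sketch with an illustrative users table):
-- The function call on the column usually prevents a plain index on email from being used
SELECT * FROM users WHERE LOWER(email) = 'anna@example.com';
-- Prefer a predicate on the bare column ...
SELECT * FROM users WHERE email = 'anna@example.com';
-- ... or, where supported (e.g. PostgreSQL expression indexes, MySQL 8.0 functional indexes),
-- index the expression itself
CREATE INDEX idx_users_email_lower ON users (LOWER(email));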
Thoroughly check joins
Ensure that join conditions are well-defined and that indexes are available for columns used in join conditions.
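For example (illustrative schema), the foreign-key column used in the join condition is a natural candidate for an index:
CREATE INDEX idx_order_items_order_id ON order_items (order_id);
SELECT o.id, i.product_id
FROM orders o
JOIN order_items i ON i.order_id = o.id;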
Review transactions
Think about transactions: are they used effectively? Is locking strategy adequate to the purpose?
What is a transaction?
In SQL, a transaction is a sequence of one or more SQL statements that are executed as a single, indivisible unit of work. The properties of a transaction are often described by the ACID properties, which stand for Atomicity, Consistency, Isolation, and Durability.
In general, transaction phases are:
- acquiring lock
- read
- update
- validation
- commit or rollback
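A classic illustration is a money transfer, where both updates must succeed or neither may take effect (a sketch; BEGIN vs START TRANSACTION and error handling differ slightly between databases):
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- If anything goes wrong before COMMIT, ROLLBACK undoes all changes made in the transaction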
When to use transactions?
Transactions are a good solution in the following situations:
- to perform multiple database operations as a cohesive unit, such as updating multiple tables, inserting records, or deleting data
- enforce data integrity and consistency
- financial, banking, e-commerce
- concurrency
- where a large number of records need to be updated or processed
- offline / online synchronization
- when rollback is needed as an option
- for isolation
- complex operations (multiple steps)
- ensuring durability of data
When not to use transactions?
In these cases, avoid transactions:
- simple read-only operations
- where high concurrency is a top priority and conflicts are unlikely
- individual, independent operations
- performance-critical scenarios
- where data is cached or denormalized
- bulk data loading or large-scale batch processing
- non-critical data, short-lived operations
- logging & auditing
Deadlock
A deadlock in SQL occurs when two or more transactions are blocked, each waiting for the other to release a lock on a resource, resulting in a circular waiting condition. Most database systems detect deadlocks and resolve them by rolling back one of the transactions; even so, deadlocks should be prevented where possible, for example by acquiring locks in a consistent order and keeping transactions short.
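An illustrative two-session sketch (table and ids are made up); acquiring locks in a consistent order, e.g. always the lower id first, prevents this particular cycle:
-- Session A:
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- locks row 1
-- Session B:
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 2;  -- locks row 2
-- Session A:
UPDATE accounts SET balance = balance + 10 WHERE id = 2;  -- waits for B
-- Session B:
UPDATE accounts SET balance = balance + 10 WHERE id = 1;  -- waits for A: deadlock; the DBMS rolls back one of the transactions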
Optimistic lock
Locking is a way of preventing lost updates. An optimistic lock checks whether the value to be updated has changed since it was last read. The optimistic locking approach allows multiple transactions to proceed with their operations without acquiring locks on the data. Instead, it relies on a mechanism to detect conflicts and resolve them at the time of committing the changes.
The flow is as follows:
Read (1): the transaction reads the data and records some form of version identifier associated with it (e.g. a timestamp, a version number, a hash value); it does not acquire any lock.
Update (2): the transaction modifies the data.
Validation (3): it checks for any modifications done by another transaction in the meantime, typically by comparing the recorded version identifier with the current version of the data.
Commit / rollback (4): if nothing has changed, commit; otherwise perform a rollback or conflict resolution.
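A common sketch uses a version column (table and column names are illustrative):
-- Read (1): remember the version together with the data
SELECT id, balance, version FROM accounts WHERE id = 42;   -- suppose version = 7
-- Update (2) + validation (3) in one statement: it only matches if nobody changed the row in the meantime
UPDATE accounts
SET balance = 150, version = version + 1
WHERE id = 42 AND version = 7;
-- Commit / rollback (4): if the UPDATE affected 0 rows, another transaction got there first,
-- so the application rolls back and retries or resolves the conflict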
When is optimistic locking a good strategy?
With high concurrency requirements: in scenarios where high levels of concurrent access to the data are crucial, optimistic locking can be more suitable. It allows multiple transactions to read and modify data concurrently, reducing contention and increasing overall system performance.
With low risk of conflicts: when the likelihood of conflicts between transactions is low, optimistic locking is an efficient choice. If the data is not frequently updated by multiple transactions simultaneously, the overhead of acquiring and releasing locks may be unnecessary.
For short transactions: optimistic locking is well-suited for short-duration transactions where the time between reading and updating the data is minimal. Short transactions reduce the window during which conflicts might occur, making it less likely for two transactions to modify the same data concurrently.
When optimizing read-heavy workloads: in situations where the workload is predominantly read-heavy, and write operations are infrequent, optimistic locking can be effective. Readers are not impeded by locks, and conflicts during write operations are addressed when they occur.
To reduce lock contention: optimistic locking helps in reducing lock contention (competition for acquiring locks). By allowing multiple transactions to read data simultaneously and only checking for conflicts at the time of update, contention is minimized.
Optimistic locking is often more compatible with distributed systems. In scenarios where data is distributed across multiple nodes or databases, acquiring locks might be challenging or impractical. Optimistic locking allows for a more decentralized approach.
Optimistic locking is commonly used in scenarios where data may be edited offline, and changes need to be merged with the central database. Each offline editor can make changes independently, and conflicts are resolved when attempting to merge the changes. It seems to be the way the deck synchronization in Anki works.
It is also a more scalable solution for systems with a large number of transactions and a desire to reduce the load on the database caused by acquiring and releasing locks.
Pessimistic lock
It is another way of preventing lost updates. A pessimistic lock explicitly forces other threads to wait until an update is done.
In this strategy, a lock is acquired and held until commit or rollback. In many cases it involves exclusive locks; another type of pessimistic lock is a shared lock, which might later be escalated to an exclusive lock.
Pessimistic locking may lead to deadlocks, and it is often associated with higher isolation levels, providing more consistency at the cost of concurrency.
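A minimal sketch using SELECT ... FOR UPDATE (supported by e.g. PostgreSQL, MySQL and Oracle; SQL Server uses lock hints such as WITH (UPDLOCK) instead; names are illustrative):
BEGIN;
-- The selected row is locked until COMMIT or ROLLBACK; other writers wait
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 42;
COMMIT;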
When is pessimistic locking a good strategy?
Where certain sections of code or database operations are critical and must be executed without interference from other transactions, pessimistic locking can be beneficial. This ensures that only one transaction at a time can access or modify the protected resource.
When maintaining data integrity is a top priority, pessimistic locking can be appropriate. For example, if an application enforces business rules that require consistency in data relationships, acquiring locks during transactions helps prevent concurrent modifications that could violate those rules.
In situations where transactions involve resource-intensive operations or complex calculations, pessimistic locking can be used to avoid conflicts and ensure that a transaction completes without interference from other transactions. Also, when transactions involve multiple steps or span different parts of the application, pessimistic locking can be used to ensure that the entire transaction is executed atomically without interference from other transactions.
Pessimistic locking is effective in preventing race conditions, where multiple transactions compete to read or modify the same data simultaneously. By acquiring locks, the system can control access and avoid conflicts.
In batch processing scenarios where large volumes of data are processed, pessimistic locking can help maintain order and prevent concurrent transactions from affecting each other. This is especially important when the order of processing is crucial.
In distributed systems with shared resources, pessimistic locking can be used to ensure that only one node at a time makes modifications to a shared resource.
What is ACID?
Transactions should be designed and implemented according to the ACID rules.
Atomicity (A) ensures that a transaction is treated as a single, indivisible unit of work. Either all the changes made within the transaction are committed to the database, or none of them are. If any part of the transaction fails, the entire transaction is rolled back to its previous state.
Consistency (C) ensures that a transaction brings the database from one valid state to another. The database must satisfy certain integrity constraints before and after the transaction. If a transaction violates any integrity constraints, the database is left unchanged.
Isolation (I) ensures that the execution of one transaction is isolated from the execution of other transactions, even if they are executed concurrently.
Durability (D) guarantees that once a transaction is committed, its effects are permanent and survive subsequent system failures. The changes made by the transaction are stored in non-volatile storage (such as disk) and can be recovered even if the system crashes or restarts.