When two or more users access the same tables at the same time, they’re considered concurrent users. If the operations that they are performing are simple reads, then SQL Server can easily accommodate both users simultaneously. But once one or more users begin updating and inserting records into a table that someone else is reading, things start to get a little bit more complicated.

A lost update occurs when two or more users attempt to update the same record at the same time. Both users see the same initial value in a field, but when they try to update it, one user's change is immediately overwritten by the other's. The problem is that both users receive a success message when their update completes, but if they both immediately read the data back, one user will see a value they didn't expect.
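The lost update can be sketched in plain Python, with a dict standing in for the table row (no real SQL Server involved; the names and numbers are illustrative):

```python
# Simulated table row; both "users" read it before either writes.
row = {"balance": 100}

user_a_read = row["balance"]   # user A reads 100
user_b_read = row["balance"]   # user B reads 100 at the same time

# Each computes an update from their own stale read and writes it back.
row["balance"] = user_a_read + 50   # user A's +50 "succeeds"
row["balance"] = user_b_read + 25   # user B's +25 silently overwrites it

print(row["balance"])  # 125, not the 175 both users would expect
```

User A's increment is lost even though both writes reported success, which is exactly the surprise described above.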

An uncommitted dependency, or a dirty read, happens when one user reads data from a table that is already in the process of being changed by a transaction initiated by another user. Transactions have the ability to roll back changes if certain conditions are met, so the first user might read a value that has yet to be fully committed. If the second user's transaction fails and the changes are rolled back, then the first user will have seen a view of the data that was never actually saved in the first place.
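A minimal sketch of the dirty read, using two dicts to stand in for committed data and a writer's uncommitted working copy (illustrative only; this is not real SQL Server behavior, just the shape of the problem):

```python
committed = {"price": 10}       # the last committed state of the row
working = dict(committed)       # the writer's in-flight, uncommitted copy

working["price"] = 99           # writer updates inside an open transaction

# A reader ignoring locks (as under READ UNCOMMITTED) sees the in-flight value.
dirty_value = working["price"]

# The writer's transaction fails, so the change is rolled back.
working = dict(committed)

print(dirty_value, committed["price"])  # 99 10 -- the reader saw a value that was never committed
```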

Inconsistent analysis, or a nonrepeatable read, occurs when two or more transactions are operating on the same data. In this scenario, one transaction reads a value in preparation for some operation. Then, another transaction sneaks in, modifies the value, and fully commits it. When the first transaction comes back to the value, it finds that it's been updated and is inconsistent with the value that it saw previously. This is similar to an uncommitted dependency, except that in an inconsistent analysis, the read operation happens twice, once on either side of a fully committed transaction.
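The two-reads-around-a-commit pattern can be sketched like this, again with a dict as a stand-in table (hypothetical column name and values):

```python
table = {"qty": 5}

first_read = table["qty"]    # transaction 1 reads the value

table["qty"] = 8             # transaction 2 updates AND commits in between

second_read = table["qty"]   # transaction 1 repeats the same read

print(first_read, second_read)  # 5 8 -- the repeated read disagrees with the first
```

Note the difference from the dirty read: here the intervening change is fully committed, so transaction 1 has read two perfectly "valid" values that are nonetheless inconsistent with each other.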

Phantom reads occur when a transaction reads data twice and another transaction has inserted or deleted a record in between those two actions. In the case of an insert, the second read will pick up a new record that didn't exist in the first read. And in the case of a delete, the second read will be missing a record that was present in the first read. Either way, in the phantom read scenario the second read returns more or fewer records than the first, when the two reads are expected to match.
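A phantom read differs from a nonrepeatable read in that it affects which rows a range query returns, not the value of a single row. A sketch with an illustrative list-of-dicts "table" and a hypothetical range query:

```python
orders = [{"id": 1, "total": 40}, {"id": 2, "total": 60}]

def read_large_orders():
    # The "range query": ids of all orders with total >= 50.
    return [o["id"] for o in orders if o["total"] >= 50]

first = read_large_orders()             # [2]

# Another transaction inserts a qualifying row and commits in between.
orders.append({"id": 3, "total": 75})

second = read_large_orders()            # [2, 3] -- id 3 is the phantom

print(first, second)
```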

And finally, missing and double reads. This occurs in a similar fashion to a phantom read, only the second, intermediate transaction is affecting the index of a table. If one transaction is scanning through the records, and a second transaction alters the position of a record while the scan is still in progress, then the scan might miss a record if it's moved behind the point the scan has reached, or read a record twice if it's moved ahead.
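The effect of a mid-scan key change can be sketched with a sorted list of keys standing in for an index (the fruit names are purely illustrative; real index internals are more involved):

```python
keys = ["banana", "cherry", "plum"]   # index keys, scanned in sorted order

seen = []
pos = 0
while pos < len(keys):
    seen.append(keys[pos])
    if pos == 0:
        # Concurrent transaction re-keys "plum" to "apricot", which sorts
        # to the front of the index while the scan is mid-flight.
        keys.remove("plum")
        keys.append("apricot")
        keys.sort()               # ["apricot", "banana", "cherry"]
    pos += 1

print(seen)  # ['banana', 'banana', 'cherry']
```

The scan never sees the re-keyed row (a missing read), and the shift in positions makes it return "banana" twice (a double read), even though the table held exactly three rows throughout.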

Sources:

https://www.linkedin.com/learning/designing-database-solutions-for-sql-server-2016/locking-and-concurrency-strategy

Last modified: March 19, 2019
