Five's Weblog

November 2, 2007

Transaction Concurrency (1)

Filed under: Five's thought — by powerdream5 @ 10:07 pm

       One of the big differences between an enterprise application and a normal web application is that an enterprise application must handle concurrency. Enterprise applications are usually multiuser applications, in which several transactions frequently update the database at the same time. Before we can deal with concurrency, it is useful to know more about it.


       In fact, the problems caused by concurrency fall into several categories. They are:
       Lost update : Two transactions both update a row, and then the second transaction aborts, causing both changes to be lost.
       Dirty read : One transaction reads changes made by another transaction that hasn't yet been committed. If the second transaction rolls back, the data the first transaction read is incorrect!
       Unrepeatable read : A transaction reads a row twice and sees a different state each time. For example, another transaction may have written to the row, and committed, between the two reads.
       Second lost updates problem : A special case of an unrepeatable read. Imagine that two concurrent transactions both read a row; one writes to it and commits, and then the second writes to it and commits. The changes made by the first writer are lost. This situation is similar to a lost update, but note the difference: here both transactions commit.
       Phantom read : A transaction executes a query twice, and the second result set includes rows that weren't visible in the first result set. This situation is caused by another transaction inserting new rows between the two executions of the query.
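The second lost updates problem above can be sketched in a few lines of plain Java. This is only a toy simulation (the in-memory map standing in for a table, and the "stock" key, are made up for illustration; no real transactions are involved), but it shows how two interleaved read-then-write sequences silently drop the first writer's change:

```java
import java.util.HashMap;
import java.util.Map;

public class SecondLostUpdate {
    // Simulates two interleaved "transactions" against a toy in-memory table.
    static int run() {
        Map<String, Integer> db = new HashMap<>();
        db.put("stock", 10);

        int txA = db.get("stock"); // transaction A reads 10
        int txB = db.get("stock"); // transaction B also reads 10

        db.put("stock", txA - 3);  // A sells 3 and commits: stock = 7
        db.put("stock", txB - 2);  // B sells 2 and commits: stock = 8
                                   // A's committed update is overwritten

        return db.get("stock");    // 8, although only 5 items really remain
    }

    public static void main(String[] args) {
        System.out.println(SecondLostUpdate.run()); // prints 8
    }
}
```

With repeatable read isolation (or the optimistic locking discussed later), transaction B's write based on a stale read would be blocked or rejected instead of silently overwriting A's commit.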

     OK, it is time to handle concurrency now. Serializing transactions (making them execute strictly one after another) is usually not a good approach, because it reduces the performance of the application. Instead, the SQL standard defines several transaction isolation levels. They are:
     Read committed : Permits unrepeatable reads but not dirty reads. Reading transactions don't block other transactions from accessing a row. However, an uncommitted writing transaction blocks all other transactions from accessing the row.
     Repeatable read : Permits neither unrepeatable reads nor dirty reads. Phantom reads may still occur. Reading transactions block writing transactions (but not other reading transactions), and writing transactions block all other transactions. (Read uncommitted and serializable are the other two standard levels, mentioned below.)

     In the Hibernate framework, we can set the isolation level for JDBC connections using a Hibernate configuration option, for example hibernate.connection.isolation = 4 for repeatable read. Here 1 means read uncommitted isolation, 2 means read committed isolation, 4 means repeatable read isolation, and 8 means serializable isolation; if the option is not set, Hibernate simply uses the database's own default. In most cases, we prefer 2.
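Those numbers are not invented by Hibernate; they are the transaction isolation constants defined on java.sql.Connection, which we can check directly:

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        // hibernate.connection.isolation takes the java.sql.Connection values:
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // 8
    }
}
```

So "hibernate.connection.isolation = 2" is equivalent to calling connection.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED) on every pooled connection.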

     However, it is not a good idea to leave concurrency entirely to the database; we think handling it is also the responsibility of the application. In my next article, I would like to write about how to deal with concurrency using optimistic locking and pessimistic locking.


