Analysis and Solution of the MySQL Duplicate Key Exception

Keywords: Database Java JDBC MyBatis

During database synchronization, MySQL reported the following error:

org.springframework.dao.DuplicateKeyException:
### Error updating database. Cause: com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException:XXX
### The error may involve com.jd.medicine.b2c.trade.center.daoHistory.RxOrderHistoryDao.addRxOrder-Inline
### The error occurred while setting parameters
### SQL: INSERT INTO XXX (... ) VALUES ( ? ,? ,? )
### Cause: error: code = AlreadyExists desc = Duplicate entry 'XXXXX' for key 'PRIMARY' (errno 1062) (sqlstate 23000) during query: insert into XXX( ...) values (...); 
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:245)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:71)
at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:364)
at com.sun.proxy.$Proxy17.insert(Unknown Source)
at org.mybatis.spring.SqlSessionTemplate.insert(SqlSessionTemplate.java:236)
at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:46)
at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:43)
at com.sun.proxy.$Proxy25.addRxOrder(Unknown Source)
...
... 49 common frames omitted

According to the error message, the failure was caused by inserting a row whose primary key already exists.
The business logic here is real-time database synchronization: library A holds the master data, and its changes are replicated to library B in real time.
1. Receive library A's changes in real time: as soon as a change happens, a binlog event arrives.
2. Using the primary key from the binlog, first query library B to see whether the row exists; update it if it does, insert it if it does not.
The problem is in the second step. Under high concurrency, two identical binlog events can arrive at the same time.
The first one queries library B, finds no such row, and performs the insert; this is the normal path.
The second binlog arrives before the first insert has committed, so its query against library B also finds nothing, and it performs an insert as well.
By that point the row has already been written to library B, so the second insert fails with the duplicate primary key error.
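To make the race concrete, here is a minimal Java sketch of the check-then-insert flow in step 2. Only RxOrderHistoryDao.addRxOrder appears in the stack trace above; the service class, findById and updateRxOrder are illustrative assumptions, not the actual project code:

// Illustrative sketch of the vulnerable check-then-insert pattern.
// Only addRxOrder comes from the stack trace; the other names are assumed.
public class RxOrderSyncService {

    private final RxOrderHistoryDao rxOrderHistoryDao;

    public RxOrderSyncService(RxOrderHistoryDao rxOrderHistoryDao) {
        this.rxOrderHistoryDao = rxOrderHistoryDao;
    }

    public void onBinlogEvent(RxOrder order) {
        // Step 2: query library B by primary key first.
        RxOrder existing = rxOrderHistoryDao.findById(order.getId());
        if (existing != null) {
            rxOrderHistoryDao.updateRxOrder(order);   // row already there -> update
        } else {
            // Two threads handling the same binlog can both reach this branch before
            // either insert commits; the later insert then fails with
            // "Duplicate entry ... for key 'PRIMARY'" (errno 1062).
            rxOrderHistoryDao.addRxOrder(order);
        }
    }
}

Nothing between the query and the insert serializes concurrent handlers, which is exactly the window the two identical binlog events fall into.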

Solutions:
Solution 1: Reduce the concurrency on the business side.
Solution 2: On every insert, cache the primary key in Redis with a 2-second expiration.
Then check Redis before each insert: if the key is found, the row has already been inserted, so the duplicate insert can be skipped; this also acts as deduplication. See the sketch below.
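A minimal sketch of the Redis check for Solution 2, assuming Spring Data Redis 2.1+ is available; the guard class, key prefix and method names are assumptions, not the original code. setIfAbsent with a 2-second expiration means only the first handler for a given primary key is allowed to insert, and the duplicate sees the key and skips:

import java.time.Duration;

import org.springframework.data.redis.core.StringRedisTemplate;

// Illustrative dedup guard; class name and key prefix are assumptions.
public class BinlogDedupGuard {

    private final StringRedisTemplate redisTemplate;

    public BinlogDedupGuard(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Returns true if this primary key has not been seen in the last 2 seconds,
    // i.e. the caller may proceed with the insert. The key expires on its own.
    public boolean tryAcquire(String primaryKey) {
        Boolean first = redisTemplate.opsForValue()
                .setIfAbsent("binlog:dedup:" + primaryKey, "1", Duration.ofSeconds(2));
        return Boolean.TRUE.equals(first);
    }
}

In the binlog handler, call tryAcquire with the row's primary key before inserting and skip (or route to the update path) when it returns false; the SETNX-style write is atomic in Redis, so two concurrent handlers cannot both get true for the same key.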

Posted by edraynham on Sun, 27 Jan 2019 12:12:15 -0800