sharding-jdbc read-write separation

Keywords: Programming SQL Database JDBC MySQL

Before analyzing the source code, read the official documentation on read/write splitting. These are my study notes. WeChat public account: Madad's Technical Wheel

Core concepts

  • Master database: handles insert, update, and delete operations
  • Slave database: handles query operations; multiple slave databases can be configured
  • Multi-master, multi-slave read-write splitting requires combining master-slave rules with sharding

Source code analysis

1. Startup entry:

public class JavaConfigurationExample {
    
//    private static ShardingType shardingType = ShardingType.SHARDING_DATABASES;
//    private static ShardingType shardingType = ShardingType.SHARDING_TABLES;
//    private static ShardingType shardingType = ShardingType.SHARDING_DATABASES_AND_TABLES;
    private static ShardingType shardingType = ShardingType.MASTER_SLAVE;
//    private static ShardingType shardingType = ShardingType.SHARDING_MASTER_SLAVE;
//    private static ShardingType shardingType = ShardingType.SHARDING_VERTICAL;
    
    public static void main(final String[] args) throws SQLException {
        DataSource dataSource = DataSourceFactory.newInstance(shardingType);
        CommonService commonService = getCommonService(dataSource);
        commonService.initEnvironment();
        commonService.processSuccess();
        commonService.cleanEnvironment();
    }
    
    private static CommonService getCommonService(final DataSource dataSource) {
        return new CommonServiceImpl(new OrderRepositoryImpl(dataSource), new OrderItemRepositoryImpl(dataSource));
    }
}

2. Taking sharding-jdbc as an example, the master-slave read-write splitting configuration is as follows:

@Override
public DataSource getDataSource() throws SQLException {
    //Master slave configuration
    MasterSlaveRuleConfiguration masterSlaveRuleConfig = new MasterSlaveRuleConfiguration(/*Master-slave naming*/"demo_ds_master_slave", /*Main library*/"demo_ds_master", /*Slave Library*/Arrays.asList("demo_ds_slave_0", "demo_ds_slave_1"));
    //Print sql
    Properties props = new Properties();
    props.put("sql.show", true);
    //Create the MasterSlaveDataSource
    return MasterSlaveDataSourceFactory.createDataSource(createDataSourceMap(), masterSlaveRuleConfig, props);
}

private Map<String, DataSource> createDataSourceMap() {
    Map<String, DataSource> result = new HashMap<>();
    //Master database
    result.put("demo_ds_master", DataSourceUtil.createDataSource("demo_ds_master"));
    //Two slave databases
    result.put("demo_ds_slave_0", DataSourceUtil.createDataSource("demo_ds_slave_0"));
    result.put("demo_ds_slave_1", DataSourceUtil.createDataSource("demo_ds_slave_1"));
    return result;
}

Create the MasterSlaveDataSource, the master-slave data source:

public MasterSlaveDataSource(final Map<String, DataSource> dataSourceMap, final MasterSlaveRuleConfiguration masterSlaveRuleConfig, final Properties props) throws SQLException {
        super(dataSourceMap);
        //Cache the database metadata
        cachedDatabaseMetaData = createCachedDatabaseMetaData(dataSourceMap);
        //Master-slave rule configuration
        this.masterSlaveRule = new MasterSlaveRule(masterSlaveRuleConfig);
        //Master-slave SQL parse engine
        parseEngine = new MasterSlaveSQLParseEntry(getDatabaseType());
        shardingProperties = new ShardingProperties(null == props ? new Properties() : props);
    }

3. Execute the insert method

@Override
    public Long insert(final Order order) throws SQLException {
        String sql = "INSERT INTO t_order (user_id, status) VALUES (?, ?)";
        //Get a connection from the MasterSlaveDataSource and create a MasterSlavePreparedStatement
        //Note the two Statement variants and when each routes:
        //1. MasterSlaveStatement: routes when the SQL is executed
        //2. MasterSlavePreparedStatement: routes when the Statement is created

        //Statement.RETURN_GENERATED_KEYS asks the driver to return the auto-generated primary key
        try (Connection connection = dataSource.getConnection();
             PreparedStatement preparedStatement = connection.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
            preparedStatement.setInt(1, order.getUserId());
            preparedStatement.setString(2, order.getStatus());
            //MasterSlavePreparedStatement executes sql
            preparedStatement.executeUpdate();
            try (ResultSet resultSet = preparedStatement.getGeneratedKeys()) {
                if (resultSet.next()) {
                    order.setOrderId(resultSet.getLong(1));
                }
            }
        }
        return order.getOrderId();
    }

Get the database connection: MasterSlaveConnection -> AbstractConnectionAdapter#getConnection

    /**
     * Get database connection.
     *
     * @param dataSourceName data source name
     * @return database connection
     * @throws SQLException SQL exception
     */
    //MEMORY_STRICTLY: keep one connection open for every routed table in a database,
    //so each ResultSet can be consumed as a stream, saving memory.
    //
    //CONNECTION_STRICTLY: release the connection only after reading the entire
    //ResultSet into memory, holding fewer connections at the cost of more memory.
    //
    public final Connection getConnection(final String dataSourceName) throws SQLException {
        return getConnections(ConnectionMode.MEMORY_STRICTLY, dataSourceName, 1).get(0);
    }
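The two connection modes in the comment above can be made concrete with a small decision rule. This is an assumed simplification for illustration, not ShardingSphere's actual selection code: if every routed SQL unit can get its own connection, results can be streamed (MEMORY_STRICTLY); if one connection must serve several units, each ResultSet has to be fully loaded before the next can run (CONNECTION_STRICTLY). `ConnectionModeChooser` and `maxConnectionsPerQuery` are hypothetical names.

```java
// Hedged sketch of choosing a connection mode; the enum names mirror the
// snippet above, the decision rule itself is an illustrative assumption.
public final class ConnectionModeChooser {

    enum ConnectionMode { MEMORY_STRICTLY, CONNECTION_STRICTLY }

    public static ConnectionMode choose(final int maxConnectionsPerQuery, final int sqlUnitCount) {
        // Enough connections for one per SQL unit: stream each ResultSet, saving memory.
        // Otherwise: materialize each ResultSet so the shared connection can move on.
        return maxConnectionsPerQuery >= sqlUnitCount
                ? ConnectionMode.MEMORY_STRICTLY
                : ConnectionMode.CONNECTION_STRICTLY;
    }

    public static void main(final String[] args) {
        System.out.println(choose(1, 3)); // CONNECTION_STRICTLY
        System.out.println(choose(4, 3)); // MEMORY_STRICTLY
    }
}
```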
    
    /**
     * Get database connections.
     *
     * @param connectionMode connection mode
     * @param dataSourceName data source name
     * @param connectionSize size of connection list to be get
     * @return database connections
     * @throws SQLException SQL exception
     */
    public final List<Connection> getConnections(final ConnectionMode connectionMode, final String dataSourceName, final int connectionSize) throws SQLException {
        //Getting data sources
        DataSource dataSource = getDataSourceMap().get(dataSourceName);
        Preconditions.checkState(null != dataSource, "Missing the data source name: '%s'", dataSourceName);
        Collection<Connection> connections;
        //Read the cached connections under the lock for thread safety
        synchronized (cachedConnections) {
            connections = cachedConnections.get(dataSourceName);
        }
        List<Connection> result;
        //If the cache already holds at least the requested number of connections, return that many
        if (connections.size() >= connectionSize) {
            result = new ArrayList<>(connections).subList(0, connectionSize);
        } else if (!connections.isEmpty()) {
            result = new ArrayList<>(connectionSize);
            result.addAll(connections);
            //Create the connections still missing to reach the requested number
            List<Connection> newConnections = createConnections(dataSourceName, connectionMode, dataSource, connectionSize - connections.size());
            result.addAll(newConnections);
            synchronized (cachedConnections) {
                cachedConnections.putAll(dataSourceName, newConnections);
            }
        } else {
            result = new ArrayList<>(createConnections(dataSourceName, connectionMode, dataSource, connectionSize));
            synchronized (cachedConnections) {
                cachedConnections.putAll(dataSourceName, result);
            }
        }
        return result;
    }
    
    @SuppressWarnings("SynchronizationOnLocalVariableOrMethodParameter")
    private List<Connection> createConnections(final String dataSourceName, final ConnectionMode connectionMode, final DataSource dataSource, final int connectionSize) throws SQLException {
        //A single requested connection needs no concurrency handling; create and return it directly
        if (1 == connectionSize) {
            return Collections.singletonList(createConnection(dataSourceName, dataSource));
        }
        //TODO does not handle concurrency
        if (ConnectionMode.CONNECTION_STRICTLY == connectionMode) {
            return createConnections(dataSourceName, dataSource, connectionSize);
        }
        //Guard concurrent connection creation against the same data source
        synchronized (dataSource) {
            return createConnections(dataSourceName, dataSource, connectionSize);
        }
    }
    
    private List<Connection> createConnections(final String dataSourceName, final DataSource dataSource, final int connectionSize) throws SQLException {
        List<Connection> result = new ArrayList<>(connectionSize);
        for (int i = 0; i < connectionSize; i++) {
            try {
                result.add(createConnection(dataSourceName, dataSource));
            } catch (final SQLException ex) {
                for (Connection each : result) {
                    each.close();
                }
                throw new SQLException(String.format("Could't get %d connections one time, partition succeed connection(%d) have released!", connectionSize, result.size()), ex);
            }
        }
        return result;
    }
    
    private Connection createConnection(final String dataSourceName, final DataSource dataSource) throws SQLException {
        //Check whether we are inside a sharding transaction
        Connection result = isInShardingTransaction() ? shardingTransactionManager.getConnection(dataSourceName) : dataSource.getConnection();
        replayMethodsInvocation(result);
        return result;
    }
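The `replayMethodsInvocation(result)` call at the end deserves a note: settings applied to the logical connection before any physical connection exists (for example `setAutoCommit`) are recorded and replayed on each physical connection as it is created. Below is a toy sketch of that record-and-replay idea; `ReplaySketch` and `FakeConnection` are hypothetical stand-ins, not sharding-jdbc classes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the "replay methods invocation" pattern: the adapter records
// settings made before a physical connection exists, then replays them on
// every connection created later. Simplified illustration only.
public final class ReplaySketch {

    static final class FakeConnection {
        boolean autoCommit = true; // JDBC connections default to auto-commit
    }

    private final List<Consumer<FakeConnection>> recorded = new ArrayList<>();

    public void setAutoCommit(final boolean autoCommit) {
        // No physical connection yet: just record the call
        recorded.add(conn -> conn.autoCommit = autoCommit);
    }

    public FakeConnection createConnection() {
        FakeConnection conn = new FakeConnection();
        // Replay every recorded call on the freshly created connection
        recorded.forEach(method -> method.accept(conn));
        return conn;
    }

    public static void main(final String[] args) {
        ReplaySketch adapter = new ReplaySketch();
        adapter.setAutoCommit(false);        // recorded before any physical connection
        FakeConnection conn = adapter.createConnection();
        System.out.println(conn.autoCommit); // false: the call was replayed
    }
}
```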

Route at preparation time and cache the routed Statements:

public MasterSlavePreparedStatement(final MasterSlaveConnection connection, final String sql, final int autoGeneratedKeys) throws SQLException {
        this.connection = connection;
        //Create router objects
        masterSlaveRouter = new MasterSlaveRouter(connection.getMasterSlaveDataSource().getMasterSlaveRule(), connection.getParseEngine(), 
                connection.getMasterSlaveDataSource().getShardingProperties().<Boolean>getValue(ShardingPropertiesConstant.SQL_SHOW));
        //Cache the routed Statements; useCache=true caches the parsed SQL statement
        for (String each : masterSlaveRouter.route(sql, true)) {
            //Get the database connection
            PreparedStatement preparedStatement = connection.getConnection(each).prepareStatement(sql, autoGeneratedKeys);
            routedStatements.add(preparedStatement);
        }
    }

Execute MasterSlaveRouter#route to determine the target data source:

public Collection<String> route(final String sql, final boolean useCache) {
        //Parse the SQL (sharding-jdbc uses antlr4 for SQL parsing)
        Collection<String> result = route(parseEngine.parse(sql, useCache));
        //Log the SQL and routing result if sql.show is enabled
        if (showSQL) {
            SQLLogger.logSQL(sql, result);
        }
        return result;
    }
    
    private Collection<String> route(final SQLStatement sqlStatement) {
        //Decide whether this statement must be routed to the master
        if (isMasterRoute(sqlStatement)) {
            //Mark that the current thread has visited the master
            MasterVisitedManager.setMasterVisited();
            //Route to the master
            return Collections.singletonList(masterSlaveRule.getMasterDataSourceName());
        }
        //Pick a slave via the configured load-balance algorithm; two are built in:
        //1. random
        //2. round-robin
        return Collections.singletonList(masterSlaveRule.getLoadBalanceAlgorithm().getDataSource(
                masterSlaveRule.getName(), masterSlaveRule.getMasterDataSourceName(), new ArrayList<>(masterSlaveRule.getSlaveDataSourceNames())));
    }
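The routing rules above can be condensed into a runnable sketch. This is a simplified stand-in, not sharding-jdbc's actual code: the real `isMasterRoute` inspects the parsed `SQLStatement` rather than the raw string, and `MasterVisitedManager` clears its flag when the statement completes. `MasterRouteSketch` is a hypothetical name.

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of the master/slave routing decision described above.
public final class MasterRouteSketch {

    // Mirrors MasterVisitedManager: once a thread touches the master,
    // its later reads also go to the master so it sees its own writes.
    private static final ThreadLocal<Boolean> MASTER_VISITED =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    public static String route(final String sql, final String master, final List<String> slaves) {
        String normalized = sql.trim().toUpperCase();
        // Only plain SELECTs may go to a slave; locking reads must see the master
        boolean isRead = normalized.startsWith("SELECT") && !normalized.contains("FOR UPDATE");
        if (!isRead || MASTER_VISITED.get()) {
            MASTER_VISITED.set(true);
            return master;
        }
        return slaves.get(0); // the real code delegates to the load-balance algorithm
    }

    public static void reset() {
        MASTER_VISITED.set(false);
    }

    public static void main(final String[] args) {
        List<String> slaves = Arrays.asList("demo_ds_slave_0", "demo_ds_slave_1");
        System.out.println(route("SELECT * FROM t_order", "demo_ds_master", slaves)); // demo_ds_slave_0
        System.out.println(route("INSERT INTO t_order (user_id, status) VALUES (1, 'INIT')",
                "demo_ds_master", slaves));                                           // demo_ds_master
        System.out.println(route("SELECT * FROM t_order", "demo_ds_master", slaves)); // demo_ds_master
    }
}
```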

Execute MasterSlavePreparedStatement#executeUpdate

@Override
    public int executeUpdate() throws SQLException {
        int result = 0;
        //Execute each routed PreparedStatement cached earlier
        for (PreparedStatement each : routedStatements) {
            result += each.executeUpdate();
        }
        return result;
    }

4. Slave selection: load-balance algorithm strategies

  • Random algorithm
@Getter
@Setter
public final class RandomMasterSlaveLoadBalanceAlgorithm implements MasterSlaveLoadBalanceAlgorithm {
    
    private Properties properties = new Properties();
    
    @Override
    public String getType() {
        return "RANDOM";
    }
    
    @Override
    public String getDataSource(final String name, final String masterDataSourceName, final List<String> slaveDataSourceNames) {
        //Pick a random index within [0, slaveDataSourceNames.size())
        return slaveDataSourceNames.get(new Random().nextInt(slaveDataSourceNames.size()));
    }
}
  • Round-robin algorithm
@Getter
@Setter
public final class RoundRobinMasterSlaveLoadBalanceAlgorithm implements MasterSlaveLoadBalanceAlgorithm {

    //Concurrent map
    private static final ConcurrentHashMap<String, AtomicInteger> COUNTS = new ConcurrentHashMap<>();
    
    private Properties properties = new Properties();
    
    @Override
    public String getType() {
        return "ROUND_ROBIN";
    }
    
    @Override
    public String getDataSource(final String name, final String masterDataSourceName, final List<String> slaveDataSourceNames) {
        //Get the counter for this rule name, initializing it if absent
        AtomicInteger count = COUNTS.containsKey(name) ? COUNTS.get(name) : new AtomicInteger(0);
        COUNTS.putIfAbsent(name, count);
        // CAS the counter back to zero once it reaches slave.size(), preventing unbounded growth
        count.compareAndSet(slaveDataSourceNames.size(), 0);
        //Math.abs guards against overflow; index = counter % slave.size()
        return slaveDataSourceNames.get(Math.abs(count.getAndIncrement()) % slaveDataSourceNames.size());
    }
}
  • Default algorithm

Algorithms are discovered through the SPI extension mechanism, and the first implementation loaded is used as the default; in ShardingSphere the default is the random algorithm.
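To watch the round-robin counter cycle through the slave list, here is a self-contained replica of the selection logic from the round-robin algorithm above (`computeIfAbsent` stands in for the `containsKey`/`putIfAbsent` pair; the behavior is the same):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Self-contained replica of the round-robin selection, keyed by rule name.
public final class RoundRobinDemo {

    private static final ConcurrentHashMap<String, AtomicInteger> COUNTS = new ConcurrentHashMap<>();

    public static String getDataSource(final String name, final List<String> slaves) {
        AtomicInteger count = COUNTS.computeIfAbsent(name, k -> new AtomicInteger(0));
        // Wrap the counter back to zero once it reaches slave.size()
        count.compareAndSet(slaves.size(), 0);
        return slaves.get(Math.abs(count.getAndIncrement()) % slaves.size());
    }

    public static void main(final String[] args) {
        List<String> slaves = Arrays.asList("demo_ds_slave_0", "demo_ds_slave_1");
        for (int i = 0; i < 4; i++) {
            // Alternates: slave_0, slave_1, slave_0, slave_1
            System.out.println(getDataSource("demo_ds_master_slave", slaves));
        }
    }
}
```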

Posted by thedualmind on Wed, 02 Oct 2019 01:49:01 -0700