Encapsulation and Use of Spring Batch

Keywords: Java SQL Spring

Spring Batch

Spring Batch handles batch data processing. Here I encapsulate the batch boilerplate into an abstract JobBase class that each concrete job implements. Spring Batch provides unified read/write interfaces, rich task-processing options, flexible transaction management, and concurrent processing; it also supports logging, monitoring, task restart, and record skipping. This greatly simplifies batch application development, freeing developers from complex task configuration and management so they can focus on the core business logic.

Several components

  • Job
  • Step
  • ItemReader
  • ItemWriter
  • JobExecutionListener
  • ItemProcessor
  • Validator
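Conceptually, a chunk-oriented step wires these components into a read-process-write loop. The following plain-Java sketch (not Spring Batch code; the interfaces are simplified stand-ins) shows roughly what a step with a chunk size of 2 does:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class ChunkLoopSketch {

  // Simplified stand-in for a chunk-oriented step:
  // read items one at a time, process each, write them out in chunks.
  static <T, R> List<List<R>> runStep(List<T> source, Function<T, R> processor, int chunkSize) {
    List<List<R>> writtenChunks = new ArrayList<>();
    List<R> chunk = new ArrayList<>();
    for (T item : source) {                 // reader: one item at a time
      R processed = processor.apply(item);  // processor: transform/validate
      if (processed == null) {
        continue;                           // a null result means "skip this item"
      }
      chunk.add(processed);
      if (chunk.size() == chunkSize) {      // writer: flush a full chunk
        writtenChunks.add(chunk);
        chunk = new ArrayList<>();
      }
    }
    if (!chunk.isEmpty()) {
      writtenChunks.add(chunk);             // flush the final partial chunk
    }
    return writtenChunks;
  }

  public static void main(String[] args) {
    List<List<String>> chunks =
        runStep(Arrays.asList("a", "b", "c"), String::toUpperCase, 2);
    System.out.println(chunks); // [[A, B], [C]]
  }
}
```

Writing in chunks rather than item by item is what makes the `chunkCount` field in JobBase below matter: it trades memory for fewer write round-trips.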

JobBase defines several common methods

 /**
  * Base class for Spring Batch jobs.
  */
 public abstract class JobBase<T> {
 
   /**
     * Chunk size.
    */
   protected int chunkCount = 5000;
   /**
     * Job execution listener.
    */
   private JobExecutionListener jobExecutionListener;
   /**
    * Processor.
    */
   private ValidatingItemProcessor<T> validatingItemProcessor;
   /**
     * Job name.
    */
   private String jobName;
   /**
     * Validator.
    */
   private Validator<T> validator;
   @Autowired
   private JobBuilderFactory job;
   @Autowired
   private StepBuilderFactory step;
 
 
   /**
     * Constructor.
     *
     * @param jobName                 job name
     * @param jobExecutionListener    job execution listener
     * @param validatingItemProcessor item processor
     * @param validator               validator
    */
   public JobBase(String jobName,
                  JobExecutionListener jobExecutionListener,
                  ValidatingItemProcessor<T> validatingItemProcessor,
                  Validator<T> validator) {
     this.jobName = jobName;
     this.jobExecutionListener = jobExecutionListener;
     this.validatingItemProcessor = validatingItemProcessor;
     this.validator = validator;
   }
 
   /**
     * Builds the job.
    */
   public Job getJob() throws Exception {
     return job.get(jobName).incrementer(new RunIdIncrementer())
         .start(syncStep())
         .listener(jobExecutionListener)
         .build();
   }
 
    /**
     * Builds the step.
     *
     * @return the configured step
     */
   public Step syncStep() throws Exception {
     return step.get("step1")
         .<T, T>chunk(chunkCount)
         .reader(reader())
         .processor(processor())
         .writer(writer())
         .build();
   }
 
    /**
     * Processes items one at a time.
     *
     * @return the item processor
     */
   public ItemProcessor<T, T> processor() {
     validatingItemProcessor.setValidator(processorValidator());
     return validatingItemProcessor;
   }
 
    /**
     * Supplies the validator used during processing.
     *
     * @return the validator
     */
   @Bean
   public Validator<T> processorValidator() {
     return validator;
   }
 
    /**
     * Reads data in bulk.
     *
     * @return the item reader
     * @throws Exception if the reader cannot be built
     */
   public abstract ItemReader<T> reader() throws Exception;
 
    /**
     * Writes data in batches.
     *
     * @return the item writer
     */
   @Bean
   public abstract ItemWriter<T> writer();
 
 }

The base class fixes the overall execution strategy; each concrete job still supplies its own job name, reader, and writer.
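This design is essentially the template-method pattern: the abstract base fixes the processing skeleton, and subclasses fill in only the variable parts. Stripped of the Spring Batch types, the shape looks like this (illustrative class names, not from the article):

```java
import java.util.Arrays;
import java.util.List;

// The base class fixes the execution skeleton (the "template method").
abstract class BatchTemplate<T> {
  public final List<T> run() {
    List<T> items = read();       // subclass supplies the reader
    items.forEach(this::write);   // subclass supplies the writer
    return items;
  }

  protected abstract List<T> read();

  protected abstract void write(T item);
}

// A concrete "job" implements only the variable parts,
// just as SyncPersonJob only implements reader() and writer().
class GreetingJob extends BatchTemplate<String> {
  @Override
  protected List<String> read() {
    return Arrays.asList("hello", "world");
  }

  @Override
  protected void write(String item) {
    System.out.println(item);
  }
}

public class TemplateSketch {
  public static void main(String[] args) {
    new GreetingJob().run(); // prints "hello" then "world"
  }
}
```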

Specific Job implementation

 @Configuration
 @EnableBatchProcessing
 public class SyncPersonJob extends JobBase<Person> {
   @Autowired
   private DataSource dataSource;
   @Autowired
   @Qualifier("primaryJdbcTemplate")
   private JdbcTemplate jdbcTemplate;
 
    /**
     * Constructor: sets the job name, listener, processor, and validator.
     */
   public SyncPersonJob() {
     super("personJob", new PersonJobListener(), new PersonItemProcessor(), new BeanValidator<>());
   }
 
   @Override
   public ItemReader<Person> reader() throws Exception {
      String sql = "select * from person";
     JdbcCursorItemReader<Person> jdbcCursorItemReader =
         new JdbcCursorItemReader<>();
     jdbcCursorItemReader.setSql(sql);
     jdbcCursorItemReader.setRowMapper(new BeanPropertyRowMapper<>(Person.class));
     jdbcCursorItemReader.setDataSource(dataSource);
 
     return jdbcCursorItemReader;
   }
 
 
   @Override
   @Bean("personJobWriter")
   public ItemWriter<Person> writer() {
     JdbcBatchItemWriter<Person> writer = new JdbcBatchItemWriter<Person>();
     writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<Person>());
     String sql = "insert into person_export " + "(id,name,age,nation,address) "
         + "values(:id, :name, :age, :nation,:address)";
     writer.setSql(sql);
     writer.setDataSource(dataSource);
     return writer;
   }
 
 }

The write operation needs to declare its own bean.

Note that you need to give each job's writer bean a distinct name. Otherwise, when there are multiple jobs, the wrong writer may be injected.

  /**
   * Writes data in batches.
   *
   * @return the item writer
   */
  @Override
  @Bean("personVerson2JobWriter")
  public ItemWriter<Person> writer() {
    // ... build and return the writer, as in SyncPersonJob above
  }
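Why the name matters: the Spring container stores beans keyed by name, and by default the name of an `@Bean` comes from the factory method, which is `writer` in every subclass of JobBase. A rough plain-Java analogy of the resulting collision (an analogy only, not Spring's actual registry code):

```java
import java.util.HashMap;
import java.util.Map;

public class BeanNameSketch {
  public static void main(String[] args) {
    // A bean container is, loosely, a map from bean name to bean instance.
    Map<String, String> container = new HashMap<>();

    // Two jobs both register a writer under the default name "writer":
    container.put("writer", "person writer");
    container.put("writer", "person v2 writer"); // overwrites the first!
    System.out.println(container.size()); // 1 -- one writer was silently lost

    // Explicit names, as with @Bean("personVerson2JobWriter"), keep them apart:
    container.put("personJobWriter", "person writer");
    container.put("personVerson2JobWriter", "person v2 writer");
    System.out.println(container.size()); // 3 -- both writers survive
  }
}
```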

Add an API to trigger the job manually. The current timestamp is passed as a job parameter so that every launch gets a unique set of parameters and can therefore be run repeatedly.

  @Autowired
  SyncPersonJob syncPersonJob;

  @Autowired
  JobLauncher jobLauncher;

  void exec(Job job) throws Exception {
    JobParameters jobParameters = new JobParametersBuilder()
        .addLong("time", System.currentTimeMillis())
        .toJobParameters();
    jobLauncher.run(job, jobParameters);
  }

  @RequestMapping("/run1")
  public String run1() throws Exception {
    exec(syncPersonJob.getJob());
    return "personJob success";
  }
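The timestamp parameter is needed because Spring Batch identifies a JobInstance by the job name plus its parameters: launching the same name with identical parameters refers to the same instance, and a completed instance will not run again. A plain-Java sketch of that identity rule (an analogy, not Spring's internals):

```java
import java.util.Map;

public class JobInstanceSketch {
  // A job instance is identified, loosely, by (jobName, parameters).
  static String instanceKey(String jobName, Map<String, Object> params) {
    return jobName + params;
  }

  public static void main(String[] args) {
    // Fixed parameters: the same key every time -> treated as already run.
    String a = instanceKey("personJob", Map.of("source", "person"));
    String b = instanceKey("personJob", Map.of("source", "person"));
    System.out.println(a.equals(b)); // true -- same instance

    // A timestamp parameter yields a fresh key on every launch.
    String c = instanceKey("personJob", Map.of("time", 1574837572852L));
    String d = instanceKey("personJob", Map.of("time", 1574837599999L));
    System.out.println(c.equals(d)); // false -- a new instance each run
  }
}
```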

Posted by Saviola on Wed, 27 Nov 2019 06:52:52 -0800