PICE(2): JDBCStreaming - gRPC-JDBC Service

Keywords: Scala JDBC Google Session

In an akka-cluster environment, a JDBC database is, from the data-access point of view, isolated from the other nodes in the cluster: a JDBC database is not distributed and offers no location transparency. A node hosting a JDBC database must therefore expose its data operations as a service. In this scenario the database node is the JDBC service provider, and every other node, including other JDBC database nodes, is a client of that service. Since we have chosen gRPC as the service mechanism in the akka-cluster environment, and akka-stream as the flow-control mechanism for driving database operations, this installment demonstrates the implementation and use of gRPC-JDBC-Streaming.

In the previous installment we demonstrated the simplest JDBC-Streaming pattern, a unary request with a streamed response: the client sends a JDBCQuery to the JDBC service, the server runs the query and streams the resulting JDBCDataRow records back to the client. The message and service types are defined by the IDL in jdbc.proto as follows:

message JDBCDataRow {
 string year = 1;
 string state = 2;
 string county = 3;
 string value = 4;
}

message JDBCQuery {
  string dbName = 1;
  string statement = 2;
  bytes parameters = 3;
  google.protobuf.Int32Value fetchSize= 4;
  google.protobuf.BoolValue autoCommit = 5;
  google.protobuf.Int32Value queryTimeout = 6;
}

service JDBCServices {
  rpc runQuery(JDBCQuery) returns (stream JDBCDataRow) {}
}

The message types JDBCDataRow and JDBCQuery above correspond, respectively, to the JDBC-Streaming tool's stream element type and its JDBCQueryContext:

  val toRow = (rs: WrappedResultSet) => JDBCDataRow(
    year = rs.string("REPORTYEAR"),
    state = rs.string("STATENAME"),
    county = rs.string("COUNTYNAME"),
    value = rs.string("VALUE")
  )

   val ctx = JDBCQueryContext[JDBCDataRow](
     dbName = Symbol(q.dbName),
     statement = q.statement,
     parameters = params,
     fetchSize = q.fetchSize.getOrElse(100),
     autoCommit = q.autoCommit.getOrElse(false),
     queryTimeout = q.queryTimeout
   )
   jdbcAkkaStream(ctx, toRow)

Compiling the proto file with scalaPB automatically generates the server- and client-side boilerplate code. We only need to implement the actual JDBC service:

class JDBCStreamingServices(implicit ec: ExecutionContextExecutor) extends JdbcGrpcAkkaStream.JDBCServices {
  val logger = Logger.getLogger(classOf[JDBCStreamingServices].getName)

  val toRow = (rs: WrappedResultSet) => JDBCDataRow(
    year = rs.string("REPORTYEAR"),
    state = rs.string("STATENAME"),
    county = rs.string("COUNTYNAME"),
    value = rs.string("VALUE")
  )
  override def runQuery: Flow[JDBCQuery, JDBCDataRow, NotUsed] = {
    logger.info("**** runQuery called on service side ***")
    Flow[JDBCQuery]
      .flatMapConcat { q =>
        //unpack JDBCQuery and construct the context
        val params: Seq[Any] = unmarshal[Seq[Any]](q.parameters)
        logger.info(s"**** query parameters: ${params} ****")
        val ctx = JDBCQueryContext[JDBCDataRow](
          dbName = Symbol(q.dbName),
          statement = q.statement,
          parameters = params,
          fetchSize = q.fetchSize.getOrElse(100),
          autoCommit = q.autoCommit.getOrElse(false),
          queryTimeout = q.queryTimeout
        )
        jdbcAkkaStream(ctx, toRow)
      }
  }
}
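On the server side, the generated bindService hooks this service implementation into a standard gRPC server (an excerpt from JDBCServer.scala, listed in full further below):

  val server = new gRPCServer(
    ServerBuilder
      .forPort(50051)
      .addService(
        JdbcGrpcAkkaStream.bindService(
          new JDBCStreamingServices
        )
      ).build()
  )
  server.start()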

The following demonstrates how the client invokes this service:

  val query = JDBCQuery (
    dbName = "h2",
    statement = "select * from AQMRPT where STATENAME = ? and VALUE = ?",
    parameters = marshal(Seq("Arizona", 5))
  )
  def queryRows: Source[JDBCDataRow,NotUsed] = {
    logger.info(s"running queryRows ...")
    Source
      .single(query)
      .via(stub.runQuery)
  }

The program runs as follows:

object QueryRows extends App {
  implicit val system = ActorSystem("QueryRows")
  implicit val mat = ActorMaterializer.create(system)

  val client = new JDBCStreamClient("localhost", 50051)

  client.queryRows.runForeach(println)

  scala.io.StdIn.readLine()
  mat.shutdown()
  system.terminate()
}

What if the client sends a whole stream of JDBCQuery messages? That is the so-called BiDi-Streaming pattern. The services declared in jdbc.proto now look like this:

service JDBCServices {
  rpc runQuery(JDBCQuery) returns (stream JDBCDataRow) {}
  rpc batQuery(stream JDBCQuery) returns (stream JDBCDataRow) {}
}

Note that batQuery's input is a stream. The generated service function batQuery has the following signature:

  override def runQuery: Flow[JDBCQuery, JDBCDataRow, NotUsed] = { ... }
  override def batQuery: Flow[JDBCQuery, JDBCDataRow, NotUsed] = runQuery

runQuery and batQuery share the same function signature, which means the server-side behaviour is identical: in our example both return the relevant data stream for each JDBCQuery received. The real difference between the two services lies on the client side. Here is the client code generated by scalaPB:

   override def runQuery: Flow[grpc.jdbc.services.JDBCQuery, grpc.jdbc.services.JDBCDataRow, NotUsed] =
      Flow.fromGraph(
        new GrpcGraphStage[grpc.jdbc.services.JDBCQuery, grpc.jdbc.services.JDBCDataRow]({ outputObserver =>
          new StreamObserver[grpc.jdbc.services.JDBCQuery] {
            override def onError(t: Throwable): Unit = ()
            override def onCompleted(): Unit = ()
            override def onNext(request: grpc.jdbc.services.JDBCQuery): Unit =
              ClientCalls.asyncServerStreamingCall(
                channel.newCall(METHOD_RUN_QUERY, options),
                request,
                outputObserver
              )
          }
        })
      )
 ...
      override def batQuery: Flow[grpc.jdbc.services.JDBCQuery, grpc.jdbc.services.JDBCDataRow, NotUsed] =
        Flow.fromGraph(new GrpcGraphStage[grpc.jdbc.services.JDBCQuery, grpc.jdbc.services.JDBCDataRow](outputObserver =>
          ClientCalls.asyncBidiStreamingCall(
            channel.newCall(METHOD_BAT_QUERY, options),
            outputObserver
          )
        ))

So on the client side we call batQuery:

  def batQueryRows: Source[JDBCDataRow,NotUsed] = {
    logger.info(s"running batQueryRows ...")
    Source
      .fromIterator(() => List(query,query2,query3).toIterator)
      .via(stub.batQuery)
  }

Besides queries, JDBC operations also include updates: schema DDL as well as data updates. Update requests are delivered through a JDBCContext:

case class JDBCContext(
                        dbName: Symbol,
                        statements: Seq[String] = Nil,
                        parameters: Seq[Seq[Any]] = Nil,
                        fetchSize: Int = 100,
                        queryTimeout: Option[Int] = None,
                        queryTags: Seq[String] = Nil,
                        sqlType: JDBCContext.SQLTYPE = JDBCContext.SQL_UPDATE,
                        batch: Boolean = false,
                        returnGeneratedKey: Seq[Option[Any]] = Nil,
                        // no return: None, return by index: Some(1), by name: Some("id")
                        preAction: Option[PreparedStatement => Unit] = None,
                        postAction: Option[PreparedStatement => Unit] = None) {... }
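For orientation, here is a minimal sketch of how the JDBCContext helper methods are meant to be used (they are defined in JDBCEngine.scala, listed at the end); the table, column names and values are illustrative only:

  // a minimal sketch, assuming the 'h2 datasource used throughout this demo;
  // table, columns and values are illustrative only
  import sdp.jdbc.engine._
  import JDBCEngine._

  // single-statement update, asking for the generated key back
  val insertCtx = JDBCContext('h2)
    .setUpdateCommand(true, "insert into members(name, created_at) values (?, ?)",
      "Peter Chan", jdbcSetNow)

  // batch update: one statement, parameter rows appended one by one
  val batchCtx = JDBCContext('h2)
    .setBatchCommand("update members set description = ? where name = ?")
    .appendBatchParameters("new desc 1", "Peter Chan")
    .appendBatchParameters("new desc 2", "Alanda Wong")
    .setBatchReturnGeneratedKeyOption(true)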

The protobuf message corresponding to this class is defined as follows:

message JDBCResult {
  bytes result = 1;
}

message JDBCUpdate {
  string dbName = 1;
  repeated string statements = 2;
  bytes parameters = 3;
  google.protobuf.Int32Value fetchSize= 4;
  google.protobuf.Int32Value queryTimeout = 5;
  int32 sqlType = 6;
  google.protobuf.Int32Value batch = 7;
  bytes returnGeneratedKey = 8;
}

service JDBCServices {
  rpc runQuery(JDBCQuery) returns (stream JDBCDataRow) {}
  rpc batQuery(stream JDBCQuery) returns (stream JDBCDataRow) {}
  rpc runDDL(JDBCUpdate) returns (JDBCResult) {}
}

The service function runDDL returns the message type JDBCResult, which carries a marshalled return value. The following shows how the protobuf message is unpacked on the server side to rebuild a JDBCContext, which is then passed to jdbcExecuteDDL:

 override def runDDL: Flow[JDBCUpdate, JDBCResult, NotUsed] = {
    logger.info("**** runDDL called on service side ***")
    Flow[JDBCUpdate]
      .flatMapConcat { context =>
        //unpack JDBCUpdate and construct the context

        val ctx = JDBCContext(
          dbName = Symbol(context.dbName),
          statements = context.statements,
          sqlType = JDBCContext.SQL_EXEDDL,
          queryTimeout = context.queryTimeout
        )

        logger.info(s"**** JDBCContext => ${ctx} ***")

        Source
          .fromFuture(jdbcExecuteDDL(ctx))
          .map { r => JDBCResult(marshal(r)) }

      }
  }
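The result bytes are packed and unpacked with the Java-serialization based Converter (ByteConverter.scala, listed in full at the end). A round trip looks like this:

  import protobuf.bytes.Converter._

  val original: Seq[Option[Any]] = Seq(Some(1), Some("id"), None, Some(2))
  val bytes    = marshal(original)                    // com.google.protobuf.ByteString
  val restored = unmarshal[Seq[Option[Any]]](bytes)   // back to the original Seq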

jdbcExecuteDDL returns a Future[String], as follows:

  def jdbcExecuteDDL(ctx: JDBCContext)(implicit ec: ExecutionContextExecutor): Future[String] = {
    if (ctx.sqlType != SQL_EXEDDL) {
      Future.failed(new IllegalStateException("JDBCContex setting error: sqlType must be 'SQL_EXEDDL'!"))
    }
    else {
      Future {
        NamedDB(ctx.dbName) localTx { implicit session =>
          ctx.statements.foreach { stm =>
            val ddl = new SQLExecution(statement = stm, parameters = Nil)(
              before = WrappedResultSet => {})(
              after = WrappedResultSet => {})

            ddl.apply()
          }
          "SQL_EXEDDL executed succesfully."
        }
      }
    }
  }

We can then use Source.fromFuture(jdbcExecuteDDL(ctx)) to build an akka-stream Source. On the client side we construct a JDBCUpdate message and pass it to the server for execution:

  val dropSQL: String ="""
      drop table members
    """

  val createSQL: String ="""
    create table members (
      id serial not null primary key,
      name varchar(30) not null,
      description varchar(1000),
      birthday date,
      created_at timestamp not null,
      picture blob
    )"""
  val ctx = JDBCUpdate (
    dbName = "h2",
    sqlType = JDBCContext.SQL_EXEDDL,
    statements = Seq(dropSQL,createSQL)
  )

  def createTbl: Source[JDBCResult,NotUsed] = {
    logger.info(s"running createTbl ...")
    Source
      .single(ctx)
      .via(stub.runDDL)
  }

Note: statements = Seq(dropSQL,createSQL) contains two separate SQL operations.
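A runner for this DDL call (from the attached client code) materializes the source and unmarshals the returned result bytes:

object RunDDL extends App {
  implicit val system = ActorSystem("RunDDL")
  implicit val mat = ActorMaterializer.create(system)

  val client = new JDBCStreamClient("localhost", 50051)

  client.createTbl.runForeach { r => println(unmarshal[Seq[Any]](r.result)) }

  scala.io.StdIn.readLine()
  mat.shutdown()
  system.terminate()
}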

Next, let's see how to send a stream of MemberRow from the client and have the server insert each row into the database. The IDL message types and service functions are defined as follows:

message JDBCDate {
  int32 yyyy = 1;
  int32 mm   = 2;
  int32 dd   = 3;
}

message JDBCTime {
  int32 hh   = 1;
  int32 mm   = 2;
  int32 ss   = 3;
  int32 nnn  = 4;
}

message JDBCDateTime {
   JDBCDate date = 1;
   JDBCTime time = 2;
}
message MemberRow {
  string name = 1;
  JDBCDate birthday = 2;
  string description = 3;
  JDBCDateTime created_at = 4;
  bytes picture = 5;
}

service JDBCServices {
  rpc runQuery(JDBCQuery) returns (stream JDBCDataRow) {}
  rpc batQuery(stream JDBCQuery) returns (stream JDBCDataRow) {}
  rpc runDDL(JDBCUpdate) returns (JDBCResult) {}
  rpc insertRows(stream MemberRow) returns(JDBCResult) {}
}

The insertRows service function is implemented as follows:

 override def insertRows: Flow[MemberRow, JDBCResult, NotUsed] = {
    logger.info("**** insertRows called on service side ***")
    val insertSQL = """
      insert into members(
        name,
        birthday,
        description,
        created_at
      ) values (?, ?, ?, ?)
    """
    Flow[MemberRow]
      .flatMapConcat { row =>
        val ctx = JDBCContext('h2)
          .setUpdateCommand(true,insertSQL,
             row.name,
             jdbcSetDate(row.birthday.get.yyyy,row.birthday.get.mm,row.birthday.get.dd),
             row.description,
             jdbcSetNow
          )

        logger.info(s"**** JDBCContext => ${ctx} ***")

        Source
          .fromFuture(jdbcTxUpdates[Vector](ctx))
          .map { r => JDBCResult(marshal(r)) }
      }
  }

Similarly, jdbcTxUpdates returns its results wrapped in a Future; it is implemented in JDBCEngine.scala.
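For reference, its signature is:

  def jdbcTxUpdates[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
    implicit ec: ExecutionContextExecutor,
             cbf: CanBuildFrom[Nothing, Long, C[Long]]): Future[C[Long]] = { ... }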

The client builds a MemberRow stream and sends it to the server through stub.insertRows:

  val p1 = MemberRow( "Peter Chan",Some(JDBCDate(1967,5,17)),"new member1",None,_root_.com.google.protobuf.ByteString.EMPTY)
  val p2 = MemberRow( "Alanda Wong",Some(JDBCDate(1980,11,10)),"new member2",None,_root_.com.google.protobuf.ByteString.EMPTY)
  val p3 = MemberRow( "Kate Zhang",Some(JDBCDate(1969,8,13)),"new member3",None,_root_.com.google.protobuf.ByteString.EMPTY)
  val p4 = MemberRow( "Tiger Chan",Some(JDBCDate(1962,5,1)),"new member4",None,_root_.com.google.protobuf.ByteString.EMPTY)

  def insertRows: Source[JDBCResult,NotUsed] = {
    logger.info(s"running insertRows ...")
    Source
      .fromIterator(() => List(p1,p2,p3,p4).toIterator)
      .via(stub.insertRows)
  }
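The corresponding runner (also from the attached client code) prints the Vector[Long] that jdbcTxUpdates returns for each inserted row:

object InsertRows extends App {
  implicit val system = ActorSystem("InsertRows")
  implicit val mat = ActorMaterializer.create(system)

  val client = new JDBCStreamClient("localhost", 50051)

  client.insertRows.runForeach { r => println(unmarshal[Vector[Long]](r.result)) }

  scala.io.StdIn.readLine()
  mat.shutdown()
  system.terminate()
}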

Next we demonstrate the jdbcBatchUpdate function: we read MemberRow records from the server and send them back to the server to be updated. The IDL is as follows:

message MemberRow {
  string name = 1;
  JDBCDate birthday = 2;
  string description = 3;
  JDBCDateTime created_at = 4;
  bytes picture = 5;
}

service JDBCServices {
  rpc runQuery(JDBCQuery) returns (stream JDBCDataRow) {}
  rpc batQuery(stream JDBCQuery) returns (stream JDBCDataRow) {}
  rpc runDDL(JDBCUpdate) returns (JDBCResult) {}
  rpc insertRows(stream MemberRow) returns(JDBCResult) {}
  rpc updateRows(stream MemberRow) returns(JDBCResult) {}
  rpc getMembers(JDBCQuery) returns (stream MemberRow) {}
}

Server-side functions are defined as follows:

 val toMemberRow = (rs: WrappedResultSet) => MemberRow(
    name = rs.string("name"),
    description = rs.string("description"),
    birthday = None,
    createdAt = None,
    picture = _root_.com.google.protobuf.ByteString.EMPTY
  )
  override def getMembers: Flow[JDBCQuery, MemberRow, NotUsed] = {
    logger.info("**** getMembers called on service side ***")
    Flow[JDBCQuery]
      .flatMapConcat { q =>
        //unpack JDBCQuery and construct the context
        var params: Seq[Any] =  Nil
        if (q.parameters != _root_.com.google.protobuf.ByteString.EMPTY)
           params = unmarshal[Seq[Any]](q.parameters)
        logger.info(s"**** query parameters: ${params} ****")
        val ctx = JDBCQueryContext[MemberRow](
          dbName = Symbol(q.dbName),
          statement = q.statement,
          parameters = params,
          fetchSize = q.fetchSize.getOrElse(100),
          autoCommit = q.autoCommit.getOrElse(false),
          queryTimeout = q.queryTimeout
        )
        jdbcAkkaStream(ctx, toMemberRow)
      }
  }
override def updateRows: Flow[MemberRow, JDBCResult, NotUsed] = {
    logger.info("**** updateRows called on service side ***")
    val updateSQL = "update members set description = ?, created_at = ? where name = ?"

    Flow[MemberRow]
      .flatMapConcat { row =>
        val ctx = JDBCContext('h2)
            .setBatchCommand(updateSQL)
            .appendBatchParameters(
              row.name + " updated.",
              jdbcSetNow,
              row.name
            ).setBatchReturnGeneratedKeyOption(true)

        logger.info(s"**** JDBCContext => ${ctx} ***")

        Source
          .fromFuture(jdbcBatchUpdate[Vector](ctx))
          .map { r => JDBCResult(marshal(r)) }
      }
  }
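jdbcBatchUpdate runs the whole batch inside a single local transaction and, when setBatchReturnGeneratedKeyOption(true) is set, returns the generated keys. For reference, its signature in the attached JDBCEngine.scala is:

  def jdbcBatchUpdate[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
    implicit ec: ExecutionContextExecutor,
             cbf: CanBuildFrom[Nothing, Long, C[Long]]): Future[C[Long]] = { ... }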

The client code is as follows:

  val queryMember = JDBCQuery (
    dbName = "h2",
    statement = "select * from members"
  )
  def updateRows: Source[JDBCResult,NotUsed] = {
    logger.info(s"running updateRows ...")
    Source
      .single(queryMember)
      .via(stub.getMembers)
      .via(stub.updateRows)
  }

The final example demonstrates how to use JDBCActionStream to run an update for every element of a data stream. The server-side source code is as follows:

  val params: JDBCDataRow => Seq[Any] = row => {
    Seq((row.value.toInt * 2), row.state, row.county, row.year) }
  val sql = "update AQMRPT set total = ? where statename = ? and countyname = ? and reportyear = ?"

  val jdbcActionStream = JDBCActionStream('h2,sql ,params)
    .setParallelism(4).setProcessOrder(false)
  val jdbcActionFlow = jdbcActionStream.performOnRow

  override def updateBat: Flow[JDBCDataRow, JDBCDataRow, NotUsed] = {
    logger.info("**** updateBat called on service side ***")
    Flow[JDBCDataRow]
         .via(jdbcActionFlow)
  }

jdbcActionFlow is a Flow[R, R, NotUsed], so we can connect it directly to the upstream Flow with via. The definition of JDBCActionStream is as follows:

  case class JDBCActionStream[R](dbName: Symbol, parallelism: Int = 1, processInOrder: Boolean = true,
                                 statement: String, prepareParams: R => Seq[Any]) {
    jas =>
    def setDBName(db: Symbol): JDBCActionStream[R] = jas.copy(dbName = db)
    def setParallelism(parLevel: Int): JDBCActionStream[R] = jas.copy(parallelism = parLevel)
    def setProcessOrder(ordered: Boolean): JDBCActionStream[R] = jas.copy(processInOrder = ordered)

    private def perform(r: R)(implicit ec: ExecutionContextExecutor) = {
      import scala.concurrent._
      val params = prepareParams(r)
      Future {
        NamedDB(dbName) autoCommit { session =>
          session.execute(statement, params: _*)
        }
        r
      }
    }
    def performOnRow(implicit ec: ExecutionContextExecutor): Flow[R, R, NotUsed] =
      if (processInOrder)
        Flow[R].mapAsync(parallelism)(perform)
      else
        Flow[R].mapAsyncUnordered(parallelism)(perform)

  }
  object JDBCActionStream {
    def apply[R](_dbName: Symbol, _statement: String, params: R => Seq[Any]): JDBCActionStream[R] =
      new JDBCActionStream[R](dbName = _dbName, statement=_statement, prepareParams = params)
  }

performOnRow is a pass-through stage: it runs the update for each element with mapAsync (or mapAsyncUnordered when ordering is not required) and then emits the element unchanged, so several updates can be in flight in parallel. The client call is as follows:

  def updateBatches: Source[JDBCDataRow,NotUsed] = {
    logger.info(s"running updateBatches ...")
      Source
        .fromIterator(() => List(query,query2,query3).toIterator)
        .via(stub.batQuery)
        .via(stub.updateBat)

  }

The following is the complete source code for this demonstration:

jdbc.proto

syntax = "proto3";

import "google/protobuf/wrappers.proto";
import "google/protobuf/any.proto";
import "scalapb/scalapb.proto";


package grpc.jdbc.services;

option (scalapb.options) = {
  // use a custom Scala package name
  // package_name: "io.ontherocks.introgrpc.demo"

  // don't append file name to package
  flat_package: true

  // generate one Scala file for all messages (services still get their own file)
  single_file: true

  // add imports to generated file
  // useful when extending traits or using custom types
  // import: "io.ontherocks.hellogrpc.RockingMessage"

  // code to put at the top of generated file
  // works only with `single_file: true`
  //preamble: "sealed trait SomeSealedTrait"
};

/*
 * Demoes various customization options provided by ScalaPBs.
 */

message JDBCDataRow {
 string year = 1;
 string state = 2;
 string county = 3;
 string value = 4;
}


message JDBCQuery {
  string dbName = 1;
  string statement = 2;
  bytes parameters = 3;
  google.protobuf.Int32Value fetchSize= 4;
  google.protobuf.BoolValue autoCommit = 5;
  google.protobuf.Int32Value queryTimeout = 6;
}

message JDBCResult {
  bytes result = 1;
}

message JDBCUpdate {
  string dbName = 1;
  repeated string statements = 2;
  bytes parameters = 3;
  google.protobuf.Int32Value fetchSize= 4;
  google.protobuf.Int32Value queryTimeout = 5;
  int32 sqlType = 6;
  google.protobuf.Int32Value batch = 7;
  bytes returnGeneratedKey = 8;
}

message JDBCDate {
  int32 yyyy = 1;
  int32 mm   = 2;
  int32 dd   = 3;
}

message JDBCTime {
  int32 hh   = 1;
  int32 mm   = 2;
  int32 ss   = 3;
  int32 nnn  = 4;
}

message JDBCDateTime {
   JDBCDate date = 1;
   JDBCTime time = 2;
}

message MemberRow {
  string name = 1;
  JDBCDate birthday = 2;
  string description = 3;
  JDBCDateTime created_at = 4;
  bytes picture = 5;
}

service JDBCServices {
  rpc runQuery(JDBCQuery) returns (stream JDBCDataRow) {}
  rpc batQuery(stream JDBCQuery) returns (stream JDBCDataRow) {}
  rpc runDDL(JDBCUpdate) returns (JDBCResult) {}
  rpc insertRows(stream MemberRow) returns(JDBCResult) {}
  rpc updateRows(stream MemberRow) returns(JDBCResult) {}
  rpc getMembers(JDBCQuery) returns (stream MemberRow) {}
  rpc updateBat(stream JDBCDataRow) returns (stream JDBCDataRow) {}
}

JDBCEngine.scala

package sdp.jdbc.engine
import java.sql.PreparedStatement
import scala.collection.generic.CanBuildFrom
import akka.stream.scaladsl._
import scalikejdbc._
import scalikejdbc.streams._
import akka.NotUsed
import akka.stream._
import java.time._
import scala.concurrent.duration._
import scala.concurrent._
import sdp.jdbc.FileStreaming._

import scalikejdbc.TxBoundary.Try._

import scala.concurrent.ExecutionContextExecutor
import java.io.InputStream

object JDBCContext {
  type SQLTYPE = Int
  val SQL_EXEDDL= 1
  val SQL_UPDATE = 2
  val RETURN_GENERATED_KEYVALUE = true
  val RETURN_UPDATED_COUNT = false

}

case class JDBCQueryContext[M](
                                dbName: Symbol,
                                statement: String,
                                parameters: Seq[Any] = Nil,
                                fetchSize: Int = 100,
                                autoCommit: Boolean = false,
                                queryTimeout: Option[Int] = None)


case class JDBCContext(
                        dbName: Symbol,
                        statements: Seq[String] = Nil,
                        parameters: Seq[Seq[Any]] = Nil,
                        fetchSize: Int = 100,
                        queryTimeout: Option[Int] = None,
                        queryTags: Seq[String] = Nil,
                        sqlType: JDBCContext.SQLTYPE = JDBCContext.SQL_UPDATE,
                        batch: Boolean = false,
                        returnGeneratedKey: Seq[Option[Any]] = Nil,
                        // no return: None, return by index: Some(1), by name: Some("id")
                        preAction: Option[PreparedStatement => Unit] = None,
                        postAction: Option[PreparedStatement => Unit] = None) {

  ctx =>

  //helper functions

  def appendTag(tag: String): JDBCContext = ctx.copy(queryTags = ctx.queryTags :+ tag)

  def appendTags(tags: Seq[String]): JDBCContext = ctx.copy(queryTags = ctx.queryTags ++ tags)

  def setFetchSize(size: Int): JDBCContext = ctx.copy(fetchSize = size)

  def setQueryTimeout(time: Option[Int]): JDBCContext = ctx.copy(queryTimeout = time)

  def setPreAction(action: Option[PreparedStatement => Unit]): JDBCContext = {
    if (ctx.sqlType == JDBCContext.SQL_UPDATE &&
      !ctx.batch && ctx.statements.size == 1)
      ctx.copy(preAction = action)
    else
      throw new IllegalStateException("JDBCContex setting error: preAction not supported!")
  }

  def setPostAction(action: Option[PreparedStatement => Unit]): JDBCContext = {
    if (ctx.sqlType == JDBCContext.SQL_UPDATE &&
      !ctx.batch && ctx.statements.size == 1)
      ctx.copy(postAction = action)
    else
      throw new IllegalStateException("JDBCContex setting error: preAction not supported!")
  }

  def appendDDLCommand(_statement: String, _parameters: Any*): JDBCContext = {
    if (ctx.sqlType == JDBCContext.SQL_EXEDDL) {
      ctx.copy(
        statements = ctx.statements ++ Seq(_statement),
        parameters = ctx.parameters ++ Seq(Seq(_parameters))
      )
    } else
      throw new IllegalStateException("JDBCContex setting error: option not supported!")
  }

  def appendUpdateCommand(_returnGeneratedKey: Boolean, _statement: String,_parameters: Any*): JDBCContext = {
    if (ctx.sqlType == JDBCContext.SQL_UPDATE && !ctx.batch) {
      ctx.copy(
        statements = ctx.statements ++ Seq(_statement),
        parameters = ctx.parameters ++ Seq(_parameters),
        returnGeneratedKey = ctx.returnGeneratedKey ++ (if (_returnGeneratedKey) Seq(Some(1)) else Seq(None))
      )
    } else
      throw new IllegalStateException("JDBCContex setting error: option not supported!")
  }

  def appendBatchParameters(_parameters: Any*): JDBCContext = {
    if (ctx.sqlType != JDBCContext.SQL_UPDATE || !ctx.batch)
      throw new IllegalStateException("JDBCContex setting error: batch parameters only supported for SQL_UPDATE and batch = true!")

    var matchParams = true
    if (ctx.parameters != Nil)
      if (ctx.parameters.head.size != _parameters.size)
        matchParams = false
    if (matchParams) {
      ctx.copy(
        parameters = ctx.parameters ++ Seq(_parameters)
      )
    } else
      throw new IllegalStateException("JDBCContex setting error: batch command parameters not match!")
  }


  def setBatchReturnGeneratedKeyOption(returnKey: Boolean): JDBCContext = {
    if (ctx.sqlType != JDBCContext.SQL_UPDATE || !ctx.batch)
      throw new IllegalStateException("JDBCContex setting error: only supported in batch update commands!")
    ctx.copy(
      returnGeneratedKey = if (returnKey) Seq(Some(1)) else Nil
    )
  }

  def setDDLCommand(_statement: String, _parameters: Any*): JDBCContext = {
    ctx.copy(
      statements = Seq(_statement),
      parameters = Seq(_parameters),
      sqlType = JDBCContext.SQL_EXEDDL,
      batch = false
    )
  }

  def setUpdateCommand(_returnGeneratedKey: Boolean, _statement: String,_parameters: Any*): JDBCContext = {
    ctx.copy(
      statements = Seq(_statement),
      parameters = Seq(_parameters),
      returnGeneratedKey = if (_returnGeneratedKey) Seq(Some(1)) else Seq(None),
      sqlType = JDBCContext.SQL_UPDATE,
      batch = false
    )
  }
  def setBatchCommand(_statement: String): JDBCContext = {
    ctx.copy (
      statements = Seq(_statement),
      sqlType = JDBCContext.SQL_UPDATE,
      batch = true
    )
  }

}

object JDBCEngine {

  import JDBCContext._

  type JDBCDate = LocalDate
  type JDBCDateTime = LocalDateTime
  type JDBCTime = LocalTime

  def jdbcSetDate(yyyy: Int, mm: Int, dd: Int) = LocalDate.of(yyyy,mm,dd)
  def jdbcSetTime(hh: Int, mm: Int, ss: Int, nn: Int) = LocalTime.of(hh,mm,ss,nn)
  def jdbcSetDateTime(date: JDBCDate, time: JDBCTime) =  LocalDateTime.of(date,time)
  def jdbcSetNow = LocalDateTime.now()


  type JDBCBlob = InputStream

  def fileToJDBCBlob(fileName: String, timeOut: FiniteDuration = 60 seconds)(
    implicit mat: Materializer) = FileToInputStream(fileName,timeOut)

  def jdbcBlobToFile(blob: JDBCBlob, fileName: String)(
    implicit mat: Materializer) =  InputStreamToFile(blob,fileName)



  private def noExtractor(message: String): WrappedResultSet => Nothing = { (rs: WrappedResultSet) =>
    throw new IllegalStateException(message)
  }

  def jdbcAkkaStream[A](ctx: JDBCQueryContext[A],extractor: WrappedResultSet => A)
                       (implicit ec: ExecutionContextExecutor): Source[A,NotUsed] = {
      val publisher: DatabasePublisher[A] = NamedDB(ctx.dbName) readOnlyStream {
      val rawSql = new SQLToCollectionImpl[A, NoExtractor](ctx.statement, ctx.parameters)(noExtractor(""))
      ctx.queryTimeout.foreach(rawSql.queryTimeout(_))
      val sql: SQL[A, HasExtractor] = rawSql.map(extractor)

      sql.iterator
        .withDBSessionForceAdjuster(session => {
          session.connection.setAutoCommit(ctx.autoCommit)
          session.fetchSize(ctx.fetchSize)
        })
    }
    Source.fromPublisher[A](publisher)
  }


  def jdbcQueryResult[C[_] <: TraversableOnce[_], A](ctx: JDBCQueryContext[A],
                                                     extractor: WrappedResultSet => A)(
                                                      implicit cbf: CanBuildFrom[Nothing, A, C[A]]): C[A] = {

    val rawSql = new SQLToCollectionImpl[A, NoExtractor](ctx.statement, ctx.parameters)(noExtractor(""))
    ctx.queryTimeout.foreach(rawSql.queryTimeout(_))
    rawSql.fetchSize(ctx.fetchSize)
    implicit val session = NamedAutoSession(ctx.dbName)
    val sql: SQL[A, HasExtractor] = rawSql.map(extractor)
    sql.collection.apply[C]()

  }

  def jdbcExecuteDDL(ctx: JDBCContext)(implicit ec: ExecutionContextExecutor): Future[String] = {
    if (ctx.sqlType != SQL_EXEDDL) {
      Future.failed(new IllegalStateException("JDBCContex setting error: sqlType must be 'SQL_EXEDDL'!"))
    }
    else {
      Future {
        NamedDB(ctx.dbName) localTx { implicit session =>
          ctx.statements.foreach { stm =>
            val ddl = new SQLExecution(statement = stm, parameters = Nil)(
              before = WrappedResultSet => {})(
              after = WrappedResultSet => {})

            ddl.apply()
          }
          "SQL_EXEDDL executed succesfully."
        }
      }
    }
  }

  def jdbcBatchUpdate[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
    implicit ec: ExecutionContextExecutor,
             cbf: CanBuildFrom[Nothing, Long, C[Long]]): Future[C[Long]] = {
    if (ctx.statements == Nil)
      Future.failed(new IllegalStateException("JDBCContex setting error: statements empty!"))
    else if (ctx.sqlType != SQL_UPDATE) {
      Future.failed(new IllegalStateException("JDBCContex setting error: sqlType must be 'SQL_UPDATE'!"))
    }
    else {
      if (ctx.batch) {
        if (noReturnKey(ctx)) {
          val usql = SQL(ctx.statements.head)
            .tags(ctx.queryTags: _*)
            .batch(ctx.parameters: _*)
          Future {
            NamedDB(ctx.dbName) localTx { implicit session =>
              ctx.queryTimeout.foreach(session.queryTimeout(_))
              usql.apply[Seq]()
              Seq.empty[Long].to[C]
            }
          }
        } else {
          val usql = new SQLBatchWithGeneratedKey(ctx.statements.head, ctx.parameters, ctx.queryTags)(None)
          Future {
            NamedDB(ctx.dbName) localTx { implicit session =>
              ctx.queryTimeout.foreach(session.queryTimeout(_))
              usql.apply[C]()
            }
          }
        }

      } else {
        Future.failed(new IllegalStateException("JDBCContex setting error: must set batch = true !"))
      }
    }
  }
  private def singleTxUpdateWithReturnKey[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
          implicit ec: ExecutionContextExecutor,
                    cbf: CanBuildFrom[Nothing, Long, C[Long]]): Future[C[Long]] = {
    val Some(key) :: xs = ctx.returnGeneratedKey
    val params: Seq[Any] = ctx.parameters match {
      case Nil => Nil
      case p@_ => p.head
    }
    val usql = new SQLUpdateWithGeneratedKey(ctx.statements.head, params, ctx.queryTags)(key)
    Future {
      NamedDB(ctx.dbName) localTx { implicit session =>
        session.fetchSize(ctx.fetchSize)
        ctx.queryTimeout.foreach(session.queryTimeout(_))
        val result = usql.apply()
        Seq(result).to[C]
      }
    }
  }

  private def singleTxUpdateNoReturnKey[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
          implicit ec: ExecutionContextExecutor,
                   cbf: CanBuildFrom[Nothing, Long, C[Long]]): Future[C[Long]] = {
    val params: Seq[Any] = ctx.parameters match {
      case Nil => Nil
      case p@_ => p.head
    }
    val before = ctx.preAction match {
      case None => pstm: PreparedStatement => {}
      case Some(f) => f
    }
    val after = ctx.postAction match {
      case None => pstm: PreparedStatement => {}
      case Some(f) => f
    }
    val usql = new SQLUpdate(ctx.statements.head,params,ctx.queryTags)(before)(after)
    Future {
      NamedDB(ctx.dbName) localTx {implicit session =>
        session.fetchSize(ctx.fetchSize)
        ctx.queryTimeout.foreach(session.queryTimeout(_))
        val result = usql.apply()
        Seq(result.toLong).to[C]
      }
    }

  }

  private def singleTxUpdate[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
    implicit ec: ExecutionContextExecutor,
             cbf: CanBuildFrom[Nothing, Long, C[Long]]): Future[C[Long]] = {
    if (noReturnKey(ctx))
      singleTxUpdateNoReturnKey(ctx)
    else
      singleTxUpdateWithReturnKey(ctx)
  }

  private def noReturnKey(ctx: JDBCContext): Boolean = {
    if (ctx.returnGeneratedKey != Nil) {
      val k :: xs = ctx.returnGeneratedKey
      k match {
        case None => true
        case Some(k) => false
      }
    } else true
  }

  def noActon: PreparedStatement=>Unit = pstm => {}

  def multiTxUpdates[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
    implicit ec: ExecutionContextExecutor,
             cbf: CanBuildFrom[Nothing, Long, C[Long]]): Future[C[Long]] = {
    Future {
      NamedDB(ctx.dbName) localTx { implicit session =>
        session.fetchSize(ctx.fetchSize)
        ctx.queryTimeout.foreach(session.queryTimeout(_))
        val keys: Seq[Option[Any]] = ctx.returnGeneratedKey match {
          case Nil => Seq.fill(ctx.statements.size)(None)
          case k@_ => k
        }
        val sqlcmd = ctx.statements zip ctx.parameters zip keys
        val results = sqlcmd.map { case ((stm, param), key) =>
          key match {
            case None =>
              new SQLUpdate(stm, param, Nil)(noActon)(noActon).apply().toLong
            case Some(k) =>
              new SQLUpdateWithGeneratedKey(stm, param, Nil)(k).apply().toLong
          }
        }
        results.to[C]
      }
    }
  }


  def jdbcTxUpdates[C[_] <: TraversableOnce[_]](ctx: JDBCContext)(
    implicit ec: ExecutionContextExecutor,
             cbf: CanBuildFrom[Nothing, Long, C[Long]]): Future[C[Long]] = {
    if (ctx.statements == Nil)
      Future.failed(new IllegalStateException("JDBCContex setting error: statements empty!"))
    else if (ctx.sqlType != SQL_UPDATE) {
      Future.failed(new IllegalStateException("JDBCContex setting error: sqlType must be 'SQL_UPDATE'!"))
    }
    else {
      if (!ctx.batch) {
        if (ctx.statements.size == 1)
          singleTxUpdate(ctx)
        else
          multiTxUpdates(ctx)
      } else
        Future.failed(new IllegalStateException("JDBCContex setting error: must set batch = false !"))

    }
  }

  case class JDBCActionStream[R](dbName: Symbol, parallelism: Int = 1, processInOrder: Boolean = true,
                                 statement: String, prepareParams: R => Seq[Any]) {
    jas =>
    def setDBName(db: Symbol): JDBCActionStream[R] = jas.copy(dbName = db)
    def setParallelism(parLevel: Int): JDBCActionStream[R] = jas.copy(parallelism = parLevel)
    def setProcessOrder(ordered: Boolean): JDBCActionStream[R] = jas.copy(processInOrder = ordered)

    private def perform(r: R)(implicit ec: ExecutionContextExecutor) = {
      import scala.concurrent._
      val params = prepareParams(r)
      Future {
        NamedDB(dbName) autoCommit { session =>
          session.execute(statement, params: _*)
        }
        r
      }
     // Future.successful(r)
    }
    def performOnRow(implicit ec: ExecutionContextExecutor): Flow[R, R, NotUsed] =
      if (processInOrder)
        Flow[R].mapAsync(parallelism)(perform)
      else
        Flow[R].mapAsyncUnordered(parallelism)(perform)

  }
  object JDBCActionStream {
    def apply[R](_dbName: Symbol, _statement: String, params: R => Seq[Any]): JDBCActionStream[R] =
      new JDBCActionStream[R](dbName = _dbName, statement=_statement, prepareParams = params)
  }
}

JDBCService.scala

package demo.grpc.jdbc.services

import akka.NotUsed
import akka.stream.scaladsl.{Source,Flow}
import grpc.jdbc.services._
import java.util.logging.Logger
import protobuf.bytes.Converter._
import sdp.jdbc.engine._
import JDBCEngine._
import scalikejdbc.WrappedResultSet
import scala.concurrent.ExecutionContextExecutor

class JDBCStreamingServices(implicit ec: ExecutionContextExecutor) extends JdbcGrpcAkkaStream.JDBCServices {
  val logger = Logger.getLogger(classOf[JDBCStreamingServices].getName)

  val toRow = (rs: WrappedResultSet) => JDBCDataRow(
    year = rs.string("REPORTYEAR"),
    state = rs.string("STATENAME"),
    county = rs.string("COUNTYNAME"),
    value = rs.string("VALUE")
  )
  override def runQuery: Flow[JDBCQuery, JDBCDataRow, NotUsed] = {
    logger.info("**** runQuery called on service side ***")
    Flow[JDBCQuery]
      .flatMapConcat { q =>
        //unpack JDBCQuery and construct the context
        var params: Seq[Any] =  Nil
        if (q.parameters != _root_.com.google.protobuf.ByteString.EMPTY)
          params = unmarshal[Seq[Any]](q.parameters)
        logger.info(s"**** query parameters: ${params} ****")
        val ctx = JDBCQueryContext[JDBCDataRow](
          dbName = Symbol(q.dbName),
          statement = q.statement,
          parameters = params,
          fetchSize = q.fetchSize.getOrElse(100),
          autoCommit = q.autoCommit.getOrElse(false),
          queryTimeout = q.queryTimeout
        )
        jdbcAkkaStream(ctx, toRow)
      }
  }

  override def batQuery: Flow[JDBCQuery, JDBCDataRow, NotUsed] = runQuery

  override def runDDL: Flow[JDBCUpdate, JDBCResult, NotUsed] = {
    logger.info("**** runDDL called on service side ***")
    Flow[JDBCUpdate]
      .flatMapConcat { context =>
        //unpack JDBCUpdate and construct the context

        val ctx = JDBCContext(
          dbName = Symbol(context.dbName),
          statements = context.statements,
          sqlType = JDBCContext.SQL_EXEDDL,
          queryTimeout = context.queryTimeout
        )

        logger.info(s"**** JDBCContext => ${ctx} ***")

        Source
          .fromFuture(jdbcExecuteDDL(ctx))
          .map { r => JDBCResult(marshal(r)) }

      }

  }

  override def insertRows: Flow[MemberRow, JDBCResult, NotUsed] = {
    logger.info("**** insertRows called on service side ***")
    val insertSQL = """
      insert into members(
        name,
        birthday,
        description,
        created_at
      ) values (?, ?, ?, ?)
    """
    Flow[MemberRow]
      .flatMapConcat { row =>
        val ctx = JDBCContext('h2)
          .setUpdateCommand(true,insertSQL,
             row.name,
             jdbcSetDate(row.birthday.get.yyyy,row.birthday.get.mm,row.birthday.get.dd),
             row.description,
             jdbcSetNow
          )

        logger.info(s"**** JDBCContext => ${ctx} ***")

        Source
          .fromFuture(jdbcTxUpdates[Vector](ctx))
          .map { r => JDBCResult(marshal(r)) }
      }
  }

  override def updateRows: Flow[MemberRow, JDBCResult, NotUsed] = {
    logger.info("**** updateRows called on service side ***")
    val updateSQL = "update members set description = ?, created_at = ? where name = ?"

    Flow[MemberRow]
      .flatMapConcat { row =>
        val ctx = JDBCContext('h2)
            .setBatchCommand(updateSQL)
            .appendBatchParameters(
              row.name + " updated.",
              jdbcSetNow,
              row.name
            ).setBatchReturnGeneratedKeyOption(true)

        logger.info(s"**** JDBCContext => ${ctx} ***")

        Source
          .fromFuture(jdbcBatchUpdate[Vector](ctx))
          .map { r => JDBCResult(marshal(r)) }
      }
  }
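
  // updateBat (introduced in the walkthrough above) completes the service so it
  // matches the updateBat rpc in jdbc.proto and the client's stub.updateBat call
  val params: JDBCDataRow => Seq[Any] = row => {
    Seq((row.value.toInt * 2), row.state, row.county, row.year) }
  val sql = "update AQMRPT set total = ? where statename = ? and countyname = ? and reportyear = ?"

  val jdbcActionStream = JDBCActionStream('h2, sql, params)
    .setParallelism(4).setProcessOrder(false)
  val jdbcActionFlow = jdbcActionStream.performOnRow

  override def updateBat: Flow[JDBCDataRow, JDBCDataRow, NotUsed] = {
    logger.info("**** updateBat called on service side ***")
    Flow[JDBCDataRow]
      .via(jdbcActionFlow)
  }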


  val toMemberRow = (rs: WrappedResultSet) => MemberRow(
    name = rs.string("name"),
    description = rs.string("description"),
    birthday = None,
    createdAt = None,
    picture = _root_.com.google.protobuf.ByteString.EMPTY
  )
  override def getMembers: Flow[JDBCQuery, MemberRow, NotUsed] = {
    logger.info("**** getMembers called on service side ***")
    Flow[JDBCQuery]
      .flatMapConcat { q =>
        //unpack JDBCQuery and construct the context
        var params: Seq[Any] =  Nil
        if (q.parameters != _root_.com.google.protobuf.ByteString.EMPTY)
           params = unmarshal[Seq[Any]](q.parameters)
        logger.info(s"**** query parameters: ${params} ****")
        val ctx = JDBCQueryContext[MemberRow](
          dbName = Symbol(q.dbName),
          statement = q.statement,
          parameters = params,
          fetchSize = q.fetchSize.getOrElse(100),
          autoCommit = q.autoCommit.getOrElse(false),
          queryTimeout = q.queryTimeout
        )
        jdbcAkkaStream(ctx, toMemberRow)
      }

  }
}

JDBCServer.scala

package demo.grpc.jdbc.server

import java.util.logging.Logger

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import io.grpc.Server
import demo.grpc.jdbc.services._
import io.grpc.ServerBuilder
import grpc.jdbc.services._

class gRPCServer(server: Server) {

  val logger: Logger = Logger.getLogger(classOf[gRPCServer].getName)

  def start(): Unit = {
    server.start()
    logger.info(s"Server started, listening on ${server.getPort}")
    sys.addShutdownHook {
      // Use stderr here since the logger may have been reset by its JVM shutdown hook.
      System.err.println("*** shutting down gRPC server since JVM is shutting down")
      stop()
      System.err.println("*** server shut down")
    }
    ()
  }

  def stop(): Unit = {
    server.shutdown()
  }

  /**
    * Await termination on the main thread since the grpc library uses daemon threads.
    */
  def blockUntilShutdown(): Unit = {
    server.awaitTermination()
  }
}

object JDBCServer extends App {
  import sdp.jdbc.config._

  implicit val system = ActorSystem("JDBCServer")
  implicit val mat = ActorMaterializer.create(system)
  implicit val ec = system.dispatcher

  ConfigDBsWithEnv("dev").setup('h2)
  ConfigDBsWithEnv("dev").loadGlobalSettings()

  val server = new gRPCServer(
    ServerBuilder
      .forPort(50051)
      .addService(
        JdbcGrpcAkkaStream.bindService(
          new JDBCStreamingServices
        )
      ).build()
  )
  server.start()
  //  server.blockUntilShutdown()
  scala.io.StdIn.readLine()
  ConfigDBsWithEnv("dev").close('h2)
  mat.shutdown()
  system.terminate()
}

JDBCClient.scala

package demo.grpc.jdbc.client
import grpc.jdbc.services._
import java.util.logging.Logger

import protobuf.bytes.Converter._
import akka.stream.scaladsl._
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ThrottleMode}
import io.grpc._
import sdp.jdbc.engine._

class JDBCStreamClient(host: String, port: Int) {
  val logger: Logger = Logger.getLogger(classOf[JDBCStreamClient].getName)

  val channel = ManagedChannelBuilder
    .forAddress(host,port)
    .usePlaintext(true)
    .build()


  val stub = JdbcGrpcAkkaStream.stub(channel)
  val query = JDBCQuery (
    dbName = "h2",
    statement = "select * from AQMRPT where STATENAME = ? and VALUE = ?",
    parameters = marshal(Seq("Arizona", 2))
  )
  val query2 = JDBCQuery (
    dbName = "h2",
    statement = "select * from AQMRPT where STATENAME = ? and VALUE = ?",
    parameters = marshal(Seq("Colorado", 3))
  )
  val query3= JDBCQuery (
    dbName = "h2",
    statement = "select * from AQMRPT where STATENAME = ? and VALUE = ?",
    parameters = marshal(Seq("Arkansas", 8))
  )

  def queryRows: Source[JDBCDataRow,NotUsed] = {
    logger.info(s"running queryRows ...")
    Source
        .single(query)
        .via(stub.runQuery)
  }

  def batQueryRows: Source[JDBCDataRow,NotUsed] = {
    logger.info(s"running batQueryRows ...")
    Source
      .fromIterator(() => List(query,query2,query3).toIterator)
      .via(stub.batQuery)
  }

  val dropSQL: String ="""
      drop table members
    """

  val createSQL: String ="""
    create table members (
      id serial not null primary key,
      name varchar(30) not null,
      description varchar(1000),
      birthday date,
      created_at timestamp not null,
      picture blob
    )"""
  val ctx = JDBCUpdate (
    dbName = "h2",
    sqlType = JDBCContext.SQL_EXEDDL,
    statements = Seq(dropSQL,createSQL)
  )

  def createTbl: Source[JDBCResult,NotUsed] = {
    logger.info(s"running createTbl ...")
    Source
      .single(ctx)
      .via(stub.runDDL)
  }

  val p1 = MemberRow( "Peter Chan",Some(JDBCDate(1967,5,17)),"new member1",None,_root_.com.google.protobuf.ByteString.EMPTY)
  val p2 = MemberRow( "Alanda Wong",Some(JDBCDate(1980,11,10)),"new member2",None,_root_.com.google.protobuf.ByteString.EMPTY)
  val p3 = MemberRow( "Kate Zhang",Some(JDBCDate(1969,8,13)),"new member3",None,_root_.com.google.protobuf.ByteString.EMPTY)
  val p4 = MemberRow( "Tiger Chan",Some(JDBCDate(1962,5,1)),"new member4",None,_root_.com.google.protobuf.ByteString.EMPTY)

  def insertRows: Source[JDBCResult,NotUsed] = {
    logger.info(s"running insertRows ...")
    Source
      .fromIterator(() => List(p1,p2,p3,p4).toIterator)
      .via(stub.insertRows)
  }

  val queryMember = JDBCQuery (
    dbName = "h2",
    statement = "select * from members"
  )
  def updateRows: Source[JDBCResult,NotUsed] = {
    logger.info(s"running updateRows ...")
    Source
      .single(queryMember)
      .via(stub.getMembers)
      .via(stub.updateRows)
  }
  def updateBatches: Source[JDBCDataRow,NotUsed] = {
    logger.info(s"running updateBatches ...")
      Source
        .fromIterator(() => List(query,query2,query3).toIterator)
        .via(stub.batQuery)
        .via(stub.updateBat)

  }

}


object TestConversion extends App {

  val orgval: Seq[Option[Any]] = Seq(Some(1),Some("id"),None,Some(2))
  println(s"original value: ${orgval}")

  val marval = marshal(orgval)

  println(s"marshal value: ${marval}")

  val unmval = unmarshal[Seq[Option[Any]]](marval)

  println(s"marshal value: ${unmval}")


  val m1 = MemberRow(name = "Peter")
  val m2 = m1.update(
    _.birthday.yyyy := 1989,
    _.birthday.mm := 10,
    _.birthday.dd := 3,
    _.description := "a new member"
  )


}


object QueryRows extends App {
  implicit val system = ActorSystem("QueryRows")
  implicit val mat = ActorMaterializer.create(system)

  val client = new JDBCStreamClient("localhost", 50051)

  client.queryRows.runForeach { r => println(r) }


  scala.io.StdIn.readLine()
  mat.shutdown()
  system.terminate()
}

object BatQueryRows extends App {
  implicit val system = ActorSystem("BatQueryRows")
  implicit val mat = ActorMaterializer.create(system)

  val client = new JDBCStreamClient("localhost", 50051)

  client.batQueryRows.runForeach(println)

  scala.io.StdIn.readLine()
  mat.shutdown()
  system.terminate()
}

object RunDDL extends App {
  implicit val system = ActorSystem("RunDDL")
  implicit val mat = ActorMaterializer.create(system)

  val client = new JDBCStreamClient("localhost", 50051)

  client.createTbl.runForeach{r => println(unmarshal[Seq[Any]](r.result))}

  scala.io.StdIn.readLine()
  mat.shutdown()
  system.terminate()

}

object InsertRows extends App {
  implicit val system = ActorSystem("InsertRows")
  implicit val mat = ActorMaterializer.create(system)

  val client = new JDBCStreamClient("localhost", 50051)

  client.insertRows.runForeach { r => println(unmarshal[Vector[Long]](r.result)) }

  scala.io.StdIn.readLine()
  mat.shutdown()
  system.terminate()
}

object UpdateRows extends App {
  implicit val system = ActorSystem("UpdateRows")
  implicit val mat = ActorMaterializer.create(system)

  val client = new JDBCStreamClient("localhost", 50051)

  client.updateRows.runForeach{ r => println(unmarshal[Vector[Long]](r.result)) }

  scala.io.StdIn.readLine()
  mat.shutdown()
  system.terminate()
}

object BatUpdates extends App {
  implicit val system = ActorSystem("BatUpdates")
  implicit val mat = ActorMaterializer.create(system)

  val client = new JDBCStreamClient("localhost", 50051)

  client.updateBatches.runForeach(println)

  scala.io.StdIn.readLine()
  mat.shutdown()
  system.terminate()
}

ByteConverter.scala

package protobuf.bytes
import java.io.{ByteArrayInputStream,ByteArrayOutputStream,ObjectInputStream,ObjectOutputStream}
import com.google.protobuf.ByteString
object Converter {

  def marshal(value: Any): ByteString = {
    val stream: ByteArrayOutputStream = new ByteArrayOutputStream()
    val oos = new ObjectOutputStream(stream)
    oos.writeObject(value)
    oos.close()
    ByteString.copyFrom(stream.toByteArray())
  }

  def unmarshal[A](bytes: ByteString): A = {
    val ois = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray))
    val value = ois.readObject()
    ois.close()
    value.asInstanceOf[A]
  }

}

Other parts of the source code and system configuration can be found in the previous installment.
