Akka Typed - CQRS read/write separation pattern

Keywords: snapshot Database Programming REST

Event sourcing and clustering were introduced in earlier posts; now it is time to discuss CQRS. CQRS is a read/write separation pattern made up of an independent writer program and reader program; the principle itself was covered in previous blogs. Akka Typed naturally supports the CQRS pattern, at least on the writer side, as can be seen from EventSourcedBehavior. Akka Typed provides the new EventSourcedBehavior actor, which greatly simplifies building and using persistent actors, but it also places some restrictions on the programmer. For example, changing the state by hand becomes harder: EventSourcedBehavior does not support nested persist, so you cannot persist some special event and then adjust the state inside that event's handler. Here is an example from a shopping-cart application: when payment is completed, a snapshot needs to be taken. Here is the snapshot code:

       snapshotWhen {
          (state,evt,seqNr) => CommandHandler.takeSnapshot(state,evt,seqNr)
       }
...
 
  def takeSnapshot(state: Voucher, evt: Events.Action, lstSeqNr: Long)(implicit pid: PID) = {
    if (evt.isInstanceOf[Events.PaymentMade]
        || evt.isInstanceOf[Events.VoidVoucher.type]
        || evt.isInstanceOf[Events.SuspVoucher.type])
      if (state.items.isEmpty) {
        log.step(s"#${state.header.num} taking snapshot at [$lstSeqNr] ...")
        true
      } else
        false
    else
      false
  }
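
For orientation, here is a minimal sketch of where a snapshotWhen predicate hangs off an EventSourcedBehavior. The Command/Event/State types here are simplified stand-ins, not the application's real ones:

import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

sealed trait Command
sealed trait Event
final case class PaymentMade(amount: BigDecimal) extends Event
final case class State(items: List[String] = Nil)

def cart(entityId: String): EventSourcedBehavior[Command, Event, State] =
  EventSourcedBehavior[Command, Event, State](
    persistenceId  = PersistenceId.ofUniqueId(entityId),
    emptyState     = State(),
    commandHandler = (_, _) => Effect.none,
    eventHandler   = (state, _) => state
  ).snapshotWhen { (state, evt, _) =>
    // same shape as takeSnapshot above: only snapshot on specific events and an empty cart
    evt.isInstanceOf[PaymentMade] && state.items.isEmpty
  }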

There is no problem judging the event type, since it is the current event; the other condition, however, is that the shopping cart must be empty. That is awkward, because the cart's emptiness is only determined by the result of applying these events, i.e. one step later, and determining that result means computing the cart contents again, which looks like a circular dependency. In classic Akka, after examining the result of applying an event, if the state needed further changes we could persist a special event and then adjust the state in that event's handler. EventSourcedBehavior does not support nested persist, so the only option is this:

      case PaymentMade(acct, dpt, num, ref,amount) =>
             ...
              writerInternal.lastVoucher = Voucher(vchs, vItems)
              endVoucher(Voucher(vchs,vItems),TXNTYPE.sales)
              Voucher(vchs.nextVoucher, List())
             ...   
          
    

All we can do is use the current state to run the settlement calculation and then return an empty shopping cart as the new state, so that the snapshot predicate sees an empty cart and the snapshot goes through.
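
In sketch form, the idea in the event handler is roughly this; the names mirror the snippet above but are simplified, and this is not the verbatim handler:

// simplified sketch of the workaround, not the application's actual handler
def applyEvent(state: Voucher, evt: Events.Action): Voucher =
  evt match {
    case _: PaymentMade =>
      endVoucher(state, TXNTYPE.sales)            // settle using the cart as it stands
      Voucher(state.header.nextVoucher, List())   // then return an empty cart so snapshotWhen fires
    case _ =>
      state                                       // other events update the cart as usual
  }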

OK, Akka's read-side programming is implemented through PersistenceQuery. The reader's job is to read the events back from the database and restore them into a concrete data format. Let's look at the reader's implementation details in this application, starting from where the reader is called:

    val readerShard = writerInternal.optSharding.get
    val readerRef = readerShard.entityRefFor(POSReader.EntityKey, s"${pid.shopid}:${pid.posid}")
    readerRef ! Messages.PerformRead(pid.shopid, pid.posid,
      writerInternal.lastVoucher.header.num, writerInternal.lastVoucher.header.opr,
      bseq, eseq, txntype,
      writerInternal.expurl, writerInternal.expacct, writerInternal.exppass)

As you can see, the reader is a cluster sharding entity. The idea is to send a message to an entity after each transaction; when the reading work is done the entity terminates itself, so the resources it occupied are released immediately. The reader actor is defined as follows:

object POSReader extends LogSupport {
  val EntityKey: EntityTypeKey[Command] = EntityTypeKey[Command]("POSReader")

  def apply(nodeAddress: String, trace: Boolean): Behavior[Command] = {
    log.stepOn = trace
    implicit var pid: PID = PID("","")
    Behaviors.supervise(
      Behaviors.setup[Command] { ctx =>
        Behaviors.withTimers { timer =>
          implicit val ec = ctx.executionContext
          Behaviors.receiveMessage {
            case PerformRead(shopid, posid, vchnum, opr, bseq, eseq, txntype, xurl, xacct, xpass) =>
              pid = PID(shopid, posid)
              log.step(s"POSReader: PerformRead($shopid,$posid,$vchnum,$opr,$bseq,$eseq,$txntype,$xurl,$xacct,$xpass)")(PID(shopid, posid))
              val futReadSaveNExport = for {
                txnitems <- ActionReader.readActions(ctx, vchnum, opr, bseq, eseq, trace, nodeAddress, shopid, posid, txntype)
                _ <- ExportTxns.exportTxns(xurl, xacct, xpass, vchnum, txntype == Events.TXNTYPE.suspend,
                     { if(txntype == Events.TXNTYPE.voidall)
                       txnitems.map (_.copy(txntype=Events.TXNTYPE.voidall))
                     else txnitems },
                     trace)(ctx.system.toClassic, pid)
              } yield ()
              ctx.pipeToSelf(futReadSaveNExport) {
                case Success(_) => {
                  timer.startSingleTimer(ReaderFinish(shopid, posid, vchnum), readInterval.seconds)
                  StopReader
                }
                case Failure(err) =>
                  log.error(s"POSReader:  Error: ${err.getMessage}")
                  timer.startSingleTimer(ReaderFinish(shopid, posid, vchnum), readInterval.seconds)
                  StopReader
              }

              Behaviors.same
            case StopReader =>
              Behaviors.same
            case ReaderFinish(shopid, posid, vchnum) =>
              Behaviors.stopped(
                () => log.step(s"POSReader: {$shopid,$posid} finish reading voucher#$vchnum and stopped")(PID(shopid, posid))
              )
          }
        }
      }
    ).onFailure(SupervisorStrategy.restart)
  }
}
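
For completeness, here is a minimal sketch of how the POSReader entity could be registered with cluster sharding; this wiring is assumed, it is not shown in the post. The writer side then looks the entity up with entityRefFor, as in the call shown earlier:

import akka.actor.typed.ActorSystem
import akka.cluster.sharding.typed.scaladsl.{ClusterSharding, Entity}

def initReaderSharding(system: ActorSystem[_], nodeAddress: String, trace: Boolean): Unit = {
  // register the reader entity type; shards are created on demand when the writer sends a message
  ClusterSharding(system).init(
    Entity(POSReader.EntityKey)(_ => POSReader(nodeAddress, trace))
  )
}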

The reader itself is an ordinary actor. Note that a reader program may be large and complex and has to be split into modules; here the modules are cut along the processing sequence, which means a later module may need the results produced by an earlier one. Remember never to block threads inside an actor: every module returns a Future, and the Futures are chained together with for-yield. We used ctx.pipeToSelf above: when the combined Future completes, a message is piped back to the reader itself, and a single-shot timer then delivers ReaderFinish to tell the actor to stop.

In this example, we divide the reader task into:

1. Read the events from the database

2. Replay the events to rebuild the state data (the shopping-cart contents)

3. Store the resulting shopping-cart contents in the database as transaction document items

4. Output the transaction data to the REST API provided by the user

Event reading is implemented with the Cassandra persistence plugin:

    import akka.NotUsed
    import akka.persistence.cassandra.query.scaladsl.CassandraReadJournal
    import akka.persistence.query.{EventEnvelope, PersistenceQuery}
    import akka.stream.scaladsl.Source
    import scala.concurrent.Future

    val query =
      PersistenceQuery(classicSystem).readJournalFor[CassandraReadJournal](CassandraReadJournal.Identifier)

    // issue query to journal
    val source: Source[EventEnvelope, NotUsed] =
      query.currentEventsByPersistenceId(s"${pid.shopid}:${pid.posid}", startSeq, endSeq)

    // materialize stream, consuming events
    val readActions: Future[List[Any]] = source.runFold(List[Any]()) { (lstAny, evl) => evl.event :: lstAny }

This part is relatively simple: define a PersistenceQuery, use it to generate a Source, and then run the Source to get a Future[List[Any]].

Replaying the events generates the transaction data:

    def buildVoucher(actions: List[Any]): List[TxnItem] = {
      log.step(s"POSReader: read actions: $actions")
      val (voidtxns,onlytxns) = actions.asInstanceOf[Seq[Action]].pickOut(_.isInstanceOf[Voided])
      val listOfActions = onlytxns.reverse zip (LazyList from 1)   //zipWithIndex
      listOfActions.foreach { case (txn,idx) =>
        txn.asInstanceOf[Action] match {
          case Voided(_) =>
          case ti@_ =>
            curTxnItem = EventHandlers.buildTxnItem(ti.asInstanceOf[Action],vchState).copy(opr=cshr)
            if(voidtxns.exists(a => a.asInstanceOf[Voided].seq == idx)) {
              curTxnItem = curTxnItem.copy(txntype = TXNTYPE.voided, opr=cshr)
              log.step(s"POSReader: voided txnitem: $curTxnItem")
            }
            val vch = EventHandlers.updateState(ti.asInstanceOf[Action],vchState,vchItems,curTxnItem,true)
            vchState = vch.header
            vchItems = vch.txnItems
            log.step(s"POSReader: built txnitem: ${vchItems.txnitems.head}")
        }
      }
      log.step(s"POSReader: voucher built with state: $vchState, items: ${vchItems.txnitems}")
      vchItems.txnitems
    }

Replaying the List[Event] produces the List[TxnItem].
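
Conceptually this replay is just a left fold of the event handler over the list of events; a generic sketch of the pattern (the application's buildVoucher above adds the void-handling and logging on top of this):

// event replay as a left fold: start from the empty state and apply each event in order
def replay[S, E](events: List[E], emptyState: S)(applyEvent: (S, E) => S): S =
  events.foldLeft(emptyState)(applyEvent)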

Write List[TxnItem] to the database:

  def writeTxnsToDB(vchnum: Int, txntype: Int, bseq: Long, eseq: Long, txns: List[TxnItem])(
                   implicit system: akka.actor.ActorSystem, session: CassandraSession, pid: PID): Future[Seq[TxnItem]] = ???
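
The body is not shown in the post; as an illustration only, one way to write the items one by one and hand them back could look like the following, where insertTxnItem is a hypothetical helper and not from the source:

import scala.concurrent.{ExecutionContext, Future}

// hypothetical single-row writer, for illustration only
def insertTxnItem(item: TxnItem)(implicit session: CassandraSession, ec: ExecutionContext): Future[TxnItem] = ???

def writeTxnsToDBSketch(txns: List[TxnItem])(
    implicit session: CassandraSession, ec: ExecutionContext): Future[Seq[TxnItem]] =
  Future.traverse(txns)(item => insertTxnItem(item))   // write each item, collect them back as a Seq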

Note that the return type is Future[Seq[TxnItem]]. We chain these steps together with for-yield:

    val txnitems: Future[List[Events.TxnItem]] = for {
      lst1 <- readActions                                 // read the event list from the Source
      lstTxns <- if (lst1.length < (endSeq - startSeq))   // if the list is incomplete, read again
                   readActions
                 else FastFuture.successful(lst1)
      items <- FastFuture.successful(buildVoucher(lstTxns))
      _ <- JournalTxns.writeTxnsToDB(vchnum, txntype, startSeq, endSeq, items)
      _ <- session.close(ec)
    } yield items

Note: the Future[List[TxnItem]] returned by the for comprehension is handed to the REST API output function, where the List[TxnItem] is converted to JSON as the payload embedded in the POST.
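
The export code itself is not shown here; as a rough sketch, assuming akka-http is used on the classic system (the ctx.system.toClassic argument hints at that), the POST could look something like this, with the JSON encoding of List[TxnItem] left abstract:

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model._
import scala.concurrent.Future

// illustrative only: POST a pre-rendered JSON body to the user-supplied REST endpoint
def postTxns(url: String, jsonBody: String)(implicit system: ActorSystem): Future[HttpResponse] =
  Http().singleRequest(
    HttpRequest(
      method = HttpMethods.POST,
      uri    = url,
      entity = HttpEntity(ContentTypes.`application/json`, jsonBody)
    )
  )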

Now that every subtask returns a Future, we can chain them with a for comprehension:

             val futReadSaveNExport = for {
                txnitems <- ActionReader.readActions(ctx, vchnum, opr, bseq, eseq, trace, nodeAddress, shopid, posid, txntype)
                _ <- ExportTxns.exportTxns(xurl, xacct, xpass, vchnum, txntype == Events.TXNTYPE.suspend,
                     { if(txntype == Events.TXNTYPE.voidall)
                       txnitems.map (_.copy(txntype=Events.TXNTYPE.voidall))
                     else txnitems },
                     trace)(ctx.system.toClassic, pid)
              } yield ()

Speaking of EventSourcedBehavior and the Cassandra plugin: it reminds me that the configuration file differs quite a bit between the old and new versions. The application.conf now looks like this:

akka {
  loglevel = INFO
  actor {
    provider = cluster
    serialization-bindings {
      "com.datatech.pos.cloud.CborSerializable" = jackson-cbor
    }
  }
  remote {
    artery {
      canonical.hostname = "192.168.11.189"
      canonical.port = 0
    }
  }
  cluster {
    seed-nodes = [
      "akka://cloud-pos-server@192.168.11.189:2551"]
    sharding {
      passivate-idle-entity-after = 5 m
    }
  }
  # use Cassandra to store both snapshots and the events of the persistent actors
  persistence {
    journal.plugin = "akka.persistence.cassandra.journal"
    snapshot-store.plugin = "akka.persistence.cassandra.snapshot"
  }
}
akka.persistence.cassandra {
  # don't use autocreate in production
  journal.keyspace = "poc2g"
  journal.keyspace-autocreate = on
  journal.tables-autocreate = on
  snapshot.keyspace = "poc2g_snapshot"
  snapshot.keyspace-autocreate = on
  snapshot.tables-autocreate = on
}

datastax-java-driver {
  basic.contact-points = ["192.168.11.189:9042"]
  basic.load-balancing-policy.local-datacenter = "datacenter1"
}

The keyspace names can be defined in the akka.persistence.cassandra section, so old and new versions of the application can share one Cassandra instance and be online at the same time.

 

 
