Author: Sun Jianbo (alias: Tianyuan), Alibaba Cloud Technical Expert
Overview: The OAM Spec has been iterating for nearly three months, and the v1alpha2 version is finally released! The new version is more Kubernetes-friendly overall, strikes a balance between standardization and extensibility, and supports CRDs better, while remaining faithful to the platform-independent OAM Spec. If you have already written a CRD Operator, you can smoothly integrate it into the OAM system and enjoy the dividends of the OAM model.
At present, OAM has become the core architecture for building cloud products at many companies, including Alibaba, Microsoft, Upbound, and Harmony Cloud. They have built application-centric, user-friendly Kubernetes PaaS platforms on top of OAM; made full use of OAM's standardization and extensibility by implementing OAM's core Controller and quickly integrating existing Operator capabilities; and used OAM to connect multiple modules, breaking the predicament in which the original Operators were isolated from each other and could not be reused.
- To understand the background and origin of OAM, see "Deep Interpretation! Lessons and Practices from Upgrading Alibaba's Unified Application Management Architecture";
- For the value OAM brings to end users, see "In-depth Interpretation of OAM: What Value Does OAM Bring to Cloud Native Applications?".
Without further ado, let's see what changes v1alpha2 brings.
Description of major changes
For your convenience, only the main changes are listed here; for the details, the upstream OAM Spec GitHub repository prevails.
Terminology Description
- CRD (Custom Resource Definition): in OAM, a CRD is a generic definition for describing custom resources. In the Kubernetes implementation of OAM it corresponds exactly to a Kubernetes CRD; in non-Kubernetes implementations, an OAM CRD needs to include apiVersion/kind and be able to describe fields for validation.
- CR (Custom Resource): an instance of a CRD in OAM, i.e. a resource description conforming to the field format defined in the CRD. In the Kubernetes implementation of OAM it corresponds exactly to a Kubernetes CR; non-Kubernetes implementations only need to align apiVersion/kind and the field format definitions.
Major Change 1: Use a Reference Model to Define Workload, Trait, and Scope
The original way v1alpha1 worked was as follows:
```yaml
# Old version, for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: WorkloadType
metadata:
  name: OpenFaaS
  annotations:
    version: v1.0.0
    description: "OpenFaaS a Workload which can serve workload running as functions"
spec:
  group: openfaas.com
  version: v1alpha2
  names:
    kind: Function
    singular: function
    plural: functions
  workloadSettings: |
    {
      "$schema": "http://json-schema.org/draft-07/schema#",
      "type": "object",
      "required": [
        "name",
        "image"
      ],
      "properties": {
        "name": {
          "type": "string",
          "description": "the name to the function"
        },
        "image": {
          "type": "string",
          "description": "the docker image of the function"
        }
      }
    }
```
In the original pattern, group/version/kind were plain fields and spec validation was expressed with JSON Schema, so the overall format was CRD-like but not fully consistent with a CRD.
The new v1alpha2 version switches entirely to a reference model: WorkloadDefinition, TraitDefinition, and ScopeDefinition each describe a reference relationship. A CRD can be referenced directly, with name being the name of the CRD. For non-Kubernetes OAM implementations, the name here is an index used to find a CRD-like validation file that contains apiVersion and kind along with the corresponding schema checks.
- Workload
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: containerizedworkload.core.oam.dev
spec:
  definitionRef:
    # Name of the CRD.
    name: containerizedworkload.core.oam.dev
```
- Trait
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: manualscalertrait.core.oam.dev
spec:
  appliesToWorkloads:
    - containerizedworkload.core.oam.dev
  definitionRef:
    name: manualscalertrait.core.oam.dev
```
- Scope
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ScopeDefinition
metadata:
  name: networkscope.core.oam.dev
spec:
  allowComponentOverlap: true
  definitionRef:
    name: networkscope.core.oam.dev
```
Note:
- Here, for the Kubernetes implementation of OAM, name is the name of the CRD in Kubernetes and takes the form <plural-type>.<group>. The community best practice is for a CRD to have only one version running in the cluster; generally a new version is compatible with the old and the cluster is upgraded to the latest version all at once. If two versions do exist at the same time, the user can inspect them with kubectl get crd <name> and choose.
- Definition is a layer that is not end-user-facing; it is mainly used by platform implementers. For non-Kubernetes implementations, if multiple versions exist, the OAM implementation platform can present the different version choices to end users.
Major Change 2: Directly Embed a K8s CR as a Component or Trait Instance
In the original approach, at the Workload and Trait levels we extracted only the spec part of the CR and placed it in the workloadSettings and properties fields, respectively.
Although this method can be "translated" into a K8s CR, it is not friendly to existing CRDs in the K8s ecosystem: the spec needs to be redefined once in a different format.
```yaml
# Old version, for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: rediscluster
spec:
  workloadType: cache.crossplane.io/v1alpha1.RedisCluster
  workloadSettings:
    engineVersion: 1.0
    region: cn
```
```yaml
# Old version, for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: custom-single-app
  annotations:
    version: v1.0.0
    description: "Customized version of single-app"
spec:
  variables:
  components:
    - componentName: frontend
      instanceName: web-front-end
      parameterValues:
      traits:
        - name: manual-scaler
          properties:
            replicaCount: 5
```
Now the CR is embedded directly, and you can see the complete CR description below the workload and trait fields.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: example-server
spec:
  parameters:
    - name: xxx
      fieldPaths:
        - "spec.osType"
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: Server
    spec:
      osType: linux
      containers:
        - name: my-cool-server
          image:
            name: example/very-cool-server:1.0.0
          ports:
            - name: http
              value: 8080
          env:
            - name: CACHE_SECRET
```
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: cool-example
spec:
  components:
    - componentName: example-server
      traits:
        - trait:
            apiVersion: core.oam.dev/v1alpha2
            kind: ManualScalerTrait
            spec:
              replicaCount: 3
```
The benefits are obvious:
- It is easy to integrate existing CRDs in the K8s ecosystem, even including native K8s resources such as Deployment (accessed as a custom workload).
- Field definitions at the K8s CR level are mature, and parsing and validation are left entirely to the CRD system.
You may notice that the structure of traits is []trait{CR} rather than []CR; a seemingly useless trait field is added for two main reasons:
- Leave room for subsequent extensions in the trait dimension, such as possible ordering.
- Non-K8s systems can fully customize this layer without strictly following the CR format, so they are not bound to the K8s description format.
Major Change 3: Parameter Passing Replaces the Original fromParam with JsonPath
Allowing developers to set aside fields for operators to override has always been an important feature of OAM.
Reflected in the OAM Spec, this means developers define parameters in Component, and operators override those parameters in AppConfig through parameterValues.
The initial parameter-passing design attached a fromParam field to each field, which clearly cannot cover all scenarios once custom schemas are supported:
```yaml
# Old version, for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: rediscluster
spec:
  workloadType: cache.crossplane.io/v1alpha1.RedisCluster
  parameters:
    - name: engineVersion
      type: string
  workloadSettings:
    - name: engineVersion
      type: string
      fromParam: engineVersion
```
Later we proposed such a scheme:
```yaml
# Old version, for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: rediscluster
spec:
  workloadType: cache.crossplane.io/v1alpha1.RedisCluster
  parameters:
    - name: engineVersion
      type: string
  workloadSettings:
    engineVersion: "[fromParam(engineVersion)]"
```
The biggest problem with this scheme is that it mixes dynamic functions into static IaD (Infrastructure as Data), which complicates understanding and use.
After many discussions, the new scheme describes the positions where parameters should be injected in the form of JsonPath, which guarantees that AppConfig remains static from the user's point of view.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: example-server
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: Server
    spec:
      containers:
        - name: my-cool-server
          image:
            name: example/very-cool-server:1.0.0
          ports:
            - name: http
              value: 8080
          env:
            - name: CACHE_SECRET
              value: cache
  parameters:
    - name: instanceName
      required: true
      fieldPaths:
        - ".metadata.name"
    - name: cacheSecret
      required: true
      fieldPaths:
        - ".workload.spec.containers[0].env[0].value"
```
fieldPaths is an array; each element binds the parameter to a field in the corresponding Workload.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: my-app-deployment
spec:
  components:
    - componentName: example-server
      parameterValues:
        - name: cacheSecret
          value: new-cache
```
In AppConfig, parameterValues are used to override the parameters in Component.
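To make the mechanics concrete, here is a minimal sketch (in Python; illustrative only, not part of the spec or any real OAM implementation) of how a platform might resolve a fieldPaths entry and apply a parameterValue to the embedded workload. The simplified path parser only handles dotted keys and [i] indexes; real implementations would use a full JsonPath library.

```python
import re

def set_field(obj, path, value):
    """Write `value` at a simplified JsonPath such as
    '.workload.spec.containers[0].env[0].value'."""
    # Split the path into dict keys and list indexes.
    tokens = re.findall(r'([^.\[\]]+)|\[(\d+)\]', path)
    keys = [int(i) if i else k for k, i in tokens]
    target = obj
    for key in keys[:-1]:
        target = target[key]
    target[keys[-1]] = value

component = {
    "workload": {"spec": {"containers": [
        {"name": "my-cool-server",
         "env": [{"name": "CACHE_SECRET", "value": "cache"}]}]}},
    "parameters": [
        {"name": "cacheSecret", "required": True,
         "fieldPaths": [".workload.spec.containers[0].env[0].value"]},
    ],
}
parameter_values = [{"name": "cacheSecret", "value": "new-cache"}]

# For every parameterValue in AppConfig, find the matching Component
# parameter and write the value into each declared fieldPath.
for pv in parameter_values:
    param = next(p for p in component["parameters"] if p["name"] == pv["name"])
    for path in param["fieldPaths"]:
        set_field(component, path, pv["value"])

print(component["workload"]["spec"]["containers"][0]["env"][0]["value"])
# prints: new-cache
```

Because the override is a plain path-and-value substitution, AppConfig itself stays a static document: no function is evaluated inside it.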
Major Change 4: ComponentSchematic Renamed to Component
Originally the component concept was called ComponentSchematic, mainly because it mixed several syntactic styles: Core Workloads (containers) and extended Workloads (workloadSettings) were written differently. containers define concrete parameters, while workloadSettings read more like a schema (describing how parameters should be filled in). The v1alpha1 workloadSettings also carried type/description fields, making it even more ambiguous.
```yaml
# Old version, for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: rediscluster
spec:
  containers:
    ...
  workloadSettings:
    - name: engineVersion
      type: string
      description: engine version
      fromParam: engineVersion
    ...
```
In v1alpha2, the concept was renamed Component, which is explicitly an instance of a Workload. All syntax definitions come from the actual CRD referenced in WorkloadDefinition.
In the K8s implementation, WorkloadDefinition references the CRD, and Component.spec.workload holds an instance CR written against that CRD.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: example-server
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: Server
    spec:
      ...
```
Major Change 5: Scope Is Created by Its Own CR, No Longer by AppConfig
In v1alpha1, Scope was created by AppConfig; as the example shows, it is essentially also a CR and can be "translated" into a CR. However, since a Scope is positioned to accommodate Components from different AppConfigs, and a Scope is not itself an App, creating Scopes from AppConfig was never appropriate.
```yaml
# Old version, for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: my-vpc-network
spec:
  variables:
    - name: networkName
      value: "my-vpc"
  scopes:
    - name: network
      type: core.oam.dev/v1alpha1.Network
      properties:
        network-id: "[fromVariable(networkName)]"
        subnet-ids: "my-subnet1, my-subnet2"
```
In v1alpha2, every instance corresponds to a CR. To make the Scope concept clearer and to map more easily to different Scope types, Scope is now created directly from the CRD defined by ScopeDefinition. An example:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ScopeDefinition
metadata:
  name: networkscope.core.oam.dev
spec:
  allowComponentOverlap: true
  definitionRef:
    name: networkscope.core.oam.dev
```
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: NetworkScope
metadata:
  name: example-vpc-network
  labels:
    region: us-west
    environment: production
spec:
  networkId: cool-vpc-network
  subnetIds:
    - cool-subnetwork
    - cooler-subnetwork
    - coolest-subnetwork
  internetGatewayType: nat
```
Use scope references in AppConfig as follows:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: custom-single-app
  annotations:
    version: v1.0.0
    description: "Customized version of single-app"
spec:
  components:
    - componentName: frontend
      scopes:
        - scopeRef:
            apiVersion: core.oam.dev/v1alpha2
            kind: NetworkScope
            name: my-vpc-network
    - componentName: backend
      scopes:
        - scopeRef:
            apiVersion: core.oam.dev/v1alpha2
            kind: NetworkScope
            name: my-vpc-network
```
Major Change 6: Remove the Variables List and the [fromVariable()] Dynamic Function
The v1alpha1 version included variables so that common values could be defined once in AppConfig and referenced elsewhere, reducing redundancy; hence the variables list. In practice, however, the reduced redundancy did not noticeably lower the complexity of the OAM spec; on the contrary, adding a dynamic function significantly increased it.
On the other hand, capabilities like fromVariable can be achieved entirely with tools such as helm template or kustomize, which render a complete OAM spec before it is applied.
So the variables list and the related fromVariable are removed here, without losing any functionality.
```yaml
# Old version, for comparison only
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: my-app-deployment
spec:
  variables:
    - name: VAR_NAME
      value: SUPPLIED_VALUE
  components:
    - componentName: my-web-app-component
      instanceName: my-app-frontent
      parameterValues:
        - name: ANOTHER_PARAMETER
          value: "[fromVariable(VAR_NAME)]"
      traits:
        - name: ingress
          properties:
            DATA: "[fromVariable(VAR_NAME)]"
```
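As a rough illustration of the pre-rendering approach (a toy stand-in for what helm template or kustomize would do, not part of the spec), the old [fromVariable(NAME)] placeholders can be expanded by a simple text pass before the manifest is applied, leaving a fully static document:

```python
import re

def render_variables(manifest_text, variables):
    """Expand old-style "[fromVariable(NAME)]" placeholders into
    concrete values, mimicking a pre-rendering templating step."""
    def lookup(match):
        return variables[match.group(1)]
    return re.sub(r'\[fromVariable\((\w+)\)\]', lookup, manifest_text)

snippet = 'DATA: "[fromVariable(VAR_NAME)]"'
print(render_variables(snippet, {"VAR_NAME": "my-vpc"}))
# prints: DATA: "my-vpc"
```

Because the rendering happens before the spec reaches the platform, the OAM runtime never has to evaluate functions, which is exactly why removing fromVariable loses no functionality.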
Major Change 7: Replace the Original Six Core Workloads with ContainerizedWorkload
Because Workloads are now uniformly defined by WorkloadDefinition and Component has become an instance, the original six core Workloads effectively collapse into the same WorkloadDefinition: their field descriptions are exactly the same, and the only difference is their constraints on and demands for Traits. Therefore, the specs of the original six core Workloads were unified into a single Workload type named ContainerizedWorkload.
At the same time, the plan is to let developers express their demands on operational strategies by adding concepts such as policy, i.e. expressing in Component which Traits you want attached.
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: containerizedworkloads.core.oam.dev
spec:
  definitionRef:
    name: containerizedworkloads.core.oam.dev
```
An example of using ContainerizedWorkload:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: frontend
  annotations:
    version: v1.0.0
    description: "A simple webserver"
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    metadata:
      name: sample-workload
    spec:
      osType: linux
      containers:
        - name: web
          image: example/charybdis-single:latest@sha256:verytrustworthyhash
          resources:
            cpu:
              required: 1.0
            memory:
              required: 100MB
          env:
            - name: MESSAGE
              value: default
  parameters:
    - name: message
      description: The message to display in the web app.
      required: true
      type: string
      fieldPaths:
        - ".spec.containers[0].env[0].value"
```
Next Plan
- Parameter passing and dependencies between components at the application level (workflow);
- A policy scheme, so developers can conveniently demand Traits in Component;
- Adding the concept of versions to Component, giving OAM a way to handle application version releases.
FAQ
- What do we need to do to transform our original platform into an OAM model implementation?
For application management platforms originally built on K8s, there are two phases to transforming them into an OAM implementation:
- Implement the ApplicationConfiguration Controller (AppConfig Controller for short), which handles OAM's Component, WorkloadDefinition, TraitDefinition, ScopeDefinition and other CRDs. The AppConfig Controller pulls up the original platform's CRD Operators according to the description in AppConfig;
- Gradually split the original CRD Operators into Workloads and Traits following the idea of separation of concerns, and connect to and reuse more Workloads and Traits from the OAM community to enrich functionality in more scenarios.
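As a rough sketch of the first phase (hypothetical names and in-memory stand-ins, not the real controller code), an AppConfig Controller's reconcile pass boils down to: for each component reference in the AppConfig, fetch the Component, apply the embedded workload CR, then apply each trait CR:

```python
def reconcile_app_config(app_config, get_component, apply_cr):
    """Minimal sketch of an AppConfig Controller reconcile pass.
    `get_component` fetches a Component by name; `apply_cr`
    creates/updates a CR in the cluster (both are stand-ins here)."""
    for ref in app_config["spec"]["components"]:
        component = get_component(ref["componentName"])
        # The embedded workload CR is applied as-is; its validation is
        # left entirely to the CRD referenced by WorkloadDefinition.
        apply_cr(component["spec"]["workload"])
        # Each traits entry wraps a full CR under the "trait" key.
        for item in ref.get("traits", []):
            apply_cr(item["trait"])

# Toy usage with dicts standing in for cluster objects:
applied = []
components = {
    "example-server": {"spec": {"workload": {
        "apiVersion": "core.oam.dev/v1alpha2", "kind": "Server", "spec": {}}}},
}
app_config = {"spec": {"components": [{
    "componentName": "example-server",
    "traits": [{"trait": {"apiVersion": "core.oam.dev/v1alpha2",
                          "kind": "ManualScalerTrait",
                          "spec": {"replicaCount": 3}}}],
}]}}
reconcile_app_config(app_config, components.get, applied.append)
print([cr["kind"] for cr in applied])
# prints: ['Server', 'ManualScalerTrait']
```

In a real implementation, `apply_cr` would be a server-side apply against the K8s API, and the loop would run inside a watch-driven reconcile; the point is only that the controller dispatches to existing Operators rather than implementing workload logic itself.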
- What changes do existing CRD Operators need to make to access OAM?
Functionally, existing CRD Operators can access the OAM system smoothly, for example as independent extended Workloads. However, to let end users better appreciate the benefits of OAM's separation of concerns, we strongly recommend splitting a CRD Operator into different CRDs according to the different concerns of developers and operators: developer-focused CRDs access OAM as Workloads, and operations-focused CRDs access OAM as Traits.
The OAM specification and model have already solved many existing problems, but the journey is just beginning. OAM is a neutral open source project, and we welcome more people to participate in defining the future of cloud native application delivery.
Participation:
- Scan the code on DingTalk to join the Chinese discussion group for the OAM project
About the Author
Sun Jianbo (alias: Tianyuan) is an Alibaba Cloud technical expert and one of the main authors of the OAM specification, committed to promoting the standardization of cloud native applications. He is also involved in large-scale cloud native application delivery and application management at Alibaba. The team is inviting experts in application delivery, Serverless, and PaaS to join; feel free to contact jianbo.sjb AT alibaba-inc.com.
We're hiring!
Cloud Native Application Platform invites Kubernetes / Serverless / PaaS / Application Delivery Specialist (P6-P8) to join:
- Work experience: three+ years for P6-7, five+ years for P8, depending on actual ability;
- Work location: domestic (Beijing/Hangzhou/Shenzhen); overseas (San Francisco Bay Area/Seattle);
- Positions include: architect, technical specialist, full stack engineer, etc.
Submit your resume now; results within 2-3 weeks. Resume delivery: jianbo.sjb AT alibaba-inc.com.
"Alibaba Cloud Native Focus on the technology fields such as micro-services, Serverless, containers, Service Mesh, focus on cloud native popular technology trends, cloud native large-scale floor practices, and do the technology circle with the best understanding of cloud native developers."