Geek time cloud native training camp | latest HD

Keywords: Front-end Back-end Programmer


Cloud native introduction

With the rolling wave of cloud computing, the concept of Cloud Native came into being. Cloud Native is hugely popular; after all, it is already 2019, and if you still don't understand Cloud Native, you are genuinely behind the times.

What is cloud native?
Cloud native is a way of building and running applications: a set of technologies plus a methodology. "Cloud native" is a compound word, Cloud + Native.

Cloud means the application lives in the cloud rather than in a traditional data center. Native means the application is designed for the cloud from day one: it runs on the cloud in its optimal form and fully exploits the elasticity and distributed advantages of the cloud platform.

Why is cloud native becoming more and more important?
The future world must be cloud native
Future software will be born on the cloud and grow up on the cloud. This claim is not groundless: just look at what the major Internet companies are doing today.

As the core platform of cloud native, Kubernetes is attracting more and more programmers to understand, learn, and master it. I know quite a few people whose Kubernetes skills alone earned them a substantial raise when they changed jobs.

Actual development of cloud native Kubernetes

Flink on TKE semi-managed service: the ultimate cloud native Flink experience

The Flink on TKE semi-managed service provides one-stop Flink cluster deployment, logging, monitoring, storage, and other services. Users can run other online workloads in the same cluster as Flink, maximizing resource utilization and achieving unified resources, a unified technology stack, unified operations, and so on.

We build Flink Kubernetes compute clusters on the TKE container platform. Observing existing Flink jobs, we find that most are memory-bound while CPU utilization is generally low, so for machine selection we recommend memory-optimized instances.

apiVersion: flinkoperator.k8s.io/v1beta1
kind: FlinkCluster
metadata:
  name: flink-hello-world
spec:
  image:
    name: flink:1.11.3
  jobManager:
    resources:
      limits:
        memory: "1024Mi"
        cpu: "200m"
  taskManager:
    replicas: 2
    resources:
      limits:
        memory: "2048Mi"
        cpu: "200m"
  job:
    jarFile: /opt/flink/examples/streaming/WordCount.jar
    className: org.apache.flink.streaming.examples.wordcount.WordCount
    args: ["--input", "/opt/flink/README.txt"]
    parallelism: 2
  flinkProperties:
    taskmanager.numberOfTaskSlots: "2"

When a deployment is submitted through the declarative API above, the user's jar package has to be baked into the image in advance. As the platform provider, we obviously cannot build a Docker image for every user; some users do not even know how to use Docker. So we should hide Docker images from users entirely: users only upload jar packages and other resources. Flink Operator provides an initContainer option with which the uploaded resources can be downloaded automatically. For simplicity, however, we instead modify the Docker entrypoint startup script so that it first downloads the user's resources and then starts the relevant Flink processes. The user's resources are declared through environment variables, for example:

apiVersion: flinkoperator.k8s.io/v1beta1
kind: FlinkCluster
metadata:
  name: flink-hello-world
spec:
  image:
    name: flink:1.11.3
  envVars:
    - name: FLINK_USER_JAR
      value: hdfs://xxx/path/to/helloword.jar
    - name: FLINK_USER_DEPENDENCIES
      value: hdfs://xxx/path/to/config.json,hdfs://xxx/path/to/vocab.txt
  ...
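Concretely, the entrypoint wrapper only needs to resolve a download list from those environment variables before starting the usual Flink processes. A minimal sketch of that resolution step in Python (the real wrapper is a shell script inside the image; the `/opt/flink/usrlib` target directory and the helper names here are illustrative assumptions, not the platform's actual code):

```python
from urllib.parse import urlparse


def resources_to_fetch(env):
    """Collect user-supplied resource URIs from the environment.

    FLINK_USER_JAR holds a single URI; FLINK_USER_DEPENDENCIES holds a
    comma-separated list. Variable names follow the manifest above.
    """
    uris = []
    jar = env.get("FLINK_USER_JAR")
    if jar:
        uris.append(jar)
    deps = env.get("FLINK_USER_DEPENDENCIES", "")
    uris.extend(u for u in deps.split(",") if u)
    return uris


def local_name(uri):
    """Map a remote URI to a local download path (directory is an assumption)."""
    return "/opt/flink/usrlib/" + urlparse(uri).path.rsplit("/", 1)[-1]
```

The wrapper would download each URI to its `local_name` (e.g. via `hdfs dfs -get`) and then exec the original Flink entrypoint.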

Geek time cloud native training camp - Kubernetes architecture

Kubernetes originated from Borg inside Google and provides an application-oriented container cluster deployment and management system. Kubernetes aims to eliminate the burden of orchestrating physical/virtual compute, network, and storage infrastructure, and to let application operators and developers focus entirely on container-centric primitives and self-service operation. Kubernetes also provides a stable, compatible foundation (platform) for building customized workflows and higher-level automation.

Kubernetes has comprehensive cluster management capabilities, including multi-level security and access control, multi-tenant application support, transparent service registration and service discovery, built-in load balancing, failure detection and self-healing, rolling upgrades and online scaling, an extensible automatic resource scheduling mechanism, and multi-granularity resource quota management. Kubernetes also ships with complete management tooling covering development, deployment, testing, and operations monitoring.

// ImageService defines the public APIs for managing images.
service ImageService {
    // ListImages lists existing images.
    rpc ListImages(ListImagesRequest) returns (ListImagesResponse) {}

    // ImageStatus returns the status of the image. If the image is not
    // present, returns a response with ImageStatusResponse.Image set to
    // nil.
    rpc ImageStatus(ImageStatusRequest) returns (ImageStatusResponse) {}

    // PullImage pulls an image with authentication config.
    rpc PullImage(PullImageRequest) returns (PullImageResponse) {}

    // RemoveImage removes the image.
    // This call is idempotent, and must not return an error if the image has
    // already been removed.
    rpc RemoveImage(RemoveImageRequest) returns (RemoveImageResponse) {}

    // ImageFsInfo returns information of the filesystem that is used to store images.
    rpc ImageFsInfo(ImageFsInfoRequest) returns (ImageFsInfoResponse) {}
}
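The comments in this interface carry real contract requirements: ImageStatus must return a nil image, not an error, when the image is missing, and RemoveImage must be idempotent. A toy in-memory store (purely illustrative, not a real CRI client; all names here are invented) makes those semantics concrete:

```python
class FakeImageStore:
    """Toy in-memory store mirroring the CRI ImageService contract above."""

    def __init__(self):
        self._images = {}

    def pull_image(self, name, digest="sha256:stub"):
        # Stand-in for PullImage: record the image as present.
        self._images[name] = {"name": name, "digest": digest}

    def image_status(self, name):
        # Like ImageStatus: a missing image yields None (Image set to nil),
        # never an error.
        return self._images.get(name)

    def remove_image(self, name):
        # Like RemoveImage: idempotent, so removing an absent image is a no-op.
        self._images.pop(name, None)

    def list_images(self):
        # Like ListImages: return all known images.
        return list(self._images.values())
```

Calling `remove_image` twice in a row succeeds both times, which is exactly what the `RemoveImage` comment demands of a conforming runtime.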

Geek time cloud native training camp - extending Kubernetes clusters

Extending the API with custom resources
Custom resources are extensions of the Kubernetes API. Every resource in Kubernetes is a collection of API objects; for example, the specs defined in YAML files are definitions of resource objects in Kubernetes. All custom resources can be operated on with kubectl just like the built-in Kubernetes resources.

TPR
If we want to create a TPR named cron-tab.stable.example.com, the yaml file is defined as follows:

apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: cron-tab.stable.example.com
description: "A specification of a Pod to run on a cron style schedule"
versions:
- name: v1

Another example, a TPR that stores the dtabs used by namerd:

apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: d-tab.l5d.io
description: stores dtabs used by namerd
versions:
- name: v1alpha1

CRD

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # The name must match the format <plural>.<group>
  name: crontabs.stable.example.com
spec:
  # Group name used by the REST API: /apis/<group>/<version>
  group: stable.example.com
  # Version used by the REST API: /apis/<group>/<version>
  version: v1
  # Either Namespaced or Cluster
  scope: Namespaced
  names:
    # Plural name used in the URL: /apis/<group>/<version>/<plural>
    plural: crontabs
    # Singular name used on the CLI
    singular: crontab
    # CamelCased singular type, used in manifests
    kind: CronTab
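Once this CRD is registered, objects of the new CronTab kind can be created with kubectl like any built-in resource. A hypothetical instance is shown below (the cronSpec and image fields are illustrative; the CRD above does not define a schema for them):

```yaml
apiVersion: stable.example.com/v1
kind: CronTab
metadata:
  name: my-new-cron-object
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image
```

Applying this manifest stores the object in the API server, after which `kubectl get crontabs` (or the `crontab` singular) lists it like any other resource.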

Posted by Superian on Fri, 03 Dec 2021 14:27:07 -0800