see, immediately after the pod became unavailable (the process of deleting it had begun), its replacement started to be created; soon the cluster fully restored its structure. Having finished our experiments, let's remove the virtual machines together with the cluster:

      esschtolts@cloudshell:~ (essch)$ gcloud container clusters delete mycluster --zone europe-north1-a;

      The following clusters will be deleted.

      - [mycluster] in [europe-north1-a]

      Do you want to continue (Y/n)? Y

      Deleting cluster mycluster … done.

      Deleted [https://container.googleapis.com/v1/projects/essch/zones/europe-north1-a/clusters/mycluster].

      esschtolts@cloudshell:~ (essch)$ gcloud container clusters list --filter=name=mycluster

      In total, we created a cluster and a load balancer with just the run and expose commands; now we can go to the balancer's IP address and see the NGINX welcome page in the browser. The cluster also recovers on its own: we emulated a pod failure by deleting it, and it was recreated automatically.
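      For reference, those two commands looked approximately like this (a sketch; the exact flags were given earlier in the book, and in kubectl releases after 1.17 run no longer creates a Deployment):

      kubectl run nginx --image=nginx --replicas=3
      kubectl expose deployment nginx --port=80 --type=LoadBalancer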

      Cluster Reproducibility

      Let's take another look at the situation from the previous chapter, in which we created a cluster, deleted a replica, and it recovered. The point is that we do not manage the cluster with commands directly; instead, the commands create descriptions of the required cluster configuration and place them in distributed storage, after which the state of the nodes is maintained in accordance with those descriptions. We can also fetch and edit these descriptions, or write them ourselves and then upload them to the distributed storage. This lets us save the state to disk as YAML files and restore it back, as is often done when moving from a production server to a test one. In addition, we gain more flexibility in customizing the state, since we are no longer limited to what the commands expose.
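      For example, a minimal round trip might look like this (the file name nginx-deployment.yaml is our own choice):

      # save the current description of the deployment to a file
      kubectl get deployment/nginx --output=yaml > nginx-deployment.yaml
      # ... edit the file if needed ...
      # restore the state described in the file
      kubectl apply -f nginx-deployment.yaml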

      esschtolts@cloudshell:~ (essch)$ kubectl get deployment/nginx --output=yaml

      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        annotations:
          deployment.kubernetes.io/revision: "1"
        creationTimestamp: 2018-12-16T10:23:26Z
        generation: 1
        labels:
          run: nginx
        name: nginx
        namespace: default
        resourceVersion: "1612985"
        selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx
        uid: 9fb3ad6a-011c-11e9-bfaa-42010aa60088
      spec:
        progressDeadlineSeconds: 600
        replicas: 1
        revisionHistoryLimit: 10
        selector:
          matchLabels:
            run: nginx
        strategy:
          rollingUpdate:
            maxSurge: 1
            maxUnavailable: 1
          type: RollingUpdate
        template:
          metadata:
            creationTimestamp: null
            labels:
              run: nginx
          spec:
            containers:
            - image: nginx
              imagePullPolicy: Always
              name: nginx
              resources: {}
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
            dnsPolicy: ClusterFirst
            restartPolicy: Always
            schedulerName: default-scheduler
            securityContext: {}
            terminationGracePeriodSeconds: 30
      status:
        availableReplicas: 1
        conditions:
        - lastTransitionTime: 2018-12-16T10:23:26Z
          lastUpdateTime: 2018-12-16T10:23:26Z
          message: Deployment has minimum availability.
          reason: MinimumReplicasAvailable
          status: "True"
          type: Available
        - lastTransitionTime: 2018-12-16T10:23:26Z
          lastUpdateTime: 2018-12-16T10:23:28Z
          message: ReplicaSet "nginx-64f497f8fd" has successfully progressed.
          reason: NewReplicaSetAvailable
          status: "True"
          type: Progressing
        observedGeneration: 1
        readyReplicas: 1
        replicas: 1
        updatedReplicas: 1

      Most of this is superfluous for us, so I will delete the unnecessary parts: when creating the deployment we specified only the name and the image, and the rest was filled in with default values:

      apiVersion: extensions/v1beta1
      kind: Deployment
      metadata:
        labels:
          run: nginx
        name: nginx
      spec:
        selector:
          matchLabels:
            run: nginx
        template:
          metadata:
            labels:
              run: nginx
          spec:
            containers:
            - image: nginx
              name: nginx
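      Trimming by hand can also be avoided: kubectl of that generation had an --export flag (deprecated in 1.14 and removed in 1.18) that strips most of the cluster-populated fields automatically. A sketch:

      kubectl get deployment/nginx --output=yaml --export > nginx-deployment.yaml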

      You can also create a template for virtual machine instances running a container, and a managed group of instances from it:
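      The commands below assume several shell variables; the values here are illustrative placeholders (only the project name comes from the examples above):

      export PROJECT=essch         # GCP project used in this chapter
      export TEMPLATE=kuard-demo   # hypothetical name for the template and the instance group
      export REGION=europe-north1  # region matching the zone used earlier
      export CLONES=3              # hypothetical number of instances in the group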

      gcloud services enable compute.googleapis.com --project=${PROJECT}

      gcloud beta compute instance-templates create-with-container ${TEMPLATE} \
      --machine-type=custom-1-4096 \
      --image-family=cos-stable \
      --image-project=cos-cloud \
      --container-image=gcr.io/kuar-demo/kuard-amd64:1 \
      --container-restart-policy=always \
      --preemptible \
      --region=${REGION} \
      --project=${PROJECT}

      gcloud compute instance-groups managed create ${TEMPLATE} \
      --base-instance-name=${TEMPLATE} \
      --template=${TEMPLATE} \
      --size=${CLONES} \
      --region=${REGION} \
      --project=${PROJECT}
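      To check that the group has brought up its instances, it can be listed (same variables as above):

      gcloud compute instance-groups managed list-instances ${TEMPLATE} \
      --region=${REGION} \
      --project=${PROJECT}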

      High service availability

      To ensure high availability, traffic must be redirected to a standby instance when the application crashes. It is also often important to distribute the load evenly, since a single instance of the application cannot handle all of the traffic. To do this, a cluster is created. As an example, let's take a more complex image so that we can examine a larger number of nuances:

      esschtolts@cloudshell:~/bitrix (essch)$ cat deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: