myapp:

      build: .

      depends_on:

      - mysql

      image: myimages

      links:

      - mysql:db

      - nginx:nginx

      … Here we see the whole picture: the containers are connected by a single network, in which the application can reach mysql and NGINX by the db and nginx host names respectively, and the myapp container will be created only after the mysql database has been brought up, even if that takes some time.
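      For clarity, here is a minimal docker-compose.yml sketch of how all three services could be declared together (the mysql image tag, credentials and published port are assumptions for illustration, not values from the book):

      version: "2"
      services:
        mysql:
          image: mysql:5.7                # assumed tag
          environment:
            MYSQL_ROOT_PASSWORD: example  # assumed credentials
        nginx:
          image: nginx
          ports:
            - "80:80"                     # assumed published port
        myapp:
          build: .
          image: myimages
          depends_on:
            - mysql                       # myapp is created only after mysql is up
          links:
            - mysql:db                    # mysql is reachable inside myapp as host "db"
            - nginx:nginx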

      Service Discovery

      With the growth of a cluster, the probability of node failures increases, and manually detecting what has happened becomes more complicated; Service Discovery systems are designed to automate the detection of newly appeared services and of their disappearance. But for the cluster to be able to detect its state, given that the system is decentralized, the nodes must be able to exchange messages with each other and elect a leader; examples are Consul, etcd and ZooKeeper. We will consider Consul based on the following features: the whole program is a single file, it is extremely easy to use and configure, it has a high-level HTTP interface (ZooKeeper does not have one; it is believed that over time third-party applications implementing it should appear), and it is written in a language undemanding of machine resources (Consul in Go, ZooKeeper in Java); we neglect its weaker support in other systems, such as, for example, ClickHouse (which supports ZooKeeper by default).

      Let's check the distribution of information between the nodes using a distributed key-value store: if we add a record on one node, it should spread to the other nodes, and the store should not have a hard-coded master node. Since Consul consists of a single executable file, download it from the official website at https://www.consul.io/downloads.html on each node:

      wget https://releases.hashicorp.com/consul/1.3.0/consul_1.3.0_linux_amd64.zip -O consul.zip

      unzip consul.zip

      rm -f consul.zip
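      As a small follow-up (my assumption, not a step from the book), the unpacked binary can be put on the PATH and checked:

      chmod +x consul
      sudo mv consul /usr/local/bin/
      consul version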

      Now you need to start one node, for now, as the master with consul -server -ui , and the others as slaves joining it, also with consul -server -ui (a sketch of this startup sequence is given below). After that, we will stop the Consul that is running in master mode and launch it again as an equal; as a result, the Consul agents will re-elect a temporary leader and, in case of its failure, will elect a new one again. Let's check the operation of our cluster with consul members :

      consul members;
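      A minimal sketch of the startup sequence described above (the node IP addresses, data directory and -bootstrap-expect value are assumptions; each agent is started on its own node):

      # first node: bootstraps the cluster once three servers have joined, and serves the UI
      consul agent -server -ui -bootstrap-expect=3 -data-dir=/tmp/consul -bind=192.168.1.11 -client=0.0.0.0

      # remaining nodes: join the first one as equal servers
      consul agent -server -ui -bootstrap-expect=3 -data-dir=/tmp/consul -bind=192.168.1.12 -join=192.168.1.11
      consul agent -server -ui -bootstrap-expect=3 -data-dir=/tmp/consul -bind=192.168.1.13 -join=192.168.1.11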

      Now let's check the distribution of information in our storage:

      curl -X PUT -d 'value1' .....:8500/v1/kv/group1/key1

      curl -s .....:8500/v1/kv/group1/key1

      curl -s .....:8500/v1/kv/group1/key1

      curl -s .....:8500/v1/kv/group1/key1

      Let's set up service monitoring; for more details, see the documentation at https://www.consul.io/docs/agent/options.html#telemetry and, for an example with statsd_exporter and Prometheus, https://medium.com/southbridge/monitoring-consul-with-statsd-exporter-and-prometheus-bad8bee3961b
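      A minimal sketch of enabling StatsD telemetry in the agent configuration (the address 127.0.0.1:9125 is an assumption, matching a locally running statsd_exporter):

      echo '{ "telemetry": { "statsd_address": "127.0.0.1:9125" } }' > telemetry.json
      consul agent -dev -config-file=telemetry.json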

      In order not to configure it ourselves, we will use the container and the development mode with the already configured IP address 172.17.0.2:

      essh@kubernetes-master:~$ mkdir consul && cd $_

      essh@kubernetes-master:~/consul$ docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul

      Unable to find image 'consul:latest' locally

      latest: Pulling from library/consul

      e7c96db7181b: Pull complete

      3404d2df15cb: Pull complete

      1b2797650ac6: Pull complete

      42eaf145982e: Pull complete

      cef844389e8c: Pull complete

      bc7449359c58: Pull complete

      Digest: sha256:94cdbd83f24ec406da2b5d300a112c14cf1091bed8d6abd49609e6fe3c23f181

      Status: Downloaded newer image for consul:latest

      c6079f82500a41f878d2c513cf37d45ecadd3fc40998cd35020c604eb5f934a1

      essh@kubernetes-master:~/consul$ docker inspect dev-consul | jq '.[] | .NetworkSettings.Networks.bridge.IPAddress'

      "172.17.0.4"

      essh@kubernetes-master:~/consul$ docker run -d --name=consul_follower_1 -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev -join=172.17.0.4

      8ec88680bc632bef93eb9607612ed7f7f539de9f305c22a7d5a23b9ddf8c4b3e

      essh@kubernetes-master:~/consul$ docker run -d --name=consul_follower_2 -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev -join=172.17.0.4

      babd31d7c5640845003a221d725ce0a1ff83f9827f839781372b1fcc629009cb

      essh@kubernetes-master:~/consul$ docker exec -t dev-consul consul members

      Node Address Status Type Build Protocol DC Segment

      53cd8748f031 172.17.0.5:8301 left server 1.6.1 2 dc1 <all>

      8ec88680bc63 172.17.0.5:8301 alive server 1.6.1 2 dc1 <all>

      babd31d7c564 172.17.0.6:8301 alive server 1.6.1 2 dc1 <all>
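      To see which of these servers currently holds the Raft leadership, one more command can be run (a hedged suggestion, not part of the original session):

      docker exec -t dev-consul consul operator raft list-peers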

      essh@kubernetes-master:~/consul$ curl -X PUT -d 'value1' 172.17.0.4:8500/v1/kv/group1/key1

      true

      essh@kubernetes-master:~/consul$ curl $(docker inspect dev-consul | jq -r '.[] | .NetworkSettings.Networks.bridge.IPAddress'):8500/v1/kv/group1/key1

      [

      {

      "LockIndex": 0,

      "Key": "group1 / key1",

      "Flags": 0,

      "Value": "dmFsdWUx",

      "CreateIndex": 277,

      "ModifyIndex": 277

      }

      ]
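      The Value field is base64-encoded; it can be decoded locally, or the raw value can be requested with the ?raw parameter of the Consul KV HTTP API:

      echo dmFsdWUx | base64 --decode                   # prints: value1
      curl -s 172.17.0.4:8500/v1/kv/group1/key1?raw     # returns the stored value without the JSON wrapper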

      essh@kubernetes-master:~/consul$ firefox $(docker inspect dev-consul | jq -r '.[] | .NetworkSettings.Networks.bridge.IPAddress'):8500/ui

      Along with determining the location of the containers, it is necessary to provide authorization; key-value stores are used for this.

      dockerd -H fd:// --cluster-store=consul://192.168.1.6:8500 --cluster-advertise=eth0:2376

      * --cluster-store – where the daemon can get data about keys (the address of the key-value store);

      * --cluster-advertise – where the daemon's own address and port will be saved (advertised) in the store.

      docker network create --driver overlay --subnet 192.168.10.0/24 demo-network

      docker network ls
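      Once the overlay network exists, containers started on any daemon connected to the same store can be attached to it and will resolve each other by name (a hedged usage sketch; the image and container names are assumptions):

      docker run -d --name=web --network=demo-network nginx
      docker run --rm --network=demo-network alpine ping -c 3 web
      docker network inspect demo-network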

      Simple clustering

      In this article, we will not consider how to create a cluster manually, but will use two tools, Docker Swarm and Google Kubernetes, the most popular and most common solutions. Docker Swarm is simpler: it is part of Docker and therefore has the largest audience (subjectively), while Kubernetes provides much more capability, more tool integrations (for example, distributed storage for volumes), support in popular clouds, and scales more easily to large projects (greater abstraction, a component approach).

      Let's consider what a cluster is and what benefits it will bring us. A cluster is a distributed structure that abstracts independent servers into one logical entity and automates work on:

      * moving containers (creating new ones) to other servers in the event of a server crash;

      * even distribution of containers across servers for fault tolerance;

      * creating a container on a server with suitable free resources;

      * redeploying a container in case of its failure;

      * a unified management interface from a single point;

      * performing operations taking into account the parameters of the servers, for example, the size and type of disk and