Eugeny Shtoltc - IT Cloud

In this book, the Chief Architect of the Cloud Native Competence Architecture Department at Sberbank shares his knowledge and experience with the reader on the creation and transition to the cloud ecosystem, as well as the creation and adaptation of applications for it. In the book, the author tries to lead the reader along the path, bypassing mistakes and difficulties. To do this, practical applications are demonstrated and explained so that the reader can use them as instructions for educational and work purposes. The reader can be both developers of different levels and ecosystem specialists who wish not to lose the relevance of their skills in an already changed world.


Nginx-65899c769f-zs6g5 1/1 Running 0 55m

As we can see, immediately after the POD became unavailable (its deletion process had begun), a replacement started to be created, and soon the cluster fully restored its structure. After we have finished our experiments, let's remove the virtual machines together with the cluster:

esschtolts@cloudshell:~ (essch)$ gcloud container clusters delete mycluster --zone europe-north1-a;

The following clusters will be deleted.

- [mycluster] in [europe-north1-a]

Do you want to continue (Y/n)? Y

Deleting cluster mycluster … done.

Deleted [https://container.googleapis.com/v1/projects/essch/zones/europe-north1-a/clusters/mycluster].

esschtolts@cloudshell:~ (essch)$ gcloud container clusters list --filter=name=mycluster

To sum up: with just two commands, run and expose, we created a cluster and a load balancer, and we can now go to the balancer's IP address and see the NGINX welcome page in the browser. The cluster also recovers on its own: we emulated a POD failure by deleting it, and it was created again.
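For reference, a minimal sketch of what those two commands look like, following the naming used in the listings below (the exact flags used earlier in the chapter may differ):

# with an older kubectl (pre-1.18) run creates a Deployment rather than a bare POD
kubectl run Nginx --image=nginx
# expose the Deployment through an external load balancer on port 80
kubectl expose deployment Nginx --type=LoadBalancer --port=80
# the external IP appears here once the balancer is provisioned
kubectl get service Nginx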

Cluster Reproducibility

Let's take another look at the situation from the previous chapter, in which we created a cluster, deleted a replica, and it recovered. The point is that we do not manage the cluster directly with commands; rather, the commands create descriptions of the required cluster configuration and place them in the distributed storage, after which the state of the nodes is maintained in accordance with that description. We can also get and edit these descriptions, or write them ourselves and then upload them to the distributed storage. This allows us to save the state on disk in the form of YAML files and restore it back, as is often done when moving from a production server to a test one. In addition, we get the opportunity to customize the state more flexibly, since we are no longer limited to the commands.

esschtolts@cloudshell:~ (essch)$ kubectl get deployment/Nginx --output=yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-12-16T10:23:26Z
  generation: 1
  labels:
    run: Nginx
  name: Nginx
  namespace: default
  resourceVersion: "1612985"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/Nginx
  uid: 9fb3ad6a-011c-11e9-bfaa-42010aa60088
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: Nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: Nginx
    spec:
      containers:
      - image: Nginx
        imagePullPolicy: Always
        name: Nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-12-16T10:23:26Z
    lastUpdateTime: 2018-12-16T10:23:26Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-12-16T10:23:26Z
    lastUpdateTime: 2018-12-16T10:23:28Z
    message: ReplicaSet "Nginx-64f497f8fd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Much of this is superfluous for us, so I will delete the unnecessary parts: when creating the Deployment we specified only the name and the image, and the rest was filled in with default values:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: Nginx
  name: Nginx
spec:
  selector:
    matchLabels:
      run: Nginx
  template:
    metadata:
      labels:
        run: Nginx
    spec:
      containers:
      - image: Nginx
        name: Nginx
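Such a trimmed description can be kept on disk and loaded back into the cluster. A minimal sketch, assuming the file is saved under the hypothetical name Nginx-deployment.yaml:

# save the current description to disk
kubectl get deployment/Nginx --output=yaml > Nginx-deployment.yaml
# ... edit it, copy it to another cluster, put it under version control ...
# create or update the Deployment from the file
kubectl apply -f Nginx-deployment.yaml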

You can also create a template — here, a virtual machine instance template that runs a container, and a managed instance group based on it:

gcloud services enable compute.googleapis.com --project=${PROJECT}

gcloud beta compute instance-templates create-with-container ${TEMPLATE} \
  --machine-type=custom-1-4096 \
  --image-family=cos-stable \
  --image-project=cos-cloud \
  --container-image=gcr.io/kuar-demo/kuard-amd64:1 \
  --container-restart-policy=always \
  --preemptible \
  --region=${REGION} \
  --project=${PROJECT}

gcloud compute instance-groups managed create ${TEMPLATE} \
  --base-instance-name=${TEMPLATE} \
  --template=${TEMPLATE} \
  --size=${CLONES} \
  --region=${REGION} \
  --project=${PROJECT}
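The variables above are assumed to be set beforehand; a minimal sketch (only the project name appears elsewhere in the chapter, the other values are hypothetical):

PROJECT=essch            # GCP project id
REGION=europe-north1     # region of the europe-north1-a zone used earlier
TEMPLATE=kuard-template  # hypothetical name for the template and the instance group
CLONES=3                 # hypothetical number of instances in the managed group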

High Service Availability

To ensure high availability, traffic must be redirected to a spare instance when the application crashes. It is also often important that the load is distributed evenly, since a single instance of the application cannot handle all the traffic. For this, a cluster is created; as an example, let's take a more complex image so that there are more nuances to examine:

esschtolts@cloudshell:~/bitrix (essch)$ cat deploymnet.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: Nginxlamp
spec:
  selector:
    matchLabels:
      app: lamp
  replicas: 1
  template:
    metadata:
      labels:
        app: lamp
    spec:
      containers:
      - name: lamp
        image: mattrayner/lamp:latest-1604-php5
        ports:
        - containerPort: 80

esschtolts@cloudshell:~/bitrix (essch)$ cat loadbalancer.yaml

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  ports:
  - name: front
    port: 80
    targetPort: 80
  selector:
    app: lamp
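Both descriptions are loaded into the cluster before the listings below; that step might look like this (a sketch using the file names shown above):

kubectl apply -f deploymnet.yaml     # creates the Nginxlamp Deployment
kubectl apply -f loadbalancer.yaml   # creates the frontend LoadBalancer Service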

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods

NAME READY STATUS RESTARTS AGE

Nginxlamp-7fb6fdd47b-jttl8 2/2 Running 0 3m

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get svc

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

frontend LoadBalancer 10.55.242.137 35.228.73.217 80:32701/TCP,8080:32568/TCP 4m

kubernetes ClusterIP 10.55.240.1 <none> 443/TCP 48m
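The Deployment above runs a single replica, so to actually spread the load the replica count has to be raised, either in the replicas field of the description or on the fly. A minimal sketch of the latter (the replica count here is arbitrary):

kubectl scale deployment Nginxlamp --replicas=3
# all the PODs carry the app=lamp label, so the frontend Service balances across them
kubectl get pods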

Now we can create identical copies of our deployment, for example one for Production and one for Development, but balancing will not work as expected: the balancer finds PODs by label, and the PODs of both the production and the development copies match that label. Placing the copies in different projects would keep them apart, and for many tasks that is a big plus, but not in the case of developer and production environments. Namespaces are used to delimit the scope. We already use them implicitly: when we list PODs without specifying a scope, the default one is shown, but the PODs of the system scope are not included:

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get namespace

NAME STATUS AGE

default Active 5h

kube-public Active 5h

kube-system Active

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --namespace=kube-system

NAME READY STATUS RESTARTS AGE

event-exporter-v0.2.3-85644fcdf-tdt7h 2/2 Running 0 5h

fluentd-gcp-scaler-697b966945-bkqrm 1/1 Running 0 5h

fluentd-gcp-v3.1.0-xgtw9 2/2 Running 0 5h

heapster-v1.6.0-beta.1-5649d6ddc6-p549d 3/3 Running 0 5h

kube-dns-548976df6c-8lvp6 4/4 Running 0 5h

kube-dns-548976df6c-mcctq 4/4 Running 0 5h

kube-dns-autoscaler-67c97c87fb-zzl9w 1/1 Running 0 5h

kube-proxy-gke-bitrix-default-pool-38fa77e9-0wdx 1/1 Running 0 5h

kube-proxy-gke-bitrix-default-pool-38fa77e9-wvrf 1/1 Running 0 5h

l7-default-backend-5bc54cfb57-6qk4l 1/1 Running 0 5h

metrics-server-v0.2.1-fd596d746-g452c 2/2 Running 0 5h

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --namespace=default

NAME READY STATUS RESTARTS AGE

Nginxlamp-b5dcb7546-g8j5r 1/1 Running 0 4h

Let's create a scope:

esschtolts@cloudshell:~/bitrix (essch)$ cat namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development

esschtolts@cloudshell:~ (essch)$ kubectl create -f namespace.yaml

namespace "development" created

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get namespace --show-labels

NAME STATUS AGE LABELS

default Active 5h <none>

development Active 16m name=development

kube-public Active 5h <none>

kube-system Active 5h <none>

The essence of working with scopes is that we set a scope for specific clusters and can then execute commands against it, so that they apply only to that scope. At the same time, apart from flags on commands such as kubectl get pods, the scope does not appear in the configuration files of controllers (Deployment, DaemonSet and others) or of services (LoadBalancer, NodePort and others), which allows them to be transferred seamlessly between scopes; this is especially relevant for the development pipeline: developer server, test server and production server. Scopes are set in the cluster context file $HOME/.kube/config, which can be inspected with the kubectl config view command. So, in my cluster's context entry, no scope entry appears (the default is default):

- context:
    cluster: gke_essch_europe-north1-a_bitrix
    user: gke_essch_europe-north1-a_bitrix
  name: gke_essch_europe-north1-a_bitrix

You can see something like this:

esschtolts@cloudshell:~/bitrix (essch)$ kubectl config view -o jsonpath='{.contexts[4]}'

{gke_essch_europe-north1-a_bitrix {gke_essch_europe-north1-a_bitrix gke_essch_europe-north1-a_bitrix []}}

Let's create a new context for this user and cluster:

esschtolts@cloudshell:~ (essch)$ kubectl config set-context dev \
> --namespace=development \
> --cluster=gke_essch_europe-north1-a_bitrix \
> --user=gke_essch_europe-north1-a_bitrix
Context "dev" modified.


As a result, the following was added:

- context:
    cluster: gke_essch_europe-north1-a_bitrix
    namespace: development
    user: gke_essch_europe-north1-a_bitrix
  name: dev

Now it remains to switch to it:

esschtolts@cloudshell:~ (essch)$ kubectl config use-context dev

Switched to context "dev".

esschtolts@cloudshell:~ (essch)$ kubectl config current-context

dev

esschtolts@cloudshell:~ (essch)$ kubectl get pods

No resources found.

esschtolts@cloudshell:~ (essch)$ kubectl get pods --namespace=default

NAME READY STATUS RESTARTS AGE

Nginxlamp-b5dcb7546-krkm2 1/1 Running 0 10h

You could add a namespace to the existing context:

esschtolts@cloudshell:~/bitrix (essch)$ kubectl config set-context $(kubectl config current-context) --namespace=development

Context "gke_essch_europe-north1-a_bitrix" modified.

Now let's create the cluster in the dev scope (it is now the default, so --namespace=development can be omitted); it thereby drops out of the field of visibility of the default scope (default is no longer the default for our context, so it has to be specified explicitly with --namespace=default):
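A minimal sketch of what this might look like, reusing the deploymnet.yaml from above (illustrative commands under the dev context just configured):

kubectl apply -f deploymnet.yaml       # lands in the development namespace, the context default
kubectl get pods                       # the new Nginxlamp POD is visible without extra flags
kubectl get pods --namespace=default   # the old copy now has to be addressed explicitly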
