Eugeny Shtoltc - IT Cloud

  • Title: IT Cloud
  • Author: Eugeny Shtoltc
  • Year: 2021
  • ISBN: no data
IT Cloud: summary
In this book, the Chief Architect of the Cloud Native Competence Architecture Department at Sberbank shares his knowledge and experience of creating and moving to the cloud ecosystem, and of creating and adapting applications for it. The author tries to lead the reader along the path, steering around mistakes and difficulties. Practical applications are demonstrated and explained so that the reader can use them as instructions for study and work. The book is intended both for developers of all levels and for ecosystem specialists who do not want their skills to lose relevance in a world that has already changed.

The root main.tf for the node-cluster-243923 project:

provider "google" {

credentials = file ("./ kubernetes_key.json")

project = "node-cluster-243923"

region = "europe-west2"

}

module "kubernetes" {

source = "./Kubernetes"

}

data "google_client_config" "default" {}

module "Nginx" {

source = "./nodejs"

image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"

endpoint = module.kubernetes.endpoint

access_token = data.google_client_config.default.access_token

cluster_ca_certificate = module.kubernetes.cluster_ca_certificate

}
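Note what the Nginx module receives: the cluster endpoint, an OAuth access token, and the cluster CA certificate. These are exactly the three values the Kubernetes provider needs to reach the new cluster, so inside ./nodejs they are presumably wired into a provider and a Deployment. The module internals are not shown in this excerpt; the following is a minimal sketch under that assumption (the variable names match the arguments above, everything else is illustrative):

# Hypothetical internals of ./nodejs -- a sketch, not the book's actual module.
variable "image" {}
variable "endpoint" {}
variable "access_token" {}
variable "cluster_ca_certificate" {}

provider "kubernetes" {
  # Talk to the GKE cluster created by the kubernetes module.
  host                   = "https://${var.endpoint}"
  token                  = var.access_token
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
  load_config_file       = false # pre-2.0 provider option: ignore the local kubeconfig
}

resource "kubernetes_deployment" "terraform_nodejs" {
  metadata {
    name = "terraform-nodejs" # matches the pod names seen later in kubectl get pods
  }
  spec {
    replicas = 3 # three terraform-nodejs pods appear in the listings below
    selector {
      match_labels = {
        app = "terraform-nodejs"
      }
    }
    template {
      metadata {
        labels = {
          app = "terraform-nodejs"
        }
      }
      spec {
        container {
          name  = "nodejs"
          image = var.image
        }
      }
    }
  }
}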

essh@kubernetes-master:~/node-cluster$ gcloud config list project
[core]
project = node-cluster-243923
Your active configuration is: [default]
essh@kubernetes-master:~/node-cluster$ gcloud config set project node-cluster-243923
Updated property [core/project].
essh@kubernetes-master:~/node-cluster$ gcloud compute instances list
NAME ZONE INTERNAL_IP EXTERNAL_IP STATUS
gke-node-ks-default-pool-2e5073d4-csmg europe-north1-a 10.166.0.2 35.228.96.97 RUNNING
gke-node-ks-node-ks-pool-ccbaf5c6-4xgc europe-north1-a 10.166.15.233 35.228.82.222 RUNNING
gke-node-ks-default-pool-72a6d4a3-ldzg europe-north1-b 10.166.15.231 35.228.143.7 RUNNING
gke-node-ks-node-ks-pool-9ee6a401-ngfn europe-north1-b 10.166.15.234 35.228.129.224 RUNNING
gke-node-ks-default-pool-d370036c-kbg6 europe-north1-c 10.166.15.232 35.228.117.98 RUNNING
gke-node-ks-node-ks-pool-d7b09e63-q8r2 europe-north1-c 10.166.15.235 35.228.85.157 RUNNING

Switch gcloud to the new project and check that it is empty:

essh@kubernetes-master:~/node-cluster$ gcloud config set project node-cluster-prod-244519
Updated property [core/project].
essh@kubernetes-master:~/node-cluster$ gcloud config list project
[core]
project = node-cluster-prod-244519
Your active configuration is: [default]
essh@kubernetes-master:~/node-cluster$ gcloud compute instances list
Listed 0 items.

The previous time, for node-cluster-243923, we created a service account, on whose behalf we created the cluster. To work with multiple accounts from Terraform, we create a service account for the new project through IAM and Administration -> Service Accounts. We will need two separate folders in order to run Terraform separately and keep the connections, which use different authorization keys, apart. If we put both providers with different keys into one configuration, the connection for the first project succeeds, but when Terraform proceeds to create a cluster for the next project it is rejected, because the key issued for the first project is not valid for the second. There is another possibility: to activate the account as an organization account (this requires a website and an email address, both verified by Google), and then projects can be created from code without using the admin panel. Now for the dev environment:

essh@kubernetes-master:~/node-cluster$ ./terraform destroy
essh@kubernetes-master:~/node-cluster$ mkdir dev
essh@kubernetes-master:~/node-cluster$ cd dev/
essh@kubernetes-master:~/node-cluster/dev$ gcloud config set project node-cluster-243923
Updated property [core/project].
essh@kubernetes-master:~/node-cluster/dev$ gcloud config list project
[core]
project = node-cluster-243923
Your active configuration is: [default]
essh@kubernetes-master:~/node-cluster/dev$ cp ../kubernetes_key.json ../main.tf .
essh@kubernetes-master:~/node-cluster/dev$ cat main.tf

provider "google" {

alias = "dev"

credentials = file ("./ kubernetes_key.json")

project = "node-cluster-243923"

region = "europe-west2"

}

module "kubernetes_dev" {

source = "../Kubernetes"

node_pull = false

providers = {

google = google.dev

}

}

data "google_client_config" "default" {}

module "Nginx" {

source = "../nodejs"

providers = {

google = google.dev

}

image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"

endpoint = module.kubernetes_dev.endpoint

access_token = data.google_client_config.default.access_token

cluster_ca_certificate = module.kubernetes_dev.cluster_ca_certificate

}
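The providers block maps the child module's default google provider onto the google.dev alias, so every resource in ../Kubernetes is created with the dev key and project. The ../Kubernetes module itself is not listed in the excerpt; a minimal sketch of what it would need to declare for the configuration above to work might be (variable and resource internals are assumptions):

# Hypothetical fragment of ../Kubernetes/main.tf -- illustrative, not the book's code.
variable "node_pull" {
  # Presumably toggles the extra node pool used in this chapter.
  default = true
}

resource "google_container_cluster" "node_ks" {
  name               = "node-ks"
  location           = "europe-north1"
  initial_node_count = 1
}

output "endpoint" {
  value = google_container_cluster.node_ks.endpoint
}

output "cluster_ca_certificate" {
  value = google_container_cluster.node_ks.master_auth.0.cluster_ca_certificate
}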

essh@kubernetes-master:~/node-cluster/dev$ ../terraform init
essh@kubernetes-master:~/node-cluster/dev$ ../terraform apply
essh@kubernetes-master:~/node-cluster/dev$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-node-ks-default-pool-71afadb8-4t39 europe-north1-a n1-standard-1 10.166.0.60 35.228.96.97 RUNNING
gke-node-ks-node-ks-pool-134dada1-3cdf europe-north1-a n1-standard-1 10.166.0.61 35.228.117.98 RUNNING
gke-node-ks-node-ks-pool-134dada1-c476 europe-north1-a n1-standard-1 10.166.15.194 35.228.82.222 RUNNING
essh@kubernetes-master:~/node-cluster/dev$ gcloud container clusters get-credentials node-ks
Fetching cluster endpoint and auth data.
kubeconfig entry generated for node-ks.
essh@kubernetes-master:~/node-cluster/dev$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
terraform-nodejs-6fd8498cb5-29dzx 1/1 Running 0 2m57s 10.12.3.2 gke-node-ks-node-ks-pool-134dada1-c476 <none>
terraform-nodejs-6fd8498cb5-jcbj6 0/1 Pending 0 2m58s <none> <none> <none>
terraform-nodejs-6fd8498cb5-lvfjf 1/1 Running 0 2m58s 10.12.1.3 gke-node-ks-node-ks-pool-134dada1-3cdf <none>

As you can see, the pods were distributed across the pool of nodes, without landing on the node with Kubernetes itself, since it had no free capacity. It is important to note that the number of nodes in the pool was increased automatically, and only the specified limit prevented a third node from being created in the pool. If we set remove_default_node_pool to true, the Kubernetes system pods and our pods run in the same pool. By resource requests, Kubernetes takes up a little more than one core and our pod takes half of one, so the remaining pods were not created, but we saved on resources:

essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-node-ks-node-ks-pool-495b75fa-08q2 europe-north1-a n1-standard-1 10.166.0.57 35.228.117.98 RUNNING
gke-node-ks-node-ks-pool-495b75fa-wsf5 europe-north1-a n1-standard-1 10.166.0.59 35.228.96.97 RUNNING
essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud container clusters get-credentials node-ks
Fetching cluster endpoint and auth data.
kubeconfig entry generated for node-ks.
essh@kubernetes-master:~/node-cluster/Kubernetes$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
terraform-nodejs-6fd8498cb5-97svs 1/1 Running 0 14m 10.12.2.2 gke-node-ks-node-ks-pool-495b75fa-wsf5 <none>
terraform-nodejs-6fd8498cb5-d9zkr 0/1 Pending 0 14m <none> <none> <none>
terraform-nodejs-6fd8498cb5-phk8x 0/1 Pending 0 14m <none> <none> <none>
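The remove_default_node_pool mentioned above is a standard argument of the google_container_cluster resource. Extending the previous sketch, the cluster and a separately managed autoscaling pool could look like this (the machine type and the node limit mirror the listings above; names are illustrative):

# Sketch: drop the default pool and run everything in our own autoscaled pool.
resource "google_container_cluster" "node_ks" {
  name                     = "node-ks"
  location                 = "europe-north1"
  remove_default_node_pool = true
  initial_node_count       = 1 # still required, even though the pool is deleted right away
}

resource "google_container_node_pool" "node_ks_pool" {
  name               = "node-ks-pool"
  cluster            = google_container_cluster.node_ks.name
  location           = "europe-north1"
  initial_node_count = 1
  autoscaling {
    min_node_count = 1
    max_node_count = 2 # the limit that kept a third node from being created
  }
  node_config {
    machine_type = "n1-standard-1"
  }
}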

After creating the service account, add its key and check it:

essh@kubernetes-master:~/node-cluster/dev$ gcloud auth login
essh@kubernetes-master:~/node-cluster/dev$ gcloud projects create node-cluster-prod3
Create in progress for [https://cloudresourcemanager.googleapis.com/v1/projects/node-cluster-prod3].
Waiting for [operations/cp.7153345484959140898] to finish … done.

How to run gcloud under a service account is described at https://medium.com/@pnatraj/how-to-run-gcloud-command-line-using-a-service-account-f39043d515b9

essh@kubernetes-master:~/node-cluster$ gcloud auth application-default login
essh@kubernetes-master:~/node-cluster$ cp ~/Downloads/node-cluster-prod-244519-6fd863dd4d38.json ./kubernetes_prod.json
essh@kubernetes-master:~/node-cluster$ echo "kubernetes_prod.json" >> .gitignore
essh@kubernetes-master:~/node-cluster$ gcloud iam service-accounts list
NAME EMAIL DISABLED
Compute Engine default service account 1008874319751-compute@developer.gserviceaccount.com False
terraform-prod terraform-prod@node-cluster-prod-244519.iam.gserviceaccount.com False
essh@kubernetes-master:~/node-cluster$ gcloud projects list | grep node-cluster
node-cluster-243923 node-cluster 26345118671
node-cluster-prod-244519 node-cluster-prod 1008874319751
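The terraform-prod account in the listing was created by hand through the IAM console. The same could be done from Terraform itself; a sketch, assuming the google provider is already authorized in the prod project (the account_id matches the listing, the rest is illustrative):

# Sketch: create the prod service account and a key for it from Terraform.
resource "google_service_account" "terraform_prod" {
  project      = "node-cluster-prod-244519"
  account_id   = "terraform-prod"
  display_name = "terraform-prod"
}

resource "google_service_account_key" "terraform_prod" {
  service_account_id = google_service_account.terraform_prod.name
}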

Let's create a prod environment:

essh@kubernetes-master:~/node-cluster$ mkdir prod
essh@kubernetes-master:~/node-cluster$ cd prod/
essh@kubernetes-master:~/node-cluster/prod$ cp ../main.tf ../kubernetes_prod_key.json .
essh@kubernetes-master:~/node-cluster/prod$ gcloud config set project node-cluster-prod-244519
Updated property [core/project].
essh@kubernetes-master:~/node-cluster/prod$ gcloud config list project
[core]
project = node-cluster-prod-244519
Your active configuration is: [default]
essh@kubernetes-master:~/node-cluster/prod$ cat main.tf

provider "google" {

alias = "prod"

credentials = file ("./ kubernetes_prod_key.json")

project = "node-cluster-prod-244519"

region = "us-west2"

}

module "kubernetes_prod" {

source = "../Kubernetes"

providers = {

google = google.prod

}

}

data "google_client_config" "default" {}

module "Nginx" {

source = "../nodejs"

providers = {

google = google.prod

}

image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"

endpoint = module.kubernetes_prod.endpoint

access_token = data.google_client_config.default.access_token

cluster_ca_certificate = module.kubernetes_prod.cluster_ca_certificate

}

essh@kubernetes-master:~/node-cluster/prod$ ../terraform init
essh@kubernetes-master:~/node-cluster/prod$ ../terraform apply

End of the introductory fragment.
