Евгений Штольц - Облачная экосистема
Extensibility is achieved with the external data source, whose program can be, for example, a Bash script:
data "external" "python3" {
program = ["python3"]
}
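A minimal end-to-end sketch of the same mechanism (the script name version.sh, the data source name python_version and the returned key python are illustrative assumptions, not part of the original project): the program of an external data source has to print a JSON object to stdout, and its fields then become available through the result attribute:
# version.sh lies next to main.tf and is made executable with chmod +x;
# its contents (a Bash script returning JSON) could be:
#   #!/bin/bash
#   echo "{\"python\": \"$(python3 --version 2>&1)\"}"
data "external" "python_version" {
  program = ["bash", "${path.module}/version.sh"]
}
output "python_version" {
  value = data.external.python_version.result["python"]
}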
Creating a cluster of machines with Terraform
Creating a cluster with Terraform is covered in the section "Creating infrastructure in GCP"; here we will pay more attention to the cluster itself rather than to the tools for creating it. Through the GCE administrator panel I will create a project named node-cluster (the current project is shown in the header of the interface). I downloaded the key for Kubernetes via IAM & Admin -> Service accounts -> Create service account, choosing the Owner role during creation, and put it into the project directory under the name kubernetes_key.JSON:
essh@kubernetes-master:~/node-cluster$ cp ~/Downloads/node-cluster-243923-bbec410e0a83.JSON ./kubernetes_key.JSON
I downloaded Terraform:
essh@kubernetes-master:~/node-cluster$ wget https://releases.hashicorp.com/terraform/0.12.2/terraform_0.12.2_linux_amd64.zip >/dev/null 2>/dev/null
essh@kubernetes-master:~/node-cluster$ unzip terraform_0.12.2_linux_amd64.zip && rm -f terraform_0.12.2_linux_amd64.zip
Archive: terraform_0.12.2_linux_amd64.zip
inflating: terraform
essh@kubernetes-master:~/node-cluster$ ./terraform version
Terraform v0.12.2
I added the GCE provider and started downloading its "drivers" (the provider plugin):
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
credentials = "${file("kubernetes_key.json")}"
project = "node-cluster"
region = "us-central1"
}
essh@kubernetes-master:~/node-cluster$ ./terraform init
Initializing the backend…
Initializing provider plugins…
- Checking for available provider plugins…
- Downloading plugin for provider "google" (terraform-providers/google) 2.8.0…
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "…" constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.google: version = "~> 2.8"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
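Following the hint from terraform init, the provider version can be pinned right away so that a future major release does not silently break the configuration. A minimal sketch of the same provider block with the suggested constraint:
provider "google" {
  version     = "~> 2.8"
  credentials = "${file("kubernetes_key.json")}"
  project     = "node-cluster"
  region      = "us-central1"
}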
I will add a virtual machine:
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
credentials = "${file("kubernetes_key.json")}"
project = "node-cluster-243923"
region = "europe-north1"
}
resource "google_compute_instance" "cluster" {
name = "cluster"
zone = "europe-north1-a"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = "default"
access_config {}
}
}
essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# google_compute_instance.cluster will be created
+ resource "google_compute_instance" "cluster" {
+ can_ip_forward = false
+ cpu_platform = (known after apply)
+ deletion_protection = false
+ guest_accelerator = (known after apply)
+ id = (known after apply)
+ instance_id = (known after apply)
+ label_fingerprint = (known after apply)
+ machine_type = "f1-micro"
+ metadata_fingerprint = (known after apply)
+ name = "cluster"
+ project = (known after apply)
+ self_link = (known after apply)
+ tags_fingerprint = (known after apply)
+ zone = "europe-north1-a"
+ boot_disk {
+ auto_delete = true
+ device_name = (known after apply)
+ disk_encryption_key_sha256 = (known after apply)
+ source = (known after apply)
+ initialize_params {
+ image = "debian-cloud/debian-9"
+ size = (known after apply)
+ type = (known after apply)
}
}
+ network_interface {
+ address = (known after apply)
+ name = (known after apply)
+ network = "default"
+ network_ip = (known after apply)
+ subnetwork = (known after apply)
+ subnetwork_project = (known after apply)
+ access_config {
+ assigned_nat_ip = (known after apply)
+ nat_ip = (known after apply)
+ network_tier = (known after apply)
}
}
+ scheduling {
+ automatic_restart = (known after apply)
+ on_host_maintenance = (known after apply)
+ preemptible = (known after apply)
+ node_affinities {
+ key = (known after apply)
+ operator = (known after apply)
+ values = (known after apply)
}
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
google_compute_instance.cluster: Creating…
google_compute_instance.cluster: Still creating… [10s elapsed]
google_compute_instance.cluster: Creation complete after 11s [id=cluster]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
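The empty access_config {} block asks GCE for an ephemeral external address, and its value is only known after apply. To see it without opening the console, an output can be added to the configuration; a small sketch (the output name is my own, this block is not in the original listing):
output "cluster_nat_ip" {
  # external (NAT) address assigned to the first access_config of the first interface
  value = google_compute_instance.cluster.network_interface[0].access_config[0].nat_ip
}
After terraform apply (or terraform output) the assigned address is printed in the terminal.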
I will add a public static IP address and an SSH key to the node:
essh@kubernetes-master:~/node-cluster$ ssh-keygen -f node-cluster
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in node-cluster.
Your public key has been saved in node-cluster.pub.
The key fingerprint is:
SHA256:vUhDe7FOzykE5BSLOIhE7Xt9o+AwgM4ZKOCW4nsLG58 essh@kubernetes-master
The key's randomart image is:
+---[RSA 2048]----+
|.o. +. |
|o. o . = . |
|* + o . = . |
|=* . . . + o |
|B + . . S * |
| = + o o X + . |
| o. = . + = + |
| .=… . . |
| ..E. |
+----[SHA256]-----+
essh@kubernetes-master:~/node-cluster$ ls node-cluster.pub
node-cluster.pub
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
credentials = "${file("kubernetes_key.json")}"
project = "node-cluster-243923"
region = "europe-north1"
}
resource "google_compute_address" "static-ip-address" {
name = "static-ip-address"
}
resource "google_compute_instance" "cluster" {
name = "cluster"
zone = "europe-north1-a"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
metadata = {
ssh-keys = "essh:${file("./node-cluster.pub")}"
}
network_interface {
network = "default"
access_config {
nat_ip = "${google_compute_address.static-ip-address.address}"
}
}
}
essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
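The reserved address can also be exposed as an output so that it does not have to be looked up in the console before connecting; a minimal sketch (the output name is an assumption, the block is not part of the original configuration):
output "static_ip" {
  value = "${google_compute_address.static-ip-address.address}"
}
terraform output static_ip then prints the same address that is used for the SSH check below.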
Let's check the SSH connection to the server:
essh@kubernetes-master:~/node-cluster$ ssh -i ./node-cluster essh@35.228.82.222
The authenticity of host '35.228.82.222 (35.228.82.222)' can't be established.
ECDSA key fingerprint is SHA256:o7ykujZp46IF+eu7SaIwXOlRRApiTY1YtXQzsGwO18A.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '35.228.82.222' (ECDSA) to the list of known hosts.
Linux cluster 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u2 (2019-05-13) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
essh@cluster:~$ ls
essh@cluster:~$ exit
logout
Connection to 35.228.82.222 closed.
Let's install the packages:
essh@kubernetes-master:~/node-cluster$ curl https://sdk.cloud.google.com | bash
essh@kubernetes-master:~/node-cluster$ exec -l $SHELL
essh@kubernetes-master:~/node-cluster$ gcloud init
Select the project:
You are logged in as: [esschtolts@gmail.com].
Pick cloud project to use:
[1] agile-aleph-203917
[2] node-cluster-243923
[3] essch
[4] Create a new project
Please enter numeric choice or text value (must exactly match list
item):
Please enter a value between 1 and 4, or a value present in the list: 2
Your current project has been set to: [node-cluster-243923].
Select the zone:
[50] europe-north1-a
Did not print [12] options.
Too many options [62]. Enter "list" at prompt to print choices fully.
Please enter numeric choice or text value (must exactly match list
item):
Please enter a value between 1 and 62, or a value present in the list: 50
essh@kubernetes-master:~/node-cluster$ PROJECT_ID="node-cluster-243923"
essh@kubernetes-master:~/node-cluster$ echo $PROJECT_ID
node-cluster-243923
essh@kubernetes-master:~/node-cluster$ export GOOGLE_APPLICATION_CREDENTIALS=$HOME/node-cluster/kubernetes_key.json
essh@kubernetes-master:~/node-cluster$ sudo docker-machine create --driver google --google-project $PROJECT_ID vm01
sudo GOOGLE_APPLICATION_CREDENTIALS=$HOME/node-cluster/kubernetes_key.json docker-machine create --driver google --google-project $PROJECT_ID vm01
// https://docs.docker.com/machine/drivers/gce/
// https://github.com/docker/machine/issues/4722
essh@kubernetes-master:~/node-cluster$ gcloud config list
[compute]
region = europe-north1
zone = europe-north1-a
[core]
account = esschtolts@gmail.com
disable_usage_reporting = False
project = node-cluster-243923
Your active configuration is: [default]
Let's add copying a file and executing a script (a possible shape of this resource is sketched after the listing):
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
credentials = "${file("kubernetes_key.json")}"
project = "node-cluster-243923"
region = "europe-north1"
}
resource "google_compute_address" "static-ip-address" {
name = "static-ip-address"
}
resource "google_compute_instance" "cluster" {
name = "cluster"
zone = "europe-north1-a"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
metadata = {
ssh-keys = "essh:${file("./node-cluster.pub")}"
}
network_interface {
network = "default"
access_config {
nat_ip = "${google_compute_address.static-ip-address.address}"
}
}
}
resource "null_resource" "cluster" {
triggers = {
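A rough sketch of how this null_resource as a whole might look (the file name test.sh and the remote-exec commands are illustrative assumptions, not the author's original continuation): it copies a file and runs a script on the created instance through the file and remote-exec provisioners over SSH, reusing the key and the static address defined above, while the trigger on the instance id re-runs the provisioners whenever the machine is recreated:
resource "null_resource" "cluster" {
  # re-run the provisioners if the instance is recreated
  triggers = {
    cluster_instance_id = "${google_compute_instance.cluster.instance_id}"
  }
  # SSH connection parameters shared by the provisioners below
  connection {
    type        = "ssh"
    host        = "${google_compute_address.static-ip-address.address}"
    user        = "essh"
    private_key = "${file("./node-cluster")}"
  }
  # copy a local file to the instance
  provisioner "file" {
    source      = "test.sh"
    destination = "/home/essh/test.sh"
  }
  # execute the copied script on the instance
  provisioner "remote-exec" {
    inline = [
      "chmod +x /home/essh/test.sh",
      "/home/essh/test.sh"
    ]
  }
}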