essh@kubernetes-master:~/node-cluster$ kubectl get deployments -o wide
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
terraform-nodejs   3         3         3            3           25m   node-js      nginx:1.17.0   app=NodeJS
essh@kubernetes-master:~/node-cluster$ kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP          NODE                                     NOMINATED NODE
terraform-nodejs-6bd565dc6c-8768b   1/1     Running   0          4m45s   10.4.3.15   gke-node-ks-node-ks-pool-07115c5b-bw15   <none>
terraform-nodejs-6bd565dc6c-hr5vg   1/1     Running   0          4m42s   10.4.5.13   gke-node-ks-node-ks-pool-27e2e52c-9q5b   <none>
terraform-nodejs-6bd565dc6c-mm7lh   1/1     Running   0          4m43s   10.4.2.6    gke-node-ks-default-pool-2dc50760-757p   <none>
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ docker ps | grep node-js_terraform
152e3c0ed940   719cd2e3ed04   "/bin/bash -c 'ech…"   8 minutes ago   Up 8 minutes
k8s_node-js_terraform-nodejs-6bd565dc6c-8768b_default_7a87ae4a-9379-11e9-a78e-42010a9a0114_0
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ docker exec -it 152e3c0ed940 cat /usr/share/nginx/html/index.html
terraform-nodejs-6bd565dc6c-8768b
esschtolts@gke-node-ks-node-ks-pool-27e2e52c-9q5b ~ $ docker exec -it c282135be446 cat /usr/share/nginx/html/index.html
terraform-nodejs-6bd565dc6c-hr5vg
esschtolts@gke-node-ks-default-pool-2dc50760-757p ~ $ docker exec -it 8d1cf9ef44e6 cat /usr/share/nginx/html/index.html
terraform-nodejs-6bd565dc6c-mm7lh
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.4.2.6
terraform-nodejs-6bd565dc6c-mm7lh
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.4.5.13
terraform-nodejs-6bd565dc6c-hr5vg
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.4.3.15
terraform-nodejs-6bd565dc6c-8768b
The balancer distributes load between the pods that match its selector: the labels in the pods' metadata must coincide with the selector specified in the spec section of the service description. All nodes are connected to one common network, so you can connect to any node (I did this over SSH from the GCP web interface, in the list of Compute Engine virtual machines). You can address a pod both by its IP address, from a container or from the node host, and by the host name of the terraform-nodejs service (curl terraform-nodejs:80 from inside a container), which the cluster's internal DNS resolves from the service name; a sketch of such a service definition is given after the curl checks below. The external IP address (EXTERNAL-IP) can be viewed both with kubectl on the service and in the web interface: GCP -> Kubernetes Engine -> Services:
essh@kubernetes-master:~/node-cluster$ kubectl get service
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
kubernetes         ClusterIP      10.7.240.1     <none>           443/TCP        6h58m
terraform-nodejs   LoadBalancer   10.7.246.234   35.197.220.103   80:32085/TCP   5m27s
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234
terraform-nodejs-6bd565dc6c-mm7lh
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234
terraform-nodejs-6bd565dc6c-mm7lh
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234
terraform-nodejs-6bd565dc6c-hr5vg
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234
terraform-nodejs-6bd565dc6c-hr5vg
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234
terraform-nodejs-6bd565dc6c-8768b
esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234
terraform-nodejs-6bd565dc6c-mm7lh
essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103
terraform-nodejs-6bd565dc6c-mm7lh
essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103
terraform-nodejs-6bd565dc6c-mm7lh
essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103
terraform-nodejs-6bd565dc6c-8768b
essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103
terraform-nodejs-6bd565dc6c-hr5vg
essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103
terraform-nodejs-6bd565dc6c-8768b
essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103
terraform-nodejs-6bd565dc6c-mm7lh
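For reference, the service checked above could be described in Terraform roughly as in the sketch below. This is a minimal sketch, not the exact code of the project: the resource name nodejs and the label app = NodeJS are taken from the state listing and the selector shown earlier, the remaining fields are assumptions. The point it illustrates is how the selector in the spec section is matched against the pods' labels:
resource "kubernetes_service" "nodejs" {
  metadata {
    name = "terraform-nodejs"        # the name resolved by the cluster's internal DNS
  }
  spec {
    selector = {
      app = "NodeJS"                 # must match the labels of the Deployment's pods
    }
    port {
      port        = 80               # port of the balancer
      target_port = 80               # port of the nginx container inside the pod
    }
    type = "LoadBalancer"            # requests the external IP seen as EXTERNAL-IP
  }
}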
Now let's move on to implementing the NodeJS server:
essh@kubernetes-master:~/node-cluster$ sudo ./terraform destroy
essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
essh@kubernetes-master:~/node-cluster$ sudo docker run -it --rm node:12 which node
/usr/local/bin/node
sudo docker run -it --rm -p 8222:80 node:12 /bin/bash -c 'cd /usr/src/ && git clone https://github.com/fhinkel/nodejs-hello-world.git && /usr/local/bin/node /usr/src/nodejs-hello-world/index.js'
firefox http://localhost:8222
Let's replace the container block in our configuration with:
container {
  image   = "node:12"
  name    = "node-js"
  command = ["/bin/bash"]
  args = [
    "-c",
    "cd /usr/src/ && git clone https://github.com/fhinkel/nodejs-hello-world.git && /usr/local/bin/node /usr/src/nodejs-hello-world/index.js"
  ]
}
If you comment out the Kubernetes module while its resources are still recorded in the Terraform state, Terraform has nowhere to take the provider configuration from, and all that remains is to remove the stale entries from the state:
essh@kubernetes-master:~/node-cluster$ ./terraform apply
Error: Provider configuration not present
essh@kubernetes-master:~/node-cluster$ ./terraform state list
data.google_client_config.default
module.Kubernetes.google_container_cluster.node-ks
module.Kubernetes.google_container_node_pool.node-ks-pool
module.nodejs.kubernetes_deployment.nodejs
module.nodejs.kubernetes_service.nodejs
essh@kubernetes-master:~/node-cluster$ ./terraform state rm module.nodejs.kubernetes_deployment.nodejs
Removed module.nodejs.kubernetes_deployment.nodejs
Successfully removed 1 resource instance(s).
essh@kubernetes-master:~/node-cluster$ ./terraform state rm module.nodejs.kubernetes_service.nodejs
Removed module.nodejs.kubernetes_service.nodejs
Successfully removed 1 resource instance(s).
essh@kubernetes-master:~/node-cluster$ ./terraform apply
module.Kubernetes.google_container_cluster.node-ks: Refreshing state... [id=node-ks]
module.Kubernetes.google_container_node_pool.node-ks-pool: Refreshing state... [id=europe-west2-a/node-ks/node-ks-pool]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Terraform Cluster Reliability and Automation
For a general overview of automation, see https://codelabs.developers.google.com/codelabs/cloud-builder-gke-continuous-deploy/index.html#0. Here we will dwell on it in more detail. If we now run ./terraform destroy and then try to recreate the entire infrastructure from scratch, we will get errors. They occur because the order in which the services are created is not specified, and Terraform by default sends requests to the API in 10 parallel threads (this can be changed with the -parallelism switch during apply or destroy). As a result, Terraform tries to create the Kubernetes resources (Deployment and Service) on a node pool that does not exist yet, and the same happens when it creates a Service that proxies a Deployment that has not yet been created. Telling Terraform to call the API in a single thread, ./terraform apply -parallelism=1, reduces possible provider-side limits on the frequency of API calls, but does not solve the problem of the missing creation order. We will not comment out dependent blocks and gradually uncomment them while running ./terraform apply, nor will we build the system piece by piece by targeting specific blocks with ./terraform apply -target=module.nodejs.kubernetes_deployment.nodejs. Instead, we will express the dependencies in the code through the initialization of variables: the first one is already defined as the external var.endpoint, and the second we will create locally:
locals {
  app = kubernetes_deployment.nodejs.metadata.0.labels.app
}
Now we can add the dependencies depends_on = [var.endpoint] and depends_on = [kubernetes_deployment.nodejs] to the code.
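To show where these dependencies sit, here is an abbreviated sketch. The resource bodies are omitted and the expressions simply repeat the ones quoted above, so treat it as an illustration of placement rather than the project's full code:
resource "kubernetes_deployment" "nodejs" {
  # do not create the Deployment until the cluster endpoint variable is initialized
  depends_on = [var.endpoint]
  # ... metadata and spec unchanged ...
}
resource "kubernetes_service" "nodejs" {
  # do not create the Service until the Deployment it proxies exists
  depends_on = [kubernetes_deployment.nodejs]
  # ... metadata and spec unchanged ...
}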
A service-unavailability error may also appear: Error: Get https://35.197.228.3/api/v1...: dial tcp 35.197.228.3:443: connect: connection refused. It means that the connection timeout, which is 6 minutes (360 seconds) by default, has been exceeded; in this case you can simply try again.
Now let's move on to solving the problem of the container's reliability: at the moment its main process is started from the command shell. The first thing we will do is separate the creation of the application from the launch of the container. To do this, the entire process of building the service needs to be moved into the creation of an image, which can be tested and from which a service container can then be created. So let's create the image:
essh@kubernetes-master:~/node-cluster$ cat app/server.js
const http = require('http');
const server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end(`Nodejs_cluster is working! My host is ${process.env.HOSTNAME}`);
});
server.listen(80);
essh@kubernetes-master:~/node-cluster$ cat Dockerfile
FROM node:12
WORKDIR /usr/src/
ADD ./app /usr/src/
RUN npm install
EXPOSE 3000
ENTRYPOINT ["node", "server.js"]
essh@kubernetes-master:~/node-cluster$ sudo docker image build -t nodejs_cluster .
Sending build context to Docker daemon 257.4MB
Step 1/6 : FROM node:12
---> b074182f4154
Step 2/6 : WORKDIR /usr/src/
---> Using cache
---> 06666b54afba
Step 3/6 : ADD ./app /usr/src/
---> Using cache
---> 13fa01953b4a
Step 4/6 : RUN npm install
---> Using cache
---> dd074632659c
Step 5/6 : EXPOSE 3000
---> Using cache
---> ba3b7745b8e3
Step 6/6 : ENTRYPOINT ["node", "server.js"]
---> Using cache
---> a957fa7a1efa
Successfully built a957fa7a1efa
Successfully tagged nodejs_cluster:latest
essh@kubernetes-master:~/node-cluster$ sudo docker images | grep nodejs_cluster
nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB
Now let's push our image not to Docker Hub, but to the GCP registry, because this immediately gives us a private repository to which our services automatically have access:
essh@kubernetes-master:~/node-cluster$ IMAGE_ID="nodejs_cluster"
essh@kubernetes-master:~/node-cluster$ sudo docker tag $IMAGE_ID:latest gcr.io/$PROJECT_ID/$IMAGE_ID:latest
essh@kubernetes-master:~/node-cluster$ sudo docker images | grep nodejs_cluster
nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB
gcr.io/node-cluster-243923/nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB
essh@kubernetes-master:~/node-cluster$ gcloud auth configure-docker
gcloud credential helpers already registered correctly.
essh@kubernetes-master:~/node-cluster$ docker push gcr.io/$PROJECT_ID/$IMAGE_ID:latest
The push refers to repository [gcr.io/node-cluster-243923/nodejs_cluster]
194f3d074f36: Pushed
b91e71cc9778: Pushed
640fdb25c9d7: Layer already exists
b0b300677afe: Layer already exists
5667af297e60: Layer already exists
84d0c4b192e8: Layer already exists
a637c551a0da: Layer already exists
2c8d31157b81: Layer already exists
7b76d801397d: Layer already exists
f32868cde90b: Layer already exists
0db06dff9d9a: Layer already exists
latest: digest: sha256:912938003a93c53b7c8f806cded3f9bffae7b5553b9350c75791ff7acd1dad0b size: 2629
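Once the image is in the registry, the Deployment's container can be pointed at it instead of cloning and starting the application at container start-up. The following is only a sketch under assumptions (the registry path is taken from the push output above, and the field layout repeats the earlier container block), not a verbatim listing from the project:
container {
  # use the prebuilt, tested image from the private GCP registry
  image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"
  name  = "node-js"
  # no command or args: the image's ENTRYPOINT already starts server.js
}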