Eugeny Shtoltc - IT Cloud

In this book, the Chief Architect of the Cloud Native Competence Architecture Department at Sberbank shares his knowledge and experience of creating and moving to the cloud ecosystem, as well as creating and adapting applications for it. The author tries to lead the reader along a path that bypasses mistakes and difficulties: practical examples are demonstrated and explained so that the reader can use them as instructions for study and work. The intended readers are developers of different levels and ecosystem specialists who want to keep their skills relevant in an already changed world.


* Docker images to view existing images;

* Docker rmi to remove the image.

But as Docker's popularity grew, the number of commands grew as well, and it was decided to group them. So, instead of the simple "docker run", the "docker container" group appeared, which has 25 subcommands in Docker version 19: cleanup, stopping and restoring, logs, and various kinds of connections to the container. The same fate befell the work with images. But the old commands have remained so far for compatibility and convenience, because in most cases only a basic set is required. Let's stop at it:
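As a brief illustration, the grouped commands duplicate the legacy ones (the container and image names here are arbitrary):

docker ps -a # legacy form
docker container ls -a # grouped form, same output
docker rmi name_image # legacy form
docker image rm name_image # grouped form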

Starting a container:

docker run -d --name name_container ubuntu bash

Remove a running container:

docker rm -f name_container

Output of all containers:

docker ps -a

Output of running containers:

docker ps

Output of containers with consumed resources:

docker stats

Displaying processes in a container:

docker top name_container

Connect to the container through the sh shell (there is no BASH in alpine containers):

docker exec -it name_container sh

Cleaning the system from unused images:

docker image prune

Remove hanging images:

docker rmi $(docker images -f "dangling=true" -q)

Show image:

docker images

Create image in dir folder with Dockerfile:

docker build -t docker_user/name_image dir

Delete image:

docker rmi docker_user/name_image

Connect to Docker hub:

docker login

Push the latest revision of the image to Docker Hub (the latest tag is added automatically if not specified otherwise):

docker push docker_user/name_image:latest
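To push a specific version instead of latest, the image can be tagged explicitly first (a sketch; the repository name and tag are illustrative):

docker tag docker_user/name_image:latest docker_user/name_image:1.0
docker push docker_user/name_image:1.0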

A broader list is available at https://niqdev.github.io/devops/docker/.

Working with Docker Machine can be described in the following steps:

Creating a VirtualBox virtual machine:

docker-machine create name_virtual_system

Creating a generic virtual machine:

docker-machine create -d generic name_virtual_system

List of virtual machines:

docker-machine ls

Stop the virtual machine:

docker-machine stop name_virtual_system

Start a stopped virtual machine:

docker-machine start name_virtual_system

Delete virtual machine:

docker-machine rm name_virtual_system

Connect to virtual machine:

eval "$ (docker-machine env name_virtual_system)"

Disconnect Docker from VM:

eval $(docker-machine env -u)

Login via SSH:

docker-machine ssh name_virtual_system

Quit the virtual machine:

exit

Run the sleep 10 command in the virtual machine:

docker-machine ssh name_virtual_system 'sleep 10'

Running commands in BASH environment:

docker-machine ssh dev 'bash -c "sleep 10 && echo 1"'

Copy the dir folder to the virtual machine:

docker-machine scp -r /dir name_virtual_system:/dir

Make a request to the containers of the virtual machine:

curl $(docker-machine ip name_virtual_system):9000

Forward port 9005 of the host machine to port 9007 of the virtual machine:

docker-machine ssh name_virtual_system -f -N -L 9005:0.0.0.0:9007

Master initialization:

docker swarm init
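docker swarm init prints a join token; a minimal sketch of adding a worker node and starting a replicated service (the token, address and service name here are illustrative):

docker swarm join --token SWMTKN-1-… 192.168.99.100:2377
docker service create --name web --replicas 3 -p 80:80 nginx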

Running multiple containers with the same EXPOSE:

essh@kubernetes-master:~/mongo-rs$ docker run --name redis -p 6379 -d redis

f3916da35b6ba5cd393c21d5305002b78c32b089a6cc01e3e2425930c9310cba

essh@kubernetes-master:~/mongo-rs$ docker ps | grep redis

f3916da35b6b redis "docker-entrypoint.s…" 8 seconds ago Up 6 seconds 0.0.0.0:32769->6379/tcp redis

essh@kubernetes-master:~/mongo-rs$ docker port reids

Error: No such container: reids

essh@kubernetes-master:~/mongo-rs$ docker port redis

6379/tcp -> 0.0.0.0:32769

essh@kubernetes-master:~/mongo-rs$ docker port redis 6379

0.0.0.0:32769
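If another container exposing the same port is started in the same way, Docker simply maps it to a different free host port (a sketch; the container name is illustrative):

docker run --name redis2 -p 6379 -d redis
docker port redis2 6379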

The first build approach is to copy all the files and then install the dependencies. As a result, when any file changes, all packages will be reinstalled:

COPY ./ /src/app

WORKDIR /src/app

RUN npm install

Let's use caching and split the static files and the installation:

COPY ./package.json /src/app/package.json

WORKDIR /src/app

RUN npm install

COPY . /src/app
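Put together, a minimal Dockerfile using this caching layout might look as follows (the base image and the start command are assumptions for illustration):

FROM node:7
WORKDIR /src/app
COPY ./package.json /src/app/package.json
RUN npm install
COPY . /src/app
CMD ["node", "index.js"]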

Using the base image template node:7-onbuild:

$ cat Dockerfile

FROM node:7-onbuild

EXPOSE 3000

$ docker build .

In this case, files that do not need to be included in the image, such as system files, for example Dockerfile, .git, node_modules and files with keys, should be added to .dockerignore.
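A minimal .dockerignore for such a project might look like this (the exact entries depend on the project):

Dockerfile
.git
node_modules
# key files must not get into the image (the pattern is an example)
*.pem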

Mount a volume for configuration when starting the container:

-v /config

Copy a configuration file into it:

docker cp config.conf name_container:/config/
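A sketch using a named volume, so that the configuration survives re-creation of the container (the names are illustrative):

docker volume create config_volume
docker run -d --name name_container -v config_volume:/config ubuntu bash
docker cp config.conf name_container:/config/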

Real-time statistics of used resources:

essh@kubernetes-master:~/mongo-rs$ docker ps -q | docker stats

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS

c8222b91737e mongo-rs_slave_1 19.83% 44.12MiB / 15.55GiB 0.28% 54.5kB / 78.8kB 12.7MB / 5.42MB 31

aa12810d16f5 mongo-rs_backup_1 0.81% 44.64MiB / 15.55GiB 0.28% 12.7kB / 0B 24.6kB / 4.83MB 26

7537c906a7ef mongo-rs_master_1 20.09% 47.67MiB / 15.55GiB 0.30% 140kB / 70.7kB 19.2MB / 7.5MB 57

f3916da35b6b redis 0.15% 3.043MiB / 15.55GiB 0.02% 13.2kB / 0B 2.97MB / 0B 4

f97e0697db61 node_api 0.00% 65.52MiB / 15.55GiB 0.41% 862kB / 8.23kB 137MB / 24.6kB 20

8c0d1adc9b9c portainer 0.00% 8.859MiB / 15.55GiB 0.06% 102kB / 3.87MB 57.8MB / 122MB 20

6018b7e3d9cd node_payin 0.00% 9.297MiB / 15.55GiB 0.06% 222kB / 3.04kB 82.4MB / 24.6kB 11

^C

When creating images, you need to consider:

* when a large layer is changed, it will be recreated entirely, so it is often better to split it, for example, create one layer with 'npm i' and copy the code in a second one;

* if a file in the image is large and the container changes it, then the file will be completely copied from the read-only image layer into the editing layer; therefore, containers are supposed to be lightweight, and content is usually placed in a special storage.

code-as-a-service: the 12 factors (12factor.net):

* Codebase – one service – one repository;

* Dependencies – all dependent services are declared in the config;

* Config – configs are available through the environment;

* BackEnd – exchange data with other services via an API-based network;

* Processes – one service – one process, which, in the event of a crash, makes it possible to unambiguously detect it (the container itself exits) and restart it;

* Independence of the environment and no influence on it.

* CI/CD – code control (git) – build (Jenkins, GitLab) – release (Docker, Jenkins) – deploy (Helm, Kubernetes). Keeping the service lightweight is important, but there are programs not designed to run in containers, such as databases. Because of their peculiarities, certain requirements are imposed on their launch, and the profit is limited: due to big data they are not only slow to scale, a rolling update is hardly possible, and a restart must be performed on the same nodes as their data for reasons of access performance.

* Config – service relationships are defined in the configuration, for example, docker-compose.yml;

* Port binding – services communicate through ports; a port can be selected automatically, for example, if EXPOSE PORT is specified in the Dockerfile, then when the container is run with the -P flag it will be bound to a free host port automatically;

* Env – environment settings are passed through environment variables, not through configs, which allows them to be set in the service configuration, for example, docker-compose.yml;

* Logs – logs are streamed over the network, for example to ELK, or printed to the output, which is already streamed by Docker.
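A sketch of several of these points at the command line: the port from EXPOSE is published automatically with -P, settings are passed as environment variables, and logs are taken from the container output (the image and variable names are illustrative):

docker run -d --name api -P -e DB_HOST=db -e DB_PORT=5432 docker_user/name_image
docker port api
docker logs -f api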

Dockerd internals:

essh@kubernetes-master:~/mongo-rs$ ps aux | grep dockerd

root 6345 1.1 0.7 3257968 123640 ? Ssl Jul05 76:11 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

essh 16650 0.0 0.0 21536 1036 pts/6 S+ 23:37 0:00 grep --color=auto dockerd

essh@kubernetes-master:~/mongo-rs$ pgrep dockerd

6345

essh@kubernetes-master:~/mongo-rs$ pstree -c -p -A $(pgrep dockerd)

dockerd(6345)-+-docker-proxy(720)-+-{docker-proxy}(721)
              |                   |-{docker-proxy}(722)
              |                   |-{docker-proxy}(723)
              |                   |-{docker-proxy}(724)
              |                   |-{docker-proxy}(725)
              |                   |-{docker-proxy}(726)
              |                   |-{docker-proxy}(727)
              |                   `-{docker-proxy}(728)
              |-docker-proxy(7794)-+-{docker-proxy}(7808)

Dockerfile:

* clean the caches of package managers (apt-get, pip and others): this cache is not needed in production, it only takes up space and loads the network; however, nowadays this is often not relevant, since there are multi-stage builds, but more on that below.

* group commands that work with the same entities, for example, fetching the APT cache, installing programs and deleting the cache: in one instruction the layer contains only the code of the programs, while with separate instructions it contains the programs and the cache, because if you do not delete the cache in the same instruction, it will be saved in the layer regardless of subsequent actions (see the apt-get sketch after this list);

* separate instructions by frequency of change: for example, if the installation of software and the code are not split, then when something changes in the code, instead of using the ready-made layer with programs, they will be reinstalled, which entails significant image preparation time, which is critical for developers:

ADD ./app/package.json /app

RUN npm install

ADD ./app /app
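As an illustration of grouping commands that work with the same entity, a sketch of a single RUN instruction that installs a package and removes the APT cache within the same layer (the package is arbitrary):

RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*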

Docker alternatives

* Rocket or rkt – containers for the CoreOS operating environment from Red Hat, specially designed to use containers;

* Hyper-V – an environment for running Docker on the Windows operating system, which wraps the container in a lightweight virtual machine.

Docker has spun off its core components, which it uses as primitives and which have become standard components for container implementations such as rkt, into the containerd project:

* CRI-O – an open-source project aimed from the beginning at full support of the CRI (Container Runtime Interface) standards, the Runtime Specification (github.com/opencontainers/runtime-spec) and the Image Specification (github.com/opencontainers/image-spec), as a general interface for the interaction of the orchestration system with containers. Along with Docker, support for CRI-O 1.0 has been added to Kubernetes (more on this later) since version 1.7 in 2017, as well as to MiniKube and Kubic. It has a CLI (Command Line Interface) implementation in the Podman project, which almost completely repeats the Docker commands but without orchestration (Docker Swarm) and which is the default tool in Linux Fedora (see the sketch after this list).

* CRI (Container Runtime Interface, kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) – an environment for running containers, universally providing primitives (Executor, Supervisor, Metadata, Content, Snapshot, Events and Metrics) for working with Linux containers (process namespaces, cgroups, etc.).

* CNI (Container Networking Interface) – working with the network.
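Since Podman repeats the Docker CLI almost completely, the commands from this chapter can be tried with it nearly unchanged; a minimal sketch, assuming Podman is installed:

podman run -d --name name_container ubuntu bash
podman ps -a
podman rm -f name_container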

Portainer

The simplest monitoring option would be Portainer:

essh@kubernetes-master:~/microKubernetes$ cat << EOF > docker-compose.monitoring.yml

version: '2'

services:
  portainer:
    image: portainer/portainer
    command: -H unix:///var/run/docker.sock
    restart: always
    ports:
      - 9000:9000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
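Once the compose file is finished, Portainer can be brought up and checked on the published port; a sketch, assuming docker-compose is installed:

docker-compose -f docker-compose.monitoring.yml up -d
curl localhost:9000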
