Eugeny Shtoltc - IT Cloud

  • Title: IT Cloud
  • Author: Eugeny Shtoltc
  • Year: 2021
Eugeny Shtoltc - IT Cloud: summary
In this book, the Chief Architect of the Cloud Native Competence Architecture Department at Sberbank shares his knowledge and experience with the reader on the creation and transition to the cloud ecosystem, as well as the creation and adaptation of applications for it. In the book, the author tries to lead the reader along the path, bypassing mistakes and difficulties. To do this, practical applications are demonstrated and explained so that the reader can use them as instructions for educational and work purposes. The reader can be both developers of different levels and ecosystem specialists who wish not to lose the relevance of their skills in an already changed world.


{
  "name": "nodejs",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "ESSch",
  "license": "ISC"
}

Is this ok? (yes) yes

First, let's create a web server. I'll use the Express library to create it:

vagrant@ubuntu:~/nodejs$ npm install Express --save
npm WARN deprecated Express@3.0.1: Package unsupported. Please use the express package (all lowercase) instead.
nodejs@1.0.0 /home/vagrant/nodejs
└── Express@3.0.1
npm WARN nodejs@1.0.0 No description
npm WARN nodejs@1.0.0 No repository field.

vagrant@ubuntu:~/nodejs$ cat << EOF > index.js
const express = require('express');
const app = express();

app.get('/healt', function (req, res) {
  res.send({status: "Healt"});
});

app.listen(9999, () => {
  console.log({status: "start"});
});
EOF

vagrant@ubuntu:~/nodejs$ node index.js &
[1] 18963
vagrant@ubuntu:~/nodejs$ {status: 'start'}
vagrant@ubuntu:~/nodejs$ curl localhost:9999/healt
{"status": "Healt"}

Our server is ready to work with Prometheus. We need to configure Prometheus for it.
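Such a configuration might look roughly as follows. This is a minimal sketch, not from the book: the job name and interval are illustrative, and note that Prometheus expects its own text exposition format, so in practice the application would expose a /metrics endpoint (for example via the prom-client library) rather than the JSON /healt endpoint above; only the target address matches the server we just started:

# prometheus.yml - a minimal sketch; job name and interval are illustrative
scrape_configs:
  - job_name: 'nodejs'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:9999']   # the Express server started above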

The Prometheus scaling problem arises when the data does not fit on one server: when a single server cannot keep up with recording the data, or when its query performance is no longer sufficient. Thanos solves this problem without requiring federation setup: it provides the user with a single interface and API, which it fans out to the Prometheus instances. The user sees a web interface similar to that of Prometheus. Thanos itself interacts with agents that are installed on the instances as sidecars, much as Istio does. Both Thanos and its agents are available as containers and as a Helm chart. For example, an agent can be brought up as a container attached to a Prometheus instance, pointed at the Prometheus config, and reloaded after configuration changes.

docker run --rm quay.io/thanos/thanos:v0.7.0 --help

docker run -d --net=host --rm \
  -v $(pwd)/prometheus0_eu1.yml:/etc/prometheus/prometheus.yml \
  --name prometheus-0-sidecar-eu1 \
  -u root \
  quay.io/thanos/thanos:v0.7.0 \
  sidecar \
  --http-address 0.0.0.0:19090 \
  --grpc-address 0.0.0.0:19190 \
  --reloader.config-file /etc/prometheus/prometheus.yml \
  --prometheus.url http://127.0.0.1:9090

Notifications are an important part of monitoring. A notification consists of a trigger that fires and a provider. A trigger is written in PromQL, as a rule with a condition, in Prometheus. When a trigger fires (the metric condition is met), Prometheus signals the provider to send a notification. The standard provider is Alertmanager, which is capable of sending messages to various receivers such as email and Slack.

For example, the metric "up", which takes the values 0 or 1, can be used to send a message if the server has been down for more than 1 minute. For this, a rule is written:

groups:
  - name: example
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 1m

When the metric stays equal to 0 for more than 1 minute, the trigger fires and Prometheus sends a request to Alertmanager. Alertmanager determines what to do with this event. We can specify that when the InstanceDown event is received, a message should be sent by mail. To do this, configure Alertmanager accordingly:

global:
  smtp_smarthost: 'localhost:25'
  smtp_from: 'youraddress@example.org'

route:
  receiver: example-email

receivers:
  - name: example-email
    email_configs:
      - to: 'youraddress@example.org'

Alertmanager will use the mail transfer agent installed on this machine, so one must be installed for it to work. Take the Simple Mail Transfer Protocol (SMTP), for example: to test the setup, let's install a console mail server, sendmail, alongside Alertmanager.

Fast and clear analysis of system logs

The open-source full-text search engine Lucene is used for quick searching in logs. On its basis, two products were built: Solr and Elasticsearch, which are quite similar in capabilities but differ in usability and license. Many popular stacks are built on them, for example, bundles shipping with Elasticsearch: ELK (Elasticsearch (Apache Lucene), Logstash, Kibana), EFK (Elasticsearch, Fluentd, Kibana), and products such as GrayLog2. Both GrayLog2 and the ELK/EFK bundles are actively used because they need little configuration, even outside test benches; for example, EFK can be deployed in a Kubernetes cluster with practically one command:

helm install efk-stack stable/elastic-stack --set logstash.enabled=false --set fluentd.enabled=true --set fluentd-elastics

An alternative that has not yet received much attention is the family of systems built around the previously discussed Prometheus, for example, PLG (Promtail (agent) – Loki (log storage, in the role of Prometheus) – Grafana).

Comparison of Elasticsearch and Solr (the systems are comparable):

Elastic:
** Commercial with open source and the ability to commit (via approval);
** Supports more complex queries, more analytics, out-of-the-box support for distributed queries, a more complete RESTful JSON API, chaining, machine learning, SQL (paid);
*** Full-text search;
*** Real-time index;
*** Monitoring (paid);
*** Monitoring via Elastic FQ;
*** Machine learning (paid);
*** Simple indexing;
*** More data types and structures;
** Lucene engine;
** Parent-child (JOIN);
** Scalable natively;
** Documentation since 2010;

Solr:
** OpenSource;
** High speed with JOIN;
*** Full-text search;
*** Real-time index;
*** Monitoring in the admin panel;
*** Machine learning through modules;
*** Input data: Word, PDF and others;
*** Requires a schema for indexing;
*** Data: nested objects;
** Lucene engine;
** JSON join;
** Scalable: SolrCloud (setup) && ZooKeeper (setup);
** Documentation since 2004.

At the present time, microservice architecture is used more and more: thanks to the loose coupling between its components and their simplicity, it simplifies their development, testing, and debugging. But the system as a whole becomes harder to analyze because of its distribution. To analyze the overall state, logs are collected in a centralized place and converted into an understandable form. The need also arises to analyze other data, for example, the NGINX access_log to collect metrics about traffic, or the mail server log to detect password-guessing attempts, and so on. Take ELK as an example of such a solution. ELK stands for a bundle of three products: Logstash, Elasticsearch, and Kibana, the first and last of which are heavily focused on centralization and ease of use. More generally, ELK is called the Elastic Stack, since Logstash, the tool for preparing logs, can be replaced by analogs such as Fluentd or Rsyslog, and the Kibana renderer can be replaced by Grafana. For example, although Kibana provides great analysis capabilities, Grafana provides notifications when events occur and can be used in conjunction with other products, for example, cAdvisor, which analyzes the state of the system and of individual containers.

ELK products can be installed independently, downloaded as self-contained containers for which you need to configure communication, or as a single container.

For Elasticsearch to work properly, the data needs to come in JSON format. If the data is submitted in text format (each log entry written on one line, separated from the previous one by a line break), then Elasticsearch can provide only full-text search, since each entry will be interpreted as a single string. To transmit logs in JSON format, there are two options: either configure the product under investigation to output in this format (for example, NGINX has such a capability), or post-process the logs from text format into JSON, which is handled by Logstash. The first is often impossible because there is already an accumulated base of logs, traditionally written in text format. It is important to note that if it is possible to transfer data in a structured form (JSON, XML, and others) right away, this should be done: with detailed parsing, any deviation from the format leads to inoperability, and with superficial parsing we lose valuable information. Either way, parsing is a bottleneck in this system, although it can be scaled to a limited extent per service or per log file. Fortunately, more and more products are starting to support structured logging; for example, recent versions of NGINX support logs in JSON format.
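As a sketch of what such structured output might look like, NGINX can be given a JSON-shaped log format via the log_format directive (the format name and the set of fields here are illustrative, not from the book):

# nginx.conf fragment - a sketch; the field set is illustrative
http {
    log_format json_combined escape=json
      '{"time":"$time_iso8601",'
      '"remote_addr":"$remote_addr",'
      '"request":"$request",'
      '"status":"$status",'
      '"body_bytes_sent":"$body_bytes_sent"}';

    access_log /var/log/nginx/access.log json_combined;
}

Each access log entry then arrives as one JSON object per line, which Elasticsearch can index field by field without any intermediate parsing.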

For systems that do not support this format, the conversion can be done with programs such as Logstash, Filebeat, or Fluentd. The first is included in the standard Elastic Stack delivery from the vendor and can be installed in the same way, as part of the ELK Docker container. It supports fetching data from files, the network, and the standard stream both at input and at output, and most importantly, it speaks the native Elasticsearch protocol. Logstash monitors log files based on modification date, or receives data over the network via telnet from a distributed system, for example, from containers, and after transformation sends it to the output, usually to Elasticsearch. It is simple and comes standard with the Elastic Stack, making it easy and hassle-free to configure; but because of the Java machine inside, it is heavy and not very functional, although it supports plugins, for example, MySQL synchronization for sending new data. Filebeat provides slightly more options. An enterprise tool for all occasions can be Fluentd, due to its high functionality (reading logs, system logs, etc.), scalability, and the ability to roll out across Kubernetes clusters using a Helm chart, as well as to monitor an entire data center in the standard package; but more about that in the relevant section.

To manage logs, you can use Curator, which can archive or delete old logs from Elasticsearch, increasing the efficiency of its work.

The process of obtaining logs is logically carried out by special collectors: Logstash, Fluentd, Filebeat, or others.

Fluentd is a less demanding and simpler analogue of Logstash. Configuration is done in /etc/td-agent/td-agent.conf, which contains four blocks:

** source – contains settings for where the data is collected from;

** match – contains settings for transferring received data;

** include – contains information about file types;

** system – contains system settings.
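A minimal td-agent.conf along these lines might look as follows. This is a sketch; the tag, file paths, and Elasticsearch address are illustrative, and the elasticsearch output type requires the fluent-plugin-elasticsearch plugin to be installed:

# /etc/td-agent/td-agent.conf - a minimal sketch; paths and tags are illustrative
<source>
  @type tail                          # follow a log file as it grows
  path /var/log/nginx/access.log
  pos_file /var/log/td-agent/nginx-access.pos
  tag nginx.access
  <parse>
    @type json                        # the log is already structured as JSON
  </parse>
</source>

<match nginx.access>
  @type elasticsearch                 # needs fluent-plugin-elasticsearch
  host localhost
  port 9200
  logstash_format true
</match>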

Logstash provides a much more functional configuration language. The Logstash agent daemon monitors changes in files. If the logs are located not locally but on a distributed system, then Logstash is installed on each server and runs in agent mode: bin/logstash agent -f /env/conf/my.conf . Since running Logstash only as an agent for sending logs is wasteful, you can instead use Logstash Forwarder (formerly Lumberjack), a product from the same developers, which forwards logs via the lumberjack protocol to the Logstash server. To track and retrieve data from MySQL, you can use the Packetbeat agent (https://www.8host.com/blog/sbor-metrik-infrastruktury-s-pomoshhyu-packetbeat-i-elk-v-ubuntu-14-04/).

Logstash also allows you to convert data of different types:

** grok – set regular expressions to rip fields out of a string, often to convert logs from text format to JSON;
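To illustrate the idea behind grok outside of Logstash, here is a sketch in Node.js of what such field extraction does: a regular expression with named groups rips fields out of a text log line and produces a JSON object (the line format and field names are invented for the example):

```javascript
// A sketch of grok-style parsing: extract named fields from a text log line.
// The line format and the field names here are invented for illustration.
const line = '127.0.0.1 - GET /healt 200';

// Named capture groups play the role of grok patterns like %{IP:ip}
const pattern = /^(?<ip>\S+) - (?<method>\S+) (?<path>\S+) (?<status>\d+)$/;
const match = line.match(pattern);

// match.groups holds the extracted fields, ready to be serialized as JSON
const entry = match ? { ...match.groups, status: Number(match.groups.status) } : null;
console.log(JSON.stringify(entry));
```

The same transformation in Logstash is declared in a filter block instead of being coded by hand, but the principle is identical: a pattern turns an unstructured line into named, typed fields.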
