Dmitry Abramov - Elements applications of artificial intelligence in transport and logistics

Abramov Dmitry, Moscow Polytechnic University
Korpukov Alexander, Pirogov Russian National Research Medical University
Shmal Vadim, Federal state autonomous educational institution of higher education «Russian university of transport»
Minakov Pavel, Federal state autonomous educational institution of higher education «Russian university of transport»


Google co-founders Larry Page and Sergey Brin published an article on the future of robotics in 2006. This document includes a section on developing intelligent systems using deep neural networks. Page also noted that this area would not be practical without a wide range of underlying technologies.

In 2008, Max Jaderberg and Shai Halevi published «Deep Speech», presenting a technology that allowed the system to identify the phonemes of spoken language. Given four input sentences, the system produced output sentences that were almost grammatically correct but mispronounced several consonants. Deep Speech was one of the first programs to learn to speak and had a great impact on research in natural language processing.

In 2010, Geoffrey Hinton described the relationship between human-centered design and the field of natural language processing. The book was widely cited because it introduced the field of human-centered AI research.

Around the same time, Clifford Nass and Herbert A. Simon emphasized the importance of human-centered design in building artificial intelligence systems and laid out a number of design principles.

In 2014, Hinton and Thomas Kluver described neural networks and used them to build a system that could transcribe the speech of a person with a cleft lip. The transcription system showed significant improvements in speech recognition accuracy.

In 2015, Neil Jacobstein and Arun Ross described the TensorFlow framework, which is now one of the most popular data-driven machine learning frameworks.

In 2017, Fei-Fei Li highlighted the importance of deep learning in data science and described some of the research that has been done in this area.

Artificial neural networks and genetic algorithms

Artificial neural networks (ANNs), commonly referred to simply as deep learning algorithms, represent a paradigm shift in artificial intelligence. They can discover concepts and relationships without any predefined parameters, and they can learn from unstructured data that falls outside the scope of established rules. The first ANN models were built in the 1960s, but research has intensified in the last decade.

The rise in computing power opened up a new world of computing through the development of convolutional neural networks (CNNs) in the early 1970s. In the early 1980s, Stanislav Ulam developed the symbolic distance function, which became the basis for future network learning algorithms.

By the late 1970s, several CNNs were deployed on ImageNet. In the early 2000s, floating-point GPUs delivered exponential performance gains at low power consumption for data processing. The emergence of deep learning algorithms is a consequence of more general computational architectures and new methods for training neural networks.

With the latest advances in multi-core and GPU technology, training neural networks with multiple GPUs is possible at a fraction of the cost of conventional training. One of the most popular examples is GPU deep learning. Training deep neural networks on GPUs is fast and scalable, though implementing modern deep learning architectures still requires low-level programming capability.

Optimization of genetic algorithms can be an effective method for finding promising solutions to computer science problems.

Genetic algorithm techniques are usually implemented in a simulation environment, and many common optimization problems can be solved using standard library software such as PowerMorph or Q-Learning.

Traditional software applications based on genetic algorithms require a trained expert to program and customize their agent. To enable automatic scripting, genetic algorithm software can instead be distributed as source code that ordinary users can compile and run.

Genetic algorithms optimize over known solution representations, which can be of any type (e.g. integer search, matrix factorization, partitioning). In contrast, Monte Carlo optimization requires that an optimal solution can be generated by an unknown method. The advantage of genetic algorithms over other optimization methods lies in their automatic control over the number of generations required, the initial parameters, the evaluation function, and the reward for accurate predictions.
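The generation/evaluation loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the bit-string encoding, population size, truncation selection, and one-point crossover are all assumptions chosen for the example, and the fitness function is the simple "one-max" problem (count the 1 bits).

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=50,
           mutation_rate=0.05, seed=0):
    """Minimal generational GA maximizing `fitness` over bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)        # one-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with probability mutation_rate
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# "One-max": fitness is simply the number of 1 bits in the string.
best = evolve(fitness=sum)
print(sum(best))
```

Note that the evaluation function (`fitness`), the number of generations, and the mutation rate are exactly the knobs the text says a genetic algorithm exposes.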

An important property of a genetic algorithm is its ability to create a «wild» configuration of parameters (for example, alternating hot and cold endpoints) that corresponds to a given learning budget (the learning rate times the number of generations). This property allows the user to analyze whether an equilibrium configuration is unstable.

The downside of genetic algorithms is their dependence on distributed memory management. While extensive optimization techniques can handle large input sets and multiple processor/core configurations, the complexity of this operation can make genetic algorithm solutions vulnerable to resource constraints that impede progress. Even in theory, programs based on genetic algorithms can only find solutions to problems when run on an appropriate computer architecture. Examples of problems for a genetic algorithm running on a more limited architecture include memory limits for storing representations of the genetic algorithm, memory limits imposed by the underlying operating system or instruction set, and limits imposed by the programmer, such as caps on the processing power and/or memory allocated to the genetic algorithm.

Many optimization algorithms have been developed that allow genetic algorithms to run efficiently on limited hardware or on a conventional computer, but implementations of genetic algorithms based on these algorithms have been limited due to their high requirements for specialized hardware.

Heterogeneous hardware can run genetic algorithms with the speed and flexibility of a conventional computer while using less energy and compute time. Most implementations of genetic algorithms are based on a genetic architecture approach.

Genetic algorithms can be seen as an example of discrete optimization and computational complexity theory, and they offer a concise illustration of evolutionary algorithms. Unlike search algorithms, genetic algorithms let you control the changes in parameters that affect the quality of a solution. To do this, a genetic algorithm can explore a set of candidate algorithms for finding the optimal solution; as the search converges, it can select the candidate that is faster or more accurate.

In mathematical terms, a genetic algorithm is a function that maps states into transitions to successor states. A state can be a single location in a shared space or a collection of states. A «generation» is the number of states and transitions between them that must be traversed to reach the target state. The genetic algorithm uses transition probabilities to find the optimal solution and applies a small number of new mutations each time a generation ends. Most mutations are therefore random (or quasi-random) and can be ignored by the genetic algorithm when testing behavior or making decisions. However, if the algorithm can be used to solve the optimization problem, this fact can be exploited to implement the mutation step.
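The state-transition view above can be sketched as a chain of transitions, each driven by random mutation. This is an illustrative reading of the text, not a prescribed algorithm: the bit-string state, the per-bit mutation probability, and the "keep the candidate if it is no worse" acceptance rule are all assumptions made for the example.

```python
import random

rng = random.Random(1)

def step(state, fitness, mutation_prob=0.2):
    """One transition: propose a mutated successor state, keep it if no worse."""
    candidate = list(state)
    for i in range(len(candidate)):
        if rng.random() < mutation_prob:   # transition probability per component
            candidate[i] ^= 1              # flip a bit
    return candidate if fitness(candidate) >= fitness(state) else state

state = [0] * 10
for generation in range(200):              # a generation = one transition here
    state = step(state, fitness=sum)
print(sum(state))
```

Each call to `step` is the "function that maps states into transitions"; the mutation probability plays the role of the transition probability in the text.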

Transition probabilities determine the parameters of the algorithm and are critical for reaching a stable solution. As a simple example, if there were an unstable solution but only certain states could be traversed, the search could run into problems, since the mutation mechanism would keep changing the direction in which the algorithm moves. In other words, the problem of moving from one stable state to another is solved by changing the current state.

Another example might be that there are two states, «cold» and «hot», and that it takes a certain amount of time to transition between these two states. To transition from one state to another in a certain amount of time, the algorithm can use the mutation function to switch between cold and hot states. Thus, mutations optimize the available space.
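The cold/hot example can be written as a tiny simulation. The switch probability and the seed are invented parameters for the illustration; the point is only that a mutation function, applied repeatedly, eventually carries the system from one state to the other.

```python
import random

rng = random.Random(42)

SWITCH_PROB = 0.3          # assumed probability that a mutation flips the state

def mutate(state):
    """Mutation operator that may switch between the two states."""
    if rng.random() < SWITCH_PROB:
        return "hot" if state == "cold" else "cold"
    return state

state = "cold"
steps = 0
while state != "hot":      # expect roughly 1 / SWITCH_PROB steps on average
    state = mutate(state)
    steps += 1
print(state, steps)
```

The number of steps taken is the "certain amount of time" the transition requires; raising `SWITCH_PROB` shortens it, which is the sense in which mutations optimize the available space.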

Genetic algorithms do not require complex computational resources or detailed management of the network architecture. For example, a genetic algorithm could be adapted to run on a conventional computer when computing resources (memory and processing power) are limited. However, when genetic algorithms are constrained in this way, they can only estimate probabilities, which leads to poor results and unpredictable behavior.

Hybrid genetic algorithms combine a sequential genetic algorithm with a dynamic genetic algorithm in a random or probabilistic manner, improving efficiency by combining the advantages of the two methods while retaining the important aspects of each. They do not require a deep understanding of both mechanisms, and in some cases do not even require special knowledge of genetic algorithms. Many common genetic algorithms have been implemented for different types of problems. Notable use cases include extracting geotagged photos from social media, traffic prediction, image recognition in search engines, genetic matching between stem cell donors and recipients, and public service evaluations.

A probabilistic mutation is a mutation in which the probability that a new state will be observed in the current generation is unknown. Such mutations are closely related to genetic algorithms and error-prone mutations. Probabilistic mutation is a useful method for checking that a system meets certain criteria. For example, a workflow may have an error threshold determined by the context of the operation; in this case, the choice of a new sequence depends on the probability of producing an error.
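The error-threshold check described above can be sketched as follows. Everything concrete here is invented for the illustration: the candidate sequences, their per-element error probabilities, the threshold value, and the idea of estimating the unknown error rate by repeated noisy trials rather than reading it directly.

```python
import random

rng = random.Random(7)

ERROR_THRESHOLD = 0.1      # assumed error rate the workflow tolerates

def estimate_error_rate(sequence, trials=1000):
    """Estimate the unknown error probability of `sequence` by sampling."""
    failures = sum(
        1 for _ in range(trials)
        if rng.random() < rng.choice(sequence)   # one noisy check per trial
    )
    return failures / trials

# Each candidate sequence lists the (in practice unknown) error probability
# of its elements; the values are invented for the example.
candidates = [[0.02, 0.05, 0.01], [0.3, 0.2, 0.4]]
accepted = [s for s in candidates if estimate_error_rate(s) < ERROR_THRESHOLD]
print(len(accepted))
```

The first sequence passes the threshold and the second does not, which mirrors the text: a new sequence is chosen according to its probability of producing an error.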

Although probabilistic mutations are more complex than deterministic mutations, they are faster because there is no risk of failure. The probabilistic mutation algorithm, unlike deterministic mutations, can represent situations where the observed mutation probability is unknown. However, in contrast to the probabilistic mutation algorithm, parameters must be specified in a real genetic algorithm.

In practice, probabilistic mutations can be useful when the observed probability of each mutation is unknown. The difficulty of performing probabilistic mutations grows as more mutations are generated and as the probability of each mutation rises. Because of this, probabilistic mutations are most useful in situations where mutations occur frequently rather than as one-off events. Since they tend to proceed very slowly and have a high probability of failure, probabilistic mutations are only practical for systems that can sustain very high mutation rates.

There are also many hybrid mutation/genetic algorithms that can generate deterministic or probabilistic mutations. Several variants of genetic algorithms have been used to generate music for composers.

Inspired by a common technique, Harald Helfgott and Alberto O. Dinei developed an algorithm called MUSICA that generates music from the sequences of the first, second, and third bytes of a song. Their algorithm generated music from a six-part extended chord composition. Their algorithm produced a sequence of byte values for each element of the extended chord, and the initial value could be either the first byte or the second byte.

In April 2012, researchers at Harvard University published «Efficient Design of a Quality Assured Musical Genome», which described an approach using a genetic algorithm to create musical works.
