Artificial intelligence – not only Big Data

Industry 4.0 and Big Data

The invention of the steam engine, electrification, and computerization are regarded as the three industrial revolutions that completely transformed the global economy. Today, the Fourth Industrial Revolution is underway. Its focus is automation – forecasting, optimization, decision-making, and artificial intelligence algorithms that are handed ever more of the responsibilities formerly entrusted to humans.

Machines already decide which movies we watch (Netflix, YouTube), which books we read and which products we buy (Amazon); they decide what we should know by choosing what to show us when we search for information (Google); they decide whether we get a loan, whether we are a fit for a job, what hours we should work – and sometimes they can even fire us. On the one hand, this is a serious threat to our freedom to make autonomous decisions about our own lives, and the revolution is redefining social relationships, not always for the better. On the other hand, automating repetitive tasks lets humans focus on more creative work – usually work involving interaction with other people, or requiring a more universal approach, where machines still lack the skills. This revolution may ultimately even bring us closer to John Maynard Keynes's prophecy that, thanks to automation, people will need to work only three hours a day.

Whatever the positive and negative aspects of the Fourth Industrial Revolution, no business can afford to ignore it. Remaining competitive requires automating repetitive tasks, including decision-making processes. The world's technology giants, such as Google, Amazon, and Microsoft, rely on huge datasets (Big Data) – the billions of searches on Google and YouTube, or the shopping decisions made every day on their platforms. Most companies do not have such datasets at their disposal, so they need a much broader array of technologies in order to employ automation.

Decision systems based on “Little Data”

The artificial intelligence systems used by the global technology giants are based on so-called Artificial Neural Networks (ANNs), which are perfectly suited to learning tasks for which a large set of worked examples exists and which do not change over time. For example, an ANN taught to recognize cats in ancient Egypt would perform just as well on modern cats, since cats' appearance has not changed much since then. It is much harder to teach an ANN sequential decision-making: a process in which we make a decision, observe the outcome, make the next decision, and so on. This is how people typically learn – by trial and error. Since ANNs cannot explore on their own, they need a large dataset from the outset. Technically, we could supply a network with records of how people perform the same task. In practice, however, this approach has its drawbacks too. First, such a dataset is rarely available. Moreover, if the environment changes rapidly, a human will adapt but the machine will not: ANNs are reflections of the past.

An example of such a sequential decision-making process is drug discovery: we try one combination of compounds, observe the new drug's properties, and decide which combination to try in the next test. Another example is logistics, where managers learn where to send their trucks so as to avoid dead legs (empty return trips). Tuning the setup of any factory machine can likewise be regarded as sequential decision-making: we set the "knobs" to some position, observe the performance, and after trying a few configurations we stick with the one that performs best, as in the sketch below. In each of these examples the goal is to run as few tests, and incur as little cost, as possible before arriving at the best decision.
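The common structure of all these problems fits in a few lines of code. The toy sketch below tunes a machine setting by trial and error; the settings, the simulated performance numbers, and the test budget are all illustrative assumptions, and the naive random choice of the next test is deliberately the weakest part:

```python
import random

# Three candidate machine settings. The "true" performances are unknown
# to the learner; these numbers exist only to simulate costly, noisy tests.
TRUE_PERFORMANCE = {"low": 0.8, "medium": 1.1, "high": 0.9}

def measure(setting):
    """Run one costly, noisy test of a setting (simulated)."""
    return TRUE_PERFORMANCE[setting] + random.gauss(0, 0.3)

totals = {s: 0.0 for s in TRUE_PERFORMANCE}
counts = {s: 0 for s in TRUE_PERFORMANCE}

for trial in range(15):                              # small test budget
    setting = random.choice(list(TRUE_PERFORMANCE))  # naive: pick at random
    totals[setting] += measure(setting)
    counts[setting] += 1

means = {s: totals[s] / max(counts[s], 1) for s in totals}
best = max(means, key=means.get)
print(f"estimated performance: {means} -> sticking with '{best}'")
```

Choosing the next test at random wastes the budget on settings we already know to be poor. The whole point of Optimal Learning is to replace that random choice with one that weighs what each test costs against what it can teach us.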

Bayesian optimization methods and the field of Optimal Learning were invented precisely to handle such challenges. One of the techniques studied within this field is the Knowledge Gradient, a method developed by Prof. Warren Powell of Princeton University. On the one hand, the algorithm uses its current knowledge to minimize losses or maximize profits; at the same time, it picks each next decision in such a way as to learn whether an even better option exists.

Let's imagine we are in a casino in an ideal world – one in which the casino does not always win. Three one-armed bandits stand in front of us, and at least one of them can be played at a profit. However, we do not know which one. We have to test each of them, bearing the cost of every play. How many times should we try each machine to learn which one is profitable, without losing too much money in the process? The Knowledge Gradient is a solution to exactly this problem.
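This casino problem can be written down in a few dozen lines. The sketch below implements one standard textbook form of the knowledge-gradient policy – for independent Gaussian beliefs about each machine and known Gaussian payoff noise – rather than any production system; the payoff means, priors, noise level, and budget are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

TRUE_MEANS = np.array([-0.5, 0.3, -0.2])  # illustrative; only one machine pays
NOISE_STD = 1.0                           # payoff noise (assumed known)
N = 50                                    # total plays we can afford

mu = np.zeros(3)          # our current estimate of each machine's payoff
sigma2 = np.full(3, 4.0)  # uncertainty of each estimate (prior variance)
rng = np.random.default_rng(0)

def kg_values(mu, sigma2):
    """Knowledge-gradient value of playing each machine once more."""
    # How much the belief about a machine would shift after one more play.
    sigma_tilde = sigma2 / np.sqrt(sigma2 + NOISE_STD**2)
    # Gap to the best competing estimate, in units of that shift.
    best_other = np.array([np.max(np.delete(mu, x)) for x in range(len(mu))])
    zeta = -np.abs(mu - best_other) / sigma_tilde
    return sigma_tilde * (zeta * norm.cdf(zeta) + norm.pdf(zeta))

for n in range(N):
    # Exploit current knowledge, plus a bonus for what we may still learn
    # over the remaining plays (the online knowledge-gradient rule).
    x = int(np.argmax(mu + (N - n) * kg_values(mu, sigma2)))
    payoff = rng.normal(TRUE_MEANS[x], NOISE_STD)
    # Bayesian update of the belief about the machine we just played.
    precision = 1 / sigma2[x] + 1 / NOISE_STD**2
    mu[x] = (mu[x] / sigma2[x] + payoff / NOISE_STD**2) / precision
    sigma2[x] = 1 / precision

print("estimated payoffs:", mu.round(2), "-> best machine:", int(np.argmax(mu)))
```

The term `(N - n) * kg_values(...)` is what balances earning and learning: early on, a play that could reveal a better machine is worth a lot, because many plays remain in which to exploit the discovery; near the end, the policy simply plays the machine it believes to be best.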

Professor Powell has successfully applied this algorithm to finding optimal drug compositions, to truck logistics, and to managing power grids that involve renewable energy sources. His most significant industrial achievements are driver scheduling and fleet planning systems for the biggest American truckload carriers – Swift, Schneider, and Hunt – which saved tens of millions of dollars by using them.

The method can be applied in any setting in which decisions must be made repeatedly and in which the pace of learning and adaptability to changing conditions are of utmost importance.

Optimal Learning in Poland

In Poland, Bayesian optimization techniques, along with other machine learning methods, are heavily used by the start-up ORA AI (and its branch dedicated to the hospitality industry, RoomSage). Cooperating with Prof. Powell and his research team, we have developed algorithms for managing bidding processes in Google Ads. We initially focused on a solution for the hospitality industry, and our algorithm is currently used by a few dozen hotels across Europe and North America. On such campaigns our AI typically increases profit by about 20-25%, and for some campaigns the figure reaches several hundred percent. Our algorithms have also won nearly every test in which they competed against Google's own algorithms. We are now expanding to every industry that advertises through Google Ads, and we already have clients in this broader market as well.

Ultimately, we consider ourselves a technology company – able to use the much-hyped neural networks, but focused on developing adaptive artificial intelligence algorithms based on Optimal Learning. We strongly believe that companies wanting to reap the benefits of the Fourth Industrial Revolution should take a closer look at these methods, as they offer far more flexibility in approaching automation challenges.


Author:
Piotr Zioło, Director of Data Science

