Introduction
Artificial intelligence refers to “an application capable of processing tasks that are, for the time being, performed more satisfactorily by human beings insofar as they involve high-level mental processes such as perceptual learning, memory organization and critical thinking”.
The American researcher Marvin Lee Minsky, considered the father of AI, proposed this definition. In 1956, at a meeting of experts at Dartmouth College (New Hampshire) organized to reflect on the creation of thinking machines, he convinced his audience to adopt the term "artificial intelligence".
History
After early work focused largely on expert systems, modern AI emerged much later. In 1989, the Frenchman Yann LeCun built the first neural network capable of recognizing handwritten digits.
But it was not until 2019 that his research, together with that of the Canadians Geoffrey Hinton and Yoshua Bengio, was crowned with the Turing Award. Why so late? Because deep learning had to overcome two obstacles before it could work. First, the computing power needed to train neural networks.
The rise of graphics processors (GPUs) in the 2010s solved that first problem. Second, learning requires massive volumes of data. Here the GAFAM have since done well, but large data sets have also been published as open source, such as ImageNet.
How to Embark on an Artificial Intelligence Project?
Before embarking on the deployment of an AI project, you will need to master the vocabulary of artificial intelligence and understand the potential and constraints of the main machine learning methods: supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning…
Similarly, many machine learning algorithms are available, from the simplest to the most complex: regression, decision tree, random forest, support vector machine, neural network (read our article Which artificial intelligence is right for you?).
Depending on the problem to be solved and the quality of the training data set, each will produce predictions with a more or less high accuracy score.
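To make the notion of accuracy score concrete, here is a minimal sketch in Python: a toy classification problem (the data and both "models" are invented for the example) scored with the usual accuracy metric, first with a naive majority-class baseline, then with a simple threshold rule.

```python
# Toy illustration of an accuracy score: the same problem evaluated
# with two models of different quality. Data is invented for the example.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical test set: one feature x, label mostly 1 when x >= 5,
# with one noisy point (x=4 is labelled 1).
X = [1, 2, 3, 4, 5, 6, 7, 8]
y = [0, 0, 0, 1, 1, 1, 1, 1]

# Model A: always predict the majority class (a weak baseline).
majority = max(set(y), key=y.count)
pred_a = [majority] * len(X)

# Model B: a simple threshold rule.
pred_b = [1 if x >= 5 else 0 for x in X]

print(accuracy(y, pred_a))  # 0.625
print(accuracy(y, pred_b))  # 0.875
```

The better model still misses the noisy point, which illustrates how the quality of the training data caps the achievable score.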
Comparative
Machine learning infrastructures and libraries, deep learning frameworks, automated machine learning environments, data science studios: tools abound in the field of artificial intelligence. Hence the importance of comparing the strengths and weaknesses of each to make the right choice.
Who Uses Artificial Intelligence?
Automotive, banking and finance, logistics, energy, industry: the rise of artificial intelligence spares no sector of activity. And for good reason. Machine learning algorithms can be applied at every level, according to business needs.
What are the benefits of Artificial Intelligence?
In the automotive industry, artificial intelligence powers autonomous vehicles via deep learning models (neural networks).
In banking and finance, it estimates the risks of investments or trades. In transport, it calculates the best routes and optimizes flows within warehouses.
In energy and retail, it forecasts customer consumption to optimize inventory and distribution. Finally, in industry, it makes it possible to anticipate equipment failures (whether for a robot on an assembly line, a computer server, or an elevator) before they occur. The objective: to carry out maintenance operations preventively.
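The preventive-maintenance idea can be sketched very simply: monitor a sensor signal and flag equipment whose readings degrade faster than a threshold, so maintenance is scheduled before the failure. The sensor values, window size, and threshold below are all invented for the illustration; a real system would typically train a model on historical failure data instead of using a fixed rule.

```python
# Minimal preventive-maintenance sketch on invented vibration readings:
# flag equipment whose recent trend rises too steeply.

def needs_maintenance(readings, window=3, threshold=0.5):
    """Flag if the average rise over the last `window` readings is too steep."""
    recent = readings[-window:]
    rises = [b - a for a, b in zip(recent, recent[1:])]
    return sum(rises) / len(rises) > threshold

healthy = [1.0, 1.1, 1.0, 1.1, 1.2]    # stable vibration level
degrading = [1.0, 1.2, 1.9, 2.8, 3.9]  # accelerating vibration

print(needs_maintenance(healthy))    # False
print(needs_maintenance(degrading))  # True
```

The flagged machine would then be serviced during planned downtime rather than after a breakdown, which is the whole point of the preventive approach described above.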
On a daily basis, artificial intelligence is also used to power intelligent assistants (chatbots, callbots, voice bots) or to help smartphone cameras take a good shot in any circumstances.
Conclusion
Artificial intelligence has given rise to a whole host of new skill profiles. The first of these is none other than the data scientist, who is expected to combine expertise in big data, algorithms, statistics, data visualization, and the business domain.
With the rise of AI projects, a new profile now supports the generalist data scientist: the machine learning engineer, a specialized data scientist whose mission is to cover the entire life cycle of a learning model, from its design and training to its monitoring.
And, of course, through its deployment (read our article Machine learning engineer: the new star profession of data science).