Artificial intelligence is defined by the Cambridge Dictionary as the study of how to produce machines that have some of the qualities that the human mind has, such as the ability to understand language, recognize pictures, solve problems, and learn. Another definition describes artificial intelligence as the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
Based on the two definitions above, we can simply say that artificial intelligence is intelligence demonstrated by machines, in contrast to the natural intelligence (NI) displayed by humans. This can mean seeing an object, perceiving its colour, texture, size or shape, and acting on the data received. Alternatively, it can mean hearing noises and speech, picking out specific sounds and words from those sound waves, and acting accordingly.
With Nova, we will be doing computer vision and image processing projects, where Nova will be recognizing a colour, a face, a pattern or a shape, and tracking it. Recognizing a specific object in a video frame and acting accordingly is one small example of artificial intelligence. As a second example, we will be customizing Nova by adding voice recognition modules, which will allow Nova to pick out words, sentences or sequences of sounds, and act accordingly. Finally, we will start scratching the surface of a huge topic: machine learning and deep learning. By examining and learning what neural networks are, we are going to develop machine learning algorithms based on visual and audio material.
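To give a first taste of the colour-recognition idea described above, here is a toy sketch that uses only NumPy, so it runs without a camera. The frame, the target colour and the tolerance value are all invented for illustration; a real Nova project would grab live frames from a video stream (for example with OpenCV) instead of a synthetic array.

```python
# A toy sketch of colour recognition: find where in a frame a target colour
# appears. Frame contents, target colour, and tolerance are illustrative only.
import numpy as np

def find_colour(frame, target_rgb, tolerance=40):
    """Return the (row, col) centre of pixels close to target_rgb, or None."""
    # A pixel matches if every channel is within `tolerance` of the target.
    mask = np.all(np.abs(frame.astype(int) - target_rgb) <= tolerance, axis=-1)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return int(rows.mean()), int(cols.mean())

# A 10x10 black frame with a red 2x2 patch near the bottom-right corner.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[6:8, 7:9] = (255, 0, 0)
print(find_colour(frame, (255, 0, 0)))  # → (6, 7), roughly the patch centre
```

A robot tracking a colour would run this (or a more robust version, typically in HSV colour space) on every frame and steer toward the reported centre.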
The earliest (and easiest to understand) approach to AI was symbolism, such as formal logic: "If an otherwise healthy adult has a fever, then they may have influenza". A second, more general, approach is Bayesian inference: "If the current patient has a fever, adjust the probability they have influenza in such-and-such way". The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: "Among known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% turned out to have influenza". A fourth approach is harder to understand intuitively, but is inspired by how the brain's machinery works: the neural network approach uses artificial "neurons" that learn by comparing the network's output to the desired output and altering the strengths of the connections between its internal neurons, "reinforcing" connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use several of these approaches, alongside many other AI and non-AI algorithms; the best approach often depends on the problem.
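The "analogizer" approach above can be sketched in a few lines of code. The following is a minimal nearest-neighbour illustration of the influenza example; the patient records, the two features (temperature and age) and the choice of k are all invented for this sketch, and a real application would scale the features so one does not dominate the distance.

```python
# A minimal nearest-neighbour sketch of the "analogizer" approach:
# estimate P(influenza) from the most similar past patients.
import math

# Hypothetical past patients: (temperature_C, age) and whether they had flu.
past_patients = [
    ((39.1, 34), True),
    ((36.8, 29), False),
    ((38.7, 61), True),
    ((37.0, 45), False),
    ((39.4, 22), True),
]

def knn_flu_probability(new_patient, k=3):
    """Fraction of the k most similar past patients who had influenza."""
    # Rank past patients by Euclidean distance to the new patient's features.
    ranked = sorted(past_patients,
                    key=lambda record: math.dist(record[0], new_patient))
    nearest = ranked[:k]
    return sum(1 for _, had_flu in nearest if had_flu) / k

print(knn_flu_probability((39.0, 30)))  # → 0.666..., i.e. "X% had influenza"
```

This is exactly the "X% of similar past patients" reasoning from the text, with similarity defined by plain geometric distance.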
Artificial intelligence was founded as an academic discipline in 1956, based on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it". Starting from this premise, there have been many breakthrough achievements in the field, largely paralleling improvements in computational technology. In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.
Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language. He had high hopes of a breakthrough toward human-level machines.
Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a proper academic field. Early work often focused on solving fairly abstract
problems in math and logic. But it wasn’t long before AI started to show promising results on more human tasks. In the late 1950s Arthur Samuel created programs that learned to play checkers. In 1962 one scored a win over a master at the game. In 1967 a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.
As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for tasks like understanding language. Others were inspired by the importance of learning to human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone, as computers mastered more tasks that could previously be done only by people.

Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math loosely inspired by how brain cells work, known as artificial neural networks. As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
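The adjustment of connections described above can be seen in miniature with a single artificial "neuron". The sketch below uses the classic perceptron update rule to learn the logical AND function; the task, the learning rate and the epoch count are chosen only to keep the illustration tiny, and real deep learning stacks many such units into large networks trained by backpropagation.

```python
# A single artificial "neuron" that learns by nudging its connection
# strengths whenever its output disagrees with the desired output.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for one neuron with a step activation."""
    w = [0.0, 0.0]   # connection strengths for the two inputs
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            # Reinforce (or weaken) connections in proportion to the error.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Training data: the truth table for logical AND.
and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_examples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in and_examples])  # → [0, 0, 0, 1]
```

Notice that nothing tells the neuron what AND means; it only ever sees examples and error signals, which is the essence of learning from training data.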
Through various projects with Nova, we will try to understand what artificial intelligence is, see what kinds of applications can be achieved in computer vision and voice recognition, and start discovering machine learning through a few examples of neural networks based on visual material. These practices will give you a foundational understanding of artificial intelligence and inspire you to explore possible industrial applications of this broad subject.