Artificial intelligence and its dangers: when the machine "hallucinates"




Imagine a passenger who spots the stop signal and panics as the vehicle he is riding in accelerates instead of slowing down. He opens his mouth to shout at the driver to stop immediately, then, as he sees the train bearing down on him, he remembers that there is no driver at all. Travelling at 125 miles per hour, the train ploughs through the self-driving car and carries on.

This is a fictional scenario, but it highlights very real shortcomings in artificial intelligence systems as they exist today. In recent years there have been many examples of machines seeing or hearing things that conflict with reality: when their recognition systems are "jammed" by interference, the machine effectively hallucinates. In the worst cases, such a "delirium" could be as dangerous as the scenario above, with a vehicle failing to register a stop signal even though it is in plain sight.

This strange phenomenon arises from what artificial intelligence researchers call "adversarial examples" or, more simply, "weird events".

"It happens because of inputs that the network must receive in a certain way, but is an unexpected mechanism to identify," said Anish Atali, a computer expert at the Massachusetts Institute of Technology at Cambridge.


Image caption: A few small stickers on a stop sign can interfere with a computer's visual recognition system, even though the human eye would never miss the sign (Kevin Eykholt et al)

Visual hallucinations

To date, researchers' efforts have focused mainly on visual recognition systems. Athalye has shown, for example, that an image of a cat can be manipulated in ways the human eye cannot detect, so that the machine sees it as something else entirely, such as some kind of vegetable. The targets of these tricks are so-called artificial neural networks, the machine-learning algorithms that underpin much of today's artificial intelligence. Visual recognition systems of this kind are already used in smartphones to automatically identify photos of friends and to pick out objects in an image.

More recently, Athalye and his colleagues have turned their attention to physical objects. By slightly changing the texture and colouring of these objects, the team fooled the artificial intelligence into seeing them as something completely different: a baseball was mistaken for a cup of coffee, and a turtle for a gun. The team catalogued some 200 examples of such failures, a worrying figure now that automated systems are entering our homes, drones are flying in our skies and self-driving vehicles are on our roads.
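The article does not spell out how such deceptive images are produced, but the best-known recipe is the "fast gradient sign method": nudge every pixel a tiny amount in whichever direction most increases the classifier's error. The minimal Python sketch below illustrates the idea; the pretrained ResNet-18 model and the "cat.jpg" file are illustrative assumptions, not details of the research described here.

```python
# A minimal sketch of the "fast gradient sign method" (FGSM), one common way
# to craft an adversarial image. The pretrained ResNet-18 and the cat photo
# ("cat.jpg") are assumptions for illustration only.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
x = preprocess(Image.open("cat.jpg")).unsqueeze(0)   # shape: (1, 3, 224, 224)
x.requires_grad_(True)

# The class the network currently predicts for the unmodified image.
true_label = model(x).argmax(dim=1)

# Take the gradient of the loss with respect to the *pixels*, then push every
# pixel a tiny step (epsilon) in the direction that increases the loss.
loss = torch.nn.functional.cross_entropy(model(x), true_label)
loss.backward()
epsilon = 0.01                                        # far below what a human would notice
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction   :", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Run against a genuine photograph, the two print statements would typically show different class labels, even though a person could not tell the two images apart.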

Athalye says the phenomenon was first explored out of simple curiosity, but concern about the safety and security of these systems has grown quickly. Self-driving cars, for example, which are now undergoing field tests, rely on complex deep-learning networks to recognise their surroundings and navigate the roads.

Yet last year researchers showed cases in which artificial neural networks misread stop signs as speed-limit signs, simply because small stickers had been placed on the sign.

Image caption: Artificial intelligence can mistake a turtle for a pistol, for example, which could have serious consequences when such systems are deployed in practice (MIT)

Audio hallucinations

Other forms of machine learning, beyond visual recognition, are also susceptible to these strange phenomena.

"In all areas, from image classification through automatic speech recognition to translation, artificial neural networks can be blocked to misrepresent," says Nicholas Carlini, an expert in Google Brain's smart device development team. By adding some audio inputs in the background, the device can read the wrong voice.

For Carlini, these "adversarial examples" are clear proof that machine learning "has not yet reached human ability, even on the simplest of tasks".

A deeper understanding

Artificial neural networks loosely imitate the way the human brain handles visual information and learns from it. A young child, for example, learns what a cat is by encountering the same object again and again, gradually noticing patterns and realising that this body has four legs, soft fur, two pointed ears, almond-shaped eyes and a long furry tail. In the visual part of the cerebral cortex, successive layers of neurons fire in response to visual details such as horizontal and vertical lines, allowing the child to build a mental "image" of the world and to learn from it.

Much the same happens in the artificial neural networks used in machine learning. Data passes through layers of artificial neurons until the network, after being trained on hundreds or even thousands of examples of the same thing, starts to spot patterns that let it predict what it is seeing. The most sophisticated of these systems, known as deep learning, pass the data through many more layers of neurons.
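As a rough illustration of that layered learning, the toy sketch below builds a small stack of artificial neurons and adjusts its weights over repeated passes through labelled examples. Everything in it, from the random stand-in data to the layer sizes, is an assumption made for demonstration, not something taken from the article.

```python
# A toy illustration of layered learning: data flows through stacked layers of
# artificial neurons, and after many labelled examples the weights settle into
# patterns that predict what the network is "seeing". The data here is random
# noise standing in for real images; every detail is illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(            # a small "deep" network: three stacked layers
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 2),             # two output classes, e.g. "cat" vs "not cat"
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(1000, 64)            # 1000 fake "images" of 64 features each
labels = torch.randint(0, 2, (1000,))     # 1000 fake labels

for epoch in range(5):                    # show the examples to the network repeatedly
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()                       # nudge every weight to reduce the error
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.3f}")
```

A real deep-learning system differs mainly in scale: many more layers, and millions of real images rather than a thousand rows of noise.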

Image caption: With a simple change to its texture, researchers tricked the machine into seeing a baseball as a cup of coffee (MIT)

Although computer scientists know how artificial neural networks are built, they do not necessarily know the fine detail of how those networks process data. "We do not currently understand these networks well enough to explain why adversarial phenomena occur or how to deal with them," says Athalye.

"Our learning frameworks are basically aimed at achieving a good performance-average," says Alexander Madre, a computer expert at the Massachusetts Institute of Technology.

To address the problem, neural networks may need to be trained on more difficult versions of their targets, so that they become better at handling anomalies, as the sketch below illustrates. Even then, it remains possible to alter the appearance of an image or an object enough to confuse the machine.
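One concrete reading of "training on more difficult versions of the target" is adversarial training: each batch of examples is deliberately perturbed with the same gradient trick used to attack the network, and the weights are then updated on those harder examples. The sketch below reuses a toy model and random data as stand-ins; every specific value is an assumption for illustration.

```python
# A minimal sketch of adversarial training: each batch is perturbed with the
# gradient-sign trick before the weights are updated. The tiny model and the
# random data are purely illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
epsilon = 0.1

inputs = torch.randn(256, 64)
labels = torch.randint(0, 2, (256,))

for epoch in range(5):
    # 1) craft an adversarial version of the batch (the "more difficult" examples)
    x = inputs.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), labels)
    loss.backward()
    x_adv = (x + epsilon * x.grad.sign()).detach()

    # 2) update the weights on those harder examples instead of the clean ones
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), labels)
    adv_loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: adversarial loss = {adv_loss.item():.3f}")
```

In practice this tends to harden the network only against the specific kind of perturbation it was trained on, which is consistent with the article's caveat that a sufficiently altered image or object can still confuse it.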

For an image classifier to truly succeed, it would need to reach something like the human grasp of concepts, realising that a rough sketch of a cat, a photograph of a cat and a real cat are all the same thing. For all the enormous progress of deep-learning networks, they still cannot match the human brain's ability to classify objects, make sense of the environment and cope with the unexpected.

To develop intelligent machines that can carry out such tasks reliably, it may be necessary to go back to the human brain and gain a better understanding of how it works.

Perception of relationships

Although artificial neural networks are inspired by the human cerebral cortex, there is still a huge difference between the two: the human brain does not just recognise visual features such as shapes and edges, it also encodes the relationships that connect those features into something larger, and it is through those relationships that we grasp the meaning of what we see.

Simon Stringer, of the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, says that when we look at a cat we see all of its features along with the relationships that bind them together. It is this binding of information that lets us make sense of the world around us, and it is a basic capability that the current generation of artificial neural networks lacks.

Image caption: We instantly recognise what we hear as music, but the input can be manipulated so that the machine interprets the music as spoken words (Getty Images)

To keep their models simple, the designers of artificial neural networks left out various properties of biological neurons, and the drawbacks of doing so are becoming clear. Real nerve cells communicate by firing impulses, they vary in how quickly they pass information on, from fast to slow, and many of them fire according to the timing of the impulses they themselves receive.

"Industrial networks depend on the unification of their cells, but brain cells multiply in their forms, which leads me to believe that this should have a close functional relationship," said Jeffrey Bowers, a neuroscientist at the University of Bristol and a researcher at the University of Bristol. brain aspects currently lacking in artificial neural networks.

Another difference is that while artificial neural networks pass signals along one-way chains of layers, neurons in the cerebral cortex send signals both from the top down and from the bottom up, says Stringer, whose laboratory builds simulations of the human brain to gain a better understanding of how it works.

Stringer's team is working with the Defence Science and Technology Laboratory in England to develop the next generation of artificial neural networks for military uses, such as smart cameras that can spot enemy tanks trying to conceal themselves.

Stringer's goal is to reach machine intelligence on the level of a mouse brain within twenty years. Matching human intelligence, he says, could take far longer.

"It has become clear that the human brain is very different from the way the current deep-learning models work," Madre said. "Increasing the level of the human brain will take an unknown time."

Until then, we cannot put blind trust in the artificial intelligence and self-driving cars that will become ever more common in the future, because the machine, too, is vulnerable to "delirium".

You can read the original article on BBC Future



