The robot invasion will not start tomorrow, or even the day after. So there is still time to understand them in more detail.
The first was Madaline.
Back in 1959, she used her impressive intelligence to solve a problem no one had previously known how to handle: eliminating the echo in telephone conversations (on long-distance calls back then, speakers were constantly disturbed by the echo of their own voice).
Madaline solved the problem: whenever the incoming signal exactly matched the outgoing one, it was removed electronically, and the solution was so elegant that it is still in use today. Of course, we are not talking about a person – it was a system of several adaptive linear elements, Multiple ADAptive LINear Elements, or Madaline for short. This was the first time artificial intelligence (AI) was used in practice.
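To make the idea concrete, here is a minimal sketch of how an adaptive linear element can cancel an echo. Everything in it is invented for illustration – a made-up echo path, synthetic signals, and the classic least-mean-squares weight update that Madaline-style elements were built around – written in Python rather than the original analogue hardware.

```python
# A toy echo canceller in the spirit of Madaline: an adaptive linear filter
# (LMS / Widrow-Hoff rule) learns the echo path and subtracts its estimate
# from the incoming line. All signals and the echo path are synthetic.
import numpy as np

rng = np.random.default_rng(0)

n = 20_000
outgoing = rng.standard_normal(n)                 # what the speaker says
echo_path = np.array([0.0, 0.6, 0.3, 0.1])        # hypothetical line echo
echo = np.convolve(outgoing, echo_path)[:n]       # echo of their own voice
far_end = 0.1 * rng.standard_normal(n)            # the other speaker (quiet here)
incoming = far_end + echo                         # what actually arrives

taps = 4
w = np.zeros(taps)        # adaptive filter weights
mu = 0.01                 # learning rate
cleaned = np.zeros(n)

for t in range(taps, n):
    x = outgoing[t - taps + 1:t + 1][::-1]   # recent outgoing samples
    echo_estimate = w @ x
    error = incoming[t] - echo_estimate      # echo removed -> mostly far-end speech
    w += mu * error * x                      # LMS weight update
    cleaned[t] = error

print("echo power before:    ", np.mean(echo[taps:] ** 2).round(3))
print("residual echo after:  ", np.mean((cleaned[taps:] - far_end[taps:]) ** 2).round(3))
```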
Today we hear again and again that robots will take our jobs – we will barely have finished our morning coffee before they have redone all our work, since they need no smoke breaks, no lunch breaks and no sleep. In reality, while many jobs will be automated in the very near future, this new breed of supermachine will most likely work with us rather than instead of us.
Despite tremendous progress in various areas – today programs can spot fraudsters before a crime has even been committed and diagnose cancer more reliably than doctors – even the most modern AI systems have not come one iota closer to so-called general intelligence.
According to a report by consultancy McKinsey, 5% of jobs can be fully automated, but in 60% of jobs, robots will only be able to take on about a third of tasks.
So, before we pull a sheet over ourselves and crawl off to the cemetery, let’s look at how collaboration with robots might actually be arranged and what will stop them from taking our places.
Reason #1: Robots can’t think like humans
Around the same time Madaline began her service on the telephone lines, the Hungarian-British philosopher Michael Polanyi was thinking hard about human intelligence. He realized that some skills can be described with clear rules and explained (like the correct placement of commas), but that this is only part of what we do.
People can do a great many things without quite knowing how they do them. Polanyi put it this way: “We know more than we can tell.” This covers practical skills such as riding a bicycle or kneading dough, as well as higher-level tasks. And, alas, if we do not know the rules, we cannot pass them on to a computer – this is Polanyi’s paradox.
So instead of trying to recreate human intelligence, scientists went the other way and began to develop data-driven thinking.
Rich Caruana, a senior researcher at Microsoft Research, puts it like this: “Some people think AI will work like this: we understand how people think, and then we build a machine in our own image and likeness. That is not how it happened.” He gives the example of airplanes, invented long before we understood how birds fly – they rely on entirely different aerodynamic principles, yet in the end we fly higher, faster and farther than any living creature.
Like Madaline, many AI systems are neural networks, meaning they learn from huge amounts of data using mathematical models. For example, Facebook trained its facial recognition software DeepFace on a set of four million photos. It looked for patterns in pictures tagged as showing the same person, and ended up matching pictures correctly about 97% of the time.
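On a vastly smaller scale, the mechanism looks something like the sketch below: a tiny network is shown labeled examples and nudges its weights until its answers agree with the labels. The “faces” here are just two clusters of random points standing in for two people – an illustration of learning from tagged data, not anything resembling DeepFace itself.

```python
# A toy neural network trained on labeled examples: it adjusts its weights
# until it tells the two (synthetic) classes apart. Data and sizes are made up.
import numpy as np

rng = np.random.default_rng(1)

# 200 labeled examples: "person 0" around (-1,-1), "person 1" around (+1,+1)
X = np.vstack([rng.normal(-1, 0.7, (100, 2)), rng.normal(1, 0.7, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)   # one hidden layer
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()         # probability of "person 1"
    grad_out = (p - y)[:, None] / len(y)     # gradient of cross-entropy loss
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * (h.T @ grad_out)
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h)
    b1 -= lr * grad_h.sum(axis=0)

print(f"training accuracy: {((p > 0.5) == y).mean():.0%}")   # the pattern is found
```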
Programs like DeepFace are the rising stars of Silicon Valley, and in many respects they have already surpassed their creators: at driving cars, recognizing speech, translating text from one language to another and, of course, tagging photos. And that is not the end of it – in the future they will work their way into the most varied fields, from healthcare to finance.
Reason #2: Our new robot friends aren’t perfect: they make mistakes
Alas, learning from existing data means that AI can make mistakes, sometimes unexpected ones. A neural network can, for instance, conclude that a 3D-printed turtle is a rifle (a real experiment by researchers at the Massachusetts Institute of Technology). The program cannot reason like a person, arguing that this is an animal with a shell and scaly skin and therefore a turtle. It thinks in patterns – in this case graphical ones, combinations of pixels – and changing just one pixel can turn a reasonable answer into a completely ridiculous one.
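Here is a deliberately simple sketch of that brittleness. The “classifier” below is just an invented linear scoring rule over 64 pixel values, nothing like the MIT demonstration, but it shows the principle: nudge the single pixel the model weights most heavily and the answer flips.

```python
# A toy one-pixel "attack" on a made-up linear image classifier: changing a
# single pixel pushes the score across the decision boundary.
import numpy as np

rng = np.random.default_rng(2)

pixels = 64                                   # an 8x8 "image", flattened
w = rng.normal(0, 1, pixels)                  # the classifier's learned weights
b = 0.0

def predict(img):
    return "turtle" if img @ w + b > 0 else "rifle"

image = rng.uniform(0, 1, pixels)
score = image @ w + b
print("original:", predict(image), "score:", round(score, 3))

# pick the pixel whose weight has the largest magnitude and push it just far
# enough to flip the sign of the score (no clipping: this is only a sketch)
i = np.argmax(np.abs(w))
attacked = image.copy()
attacked[i] -= (score + np.sign(score) * 0.01) / w[i]

print("after changing pixel", i, ":", predict(attacked),
      "score:", round(attacked @ w + b, 3))
```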
In addition, robots have no common sense, which no job can do without, because common sense is what lets us apply existing knowledge to new situations we have never encountered before.
A classic example is DeepMind’s artificial intelligence. Back in 2015 it was given the classic computer game Pong and began practicing, gradually raising its level of play. As expected, after a few hours it was beating people, and it even invented several completely new winning tricks – yet it had to learn the very similar game Breakout almost from scratch.
These problems are now getting a lot of attention, however, and there is already a system called IMPALA that demonstrates the transfer of knowledge between 30 different environments.
Reason #3: Robots can’t explain why they made a particular decision
Another AI problem is a modern version of Polanyi’s paradox. Since we do not really understand how our own brains work, we taught AI to think “statistically”, and now we do not know what is going on “in its head” either.
This is usually called the “black box” problem: we know what data went in and we can see the result, but we have no idea how the box in front of us reached its conclusion. As Caruana puts it: “So now we have two kinds of intelligence, and we do not understand either of them.”
Neural networks cannot speak, so they cannot explain to us what they are doing and why. And besides, like any AI, they have no common sense.
Several decades ago, Caruana applied a neural network to some medical data. He fed the system patients’ symptoms and treatment outcomes as input, wanting to estimate the risk of death over time – with such predictions, doctors could take preventive measures. Everything seemed to be working well, until one night a graduate student at the University of Pittsburgh noticed something strange. He ran the same data through a simpler algorithm, so that the decision-making logic could be traced, and found that the system treated asthma as a favorable factor for pneumonia patients.
Caruana says: “We went to the doctors with this, and they said it was wrong and the system had to be corrected – asthma affects the lungs, so it is an additional risk factor for pneumonia.” How it happened we will never know, but the developers’ best guess is that patients with asthma pay more attention to lung problems and see a doctor earlier, which means they recover more often.
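That guess is easy to reproduce on invented data. In the sketch below, asthma itself raises the risk of death, but asthma patients are far more likely to be treated early; a system that only ever sees the asthma column and the outcome would conclude, just like Caruana’s network, that asthma is “protective”. Every number here is made up for illustration.

```python
# Synthetic illustration of the asthma/pneumonia confounder: asthma raises
# risk, early treatment lowers it, and asthma patients are treated earlier.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

asthma = rng.random(n) < 0.15
treated_early = np.where(asthma, rng.random(n) < 0.9, rng.random(n) < 0.3)

risk = 0.10 + 0.03 * asthma - 0.08 * treated_early   # asthma hurts, early care helps
died = rng.random(n) < risk

print("mortality with asthma:   ", died[asthma].mean().round(3))
print("mortality without asthma:", died[~asthma].mean().round(3))
# With these invented numbers the asthma group dies less often, so a model
# trained only on (asthma -> outcome) would score asthma as lowering risk --
# exactly the conclusion the doctors told Caruana to correct.
```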
Interest in applying AI across different fields of human activity is growing, and that worries the experts. This year, new European Union rules come into force giving anyone about whom an AI has made a decision the right to learn the logic behind that decision. And Darpa, the research arm of the US Department of Defense, is investing $70 million in an “explainable AI” program.
David Gunning, a program manager at Darpa, says: “The accuracy of these systems has grown by an order of magnitude lately, but we pay for it with complexity and opacity – we do not know why the system makes a particular decision, or why, in a game, it chooses one move rather than another.”
Reason #4: Robots can be biased
Against this backdrop, fears are being voiced that algorithms may inherit our prejudices, including sexism and racism. For example, it recently emerged that a program for estimating the likelihood of reoffending was twice as likely to flag a black convict as a probable repeat offender.
Everything depends on the data the algorithm learns from – if the data is clean and consistent, the decision is likely to be correct. But datasets often have human prejudice built in.
Here’s a prime example from Google Translate. Take the phrase “He is a nurse. She is a doctor” and translate it into Hungarian, a language whose pronouns carry no gender. Translate the result back into English and it comes out reversed: “She is a nurse. He is a doctor.”
The reason is that the service’s algorithm was trained on trillions of pages of text, and from them it learned that a doctor is more likely to be a man, while nurses are more likely to be women.
Another problem is weighting. When analyzing data, AI, like people, “weighs” different parameters, deciding which matter and which do not. An algorithm may decide that a person’s postcode – a stand-in for the neighborhood they live in – is linked to creditworthiness, and so members of ethnic minorities living in poor areas end up being discriminated against yet again.
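A rough sketch of that effect, again on invented data: the scoring rule below never sees ethnicity, only income and a postcode flag, but because the postcode is strongly correlated with group membership in this synthetic population, approval rates diverge anyway. The rule is written by hand as a stand-in for what a model could learn from such historical data.

```python
# Proxy discrimination on invented data: the score never uses the protected
# attribute, but the correlated postcode feature carries the same information.
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

minority = rng.random(n) < 0.3
# in this synthetic setup, the minority group mostly lives in "poor" postcodes,
# regardless of actual creditworthiness
poor_postcode = np.where(minority, rng.random(n) < 0.8, rng.random(n) < 0.2)
income = rng.normal(50, 15, n)                 # genuine signal, same for both groups

score = 0.02 * income - 0.5 * poor_postcode    # hand-written stand-in for a learned model
approved = score > 0.7

print("approval rate, majority:", approved[~minority].mean().round(3))
print("approval rate, minority:", approved[minority].mean().round(3))
# Identical income distributions, yet the approval rates diverge, purely
# because the postcode column stands in for group membership.
```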
But it does not end with racism and sexism – there are biases we do not even expect. Nobel laureate Daniel Kahneman, a lifelong student of human irrationality, explained the problem well in a 2011 interview with the Freakonomics blog:
“If we use heuristic generalizations, they will by their very nature inevitably produce biased estimates, and this is true for both humans and artificial intelligence – only the AI does not have human heuristics.”
Robots are coming, and their arrival will, of course, change the labor market irreversibly, but until they learn to think like humans they cannot manage without us. It is just that, come tomorrow, some of our colleagues will be made of silicon.