Dystopian nightmares and possible futures. Should we worry about AI?
Those of you who have watched ‘Black Mirror’ are perfectly familiar with dystopian futures in which technology plays the key role: humans stuck in a matrix, confined to algorithms and computer commands, unable to break away from the written programmes. The 2019 crash of a Boeing 737 MAX could be perceived as such a dark scenario: because of a flawed automated flight-control system, the pilots were unable to override the built-in algorithm. And what about the time in 2018 when a self-driving car fatally struck a pedestrian in Arizona?
How can we not be worried about technology when, for decades, we have been fed science fiction films acquainting us with the idea of machines taking over the world, enslaving humans, and basically turning the Earth into a dusty, dystopian nightmare? Yet, according to many, including Ron Schmelzer writing in ‘Forbes,’ our fear of AI stems from a general anxiety about machine intelligence: a worry that such ‘intelligence’ would be used in morally wrong ways, a fear of mass unemployment, and an overall caution concerning technology. In his view, it is the very lack of understanding of artificial intelligence that causes these fears. Lance Eliot of ‘AI Trends Insider’ draws lessons for AI from the plane and self-driving-car cases and suggests further and better training of human operators. So in some way, you can relax: we are still needed.
Positive, negative, and intentional
Artificial intelligence is better seen as an umbrella term than as something with a single, simple definition. The Royal Society explains that, in reality, it is used to enhance search engines, individualise advertising, and power software alerts that detect banking fraud. Machine learning is essentially what allows the electric car manufacturer Tesla to develop its self-driving cars, and it enables NASA to analyse chemical compositions found in space. One AI research and advisory company reports that fast-evolving machine learning is also used in medicine, in fields such as behaviour modification, drug discovery, and many more.
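To make the fraud-detection use case concrete, here is a minimal, purely illustrative sketch in Python. It is not drawn from any of the sources above; it assumes scikit-learn is available and uses synthetic transaction amounts with an off-the-shelf anomaly detector, whereas real banking systems rely on far richer features and models.

```python
# A minimal sketch of ML-based fraud flagging via anomaly detection.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated transaction amounts: mostly ordinary purchases...
normal = rng.normal(loc=50, scale=15, size=(500, 1))
# ...plus a few unusually large transfers that should stand out.
suspicious = np.array([[900.0], [1200.0], [1500.0]])
transactions = np.vstack([normal, suspicious])

# An isolation forest flags points that are easy to isolate as anomalies;
# roughly 1% of transactions will be flagged, including the large transfers.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal

flagged = transactions[labels == -1].ravel()
print("Transactions flagged as potentially fraudulent:", np.sort(flagged))
```

The point of the sketch is simply that ‘detecting fraud’ here means scoring how unusual a data point is, nothing more mysterious than that.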
Nick Bostrom, a Swedish philosopher and the director of the Future of Humanity Institute at the University of Oxford, explains that a computer's capabilities can vastly exceed the human brain's, primarily because of size. The human brain is confined to the skull, whereas the size of a computer is unconstrained. Neurons in the human brain also work much more slowly than their machine counterparts of silicon transistors, chips, and circuits, and human memory is far outmatched by voluminous computer drives. Hence, according to Bostrom, computers can be more competent than humans, performing complex calculations much more quickly and with far fewer limits. He discusses the idea of superintelligence, on which, he argues, human life will come to depend. In short, in his opinion, we should refrain from anthropomorphising artificial intelligence because we cannot grasp the power that such machines could possess. Furthermore, artificial intelligence can far surpass humans when it comes to mathematics, logic, decision-making, and handling big data.
Steven Pinker, an American cognitive psychologist and Harvard University professor who specialises in visual cognition and psycholinguistics, points out that it is illogical to assume that machines could be smart enough to take over humanity yet do so by mistake. To take over, they would have to act on purpose, not as the result of some seemingly minor detail. The key to understanding how machines perform tasks lies in intentionality: the distinction between an intentional and an unintentional act.
Intentionality is ‘the power of minds and mental states to be about, to represent, or to stand for, things, properties, and states of affairs,’ says the Stanford Encyclopedia of Philosophy. Humans possess the ability to make their thoughts, acts, representations, and so on intentional. Probably one of the most straightforward illustrations comes from Hilary Putnam, an American analytic philosopher and mathematician best known for his critique of the ‘brain in a vat’ thought experiment. Putnam explained the idea of intentionality by describing the difference between ants leaving a trail that happens to resemble a portrait of Winston Churchill and a person painting that portrait. The difference, according to Putnam, is that the painter intends to paint the portrait (he does it intentionally), whereas the ants produced theirs incidentally.
Artificial intelligence, much like the ants, still lacks intentionality. When told that the person chatting with it feels ‘sad,’ even a chatbot that picks the most effective and usually appropriate response from its dataset does not intend to cheer that person up. At the moment, artificial intelligence has no capability of acting intentionally. But is intentionality that important? To some extent, yes. Without it, machines are not capable of notions such as ‘free will,’ ‘desire,’ and ‘need.’ Whether machines are capable of emotions is controversial, but if we accept that they have no intentionality, then they also do not intend for their actions to have any emotional or emotive results. From this follows a simple conclusion: as long as machines do not possess intentionality, they cannot ‘feel mistreated’ and thus will not want to organise the kind of revolution the mainstream media loves to thrive on.
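To illustrate the chatbot point, here is a toy, hypothetical retrieval-style bot in Python. Nothing in it comes from a real system; the keyword table and replies are invented. The bot returns a sympathetic message when it sees the word ‘sad,’ yet nothing resembling an intention to comfort appears anywhere in the code.

```python
# A toy keyword-matching chatbot. The keywords and replies are invented
# for illustration; no real chatbot or dataset is being reproduced here.
RESPONSES = {
    "sad": "I'm sorry to hear that. Do you want to talk about it?",
    "happy": "That's great! What made your day?",
    "bored": "How about trying something new today?",
}
DEFAULT_REPLY = "Tell me more."

def reply(message: str) -> str:
    """Return the first canned response whose keyword appears in the message."""
    lowered = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in lowered:
            return response
    return DEFAULT_REPLY

print(reply("I feel sad today"))
# Prints a sympathetic reply, but only because the string "sad" matched a
# dictionary key. There is no intention to comfort anywhere in this program.
```

The ‘comforting’ reply is just the output of a string match; the same mechanism would serve up any canned text, which is Putnam's ants in miniature.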
Steven Pinker, likewise, appears to dismiss the ‘doomsday’ atmosphere surrounding the topic of malevolent artificial intelligence. It is, however, still difficult to predict all the outcomes. Hence, a more neutral but somewhat ominous statement comes from Stephen Hawking, who claimed that AI could be either the best or the worst thing in human history.
Continuous observation
It is impossible to cover the whole topic of AI in one article. But the question remains: should we be worried about AI? If by worrying we mean minimising the chance of misconduct, then yes. However, if by worrying we mean assuming that, because of some seemingly inconspicuous detail, the machines will decide they have been mistreated, most of humanity will go extinct, and the remaining humans will become android slaves, then no. Still, there may never be a final answer to the question of whether to worry about AI, as the field is developing so rapidly. My suggestion is that, aside from having scholars in various fields monitor the development of AI, we should pay particular attention to the potential emergence of artificial intentionality. If this is possible, and when it happens, it will be a milestone in the practical application and status of technology. For now, it is counterproductive to spread fear about machines eventually, inevitably, slaughtering humans, when in the end AI actually saves our lives in many fields.