In 1942, Isaac Asimov formulated the Three Laws of Robotics. First, a robot may not harm a human being. Second, a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. Third, a robot may protect its own existence, as long as such protection does not conflict with the First or Second Law. Let us consider how these laws apply today, as big data and artificial intelligence become increasingly prevalent and their misuse gains momentum.

AI’s history dates back to the 1950s, beginning with the hypothesis that a machine could think like a human brain. Progress accelerated in the 1970s, when game theory and experimental psychology, two fields driving the innovation of intelligent machines, strongly influenced AI research. In the 1990s the field flourished, most visibly when IBM’s Deep Blue supercomputer defeated the world chess champion, Garry Kasparov. A major shift in AI’s development occurred around 2010 with the influx of huge amounts of data, commonly known as big data. AI runs on vast amounts of data, and with the advent of social media, data in the form of text, photos and videos became widely available. This abundance gave rise to new technologies such as social media networks, streaming applications and numerous other data-driven services, and data science emerged as a field of research and development. Consequently, the use of big data is increasing exponentially, creating unexpected scenarios in which AI experts must confront non-technical issues, which I discuss below.

AI-based systems have opened tremendous opportunities for business innovation and profit in nearly every aspect of the global economy. From self-driving cars and GPS navigation to disease diagnosis, education, healthcare, law enforcement and mass employment, the proliferation of AI systems in the social domain is growing at an astonishing rate. AI systems draw on big data from prison management systems to decide which prisoners can be released on bail. Similarly, AI systems are being employed by private and government agencies to expand surveillance. While AI-based systems reshape the lives of millions of people every day, these services remain unchecked due to a lack of proper processes and accountability. There is significant evidence that AI systems harm the environment, exacerbate the climate crisis and encode biases. For instance, Amazon, Microsoft and Google have made multi-million-dollar deals with oil and gas extraction companies to provide automation and AI-related services. Along with their promises, AI systems have given rise to a whole range of technical, social, ethical and legal challenges.

Take, for example, self-driving cars. The idea seems promising and convenient, yet it has resulted in tragic and preventable casualties. Many big companies, such as Google and Agro1, are investing in ride-sharing services and planning to launch patented driverless cars. However, the industry’s confidence wavered when a self-driving car operated by Uber killed an Arizona woman in 2018. Another case involved three Tesla drivers who were killed when the autopilot system failed to avoid a crash. The industry aims to keep these features largely unregulated and unproven on real-world roads. The AI systems built into driverless cars have not been reliable enough to interpret stop signs, make sharp turns or recognize traffic signals. Nor are such failures confined to cars. In 2015, Google’s AI-powered image recognition system notoriously classified photos of several Black people as photos of gorillas. In 2017, Apple introduced the iPhone X with facial recognition technology, and customers reported that the phone could not distinguish Chinese faces from one another. Microsoft created a Twitter bot called Tay that took only 12 hours to become a disturbingly misogynistic and hate-spewing dumpster fire. These failures arose from biased data sampling and from racism, both of which affect how AI systems are built.

AI also spreads disinformation. In 2017, a widespread campaign sought to brand the White Helmets, a humanitarian volunteer organization in Syria, as a Western proxy working to incite rebellion. According to the New York Times’ investigative reporting in Brazil, YouTube’s AI-powered recommendation engine had been recommending far-right, radical content, influencing people’s political ideologies in favor of the Brazilian government. AI-powered systems also figured in the allegations that Cambridge Analytica manipulated the 2016 U.S. elections. Each instance of AI’s corrupting and harmful effects makes the problem of regulating it larger and harder to resolve.

In 2017, teachers in Houston sued the school administration over its use of a computer program that compared students’ standardized test scores with the state average; some teachers felt the system unfairly penalized them for their students’ performance. The company that built the algorithm would not disclose how it worked, treating it as a trade secret, and the complex mathematical formulas and neural networks embedded in the software were incomprehensible, leaving teachers unable to challenge its results. The designers of such AI systems use very complex neural networks and often cannot explain how decisions are made. The judge ruled that use of the Education Value-Added Assessment System violated the teachers’ civil rights, and the teachers prevailed. Similarly, in 2018, Amazon warehouse workers in Minneapolis staged protests against Amazon’s automated management system, and Uber drivers launched a nationwide revolt in the U.S. Two laws are important here and bear directly on AI: the European General Data Protection Regulation (GDPR), adopted in April 2016 and in effect since May 2018, and, in the U.S., the California Consumer Privacy Act, passed in 2018 and taking effect in 2020. One addresses data privacy; the other regulates automated decisions.

In the military, the use of AI poses many challenges that often cross ethical and moral boundaries. The most alarming is the prospect of Lethal Autonomous Weapons (LAWs), AI-driven machines that can attack people autonomously, without human input. Google, Microsoft, Facebook and Amazon have all faced criticism for their involvement in military projects. Google withdrew from the Joint Enterprise Defense Infrastructure (JEDI) contract because of public outcry. The contract, issued by the U.S. Department of Defense, was worth $10 billion and was meant to transform the military’s computing systems; the project drew criticism for the way it would capture and exploit data for warfighting.

Activists, journalists and researchers have raised concerns about the potential hazards of AI in both the technological and social domains. Data is gathered from social media, dating websites, restaurant cameras and college campuses, and AI systems have brought changes to nearly every aspect of our society that are difficult, if not impossible, to measure. These systems are controlled by a handful of tech giants whose corporate interests are not aligned with those of the public; lured by large salaries, programmers do as their employers want and rarely discuss the potential downsides of their products. New legislation is therefore needed to ensure public safety and to keep AI systems reliable, safe and controllable, and lawmakers must enact laws that hold those systems accountable.

In addition, AI has become an interdisciplinary field that influences many social domains, and it should not rest solely in the hands of computer science and engineering. Faculty and students in university computer science and engineering departments are often not trained in AI’s use in social contexts; deeper understanding from the social sciences and humanities is needed wherever AI systems are applied to human populations. Political representatives must investigate the intentions of the tech companies, enact legislation that sets boundaries on the tech giants’ influence and raise public awareness. After Cambridge Analytica's role in the 2016 elections, AI companies must be checked for interference in the public’s ability to make decisions whose consequences will be widespread and last for decades. Autonomous weapons may seem beneficial in the short term, but if AI becomes advanced enough to make its own decisions, it poses a grave danger to the existence of humankind.