With step changes in AI, automation, and telecommunications, technology has the potential to unlock prosperity for our society. How can we harness its power for the greater good?

 

   From the eighteenth-century Industrial Revolution to the 1990s tech boom in California, great technological revolutions and their promises of development have often come hand in hand with the exploitation of workers’ labor and a widening gap between socioeconomic classes. Nowadays, as we stand on the brink of a new revolution, it is easy to imagine how reckless use of artificial intelligence could mark a new destructive era for the planet, and it would not be the first in history. Yet millennia ago, the philosopher and scientist Aristotle argued that our ultimate goal as human beings lies in a persistent drive to improve ourselves, our societies, and the environments surrounding us. In that sense, isn’t it our responsibility to support development, and in our context, artificial intelligence? And if we should, how can we harness its power for the greater good? Two avenues are worth examining: the establishment of regulatory guidelines for AI, and the introduction of fairness and ethical behavior into AI systems. 

     Developing a conducive environment, where innovative approaches are supported by established guidelines, is one way to embrace AI’s full potential. Such a development-fostering ecosystem begins by encouraging businesses to adopt an entrepreneurial spirit, made possible through large national investments in research and development. China’s government understands and embraces this strategic approach, with investments estimated to reach $38.1 billion by 2027, alongside a projected 25% GDP gain by 2030 (Graph 1). To yield further positive outcomes, another possible strategy is the sharing of data sources. This favorable setting, assisted by collective data access, would nurture citizen engagement and collaboration around pioneering ideas by encouraging exchanges between individuals. 

 

Graph 1
Data extracted by PricewaterhouseCoopers / Published by the Financial Times

     However, for these initiatives to be feasible, we will need to resist the modern tendency to overregulate technological breakthroughs, which constrains opportunities for inventive ideas. For example, the US has passed legislation endorsing the use of AI, such as the National AI Initiative Act, while other countries are falling behind, imposing overwhelming terms and conditions that restrain an ever-changing and innovative field. The EU’s draft AI Act, with its risk-assessment requirements, perfectly depicts this phenomenon of “throwing a wrench in the works” by focusing on defining a “perfect AI” that is not even conceivable yet. Instead, governments should foster an educative approach to social awareness by stimulating skills development in AI-related fields, facilitating public-private partnerships for AI advancement, and providing resources and infrastructure for AI initiatives.

     While nurturing the optimal environment is essential, it would be of no use without ensuring fairness and just behavior in AI systems. 

     Biased AI, caused by the misinterpretation of patterns in data, is a perfect illustration of the concerns posed, since it jeopardizes the whole utility of such automation. For example, Amazon trained an AI on all the past resumes submitted to the company in search of a recruiting model. However, since technology is a male-dominated field, the system concluded that men must simply be better applicants, which is obviously not true. Since a model is only as good as the data it learns from, we should constrain what it can do: if 30% of applicants are women, then at least 30% of offers would need to go to women. However, this becomes quite a hurdle with “black-box” algorithms, whose decision-making mechanisms are not transparent. Therefore, efforts to make AI fairer will involve creating processes to explain its decisions while integrating larger amounts of data into the system. 
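     To make the proportional constraint above concrete, the short Python sketch below checks whether a shortlist of offers gives women at least their share of the applicant pool. The applicant data and the offers_meet_parity helper are hypothetical, written purely for illustration rather than taken from any real recruiting system.

    from collections import Counter

    def offers_meet_parity(applicants, offers, group="woman"):
        # Share of the group among all applicants vs. among those receiving offers.
        applicant_share = Counter(a["gender"] for a in applicants)[group] / len(applicants)
        offer_share = Counter(o["gender"] for o in offers)[group] / len(offers)
        # The constraint from the essay: offers should reflect at least the applicant share.
        return offer_share >= applicant_share

    # Hypothetical pool: 3 of 10 applicants (30%) are women.
    applicants = [{"name": f"a{i}", "gender": "woman" if i < 3 else "man"} for i in range(10)]
    # A naive shortlist that ignores parity: only 1 of its 5 offers goes to a woman (20%).
    offers = applicants[2:7]

    print(offers_meet_parity(applicants, offers))  # prints False, flagging the shortlist

In practice, such a check would complement, rather than replace, the processes for explaining decisions mentioned above.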

     Nonetheless, to ensure algorithmic fairness, it is important to keep in mind that computers might require a moral code upheld by ethical standards. An experiment conducted by J. Bonnefon, A. Shariff, and I. Rahwan tested participants’ responses to moral dilemmas and concluded that utilitarian moral decisions are accepted only when the individuals concerned are not personally implicated. Who would want a car that looks after the greater good over one that looks after its own passengers? On the other hand, the car had better not simply obey the driver’s preferences either, or smart cars might as well speed, tailgate, and engage in road rage, endangering others’ safety. Hence, teaching the car a single ethical philosophy appears unpopular; instead, bots should recognize commonalities in users’ ethical preferences, strengthened by legal principles like “stare decisis,” binding the AI to follow past precedents whenever possible. 

     To conclude, AI has the potential to unlock prosperity for our society: with the right environment promoting innovation, with governments undeterred by revolutionary ideas, with accessible AI free of excessive regulation, and with the establishment of a moral code of conduct. This might prove challenging, however, when a third of Americans think high-level machine intelligence will have a negative impact on society (Graph 2). Despite this, I truly believe that if sociologists and philosophers are recruited to work on every step of AI development, society will have every reason to embrace the thrilling technological changes we are witnessing.

Graph 2
Data extracted by the Center for the Governance of AI at the University of Oxford / Published by Statista

 

 

- Hedi El-Matri -