Regulating Artificial Intelligence

The previous blog post described what artificial intelligence (AI) is and how it is used in business applications, and touched on the disruption in employment and training that automation and AI are expected to bring. In its demands for changes in education and shifts in the nature of work, AI has been compared to the industrial revolution.

According to a report by the McKinsey Global Institute, the majority of occupations involve at least some activities that are automatable; maid service and harvesting crops by hand are among the few that have seen little automation. The report's midrange projection suggests that about 400 million workers worldwide could be displaced by automation between 2016 and 2030, and that automation will change far more occupations than it eliminates. Over the same period, the report projects that between 555 million and 890 million jobs will be added globally. Governments, businesses, and institutions together are in a position to prepare the workforce for these changes, to smooth a transition that could otherwise produce temporary spikes in unemployment, and to head off a rise in inequality driven by wage polarization.

And that addresses only the employment challenges of AI. The nature of AI introduces further challenges around privacy, bias, safety, and abuse. Privacy concerns are already on the agenda of regulatory bodies worldwide. The European Union has been a leader in this area with the General Data Protection Regulation (GDPR), which took effect in May 2018 and sets rules for data protection and privacy.

Enormous volumes of data must be amassed to train an AI system to perform its assigned function, and biases embedded in that data are perpetuated by the resulting system. Amazon encountered exactly this when it applied AI to recruiting: trained on ten years of the company's hiring history, during which most applicants were men, the system "learned" to favor male applicants over female ones. The same mechanism can carry bias into AI systems for mortgage lending or prison sentencing, and recognizing that it has happened is difficult when the underlying algorithm is neither transparent nor monitored.
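To make the mechanism concrete, here is a minimal synthetic sketch (assuming scikit-learn and NumPy; the data, feature names, and weights are invented for illustration and bear no relation to Amazon's actual system). A classifier is trained on fabricated "historical" hiring decisions that favored men, and it reproduces that preference even between equally qualified applicants:

```python
# A toy illustration of bias propagation, not any real recruiting system.
# All data here is synthetic; feature names are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "ten-year hiring history": qualification scores are drawn
# from the same distribution for everyone, but past decisions favored men.
is_male = rng.integers(0, 2, n)              # 1 = male applicant
qualification = rng.normal(0.0, 1.0, n)      # identical for both groups
# Historical hiring label: qualification matters, but being male adds a
# large bonus -- this is the bias embedded in the training data.
hired = (qualification + 1.5 * is_male + rng.normal(0, 0.5, n)) > 1.0

X = np.column_stack([qualification, is_male])
model = LogisticRegression().fit(X, hired)

print("learned weights [qualification, is_male]:", model.coef_[0])

# Two applicants with the same qualification score receive different
# hiring probabilities purely because of gender: the model has "learned"
# the historical bias, not job fitness.
p_woman = model.predict_proba([[1.0, 0]])[0, 1]
p_man = model.predict_proba([[1.0, 1]])[0, 1]
print(f"P(hire | qualified woman) = {p_woman:.2f}")
print(f"P(hire | qualified man)   = {p_man:.2f}")
```

The skew is visible here only because the gender feature and the learned weights are exposed; in a production system with opaque features and no auditing, the same bias could pass unnoticed.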

When algorithms drive cars, safety is expected to improve, but the systems will not be perfect, and when injuries or deaths occur, legal questions of responsibility will follow. A report from the Wharton School discusses business ethics as they relate to automation and robotics. The power of AI will also find applications in the military, robotics, politics, and social media, where misuse can be dangerous. International rivalry over military power, economic dominance, or forms of government presents fertile ground for disruption, and a system fed a steady diet of data over an extended period is an inviting target for malicious manipulation that may go undetected.

The need for intervention at both the government and private levels is clear. K-12 education, retraining, safety nets, infrastructure, data security, job growth, and, not least, social stability are just a sample of the issues requiring thoughtful attention. To date, more attention has gone to the commercialization of AI than to its ethics.
