- As AI tools become increasingly sophisticated, business leaders and buyers must consider their AI systems’ transparency, potential for bias, and accountability
- Because a single algorithm can be used in countless ways, developers must carefully consider the ethical impacts of as many use cases as possible
- Organizations should create and adhere to a formal code of AI ethics that clearly defines their vision for the technology as well as acceptable and unacceptable processes and principles to achieve that goal
“Can AI be Ethical?” originally appeared in Forbes.
2018 was filled with machine learning and artificial intelligence (AI) news and controversies, from the coverage of the U.S. Department of Defense’s Project Maven to shareholder outrage at Amazon after it provided its facial recognition software to law enforcement officials. Saudi Arabia sparked a heated debate about the fundamental rights of robots by officially granting citizenship to “Sophia,” and multiple high-level tech innovators testified before the United States Congress.
We’ve reached a point where AI’s promise goes far beyond consumer-facing applications like text prediction or anti-lock braking systems. Its potential is seemingly limitless and has certainly caught the attention of governments and other powerful agencies around the world.
As these applications become increasingly sophisticated and widespread, AI systems’ transparency, potential for bias, and accountability issues must be examined—as well as the potential consequences of these machines making important decisions autonomously.
Can AI be ethical?
AI is not inherently good or bad. It’s a tool, and like all tools, it can be used in a multitude of ways. A hammer can be used to build a house or destroy it. Facial recognition can be used to locate thousands of missing children, or it can be used for mass civilian surveillance. Because a single algorithm can be used in countless ways, developers must carefully consider the ethical impacts of as many use cases as possible.
I believe technology creators and developers are responsible for bettering the world as we push the boundaries of innovation. Despite the intimidatingly wide scope of political, ethical and legal considerations for development, there are actionable ways companies can ensure transparency, accountability and fairness in their AI systems.
Ensure data quality and quantity
Machine learning trains algorithms to recognize patterns through exposure to Big Data, so they become a reflection of the data they’re fed. If your data is biased, your algorithm will be biased—and this can have devastating, long-lasting consequences.
The American justice system, for example, uses AI algorithms in multiple states to predict an inmate’s likelihood to reoffend. Drawing from data such as prior record, age, family history and employment history, these risk assessments are predictive models that depend on the past to forecast the future.
Typically, these tools are used to help determine bail or identify low-risk inmates to release on parole. But some states, such as Pennsylvania, are seriously considering using these assessments to determine whether or not to incarcerate someone at all—and for how long.
In a world where algorithms can literally determine someone’s freedom, organizations must teach them to make fair, reasonable decisions by relentlessly protecting against potential biases. The best way to curb bias is by aggregating diverse data, and lots of it. Remember that data can be biased based not only on what is included, but also on what is excluded, how it’s framed, and how it’s presented.
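The point above can be illustrated with a minimal sketch (the data, groups, and frequency-based "model" here are all hypothetical, invented purely for illustration): a predictor that does nothing more than memorize historical label rates will reproduce any skew in those rates, whether the skew reflects reality or merely how the data was collected.

```python
from collections import defaultdict

def train(records):
    """Record the high-risk label rate per group from historical data."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in records:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def predict(model, group, threshold=0.5):
    """Flag 'high risk' when the group's historical rate exceeds the threshold."""
    return model.get(group, 0.0) >= threshold

# Hypothetical history in which group "A" was labeled high-risk far more
# often -- e.g. because of uneven past enforcement, not actual behavior.
history = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 30 + [("B", 0)] * 70
model = train(history)

print(predict(model, "A"))  # True  -- the skew is learned, not corrected
print(predict(model, "B"))  # False
```

Real risk-assessment models are far more complex, but the failure mode is the same: nothing in the training step distinguishes a genuine pattern from a biased labeling process, which is why data auditing has to happen before training.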
Ensure ethical data sourcing and distribution
While access to large data sets is necessary for peak performance, focusing on data collection at any cost can lead to oversharing information at the expense of user privacy.
In the U.K., the country’s National Health Service made headlines and lost a tremendous amount of trust and support after the Information Commissioner’s Office found that the agency improperly shared 1.6 million patients’ healthcare records in an AI field trial.
Last year’s Cambridge Analytica controversy is another prime example of why organizations should prioritize transparent data collection and distribution practices, or risk permanent damage to their brands.
Ultimately, organizations must balance providing enough data to optimize machine learning—including a healthy mix of both structured and unstructured data—while maintaining ethical sourcing and distribution.
Develop an AI Code of Ethics and employ company ethicists
Consider professions that have a direct, significant impact on human lives: doctors, law enforcement, professional engineers, judges. In each field, there’s a clear code of ethics, an oath, and a legal obligation to act in the best interest of others.
Arguably, emerging AI applications can have an unprecedented impact on human lives. Yet, while some tech organizations—such as the Association for Computing Machinery and the Data & Society Research Institute—have developed or proposed strict codes of ethics to guide framework and decision-making, there’s currently no industry-wide charter for ethical AI development.
Understandably, developing a singular code of conduct to encompass all potential AI applications is challenging. But that shouldn’t deter organizations from developing their own. I believe all companies should adhere to a formal code of ethics when it comes to their AI. This code should clearly define the ultimate vision for the technology, as well as acceptable and unacceptable processes and principles to achieve that goal.
Furthermore, many pioneering organizations have hired dedicated ethicists to help navigate the complicated ethical terrain. Others are interviewing candidates for similar positions. Still others are instituting AI review boards and/or audit trails. At my company, Ultimate Software, we developed an internal AI Ethics team to guide our product and engineering decisions with a clear ethical framework.
In the emerging world of AI innovation, ethical decision-making will rely on a comprehensive checks-and-balances system to critically review algorithms and bring transparency and accountability to those processes.
Great power, even greater responsibility
AI has already drastically improved the lives of millions—paving the way for more accurate and affordable healthcare, improving food-production capacity, and building fundamentally stronger organizations. This technology could very well be the most influential innovation in human history, but with major promise comes major potential pitfalls. As a society, we must proactively address transparency, ethical considerations and policy issues to ensure we’re applying AI to put people first and fundamentally make the world a better place.