How Can Ethical Organizations Build Responsible AI?
The increased use of artificial intelligence (AI) is transforming society. Public and private organizations, nonprofits and for-profits alike, are adopting AI as part of their internal and customer-facing processes. As AI becomes increasingly pervasive, how can an organization that strives for ethical behavior ensure that it is a responsible user of the technology?
AI adoption requires sophisticated technology such as machine learning (ML) and specialized hardware, along with a robust team of data engineers, data scientists, and technologists coordinating their efforts to build effective AI tools that meet organizational needs. However, the needs considered are often purely operational (e.g., reducing cost, improving efficiency, enabling automated decisions based on specific parameters).
While these business-driven considerations are often at the forefront of AI development, organizations should also weigh considerations that go beyond optimizing AI purely for the bottom line. For customers and employees alike, organizations should strive to develop AI ethically, reducing the potential for bias and limiting negative externalities on society, particularly on underrepresented groups.
With AI adoption increasing at every level, what guardrails exist to protect users (both internal employees and consumers) and society from the harms that can and do occur when AI is thoughtlessly implemented or, worse, overtly biased and unfair? Three options can reduce (though likely not eliminate) the negative effects of bad AI: ethical technologists, ethical guidelines, and regulation.
Ethical Technologists: The expectation that ethical data scientists, technologists, or organizations will produce ethical models and algorithms has already proven unrealistic. The unintended effects of AI models built by well-intentioned data scientists appear regularly in the news.
Additionally, it has been reported that only 15% of instructors teach AI ethics and only 18% of data science students report learning anything about data science ethics. Ethics is complicated to begin with, and applying ethics to AI is even more challenging given the field's relative novelty and complexity (e.g., black-box models). AI is so complicated because it sits at the intersection of human behavior, exponential computational growth (Moore's Law), capitalistic demand, and society. Despite the clear necessity for AI ethics, academia is not providing the leadership needed to train data scientists in ethics.
Ethical Guidelines: AI ethics guidelines are pervasive; companies and organizations everywhere develop and share principles and guidelines that data scientists should apply when developing and deploying their AI models. While every organization should be applauded for developing ethical AI guidelines or principles, these documents often live far above the operations and development teams who build and run AI solutions within an organization. Such touchpoints are critical to establishing the framework for an ethical culture, but they lack the means to make the guidelines real. How does a data scientist operationalize a principle in their AI development?
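As one illustration only (a minimal sketch, not a complete solution; the data, column names, and the 0.1 threshold below are hypothetical assumptions), a principle such as "avoid disparate impact" could be operationalized as an automated fairness check that runs before a model ships:

import pandas as pd

def demographic_parity_gap(predictions: pd.Series, group: pd.Series) -> float:
    # Difference between the highest and lowest positive-prediction rates across groups.
    rates = predictions.groupby(group).mean()
    return float(rates.max() - rates.min())

# Hypothetical pre-deployment check; the data and the 0.1 threshold are illustrative.
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
})
gap = demographic_parity_gap(df["approved"], df["group"])
if gap > 0.1:
    print(f"FAIL: demographic parity gap {gap:.2f} exceeds the 0.1 policy threshold")
else:
    print(f"PASS: demographic parity gap {gap:.2f} is within the policy threshold")

A check like this turns an abstract principle into a concrete, repeatable gate in the development process, which is the kind of operationalization that guidelines alone do not provide.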
Regulations: Throughout the world, governments at every level, from individual states such as California to national and supranational bodies such as Canada and the European Union, strive to provide guidelines and regulation. But, as is often the case, regulators remain one or two steps behind this highly innovative sector. It is hard to regulate technological development that grows (or has the potential to grow) exponentially.
Regulations tend to be either too general (like the ethical guidelines mentioned above) or too specific, pushing organizations toward check-the-box compliance solutions. Check-the-box solutions became rampant after the EU implemented the General Data Protection Regulation (GDPR) and webpages began requesting permission to use cookies, and after the California Consumer Privacy Act (CCPA) required organizations to document certain considerations when developing AI. The result was not necessarily better AI; rather, AI development gained a compliance component (i.e., CYA) that AI developers did not genuinely apply.
Pathway forward: While no perfect solution is evident, it is clear that the current status quo for responsible AI is not sustainable in the short term, let alone the long term. The solutions need to come from AI practitioners, both individuals and organizations. Without collective work to make AI fair, ethical, accountable, and transparent (FEAT), there is a real risk that consumers will lose trust and find alternatives. We have an obligation to build the AI that society deserves, and we need tools to enable the development, deployment, and monitoring of ethical AI.
So.Ai’s vision is to enable society to develop ethical, unbiased, and socially responsible AI, data, and technology. Our mission is to provide an open and flexible platform to enable assessment of the risks, benefits, fairness, ethics, accountability, and transparency of AI models, data sets, and other technologies.
We believe that a focus on process is critical to the responsible use of technology. An ethical company culture requires a well-developed set of ethical processes and structures to support developers, the compliance team, and leadership.
Contact us for more information.