How to Develop Guiding Principles for Digital Ethics and Put Them into Practice
As artificial intelligence (AI) moves toward becoming a standard technology in daily business (think of conversational user interfaces in call centers or machine learning-driven financial close processes), it is critical that organizations have a well-thought-out set of guiding principles in place for mitigating the ethical issues that can arise with the use of this technology. For example, automated decision making that doesn't factor in extenuating circumstances, the perceived misuse of personal data, and machine learning based on faulty data can all put a company's business model and reputation at risk.
A good example of this type of guiding principle is the set outlined by SAP to steer the development and deployment of its own AI-based software. So how do you go about building a set of principles that ensures the ethical use of AI? Three tasks in particular are key: gathering requirements for ensuring the implementation of ethical AI, adding ethical AI information to your existing standards and policies, and monitoring and auditing AI activities. Let's take a closer look.
Gathering Requirements for Ensuring the Implementation of Ethical AI
To build a successful set of principles, it is a good practice to first gather requirements based on customer feedback and academic discussions. Questions might include: In which cases must a system involve a human decision maker? What should the human-machine interaction look like? Which processes must be logged or monitored? Which parameters must be customizable to enable ethical system behavior? Within the purchasing process, for example, it could be a requirement to define a certain level of fair-traded goods and instruct the AI-based software to choose vendors accordingly. It could also be a requirement to ask users before using their personal data, even if the data is anonymized. These types of requirements must be gathered in close collaboration with customers, AI providers, and people who handle business ethics questions (executive leaders, portfolio or compliance managers, and sustainability departments, for instance).
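To make such a requirement concrete, it can help to sketch how it might be operationalized. The following minimal Python sketch is illustrative only (all names and thresholds are invented for the example); it shows how a fair-trade quota could be expressed as a customizable parameter that an AI-driven vendor-selection step must respect, with escalation to a human decision maker when no vendor qualifies.

```python
# Illustrative sketch: a gathered requirement (a minimum share of fair-traded
# goods) expressed as a customizable parameter for AI-driven vendor selection.
# All names and values are invented for this example.

from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    fair_trade_share: float  # share of fair-traded goods offered (0.0 to 1.0)
    ai_score: float          # ranking proposed by the AI-based software

def select_vendor(vendors, min_fair_trade_share=0.5):
    """Pick the best-scored vendor that satisfies the ethical requirement."""
    eligible = [v for v in vendors if v.fair_trade_share >= min_fair_trade_share]
    if not eligible:
        # No vendor fulfills the requirement: escalate to a human decision maker.
        raise ValueError("No vendor meets the fair-trade requirement; human review needed")
    return max(eligible, key=lambda v: v.ai_score)

vendors = [
    Vendor("Vendor A", fair_trade_share=0.8, ai_score=0.72),
    Vendor("Vendor B", fair_trade_share=0.3, ai_score=0.91),
]
print(select_vendor(vendors, min_fair_trade_share=0.5).name)  # Vendor A
```

In this sketch, the ethical requirement acts as a hard constraint that filters the AI's proposals rather than merely influencing its score, which keeps the rule transparent and auditable.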
Checklists can also be helpful for identifying the requirements needed to ensure ethical AI. Checklist items should include questions related to human involvement in AI, such as how the end user's cultural values are taken into account, how the end user's current context is evaluated, and in which situations the end user will want AI functionality turned off. Additional checklist items should focus on AI algorithms and boundary conditions, such as how "learn and forget" processes should be monitored to detect fraudulent activities, how a minimum training status can be determined, and to what extent computational results must be reproducible. Checklist items should also consider legal compliance requirements (such as data privacy regulations), how to unveil hidden override directives, and how to assess the potential long-term impact of AI operations. Will humans, or humanity as a whole, lose knowledge or capabilities? How can behavioral changes of the AI system be detected (due to hacking activities, for instance)?
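To keep such a checklist from remaining a static document, its items can be captured as structured records and tracked over time. The short Python sketch below is purely illustrative (the field names and statuses are assumptions, not part of any standard); it simply shows how open items could be filtered out for review.

```python
# Illustrative sketch: checklist items as structured records so that open
# items can be tracked and reported. Categories echo the discussion above;
# the field names and statuses are assumptions.

checklist = [
    {"category": "Human involvement",
     "question": "Are the end user's cultural values reflected in system behavior?",
     "status": "open"},
    {"category": "Human involvement",
     "question": "In which situations will the end user want AI functionality turned off?",
     "status": "open"},
    {"category": "Algorithms and boundary conditions",
     "question": "How are learn-and-forget processes monitored for fraudulent activity?",
     "status": "in review"},
    {"category": "Legal compliance",
     "question": "Which data privacy regulations apply to the training data?",
     "status": "done"},
]

# Report everything that still needs an answer before go-live.
for item in (i for i in checklist if i["status"] != "done"):
    print(f'[{item["category"]}] {item["question"]}')
```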
The requirements you gather will help you identify the areas in which you need to operationalize your guiding principles. The next step is to transform the requirements into additions that you make to your existing product standards and corporate policies.
Adding Ethical AI Information to Existing Standards and Policies
Product standards and policies are proven to help ensure quality, including security aspects. Your organization’s definition of ethical AI — and how to monitor it — can be included in implementation and operations standards as well as in security and audit policies to ensure widespread awareness and understanding across the business.
Adding this information to policies and standards yields practical instructions for everyone involved in the AI life cycle. The information must include patterns for human-machine interaction in specific situations, customization parameters to fulfill specific cultural requirements, and procedures to overrule AI (where this is reasonable and secure).
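One way to picture how such instructions might surface in practice is a small configuration structure that bundles interaction patterns, customization parameters, and an override procedure. The sketch below is illustrative only; the parameter names are invented, and a real standard or policy would define its own vocabulary.

```python
# Illustrative sketch: policy content expressed as configuration. All keys
# and values are invented for this example.

ai_policy = {
    "human_machine_interaction": {
        # Situations in which the system must hand the decision to a human.
        "require_human_decision": ["health_impact", "legal_dispute", "data_subject_objection"],
    },
    "cultural_customization": {
        # Parameters a customer can adjust to match local or cultural requirements.
        "default_language": "en",
        "working_days": ["Mon", "Tue", "Wed", "Thu", "Fri"],
    },
    "override": {
        # Procedure to overrule the AI, where reasonable and secure.
        "allowed_roles": ["process_owner", "compliance_manager"],
        "log_overrides": True,
    },
}

def can_override(role: str) -> bool:
    """Check whether a given role may overrule an AI-based decision."""
    return role in ai_policy["override"]["allowed_roles"]

print(can_override("compliance_manager"))  # True
```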
Monitoring and Auditing AI Activities
Automated controls (for tracking the level of fair-traded goods in a purchasing process or the use of anonymized, human-related data, for instance) can help you monitor AI activities and support audits of the AI system's behavior by ensuring that procedures are being followed. For example, automated controls could monitor price-finding algorithms for scenarios such as water shortages and apply rules for handling cases in which human health might be affected (stopping any automated price increases, for instance). Audits of the AI system's behavior can also be supported by reviewing any available information about why the system came to a particular decision. Keep in mind that evaluating a user's current situation is as important as assessing the potential risks related to alternative actions.
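As a rough illustration of such an automated control, the following Python sketch reviews a price proposal from a hypothetical AI-based pricing algorithm and blocks automated increases while a health-critical scenario (such as a water shortage) is active. The scenario names, threshold logic, and function names are assumptions made for the example.

```python
# Illustrative sketch: an automated control that blocks AI-proposed price
# increases while a health-critical scenario is active. Names and logic are
# invented for this example.

def price_control(current_price: float,
                  proposed_price: float,
                  active_scenarios: set) -> float:
    """Return the price that may actually be applied, logging blocked increases."""
    health_critical = {"water_shortage", "medical_supply_shortage"}
    blocking = active_scenarios & health_critical
    if proposed_price > current_price and blocking:
        print(f"Blocked increase from {current_price} to {proposed_price}: "
              f"health-critical scenario(s) active: {blocking}")
        return current_price
    return proposed_price

# The control keeps the price stable while the shortage scenario is active.
print(price_control(1.20, 1.80, {"water_shortage"}))  # 1.2
```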
Another method for auditing whether AI is operating reasonably is to turn off the AI algorithms and generate the results from the raw data using different calculation methods. If the resulting data differs from the AI-based results, something is obviously wrong. Turning off AI functionality and providing more basic data to the user can also be a requirement: humans sometimes do not want to rely solely on a system's output, preferring instead to test their gut feelings and come to their own decisions. Of course, using these "exit doors" (that is, turning off AI algorithms) is not always possible, especially if immediate action is required, as in high-speed trading. In cases where trend-setting decisions are required (adjusting a product portfolio, closing a branch office, or making investment decisions, for example), the ability to turn off AI algorithms, at least for test purposes, may help to avoid misuse or to identify fraudulent changes made by hackers or competitors. The results of such analyses must become part of product standards to ensure that business managers can rely on AI-based proposals.
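A minimal sketch of this kind of baseline comparison might look as follows; the plain-average baseline and the tolerance are assumptions chosen for the example, not a prescribed audit method.

```python
# Illustrative sketch: recompute a result from raw data with a simple,
# transparent method and flag cases where the AI-based result deviates
# beyond a tolerance. Baseline method and tolerance are assumptions.

def audit_against_baseline(raw_values, ai_result, tolerance=0.05):
    """Compare the AI result with a plain average of the raw data."""
    baseline = sum(raw_values) / len(raw_values)
    deviation = abs(ai_result - baseline) / baseline
    if deviation > tolerance:
        return (f"Investigate: AI result {ai_result} deviates {deviation:.0%} "
                f"from baseline {baseline:.2f}")
    return "OK: AI result is consistent with the baseline calculation"

print(audit_against_baseline([98, 102, 100, 101], ai_result=131))  # Investigate
print(audit_against_baseline([98, 102, 100, 101], ai_result=99))   # OK
```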
In general, the process of AI auditing will be of high interest to insurance agents and lawyers when it comes to liabilities based on a system's decisions and proposals, but it is also relevant and useful for any organization that is planning to use AI-based software. It is important to be aware of any potential issues related to the use of AI-based systems, since these issues represent risks. It is good practice to be proactive about mitigating these risks with additional security-related measures, such as implementing random reviews of AI behaviors, controls, and audits, or specifying human-machine interaction schemas for situations in which someone must make a decision that depends on a person's ethical attitude.