Ethical AI & The AI Trust Gap

Creating and following rules is foundational to establishing trust. Sports fans might not like the results of certain games, but they accept the outcome if they feel the referee’s calls were fair. The same applies to politics: candidates and voters might not like a particular result, but they accept it if they trust the voting rules laid out by the registrar. Trust in any system is vital and rests on two things: transparency into the underlying rules and consistency in following them. In professional football, for example, the National Football League (NFL) has a set of rules that players and coaches must follow. The rules are published on the league’s website, and referees are trained to uphold them. Aside from the occasional questionable call, the rules are applied with a good deal of consistency, making it hard to argue that the game is flawed or unfair.

Now, let’s consider how this applies to Artificial Intelligence. Many people use AI daily, ordering groceries through Amazon’s Alexa or asking Apple’s Siri how long it will take to get home from work. When companies automate simple tasks, tasks we know how to do ourselves, we appreciate the convenience and saved time rather than questioning the system. Yet as AI systems evolve, there is seemingly no limit to what they can do. As we start relying on AI systems to decide jail sentences or home loan approvals, the potential for those systems to harm us grows.

When we rely on a system to make decisions of such importance, it becomes increasingly difficult to trust the result, especially when we can’t see how the rules are applied. Companies are not quick to share the algorithms behind their systems, and as AI becomes more complex, providing transparency becomes increasingly difficult. How do we know the data isn’t flawed, or that the algorithm isn’t producing unethical or biased results? This is the AI trust gap, and it is why ethical AI matters.

The New Rule Formation 

We have seen a rapid advancement in technology capabilities in our lifetime. For most of this time, we operated with traditional systems that followed a simple process: we provided the system with a defined set of rules, fed a controlled set of data into it, and then tested the output to confirm the accuracy of the results. Once the process was validated, the system was set, and we could send through additional data knowing that clear, transparent rules would be applied.

With the advent of Artificial Intelligence, system development has fundamentally changed.  We provide a set of constraints and goals to the system and supplement this with training data. The system learns from the data and detects patterns. Through its processing, the system develops the rules, and with certain systems, it may continue to learn and create new rules as it works with more data.

With traditional systems, there are tight controls on the features, and we can trace the flow of data from the source to show how a decision was made. With AI systems, goals and constraints are provided and training data is used to create the rules. These rules are built from a variety of features and a large volume of input data, a combination that makes the results difficult to trace back. This, in effect, is the core of the AI trust challenge.
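To make the contrast concrete, here is a minimal sketch in Python using scikit-learn. Everything in it is hypothetical for illustration: the hand-written approval rule, the feature names, and the tiny training set do not come from any real lending system.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Traditional system: a person writes the rule, and it is fully transparent.
def traditional_approve(income, credit_score):
    return income >= 50_000 and credit_score >= 680

# AI system: we supply goals and constraints (the model type and its settings)
# plus training data, and the system derives its own rules from the patterns.
X = [[40_000, 620], [85_000, 710], [52_000, 690], [30_000, 580],
     [95_000, 640], [60_000, 700], [45_000, 655], [120_000, 730]]
y = [0, 1, 1, 0, 1, 1, 0, 1]  # historical approve/deny decisions

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# A shallow tree can still be printed and inspected; deeper or more complex
# models quickly lose this kind of traceability.
print(export_text(model, feature_names=["income", "credit_score"]))
print(model.predict([[52_000, 690]]))
```

Even in this toy case, the learned rules live inside the model rather than in code a reviewer wrote, which is why tracing an AI decision back to its source takes deliberate effort.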

Artificial Intelligence offers tremendous potential as we continue to explore its capabilities and test its constraints. We have witnessed how AI systems can draw on vast amounts of data and make decisions instantaneously, and we already rely on supporting technologies without question (the internet, cloud computing, big data, sensors, facial recognition). But as AI evolves, the stack will become increasingly complicated. As more systems are built with deep learning, reinforcement learning, and generative adversarial networks (GANs), the business, technological, social, legal, political, and moral implications will grow, making it even more important to understand how to harness, utilize, and trust these technologies.

The Impact of AI Flaws on People

AI systems are force multipliers that make things easier to accomplish, but they can also be flawed, propagate problems, and harm individuals and society at large. The reality is that most AI teams treat ethics as an afterthought rather than a must-have. Worse yet, some organizations engage in ethics washing, embellishing how ethical their practices really are. We may not think about this when simply interacting with Siri on our phones, but it becomes real when we consider AI in the hands of authorities, or YouTube algorithms pushing conspiracy theory videos to increase views.

Criminal Risk Assessment Algorithms

Imagine standing in a court where an algorithm evaluates your profile, calculates your recidivism score, determines your bail, and provides the judge with sentencing recommendations. The argument for the system is that data-driven decisions can remove bias, better allocate resources for rehabilitation, determine whether someone is a risk to themselves in jail, and help a judge process more cases. It sounds good until you see the other side.

ProPublica, in their article on Machine Bias, highlighted how a seasoned criminal who had been convicted of armed robbery and spent five years in prison was classified as a lower risk to commit a future crime than a juvenile arrested for petty theft. A small but glaring bit of data: the seasoned criminal was white while the juvenile was black.

It did not help that the company that created the algorithm would not publicly disclose the calculations that were used to arrive at the defendant’s risk score. While there may have been valid reasons for the score, trust in the system begins to erode when we cannot understand and evaluate the details, especially those impacting real lives.

Apple Card Credit Limits for Women

David Heinemeier Hansson, the creator of the open-source web framework Ruby on Rails, started an uproar when he questioned how the Apple Card was setting credit limits. He tweeted that his approved credit limit was about 20 times higher than his wife’s, despite them having shared assets and her having a higher credit score. Steve Wozniak, the Apple co-founder, added that he had a similar experience: his credit limit was 10 times higher than the one his wife received. That led the New York State Department of Financial Services to launch an investigation into whether the Apple Card has a gender bias, and into Goldman Sachs, the issuing bank for the card.

Since the initial uproar, others, including journalist Jen Wieczner at Fortune, reported the opposite. Her credit limit was almost three times higher than her husband’s, despite both having virtually identical scores, paying their cards in full each month, and not carrying any debt.

All of this led the CEO of Goldman Sachs Bank USA to release a statement saying: “Some of our customers have told us they received lower credit lines than they expected. In many cases, this is because their existing credit cards are supplemental cards under their spouse’s primary account — which may result in the applicant having limited personal credit history. Apple Card’s credit decision process is not aware of your marital status at the time of the application.” The bank is now offering customers the ability to appeal their credit limit. While this attempts to explain that the algorithm was not set up to score men and women differently, it does little to change public perception, and it once again highlights the lack of trust.

There is, however, good news in all of this. As the topic of ethics in AI gains prominence, organizations are starting to pay attention. For example, the Department of Defense has set up a Defense Innovation Board to evaluate AI ethics principles for the Pentagon. MIT recently announced plans to create a new $1B college for AI where a key focus area will be encouraging students and researchers to think about ethical concerns and potential impacts of computing and AI.

Addressing the AI Trust Gap

Creating rules is not a straightforward process. The legislative branch of our government has an established process for creating and modifying legislation. Policymakers study rulemaking and often rely on public review of proposed rules. Unfortunately, rules created for AI systems do not follow the same process. Consider the following challenges when designing AI systems:

Following Precedent

When we create rules, we typically weigh the proposal, debate its benefits and drawbacks, and consider the precedent of how similar rules have been applied in the past. With AI, system designers are more likely to provide broad constraints, thereby limiting the system’s use of precedent. This can be beneficial, allowing creative thinking that leads to novel rules, but it lacks the debate and consideration that historically go into rulemaking. Designers of AI systems should find a balance in their use of precedent to help create a level of foundational trust.

Unintended Consequences of the System

AI systems allow us to do things faster, better, and more creatively, helping solve problems across a variety of industries. These systems help us optimize daily operations, operate with fewer errors, explore new concepts, and perform more calculations and tasks than a human can, and in doing so they eliminate jobs. While technology has threatened jobs in the past, this impact feels different, because AI is displacing knowledge workers in addition to blue-collar workers.

Additionally, AI systems could be used for warfare, surveillance, and law and order in ways that clash with moral values. Certain uses of facial recognition technology, such as Amazon Rekognition, are under fire from civil liberties groups who claim they facilitate human rights violations. Related to this, Jeff Bezos stated, “Technologies always are two-sided. There are ways they can be misused.” He added, “The last thing I’d ever want to do is stop the progress of new technologies, even when they are dual-use.” Designers of AI systems should debate the potential consequences of these systems, acknowledge and address criticism and concern, and proactively discuss topics such as job displacement and responsible use.

The Algorithm Made Me Do It

One of the biggest challenges is the mentality that if a system determines something, then it must be right. Too often we find users of AI systems willing to accept the information provided without questioning the recommendation. We know that AI will increasingly augment workers by doing much of the detailed work and allowing them to review and act on the information. Whether this is a pilot flying a plane, a judge making a sentencing recommendation, or a financial planner recommending an investment portfolio, there is a perceived risk that humans will lose the ability to make their own decisions and become overly reliant on the system.

Without adequate training on acceptable risk, humans might not identify when the system is making incorrect decisions. This played out with pilots of the Boeing 737 Max 8 aircraft, where the Maneuvering Characteristics Augmentation System (MCAS) made it difficult for pilots to understand the issue and take control of the plane. Designers of AI systems should evaluate the level of reliance that a user may develop and determine the training that should be required to identify and correct system errors.

Correlation Does Not Imply Causation

Much has been written about correlation and causation and the danger of confusing the two. Variables are correlated when there is a statistical relationship between them: a change in one tends to accompany a change in the other. Causation, on the other hand, means that acting on one variable produces a specific outcome in another. Establishing causation takes longer and typically requires controlled experiments.

Understanding and developing context for the underlying data is incredibly important. For example, statistics show that people with bigger hands tend to have larger vocabularies. This is because adults, who have bigger hands than children, tend to know more words. We can see the correlation between large hands and vocabulary, but we shouldn’t conclude that between two adults, the one with larger hands will have a broader vocabulary. Designers of AI systems should carefully consider the underlying data and model assumptions when interpreting results.
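The hand-size example can be reproduced in a few lines of Python. The numbers below are simulated purely to illustrate how a confounding variable (age) creates a correlation without causation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated population of children and adults.
age = rng.uniform(3, 60, size=5_000)

# Age drives both hand size and vocabulary; hand size itself causes nothing.
growth = np.minimum(age, 18)  # both quantities stop growing in adulthood
hand_size = 10 + 0.15 * growth + rng.normal(0, 0.5, size=age.size)
vocabulary = 2_000 + 900 * growth + rng.normal(0, 800, size=age.size)

# Across the whole population the correlation looks impressive...
print(np.corrcoef(hand_size, vocabulary)[0, 1])

# ...but among adults, where age no longer varies the signal, it collapses.
adults = age >= 18
print(np.corrcoef(hand_size[adults], vocabulary[adults])[0, 1])
```

A model trained on the full population would happily use hand size as a predictor of vocabulary, which is exactly the kind of spurious rule designers need to look for.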

Black Box. White Box

One of the most famous algorithms is Google Search. Sundar Pichai, the Google CEO, had to describe the algorithm to lawmakers, explaining that it uses over 200 signals, including relevance and popularity, to rank pages. A bipartisan bill recently proposed by US lawmakers would require internet giants such as Google, Facebook, Yahoo, and AOL to disclose their search algorithms. The reason cited was that these algorithms give companies too much power to decide what users are shown in search results, and companies could choose to exclude or skew results as a form of censorship. Google and other organizations dispute this perspective, arguing that disclosure of their algorithms would conflict with long-standing legal protections for trade secrets and other intellectual property. This raises another question: can intellectual property protection hold if the algorithms keep changing and no longer represent what was initially filed?

The General Data Protection Regulation (GDPR) already requires that companies give European Union citizens “meaningful information about the logic” and factors that go into the algorithms. This regulation does not require organizations to provide a complex explanation of the algorithm or the source code used but asks for a simple explanation of the rationale used to make decisions. Unfortunately, as algorithms get even more complex, trying to explain the rationale in simple terms will not be easy.
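As a rough illustration of what a “simple explanation of the rationale” might look like in practice, the sketch below pulls the largest contributing factors out of a small linear model. The model, feature names, and data are hypothetical, and real GDPR compliance involves far more than this, but it shows that a plain-language rationale does not require publishing source code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical credit-decision model; features and data are illustrative only.
feature_names = ["income", "credit_history_years", "existing_debt", "missed_payments"]
X = np.array([[60, 10, 5, 0], [25, 2, 20, 3], [48, 7, 12, 1],
              [80, 15, 3, 0], [30, 1, 25, 4], [55, 9, 8, 0]], dtype=float)
y = np.array([1, 0, 1, 1, 0, 1])  # 1 = approved, 0 = declined

model = LogisticRegression(max_iter=1000).fit(X, y)

def simple_rationale(applicant, top_n=2):
    """List the factors that pushed this particular decision the hardest."""
    contributions = model.coef_[0] * applicant
    order = np.argsort(-np.abs(contributions))[:top_n]
    return [(feature_names[i], round(float(contributions[i]), 2)) for i in order]

applicant = np.array([28.0, 2.0, 22.0, 3.0])
print(model.predict([applicant])[0], simple_rationale(applicant))
```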

As countries attempt to regulate AI, we will see the balance between the legal protections for these algorithms and the consumer interest in fairness play out on the public stage and in court cases. The Global Partnership on AI is studying and formulating best practices for AI technologies, working to advance the public’s understanding of AI, and serving as an open platform for discussion and engagement. Other organizations, such as the Defense Advanced Research Projects Agency (DARPA), are pursuing Explainable AI (XAI) as the next generation of AI to encourage transparency and trust in AI systems.

[Figure: current AI systems compared with explainable AI (XAI) systems]

7 Steps to Bridge the Trust Gap and Create Ethical AI

While there isn’t an easy answer, there is a series of actions organizations can take to join the conversation and address the AI trust challenge. Organizations and teams should actively take on these activities to better manage and work through existing and potential AI challenges:

1. Minimize Bias in Learning Data

Organizations need to recognize that bias is inherent in data and that care must be taken when processing it. A system that simply derives rules from historical data may conclude that men make better engineers and select for that. Amazon, for example, had to scrap its recruiting system after discovering it was biased against female engineers. According to research from MIT’s Media Lab, facial analysis systems sold by tech giants guessed the gender of male faces more accurately than female faces and had error rates of up to 35% for darker-skinned women, misidentifying Oprah Winfrey, Michelle Obama, and Serena Williams.

To address this:

  • Create a bias hypothesis and then evaluate and test for bias.
  • Assess and recognize data collection, data sampling, and data validity issues relating to bias.
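One lightweight way to act on the first bullet is to disaggregate a model’s error rate by demographic group and see whether the bias hypothesis holds. The sketch below is a minimal illustration; the labels, predictions, and group values are made up, and a real assessment would use your own evaluation data and fairness metrics.

```python
import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    """Compare the model's error rate across groups to test a bias hypothesis."""
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

# Hypothetical evaluation set: true labels, model predictions, protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 0])
group  = np.array(["A", "A", "B", "B", "A", "B", "B", "A", "B", "A"])

print(error_rate_by_group(y_true, y_pred, group))
# A large gap between groups (here 0.0 vs 0.6) is evidence the data or model is biased.
```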
2. Assess the Potential for Ethical Risk and Set Up Algorithm Controls

Designers of AI models should work to reduce algorithmic bias, ensure they continue to evaluate the system for vulnerabilities and hacks, and implement ethical reflection in workflows.

On Twitter, it took less than 24 hours for a chatbot, Microsoft Tay, to go from mimicking a playful, casual 19-year-old to being misogynistic and racist. Microsoft recognized that it had failed to put the appropriate algorithmic controls in place and had to shut Tay off.

To address this:

  • Evaluate various model designs to strike a balance between bias and variance.
  • Perform proactive ethical risk sweeps to evaluate solution vulnerabilities.
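To illustrate the first bullet, the sketch below sweeps model complexity and uses cross-validation to find the point where the model is neither underfitting (high bias) nor overfitting (high variance). The data is synthetic and the choice of a decision tree is arbitrary; it is a pattern, not a prescription.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic data: a smooth signal plus noise.
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=300)

# Shallow trees underfit (high bias); very deep trees overfit (high variance).
# Held-out performance shows which depth generalizes best.
for depth in [1, 3, 6, 12, None]:
    model = DecisionTreeRegressor(max_depth=depth, random_state=0)
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"max_depth={depth}: mean CV R^2 = {score:.3f}")
```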
3. Set Up Diverse Development Teams

The perspectives of women and people of color are underrepresented on AI teams. Diverse teams foster broad perspectives and result in fewer pitfalls.  Less than 2% of the technical staff at Facebook and Google are black, and less than a fifth of the technical workforce at the eight largest tech companies are women.

The Department of Internal Affairs in New Zealand had a system that rejected the passport photo of a man of Asian descent, claiming his eyes were closed even though they were clearly open. While there is no guarantee that a diverse team would have caught this issue, there is a higher likelihood it would have been addressed during development.

To address this:

  • Include diverse viewpoints from multiple disciplines such as legal and social science thinkers during your design process.
  • Create a staffing model that allows you to hire or subcontract diverse participants.
4. Create a Policy Guide

Organizations should define their position on AI technologies and communicate it through a policy statement or, at a minimum, a set of guidelines. They should ensure that all teams incorporate these principles into their design thinking and solution implementations. Finally, they should track global principles and standards as they evolve and weave them into their guidelines where appropriate.

A Stanford professor published a report claiming that AI can detect sexual orientation from photographs more accurately than humans can. This could lead to people being targeted or even jailed in countries where it is illegal to be gay because a system profiled them as such. Such technology could be extremely harmful, which underscores the importance of evaluating the implications and future uses of a technology.

To address this:

  • Define your organization’s position on AI technologies in a policy statement or set of guidelines.
  • Track evolving global principles and standards and weave them into your guidelines where appropriate.

5. Educate AI Teams

Begin with foundational education for AI teams and ensure there is a healthy internal debate around ethics in AI.  Take the time to have these tough conversations, listening to multiple perspectives and revisiting the conversation during the design and build phases.

There are various ways for people to educate themselves on issues related to ethics in AI. At a minimum, it requires actively following discussions on the topic.  Additionally, it involves attending courses and seminars. Most importantly, it requires getting a broad understanding of the various factors at play and how they apply to your AI initiatives.

To address this:

  • Require teams to participate in Ethics of AI courses like those from MIT and IEEE.
  • Have the AI teams discuss and present internally on ethics topics.
6. Leverage Automated Tools

Leverage technology to evaluate AI systems for potential bias or flaws.  This reduces the manual aspect of the task, makes it quicker and more efficient, and ensures consistency in how rules are applied.

One way to increase the transparency of AI systems without revealing the underlying intellectual property is to use counterfactuals.  For example, a person rejected for a loan would be provided with the conditions that would have resulted in approval instead of the data that resulted in rejection.

To address this:

  • Use a tool like Google’s What-If Tool to automate the evaluation of models.
  • Implement counterfactual applications to promote explainable AI.
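As a minimal sketch of the counterfactual idea, the code below greedily nudges a single feature of a rejected applicant until a hypothetical loan model flips its decision. This is not how the What-If Tool works internally, and the model, features, and data are invented for illustration, but it shows how a system can report “what would have led to approval” without exposing its internals.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical loan model; features are [income ($k), credit_score, debt_ratio (%)].
X = np.array([[30, 580, 45], [85, 720, 20], [55, 660, 35],
              [95, 700, 15], [40, 600, 50], [70, 690, 25]], dtype=float)
y = np.array([0, 1, 1, 1, 0, 1])  # 1 = approved
model = LogisticRegression(max_iter=1000).fit(X, y)

def counterfactual(applicant, feature, step, max_steps=50):
    """Nudge one feature until the decision flips, and return that counterfactual."""
    candidate = applicant.astype(float).copy()
    for _ in range(max_steps):
        if model.predict([candidate])[0] == 1:
            return candidate
        candidate[feature] += step
    return None  # no flip found within the search budget

rejected = np.array([35.0, 590.0, 48.0])
print(counterfactual(rejected, feature=1, step=10))
# e.g. "with a credit score of roughly X, this application would have been approved"
```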
7. Track and Participate in Industry Regulation

Organizations should ensure compliance with existing standards for ethical AI, keep track of areas where the law might not be clear, and support standards that help set ethical precedents.

Mariya Gabriel, the European Commissioner for Digital Economy and Society, stated: “Any decision made by an algorithm must be verifiable and explained.” This sentiment has been a catalyst for the European Union and other nations working to establish a consistent global policy on ethical AI. Although these nations are struggling, it is encouraging that they are actively involved and working to establish their national AI strategies.

To address this:

  • Ensure compliance with existing ethical AI standards and monitor areas where the law is not yet clear.
  • Support and participate in industry standards that help set ethical precedents.

What’s Your Next AI Play?

With the Artificial Intelligence landscape changing so frequently, it can be hard to decide on next steps and determine the right path forward. As you prioritize AI ethics, you may struggle to see progress from any single policy, guideline, or workflow change, but combined, the various steps work together to advance your organization. Our goal with these 7 steps is to help you understand the AI trust gap and begin to bridge it. Whether you follow one step or all of them, by taking the time to invest in ethical AI you raise awareness of the issue and move the discussion forward.

Whether you are at the start of your AI journey or somewhere along the way, we welcome you to reach out to Sense Corp.

As you work to scale your AI organization, make sure to review our Reaching the AI Summit eBook for step-by-step guidance and recommendations.

 
