Where AI risks arise and how to control for them
Risks can arise across the entire life of an AI solution, from its conception through its use and monitoring, and can touch off unintended consequences. We’ve identified risk-specific controls that can help companies manage them.
Enterprise-wide controls span all five stages of an AI solution’s life cycle. The stages, and the risks that can arise at each, are:

1. Conceptualization
• Potentially unethical use cases

2. Data management
• Incomplete or inaccurate data
• Other regulatory noncompliance
• Unsecured “protected” data

3. Model development
• Nonrepresentative data
• Biased or discriminatory model outcomes
• Model instability or performance degradation

4. Model implementation
• Implementation errors
• Poor technology-environment design
• Insufficient training and skills

5. Model use and decision making
• Insufficient training and skills
• Slow detection of/response to performance issues
• Technology-environment malfunction
• Cybersecurity threats
• Failure at the human–machine interface
• Insufficient learning feedback loop
Insufficient learning feedback loop
5. Model use and decision making
Examples
• New sources of fraud identified during performance monitoring of a fraud model are not incorporated into next-generation models
Control examples
• Feedback requirements and feedback loops built into the model-development and -redevelopment life cycle
• Systematic tracking and reporting of all models’ performance, including errors and near misses
Potentially unethical use cases
1. Conceptualization
Examples
• Marketing outreach models used directly or indirectly to manipulate public opinion through social media
Control examples
• Enterprise-wide definition of approved AI use cases based on organization vision and values
• Independent review of model purpose, proposed analytic methods, anticipated variables, and intended use
Incomplete or inaccurate data
2. Data management
Examples
• Mislabeled medical scans used to automate diagnoses
Control examples
• Minimum data-quality requirements
• Descriptive statistics and anomaly detection to identify potential quality issues (see the sketch below)
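As a minimal illustration of the descriptive-statistics control, the sketch below profiles a data set and flags basic quality issues. The thresholds and the file named in the usage comment are assumptions, not part of any particular pipeline.

```python
import pandas as pd

def profile_and_flag(df: pd.DataFrame, max_null_rate: float = 0.05,
                     z_cutoff: float = 4.0) -> dict:
    """Compute descriptive statistics and flag simple data-quality anomalies."""
    report = {"describe": df.describe(include="all")}
    # Columns whose share of missing values exceeds the threshold
    null_rates = df.isna().mean()
    report["high_null_columns"] = null_rates[null_rates > max_null_rate].to_dict()
    # Numeric outliers under a simple z-score rule
    outliers = {}
    for col in df.select_dtypes(include="number"):
        std = df[col].std()
        if pd.notna(std) and std > 0:
            z = (df[col] - df[col].mean()).abs() / std
            count = int((z > z_cutoff).sum())
            if count:
                outliers[col] = count
    report["outlier_counts"] = outliers
    # Exact duplicate records
    report["duplicate_rows"] = int(df.duplicated().sum())
    return report

# Hypothetical usage against a feed of scan metadata
# report = profile_and_flag(pd.read_csv("scan_metadata.csv"))
```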
Unsecured “protected” data
2. Data management
Examples
• Personally identifiable information (PII) stored without encryption in an analytics environment
Control examples
• Minimum data- and model-access requirements, including prevention of sensitive-data download
• PII masked before data can be used in models (see the sketch below)
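One way to implement the PII-masking control is to replace identifying fields with salted one-way hashes before data reaches the modeling environment. This is a minimal sketch under stated assumptions: the column list is hypothetical, and a real program would also cover tokenization, key management, and re-identification risk.

```python
import hashlib
import pandas as pd

# Hypothetical PII columns; a real deployment would drive this list from a data catalog
PII_COLUMNS = ["name", "email", "ssn"]

def mask_pii(df: pd.DataFrame, salt: str) -> pd.DataFrame:
    """Replace PII fields with salted one-way hashes: records stay joinable but unreadable."""
    masked = df.copy()
    for col in PII_COLUMNS:
        if col in masked.columns:
            masked[col] = masked[col].astype(str).map(
                lambda value: hashlib.sha256((salt + value).encode()).hexdigest()
            )
    return masked

# Usage: mask before data lands in the analytics environment
# model_ready = mask_pii(raw_df, salt=os.environ["PII_SALT"])  # requires `import os`
```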
Other regulatory noncompliance
2. Data management
Examples
• Race, gender, sexual orientation, or their proxies used to inform credit decisions, in violation of regulation
Control examples
• Clear roles and responsibilities for maintaining a view of regulations and their applicability to data management
Nonrepresentative data
3. Model development
Examples
• Recruiter model trained only on applications received from a single university
Control examples
• Guidelines for selecting training data sets, based on a view of desired model applicability
• Algorithm explainability testing, linking drivers to specific outcomes (see the sketch below)
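Explainability testing can take many forms; as one hedged illustration, the sketch below uses scikit-learn’s permutation importance to surface which variables actually drive a model’s outcomes, so reviewers can spot a dominant proxy (such as a single-university feature) that signals nonrepresentative training data. The data and feature names are synthetic placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a recruiting data set; feature names are assumptions
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["gpa", "years_experience", "university_id",
                 "test_score", "referrals", "interview_score"]
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt validation accuracy?
result = permutation_importance(model, X_valid, y_valid, n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1]):
    # A single dominant proxy feature (eg, university_id) is a red flag for the training data
    print(f"{name}: {imp:.3f}")
```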
Slow detection of/response to performance issues
5. Model use and decision making
Examples
• Continued use of fraud models that have lost significant discriminatory power as malicious actors evolve their strategies
Control examples
• Risk tiering of all models, incorporating a view of model materiality and the severity of a false positive
• Model-performance monitoring, with higher frequency for higher-risk models (see the sketch below)
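Tiered monitoring can be made concrete with a small amount of code. The sketch below is illustrative: the tier names, cadences, and AUC floors are assumptions, and a real program would persist results and route alerts rather than print them.

```python
from dataclasses import dataclass
from sklearn.metrics import roc_auc_score

@dataclass
class TierPolicy:
    check_every_days: int  # monitoring cadence for this risk tier
    min_auc: float         # performance floor before escalation

# Hypothetical policy: higher-risk models are checked more often, with tighter floors
POLICIES = {
    "high": TierPolicy(check_every_days=1, min_auc=0.75),
    "medium": TierPolicy(check_every_days=7, min_auc=0.70),
    "low": TierPolicy(check_every_days=30, min_auc=0.65),
}

def check_performance(tier: str, y_true, y_scores) -> bool:
    """Return True if the model still meets its tier's performance floor."""
    policy = POLICIES[tier]
    auc = roc_auc_score(y_true, y_scores)
    if auc < policy.min_auc:
        # In production this would page the model owner and trigger review or retraining
        print(f"ALERT: AUC {auc:.3f} below floor {policy.min_auc} for a {tier}-risk model")
        return False
    return True

# Usage: run on each model's cadence (policy.check_every_days) with fresh labeled outcomes
# check_performance("high", y_true=recent_labels, y_scores=recent_scores)
```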
Biased or discriminatory model outcomes
3. Model development
Examples
• Criminal-sentencing model systematically disfavors a minority group
Control examples
• Statistically significant input variables reviewed to validate usability (eg, to ensure no violations of fair-lending rules)
• Distribution of model results (eg, scored records) independently reviewed and validated to be free from bias (see the sketch below)
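The distribution review in the second control can be approximated in code. Below is a minimal sketch that compares mean scores and approval rates across groups; the column names and the four-fifths-style threshold are chosen only for illustration, and a real review would apply the fairness metrics required by the applicable regulation.

```python
import pandas as pd

def disparity_report(scored: pd.DataFrame, group_col: str, score_col: str,
                     approve_col: str, max_ratio_gap: float = 0.8) -> pd.DataFrame:
    """Compare score distributions and approval rates across groups."""
    stats = scored.groupby(group_col).agg(
        mean_score=(score_col, "mean"),
        approval_rate=(approve_col, "mean"),
        n=(score_col, "size"),
    )
    # Four-fifths-style screen: flag groups whose approval rate falls below
    # max_ratio_gap times the best-off group's rate
    best = stats["approval_rate"].max()
    stats["flagged"] = stats["approval_rate"] < max_ratio_gap * best
    return stats

# Hypothetical usage on independently sampled scored records
# print(disparity_report(df, group_col="group", score_col="score", approve_col="approved"))
```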
Model instability or performance degradation
3. Model development
Examples
• Chatbot that learns from interactions on social media makes increasingly offensive comments
Control examples
• Out-of-sample/out-of-time testing and back-testing to ensure usability for the intended use case
• Performance of all models assessed periodically for degradation or bias (see the sketch below)
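One common way to operationalize the periodic degradation check is the population stability index (PSI), which measures how far the live score distribution has drifted from the development sample. The sketch below is illustrative, and the 0.1/0.25 reading is a conventional rule of thumb rather than a regulatory standard.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between the development-time score distribution and live scores."""
    # Bin edges taken from the development (expected) distribution
    edges = np.unique(np.percentile(expected, np.linspace(0, 100, bins + 1)))
    # Clip both samples into the development range so histograms share the edges
    exp_counts = np.histogram(np.clip(expected, edges[0], edges[-1]), bins=edges)[0]
    act_counts = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0]
    exp_pct = np.clip(exp_counts / len(expected), 1e-6, None)  # floor avoids log(0)
    act_pct = np.clip(act_counts / len(actual), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Rule-of-thumb reading: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate or retrain
# psi = population_stability_index(dev_scores, live_scores)
```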
Implementation errors
4. Model implementation
Examples
• Incorrect measurement units used in a clinical-trial model, leading to an incorrect dosage of medicine being administered to subjects
Control examples
• Proof-of-concept testing and/or controlled pilots before the model goes into production
• User-acceptance testing, independently verified (see the sketch below)
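Implementation testing lends itself to automated checks. As an illustrative sketch (the dosing function and units are hypothetical), pre-production tests can pin down unit handling explicitly so a pounds-versus-kilograms mix-up fails loudly rather than silently changing a dose.

```python
KG_PER_LB = 0.45359237

def dose_mg(weight: float, unit: str, mg_per_kg: float) -> float:
    """Compute a dose in milligrams, converting the weight to kilograms explicitly."""
    conversions = {"kg": 1.0, "lb": KG_PER_LB}
    if unit not in conversions:
        raise ValueError(f"unknown weight unit: {unit!r}")
    return weight * conversions[unit] * mg_per_kg

# Pre-production acceptance tests (pytest style): unit handling is asserted, not assumed
def test_lb_and_kg_agree():
    assert abs(dose_mg(154.0, "lb", 2.0) - dose_mg(69.853, "kg", 2.0)) < 0.01

def test_unknown_unit_rejected():
    try:
        dose_mg(70.0, "stone", 2.0)
        assert False, "expected ValueError for unknown unit"
    except ValueError:
        pass
```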
Poor technology-environment design
4. Model implementation
Examples
• Autonomous vehicles relying on real-time data that ends up being unavailable due to connectivity issues
Control examples
• Detailed model testing and review, based on comprehensive guidelines and across a broad range of scenarios (see the sketch below)
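One scenario such reviews should exercise is loss of a live feed. The sketch below shows one possible shape of a degraded mode that serves the last known-good value instead of blocking; the class, freshness budget, and labels are invented for illustration, not a prescribed design.

```python
import time

STALE_AFTER_S = 2.0  # assumed freshness budget for the live feed

class FeedWithFallback:
    """Serve live data when fresh; otherwise fall back to the last known-good value."""

    def __init__(self):
        self._cache = None
        self._cache_time = 0.0

    def update(self, value) -> None:
        self._cache, self._cache_time = value, time.monotonic()

    def read(self):
        age = time.monotonic() - self._cache_time
        if self._cache is not None and age <= STALE_AFTER_S:
            return self._cache, "live"
        # Degraded mode: design reviews should test this path explicitly
        return self._cache, "stale"
```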
Insufficient training and skills
4. Model implementation
5. Model use and decision making
Examples
• Physicians following the recommendation provided by an AI-driven diagnostic tool without questioning it
Control examples
• User training, covering how the AI model works in the overall system, how to use the insights it generates, and how and when to override its outputs, systematically developed, delivered, and documented
Technology-environment malfunction
5. Model use and decision making
Examples
• Data-center outage preventing new data from flowing into a live, next-product-to-buy model, leading to degraded or erroneous recommendations
Control examples
• End-to-end infrastructure and application resiliency controls, as well as detailed business-continuity planning and disaster recovery
Cybersecurity threats
5. Model use and decision making
Examples
• Sensitive customer personal and financial data stolen by an external actor
Control examples
• Real-time monitoring of the environment and maintenance of capabilities to respond rapidly to potential threats or vulnerabilities
• Minimum data- and model-access requirements for employees, contractors, and third parties (see the sketch below)
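The least-privilege control can be made concrete in code. This is a minimal sketch of a deny-by-default, role-based gate on data and model operations; the roles and permissions are invented for illustration.

```python
# Hypothetical role-to-permission map implementing least-privilege access
PERMISSIONS = {
    "data_scientist": {"read_masked_data", "train_model"},
    "ml_engineer": {"read_masked_data", "deploy_model"},
    "contractor": {"read_masked_data"},
}

def authorize(role: str, action: str) -> None:
    """Deny by default: raise unless the role explicitly grants the action."""
    if action not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not {action!r}")

# Usage: every data or model operation passes through the gate (and is logged)
# authorize("contractor", "deploy_model")  # raises PermissionError
```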
Failure at the human–machine interface
5. Model use and decision making
Examples
• A loan officer failing to override a wrong (for example, a noncompliant) credit decision, or overriding a judicious one out of bias or lack of knowledge
Control examples
• Systematic tracking, reporting, and root-cause analysis of errors, near misses, and overrides, with implications drawn for both models and user training