Responsible Systems
Existing rules apply to AI/GenAI, “automated systems”, “predictive analytics”, and other “innovative new technologies”

Current and evolving regulations
Amid heightened public policy and legislative interest in AI and ML, regulators will apply existing regulations to AI/GenAI, “automated systems,” and “innovative new technologies” (e.g., software; models; predictive analytics; and algorithmic processes, such as AI, ML, NLP, and LLMs) across the full lifecycle of design, development, deployment, and continuous monitoring. Regulators across financial services (banking, capital markets, insurance) will focus on whether the technologies (algorithms, tools, and products) can be trusted, work as claimed, and do so without causing harm (financial or otherwise) to users.
Firms should anticipate regulatory attention in 2024 to focus on:
- Risk Management: Regulators will examine risk management and governance of the design, development, deployment, and ongoing monitoring of “automated systems” from the perspective of:
- Safety and effectiveness (e.g., protections against unintended or inappropriate access or use).
- Anti-bias and anti-discrimination (i.e., protections against discriminatory outcomes, and ongoing testing for them).
- Data governance and data privacy.
- Transparency (e.g., what/how information is used; potential impacts to businesses/consumers).
- Accountability and oversight.
- Fairness/Consumer Protection: Following the direction of the Administration, regulators will take a “whole of government” approach to supervision and enforcement of fairness and consumer protections in applications of “automated systems” under existing (and evolving) laws and regulations.
- Key areas of concern include data and data sets, model opacity and access, and design and use—which drive the decision-making enabled by the automated system or related tools. Regardless of the technology used, supervisors will evaluate:
- Fairness (e.g., UDAAP/UDAP, fair lending, “fair and balanced” marketing, conflicts of interest).
- Consumer and other legal protections (e.g., civil rights, nondiscrimination, equal employment opportunity).
- Purpose Limitation and Data Minimization: With the proliferation of consumer data collection alongside the increasing application of automated systems, regulatory scrutiny and enforcement will focus on:
- Limitations around collection, access, use, retention, and disposal of consumer data for specific and/or explicit purposes, subject to permission, consent, opt in/out, and authorization, as appropriate (e.g., only what is needed).
- Limitations on data retention (e.g., only for the stated purpose).
- Safeguards on access and use.
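As an illustration of the ongoing anti-bias testing regulators will expect, a monitoring program might periodically compare outcome rates across applicant groups. The sketch below is a minimal, hypothetical example: the group labels, decision data, and tolerance threshold are all assumptions for illustration, not regulatory standards.

```python
# Illustrative sketch only: a simple disparity check of the kind an
# ongoing anti-bias testing program might automate. Group labels,
# decision data, and the threshold below are hypothetical assumptions.

def approval_rate(decisions):
    """Share of favorable (True) outcomes in a list of decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in approval rates across groups, plus the rates."""
    rates = {g: approval_rate(d) for g, d in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions per applicant group (True = approved).
outcomes = {
    "group_a": [True, True, False, True, True, False, True, True],
    "group_b": [True, False, False, True, False, False, True, False],
}

THRESHOLD = 0.20  # assumed internal tolerance, not a regulatory number
gap, rates = demographic_parity_gap(outcomes)
print(f"approval rates: {rates}")
print(f"parity gap: {gap:.2f} -> {'REVIEW' if gap > THRESHOLD else 'OK'}")
```

Runs of a check like this would typically be scheduled alongside production decisioning, with breaches routed into the firm's existing issue-management process.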
Regulatory complexity
Expect regulatory approaches and areas of supervisory focus on automated systems and new technologies to evolve, and possibly diverge, across state, federal, and global jurisdictions, increasing the complexity of compliance. Similarly, regulatory expectations in other evolving areas that touch on system inputs and outputs, or customer impacts (e.g., fairness, privacy, security), may overlap with expectations for automated systems and new technologies, heightening scrutiny and adding complexity.
- Regulatory Divergence: Diverging regulatory approaches or areas of supervisory focus on automated systems would greatly expand the complexity of both risks and compliance and necessitate reassessment of current and target state compliance functions and approaches to compliance risk assessments. Impact assessments, jurisdictional risks, regulatory awareness, and timing would also need to be considered.
- Jurisdictional Challenge: Legal challenges to regulatory approaches, such as application of existing consumer protection regulations to examine firms’ systems, technologies, data, and algorithms, could similarly present added complexity if uncertainties around regulatory jurisdiction and authorities persist.
Spanning risks
The benefits and risks of automated systems will touch areas across firms, including aspects of operations, products and services, and customer interactions. Areas to watch for upcoming regulatory developments include:
- “Trustworthiness”: Focus on the trustworthiness of automated systems and new technologies, particularly around safety, efficacy, fairness, privacy, “explainability,” and accountability. This will necessitate a holistic reassessment of the purpose and application of automated systems throughout the firm, including data collection, inputs and outputs, use, privacy, and security.
- Model Risk Management: Expectation that the design, development, deployment, and monitoring of automated systems will be incorporated into firms’ MRM frameworks (and that legacy risk frameworks will be adapted, as appropriate), including:
- Areas such as approved use, ongoing monitoring, and risk ratings.
- Protocols for modeling usage that align to the MRM standards.
- Monitoring of legislative/regulatory actions necessitating potential changes to practices or business models.
- Misuse or Inaccuracy: Risks associated with misuse or inaccuracy will be assessed through the AI risk management framework; regulators will look for:
- Robust development, implementation, and use (e.g., clear statement of purpose, sound design, theory, or logic).
- Effective validation conducted independently of design and development.
- Sound governance, policies, and controls (e.g., access controls, training).
What to Watch
Key regulatory actions to watch concerning the design, development, deployment, and ongoing monitoring of responsible applications of automated systems and innovative new technologies will include:
- Executive Order on Safe, Secure, and Trustworthy AI: White House Executive Order calling for AI risk management actions/standards for privacy, security, consumer/investor protections, and innovation.
- Enforcement/Supervision of “Automated Systems”: Interagency Statement (CFPB, DOJ, EEOC, and FTC) on supervision and enforcement (under existing authorities, e.g., civil rights, nondiscrimination, fair competition, consumer protection) of automated systems and new technologies.
- Automated Valuation Models: Interagency (FRB, OCC, FDIC, CFPB, NCUA, and FHFA) proposal on quality control standards for automated valuation models (AVMs), including compliance with applicable nondiscrimination laws.
- “Covered Technologies” and Conflicts of Interest: SEC proposal “to eliminate conflicts of interest associated with interactions with investors [e.g., correspondence, online, advertising] through the use of technologies [e.g., predictive analytics, AI, ML] that optimize for, predict, guide, forecast, or direct, investment-related behaviors or outcomes.”
- Ensuring Trust in AI: Commerce Department request for comment on AI accountability measures and policies, with a focus on how to provide “reliable evidence to external stakeholders—that is, to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy.”
Call to Action…
- Establish and maintain a governance framework: Implement and maintain a governance framework that guides the design, use, and deployment of automated systems, ensuring adherence to ethical standards, regulatory requirements, and best practices.
- Conduct predeployment testing and ongoing monitoring: Perform thorough predeployment testing, risk identification, and mitigation for automated systems to ensure their safety and effectiveness. Run new systems in parallel with existing processes and demonstrate uplift from a regulatory perspective (e.g., a decrease in false positives) before full deployment. Stay up to date on regulatory developments; implement continuous monitoring and evaluation practices to identify potential issues, biases, and undesirable outcomes in a system’s performance; and adjust accordingly.
- Promote transparency and accountability: Foster a culture of transparency and accountability within the organization, clearly communicating the goals, functionality, and potential impacts of automated systems to both internal and external stakeholders.
- Implement effective MRM: Adopt a robust MRM framework to ensure models are reliable, accurate, and unbiased. Conduct regular validation, testing, and monitoring of the models, and timely address any identified issues to minimize adverse impact on investors and comply with regulations. Provide transparency regarding model performance and risk exposure to the board and management.
- Provide human alternatives and remediation: Offer human alternatives and fallback options for customers who wish to opt out from using automated systems, where appropriate. Establish mechanisms for customers to report errors, contest unfavorable decisions, and request remediation, demonstrating the organization’s commitment to fairness and responsible use of technology.
- Understand system strategy and roadmap: Align the organization’s vision, strategy, and operating model for system solutions with its broader goals. Assess board-level oversight and maintain an inventory of the system landscape within the organization. Monitor third-party risks associated with data protection, storage, and access to confidential data, and evaluate acquired software tools to maintain data security and privacy.
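The parallel-run uplift check described in the testing recommendation above can be sketched in code. This is an illustrative example, not a compliance tool: the alert data, ground-truth labels, and the choice of false-positive rate as the uplift metric are hypothetical assumptions.

```python
# Illustrative sketch only: comparing a candidate automated system
# against the incumbent process during a parallel run, using the
# decrease in false positives as one demonstrable-uplift metric.
# All alert and ground-truth data below are hypothetical.

def false_positive_rate(flags, truths):
    """Share of genuinely benign cases (truth False) that were flagged."""
    flagged_on_negatives = [f for f, t in zip(flags, truths) if not t]
    return sum(flagged_on_negatives) / len(flagged_on_negatives)

# Ground truth: whether each case was genuinely problematic.
truth     = [False, False, True, False, False, True, False, False]
# Alerts raised by each system on the same cases during the parallel run.
legacy    = [True,  False, True, True,  False, True, True,  False]
candidate = [False, False, True, True,  False, True, False, False]

fpr_old = false_positive_rate(legacy, truth)
fpr_new = false_positive_rate(candidate, truth)
print(f"legacy FPR: {fpr_old:.2f}, candidate FPR: {fpr_new:.2f}")
print("uplift demonstrated" if fpr_new < fpr_old else "no uplift yet")
```

In practice a firm would also check that the candidate does not miss true positives the legacy process caught, and would document the comparison as evidence for supervisors.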