AI in Finance Under the Microscope: Key Takeaways from the SEC’s March 2025 Roundtable
On March 27, 2025, the U.S. Securities and Exchange Commission (SEC) hosted a public roundtable in Washington, D.C., to examine the growing role of artificial intelligence (AI) in financial markets. The event brought together industry leaders, regulators, technologists, and academics to explore the risks, opportunities, and evolving regulatory landscape surrounding AI.

As the financial industry moves rapidly to integrate AI into everything from trading strategies to compliance monitoring, SEC-regulated entities must begin preparing now for a future in which AI governance, risk management, and disclosure obligations are increasingly under regulatory scrutiny.

Here’s what you need to know:

Event Highlights: A Shift Toward Action

Acting Chairman Mark Uyeda opened the roundtable by emphasizing the SEC’s interest in ensuring that AI innovations are deployed in a way that supports investor protection and market integrity. He called for a technology-neutral but vigilant approach to AI oversight.

Commissioners Hester Peirce and Caroline Crenshaw stressed the need for transparency and explainability, particularly when AI is involved in decision-making that impacts investors. Across four substantive panels, participants addressed key themes including fraud prevention, cybersecurity, governance, and the future trajectory of AI in the financial ecosystem.

Key Takeaways for SEC-Regulated Entities

  1. AI Use Must Align with Fiduciary Duties and Investor Protection

AI tools cannot absolve firms of their foundational obligations. Whether used in portfolio management, client servicing, or internal decision-making, firms must ensure that AI applications serve the best interests of clients and do not lead to outcomes that could harm or mislead investors.

Firms should periodically review how AI models are applied to investment decisions, marketing, and disclosures—and ensure outputs are consistent with fiduciary standards and anti-fraud provisions under the Advisers Act and federal securities laws.

  2. Build Robust AI Governance Frameworks

Roundtable participants repeatedly stressed that AI models must be explainable, auditable, and subject to meaningful oversight. The “black box” nature of many AI systems presents real challenges in supervision, particularly if firms rely on third-party vendors.

Firms should implement:

  • Clear model governance policies,
  • Internal documentation standards, and
  • Escalation procedures for unexpected or anomalous AI outputs.

Senior management should understand the risks associated with AI tools and ensure that appropriate risk owners are accountable.

  3. Cybersecurity and Fraud Risks Are Evolving

While AI offers new ways to detect fraud and mitigate cyber threats, it also introduces novel vulnerabilities. AI models can be manipulated through adversarial inputs or “data poisoning,” potentially leading to serious compliance failures.

The SEC encouraged firms to incorporate AI-specific cybersecurity testing and monitoring, especially for models that touch sensitive data or impact customer transactions. Firms should consider adding AI-specific controls to their cybersecurity incident response plans.

  4. Transparency and Accuracy in AI Disclosures Matter

The SEC has signaled growing concern about “AI washing”—the overstatement or mischaracterization of AI capabilities in marketing or investor communications. This concern is backed by recent enforcement actions involving firms that misled clients about the role AI played in their advisory services.

To avoid potential liability, firms should:

  • Accurately describe AI tools and their limitations,
  • Avoid vague or exaggerated claims in public statements, and
  • Ensure consistency across marketing, client disclosures, and regulatory filings.

  5. Get Ready for Future Rulemaking

The SEC’s July 2023 proposed rule on predictive data analytics was a clear indicator that AI-specific regulations are on the horizon. The March 2025 roundtable reinforced this direction, with multiple speakers noting the need for regulatory clarity and potentially formal rules on AI governance and disclosures.

Firms should stay engaged in the regulatory process, consider submitting comments on proposed rules, and begin conducting AI risk assessments that evaluate potential conflicts of interest, data dependencies, and unintended consequences.

Final Thoughts

The SEC’s roundtable underscores a growing consensus: AI is no longer a peripheral issue—it is a core governance and compliance concern. Regulated firms cannot afford to treat AI as an experimental add-on. It must be integrated into existing risk, compliance, and supervisory frameworks with the same rigor applied to other critical systems.

By taking proactive steps now—enhancing AI governance, ensuring transparency, and preparing for evolving expectations—firms can align innovation with regulatory compliance and long-term client trust.

About Kennyhertz Perry, LLC

Kennyhertz Perry, LLC is a business and litigation law firm representing clients in highly regulated industries. Our dedicated Artificial Intelligence practice group helps clients navigate the legal, regulatory, and ethical complexities of deploying artificial intelligence in sectors such as finance, banking, and real estate. To learn more about the firm, visit kennyhertzperry.com.

*The choice of a lawyer is an important decision and should not be based solely upon advertisements.