
On December 5, 2025, the North American Securities Administrators Association (NASAA) sent a clear message to Congress: do not strip states of their authority to regulate artificial intelligence.
The letter opposed proposals that would broadly preempt state AI laws in favor of a single federal framework. NASAA argued that state regulators play a critical role in protecting investors from emerging AI-enabled fraud while allowing responsible innovation to evolve in real-world conditions.
For RIAs, and especially state-registered advisers, this position materially changes how AI compliance must be approached in 2026.
It reinforces a reality many firms underestimate: AI risk will be examined through both federal priorities and state-specific enforcement lenses, often simultaneously.
As AI-powered scams accelerate and “AI washing” becomes an enforcement target, RIAs can no longer treat AI as a purely technological issue. It is now a core compliance, supervision, and disclosure challenge.
NASAA’s Case Against Federal Preemption
NASAA’s letter rests on three arguments that directly affect advisory firms.
1. AI Scam Protection Requires State-Level Enforcement
NASAA cited FBI data showing $50.5 billion in losses tied to AI-facilitated fraud, including voice cloning, impersonation, and highly personalized scams.
States argue they are uniquely positioned to:
- Issue rapid investor alerts
- Respond to regional fraud patterns
- Enforce advertising and disclosure standards more quickly than federal bodies
For RIAs, this means AI-related misconduct is more likely to surface first in state exams, not in federal rulemaking.
2. States as “Laboratories” for AI Regulation
NASAA framed states as practical testing grounds for regulating AI in financial services.
Rather than a one-size-fits-all federal model, states can:
- Adapt to emerging AI tools used by advisers
- Respond to misuse faster
- Set expectations around disclosures, supervision, and fiduciary alignment
This matters most for state-registered advisers, who may adopt AI for:
- Portfolio analytics
- Client communications
- Marketing and lead generation
- Vendor-provided decision tools
Innovation without documentation and supervision will not be tolerated.
3. Alignment With Recent Regulatory Signals
NASAA’s position builds on prior regulatory actions, including:
- The 2024 SEC/FINRA joint investor alert on AI-related fraud
- NASAA’s 2025 “Top Investor Threats” report
- NASAA’s 2025 AI compliance guide for advisers
Taken together, the message is consistent: AI use is permissible, but only when it is explainable, supervised, and documented.
Why This Matters for RIAs in 2026
For RIAs, NASAA’s stance signals three practical realities for the coming exam cycle.
Heightened State Examination Focus
Expect more state examiners to ask:
- Where AI is used in the firm
- Who approved its use
- How risks are monitored
- What disclosures clients receive
“AI Washing” Is an Enforcement Risk
Claims about AI-driven performance, personalization, or risk management can trigger advertising violations if they are undocumented or exaggerated.
Undisclosed or poorly governed AI use invites penalties.
Dual Jurisdiction Is the New Normal
SEC priorities will still apply, but states retain flexibility to:
- Interpret risk differently
- Focus on investor harm
- Move faster on enforcement
Firms must be prepared for layered compliance scrutiny, not just federal alignment.
AI Compliance Risks Highlighted by NASAA
NASAA has consistently pointed to several AI-related risk vectors that directly affect advisory firms.
Fraud Vectors
- Voice cloning targeting clients
- Deepfake communications impersonating advisers
- Hyper-personalized phishing campaigns
Adviser-Specific Risks
- Misrepresenting AI capabilities in marketing
- Using AI-driven tools without understanding limitations
- Failing to disclose model assumptions or risks
Enforcement Trends
NASAA’s 2025 reporting shows steady enforcement activity tied to misleading claims, supervision failures, and poor documentation, not just outright fraud.
The emphasis is on responsible use, not AI prohibition.
Five Steps RIAs Should Take to Prepare for AI Scrutiny
AI compliance is operational, not theoretical.
RIAs should focus on execution.
1. Map All AI Use Cases
Document:
- Internal AI tools
- Vendor-provided AI functionality
- AI claims in marketing or client communications
Each use case should include a risk assessment.
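One way to keep this inventory exam-ready is to record each use case as a structured entry with its approver, risk rating, and disclosure status, then flag the gaps an examiner would ask about first. The sketch below is illustrative only; the field names and example tools are assumptions, not a NASAA-prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative schema only -- field names and tool names are assumptions,
# not a regulatory format. Adapt to your firm's compliance manual.
@dataclass
class AIUseCase:
    tool: str               # internal tool or vendor product
    function: str           # e.g. "portfolio analytics", "marketing"
    owner: str              # who approved and supervises its use
    risk_rating: str        # "low" / "medium" / "high"
    client_disclosed: bool  # is this use disclosed to clients?
    last_reviewed: date

inventory = [
    AIUseCase("VendorChat", "client communications", "CCO",
              "medium", True, date(2026, 1, 15)),
    AIUseCase("LeadGenAI", "marketing and lead generation", "CCO",
              "high", False, date(2025, 11, 3)),
]

# Flag entries an examiner would probe: high-risk or undisclosed use.
flagged = [u.tool for u in inventory
           if u.risk_rating == "high" or not u.client_disclosed]
print(flagged)  # -> ['LeadGenAI']
```

Even a simple structure like this gives each use case an owner and a review date, which is exactly what state examiners ask for first.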
2. Update Policies Using NASAA Guidance
Policies should address:
- AI disclosures
- Model testing and validation
- Supervisory oversight
Generic language will not satisfy state examiners.
3. Train Staff on AI Fraud and Ethics
Training should include:
- AI-enabled scam recognition
- Responsible AI use
- Documentation expectations
Training completion and attestations must be logged for exams.
4. Strengthen Vendor Oversight
Vendor reviews should include:
- AI ethics clauses
- Breach notification requirements
- Transparency around AI decision-making
5. Document Everything
Maintain:
- AI usage logs
- Client disclosures
- Supervisory reviews
- Incident response testing
If it isn’t documented, it didn’t happen.
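An append-only, timestamped log is one simple way to make that documentation defensible. The snippet below is a minimal sketch under stated assumptions: the file name, event types, and record fields are hypothetical, chosen only to show the pattern of one immutable record per AI-related event.

```python
import json
from datetime import datetime, timezone

# Hypothetical file name and record fields -- illustration only.
LOG_PATH = "ai_audit_log.jsonl"

def log_event(event_type: str, detail: str, reviewer: str) -> dict:
    """Append one timestamped record per AI-related event.

    event_type might be "usage", "disclosure", "review", or
    "incident_test", mirroring the categories listed above.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        "detail": detail,
        "reviewer": reviewer,
    }
    # Append-only: existing records are never rewritten.
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_event("review", "Quarterly validation of portfolio model", "CCO")
```

Because records are only ever appended, the file itself becomes the audit trail examiners can follow.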
Common AI Compliance Pitfalls
Many firms fall into predictable traps:
- Treating AI as “set it and forget it”
- Relying on generic disclosures
- Failing to link AI decisions back to fiduciary duty
- Lacking an audit trail that examiners can follow
These gaps tend to surface quickly under state-level scrutiny.
How Smartria Bridges Federal and State AI Compliance
Smartria is designed for firms navigating dual-jurisdiction AI oversight.
It supports RIAs with:
- AI policy templates aligned to NASAA and SEC guidance
- Automated training and attestation tracking
- Vendor oversight dashboards with AI risk scoring
- Centralized logs for state and federal exams
- Marketing review workflows to prevent “AI washing”
The result is operational clarity, not just policy alignment.
Conclusion
NASAA’s opposition to federal preemption of state AI laws ensures that states will remain agile, active enforcers in the AI compliance landscape.
RIAs must match that agility with:
- Clear documentation
- Active supervision
- Transparent disclosures
- Defensible workflows
Smartria helps firms operationalize AI compliance across federal and state expectations, before examiners ask the questions.




