The first time the SEC examined Concreit, I didn’t sleep the night before.
Not because we’d done anything wrong. Because we’d done everything right — and I still wasn’t sure it was enough. That’s the thing about operating under real regulatory scrutiny: the anxiety isn’t about getting caught. It’s about the gap between what you believe your systems do and what you can prove they do when someone with subpoena power asks.
Concreit Fund Management LLC was a nationwide SEC-registered investment adviser under the Investment Advisers Act of 1940. At the time of our first examination, we were in the process of registering as a transfer agent under Section 17A of the Securities Exchange Act of 1934 — a registration we’ve since completed. Two distinct federal regulatory frameworks, simultaneously. The platform let anyone invest in fractional real estate starting from $1 — real securities, qualified under Regulation A+ Tier 2, with a FINRA-member broker-dealer handling distribution.
I’m not telling you this to impress you. I’m telling you because the muscle memory I built navigating those frameworks is the single most important thing I brought to Runline — and it’s the thing most AI vendors selling to credit unions have never developed.
What an SEC Examination Actually Feels Like
Here’s what they don’t tell you in compliance training: an SEC examination is not a checklist. It’s an interrogation of your entire operating reality.
It starts weeks before anyone shows up. The pre-examination document request arrives — a detailed list of every record, policy, procedure, communication, and system artifact the examiners want to review. Books and records. Client communications. Marketing materials. Every representation you’ve made about how your platform works.
Then the examiners arrive, and they do something that changes how you think about technology forever: they test your representations against reality.
You said your algorithm does X? Show us. Walk us through the code path. Show us the audit trail. Show us what happens when X fails. Show us the edge case you didn’t think about.
You said client data is handled according to policy Y? Show us the access logs. Who touched this data? When? Why? Can you prove that the access was authorized?
You said your risk management process follows framework Z? Show us a specific instance. Not the policy document — a specific case where the process was triggered. What happened? Who decided? Where’s the documentation?
This is the crucible that shaped how I build technology. You don’t get to say “our AI does X.” You have to prove it does X, show how it does X, and demonstrate what happens when X fails — all with documentation that an examiner can independently verify.
Most technology companies never experience this. Most AI vendors selling to credit unions have never sat across from a regulator who could shut down their business based on what they find in the next three hours. And it shows — in their architectures, in their audit trails, in the gap between their marketing claims and their operational reality.
The Translation: SEC Muscle Meets Credit Union AI
Credit unions are about to enter their own version of this crucible.
In September 2025, the NCUA published its Artificial Intelligence Compliance Plan. It establishes five requirement categories that every credit union deploying AI must address:
1. Risk management practices. Assess and document AI risks before deployment. Implement controls to prevent non-compliant high-impact AI.
2. Monitoring and control capabilities. Real-time visibility into what your AI systems are doing. The ability to control access and permissions for every AI system.
3. Termination process. When an AI system doesn’t meet requirements, you must be able to restrict access immediately, isolate or shut down the system completely, archive associated data, draft detailed documentation including termination rationale, and notify all relevant stakeholders.
4. Governance requirements. Maintain an AI Use Case Inventory. Conduct security, privacy, and technical reviews by senior officers. Keep comprehensive, up-to-date documentation.
5. Vendor transparency. Understand what your AI vendors are actually doing with your data and inside your systems.
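To make the termination and inventory requirements concrete, here is a minimal sketch of what they could look like as code. Every class, field, and function name here is illustrative — none of it comes from the NCUA plan itself — but the five termination steps map one-to-one onto requirement 3 above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class SystemStatus(Enum):
    ACTIVE = "active"
    RESTRICTED = "restricted"   # access cut off, system still running
    TERMINATED = "terminated"   # fully shut down, data archived


@dataclass
class AISystemRecord:
    """One entry in the AI Use Case Inventory (requirement 4)."""
    name: str
    vendor: str
    use_case: str
    risk_assessment: str            # documented before deployment (requirement 1)
    status: SystemStatus = SystemStatus.ACTIVE
    events: list = field(default_factory=list)


def terminate(system: AISystemRecord, rationale: str, stakeholders: list) -> dict:
    """Walk the five termination steps from requirement 3, in order."""
    now = datetime.now(timezone.utc).isoformat()
    system.status = SystemStatus.RESTRICTED     # 1. restrict access immediately
    system.status = SystemStatus.TERMINATED     # 2. isolate / shut down completely
    archive = {"system": system.name,           # 3. archive associated data
               "events": list(system.events)}
    record = {                                  # 4. detailed documentation + rationale
        "system": system.name,
        "terminated_at": now,
        "rationale": rationale,
        "archive": archive,
        "notified": stakeholders,               # 5. notify relevant stakeholders
    }
    system.events.append(record)
    return record
```

The point of the sketch isn't the code — it's that each requirement becomes an object or a step you can point an examiner at, rather than a paragraph in a policy binder.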
Credit unions have approximately 12-18 months to demonstrate compliance. The clock is ticking.
When I read these five requirements, I didn’t see a regulatory burden. I saw an SEC examination translated into credit union language. Every single requirement maps to something I’d already built at Concreit — not because I was prescient, but because operating under federal regulatory scrutiny forces you to build this way. You can’t survive an examination without monitoring, without audit trails, without the ability to explain and shut down every system you operate.
The SEC taught me that compliance isn’t a layer you add on top of a product. It’s the foundation you build the product on. The NCUA is teaching credit unions the same lesson — they just haven’t realized it yet.
The Standards Are Forming, But Nobody’s Landed Yet
Here’s the uncomfortable reality for credit union leaders: there is no “SOC 2 for AI” stamp you can buy today. The governance landscape is a patchwork of converging frameworks, and navigating it requires the kind of regulatory fluency that most institutions haven’t built yet.
Let me walk you through what exists right now, because understanding this landscape is the first step to operating within it:
NIST AI Risk Management Framework (AI RMF 1.0) — The NCUA itself recommended this as the governance baseline for credit unions. It’s voluntary, but it’s becoming the de facto standard. Organized around four functions: Govern, Map, Measure, Manage. If you align to one framework, start here.
ISO/IEC 42001 — The first international AI management system standard. It’s certifiable — meaning you can get audited against it and receive a certificate. Early adopters in financial services are getting certified now. Think of it as ISO 27001 (information security) extended to AI systems.
COSO GenAI Risk and Control Considerations — Published February 2026. An internal controls framework specifically for generative AI, extending the same COSO framework that underpins SOX compliance. If your board understands SOX, they’ll understand this.
HITRUST AI Security Assessment — 44 specific controls for AI security. Certifiable today. This is the closest thing to “SOC 2 for AI” that actually exists right now, and financial institutions should be paying attention to it.
Colorado AI Act — Takes effect June 2026. The first U.S. state law requiring impact assessments for “high-risk” AI used in consequential decisions like lending and insurance. It’s a template for what other states will follow. If your credit union makes lending decisions and operates in or serves members in Colorado, this applies to you.
AICPA — Currently developing AI assurance engagement guidance that extends SOC 2 Trust Services Criteria to AI systems. Estimated 12-18 months from finalization. When this lands, it will likely become the industry standard — but it doesn’t exist yet.
GAO Report (GAO-25-107197, May 2025) — This one should concern every credit union CEO. The GAO explicitly called out that the NCUA has limited model risk management guidance and — critically — no vendor examination authority. Translation: your regulator cannot examine your AI vendor directly. Your vendor’s AI compliance is your responsibility.
The pieces are converging. NIST provides the risk framework. ISO provides the management system. COSO provides the internal controls. HITRUST provides the security assessment. The AICPA will eventually provide the audit standard. But right now, it’s a jigsaw puzzle where none of the pieces have the same edge shape.
Here’s why this matters for credit unions: the institutions that wait for a single unified standard to emerge will be two years behind the institutions that start building governance infrastructure now. When Concreit launched, there was no “robo-advisor compliance playbook.” We had to compose our own governance approach from existing SEC frameworks — Investment Advisers Act, Securities Exchange Act, Regulation A+ requirements, FINRA rules. Credit unions deploying AI are in that exact same position today. The playbook doesn’t exist yet. You have to write it. And the ones who write it first will be the ones examiners hold up as models.
The CTO Gap
Here’s the uncomfortable truth that nobody in the credit union ecosystem wants to say out loud: most credit unions don’t have a CTO.
Not a VP of IT who manages vendor relationships and keeps the core processor running. A CTO — someone who thinks in systems architecture, evaluates technical strategy, and can assess whether an AI vendor’s infrastructure is examiner-defensible.
The credit union leadership pipeline was built for a different era. The average CU CEO tenure is 15+ years. The C-suite is populated with COOs, Chief Lending Officers, VPs of Member Services, and Compliance Officers — all critical roles for running a relationship-driven financial institution. But the person who can look at an AI vendor’s architecture diagram and ask “where’s the audit trail for this decision path?” or “what happens to data isolation when you run multi-tenant inference?” — that person barely exists in the CU ecosystem.
This isn’t a criticism. It’s a structural reality of institutions that were built on relationship banking, not software engineering. Credit unions have been phenomenally successful at their core mission — serving members — without needing deep technical architecture expertise. The core processor handled the technology. The vendor managed the infrastructure. The CU focused on people.
But AI changes the calculus. Unlike a core processor that runs predetermined code paths, AI systems make probabilistic decisions. They can behave differently on Tuesday than they did on Monday — same input, different output — because the model updated, the context shifted, or the prompt was interpreted differently. Governing this requires a different kind of technical fluency than managing a Symitar installation.
This creates a gap — and the gap is the single biggest risk in credit union AI adoption. Bigger than the technology itself. Bigger than the cost. Because if you can’t evaluate whether your AI infrastructure is sound, you can’t defend it to an examiner. And if you can’t defend it to an examiner, you shouldn’t be deploying it.
The good news: you don’t have to become a technology company. You don’t need to hire a $300K CTO who speaks in transformer architectures and attention mechanisms. What you need is a technology partner who has the regulatory DNA to build examiner-defensible infrastructure — and the ability to translate it for your board, your compliance officer, and your examiner in plain language.
That’s a very specific combination: deep technical capability plus deep regulatory fluency plus the ability to communicate without jargon. It barely exists in the market. Which is exactly why it matters.
The Questions Your AI Vendor Can’t Answer
Most AI vendors selling to credit unions have never operated under financial regulation themselves. They’ve built technology for technology’s sake — optimizing for features, speed, and demo impressions — without the constraint of having to defend every architectural decision to a federal examiner.
You can identify this gap in about five minutes. Ask your AI vendor these questions:
“Which AI governance framework do you align to — NIST AI RMF, ISO 42001, HITRUST AI, or something else?” If they can’t name one, they’re building for demos, not examinations.
“Can you show me the audit trail for a specific decision your AI made last Tuesday at 2:47 PM?” Not aggregate metrics. Not a dashboard summary. A specific decision, with the data inputs, the reasoning path, and the output — timestamped and retrievable.
“If my examiner asked you to shut down your AI system right now, how fast could you do it? And who controls the switch — you or me?” If the answer is “submit a support ticket,” that’s not a kill switch. That’s a request form.
“What happens to my member data inside your system? Where does it go? Who can access it? Can you prove it?” Not what your privacy policy says. What actually happens at the infrastructure level.
“Have you ever been examined by a financial regulator?” Not audited by an accounting firm. Examined — the way the SEC examines registered investment advisers, or the way the NCUA examines credit unions. Where someone with enforcement authority tested your representations against your operational reality.
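The audit-trail question above is answerable only if the vendor logs each individual decision with its inputs, reasoning, and output. Here is a minimal sketch of that kind of decision log — in-memory for illustration, with hypothetical names throughout; a production system would need durable, tamper-evident storage (write-once media, hash chaining, or similar).

```python
from datetime import datetime, timezone


class DecisionLog:
    """Append-only log of individual AI decisions.

    In-memory for illustration only. The shape of each entry is what
    matters: inputs, reasoning path, output, and a timestamp, so a
    single decision from "last Tuesday at 2:47 PM" is retrievable.
    """

    def __init__(self):
        self._entries = []

    def record(self, inputs: dict, reasoning: str, output: str) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": inputs,
            "reasoning": reasoning,
            "output": output,
        }
        self._entries.append(entry)
        return entry

    def find(self, since: str, until: str) -> list:
        """Retrieve every decision in a time window -- the examiner's ask."""
        return [e for e in self._entries
                if since <= e["timestamp"] <= until]
```

A vendor who can't produce something shaped like this — a specific entry, on demand, for an arbitrary point in time — is offering you dashboards, not an audit trail.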
The vendor who stumbles on these questions isn’t necessarily a bad company. They might have excellent technology. But they haven’t been through the crucible that teaches you to build for auditability, defensibility, and examiner-readiness from day one. And in a regulatory environment where the GAO has explicitly stated that the NCUA lacks vendor examination authority — meaning your vendor’s AI compliance is your problem — that gap in your vendor’s experience becomes a gap in your compliance posture.
The Regulatory Advantage
I want to close with a pattern I’ve seen play out three times in my career, because I think credit unions are about to experience it for a fourth time.
SOX (2002). When Sarbanes-Oxley passed after the Enron collapse, every public company saw it as a burden — expensive audits, internal controls, CEO certifications. The companies that treated SOX as a chance to genuinely professionalize their financial reporting built investor confidence, attracted better capital, and created operational discipline that made them more resilient. Two decades later, nobody questions whether SOX was worth it.
PCI-DSS (2004). The Payment Card Industry Data Security Standard forced every payment processor to implement encryption, access controls, and audit trails. The companies that built PCI compliance into their architecture from day one — Stripe, Square — didn’t just pass audits. They became the dominant platforms because merchants trusted them. Compliance was the product.
GDPR (2018). When Europe’s data privacy regulation hit, most companies scrambled. The ones that used GDPR as a forcing function to understand their data flows and build privacy-respecting products gained 18% better customer retention and 10-15% price premiums from privacy-conscious consumers. Compliance became a selling point.
The pattern is always the same: regulation arrives, most companies treat it as overhead, and a small number of companies treat it as a design specification. The second group builds better products, earns deeper trust, and opens a lead the first group can never close.
The NCUA’s AI Compliance Plan is the next instance of this pattern. The credit unions that build examiner-ready AI infrastructure in the next 12-18 months won’t just pass their examination. They’ll operate at a fundamentally different level of trust — with their members, their boards, their examiners, and their own staff — than the ones who treated compliance as a checkbox to be cleared at the last minute.
This is what Concreit taught me. Not that compliance is something you survive. That compliance — done right, from day one, woven into the architecture — is the thing that makes your product trustworthy enough to matter.
That’s why Runline is built the way it is. Every agent action logged. Every decision auditable. Every system stoppable in seconds. Not because it’s trendy. Because I’ve been examined, and I know what examiners actually ask for.
The credit unions that understand this will build AI infrastructure that their examiners respect, their staff trusts, and their members deserve. The ones that don’t will find out what it feels like to sit across from a regulator who’s testing your representations against reality — without the muscle memory to survive it.
I’d rather you build the muscle now.
Sean Hsieh is the Founder & CEO of Runline, the secure agentic platform for credit unions. Previously, he co-founded Flowroute (acquired by Intrado, 2018) and Concreit, an SEC-regulated WealthTech platform managing real securities under dual federal regulatory frameworks.
Next in the series: “Solo, Not Alone: Building an AI Company with AI” — how Runline runs on its own AI agents, and why the dog-food-everything philosophy is the strongest trust signal a vendor can offer.


