The Three Pillars: Control, Amplification, Transparency

A framework for evaluating AI in regulated financial services. Control before capability, amplification before automation, transparency before trust — and why the sequence matters more than the pillars themselves.

By Sean Hsieh
15 min read
Published September 5, 2025

Every AI vendor in the credit union space will tell you their product is “safe” and “responsible.” Most of them mean “we haven’t had a PR disaster yet.”

At Runline, I built our entire philosophy on three pillars — and the order matters. Control comes first. Not because control is the sexiest part of AI, but because you can’t amplify what you can’t control, and you can’t trust what you can’t see.

I know “three pillars” sounds like a corporate slide deck. Bear with me — because the sequence of these pillars is the thing that most AI vendors get wrong, and the consequences of getting it wrong range from wasted money to existential risk.


Pillar One: Uncompromising Control

Every AI agent must be controllable by the humans it serves. This means three things: you can see what it’s doing, you can change what it’s doing, and you can stop it — instantly, with certainty, without calling a vendor.

Control comes first for a reason most vendors won’t discuss. There’s a foundational insight in AI safety research that shapes everything we build. In 2016, Dylan Hadfield-Menell, Stuart Russell, and their UC Berkeley colleagues showed, in a formal game-theoretic model, that a rational AI agent has an incentive to disable its own off switch — unless it’s specifically designed with uncertainty about whether its objectives are correct. They called it the “Off-Switch Game,” and the implication is profound: corrigibility — the willingness to accept correction or shutdown — must be designed in from the beginning. It cannot be bolted on later.
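The intuition behind the Off-Switch result can be made concrete with a toy simulation. This is a sketch of the paper’s core idea, not its actual model: a robot that is uncertain about the true utility of its action earns more, on average, by deferring to a human who can switch it off than by either acting unilaterally or shutting itself down.

```python
import random

random.seed(0)

def expected_return(policy, n=100_000):
    """Average payoff of a robot policy over many draws of the true utility U.

    U is drawn from a distribution the robot knows only in aggregate
    (here: normal with mean slightly below zero, so acting blindly is bad
    on average even though it pays off in many individual cases).
    """
    total = 0.0
    for _ in range(n):
        u = random.gauss(-0.1, 1.0)  # true utility, known to the human
        if policy == "act":          # robot acts, ignoring the human
            total += u
        elif policy == "off":        # robot switches itself off
            total += 0.0
        elif policy == "defer":      # robot waits; human permits iff U > 0
            total += max(u, 0.0)
    return total / n

# Deferring beats both alternatives: uncertainty about the objective
# makes the off switch worth keeping.
for policy in ("act", "off", "defer"):
    print(policy, round(expected_return(policy), 3))
```

The deferring robot captures the upside of good actions while letting the human veto the bad ones, which is exactly why corrigibility requires the agent to be uncertain about its own objectives.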

This isn’t an abstract academic concern. It’s an engineering requirement. And the history of what happens when organizations deploy powerful automated systems without adequate control is sobering.

Knight Capital, August 2012. A deployment error caused an algorithm to send four million unintended orders in 45 minutes. The system was running on eight production servers, and when the error was discovered, the team couldn’t stop it fast enough. By the time they pulled the plug, Knight Capital had lost $460 million — 75% of the firm’s market value. The company was acquired within a year. The entire failure — from first bad trade to corporate death spiral — took less than an hour.

Boeing 737 MAX, 2018-2019. The Maneuvering Characteristics Augmentation System — MCAS — was designed to automatically adjust the aircraft’s nose angle based on sensor readings. Two critical design failures: the system relied on a single angle-of-attack sensor with no redundancy, and pilots weren’t told the system existed in their training materials. When the sensor gave bad data, MCAS repeatedly pushed the nose down. Pilots fought the automation. The automation won. Three hundred forty-six people died in two crashes — Lion Air Flight 610 and Ethiopian Airlines Flight 302.

These aren’t AI chatbot stories. They’re infrastructure control stories. And credit unions deploying AI agents without kill-switch capability are making the same category of error — the belief that you can deploy an automated system and figure out how to control it later.

Here’s what control looks like in practice at Runline:

Per-agent credentials. Every AI agent gets its own keys with the minimum permissions needed for its specific task. No shared keys across vendors or agents. If one agent is compromised, the blast radius is contained to that agent’s scope — not your entire infrastructure.

Kill switch with sub-100-millisecond response time. Not “submit a ticket and we’ll get back to you.” Not “wait for the vendor to push a config change.” Your staff presses a button and the agent stops. Full stop. We call it “Derez” internally — a Tron reference for terminating a program. The architecture uses Redis pub/sub to propagate the kill signal across every active agent in under 100 milliseconds. Your core processor vendor has a 24/7 support line for when the system goes down. Your AI vendor should have the same — but with a button you press, not a ticket they process.

Real-time monitoring. Every API call, every action, every decision the agent makes — visible in real time on a dashboard your compliance team can read without a computer science degree. Not aggregate metrics. Not weekly reports. Every individual action, as it happens.
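The kill-switch pattern described above can be sketched in a few dozen lines. Runline’s production path is described as Redis pub/sub; what follows is a minimal in-process analogue, not the real implementation — `KillSwitch`, `Agent`, and `derez` are illustrative names. Agents check a shared signal between actions, so a single call halts one agent or every agent, with worst-case latency of one unit of work plus propagation time.

```python
import threading
import time

class KillSwitch:
    """In-process stand-in for a broadcast kill signal (production systems
    would propagate this over something like Redis pub/sub)."""

    def __init__(self):
        self._halted = set()              # individual agent ids ordered to stop
        self._halt_all = threading.Event()

    def derez(self, agent_id=None):
        """Stop one agent by id, or every agent when no id is given."""
        if agent_id is None:
            self._halt_all.set()
        else:
            self._halted.add(agent_id)

    def is_halted(self, agent_id):
        return self._halt_all.is_set() or agent_id in self._halted

class Agent(threading.Thread):
    def __init__(self, agent_id, switch):
        super().__init__(daemon=True)
        self.agent_id = agent_id
        self.switch = switch
        self.stopped_cleanly = False

    def run(self):
        # The agent checks the switch between actions, so stop latency is
        # bounded by one action plus the signal's propagation time.
        while not self.switch.is_halted(self.agent_id):
            time.sleep(0.001)             # stand-in for one unit of agent work
        self.stopped_cleanly = True

switch = KillSwitch()
agents = [Agent(f"agent-{i}", switch) for i in range(3)]
for a in agents:
    a.start()

start = time.monotonic()
switch.derez()                            # global stop: every agent halts
for a in agents:
    a.join(timeout=1.0)
elapsed_ms = (time.monotonic() - start) * 1000
print(f"all stopped: {all(a.stopped_cleanly for a in agents)} in {elapsed_ms:.1f} ms")
```

The design point is that the stop signal is checked by the agents themselves rather than routed through a vendor’s support queue — which is what makes the button-not-ticket guarantee possible.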

The regulatory alignment here is striking. The NCUA’s AI Compliance Plan requires monitoring, control, and termination capabilities for all AI systems deployed by credit unions. The EU AI Act — Article 14, with full enforcement beginning August 2026 — mandates that high-risk AI systems must allow humans to understand the system, interpret its outputs, and decide not to use it or disregard its output at any time. Control isn’t just our philosophy. It’s becoming law.

Anthropic — the company behind Claude, the model that powers Runline’s agents — builds safety first. Their Constitutional AI framework places “being safe and supporting human oversight” as the top priority, above being ethical, above being helpful. Their Responsible Scaling Policy defines escalating AI Safety Levels — ASL-1 through ASL-4+ — that require progressively stricter controls before a model can be deployed at each capability threshold. Contrast this with OpenAI, where the Superalignment team was disbanded in 2024 and the departing co-lead publicly stated that “safety culture and processes have taken a backseat to shiny products.”

Control is a choice. The AI company you choose to work with reveals their choice.


Pillar Two: Human Amplification, Not Human Replacement

AI agents draft. Humans decide. The goal isn’t fewer employees — it’s each employee operating at ten times their current capacity.

The most important insight for credit union AI strategy comes from an unlikely source: chess.

In 1997, Garry Kasparov lost to IBM’s Deep Blue — the first time a computer beat the reigning world chess champion. The moment could have been the end of the conversation. Instead, Kasparov did something remarkable. He invented Advanced Chess — also called “Centaur Chess” — where humans and computers play cooperatively instead of competitively.

By 2005, centaur teams were regularly outperforming both grandmasters playing alone and supercomputers playing alone. The famous result: two amateur players from New Hampshire, using commodity hardware and off-the-shelf chess software, defeated teams that included grandmasters with access to better computers. The amateurs won because they had a better process for collaborating with their machine.

Kasparov’s formula: “A weak human + machine + better process is greater than a strong human + machine + inferior process.”

Read that again. The advantage isn’t in the AI. It’s in the process for human-AI collaboration. This is the single most important insight for every credit union CEO evaluating AI vendors. The question isn’t “how smart is the AI?” It’s “how well does the AI integrate with my team’s workflow?”

The research confirms this at scale. In 2023, researchers at Harvard and Wharton studied 244 BCG consultants using AI across real consulting tasks. Three distinct patterns emerged:

Centaurs split tasks cleanly between human and AI — using AI for what it does well and reserving human judgment for what it doesn’t. Result: they upskilled in their domain expertise. They got better at their jobs.

Cyborgs intertwined their work with AI at the capability frontier, blending human and machine contributions within individual tasks. Result: they developed new AI-related capabilities. They became more versatile.

Self-Automators delegated wholesale to AI, using it as a replacement rather than a partner. Result: they improved at neither domain expertise nor AI skills. Full delegation made them worse at both.

The lesson is unambiguous. The humans who collaborate with AI get better. The humans who defer to AI get worse.

This maps directly to a deeper truth about automation that researchers have understood for decades. In 1983, Lisanne Bainbridge published “Ironies of Automation” — a paper that has accumulated over 4,700 academic citations because its core insight keeps proving right. Bainbridge demonstrated that automating most of a job while leaving humans responsible for edge cases creates a trap: the operator’s skills atrophy through disuse, and they become an inexperienced intervener in the rare moments that matter most. This is exactly what happens when you “replace” staff with AI for routine work — the remaining humans can’t effectively oversee what the AI is doing because they’ve lost the context that comes from doing the work themselves.

At a credit union, human amplification means something specific:

Your BSA analyst still makes the judgment call on whether activity is suspicious. But AI triages the 95% of alerts that aren’t, so she spends 80% of her time on cases that actually require investigative instinct — not the 95% that turn out to be Maria the florist making her weekly cash deposit. I wrote about this in detail in Article 6.

Your HR coordinator still manages employee relationships. But AI generates employment verification letters, routes onboarding documents, and flags payroll anomalies before they compound into multi-pay-period corrections.

Your loan officer still builds the member relationship. But AI pre-screens applications, pulls relevant member history, and drafts approval recommendations — so the conversation with the member is informed by the full picture instead of a single credit score.

The member sees the same credit union staff. Just operating faster, with better information, making fewer errors.

The cooperative mission makes this non-negotiable. Credit unions exist because of Cooperative Principle #7 — Concern for Community. “People helping people.” AI that replaces the people undermines the very reason credit unions exist. AI that amplifies the people — making your 50-person team operate at the capability of a 200-person institution — fulfills the mission. The cooperative model isn’t a constraint on AI adoption. It’s the design spec.

The counter-examples prove it by contrast.

Klarna replaced its customer service agents with AI chatbots in early 2024, initially celebrating the efficiency gains. By mid-2025, the company reversed course, publicly admitting that “real people offer empathy, understanding, and genuine service that AI can’t provide.” The replacement model didn’t work for a payments company. It definitely won’t work for a cooperative whose founding purpose is people helping people.

IBM Watson for Oncology represented a $5 billion-plus investment in replacing oncologist judgment with AI recommendations. The system was trained on synthetic cases, not real patient data. When tested against actual oncologists, concordance rates varied wildly — as low as 12% for some cancer types. The project was quietly scaled back and eventually sold at a fraction of its cost. Replacement without partnership with domain experts fails. Every time.


Pillar Three: Radical Transparency

No black boxes. Every action logged. Every decision auditable. Every agent stoppable. In a cooperative — where members own the institution — this isn’t optional. It’s an obligation.

I use the word “radical” deliberately, because the industry standard for AI transparency is embarrassingly low. Most AI vendors will show you a dashboard with aggregate metrics — “we processed 5,000 alerts this month.” That’s a report, not transparency.

Radical transparency means four things:

Action-level logging. Every API call, every decision, every document the agent generated, every data source it consulted — timestamped and stored. Not summaries. Not samples. Everything.

Decision-level explainability. Not just “the agent flagged this transaction” but “the agent flagged this transaction because the member deposited $9,500 in cash three days after opening the account, consistent with structuring patterns, and inconsistent with the member’s stated income source of retirement pension.” The reasoning, not just the result.

Replay capability. An examiner can walk through the agent’s decision process step by step, the same way they’d walk through a human analyst’s case file. Every piece of evidence the agent considered, every conclusion it drew, every action it took — reconstructable from the audit log.

Stoppability at every level. Pause a single agent. Pause all agents in a department. Shut down everything. With a single action, effective immediately. This circles back to Pillar One — control and transparency reinforce each other.
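The first three properties — action-level logging, decision-level reasoning, and replay — can be sketched as one small data model. The schema and field names below are illustrative, not Runline’s actual data model; the BSA example mirrors the structuring scenario described above.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentAction:
    """One entry in an append-only audit log: what the agent did and why."""
    agent_id: str
    action: str
    reason: str                           # decision-level rationale, in plain language
    evidence: list = field(default_factory=list)   # data sources consulted
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

log: list = []

def record(entry: AgentAction) -> None:
    log.append(entry)                     # in production: durable, append-only storage

def replay(agent_id: str) -> list:
    """Walk one agent's decisions step by step, the way an examiner would
    walk a human analyst's case file."""
    return [f"{e.timestamp} {e.action}: {e.reason}"
            for e in log if e.agent_id == agent_id]

record(AgentAction(
    agent_id="bsa-triage",
    action="flag_transaction",
    reason=("$9,500 cash deposit three days after account opening, "
            "consistent with structuring patterns, inconsistent with "
            "stated income source (retirement pension)"),
    evidence=["core:transactions", "core:account_profile"],
))

for step in replay("bsa-triage"):
    print(step)
print(json.dumps(asdict(log[0]), indent=2))   # the full record, exportable
```

The key design choice is that the reason and the evidence are captured at write time, alongside the action itself — a rationale reconstructed after the fact is a post-mortem, not an audit trail.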

The regulatory imperative for transparency is clear and getting clearer.

SR 11-7 — the Federal Reserve and OCC’s foundational model risk management guidance from 2011 — requires model validation, documentation of assumptions, and the ability to challenge outputs. The OCC explicitly states that banks should “consider explainability for AI models.” This guidance applies to credit unions through NCUA examination standards.

The CFPB has confirmed that the Equal Credit Opportunity Act requires lenders to explain the specific reasons for adverse actions, even when using AI algorithms. Their language is direct: “Creditors cannot state reasons for adverse actions by pointing to broad buckets.” If your AI denies a loan application, you must be able to explain why in specific, human-readable terms. Not “the model score was below threshold.” Why.
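To make the ECOA requirement concrete, here is a hedged sketch of how an AI-assisted denial could surface specific, human-readable reasons instead of a bare score. The factor names, templates, and thresholds are hypothetical — not a CFPB reason-code list and not any real lender’s underwriting logic.

```python
# Hypothetical reason templates keyed by model factor name.
REASON_TEMPLATES = {
    "dti": ("Debt-to-income ratio of {value:.0%} exceeds the "
            "program maximum of {limit:.0%}"),
    "delinquency": "{value} payment(s) 60+ days late in the past 24 months",
    "history_months": ("Credit history of {value} months is shorter than "
                       "the {limit} months required"),
}

def adverse_action_reasons(factors, top_n=2):
    """Turn the model's most influential factors into the specific statements
    adverse-action notices require -- never 'the score was below threshold'."""
    ranked = sorted(factors, key=lambda f: f["influence"], reverse=True)
    return [
        REASON_TEMPLATES[f["name"]].format(value=f["value"], limit=f.get("limit"))
        for f in ranked[:top_n]
    ]

# Illustrative model output for one denied application.
factors = [
    {"name": "dti", "value": 0.52, "limit": 0.43, "influence": 0.61},
    {"name": "delinquency", "value": 3, "limit": None, "influence": 0.27},
    {"name": "history_months", "value": 9, "limit": 24, "influence": 0.12},
]
for reason in adverse_action_reasons(factors):
    print(reason)
```

The pattern matters more than the particulars: the model must expose which factors drove the decision, and the system must translate each one into language a member — and an examiner — can evaluate.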

The GAO’s 2025 assessment found that most financial regulators use AI outputs to inform staff decisions but explicitly state AI is “not used as sole decision-making sources.” The expectation is consistent across every regulatory body: AI assists, humans decide, and both are documented.

The cost of getting this wrong is not theoretical.

Apple Card and Goldman Sachs, 2019. David Heinemeier Hansson — the creator of Ruby on Rails — reported that his Apple Card gave him 20 times the credit limit of his wife, despite shared assets and her higher credit score. Steve Wozniak publicly confirmed the same experience with his wife. Goldman Sachs maintained the algorithm didn’t consider gender — but couldn’t explain why the outcomes diverged so dramatically. That’s the black-box problem in one sentence: the institution couldn’t explain its own decisions. The New York Department of Financial Services launched a formal investigation.

UnitedHealth and nH Predict, 2023-2025. UnitedHealth used an AI tool called nH Predict to determine Medicare Advantage care eligibility — essentially deciding how long elderly patients could remain in nursing facilities. The class-action complaint, citing internal documents, alleges the company knew the tool had a 90% error rate — over 90% of AI-driven denials were reversed when patients appealed. A federal court allowed the lawsuit to proceed. The AI was making life-altering decisions for vulnerable people, with no transparency about how or why.

In a credit union, the transparency obligation runs deeper than regulatory compliance. Cooperative Principle #2 is Democratic Member Control — members elect representatives who are accountable to the membership. Principle #5 is Education, Training, and Information — members must receive enough information to participate effectively in the cooperative. A black-box AI system that makes decisions affecting members, with no explainable rationale, violates both principles. In a credit union, radical transparency isn’t just good practice — it’s a governance requirement rooted in 180 years of cooperative tradition, since the Rochdale Pioneers of 1844 established that cooperative institutions owe their members not just good outcomes, but understandable ones.


The Sequence Matters

These three pillars aren’t a menu where you pick the ones that appeal to you. They’re a stack. The order is the architecture.

Control first. Without control, amplification is dangerous — Boeing MAX proved that an automated system without human override capability kills people. And without control, transparency is theater — you can log every action an agent takes, but if you can’t stop it when the logs show something wrong, the logs are just evidence for the post-mortem.

Amplification second. Without the human-AI collaboration design, you either replace humans entirely — the Klarna reversal, the Watson failure — or you leave AI idle because your staff doesn’t trust it and doesn’t know how to work with it. Amplification requires control as its foundation, because staff won’t collaborate with a system they can’t override.

Transparency third. Without transparency, control lacks evidence — you don’t know when to press the kill switch because you can’t see what the agent is doing. And amplification lacks trust — your staff won’t rely on AI recommendations they can’t verify, and your examiners won’t accept AI-assisted decisions they can’t audit.

Each pillar depends on the one before it. Remove any one and the others weaken. Reorder them and the architecture fails.

This philosophy connects everything I’ve written in this series. Control enables the examiner-ready infrastructure I described in Article 2 — the SEC examination muscle memory that shaped how Runline builds. Amplification enables the BSA analyst to focus on the 5% of alerts that actually matter instead of drowning in the 95% that don’t — the operational transformation from Article 6. Transparency enables the cooperative governance that credit unions have been built on since 1844 — and makes the back-office AI infrastructure from Article 7 something your board can defend to members and examiners alike.

Every AI vendor in the credit union space will tell you their system is safe, helpful, and compliant. Ask them three questions:

Can I stop it in under 60 seconds — myself, without calling your support team?

Does it replace my staff, or does it amplify them?

Can my examiner walk through every decision it made, step by step?

The answers will tell you everything you need to know about whether their philosophy was designed for credit unions — or just marketed to them.


Sean Hsieh is the Founder & CEO of Runline, the secure agentic platform for credit unions. Previously, he co-founded Flowroute (acquired by Intrado, 2018) and Concreit, an SEC-regulated WealthTech platform managing real securities under dual federal regulatory frameworks.

Next in the series: “Context Is King: Why the AI That Knows Your SOPs Will Beat the AI That Knows Everything” — why the competitive advantage isn’t model intelligence but domain context, and how AI agents trained on your operational playbooks outperform generic copilots every time.

Get Started

Ready to see what stateful AI agents can do for your credit union?

Runline builds purpose-built AI agents for regulated financial institutions. Every interaction compounds institutional intelligence.

Schedule a Demo