Last week, Geoff Charles — Ramp’s Head of Product — published a piece on X that racked up 932,000 views and 5,000 bookmarks in 48 hours. The title: “How to get your company AI pilled.” The numbers he dropped were staggering:
- AI usage up 6,300% in one year.
- 99.5% of the team active on AI tools.
- 84% using coding agents weekly.
- 1,500 apps shipped on their internal platform in six weeks, from 800 different builders.
- Non-engineers now account for 12% of all human-initiated pull requests on Ramp’s production codebase — thousands per month.
Read that last bullet again. People in sales ops, risk, finance, and L&D — people who have never written a line of code professionally — are shipping changes to a production financial services codebase. At scale. Every day.
If you’re a credit union CEO reading those numbers, you probably felt two things simultaneously: exhilaration and dread. Exhilaration because AI’s potential is real, not theoretical. Dread because your institution has five people in IT, a core processor that predates the iPhone, and a board that still debates whether to allow ChatGPT on the network.
I felt both too. And then I spent a week pulling Geoff’s playbook apart — not to dismiss it, but to translate it. Because the principles inside that piece are some of the most important things written about AI adoption this year. And almost none of them can be implemented at a credit union the way Ramp implemented them.
That’s not a knock on credit unions. It’s the starting point for figuring out what your version looks like.
The Honest Gap
Let me start with intellectual honesty, because I think the AI adoption conversation in financial services suffers from a dangerous amount of false equivalence. Vendor pitches that show you a Fortune 500 case study and imply you’ll get the same results. Conference keynotes that describe what Ramp or Stripe or Palantir did and leave you to figure out how that maps to a 200-person institution in Michigan with a $2.5 million technology budget.
Let me name the gap explicitly so we can stop pretending it doesn’t exist and start engineering around it.
Ramp has ~1,000 employees, most of them in engineering and product. The average credit union in America has 147 employees (NCUA, 2025 year-end data), most of them in branches and member services. Ramp’s engineering bench is deeper than most credit unions’ entire headcount.
Ramp built Glass — their own internal AI platform — with a team of four engineers in three months. Those four engineers had deep expertise in LLMs, agent architectures, and the Anthropic Claude Agent SDK. They built on top of Ramp’s existing modern infrastructure: Snowflake, Salesforce, Gong, Slack, Notion, all wired through Okta SSO. Finding four engineers with that skillset would cost a credit union more than most technology budgets allow — and that assumes you could hire them, which you probably can’t, because they want to work at Ramp.
Ramp gave everyone unlimited AI budget. Geoff’s cost argument is correct and important: “Token consumption per employee today isn’t even close to double-digit percentages of their salary. But if someone is 2x more productive with AI, you should be willing to spend their entire salary again in tokens.” The math is right. But Ramp has the revenue and margins to back that stance. A credit union operating on 80 basis points of net income (the 2025 NCUA industry average) doesn’t have the same financial cushion for experimentation.
Ramp has a culture of velocity. Geoff says it plainly: “At Ramp, our culture is velocity. It shapes every process and team ritual.” This is a company where tools have a shelf life of weeks, where what shipped in January is obsolete by April, and where nobody thinks that’s weird. Credit unions have a culture of stability. Your members chose you because you’re not volatile. Your examiners reward consistency, not disruption. That’s not a weakness — it’s a structural feature of the cooperative model. But it means the adoption playbook looks fundamentally different.
Ramp operates in fintech with a bank-as-a-service partner. They don’t have NCUA examiners walking through their office. They don’t file Call Reports. They don’t have to explain to a field examiner why an AI agent accessed member data at 2:47 AM on a Tuesday. The regulatory surface area is different by orders of magnitude.
I’m naming all of this not to discourage you, but because the worst thing you can do right now is try to copy Ramp’s playbook directly. You’ll burn budget, frustrate your team, and conclude that “AI doesn’t work for us.” It works for you. But the implementation path is different. The principles are universal. The tactics are not.
| Dimension | Ramp | Credit Union |
|---|---|---|
| Employees | ~1,000 (eng-heavy) | ~147 avg (branch-heavy) |
| AI Budget | Unlimited | Constrained (80bps net income) |
| Culture | Velocity | Stability |
| Regulation | Light (fintech + BaaS) | Heavy (NCUA examiners) |
| Build Capacity | 4 engineers built Glass in 3 months | Must partner |
| Institutional Context | 8 years | 50-80+ years |
| Capital Structure | VC-backed (quarterly pressure) | Cooperative (patient capital) |
| Member Relationships | Transactional | Generational |
What Ramp Got Universally Right
Here’s what I’ll defend about Geoff’s piece with zero caveats — the principles that apply to every organization regardless of size, industry, or regulatory burden.
Culture eats strategy
Ramp didn’t start with a master plan. Geoff says it directly: “The interesting part isn’t the numbers or the tools. It’s that we didn’t have a plan. All we had was a culture and talent, and we kept doubling down on the things that were working.”
This is the most important sentence in the entire piece. And it maps to credit unions perfectly — because credit unions have a culture that is, in many ways, better suited to AI adoption than a Silicon Valley fintech. You just don’t know it yet.
Credit union culture is built on:
- Service orientation — your people wake up wanting to help members.
- Collaborative problem-solving — you don’t have the resources to silo, so people naturally work across functions.
- Institutional loyalty — average credit union employee tenure is 7.2 years, versus 4.1 in banking and 2.8 in tech, according to CUNA’s 2025 HR benchmarking report. Your people know the institution.
- Resourcefulness — you’ve been doing more with less for decades.
Every one of those traits is an accelerant for AI adoption — if you give people the right infrastructure. The risk analyst at Ramp who automated 16 hours per month of manual financial modeling? You have that person. They’re in your BSA department, drowning in false positives, wishing someone would fix the stupid spreadsheet. The difference isn’t talent or motivation. It’s that Ramp gave that person a tool that met them where they were, and you haven’t. Yet.
The “Aha” moment is everything
Geoff describes a critical realization: “Despite hitting 90%+ adoption of AI tools across the company, most people were stuck on a basic chat interface. The models were good enough. The harness wasn’t. Terminal windows, npm installs, MCP configurations — these were simply too hard for the majority of people to grok.”
This is the single most important paragraph in the piece for credit union leaders. Read it again. The AI models are good enough. The delivery mechanism is wrong.
Every credit union that has tried AI has made this mistake. You bought a chatbot. You put it on the website. Members asked it questions. It gave mediocre answers. You concluded AI isn’t ready. But the failure wasn’t the AI — it was the harness. You wrapped a powerful model in a terrible interface and pointed it at the wrong problem.
Ramp’s solution was to build Glass — their own AI platform that auto-configures on install, authenticates through SSO, and pre-connects 30+ tools. A team of four built it in under three months. It reached 700 daily active users within a month.
The lesson isn’t “build your own Glass.” The lesson is that the infrastructure layer is the bottleneck, not the intelligence layer. Once Ramp removed the friction between people and AI, usage exploded. Not because they mandated it. Because the tool was finally worth using.
Your credit union needs the same thing: an infrastructure layer that removes friction between your people and AI. This is what we’re building at Runline — parts of this are live with credit union partners today, and we’re building toward the full vision. That layer needs to understand your core processor, your compliance requirements, your member data model, and your regulatory constraints. It needs to auto-configure so your BSA analyst doesn’t need to understand API keys. It needs guardrails so your examiner doesn’t need to worry about what data the agent accessed. It needs to be yours — not a shared model training on everyone’s data simultaneously.
I’ll come back to what that infrastructure looks like. First, let’s translate Ramp’s principles.
The Eight Principles, Translated
Geoff structured his piece around eight principles. Here’s what each one means when you strip away the Ramp-specific context and rebuild it for a regulated cooperative with 200 employees.
1. “The second best time to start is today.”
Ramp’s version: We told the whole company at our January 2025 kickoff that we would become the most productive company in the world. We had no idea how. We just started.
Your version: You don’t need a three-year AI strategic plan. You need a 90-day pilot with one real workflow, one real team, and one real outcome you can measure.
Here’s the mistake I see credit unions make: they form an AI committee. The committee meets monthly. They evaluate vendors. They write an RFP. They present to the board. They launch a pilot nine months later. By the time the pilot starts, the technology has leapfrogged two generations and the committee’s requirements document is already obsolete.
Ramp’s instinct is right: just start. But “just start” for a credit union doesn’t mean “give everyone Claude and see what happens.” It means picking the workflow where AI will have the most visible, measurable impact with the least regulatory risk, and running a disciplined pilot with clear success criteria.
Where to start: BSA alert triage. I’ve made this case in Article 7, Article 14, and Article 29, so I won’t re-argue it here — but the short version: 95-98% false positive rates (per ACAMS benchmarking data), analysts spending 80% of their time documenting why alerts aren’t suspicious, and the whole thing is intelligence work that an AI agent — what we call a Runner at Runline — can handle while humans focus on the 5% requiring genuine judgment.
At one credit union partner — an institution nearing $2 billion in assets — we’re already seeing early signals that AI-assisted BSA triage can reduce analyst workload by 60% or more. We don’t have full performance data yet, but we expect that number to improve as the system accumulates institutional context. That’s not a theoretical projection; it’s what happens when AI handles the 95% of alerts that are noise while analysts focus on the 5% that require genuine human judgment.
Start there. Get a win. Let the organization see what’s possible. Then expand.
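To make the triage math concrete, here’s a back-of-envelope sketch. The alert volume, minutes per alert, and review time below are illustrative assumptions, not figures from any partner; only the 95-98% false positive range comes from the ACAMS benchmarking cited above.

```python
# Back-of-envelope BSA triage math. All volumes are hypothetical.
ALERTS_PER_MONTH = 1_000      # assumed monthly alert volume
FALSE_POSITIVE_RATE = 0.96    # mid-range of the 95-98% cited above
MINUTES_PER_ALERT = 45        # assumed time to document a non-suspicious alert

false_positives = ALERTS_PER_MONTH * FALSE_POSITIVE_RATE       # 960 alerts
hours_on_noise = false_positives * MINUTES_PER_ALERT / 60      # 720 hours/month

# If an agent drafts the "not suspicious" documentation and the analyst
# spends 5 minutes reviewing each draft instead of 45 writing it:
hours_with_agent = false_positives * 5 / 60                    # 80 hours/month

print(f"Analyst hours on false positives today: {hours_on_noise:.0f}")
print(f"With AI-drafted dispositions:           {hours_with_agent:.0f}")
print(f"Reduction: {1 - hours_with_agent / hours_on_noise:.0%}")
```

Even with conservative assumptions, the reduction lands well above the 60% early signal, which is why triage is the natural first workflow.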
2. “Treat AI proficiency as a learning curve, not a light switch.”
Ramp’s version: They defined four levels of AI proficiency:
- L0: Sometimes uses ChatGPT. Hasn’t changed any workflows.
- L1: Built custom GPTs, dabbled in Claude Code. Starting to see what’s possible.
- L2: Built an app that automates part of their job. Committed code.
- L3: Systems builders. They build infrastructure that levels everyone else up.
Your version: The framework is brilliant. The levels need recalibration for a credit union context — both because the starting point is different and because the ceiling is different.
Here’s the Credit Union AI Maturity Model I’d propose:
Level 0 — Unaware or resistant. “We don’t use AI here.” Or worse: “We blocked ChatGPT on the network.” This is where a large share of credit unions sit today — a third to half, based on conversations I’ve had with over 200 CU leaders in the past year. That’s an informal read, not a formal survey, but the pattern is consistent. The risk at L0 isn’t that you’re not using AI — it’s that your employees are using it, on personal devices, without oversight. I wrote about this in Article 29: shadow AI is already inside your building. The question is whether you’re managing it.
Level 1 — Experimenting. A few individuals are using AI tools — ChatGPT, Claude, Copilot — for personal productivity. Drafting emails, summarizing documents, researching questions. It’s ad hoc, unsanctioned, and invisible to IT. The value is real but unscalable and ungovernable. No audit trail. No data governance. No institutional learning.
Level 2 — Piloting. The institution has sanctioned one or two AI pilots in specific departments — typically BSA/AML, lending, or member services. There’s a vendor relationship. There’s a governance framework (even if it’s basic). There’s measurement happening. The pilot team is seeing results, but the rest of the organization views AI as “that thing compliance is doing.”
Level 3 — Integrating. AI is embedded in multiple workflows across departments. Not as a separate tool you switch to — as part of how work gets done. Your BSA agents triage alerts. Your lending agents assemble loan packages. Your member service agents handle routine inquiries with full context. There’s a governance layer that covers agent identity, permissions, audit logging, and kill switches. Employees at all levels interact with AI daily, not because they’re mandated to, but because the tools are genuinely useful.
Level 4 — Compounding. This is Ramp’s current state, translated. AI agents are generating institutional knowledge — not just completing tasks, but learning patterns, surfacing insights, and making the entire organization smarter over time. Every interaction adds context. Every correction refines a Playbook. The gap between you and institutions that haven’t started is widening daily — not because you have better AI, but because your context moat (Article 34) is deepening with every interaction.
Most credit unions are at Level 0 or Level 1. The ones that will thrive in five years are the ones that reach Level 3 within the next 18 months. That’s the window I described in Article 16 — and it’s narrowing.
The critical insight from Ramp: the progression isn’t linear. You don’t need everyone at L4. You need a few people at L3 who pull everyone else up. Geoff calls them “force multipliers.” In a credit union, that’s your BSA team lead who figures out how to use the agent platform and becomes the internal evangelist. That’s your lending VP who sees the early productivity gains and asks “what else can we automate?” That’s your operations manager who builds the first Playbook and shares it with three other departments.
Find those people. Give them tools. Get out of their way.
| Level | Summary |
|---|---|
| 0 — Unaware or resistant | AI is blocked or ignored. Employees use personal tools on personal devices without oversight. Shadow AI is the real risk. |
| 1 — Experimenting | Individuals dabble with ChatGPT or Claude for personal productivity. Ad hoc, unsanctioned, invisible to IT. Value is real but ungovernable. |
| 2 — Piloting | One or two sanctioned AI pilots in specific departments. Vendor relationship established. Basic governance and measurement in place. |
| 3 — Integrating | AI embedded in multiple workflows across departments. Governance layer covers agent identity, permissions, audit logging, and kill switches. |
| 4 — Compounding | AI agents generate institutional knowledge. Every interaction deepens context. The gap with non-adopters widens daily. |
3. “Embrace creative destruction.”
Ramp’s version: “Many of the tools we shipped in January 2026 are already obsolete — replaced by better versions, often from the same builders. We’ve gotten comfortable with a shelf life of weeks, not months.”
Your version: This is where the translation gets interesting, because creative destruction is antithetical to credit union culture — and that’s actually fine.
You don’t need a shelf life of weeks. You need a platform that evolves underneath you without requiring you to rebuild. This is the fundamental architectural distinction between what Ramp built and what a credit union needs.
Ramp built Glass in-house with four engineers. When the underlying models improve, those same engineers update Glass. When new capabilities emerge, they ship new features. The creative destruction is internal — Ramp’s engineering team is both the builder and the destroyer.
A credit union can’t do that. You don’t have four engineers on standby to rebuild your AI platform every quarter. But you don’t need to — if your platform is architected correctly.
This is why the infrastructure layer matters so much. When you build on a platform designed for credit unions, the creative destruction happens at the platform level, not at your level. The underlying models improve? Your agents get smarter without you lifting a finger. New regulatory requirements emerge? The compliance guardrails update across the platform. Better reasoning capabilities ship? Your loan document assembly gets more nuanced automatically.
The key question to ask any AI vendor: “When the AI gets better, does my institution get better automatically — or do I need to buy a new version?”
If the answer is “buy a new version,” you’re buying a tool. If the answer is “automatically,” you’re on a platform. Tools have a shelf life. Platforms compound.
Geoff’s deeper insight is worth preserving, though: “People aren’t attached to their tools. They’re attached to their problems.” That’s universally true. Your BSA analyst doesn’t care whether the AI uses GPT-4, Claude, or whatever ships next year. They care whether it correctly identifies structuring patterns and produces examiner-ready documentation. Focus your people on the problem, not the tool. The tool will change. The problem won’t.
4. “Build from the center, drive from the spokes.”
Ramp’s version: They tried centralized (one team builds for everyone — demand outstripped capacity). They tried decentralized (every team builds their own — redundant re-learning). The answer was both: a small central team builds platforms and plumbing, functional teams build on top and drive the roadmap.
Your version: This is the principle that translates most directly to credit unions — and the one most credit unions will get wrong.
The instinct at most credit unions is full centralization: the IT department owns AI. Every request goes through IT. Every deployment requires IT approval. Every integration is an IT project.
This will fail. Not because IT is bad at their jobs — because demand will outstrip capacity in the first month. When your BSA team sees what AI can do and your lending team wants the same thing and your member services team is asking questions and your marketing team heard about it at a conference — one five-person IT department cannot be the bottleneck for all of it.
But full decentralization will also fail. You can’t have every department spinning up their own AI tools with their own vendors, their own data connections, and their own idea of what “secure” means. That’s the shadow AI problem from Article 29, just with a budget.
The answer is what Ramp discovered, adapted for your scale:
The center (IT + a cross-functional AI lead) owns three things:
- The platform — the infrastructure layer that connects to your core, your data warehouse, your document systems, and your compliance frameworks. This is the “Glass” equivalent. You don’t build it. You adopt it.
- Governance — agent identity, permissions, audit logging, data access controls, kill switches. Non-negotiable. Every agent that touches member data must have a defined identity, a trust tier, and an audit trail.
- Enablement — training, office hours, documentation. Not a one-time workshop. An ongoing support structure.
The spokes (department leads + power users) own three things:
- Identifying workflows — they know their problems better than IT does.
- Defining Playbooks — encoding their SOPs into formats the AI can execute.
- Validating output — humans remain the judges of quality. The AI proposes. The human disposes.
This model lets IT maintain control without becoming a bottleneck. Departments get agency without creating security risks. The center invests in infrastructure once; the spokes multiply that investment across every function.
5. “Give people a stage, not just a mandate.”
Ramp’s version: They lit small fires — a Slack channel (#ramp-uses-ai, now 1,000+ members spawning 40+ team channels and 20,000 messages per month), AI office hours every Friday, dedicated all-hands time for demos, spotlighting early converts.
Your version: This might be the most important principle for credit unions, because it directly addresses the cultural challenge.
Credit union employees are not going to adopt AI because the CEO sent a memo. They’re going to adopt it because they saw Linda in compliance save 15 hours a week and they want that for themselves.
Geoff nails the psychology: “The biggest surprise wasn’t who built the most. It was how many people had been waiting for permission to build at all.”
I’ve seen this exact dynamic at credit union partners. The moment a BSA analyst demonstrates an AI-assisted SAR narrative to the rest of the compliance team, three things happen:
- The other analysts immediately ask “can I do that?”
- The compliance officer asks “is this examiner-ready?”
- The lending team across the hall overhears and asks “can it do loan docs too?”
That cascade is more powerful than any mandate. And it requires zero additional budget — just visibility.
Concrete actions for credit unions:
- Monthly AI demo at all-staff meetings. Five minutes. One person shows what they built or how AI changed their workflow. Make it a standing agenda item, not a special event.
- Internal Slack or Teams channel. Call it #ai-wins or #smart-shortcuts or whatever fits your culture. Low barrier. People post screenshots, quick tips, questions. The channel moderator (your AI lead) responds to every question within 24 hours.
- Quarterly AI showcase. Invite your board. Let employees present directly to the directors. Nothing motivates adoption like telling your board about the tool you built that saves the institution $40,000 a year. And nothing motivates board investment like hearing it from front-line staff instead of a vendor slide deck.
- Spotlight the unexpected builders. Ramp’s biggest insight: the early converts mattered more than anything. Find your version of the sales ops lead who figured out something nobody expected and make them visible. In a credit union, that might be the teller supervisor who used AI to rebuild the cash balancing checklist, or the HR coordinator who automated the onboarding document assembly.
6. “Get people to the ‘Aha’ moment as fast as possible.”
Ramp’s version: “Training doesn’t work. The world’s best teacher is staring right in front of you: it’s AI. The single biggest unlock is getting someone to experience a real result on day one.”
Ramp built Glass to solve this — auto-configures, SSO, 30+ tools pre-connected. A team of four built it in three months. It reached 700 daily active users within a month.
Your version: This is the most expensive principle to implement — and the one where the “build vs. buy” decision becomes existential.
You cannot build Glass. I want to be direct about this because I’ve watched three credit unions try — two in the $1-2 billion asset range, one closer to $500 million — and all three failed. They each spent the better part of a year and several hundred thousand dollars building internal AI platforms that were outdated before they launched. In each case, the same pattern: they hired a consultant, picked a model, built custom integrations to their core, and by the time they deployed, the model had been superseded and the integrations needed rebuilding. The technology moves too fast. The expertise required is too specialized. The maintenance burden is too high.
But you must have the equivalent of Glass — an infrastructure layer that gives your people an “Aha” moment on day one. Without it, AI adoption stalls at Level 1 forever. People will dabble with ChatGPT, get inconsistent results, and conclude it’s not useful for “real work.”
The “Aha” moment for a credit union employee looks different than for a Ramp engineer:
- BSA analyst: The agent pulls the member’s full transaction history, identifies the pattern that triggered the alert, cross-references it with the member’s known business activity, and produces a draft SAR narrative with examiner-ready formatting — in three minutes instead of three hours.
- Loan processor: The agent assembles the entire loan package — pulling docs from imaging, verifying conditions, flagging missing items, and formatting the file for underwriter review — while the processor handles the member conversation.
- Branch manager: The agent surfaces the morning’s priority members — who’s at risk of leaving, who has a CD maturing this week, who called yesterday with an unresolved issue — with enough context to have a meaningful conversation, not a sales pitch.
- Compliance officer: The agent monitors regulatory updates, maps them against your current policies, and flags the three things that actually require action this month — instead of your compliance officer reading 47 pages of NCUA guidance to find the two paragraphs that matter.
Each of these takes someone from “AI is interesting” to “I can’t do my job without this.” That’s the “Aha” moment. And it only happens when the AI has deep context about your institution — your core data, your policies, your member relationships, your regulatory history. A generic chatbot can’t deliver this. An institutionally-aware agent platform can.
7. “Make it a competition.”
Ramp’s version: Internal leaderboard tracking AI usage across every team and individual. Sessions run, skills used, apps shipped, tools connected. Visible to everyone.
Your version: Proceed with extreme caution.
Leaderboards work at Ramp because Ramp’s culture is competitive by design. Credit union culture is collaborative by design. If you create a leaderboard that ranks individuals by AI usage, you’ll get two outcomes: the top 10% will game the metrics, and the bottom 50% will feel inadequate and resentful. Neither outcome advances adoption.
What works instead: team-level recognition, not individual ranking.
- Track AI-assisted productivity gains by department, not by person.
- Celebrate outcomes, not usage. “The BSA team processed 40% more alerts this quarter with the same headcount” matters. “Linda ran 847 AI sessions” doesn’t.
- Share ROI stories at board meetings. When the board hears that the lending department reduced loan processing time by 35%, that creates institutional momentum. When they hear that one person used AI more than everyone else, that creates institutional anxiety.
- Create friendly inter-department challenges with specific goals. “Which department can identify the highest-value process improvement using AI this quarter?” gives people a target without creating a ranking.
Geoff is right that competitive dynamics accelerate adoption. But the competition should be against the problem, not against each other. Credit unions compete against inefficiency, against member attrition, against regulatory burden. Frame AI as a weapon in those fights, and the collaborative culture becomes the accelerant instead of the obstacle.
The one element of Ramp’s approach that translates directly: hiring and performance expectations. Ramp now requires AI proficiency for every hire. For PM candidates, there’s a dedicated interview session where they build a product live. This is directionally correct for credit unions too — not at the same intensity, but as a consideration.
Within 18 months, “comfortable using AI tools” should be on every credit union job description. Not “expert in prompt engineering.” Not “can build apps.” Just: “demonstrates willingness and ability to use AI-assisted tools in daily work.” The bar is low. The signal is high. It tells candidates that your institution is moving forward, and it tells current employees that this isn’t optional.
8. “Remove every constraint between your people and AI.”
Ramp’s version: Unlimited AI budget. No token limits. No tiered access. No IT approval queues for connectors. Pre-connected 30+ tools through SSO.
Your version: Remove every unnecessary constraint. Keep every necessary one.
This is where the credit union version diverges most sharply from Ramp’s playbook, and for good reason. Ramp’s “remove every constraint” philosophy works because Ramp’s constraints are primarily bureaucratic — procurement approvals, budget caps, IT ticket queues. Remove those, and you get velocity.
Credit union constraints are primarily regulatory. Remove those, and you get NCUA findings. You get examiner criticism. You get data breaches that affect real members with real money.
The art is distinguishing between the two types:
Constraints to remove (bureaucratic):
- Requiring board approval for AI tool evaluation (board should set policy, not approve individual tools)
- Requiring IT tickets for every AI-related question (create self-service resources and office hours instead)
- Blanket bans on AI tools with no alternative offered (ban personal AI use on the network, but provide an institutionally-sanctioned platform)
- Budget silos that prevent departments from investing in AI-assisted workflows (create a shared AI innovation budget)
- Multi-month vendor evaluation cycles for pilot programs (evaluate in 30 days, pilot in 90, decide in 120)
Constraints to keep (regulatory/fiduciary):
- Agent identity and audit logging for anything touching member data
- Human-in-the-loop approval for any action that affects a member’s account
- Data residency controls — member data stays in your environment
- Permission tiers — not every agent needs access to everything
- Kill switches — the ability to shut down any agent instantly if something goes wrong
- Examiner-ready documentation of what AI does, what data it accesses, and what decisions it influences
The goal isn’t “no constraints.” The goal is “only the constraints that actually protect members and satisfy regulators.” Everything else is friction that slows adoption without adding value.
| Constraints to Remove (Bureaucratic) | Constraints to Keep (Regulatory/Fiduciary) |
|---|---|
| Board approval for individual AI tool evaluation | Agent identity and audit logging |
| IT tickets for every AI-related question | Human-in-the-loop for member account actions |
| Blanket bans with no alternative offered | Data residency controls |
| Budget silos blocking AI investment | Permission tiers for agent access |
| Multi-month vendor evaluation for pilots | Kill switches for instant shutdown |
| | Examiner-ready documentation |
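As an illustration of what the kept constraints look like in software, here’s a minimal sketch. This is not Runline’s implementation — every name here (`Agent`, `Governor`, `authorize`, the trust tiers) is hypothetical — and a production version would sit behind your SSO and centralized logging. The point is that identity, permission tiers, audit trails, and a kill switch are a few dozen lines of discipline, not an obstacle to velocity.

```python
# Illustrative guardrail sketch: agent identity, permission tiers,
# an audit trail, and a kill switch. All names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Agent:
    agent_id: str     # every agent has a defined identity
    trust_tier: int   # 1 = read-only, 2 = draft-for-review, 3 = act-with-approval

@dataclass
class Governor:
    killed: set = field(default_factory=set)      # agents shut down instantly
    audit_log: list = field(default_factory=list)  # examiner-ready trail

    def kill(self, agent_id: str) -> None:
        self.killed.add(agent_id)

    def authorize(self, agent: Agent, action: str, required_tier: int) -> bool:
        allowed = (agent.agent_id not in self.killed
                   and agent.trust_tier >= required_tier)
        # Log every decision, allowed or denied, with a timestamp.
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "agent": agent.agent_id,
            "action": action,
            "allowed": allowed,
        })
        return allowed

gov = Governor()
triage = Agent("bsa-triage-01", trust_tier=2)
print(gov.authorize(triage, "read_transaction_history", required_tier=1))  # True
print(gov.authorize(triage, "file_sar", required_tier=3))                  # False
gov.kill("bsa-triage-01")
print(gov.authorize(triage, "read_transaction_history", required_tier=1))  # False
```

Note that denied and post-kill requests still land in the audit log — that record of what an agent *tried* to do is exactly what an examiner will ask for.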
Geoff’s cost argument deserves specific attention for credit unions: “Token consumption per employee today isn’t even close to double-digit percentages of their salary. But if someone is 2x more productive with AI, you should be willing to spend their entire salary again in tokens.”
The math is the same for credit unions. If your BSA analyst earns $65,000 and AI makes them twice as productive — the equivalent of hiring a second analyst — what would you pay for that? The AI infrastructure costs a fraction of a salary. The ROI is immediate, measurable, and compounds over time. Frame it that way to your board, and the budget conversation changes from “how much does AI cost?” to “how much does not having AI cost?”
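Here is that argument as arithmetic. The $65,000 salary comes from the example above; the annual AI spend is a hypothetical figure for heavy per-seat usage, so treat the ratio as directional rather than a quote.

```python
# Geoff's cost argument in numbers, with hypothetical figures for a
# credit union BSA analyst. The token spend is an assumption.
salary = 65_000                    # analyst salary from the example above
productivity_multiplier = 2.0      # "2x more productive with AI"
value_created = salary * (productivity_multiplier - 1)   # $65,000 of extra output

annual_ai_spend = 3_000            # hypothetical heavy-usage AI cost per seat
roi = value_created / annual_ai_spend

print(f"Extra output per analyst:    ${value_created:,.0f}")
print(f"Annual AI spend (assumed):   ${annual_ai_spend:,.0f}")
print(f"Return per dollar of tokens: {roi:.0f}x")
```

Even if the assumed spend is off by a factor of five, the return is still several multiples of the cost — which is the point of Geoff’s “spend their entire salary again in tokens” framing.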
The Advantage Nobody Talks About
Here’s where I want to push beyond Geoff’s framework, because there’s something credit unions have that Ramp doesn’t. Something that makes your AI adoption story not just different, but potentially better.
Patient capital
I wrote about this in Article 33: credit unions don’t have terminal value in the equity-pricing sense. There is no stock to reprice. No public market applying duration discounts. No activist investor demanding you stop investing in year-seven capabilities because the market won’t pay for them.
Ramp’s AI investments need to show returns on a venture-backed timeline. Their investors want growth metrics, revenue acceleration, a path to IPO or acquisition. That’s not criticism — it’s the reality of the capital structure.
Your capital structure is different. Cooperative capital is patient capital. You can invest in AI infrastructure that takes 18 months to mature because nobody is going to short your stock in month six. You can build a context moat (Article 34) that compounds over years because your board thinks in decades, not quarters.
This is an enormous structural advantage. Use it.
Institutional context depth
Ramp is 8 years old. Your credit union might be 80. You have decades of institutional knowledge — examiner preferences, member behavioral patterns, market-specific insights, seasonal rhythms, community relationships — that no amount of engineering talent can replicate.
When that institutional knowledge meets a stateful AI platform that can encode it, learn from it, and apply it at scale, something remarkable happens: the credit union’s history becomes its competitive weapon. Every year of institutional memory becomes training data. Every veteran employee’s expertise becomes a Playbook. Every examiner interaction becomes a calibration signal.
Ramp started from zero institutional context and built impressive capability in 18 months. Imagine what your credit union can build when it has 50 years of institutional context to draw from.
Member relationship permanence
Ramp’s customers are businesses using a corporate card. The relationship is transactional and portable. Your members are people with checking accounts, mortgages, car loans, and retirement savings. The relationship is deep, multi-product, and generational — families who’ve been members for decades.
This permanence means something for AI: your agents will serve the same members repeatedly, building deeper context with every interaction. The Runner that learned Maria’s flower shop deposit pattern isn’t structuring (Article 8) will remember that context the next time she applies for a line of credit. The member doesn’t have to re-explain herself. The institution doesn’t have to re-learn her. That continuity doesn’t exist in a transactional business model.
The credit union that gets AI right will deliver something no fintech can: personalized, contextually aware service at scale, backed by decades of relationship history, governed by a regulatory framework that members can trust.
The 90-Day Adoption Playbook
Theory is worthless without action. Here’s the concrete playbook for a credit union that wants to move from Level 0 or Level 1 to Level 2 within 90 days.
Days 1-14: Foundation
Week 1: Assess and align.
- Survey your staff anonymously: “Are you currently using any AI tools for work? Which ones? For what tasks?” You will be surprised by the answers. At the last credit union where I ran this exercise — a $1.1 billion institution, 210 employees — 67% of staff reported they were already using personal AI tools on personal devices, with no institutional oversight. The sample wasn’t scientific, but the compliance officer’s face when I shared the results was.
- Identify your top 3-5 most time-consuming, repetitive workflows. Not the ones that sound impressive on a vendor demo. The ones that make your people groan on Monday morning. BSA alert triage. Loan document assembly. New member onboarding paperwork. Vendor due diligence. Board packet preparation.
- Brief your board. Not a 45-minute AI strategy presentation. A 10-minute update: “Here’s what our employees are already doing with AI. Here’s the risk of doing nothing. Here’s our 90-day plan to get ahead of it. We need $X and your support.”
Week 2: Select and scope.
- Choose ONE workflow for your first pilot. Criteria: high volume, high time cost, low regulatory complexity (save BSA for the second pilot if you’re nervous), and a champion in the department who’s excited about AI.
- Select your infrastructure partner. Ask five questions:
  - Does my data stay in my environment, or does it train a shared model?
  - Can I see a full audit trail of every AI action?
  - What happens to my institutional knowledge when I cancel?
  - Can my compliance officer review and modify the AI’s operating rules?
  - Does the platform get better automatically when the underlying models improve?
- If any answer is unsatisfying, keep looking.
Days 15-45: Pilot
Weeks 3-4: Deploy and baseline.
- Deploy the pilot workflow with your champion team.
- Measure the baseline FIRST: how long does this workflow take today? How many errors? How many touches? What’s the current throughput?
- Set clear success criteria: “We’ll consider this pilot successful if processing time decreases by 30%+ with no increase in error rate.”
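Success criteria like the one above are easy to make unambiguous before the pilot starts, so “did it work?” is a yes/no answer rather than a debate. A small sketch, with example thresholds and metric names that are assumptions:

```python
# Minimal sketch: encode the pilot's success criteria as an explicit check.
# Thresholds and metric names are illustrative examples.

def pilot_succeeded(baseline_minutes, pilot_minutes,
                    baseline_error_rate, pilot_error_rate,
                    min_time_reduction=0.30):
    """True if processing time fell by the target share with no error increase."""
    time_reduction = 1 - (pilot_minutes / baseline_minutes)
    return (time_reduction >= min_time_reduction
            and pilot_error_rate <= baseline_error_rate)

# Example: alert triage drops from 20 to 12 minutes per alert (a 40% reduction)
# while the error rate holds at 2% -> the pilot clears the bar.
```

Agreeing on the function, the thresholds, and the baseline measurements before deployment is what makes the week-12 board presentation a statement of fact rather than an argument.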
Weeks 5-6: Iterate and expand.
- The pilot’s first weeks will be messy. The AI will get things wrong. Your team will be frustrated. This is normal and expected. The question isn’t whether the AI is perfect on day one — it’s whether it’s learning.
- Document every correction. Every time a human overrides the AI, that’s a calibration signal. A good platform turns those corrections into improvements automatically. A bad platform requires a support ticket.
- Invite two or three people from other departments to observe the pilot. Don’t pitch them. Let them watch. The “Aha” moment is contagious.
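“Document every correction” works best when every override lands in one structured place rather than in scattered emails. A hypothetical sketch of such an override log (the field names and review helper are assumptions, not a real platform’s schema):

```python
# Hypothetical override log: each human correction becomes a structured
# record that a platform (or a quarterly review) can learn from.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Override:
    agent: str         # which agent was corrected
    task: str          # e.g. "bsa_alert_triage"
    ai_output: str     # what the AI proposed
    human_output: str  # what the human decided instead
    reason: str        # why the human disagreed
    when: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

log: list[Override] = []

def record_override(agent, task, ai_output, human_output, reason):
    entry = Override(agent, task, ai_output, human_output, reason)
    log.append(entry)
    return entry

def corrections_by_task():
    """Weekly review helper: which tasks generate the most corrections?"""
    counts = {}
    for o in log:
        counts[o.task] = counts.get(o.task, 0) + 1
    return counts
```

Even this much structure turns “the AI was wrong a lot” into “the AI was wrong 14 times on alert triage, 11 of them for the same reason,” which is the difference between frustration and calibration.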
Days 46-90: Expand and formalize
Weeks 7-8: Second workflow.
- Launch a second pilot in a different department. Use the lessons from the first pilot to move faster.
- Your champion from the first pilot should present results to the broader team. Let them tell the story. Peer credibility beats vendor slides every time.
Weeks 9-10: Governance framework.
- Formalize your AI governance policy. It doesn’t need to be 50 pages. It needs to cover:
  - Which AI tools are sanctioned for institutional use
  - What data categories can and cannot be processed by AI
  - Agent identity requirements (every AI touching member data has a name and an audit trail)
  - Human-in-the-loop requirements by risk tier
  - Incident response: what happens when an AI makes a mistake
  - Examiner communication: how you’ll explain your AI program during the next exam
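Those six coverage areas can live as a machine-readable structure alongside the written policy, so the rules the agents enforce are literally the rules the document states. A sketch, where every field name and value is an illustrative assumption:

```python
# Sketch: the governance policy's six coverage areas as one structure.
# Field names and values are illustrative assumptions, not a standard.

GOVERNANCE_POLICY = {
    "sanctioned_tools": ["internal-platform", "approved-vendor-assistant"],
    "data_rules": {
        "allowed": ["public", "internal"],
        "prohibited": ["member_pii_unmasked", "card_numbers"],
    },
    "agent_identity": {
        "name_required": True,
        "audit_trail_required": True,   # every member-data touch is logged
    },
    "human_in_the_loop": {
        "low_risk": "none",             # e.g. internal drafts
        "medium_risk": "post_review",   # e.g. member-facing communications
        "high_risk": "pre_approval",    # e.g. account actions, filings
    },
    "incident_response": ["kill_switch", "root_cause_review",
                          "member_notification_if_impacted"],
    "examiner_communication": "quarterly_summary_plus_on_request",
}

def hitl_requirement(risk_tier):
    """Look up the human-in-the-loop requirement for a given risk tier."""
    return GOVERNANCE_POLICY["human_in_the_loop"][risk_tier]
```

Keeping the policy in this form has a practical benefit for week 13: updating the governance framework after the retrospective is an edit to one structure, not a redline of a 50-page PDF.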
Weeks 11-12: Board update and next phase planning.
- Present pilot results to the board with hard numbers. Not “AI is the future.” Instead: “Our BSA team processed 40% more alerts with the same headcount. Error rates were flat. Here’s the audit trail. Here’s what the next six months look like.”
- Plan your Level 2 → Level 3 roadmap: which workflows next, what infrastructure investments are needed, what training your team needs.
Week 13: Retrospective.
- What worked? What didn’t? What surprised you?
- Update your governance framework based on real-world experience.
- Identify your force multipliers — the people who took to AI naturally and are already teaching others.
- Write down what you learned. Not for a filing cabinet. For the next credit union leader who calls you and asks “how did you do this?”
Days 1-14:
- Staff AI usage survey
- Identify top 3-5 workflows
- Board briefing + budget ask
Days 15-45:
- Deploy first workflow
- Measure baseline + iterate
- Cross-department observation
Days 46-90:
- Second workflow in new department
- Governance framework formalized
- Board results presentation
The Compounding Curve
Geoff ends his piece with the right framing: “In lieu of a master plan, we just started. We kept building tools, kept raising the bar, kept investing in data and AI infrastructure, kept creating venues for people to show off. Each track compounded separately. As they reinforced each other, the curve went vertical.”
The compounding curve is real. And it’s the single most important concept for credit union leaders to internalize — because the curve works in both directions.
If you start now, every month adds institutional context — the anti-fragile moat I described in Article 34. By month 18, you have a context layer that no competitor can replicate because it’s built from your data, your corrections, your institutional knowledge.
If you wait, the curve compounds against you. The gap between Level 3 and Level 0 institutions will become visible to members within two years: one credit union answers a member’s complex question in seconds with full context; another puts that member on hold for eight minutes while someone searches three systems manually. Members will notice. Younger members especially will notice. And they’ll vote with their deposits.
Explore the animated version at assets.runlineai.com/diagrams/the-adoption-playbook-companion.html
This is the window I described in Article 16 — the 18-month window where the credit unions that act will separate from those that don’t. Ramp’s numbers show what’s possible when an organization commits. Your numbers won’t look like Ramp’s — they’ll look like yours. But the compounding curve is the same.
What Ramp Doesn’t Tell You
One more thing. Geoff’s piece is honest, well-written, and full of useful principles. But there are things it doesn’t address — things that matter enormously for credit unions.
It doesn’t address AI governance in a regulated environment. Ramp doesn’t have examiners. You do. Your AI program needs to be examiner-ready from day one — not as an afterthought, not as a bolt-on after the pilot succeeds. Build the audit trail first. Build the agent identity framework first. Build the human-in-the-loop approvals first. Then deploy. I covered the full governance architecture in Article 29: every agent needs a name, a trust tier, defined permissions, and a kill switch.
It doesn’t address the vendor risk of shared-model AI. Ramp built everything in-house. Most credit unions will partner with vendors. And most AI vendors in financial services are running shared-model architectures where your institutional knowledge subsidizes your competitor’s improvement. I covered this in Article 18: the distinction between renting intelligence and owning it isn’t academic. It determines who captures the long-term value of every AI interaction your institution has.
It doesn’t address what happens when the AI is wrong. Ramp’s AI ships code and tools — if an internal tool breaks, an engineer fixes it. Your AI will interact with member data, compliance systems, and regulatory filings. When it’s wrong, the consequences are different. Your governance framework needs to account for AI errors as a first-class concern, with incident response procedures, member notification protocols, and examiner communication plans.
It doesn’t address the human side beyond productivity. Ramp frames everything through a productivity lens — which makes sense for a high-growth fintech. But credit unions serve communities. Your AI program needs to account for the members who don’t want to talk to a bot. The employees who are afraid AI will take their job. The board members who saw Terminator and think you’re building Skynet. The human side of AI adoption requires empathy, communication, and genuine reassurance — not just a leaderboard.
The Invitation
Ramp proved something important last week: that a company can transform its entire workforce’s relationship with AI in 18 months. Not with a master plan. With culture, infrastructure, and relentless iteration.
Your credit union can do the same. Not by copying Ramp’s playbook — by building your own. One that respects your regulatory reality, draws on your institutional depth, and compounds on the patient capital structure that makes cooperatives the most resilient financial model ever designed.
The principles are universal: start now, build infrastructure, give people stages, remove unnecessary friction, and watch it compound.
The implementation is yours to define.
In the next article, I’ll walk through the specific technology architecture that makes this possible — the infrastructure layer that serves as your institution’s “Glass,” built for regulated financial institutions from the ground up. What it connects to, how it governs itself, and why the credit unions beginning to deploy it are building context moats that will define the next decade of competitive differentiation.
The curve is compounding. The window is open. The only question is whether you step through it.
Sean Hsieh is the Founder and CEO of Runline, an AI operations platform purpose-built for credit unions. He previously founded Concreit, an SEC-regulated real estate investment platform, and served in leadership roles at Flowroute (acquired by Intrado). He can be reached at sean@runlineai.com.


