Every Department Has a Split Line. AI Knows Where It Is: A Department-by-Department Guide to What AI Should — and Shouldn't — Touch

Every credit union department has an intelligence-to-judgment ratio that determines exactly where AI delivers ROI on day one. A department-by-department map of BSA, lending, collections, HR, and IT — with the split lines drawn.

By Sean Hsieh
17 min read
Published February 27, 2026

“Writing code is mostly intelligence. Knowing what to build next is judgement.”

That line comes from a Sequoia Capital analysis by Julien Bek that racked up 646,000 impressions in a week — and it’s the cleanest framework I’ve seen for understanding where AI creates value and where humans remain irreplaceable.

I want to pull back the curtain here, because this framework didn’t click for me in a pitch deck or a VC meeting. It clicked in a credit union back office.

I walked into Heartland Credit Union and watched a BSA analyst — let’s call her Diane — toggle between six separate systems to triage a single alert. She’d been doing this for 22 years. She was extraordinarily good at it. And she was drowning. Triaging that alert was mostly intelligence — matching transaction patterns against rules, pulling member history across systems, cross-referencing prior alerts. Deciding whether to file a SAR was judgment — weighing investigative instinct, examiner expectations, institutional risk tolerance, and the kind of contextual pattern recognition that comes from two decades of watching the same membership.

Every department in your credit union has this ratio. Intelligence work on one side — rules-based, pattern-matching, data-gathering, verifiable. Judgment work on the other — experience-dependent, context-sensitive, relationship-driven, irreducible.

Map the ratio, and you have a precise blueprint for where AI agents deliver ROI on day one — and where your people remain essential. This isn’t replacement. It’s separation of concerns.


Defining the Terms

Intelligence work is any task where the inputs are structured, the rules are documented (or documentable), and the output can be verified against an objective standard. Pulling a member’s transaction history. Checking a loan application against underwriting criteria. Filing a CTR when cash transactions exceed $10,000. Routing an employment verification request to the right template. Running a skip trace. Generating a compliance report.

Intelligence work has a critical property: a competent person following the procedure will produce the same output as any other competent person following the same procedure. The work is valuable — often essential — but it’s not differentiated by who performs it. It’s differentiated by how quickly and accurately it gets done.
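That verifiability is what makes intelligence work automatable in the first place. As a minimal sketch of the CTR example above (the function, the transaction schema, and the same-day aggregation rule are my illustrations, not a real filing system), the check reduces to aggregating cash activity against a fixed threshold:

```python
from collections import defaultdict

CTR_THRESHOLD = 10_000  # FinCEN currency transaction report threshold, USD

def flag_ctr_candidates(cash_transactions):
    """Aggregate same-day cash activity per member and flag totals over the threshold.

    `cash_transactions` is a list of (member_id, date, amount) tuples;
    the schema is illustrative, not a real core-system API.
    """
    totals = defaultdict(float)
    for member_id, date, amount in cash_transactions:
        totals[(member_id, date)] += amount
    return sorted(key for key, total in totals.items() if total > CTR_THRESHOLD)

txns = [
    ("M-100", "2026-02-27", 6_000),
    ("M-100", "2026-02-27", 5_500),  # same member, same day: $11,500 total
    ("M-200", "2026-02-27", 9_900),  # under the threshold, no CTR
]
print(flag_ctr_candidates(txns))  # → [('M-100', '2026-02-27')]
```

Any competent person running this procedure gets the same answer, which is exactly the property that defines intelligence work.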

Judgment work is any task where the inputs are ambiguous, the rules are incomplete, and the output depends on experience, values, or relationship context that can’t be fully codified. Deciding whether a pattern of transactions is genuinely suspicious or just unusual. Approving a loan for a member whose numbers don’t quite work but whose 15-year relationship with the credit union tells a story the spreadsheet can’t. Handling a member in financial hardship with the right combination of empathy, policy knowledge, and creative problem-solving. Choosing which technology vendor to trust with your core infrastructure for the next seven years.

Judgment work has a different critical property: two experienced people might reasonably reach different conclusions, and both might be right. The work is valuable precisely because of who performs it — their experience, their institutional context, their relationship with the member or the examiner or the board.

The Sequoia framework calls this the intelligence-to-judgment ratio. AI handles intelligence. Humans handle judgment. The companies that figure out where one ends and the other begins — department by department, task by task — are the ones that deploy AI effectively. The ones that don't figure it out land in one of two failure modes: underdeployment (humans still doing intelligence work that AI should handle) or the Klarna problem (AI doing judgment work that humans should handle, and customer satisfaction cratering as a result).


The Department-by-Department Map

I’ve spent months embedded inside credit union operations — at Heartland Credit Union, at CU*Answers’ data center, in conversations with CUSO partners, compliance officers, lending teams, HR coordinators, and IT staff across the industry. Here’s the intelligence-to-judgment map as I’ve observed it, department by department.

BSA and Compliance: 90-95% Intelligence / 5-10% Judgment

This is the highest intelligence ratio in the credit union — and it’s why BSA is the ideal starting point for AI deployment.

The intelligence layer: triaging alerts against rules-based thresholds, pulling transaction history across accounts, cross-referencing joint holders and related accounts, checking OFAC lists, gathering documentation, drafting tracker notes using standardized templates, preparing CTR filings, assembling the factual foundation of SAR narratives. In Article 6, I described what this looks like in practice — your BSA analyst opens five or six separate systems every morning and spends hours clearing alerts that turn out to be Maria the florist making her weekly cash deposit. Ninety-five percent of alerts are false positives. That triage — the pattern matching, the data gathering, the template application — is pure intelligence work.

The judgment layer: deciding whether a pattern of activity is genuinely suspicious. Interpreting examiner expectations — which examiner asks which follow-up questions, which documentation format survives scrutiny. Making the SAR filing decision itself: is this activity suspicious enough to report, or is there an innocent explanation? Maintaining the examiner relationship. At one CUSO I worked with, the BSA analysts were running at 125% capacity — 60-hour weeks, 400-plus CTRs per month. They weren’t struggling with the judgment calls. They were buried under the intelligence work that kept them from getting to the judgment calls.

An AI Runner handles the 90-95%. Your BSA analyst handles the 5-10%. The analyst’s job doesn’t shrink — it concentrates. Instead of spending 90% of her day on intelligence work and 10% on judgment, she spends 90% of her day on the work that actually requires her expertise.

Lending: 85% Intelligence / 15% Judgment

The intelligence layer: collecting borrower documentation — pay stubs, tax returns, bank statements. Running credit pulls. Verifying employment. Checking debt-to-income ratios against underwriting guidelines. Confirming property valuations. Ensuring compliance with TRID, HMDA, fair lending requirements. Generating disclosure documents. Tracking conditions and clearing them against documented criteria. I watched the lending team at one credit union — 11 loan processors touching five to seven systems per loan, with triple manual data entry on commercial files. The loan officer told me he spent more time fighting the systems than talking to members. That’s intelligence work — procedural, rule-governed, verifiable — consuming the people whose actual value is relationship lending.

The judgment layer: the edge cases. The member whose DTI is 44% but who has a 20-year relationship, a stable career, and a clear explanation for the temporary income dip. The commercial loan where the financial statements look solid but something about the borrower’s story doesn’t sit right. Relationship lending — the reason credit unions exist — is judgment work. It requires knowing the member, understanding the community, and making decisions that a spreadsheet can’t justify but experience can defend.

Centris Federal Credit Union automated 63% of loan decisions, up from 43%, enabling 30% volume growth with the same staff. That 20-point increase represents intelligence work that moved from humans to AI. The remaining 37% — the applications that require human review — is where the judgment lives.

Member Service: 80% Intelligence / 20% Judgment

The intelligence layer: balance inquiries, transaction history lookups, card freezes, address changes, routine product questions, password resets, transfer requests, FAQ-level questions about rates and fees. At one credit union partner, over 80% of incoming calls were debit and credit card related — largely procedural inquiries with documented resolution paths. Each call costs $15-$25. Most follow a script.

The judgment layer: the member going through a divorce who needs help restructuring joint accounts and doesn’t know where to start. The small business owner whose account was flagged for unusual activity and needs someone who understands their business model to explain why the deposits are legitimate. The elderly member who’s confused about a fee and needs patience, not efficiency. Financial hardship conversations where the right answer depends on the member’s specific situation, the credit union’s policies, and the representative’s ability to find creative solutions within those policies.

Gartner predicts agentic AI will resolve 80% of common service issues autonomously by 2029. That 80% is intelligence work. The 20% — the conversations that require empathy, nuance, and relationship — is where credit unions differentiate themselves from banks and fintechs. It’s the reason members chose a credit union in the first place.

HR: 85% Intelligence / 15% Judgment

The intelligence layer: processing employment verifications. At Heartland, Kari was handling five to ten per week at 15-30 minutes each: steady, important, and entirely automatable. Routing onboarding documents across departments. Administering benefits enrollment. Calculating vacation accruals for 400-plus employees. Generating compliance reports. Tracking certifications and renewal deadlines. Payroll processing and error correction. In Article 7, I estimated 260 hours per year of automatable HR work at a single CUSO.

The judgment layer: hiring decisions — reading between the lines of a resume, assessing cultural fit, deciding whether a candidate’s potential outweighs their experience gap. Performance conversations that require understanding an employee’s personal circumstances, growth trajectory, and the team dynamics that don’t show up in any dashboard. Strategic workforce planning — how many BSA analysts do we need in three years given regulatory trends? Managing the retirement cliff I described in Article 10 — deciding which knowledge to capture first, how to structure mentorship, when to start succession planning for a 22-year veteran who’s turning 64.

Collections: 75% Intelligence / 25% Judgment

The intelligence layer: researching member payment history before every call. Running skip traces. Checking account status across systems. Reviewing compliance requirements — which disclosures are required, which collection practices are prohibited in which states. Documenting every contact attempt. Generating demand letters from templates. I mapped this in Article 7: agents spending 5-10 minutes researching each member before a five-minute call, 320 calls per week, 1,820 hours per year in research alone.

The judgment layer: the conversation itself. Assessing whether a member’s hardship claim is genuine. Negotiating a payment plan that the member can actually sustain. Deciding when to recommend a loan modification versus when to proceed with recovery. Reading tone, managing emotion, knowing when to push and when to back off. Collections has the lowest intelligence ratio on this list because the human interaction — the negotiation, the empathy, the judgment about each member’s unique situation — is a larger share of the total work. But 75% is still a massive automation opportunity.

IT: 90% Intelligence / 10% Judgment

The intelligence layer: patching systems against published vulnerability lists. Monitoring uptime and performance metrics. Provisioning user accounts. Managing backup schedules. Processing helpdesk tickets against documented resolution procedures. Running security scans. Updating firewall rules per documented policy. For credit unions with 3-15 person IT teams, this procedural work consumes nearly all available bandwidth.

The judgment layer: architecture decisions — which systems to consolidate, which to replace, how to sequence a core conversion. I’ve been inside a CU*Answers data center. Their IBM Power server — a $5 million machine with 75 CPUs — runs programs between 500 and 40,000 lines that nobody fully documented. The judgment call isn’t whether to modernize. It’s how to sequence a modernization without breaking 30 years of accumulated logic. Vendor evaluation — not just feature comparison, but assessing vendor stability, support quality, and strategic alignment over a 5-7 year contract. Security incident response when the playbook doesn’t cover the specific scenario. These decisions require understanding the institution’s technology stack, its risk tolerance, and its strategic direction in ways that no checklist captures.


The Separation of Concerns

If you’ve written software — or managed anyone who has — the phrase “separation of concerns” is familiar. It’s a design principle: each component of a system should handle one distinct responsibility. The database stores data. The application processes logic. The interface presents results. When responsibilities blur, systems become brittle, hard to maintain, and impossible to scale.

My first company, Flowroute, taught me this lesson in telecom infrastructure. You don’t build a voice network by having one system handle signaling, media, billing, and routing. You separate concerns. The system that decides where a call goes is not the system that carries the voice packets. That architectural discipline is what let us scale to billions of call minutes before Intrado acquired us. The same principle applies to your credit union’s operations — and almost nobody is applying it.

Credit union operations today violate separation of concerns everywhere. Your BSA analyst stores data (gathering transaction records), processes logic (applying rules to determine suspicion), and presents results (drafting the SAR narrative) — all in the same person, in the same workflow, across five or six disconnected systems. Your loan processor collects documents, verifies compliance, and makes underwriting recommendations in a single undifferentiated workflow.

AI enables the separation. The intelligence layer — data gathering, rule application, template execution, compliance checking — moves to agents. The judgment layer — decision-making, relationship management, institutional discretion — stays with humans. Each layer does what it’s best at. Neither replaces the other.
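The boundary can be made explicit in code: the intelligence layer returns a structured brief of facts and rule hits, and the judgment layer takes the human decision as an input it never computes. A minimal sketch, where every name, the structuring rule, and the schema are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AlertBrief:
    """What the intelligence layer hands the analyst: facts and rule hits, no verdict."""
    alert_id: str
    rule_hits: list = field(default_factory=list)
    draft_narrative: str = ""

def intelligence_layer(alert_id, transactions, rules):
    """Agent-side work: apply documented rules and assemble the factual brief.
    `rules` maps a rule name to a predicate over the transaction list (illustrative)."""
    hits = [name for name, applies in rules.items() if applies(transactions)]
    narrative = f"Alert {alert_id}: {len(transactions)} transactions; rules triggered: {hits or 'none'}."
    return AlertBrief(alert_id, hits, narrative)

def judgment_layer(brief, analyst_decision):
    """Human-side work: the SAR call itself. The decision arrives as an input;
    nothing in this layer computes it."""
    return {"alert_id": brief.alert_id, "file_sar": analyst_decision, "basis": brief.draft_narrative}

rules = {
    # Illustrative structuring heuristic: three or more cash amounts just under $10,000.
    "structuring": lambda txns: len([t for t in txns if 9_000 <= t < 10_000]) >= 3,
}
brief = intelligence_layer("A-17", [9_500, 9_600, 9_800], rules)
decision = judgment_layer(brief, analyst_decision=True)  # the analyst decides, not the agent
```

The design choice that matters is the signature of `judgment_layer`: because the decision is a parameter rather than a computation, the separation survives any change to the rules above it.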

This is how humans stay at the helm — the architecture I described in Article 10. Not by doing everything themselves, but by focusing on the work that requires their judgment while AI handles the work that requires their time. The BSA analyst doesn’t supervise the AI doing alert triage. She reviews the AI’s output and applies her judgment to the cases that matter. The loan officer doesn’t watch the AI collect documents. He reviews the AI’s pre-screening and spends his time talking to members about their financial goals.

And this is how the 50-person credit union operates at 200-person capability — the vision I described in Article 12. Not by hiring 150 more people. Not by replacing 150 people with AI. By separating intelligence from judgment across every department, automating the intelligence layer, and letting 50 humans do the judgment work that used to be buried under 80-90% intelligence overhead.


The Retirement Cliff Through the Intelligence-Judgment Lens

In Article 10, I described the retirement cliff: 4.1 million Americans turning 65 in 2024, 52% of credit union CEOs expecting to retire within six years, and institutional knowledge walking out the door at a rate of 11,200 retirements per day.

The intelligence-judgment framework reveals why knowledge loss hits so hard and how to mitigate it.

When your 22-year BSA analyst retires, you lose both layers simultaneously. You lose the intelligence layer — her familiarity with the systems, the data sources, the filing procedures, the templates. And you lose the judgment layer — her investigative instinct, her examiner relationships, her pattern recognition for genuinely suspicious activity.

Without AI, a new hire must rebuild both layers from scratch. The Stanford/MIT study I cited in Article 10 found that AI compressed the experience curve by four months — novice agents with AI performed as well as agents with six months of tenure without AI. But that compression applies primarily to the intelligence layer. AI teaches the new hire where to look, what data to gather, which templates to use, how to structure a SAR narrative. It transfers the procedural knowledge.

The judgment layer is harder. It’s built through experience — watching thousands of alerts, seeing which patterns turn out to be real, learning what examiners expect, developing the instinct that distinguishes a genuine threat from noise. You can’t shortcut judgment. But you can give the new hire dramatically more time to develop it.

Here’s the math. If your departing analyst spent 90% of her time on intelligence work, a new hire without AI also spends 90% of their time on intelligence work — leaving 10% for developing judgment. With AI handling the intelligence layer, the new hire spends 80-90% of their time on judgment work from day one. They’re reviewing AI-triaged alerts, not triaging raw feeds. They’re editing AI-drafted narratives, not writing from blank pages. They’re learning what to look for, not where to look.

At Runline, we’ve seen this firsthand with our own team. We run Runline on Runline — five AI agents named Woz, Ada, Byron, Linus, and Emila, operating under trust tiers from “training wheels” to fully autonomous. When a new team member joins, they don’t start by learning our systems from scratch. They start by reviewing what the agents produce, correcting the edges, building judgment from day one. The intelligence layer is already handled. The question isn’t “can you operate the tools?” It’s “can you evaluate the output?”

Your BSA analyst shouldn’t spend 90% of her day on intelligence work. She should spend 90% of her day on the 5-10% that requires her judgment. And when she retires, the intelligence layer doesn’t walk out the door with her — because it was never in her head to begin with. It was in the system. What the new hire needs to learn — the judgment — is exactly what they get to focus on.


The Practical Sequence

Knowing the ratio is half the battle. The other half is deploying in the right order.

Start with the highest intelligence ratios. BSA at 90-95% intelligence is the obvious first target — the work is heavily procedural, the pain is acute (Article 6’s 95% false positive rate), the ROI is measurable (Article 7’s 1,560 hours per year), and the risk profile favors internal AI (errors caught in human review, not exposed to members). IT at 90% intelligence is the next candidate, especially for credit unions already outsourcing managed services.

Move to mid-range ratios once trust is established. Lending at 85% and HR at 85% have clear automation opportunities, but the judgment component is more nuanced — underwriting edge cases, hiring decisions — so the trust tier matters more. Start these departments at training wheels, with human review of every AI output, and progress to supervised as accuracy proves out.

Approach lower ratios last. Collections at 75% intelligence still represents massive efficiency gains — 1,820 hours per year at one CUSO — but the judgment component is interpersonal. The AI does the research. The human does the conversation. Member service at 80% intelligence is high-ratio but member-facing, which means errors are visible and trust-sensitive. Deploy here after internal departments have validated the technology.

The intelligence-judgment ratio isn’t just a diagnostic tool. It’s a deployment roadmap.


Not Replacement. Separation.

The anxiety around AI in credit unions — in any workplace — stems from a single fear: replacement. And the fear is understandable when the conversation is framed as “AI versus humans.”

The intelligence-judgment framework reframes it. AI doesn’t replace your BSA analyst. It replaces the 90% of her day that doesn’t require her expertise. It doesn’t replace your loan officer. It replaces the document collection, compliance checking, and data entry that keep him from talking to members. It doesn’t replace your HR coordinator. It replaces the employment verifications and benefits calculations that keep her from strategic workforce planning.

Separation of concerns. The intelligence layer moves to AI. The judgment layer stays with humans. Each gets better because neither is burdened with the other’s work.

Here’s my contrarian take on the Sequoia analysis that inspired this article. Bek wrote it about the broader economy — insurance, accounting, healthcare, IT services. But credit unions are a near-perfect case study. Small institutions. Heavy regulatory burden. Intelligence-heavy workflows. Judgment that’s deeply relationship-dependent. A mission built on people helping people.

AI doesn’t change that mission. It clarifies it. The mission was never “people doing intelligence work.” It was “people exercising judgment in service of their members.” Everything else is overhead. And overhead is what AI was built to handle.


Sean Hsieh is the Founder & CEO of Runline, the secure agentic platform for credit unions. Previously, he co-founded Flowroute (acquired by Intrado, 2018) and Concreit, an SEC-regulated WealthTech platform managing real securities under dual federal regulatory frameworks.

Next in the series: we continue mapping the practical deployment sequence for credit unions entering the agentic era — from first Runner to full institutional coverage.

Get Started

Ready to see what stateful AI agents can do for your credit union?

Runline builds purpose-built AI agents for regulated financial institutions. Every interaction compounds institutional intelligence.

Schedule a Demo