Your Best People Will Get Busier

AI handles 80–90% of the work, but the last mile — judgment, taste, exception, relationship — doesn't compress. It expands. The bar moves with the floor. Today's 99% solution is tomorrow's 50%. Credit unions that build for the asymptote become value factories. The ones that don't lose their best people to the ones that did.

By Sean Hsieh
11 min read
Published April 28, 2026
By Sean Hsieh, Founder & CEO, Runline


Last week’s piece was about teaching agents how to use the credit union. This one is about what your best people do after the agents have done their work — and why that work gets richer, not thinner.

Aaron Levie, the CEO of Box, posted on X this week about what he calls the “never-ending last mile of work” — and it crystallized something I have been circling for months.

His argument runs in three beats. AI agents handle a remarkable share of previously-manual work — call it 90%. The last mile — review, judgment, taste, the call where you fix what the agent got wrong — is what determines whether the output is valuable or slop. And: “Doing 90% of the work is irrelevant if you haven’t been able to ship the whole thing.” The last mile is the work.

This is not just a software-industry observation. It is even more true in regulated finance, where every last mile has an examiner sitting at the end of it, every judgment call has a member relationship attached, and every exception is the thing that earns the next thirty years.

Most credit-union leaders I talk to read this kind of post and conclude “good — AI handles the volume, my people focus on relationships.” That gets the direction right and the magnitude wrong. The work is not compressing. It is expanding. And the credit unions that do not restructure for it will face three specific outcomes in the next twenty-four months: senior people who quit (because they are doing routine work the AI should have absorbed), mid-tier people doing senior work badly (because the seniors left), and an examiner who notices (because the bar for SAR narratives, lending memos, and member-services QA has risen and your output has not kept up).

That is the cost of getting the magnitude wrong.


Three roles, three years out

In Article 24, “Intelligence Work vs Judgment Calls,” I mapped the snapshot ratio between what AI handles and what humans handle, department by department. This piece is about what happens to that slice over time — because the slice does not sit still.

As I read it, the trajectory runs roughly like this. Three roles, each following the same arc: today’s split → year three → year five.

BSA / fraud analyst. Today, AI clears the bulk of false-positive alerts. The irreducible slice is roughly 5%. By year three, AI hits near-99% triage accuracy and the irreducible slice is closer to 1% — but each case is harder than any of today’s hardest 5%. The patterns that survive near-99% triage are the ones that look novel. By year five, each case takes longer than today’s hardest cases did, because the bar for what counts as a complete narrative has risen. The analyst’s day moves from clearing routine alerts in minutes to investigating a small number of cases at depth. Same hours. Deeper work.

A concrete example: a $9,400 cash deposit — the kind of just-under-threshold pattern that triggers analyst review — used to take a few minutes to clear or escalate. A few years from now, the same deposit triggers a cross-account analysis that produces a richer narrative than the analyst could have written in year one.
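A back-of-the-envelope way to see “same hours, deeper work” in the analyst numbers above. All figures here are the article’s illustrative projections plus an assumed daily alert volume, not measurements:

```python
# Illustrative arithmetic: as AI triage accuracy rises, the analyst's
# caseload shrinks but minutes-per-case grow, so total daily judgment
# hours hold roughly steady while depth per case climbs.

ALERTS_PER_DAY = 200  # hypothetical daily alert volume, for illustration

stages = {
    # stage: (human share of alerts, assumed minutes per surviving case)
    "today":  (0.05, 10),    # 5% of alerts, a few minutes each
    "year 3": (0.01, 55),    # 1% of alerts, far harder cases
    "year 5": (0.005, 115),  # 0.5% of alerts, deepest investigations
}

for stage, (human_share, minutes) in stages.items():
    cases = ALERTS_PER_DAY * human_share
    hours = cases * minutes / 60
    print(f"{stage}: {cases:.0f} cases/day x {minutes} min = {hours:.1f} h")
```

Under these assumptions the analyst’s judgment workload stays in a narrow band (roughly 1.7 to 1.9 hours a day) even as the caseload falls by 90% — the hours do not shrink; the depth per case does the moving.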

I have been watching this start to play out. At one credit union I work with, the compliance team’s headcount has not shrunk in over a year of observation — but analyst time per case grew as the easy alerts disappeared. The team did not get smaller. It got sharper.

Lending officer. Today, AI pre-qualifies, runs the math, drafts the memo, surfaces comparable members. By year three, AI auto-decisions the bulk of standard applications. The lending officer’s day is the irreducible slice — the borderline DTI where the member’s mom is co-signing, the underwriting exception that earns a thirty-year relationship, the read on whether this small-business owner’s seasonal dip is a blip or a slow collapse. By year five, those exceptions take longer per case than today’s exceptions did, and the officer is now the source of the rule the agent encodes.

Member service representative (MSR). Today, AI handles routine inquiries — balance, transactions, transfers. By year three, AI absorbs the bulk of routine and most policy lookup. The MSR’s day is the calls where empathy, policy interpretation, and judgment converge — the call where you save a member from a $200 overdraft cascade because you knew her flower shop has a Tuesday cycle. By year five, the MSR is the team’s lead trainer for the agents that handle the rest, and the source of the cases the agent flags up.

In each role, AI eats the volume and reveals the irreducible core. The expertise required to occupy that core grows. The judgment density per hour grows. This is not a softening of these jobs. It is a sharpening of them.

The BSA analyst’s arc, rendered as a timeline (numbers are illustrative projections, not industry forecasts):

  • Today, AI 95% / Human 5%: AI clears the bulk of false-positive alerts; the analyst handles ~5% of alerts at a few minutes each.
  • Year 3, AI 99% / Human 1%: AI hits ~99% triage accuracy; the analyst handles ~1% — but each case is harder than today’s hardest. The irreducible slice is what survives automated review: the patterns that look novel.
  • Year 5, AI 99.5% / Human 0.5%: each case takes longer, and the bar for the SAR narrative has risen with examiner expectations.

The volume shrinks. The depth grows. Same hours, deeper work — and the human slice gets denser, harder, and higher-bar at every step.


The asymptote

Today’s 99% solution is tomorrow’s 50% solution. The bar moves with the floor.

That is Levie’s recursive insight — and once you take it seriously, it changes how you plan a workforce.

The dynamic is mathematical. As automation gets better, the routine slice shrinks toward zero, but the irreducible slice keeps refining, keeps demanding more of whoever occupies it. There is no value of automated % at which the human disappears. The limit recedes with each step toward it. This is the mathematics of an asymptote — a function that approaches a limit it never reaches. You can drive the routine slice arbitrarily small; you cannot eliminate the slice.

The math, in one picture
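The shrinking-slice dynamic can be written as two limits (my notation, not Levie’s). Let $a \in [0,1)$ be the automated share of the work and $d(a)$ the judgment density demanded per unit of the remaining human slice. If $d(a)$ grows on the order of $1/(1-a)$ (each halving of the routine slice roughly doubles the depth demanded of what remains), then

\[
\lim_{a \to 1^-} (1 - a) = 0
\qquad\text{but}\qquad
\lim_{a \to 1^-} (1 - a)\, d(a) = c > 0 .
\]

The routine slice can be driven arbitrarily close to zero; the total human judgment work it leaves behind cannot. That is the asymptote in symbols.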

The historical pattern is the same in every prior automation wave. Two cases, then the economic frame that explains them.

Tax. TurboTax did not shrink the tax code. The federal tax code and regulations together fill more than 4 million words, and the CCH Standard Federal Tax Reporter — the compilation tax professionals actually use — runs to 70,000 pages. As the routine return automated, the role of the CPA expanded: into multi-state arbitrage, the IRS challenge, the unique structure that doesn’t fit the form. The CPA’s job got harder, not easier. Professionals who could not move up the value stack lost their seats. The ones who could built bigger practices.

Code. The Linux kernel grew from roughly 20 million lines of code in 2015 to 40 million lines by January 2025 — doubled in a decade, even as IDEs got better, even as AI coding assistants entered. Engineers ship more software, against more sophisticated quality bars, in larger codebases. The bar rose with the floor. As Andrej Karpathy put it in 2023, “the hottest new programming language is English.” The implicit second half is the part that matters: the human who specifies in English has to know more than the human who used to write the code.

What the historical pattern shows, the economists are now formalizing in real time. David Autor and Neil Thompson at MIT made a version of this argument in their June 2025 paper, “Expertise.” Their thesis: AI both devalues and amplifies expertise, and which side a role falls on depends on how the work is structured. Devalues it where the work was procedural and is now automated — the procedural job evaporates. Amplifies it where the work requires judgment that compounds with the tools the expert wields — the irreducible slice, where the bar moves with the floor. Credit unions can pick which side of that fork their roles end up on by how they structure the work.

For credit unions specifically, every percentage point of automation makes the operational bar higher, not lower.

  • When BSA AI hits 99% triage accuracy, examiners expect richer narratives, faster filings, deeper context per case.
  • When member-services AI hits 99% on routine inquiries, the unhappy member expects a real human faster — and the bar for what that human delivers rises.

I do not see a version of this future where the relationship slice — the trust, the exception, the read on a member’s character — gets automated away. That is the good news: the asymptote means human judgment never stops mattering. The bad news is the inverse: the work gets harder, not easier. The credit unions that do not restructure for the new last mile will be measured against an ever-rising bar with the same headcount, and they will lose to the ones that did.

That is the math of the next decade. The strategy question is what you build on top of it.


The value factory

The credit union of the next decade is not an automated assembly line that needs fewer humans. It is a value factory — a machine whose output is judgment, where AI handles throughput so humans can operate at the asymptote.

A value factory has three properties.

1. Expansive last mile. Hire and structure for more judgment per hour, not less. The BSA team’s hours do not shrink — they re-allocate from triage to investigation, from clearing alerts to designing the rules the agents follow. The MSR team does not get smaller — the calls they take get harder, and the team’s senior members get freed to mentor and design rather than answer the same balance question eighty times a day. The total cognitive output of the institution rises. The ratio of judgment-work to routine-work flips.

2. Visible last mile. The right humans see the right cases. Runners surface, not bury. When the lending Runner flags an exception, the system has to know which lending officer is the right one for this member, this situation, this timing. Routing the right case to the right last-mile human is itself a design problem — and it is the natural extension of the institutional memory thread from Article 30, “The AI That Remembers You.” The credit union that knows which analyst handles which kind of case, and which member trusts which MSR, captures more value per judgment than the one that routes randomly.

3. Compoundable last mile. The institution that captures the override patterns will compound; the one that does not will train its competitors’ agents indirectly, by hiring the people they later poach. Every exception, every override, every relationship call is potential training input for the agents that come next. (At Runline, this is the principle Joy — our HR Runner — was built around: an override-capture loop where each refinement an HR director makes informs the next playbook revision. Institutional judgment compounds in code rather than walking out at retirement.)
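A toy version of an override-capture loop, to make the compounding concrete. This is my own illustration of the principle, with hypothetical names, not Runline’s implementation:

```python
from collections import Counter

class OverrideLog:
    """Record each human override of an agent decision, then surface the
    most frequent override patterns as candidates for the next playbook
    revision -- the loop by which institutional judgment compounds in code."""

    def __init__(self):
        self.entries = []

    def record(self, case_tags, agent_action, human_action, rationale):
        """Log one override: what the agent decided, what the human did
        instead, and the human's stated reason."""
        self.entries.append({
            "tags": tuple(sorted(case_tags)),  # normalize for pattern matching
            "agent": agent_action,
            "human": human_action,
            "rationale": rationale,
        })

    def top_patterns(self, n=3):
        """Return the n most common (tags, agent_action, human_action)
        patterns. A pattern that recurs is a rule candidate to encode
        into the agent's next revision."""
        counts = Counter(
            (e["tags"], e["agent"], e["human"]) for e in self.entries
        )
        return counts.most_common(n)

# Usage: two similar lending overrides surface as one encodable pattern.
log = OverrideLog()
log.record({"dti-borderline"}, "deny", "approve", "co-signer relationship")
log.record({"dti-borderline"}, "deny", "approve", "seasonal business, strong history")
log.record({"new-member"}, "escalate", "clear", "known local employer")
print(log.top_patterns(1))
```

The design choice that matters is the normalization step: overrides only compound if similar cases collapse into the same pattern key, so the log can tell the difference between a one-off judgment call and a rule the agent should learn.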

What to do this quarter

  1. Audit your senior people. For each role — BSA, lending, member service — identify the 10–20% of the week that nobody else can do. That is the new role. Everything else is on the agentic chopping block within twenty-four months. If you do not audit, you will lose those people to a credit union that did.

  2. Re-allocate against the asymptote. Free those people from the routine 80%. Pay them for judgment density, not throughput. Restructure their team so the routine routes to agents and the exceptions route to them with full context attached. If you do not re-allocate, your senior people will be doing junior work badly while your junior people watch the agents and learn nothing. Both will leave within a year.

  3. Capture the judgment. Every exception, every override, every relationship call is a potential training input for the agents that come next. The institution that captures these compounds. If you do not capture, you will train your competitors’ agents indirectly — by hiring the people they later poach.

The question is not “can we fully automate?” The honest answer is no — not in the work that creates value at a credit union, because that work is judgment under uncertainty, novelty, and relationship, and those are asymptotic. The real question is “how do we design our humans for the asymptote?”

The credit unions that figure this out will not need fewer people. They will need different ones — at higher altitude, with higher bars, doing work that compounds. Build the value factory.

Get Started

Ready to see what stateful AI agents can do for your credit union?

Runline builds purpose-built AI agents for regulated financial institutions. Every interaction compounds institutional intelligence.

Schedule a Demo