Why Operations Comes Before AI

Aditya Shah currently leads Operations at Junction, and previously led operations/growth at Instacart, Nova Pioneer and AllStripes.

Most conversations about AI and machine learning in healthcare focus on their potential to shift care from reactive to preventive, often by automating administrative overhead. The assumption is that with enough data, better care follows.

But the question is whether your operations are ready to support that AI.

In many early- and growth-stage healthtech companies, the answer is no. Predictive models are often built in environments that assume structured, timely data inputs processed through scalable workflows. The real world doesn't work that way. Data quality varies. Vendor formats change without notice. And healthcare's edge cases, from ambiguous symptoms to out-of-network care, are the rule rather than the exception.

When AI initiatives stall, the issue is rarely the technology itself but the infrastructure around it. The ever-important human-in-the-loop workflows and audit trails are often too brittle to support real-world production use. AI raises the bar for operational maturity, and if that bar isn't met, even high-performing models won't be trusted or adopted.

For leaders building AI/ML healthtech platforms, success depends on investing early in the connective tissue: operations that can absorb complexity, support scale and keep the system trustworthy. Here’s where that work begins.

Design Human-in-the-Loop Systems That Expect Complexity

The idea that AI can replace human judgment is both attractive and dangerous. Healthcare data is noisy and often contradictory. Many cases sit in a gray zone that resists clean categorization. To avoid silent failures or operational gridlock, systems should be designed to anticipate hands-on problem-solving.

Define escalation rules at model design time.

Low confidence scores and incomplete data should trigger automatic routing to human reviewers, not be left to ad hoc decision-making.
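As a minimal sketch of what such rules can look like in code, the routing function below escalates on either condition. The threshold, field names and route labels are all illustrative assumptions, not a prescribed schema:

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative cutoff; tune per model and risk tier
REQUIRED_FIELDS = {"patient_id", "diagnosis_code", "payer"}  # hypothetical schema

def route(record: dict, confidence: float) -> str:
    """Decide at prediction time whether a result auto-processes or escalates."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        # Incomplete data escalates before the model output is even considered
        return "human_review:missing_fields"
    if confidence < CONFIDENCE_THRESHOLD:
        return "human_review:low_confidence"
    return "auto_process"
```

Because the rules live in one function rather than in reviewers' heads, the same decision is made every time, and the escalation policy itself can be versioned and audited.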

Integrate review workflows into core systems.

Avoid side channels like Slack or email. Review activity should be visible, traceable and auditable.

Hire for expertise, not just throughput.

Exception handling shouldn’t be treated as grunt work. It requires clinical literacy and technical awareness. If junior staff are the first line of defense, your escalation protocols need to be robust.

The goal is to absorb real-world complexity, at scale and under pressure, without losing the benefits of intelligent automation.

Make Operations A Feedback Engine, Not A Cleanup Crew

It’s common for operations teams to be positioned downstream of product and engineering decisions. In healthtech, ops is often the first to detect failure, such as when file formats break or patient flows stall midway. Even before AI, errors in managing clinical data contributed to an estimated $200 billion in avoidable U.S. healthcare costs each year. If those signals aren’t feeding back into system design, you’ll be spending more time firefighting than learning.

Institutionalize rapid postmortems.

Don’t wait for quarterly reviews. Run short, structured retros after incidents or releases while the details are still fresh.

Align incentives across teams.

If engineering is measured on feature velocity and ops on SLA compliance, you’re set up for conflict. Shared metrics—like incident recurrence or system reliability—encourage shared responsibility.

Expose operational failures upstream.

Create shared dashboards that surface integration errors and exception trends. If a SQL query is required to see and understand what’s broken, it’s likely it’ll stay broken.

When ops, product and engineering operate in a shared learning loop, reliability improves, and your AI models continue learning in production.

Deploy With The Weight It Deserves

In most industries, a new product release is a code push. In healthcare, it’s a commitment, with clinical and legal consequences. AI outputs can influence eligibility, care coordination, billing and even clinical decisions. And the operational overhead of electronic health records shows how fragile trust in digital systems can be when design shortcuts are taken. Every prediction or action needs to be explainable and traceable.

Define prelaunch operational readiness checklists.

These should cover everything from the usual performance metrics to post-deployment monitoring plans, risk scenarios and human override paths.

Log everything.

Inputs, outputs, reviewer actions—structured logging is the only defense when something breaks and you need to retrace the decision path, or debug and retrain.
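A sketch of what one such audit event might look like, assuming a JSON-lines log format (the field names and destination are illustrative; production systems would write to a durable log store, not stdout):

```python
import json
from datetime import datetime, timezone

def log_decision(record_id, model_version, inputs, output, reviewer_action=None):
    """Build and emit one structured, append-only audit event per model decision."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "record_id": record_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "reviewer_action": reviewer_action,  # None until a human intervenes
    }
    print(json.dumps(event, sort_keys=True))  # one JSON object per line
    return event
```

Logging the model version alongside inputs and outputs is what makes retraining and decision-path reconstruction possible later; without it, you can see what the system did but not which model did it.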

Simulate failure before it’s real.

Run tests that mimic missing data and malformed inputs before patients are affected, and understand how to handle the next steps. Don’t assume the system will fail gracefully—ensure it does.
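One way to run such a drill, sketched with a toy ingestion step and made-up payloads: feed the kinds of malformed records that real vendor feeds produce and assert that every one escalates instead of crashing or passing silently.

```python
def ingest(record: dict) -> str:
    """Toy ingestion step: escalate on bad input rather than crash."""
    try:
        age = int(record["age"])
        if age < 0 or age > 130:
            return "escalate:implausible_age"
        return "accepted"
    except (KeyError, TypeError, ValueError):
        return "escalate:malformed_input"

# Failure drill: payloads that mimic real-world feed breakage
drills = [
    {},                  # missing field
    {"age": "unknown"},  # non-numeric value
    {"age": None},       # null value
    {"age": -4},         # out-of-range value
]
results = [ingest(r) for r in drills]
assert all(r.startswith("escalate") for r in results)
```

The point of the drill is not the toy validator but the habit: every known failure mode gets a payload, and the assertion fails loudly the day a code change makes the system swallow one.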

Your reputation, your customers and your patients all depend on how well you manage what happens when systems fail. Deployment is not a handoff.

Scale Before Growth Forces Your Hand

Many startups delay scaling work until success makes it unavoidable. But AI systems don’t scale cleanly. A 10x increase in volume can generate 100x the exceptions and operational noise unless your foundations are solid. Avoid the scramble by stress-testing systems early.

Map your most manual processes.

Document and streamline them now, before they become the bottlenecks that slow everything else down.

Model operational load, not just user growth.

What happens when you have 10 times the claims, lab connections or case escalations? If the answer scares you, start rearchitecting.
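A back-of-envelope way to ask that question in code, under the loudly stated assumption that exceptions grow superlinearly with volume because edge cases interact (vendor quirks times payer rules times data gaps). The rate and exponent here are illustrative placeholders, not empirical values:

```python
def exceptions_per_day(volume: int, rate: float = 0.02, coupling: float = 1.3) -> float:
    """Rough load model: exceptions scale as rate * volume**coupling.
    coupling = 1.0 would mean linear growth; > 1.0 models interacting
    edge cases that compound as volume rises."""
    return rate * volume ** coupling

today = exceptions_per_day(1_000)    # current daily claims (assumed)
at_10x = exceptions_per_day(10_000)  # after 10x growth
print(f"{at_10x / today:.0f}x more exceptions for 10x volume")
```

Even with this crude model, 10x volume yields roughly 20x exceptions; plug in your own rate and exponent from incident history, and if the output scares you, start rearchitecting.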

Make documentation actionable.

SOPs should live in the tools where work happens, not in PDFs and knowledge bases collecting dust. Redundancy is resilience.

You don’t need to over-engineer from day one, but you should build systems that won’t collapse under their own success.

Finally: Stop Treating Operations As The Backend

In healthtech, operations is often what determines whether an AI product ever reaches patients. Data architecture, structured human review, cross-functional learning loops, deployment governance and scalability planning—these are what separate a working prototype from a platform clinicians trust and payers adopt.

Teams that grasp this early will design systems built for real-world complexity, while those that don’t will need to keep asking why their models excel in testing but disappear in production. AI may be the brain, but operations is the connective tissue. Without it, nothing moves.


Forbes Business Council is the foremost growth and networking organization for business owners and leaders.
