I genuinely believed that if we built better technology, healthcare would automatically get better.
I thought the problem was purely technical. Doctors needed faster documentation, so we’d build faster documentation tools. Patients needed better communication, so we’d create smarter systems. Simple.
I was incredibly naive about how complex this industry really is.
When I started at Scribe Technology Solutions (which later rebranded to MediLogix as we advanced our go-to-market strategy), I watched a pediatrician treat a young child with chronic asthma during one of our early pilots. Before our ambient AI system, she was constantly glancing at her computer screen, typing notes while trying to examine the child. The kid was clearly anxious, and the mother was trying to explain concerning symptoms, but the doctor kept asking them to repeat things because she was mentally juggling the conversation and the documentation requirements.
With our AI running, something completely shifted.
She put her tablet aside, knelt down to the child’s eye level, and had this genuine conversation about how the inhaler was working, what activities were difficult. She was fully present, asking follow-up questions, picking up on the mother’s subtle concerns about medication side effects that she might have missed while typing. The child actually started laughing at one point.
After the visit, she told me something that stuck: “I felt like a doctor again, not a court reporter.”
The Real Cost Nobody Talks About
Healthcare providers aren’t just losing time when they spend 2-3 hours on documentation for every hour of patient care. They’re losing their connection to why they became healthcare providers in the first place.
The real cost isn’t the hours. It’s the cognitive load of constantly switching between being a clinician and being a data entry clerk.
When you’re in an exam room thinking about which boxes you need to check for documentation instead of fully listening to what the patient is telling you, you’re compromising the quality of care. Provider burnout has dropped from 56% in 2021 to 48% in 2023, but nearly half of clinicians burning out is still a crisis.
We’re essentially asking highly trained medical professionals to become part-time secretaries. That’s a massive waste of human capital and expertise.
Providers are leaving the profession not because they don’t love medicine, but because they can’t stand the administrative burden that’s been layered on top of it. They’re losing time with family and loved ones, losing their sanity, and eventually losing the passion that drew them to medicine in the first place.
As a result, organizations are losing medical professionals, revenue, and continuity of care while having to fork out hundreds of thousands of dollars to rehire from a pool facing critical shortages.
Why the Pre-Visit Stage Changes Everything
Most healthcare AI focuses on either the encounter or the aftermath. We built a three-stage system that starts before the patient even walks in the door because that’s where most healthcare encounters are actually won or lost.
I realized this when I saw how much time providers were wasting on preventable issues. A patient shows up and the provider discovers they’re on three medications that weren’t in the system, or they have a wound that’s been developing for weeks but there’s no baseline documentation.
We were essentially starting every encounter from zero information.
This meant the provider had to spend the first 10-15 minutes of a 20-minute appointment just gathering basic data that should have been collected beforehand. That’s not clinical time. That’s administrative time disguised as clinical time.
When we implemented our pre-visit chatbot with speech-to-text capabilities, something interesting happened. Patients actually gave more detailed, honest responses when they weren’t sitting across from the provider feeling rushed.
They’d mention things like “I missed my medication twice this week because I was traveling.” Admissions they might not make face-to-face. The provider walked into that room with a complete picture and could immediately focus on clinical decision-making.
The pre-visit stage isn’t just about efficiency. It’s about starting the clinical relationship from a position of knowledge rather than ignorance.
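To make that concrete, here is a minimal sketch of what a pre-visit intake flow can look like. Everything here is an assumption for illustration: the question set, the flagging keywords, and the `run_intake` helper are invented for this example rather than taken from our production pipeline, and `answer_fn` stands in for the chatbot’s speech-to-text layer.

```python
# Minimal sketch of a pre-visit intake flow. Illustrative only: question set,
# keywords, and helper names are assumptions, not the real MediLogix pipeline.
from dataclasses import dataclass, field

INTAKE_QUESTIONS = [
    "Have any of your medications changed since your last visit?",
    "Have you missed any doses in the past two weeks?",
    "Any new symptoms, injuries, or concerns to discuss?",
]

@dataclass
class PreVisitSummary:
    patient_id: str
    responses: dict = field(default_factory=dict)

    def flags(self) -> list[str]:
        """Surface the answers a provider should see first."""
        keywords = ("missed", "stopped", "worse", "pain")
        return [
            f"{q} -> {a}"
            for q, a in self.responses.items()
            if any(k in a.lower() for k in keywords)
        ]

def run_intake(patient_id: str, answer_fn) -> PreVisitSummary:
    """Ask each question; answer_fn stands in for chatbot + speech-to-text."""
    summary = PreVisitSummary(patient_id)
    for question in INTAKE_QUESTIONS:
        summary.responses[question] = answer_fn(question)
    return summary

# Example: a scripted patient standing in for live speech-to-text input.
scripted = iter([
    "Still on lisinopril, nothing changed.",
    "I missed my medication twice this week because I was traveling.",
    "No concerns.",
])
report = run_intake("patient-0042", lambda q: next(scripted))
for flag in report.flags():
    print("FLAG:", flag)
```

The point of the sketch is the last loop: by the time the provider opens the chart, the “I missed my medication twice” admission is already flagged, so the visit starts at clinical decision-making rather than data gathering.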
The Assumptions That Technology Destroys
We’ve been operating under this myth that more face-to-face time automatically equals better care, but that’s not always true. I’ve seen patients lie directly to their doctors about medication compliance, smoking, or drinking because they feel judged or rushed.
But give them a private moment with our AI system, and suddenly they’re admitting they haven’t taken their blood pressure medication in three weeks.
Another assumption that’s been shattered is this idea that technology creates distance between provider and patient. I watched an orthopedic surgeon who used to spend half his consultation time with his back turned to the patient, typing notes about range of motion and pain scores.
Now, with our ambient AI capturing everything, he’s physically examining the patient, demonstrating exercises, making eye contact while explaining treatment options. The technology actually made the interaction more human, not less.
Patients don’t want their doctors to remember everything perfectly. They want their doctors to have access to perfect information, which is different. They don’t care if the doctor remembers that their pain level was a 7 last Tuesday. They care that the doctor can instantly access that information and use it to make better decisions today.
The Ethical Minefield of Emotional AI
Our emotional tonality analysis through our partnership with Soniox gets into territory that makes a lot of people uncomfortable. We’re essentially teaching machines to interpret human emotions in medical settings.
I’ll give you a concrete example that crystallized this for me.
We had a primary care provider treating an elderly patient for what seemed like routine hypertension follow-ups. The patient always said he was “fine” and “doing well” when asked directly. But our emotional tonality analysis started flagging subtle changes in his speech patterns over several visits.
Slight hesitations, changes in vocal stress when discussing his medications, a flattening of affect that wasn’t obvious to the human ear.
The provider initially dismissed it, but the AI kept noting these patterns. Finally, she decided to dig deeper and discovered the patient was having significant financial stress about his medication costs and had been rationing his pills for months. He was too proud to admit it directly, but his voice was betraying his anxiety every time medication compliance came up.
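For readers curious how “flagging subtle changes over several visits” can work mechanically, here is a hedged sketch: compare each visit’s vocal features against that patient’s own history and flag large deviations. The feature names and the two-standard-deviation threshold are assumptions chosen for illustration; the actual Soniox-powered analysis is more sophisticated and not public.

```python
# Hedged sketch of per-patient vocal-drift flagging. Feature names and the
# threshold are assumptions; the real Soniox/MediLogix analysis is proprietary.
import statistics

def drift_flags(visits: list[dict], threshold: float = 2.0) -> list[str]:
    """Compare the latest visit's vocal features to the patient's own history.

    Each visit is a dict of feature -> value (e.g. hesitations per minute,
    pitch variability). A feature is flagged when the newest value sits more
    than `threshold` standard deviations from that patient's prior mean.
    """
    *history, latest = visits
    flags = []
    for feature, value in latest.items():
        prior = [v[feature] for v in history]
        if len(prior) < 2:
            continue  # not enough baseline visits to judge drift
        mu, sigma = statistics.mean(prior), statistics.stdev(prior)
        if sigma > 0 and abs(value - mu) / sigma > threshold:
            flags.append(f"{feature}: {value:.2f} vs baseline {mu:.2f}")
    return flags

# Four routine visits, then one where hesitations spike and affect flattens.
visits = [
    {"hesitations_per_min": 1.1, "pitch_variability": 0.42},
    {"hesitations_per_min": 1.3, "pitch_variability": 0.40},
    {"hesitations_per_min": 1.2, "pitch_variability": 0.43},
    {"hesitations_per_min": 1.0, "pitch_variability": 0.41},
    {"hesitations_per_min": 3.8, "pitch_variability": 0.18},
]
print(drift_flags(visits))
```

The design choice that matters is the per-patient baseline: the system isn’t comparing the patient to a population norm, it’s comparing him to himself, which is exactly why it caught a change no single visit would have revealed.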
Here’s the ethical minefield: the patient never consented to having his emotional state analyzed. He thought he was just having his words transcribed. We’re essentially doing psychological profiling without explicit permission.
But in this case, it potentially saved his life because untreated hypertension was putting him at serious risk.
The question that keeps me up at night is: if we can detect emotional distress that humans miss, do we have an obligation to use that capability? Or are we crossing a line that patients didn’t know existed?
Global Expansion Reveals Cultural Complexity
When we started expanding internationally to Africa and the UK, I learned we can’t just export American healthcare AI. We have to rebuild it for each cultural and systemic context.
In our preliminary work in Africa, we discovered that the concept of a machine “listening” to emotional cues was deeply problematic in certain communities. There’s a cultural expectation that emotional and spiritual wellness are interconnected with physical health, but that relationship is meant to be interpreted by human healers, not algorithms.
We had to completely disable the emotional tonality features in several pilot programs because it felt invasive in ways we hadn’t anticipated.
The UK presents a different challenge. The NHS culture is much more focused on clinical evidence and standardized protocols than the American system’s emphasis on individual provider autonomy. British doctors were actually more skeptical of our customizable templates.
They wanted to know why they needed personalization when NICE guidelines already provide standardized approaches.
But here’s what’s been fascinating. The basic human problem we’re solving is universal. A GP in Manchester spending 3 hours on documentation after seeing patients all day is just as burned out as a family physician in Ohio. Our support for 62 languages helps bridge these gaps, but the cultural adaptation goes much deeper than translation.
The Economics That Could Kill Innovation
The thing that terrifies me isn’t the technology failing. It’s the economics of healthcare actively working against adoption, even when the clinical benefits are undeniable.
I watched a specialty practice where our AI was working beautifully. Providers were seeing more patients, documentation was perfect, satisfaction scores were through the roof. But then the practice administrator pulled me aside and said something that chilled me:
“This is great, but we bill based on time spent with patients and complexity of documentation. If your system makes everything faster and easier, we might actually make less money per patient.”
That’s the nightmare scenario. We create technology that makes healthcare objectively better, but the reimbursement models punish efficiency. Fee-for-service systems reward complexity and time consumption, not outcomes.
The deeper problem is that healthcare organizations have built their entire business models around the current inefficiencies. They’ve staffed for documentation burden, they’ve priced services assuming administrative overhead, they’ve structured workflows around the assumption that clinical time is scarce.
When we eliminate those constraints, we’re not just changing technology. We’re threatening established economic structures.
We’ve had to completely reframe our value proposition from “save time” to “increase capacity and quality without increasing costs.” Instead of positioning efficiency as the main benefit, we focus on revenue protection and growth opportunities.
A dermatologist who was seeing 20 patients a day can now see 25-30 with better documentation and higher patient satisfaction. That’s not just efficiency. That’s revenue growth.
The Data Supporting Transformation
The evidence for AI-powered documentation is becoming undeniable. In one year, AI scribes used by 7,260 physicians across more than 2.5 million patient encounters saved an estimated 15,700+ hours of documentation time, the equivalent of 1,794 working days.
The psychological transformation is equally compelling: 84% of physicians reported that AI scribes had a positive effect on communication with patients, and 82% said their overall work satisfaction improved.
The highest adoption rates occurred in departments that typically suffer from the highest levels of documentation burden and burnout: mental health, primary care, and emergency medicine.
Our automated coding actually helps practices capture higher reimbursement levels they were missing due to incomplete documentation. We’ve seen practices increase their average reimbursement per visit by 15-20% just through better coding accuracy.
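The mechanism behind that coding lift is easy to illustrate. In the toy sketch below, a visit’s billed E/M level depends on which documentation elements are present, so a more complete ambient note maps to a higher code. The element-to-code mapping is a deliberately simplified stand-in, not real AMA/CMS coding rules.

```python
# Toy illustration of why documentation completeness changes the billed code.
# The element sets below are simplified stand-ins, not real E/M coding rules;
# actual coding follows AMA/CMS guidelines.
def em_level(documented_elements: set[str]) -> str:
    required = {
        "99214": {"history", "exam", "assessment", "plan", "medical_decision"},
        "99213": {"history", "exam", "assessment"},
        "99212": {"history"},
    }
    for code, needs in required.items():  # highest level checked first
        if needs <= documented_elements:
            return code
    return "99211"

# Same visit: manual notes miss two elements the ambient AI captured.
manual = {"history", "exam", "assessment"}
ambient = manual | {"plan", "medical_decision"}
print(em_level(manual), "->", em_level(ambient))  # 99213 -> 99214
```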
Reading the Organizational Tea Leaves
I’ve learned to identify which organizations aren’t ready for this kind of transformation, regardless of how much they might benefit from the technology. The biggest red flag is when an organization treats our AI as a band-aid for deeper systemic problems.
I’ve walked into health systems where they’re asking us to fix provider burnout, but they won’t address the fact that they’re scheduling 40 patients a day per physician or requiring providers to use three different EHR systems that don’t talk to each other.
We had one large hospital network that was excited about our efficiency gains because they thought it meant they could reduce their clinical staff while maintaining the same patient volume. That’s exactly backwards.
They were trying to use AI to extract more value from fewer people instead of using it to make their existing team more effective and satisfied.
The most telling indicator is how they respond when we explain that our system will change their workflows, not just digitize their existing ones. Organizations that are ready for transformation get excited about that possibility. The ones that aren’t ready say things like “we just want it to work exactly like our current process, but faster.”
The Future That’s Coming Whether We’re Ready or Not
The assumption that’s going to sound absolutely ridiculous in five years is this idea that AI in healthcare is about replacing human judgment. Right now, every conversation about medical AI starts with “but the doctor still makes the final decision,” as if we’re building systems to eventually eliminate physicians.
That’s completely backwards.
What we’re actually building is augmented clinical intelligence. Systems that make human doctors superhuman, not redundant. In five years, the idea that a physician would try to practice medicine without AI assistance will seem as absurd as a surgeon operating without proper lighting or a cardiologist reading an EKG without magnification.
Patients who experience truly seamless AI-enhanced healthcare, where their provider has perfect information, unlimited time for interaction, and can focus entirely on their needs, never want to go back to the old system.
In five years, patients will expect their doctors to have AI assistance, and providers who don’t will be seen as practicing outdated medicine.
What Building MediLogix Really Taught Me
Healthcare isn’t broken because of bad technology. It’s broken because of misaligned incentives, entrenched power structures, and cultural resistance to change that goes back decades.
The technology is actually the easy part. The hard part is convincing an entire industry to reimagine how it operates.
I used to get frustrated when a health system would love our clinical results but struggle with implementation. Now I understand that we’re not just asking them to adopt new software. We’re asking them to question fundamental assumptions about how care should be delivered, documented, and paid for.
That’s terrifying for organizations that have built their entire business models around current inefficiencies.
The biggest shift in my thinking has been realizing that sustainable healthcare improvement requires changing hearts and minds, not just workflows. Every successful implementation we’ve had started with someone inside the organization who was willing to champion not just our technology, but the vision of what healthcare could become.
Without that internal advocate who understands both the clinical and business implications, even the best AI in the world will fail.
I’ve gone from being a technology entrepreneur who thought we could engineer our way out of healthcare’s problems to being someone who understands that lasting change requires equal parts innovation, economics, psychology, and politics.
Building what started as Scribe Technology Solutions and evolved into MediLogix has taught me that improving healthcare isn’t a technical challenge. It’s a human one that happens to involve technology.
The pediatrician who felt “like a doctor again, not a court reporter” understood something I’m still learning. The best technology doesn’t replace human connection. It makes space for it to flourish.