Byte-Sized Intelligence | October 2, 2025

Microsoft's AI in healthcare and the broken thread in time

This week: Microsoft’s Dragon Copilot shows why integration matters for adoption; we explore AI’s struggles with time, memory, and continuity.

AI in Action

Microsoft’s AI Attempt at Healthcare Adoption [AI Adoption/Healthcare]

Microsoft’s new Dragon Copilot is drawing attention in U.S. hospitals not because it promises breakthroughs in diagnosis, but because it trims paperwork. Studies suggest doctors spend a third to half of their working hours on documentation, often more than they spend with patients. That burden has made burnout a flashpoint in healthcare, and Copilot is pitched as relief. Built on Nuance Dragon, a dictation tool already trusted by clinicians, and embedded directly in Epic, the country’s dominant electronic health record system, it listens during visits, drafts structured notes, and files them straight into the chart. With Nuance already in use at roughly 77 percent of U.S. hospitals, Copilot feels like an extension of familiar workflows rather than a disruption. Outside the U.S., adoption may prove slower, as countries like Canada and those in Europe contend with fragmented systems and stricter privacy rules.

Doctors describe the appeal in simple terms: they can walk from one exam room to the next while the system fills in the record in the background, ready for approval. Copilot is stitched into existing workflows, which lowers resistance. Yet risks remain. Errors can slip through if notes are skimmed too quickly, and hospitals know each new system handling patient data widens the surface hackers may target. Physicians still carry ultimate responsibility, which reassures regulators but leaves open the question of whether the tool reduces strain or simply changes its shape.

The larger lesson extends beyond medicine. AI adoption tends to work best when it builds on trusted infrastructure and familiar workflows, and it falters when introduced as a stand-alone tool that adds friction. Dragon Copilot shows why integration matters: it has a head start because doctors already know Nuance and Epic, and because its value lies in everyday efficiency rather than bold promises. Whether it ultimately eases burnout or simply shifts the burden to faster review remains to be seen. What is clearer is that AI wins trust when it makes the familiar lighter and easier, not when it asks professionals to reinvent their work. The question is: what invisible work in your industry is waiting for its own Copilot?

Bits of Brilliance

The Missing Thread of Time in AI [AI Limits]

Large language models can write, summarize, and explain with ease, yet they do not keep time the way people do. They are trained on data frozen at a point in time, then updated in occasional jumps. There is no internal clock or lived memory. When a model says “last week,” it is predicting a plausible phrase rather than consulting a calendar. That is why systems sometimes blur timelines, describe future events as if they have already happened, or mix old facts with current ones in a way that sounds fluent but is chronologically wrong.

I see this when I ask my AI to save a topic for the following week’s newsletter. Unless I remind it by pointing back to “last week’s topic,” the thread breaks. It cannot connect one issue to the next because there is no continuous sense of time. The same limitation carries higher stakes in business. An AI asked about “the new regulation” may conflate a draft rule with one already in effect, a confusion that could expose an organization to compliance risk. The lesson is not that AI is careless. It is that, by design, it does not perceive time unless we supply it.
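One practical way to supply time, since the model will not perceive it on its own, is to state today's date explicitly at the top of every prompt. A minimal sketch (the helper name and wrapper format are illustrative, not part of any real API):

```python
from datetime import date

def with_time_context(user_prompt, today=None):
    """Prepend an explicit date so the model need not guess what 'now' means.

    The model has no internal clock, so the caller supplies `today`.
    (Illustrative helper; not part of any vendor's API.)
    """
    today = today or date.today()
    return (
        f"Today's date is {today.isoformat()}.\n"
        "Interpret all relative dates (e.g. 'last week') against that date.\n\n"
        f"{user_prompt}"
    )

print(with_time_context("Summarize last week's newsletter topic.",
                        today=date(2025, 10, 2)))
```

The same preamble can be injected automatically by whatever layer assembles your prompts, so every request carries its own clock.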

Workarounds exist, and they help, but they come with tradeoffs. Retrieval systems can fill gaps by pulling in fresh data, which we’ll explore in a future issue. For now, what matters is that memory and time remain open frontiers. Many researchers see solving continuity as the next big leap in AI, a shift that could make systems far more useful for long-term projects, planning, and enterprise adoption. Philosophers often argue that memory is what makes time real; without it, moments collapse into fragments. Human memory is far from perfect, yet it carries a thread of continuity. AI, by contrast, has only engineered recall, not lived memory, which is why its intelligence remains powerful but incomplete.

Curiosity in Clicks

Prompting AI Around Time [Experiment]

AI often stumbles when you ask it to recall “last week” or “next month.” To avoid this, ground your prompts with concrete markers. Instead of “What did we cover last week?” say “Summarize the October 2, 2025 issue on Dragon Copilot.” Or instead of “What’s happening next month?” ask “What events are scheduled for November 2025?”

A few tips to make time prompts clearer:

Anchor with specific dates or names rather than relative words.

Tie requests to distinct events or identifiers like project names or meeting titles.

When asking for a timeline, add “list in chronological order” to keep sequences straight.

Test it out: give your AI one vague prompt and one anchored with details, then compare the answers. The difference shows how much clearer results become when you take time confusion out of the equation.
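If you anchor prompts often, the substitution can be mechanical. The toy sketch below rewrites a couple of relative phrases into concrete dates before a prompt is sent; the base date, phrase list, and function name are all assumptions for illustration, and real use would want a proper date-parsing library:

```python
from datetime import date, timedelta

# Assumed anchor: the publication date of this issue.
BASE = date(2025, 10, 2)

# Toy phrase table; a real system would use a date-parsing library.
SUBSTITUTIONS = {
    "last week": (BASE - timedelta(weeks=1)).strftime("the week of %B %d, %Y"),
    "yesterday": (BASE - timedelta(days=1)).strftime("%B %d, %Y"),
}

def anchor(prompt):
    """Swap vague relative phrases for explicit dates before prompting."""
    for phrase, concrete in SUBSTITUTIONS.items():
        prompt = prompt.replace(phrase, concrete)
    return prompt

print(anchor("What did we cover last week?"))
# "What did we cover the week of September 25, 2025?"
```

Running the vague and anchored versions side by side makes the experiment above repeatable: the only variable you change is whether the date is explicit.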

Byte-Sized Intelligence is a personal newsletter created for educational and informational purposes only. The content reflects the personal views of the author and does not represent the opinions of any employer or affiliated organization. This publication does not offer financial, investment, legal, or professional advice. Any references to tools, technologies, or companies are for illustrative purposes only and do not constitute endorsements. Readers should independently verify any information before acting on it. All AI-generated content or tool usage should be approached critically. Always apply human judgment and discretion when using or interpreting AI outputs.