Byte-Sized Intelligence August 28 2025

Inside MIT's GenAI Divide

This week: We look at MIT’s latest report, which uncovers why most AI pilots stall, and the hidden factors that help GenAI succeed in real workflows.

AI in Action

Crossing the GenAI Divide [Enterprise/Adoption]

If 95% of generative AI projects are failing, what does it take to be in the 5% that win? That’s the question behind a new report published by MIT (The GenAI Divide: State of AI in Business 2025). The study reviewed more than 300 AI projects, interviewed leaders at 52 organizations, and surveyed 153 executives earlier this year. Its success criterion was strict: only projects showing sustained productivity gains tied directly to profit and loss counted as successes.

The topline result is stark: 95% of initiatives delivered no measurable business impact, while just 5% created real value. The difference was not in the technology, but in execution. The winners focused on specific workflows, measured outcomes in business terms, integrated tools tightly, and often leaned on vendor partners to scale. In fact, the study found vendor-led projects were about twice as likely to succeed as in-house builds. The rest got stuck in pilot, layering shiny tools onto old workflows without ever changing how work actually gets done.

The study also shows where momentum is building. ROI (real business returns like revenue growth, cost savings, or scaled productivity) is hiding in the back office: finance, HR, and document-heavy processes often deliver cleaner results than front-office pilots, even though budgets skew toward sales and marketing. Speed matters too, with mid-market firms outpacing large enterprises in moving from pilot to rollout. The real barrier is workflow fit, since most tools still lack the memory and context needed to stick in real processes.

Think of the 95% figure less as gospel and more as a spotlight. It doesn’t mean AI is doomed; it highlights where adoption is breaking down. The real takeaway is that generative AI isn’t failing because the models don’t work; it’s failing because most organizations don’t know how to use them effectively. Companies that start focused, measure outcomes in ROI terms, and integrate deeply will cross the divide. Those that don’t may find themselves left behind.

Bits of Brilliance

Enterprise AI’s Memory Problem [Enterprise/Adoption]

In MIT’s GenAI Divide report, we saw that most enterprise AI projects fail, not because the models don’t work, but because they don’t fit real workflows. One overlooked reason? Most enterprise AI tools don’t have memory.

Unlike your personal ChatGPT, which can remember details about you across sessions, enterprise AI systems are usually designed to forget. There are good reasons for this: remembering raises compliance risks (think privacy laws and data security), creates ownership questions (whose memory is it, the team, the department, or the whole company?), and demands tricky integration across platforms. Even when the technology is capable, regulators expect audit trails, and many organizations are not culturally ready to let AI carry history without strong guardrails. Forgetting by default is simply safer.

This purposeful design comes with big constraints. It is like working with a colleague who nails today’s task but shows up tomorrow having forgotten the project even exists. A bank piloting a GenAI loan assistant may see a polished demo, yet in production the tool forgets what it flagged last week, asks for the same tax form twice, and cannot track the application through underwriting and compliance. Employees lose trust, customers lose patience, and the project never scales.

Some firms are experimenting with workarounds: plugging AI into external databases that act as “retrieval memory,” limiting memory to single sessions, or leaning on human-in-the-loop systems of record to carry context. On the horizon, agentic AI seems to promise safe, auditable memory that adapts across tasks while still satisfying regulators. However, that memory also raises fresh risks: a bigger attack surface for hackers, the chance of cross-team data leakage, and tougher compliance obligations. In other words, memory may be the missing piece for enterprise AI adoption, but it comes with its own trade-offs that leaders must navigate.
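The “retrieval memory” workaround above can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual design: the model stays stateless while facts live in an external, session-scoped store, so history can be recalled when useful and purged on demand for compliance. All names here (`RetrievalMemory`, `remember`, `recall`, `forget`) are hypothetical, and the keyword match stands in for the vector search a real system would use.

```python
from dataclasses import dataclass, field


@dataclass
class RetrievalMemory:
    """Hypothetical session-scoped retrieval memory.

    The AI model itself remembers nothing; notes live in an external
    store keyed by session, so deleting a session wipes its history,
    preserving the forget-by-default posture regulators expect.
    """
    store: dict = field(default_factory=dict)  # session_id -> list of notes

    def remember(self, session_id: str, note: str) -> None:
        self.store.setdefault(session_id, []).append(note)

    def recall(self, session_id: str, query: str) -> list:
        # Naive keyword match; real systems would use embeddings/vector search.
        terms = query.lower().split()
        return [n for n in self.store.get(session_id, [])
                if any(t in n.lower() for t in terms)]

    def forget(self, session_id: str) -> None:
        # Compliance hook: purge everything for an audit-driven deletion.
        self.store.pop(session_id, None)


memory = RetrievalMemory()
memory.remember("loan-123", "Flagged missing 2023 tax form for applicant")
memory.remember("loan-123", "Income verified via pay stubs")
print(memory.recall("loan-123", "tax form"))  # last week's flag resurfaces
memory.forget("loan-123")
print(memory.recall("loan-123", "tax form"))  # nothing: session purged
```

The design choice worth noticing is that memory lives outside the model, which is exactly what makes it auditable and deletable, and also what creates the new attack surface and leakage risks the paragraph above describes.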

Curiosity in Clicks

Design Your AI Teammate [AI at Work]

When we imagine AI at work, we often think of tools that automate tasks. What if you could design a true teammate, one that challenges you, collaborates, and helps you grow? Let’s ask your chatbot to design an AI teammate for you. Give it three traits you’d value in a colleague (e.g. someone who asks sharp questions, spots patterns you miss, or preps you for tough meetings). See how it shapes the “job description.”

Prompt to try: “I work in [industry/title], and you are my AI teammate. I want you to have three qualities: [list your three traits]. First, write a short job description for yourself as my teammate, listing what you can and cannot do. Then, roleplay a short conversation with me to show how you’d collaborate.”

The fun part isn’t the chatbot’s answer; it’s noticing what qualities you chose. They reveal how you’d want to collaborate with AI, and how enterprises may need to think about designing AI teammates for real workflows.

Byte-Sized Intelligence is a personal newsletter created for educational and informational purposes only. The content reflects the personal views of the author and does not represent the opinions of any employer or affiliated organization. This publication does not offer financial, investment, legal, or professional advice. Any references to tools, technologies, or companies are for illustrative purposes only and do not constitute endorsements. Readers should independently verify any information before acting on it. All AI-generated content or tool usage should be approached critically. Always apply human judgment and discretion when using or interpreting AI outputs.