Byte Sized Intelligence: May 8 2025

Google in court; No, AI doesn’t understand you

AI in Action

Google’s Antitrust Trial and the Future of Search

Google is facing one of the most significant antitrust trials in decades, and it’s not just about ads anymore. The U.S. Department of Justice argues that Google’s dominance as the default search engine on phones and browsers is stifling competition and innovation. Although the trial officially began in late 2023, it’s back in the headlines now that closing arguments have wrapped and a ruling is expected later this year.

What’s changed since the trial began? AI has entered the search chat. We’re moving from a world of blue links to one of AI-generated summaries, conversational responses, and predictive recommendations. Google’s tight control over default settings and deep integration across Android, Chrome, and its ad stack could limit how freely these new search models can develop. A ruling against Google could unbundle some of those defaults, opening the door for search competitors and other AI tools with browsing capabilities to gain traction.

This case could set a global precedent, encouraging regulators in other regions to re-examine platform dominance and potentially reshape the search experience itself. It may ripple across the AI ecosystem, influencing how AI tools will be integrated, accessed, and governed. The question is: will we get a future shaped by a handful of platforms, or one where users have more choice in how answers are found and delivered?

Bits of Brilliance

No, your AI model doesn’t understand you

As we shift from search engines to AI tools, it’s easy to assume AI understands us, especially when it replies with fluent, helpful language that feels thoughtful. But what really happens when you ask ChatGPT a question?

It doesn’t really “think” or “look things up.” Instead, it breaks your input into chunks called tokens, runs them through mathematical layers trained on language patterns, and predicts the most likely next token, one at a time, until a complete response is formed.
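If you’re curious what that first step looks like, here’s a minimal sketch using OpenAI’s open-source tiktoken tokenizer library (the encoding name and prompt are just examples):

# A minimal sketch of tokenization, assuming the tiktoken
# library is installed (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

prompt = "What should I pack for my trip to Tokyo?"
token_ids = enc.encode(prompt)

print(token_ids)  # a list of integer IDs, not words or meanings
for token_id in token_ids:
    # each ID maps back to a chunk of text, often just a word fragment
    print(token_id, repr(enc.decode([token_id])))

Run it and you’ll see your sentence reduced to a string of numbers before the model ever “sees” it.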

Think about how a computer plays chess. It doesn’t understand strategy; instead, it evaluates the current “board” (your prompt), compares it to millions of past patterns, and selects the next most likely move. Then it does it again, and again. The result feels coherent, not because the model understands your question, but because it’s good at predicting what usually comes next.
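To make that analogy concrete, here’s a toy version of the predict-then-repeat loop. The word table and “probabilities” below are invented purely for illustration; a real model learns patterns across billions of parameters rather than using a hand-written lookup table:

# A toy next-token loop: look at recent context, pick the most
# likely continuation, append it, repeat. All values are made up.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "floor": 0.3},
}

context = ["the", "cat"]
for _ in range(4):
    candidates = next_word_probs.get(tuple(context[-2:]))
    if not candidates:
        break
    # no strategy, no understanding: just "what usually comes next?"
    context.append(max(candidates, key=candidates.get))

print(" ".join(context))  # the cat sat on the mat

The output reads like a sentence, but nothing in the loop knows what a cat or a mat is.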

There’s no comprehension, no intent, and no awareness of what’s true or false. The danger in this isn’t just misinformation; it’s misplaced confidence. The smoother the AI sounds, the more we assume it understands us (but it doesn’t). If your question is vague or missing key context, you might get a polished response that completely misses the point, and the model won’t know the difference.

So yes, AI tools can be incredibly helpful, but they’re not “thinking.” They’re predicting, and sounding convincing while doing it. The better your prompt, the better the outcome. But ultimately, it’s still up to you to think critically and read between the lines.

Try This

Can AI follow what you mean?

The way you ask a question shapes the answer you get. These prompts aren’t just fun to try; they show you how to steer AI more intentionally. Let’s see how well your chatbot responds when your prompt is vague vs. when it’s specific.

Experiment 1: Vague Prompts

“What should I do to be more productive?”

“What should I pack for my trip to Tokyo?”

You’ll get an answer — maybe even a helpful one. But does it actually reflect what you wanted to know?

Experiment 2: Specific Prompts

“What are 3 time blocking methods that can help me be more productive working from home with a toddler?”

“What should I pack for a 5 day business trip to Tokyo in spring, with some client dinners?”

You’ll notice how much more relevant and tailored the responses become.
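If you’d rather run the comparison side by side in code, here’s a minimal sketch using the openai Python SDK; it assumes an API key is set in your environment, and the model name is just an example:

# Compare a vague prompt and a specific prompt with the same model.
# Assumes OPENAI_API_KEY is set; swap in whichever model you use.
from openai import OpenAI

client = OpenAI()

prompts = [
    "What should I pack for my trip to Tokyo?",  # vague
    "What should I pack for a 5 day business trip to Tokyo "
    "in spring, with some client dinners?",      # specific
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print(response.choices[0].message.content)
    print("-" * 40)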

Bonus Experiment: “Explain why Napoleon lost at Waterloo”

The model may list multiple reasons, but it likely won’t say “there are multiple perspectives among historians on why Napoleon lost”.

Even when technically accurate, the model can deliver a polished answer that feels complete when it’s just one version of the story. Remember, any good-sounding answer still deserves a second look.