
AI Isn’t Ready to Out-Surf You on the Web, Yet

Summary

– The author tested five AI-enhanced browsers (Chrome, Microsoft Edge, ChatGPT Atlas, Perplexity’s Comet, and Dia) to see whether they could deliver a better, more efficient internet experience than traditional search.
– AI browsers often failed at complex tasks like sorting important emails, requiring excessive “prompt babying” and still delivering unreliable or irrelevant results.
– These tools performed better for specific, contained tasks like summarizing documents or compiling data from a webpage, acting as a helpful assistant rather than an autonomous agent.
– In a practical test to research and purchase shoes, AI browsers provided contradictory advice, required many back-and-forth prompts, and did not deliver a truly “hands-off” shopping experience.
– The overall conclusion is that current AI browsers do not live up to the hype of being better than humans at web tasks, as they demand significant user adaptation and mental effort for inconsistent results.

Finding the perfect pair of walking shoes online can feel like a full-time job, sifting through endless reviews and dubious deals. The promise of AI-powered browsers is to cut through that noise, acting as a personal digital assistant that shops, researches, and surfs on your behalf. Tech leaders envision a future where artificial intelligence navigates the web as skillfully as a human, but after putting several of these new tools through their paces, that future feels distant. The current reality involves more effort, not less, as you learn to communicate with a chatbot that often misunderstands your intent.

The landscape currently features two primary types of AI-enhanced browsing. Some are familiar browsers like Chrome and Microsoft Edge with an AI assistant added in a sidebar. Others, such as ChatGPT Atlas, Perplexity’s Comet, and The Browser Company’s Dia, are built from the ground up to prioritize AI, sometimes replacing the traditional search bar entirely. Their bold claim is that AI can handle complex tasks like booking reservations or finding the best products. To test this, I evaluated five browsers on three criteria: their usefulness for common tasks, the amount of precise prompting required, and whether I could trust an “agent” to complete actions on my behalf.

The core issue became apparent quickly: crafting the perfect prompt is an art form. This is a stark contrast to the intuitive, forgiving nature of a traditional Google search. My first challenge was managing a flooded inbox. A simple request to “summarize my emails” yielded useless, literal descriptions. Refining the prompt to “identify important emails based on urgency” only surfaced irrelevant pitches. Success was fleeting and inconsistent. Comet eventually highlighted two genuinely relevant emails using a specific prompt, but when I tried that same prompt elsewhere, other browsers fixated on keyword-stuffed messages I could ignore. The process demanded exhaustive, detailed instructions, turning a simple request into a complex negotiation.

There were, however, glimpses of utility. When faced with a dense 48-page legal document, asking an AI browser to list relevant pages and summarize sections provided a faster starting point than combing through it alone. AI excels at interacting with the content on a specific webpage. While researching a phone upgrade, having a bot compile specs from Apple’s site into a clean table was genuinely helpful. The most reliable function across all browsers was summarization and data compilation, saving time and browser tabs. The mindset shift was crucial: asking “how can AI help me interact with this page?” worked far better than asking it to perform a task independently.

True complexity, however, exposed the limitations. Asking browsers to extract a transcript from a YouTube video yielded mixed results. Some refused, some provided partial transcripts, and only ChatGPT Atlas delivered a full, downloadable file. The promise of “just telling the AI what you want” crumbled under the weight of follow-up questions and inconsistent capabilities.

This all led back to the original quest: buying New Balance shoes. The research phase was manageable but required an excessively detailed prompt listing foot type, style preferences, step count, and budget. The AI often provided contradictory recommendations within a single response. Still, it helped narrow the options to the New Balance 530, a model I had also identified manually, and it offered useful reasoning behind its suggestions, whereas my own choices were often based on aesthetics.

The final hurdle, finding the best deal and purchasing, was where the “agent” concept stumbled. Requests to find the lowest price in my size and zip code produced different results across browsers. When an AI did manage to add shoes to a cart, it required multiple confirmations and even tried to change my pickup preference to delivery. I watched one browser struggle for a full minute just to close a pop-up ad. The process was anything but hands-off.

Ultimately, AI is not yet better than a human at surfing the web. The experience reinforces that we spend considerable time teaching the AI how to help us, adapting our natural questions to its logic. A good outcome assumes you are adept at prompting, understand chatbot strengths, and possess immense patience for their weaknesses. While these tools can be useful assistants for specific, contained tasks, they come with a steep learning curve that may not feel worthwhile for most people. For now, the most straightforward solution to my shoe dilemma isn’t a smarter browser; it’s a visit to an actual store.

(Source: The Verge)
