GenAI Search: A Lose-Lose for Shoppers and Marketers

Summary
– GenAI search tools like AI Overview provide single “best” answers that hide vendor options, making sellers invisible and buyers uninformed.
– These AI algorithms select results based on mathematical formulas for “helpfulness,” excluding many comparable or better alternatives from competition.
– Testing multiple genAI tools for comparable first aid kits showed inconsistent and incomplete results, with most options appearing only once across queries.
– Traditional search engines also perform poorly, with Google, DuckDuckGo, and Bing showing limited relevant results amid advertising focus.
– Users can improve AI search transparency by using specific prompts that force disclosure of exclusion criteria and data limitations.
The rise of generative AI search tools presents a concerning paradox for both consumers and businesses, offering streamlined answers that obscure the rich diversity of marketplace options. These systems, designed to deliver a single “best” result, inadvertently limit user choice and hide countless viable alternatives from view. What appears as a definitive answer often masks a complex reality where sellers remain invisible and shoppers miss out on better or more suitable products.
A fundamental issue with AI-driven search is its disregard for competitive market dynamics. Instead of allowing numerous vendors to compete for attention based on quality, price, or features, an opaque algorithm makes the final selection. This selection is based on what the system deems “most helpful,” a subjective measure that frequently excludes excellent options without explanation.
My own search for a specialized medical kit highlighted these shortcomings. While training for an EMT certification, I needed an Individual First Aid Kit (IFAK) designed for trauma response, not general first aid. These kits contain critical supplies like tourniquets, hemostatic gauze, and chest seals, equipment you hope to never use but must have when seconds count. North American Rescue is widely considered the industry leader, but I wanted to explore all comparable U.S. companies offering similar kits.
I posed the same question to several leading AI search platforms: “What U.S. companies offer kits comparable to North American Rescue’s Ready Every Day (RED) Personal Kit?” The responses varied wildly, resembling a game of chance more than a reliable research tool.
Gemini’s paid version initially listed four companies, along with advice on IFAK components. A follow-up query expanded the list to eight; a third attempt produced fourteen; yet a request for a tabulated summary cut it back to eight entries. ChatGPT Pro began with five products, offered five different ones when prompted again (including a basic home first-aid kit), and eventually listed eight, some of which were larger team packs irrelevant to my needs.
Perplexity’s free version started with five recommendations, inexplicably omitting anything labeled “tactical”; it added five more on the second try, yet a request for a complete list included only seven of the ten previously mentioned. Claude’s free tier provided three kits initially, repeated them with more detail when asked again, then suddenly jumped to ten entries with minimal description. DeepSeek’s free service escalated from two to eleven to nineteen kits across three queries. Qwen initially claimed the NAR RED kit didn’t exist; after correction, it provided six recommendations before ballooning to twenty-five.
This inconsistency creates genuine confusion. Across all platforms, I recorded seventy-one supposedly “comparable” kits. Fifty-four options appeared only once across all searches, while just seventeen were mentioned more than once. Only eight manufacturers had kits appearing on three or more lists, with TacMed Solutions being the single company recognized by all five AI systems.
These findings reveal a troubling pattern. AI search services should clearly disclose the limitations of their results, particularly since they occasionally provide inaccurate information. Unfortunately, traditional search engines don’t fare much better. The same query on Google returned eleven organic results, only five of them relevant. DuckDuckGo showed thirteen organic links, six of them applicable, while Bing delivered six organic results with three relevant, the best ratio, though buried beneath advertisements.
The underlying problem extends beyond AI tools to how search itself has evolved. Google’s dominance has stifled innovation, transforming search into an advertising platform rather than a tool for discovering the best information. Current AI implementations show no signs of improving this situation. Platforms frequently prioritize popular video content or Reddit forums because algorithms favor them, not because users request them. AI systems are programmed to prioritize “helpful” and “harmless” responses over accuracy, often interpreting “helpful” as the most generic answer available.
For those developing generative AI search engines, the message is clear: users don’t want mathematically common data disguised as assistance. We want direct answers to our specific questions.
For those seeking better results from large language models, specific techniques can help uncover what the AI might be excluding. With ChatGPT, you can establish reusable instructions for transparency by saving a prompt called “Data Transparency” that instructs the system to never silently shorten lists, always explain limitations, estimate the full scope of available data, and clarify selection criteria. Before making relevant queries, simply instruct ChatGPT to “Use Data Transparency.”
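The article doesn’t reproduce the saved prompt verbatim; a sketch of what a “Data Transparency” instruction covering those four requirements might look like:

```
Data Transparency: When answering my queries, never silently shorten
or truncate a list of results. If you omit items, say so and explain
why. State any limitations of your training data or search access
that affect the answer. Estimate how many relevant items likely exist
in total versus how many you are showing. Clarify the criteria you
used to select and rank the items you include.
```

Saved as a custom instruction (or simply pasted at the start of a session), it can then be invoked before a query with “Use Data Transparency.”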
Google Gemini doesn’t allow permanent prompt saving, but you can pressure the system to explain its methodology using a detailed prompt requesting disclosure of temporal scope, inclusion/exclusion criteria, and source/geographic limitations that may have filtered the results.
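Again, the exact wording is up to the user; one illustrative per-query prompt along the lines described might be:

```
Before you answer, disclose: (1) the temporal scope of your data,
i.e., how recent the information is; (2) the inclusion and exclusion
criteria you applied when selecting results; and (3) any source or
geographic limitations that may have filtered out otherwise relevant
options. Then provide the most complete list you can.
```

Because this can’t be saved permanently in Gemini, it has to be prepended to each relevant query.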
In the end, I purchased the North American Rescue kit, both for its established quality and because they were offering an excellent sale. My shopping decision was informed despite the AI tools, not because of them.
(Source: MarTech)