
ChatGPT Failed to Recommend WIRED’s Top Picks

Summary

– WIRED’s product reviews are based on extensive hands-on testing and are frequently updated to provide reliable shopping advice.
– OpenAI has updated ChatGPT to function as a shopping assistant, aiming to reduce the need for users to visit multiple websites for research.
– In tests, ChatGPT frequently made errors or inserted its own product picks when asked for WIRED’s specific recommendations, undermining reliability.
– Despite a business deal between OpenAI and WIRED’s parent company, ChatGPT can misrepresent products as WIRED-approved, such as listing a TV the buying guide never recommended.
– The AI also presented unreviewed products, such as unreleased headphones, as current WIRED picks, confidently stating speculative or incorrect information.

Finding reliable product recommendations is a core challenge for online shoppers. While generative AI tools like ChatGPT are increasingly promoted as shopping assistants, a recent test reveals they still struggle to accurately relay the curated advice from expert reviewers. For consumers seeking trustworthy buying guidance, visiting the source website directly remains the most dependable method.

OpenAI has recently enhanced its chatbot’s product discovery features, aiming to streamline the research process. The company suggests its AI can solve the frustration of juggling multiple browser tabs and repetitive “best of” lists. However, when specifically asked what WIRED’s Gear Reviews team recommends across several categories, ChatGPT repeatedly provided incorrect or fabricated information. This is notable given that Condé Nast, WIRED’s parent company, has a commercial agreement with OpenAI for content inclusion.

The issue underscores a broader tension. Despite formal partnerships, AI platforms can inadvertently devalue the extensive hands-on testing and human expertise that underpin professional reviews. OpenAI’s own messaging frames traditional recommendation lists as a nuisance, yet bypassing them can lead shoppers astray. A user might purchase an item believing it carries a publication’s endorsement, when the AI has actually inserted its own suggestion.

Testing in the TV category highlighted the problem clearly. When prompted for WIRED’s top picks, the chatbot correctly linked to the appropriate buying guide, but its listed “best overall” choice was the LG QNED Evo Mini‑LED, a model not featured in the guide at all. Confronted about the error, ChatGPT admitted it had replaced the actual top pick, the TCL QM6K, with a generic alternative in the same category, a direct failure to fulfill the specific request.

As more people use AI for product research, such inaccuracies risk eroding consumer trust. Shoppers relying on a trusted publisher’s name could end up buying a product that was never recommended, whether the source is WIRED, Consumer Reports, or another authoritative outlet.

The pattern repeated with headphones. ChatGPT presented the unreleased Apple AirPods Max 2 as WIRED’s current top pick for users invested in the Apple ecosystem. While this may become true after future testing, the product has not yet been evaluated or added to any official guide. The AI’s recommendation was premature, highlighting a critical gap: only products that reviewers have physically tested and validated earn a place in their recommendations. For now, the most reliable path to that curated insight is still the original publication.

(Source: Wired)
