Kagi Translate AI: What Would Margaret Thatcher Say?

Summary
– Kagi Translate, an AI-powered translation tool, has gained attention for its ability to translate text into unconventional styles like “LinkedIn Speak” or “Gen Z slang,” not just traditional languages.
– This playful functionality, discovered by users manipulating the tool’s interface, highlights the creative potential of large language models (LLMs).
– However, the article notes that these discoveries also expose the risks of letting users experiment with general-purpose LLM tools.
– Kagi Translate was launched in 2024 as a competitor to services like Google Translate, using a combination of LLMs optimized for each task.
– The tool’s ability to accept user-typed “languages” in its search bar, enabling these quirky outputs, was an unheralded feature first noticed by users over a year ago.
The internet has long relied on tools like Google Translate for converting text between conventional languages, but a new service is pushing the boundaries of what “translation” can mean. Kagi Translate, an AI-powered tool from the search engine company of the same name, has gained attention for its ability to convert standard text into unconventional styles like “LinkedIn Speak,” “Gen Z slang,” or even the imagined voice of a “horny Margaret Thatcher.” This unexpected functionality reveals both the creative potential and the inherent risks of giving users open-ended access to powerful large language models.
Originally launched as a straightforward competitor to services like Google Translate, Kagi Translate employs a combination of LLMs, optimizing output for each task; the company has acknowledged that this approach can sometimes produce unexpected quirks. While the official interface offers a dropdown menu with 244 standard languages, users discovered they could manipulate URL parameters to supply custom “target languages.” This trick, noted over a year ago on forums like Hacker News with little initial fanfare, has recently exploded in popularity.
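The article does not document Kagi Translate’s actual URL scheme, so the sketch below is only a hypothetical illustration of the kind of manipulation described: it assumes a made-up `text`/`to` query-parameter pair on translate.kagi.com (the real parameter names may differ) to show how any arbitrary string could be handed to the service as a “target language.”

```python
# Hypothetical sketch: the parameter names ("text", "to") are assumptions,
# not Kagi Translate's documented API. Only the domain is real.
from urllib.parse import urlencode

BASE = "https://translate.kagi.com/"

def build_translate_url(text: str, target: str) -> str:
    """Build a URL with an arbitrary, user-typed 'target language'."""
    return BASE + "?" + urlencode({"text": text, "to": target})

# Any string works as a "language"; the backend LLM is asked to comply.
print(build_translate_url("Our Q3 numbers look great.", "LinkedIn Speak"))
# -> https://translate.kagi.com/?text=Our+Q3+numbers+look+great.&to=LinkedIn+Speak
```

Because the target field is just free text forwarded to the model, nothing structural stops inputs like “pirate” or “angry customer service rep,” which is exactly what users went on to exploit.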
Kagi’s own social media channels began playfully showcasing the tool’s capacity to generate “Reddit Speak” or mimic the dense jargon of management consultants. The trend reached a wider audience when a Hacker News user enthusiastically posted that “Kagi Translate now supports LinkedIn Speak as an output language.” The discussion thread that followed revealed an even simpler method: users can type their desired output style (be it “pirate,” “Shakespearean English,” or “angry customer service rep”) directly into the tool’s search bar, and the underlying AI will attempt to comply.
This collective experimentation underscores a playful side of generative AI, where users co-opt a practical tool for entertainment and creative writing prompts. However, it also exposes significant vulnerabilities. Allowing unfiltered, user-defined “languages” means the AI can be prompted to generate content in virtually any style or persona, including those that could be offensive, inappropriate, or used for harassment. The tool’s design, which tries to accommodate any textual input as a valid translation target, essentially provides a low-barrier interface to the raw, unpredictable capabilities of a general-purpose LLM without the typical safeguards.
While the results are often humorous, the situation highlights a critical challenge for AI developers: how to balance open-ended utility with responsible constraints. Letting users freely direct an AI to output text in the voice of a historical figure or a specific stereotype demonstrates a lack of the guardrails that other platforms carefully implement. The viral spread of these “translation” tricks serves as a real-world stress test, showing what happens when users are given the keys to a powerful model with minimal oversight.
(Source: Ars Technica)