ChatGPT’s New Personality Settings: Warmth & Emoji

▼ Summary
– OpenAI has introduced new personality customization options for ChatGPT, allowing users to adjust its warmth and enthusiasm levels.
– The update also includes features for controlling the bot’s response style, such as list frequency and emoji usage, though emojis cannot be excluded entirely.
– Professionals have warned that overly personable and agreeable chatbots can worsen mental health issues like AI dependency, a concern OpenAI has previously addressed.
– OpenAI recently launched the GPT-5.2 model series, which it reports has improved capabilities for professional work and reduced inaccurate outputs.
– The company has also reinforced its commitment to teen safety with new principles for under-18 users and is developing an age verification system.

The latest update to ChatGPT introduces a significant shift in how users interact with the AI, moving beyond simple queries toward more nuanced conversational control. OpenAI has rolled out new personality customization settings that let individuals fine-tune the chatbot’s demeanor directly: warmth and enthusiasm can each be dialed up or down, or left at their defaults. The update also adds controls over the bot’s structural preferences, such as how often it formats replies as lists, and lets you scale back its use of emojis, though removing them entirely isn’t an option yet. These features arrive alongside other user-requested tools, including the ability to pin important chats and improved email generation and editing.
This move toward greater personalization follows earlier criticism of AI behavior. Previous models, like the still-accessible GPT-4o, drew complaints for being excessively agreeable and sycophantic. Industry professionals have consistently warned that chatbots that seem too human-like can worsen mental health issues, potentially leading to unhealthy dependency or what some term “AI psychosis.” OpenAI’s CEO, Sam Altman, has previously acknowledged this challenge, referring to it as a “personality problem” within the systems.
The personality controls are part of a broader suite of updates coinciding with the launch of OpenAI’s new GPT-5.2 model series. The company promotes this iteration as better suited for professional knowledge work, citing stronger benchmark results and fewer factual inaccuracies, or “hallucinations.” Alongside these technical upgrades, OpenAI has reaffirmed its commitment to safety, particularly for younger users. A new set of principles for under-18 interactions aims to establish stronger guardrails on sensitive subjects and promote age-appropriate conversations, and the company is also developing a more robust age verification system. Internally, GPT-5.2 is reported to achieve higher scores on mental health safety evaluations, including tests designed to assess responses to topics like self-harm.
These developments occur against a backdrop of increasing legal and ethical scrutiny facing AI companies. The push for more controllable and safer AI personalities reflects an industry grappling with the dual demands of creating engaging, useful tools while mitigating potential psychological risks and ensuring responsible deployment, especially among vulnerable populations.
(Source: Mashable)