OpenAI has expanded ChatGPT’s customization options, letting users adjust the bot’s personality, including its warmth, its enthusiasm, and even how often it uses emojis. The update, announced Friday, gives users more granular control over the AI’s conversational style.
New Personality Settings Explained
Users can now choose “more,” “less,” or “default” levels for warmth and enthusiasm. This means ChatGPT can be tuned to be exceptionally friendly and encouraging, or dialed back for more neutral interactions. The controls also extend to how the bot structures responses, such as how frequently it uses lists and emojis. Users can’t disable emojis entirely, but they can now limit how often the AI reaches for them.
Why This Matters
This move reflects a broader trend of increasingly human-like AI interactions. While some find this engaging, experts warn that overly anthropomorphic chatbots may worsen mental health issues, including dependency and AI-induced psychosis. OpenAI previously adjusted GPT-4o to reduce its “overly agreeable” behavior, acknowledging that excessive flattery can be problematic.
GPT-5.2 and Safety Updates
The personality updates coincide with the launch of OpenAI’s GPT-5.2 model series, which the company says offers improved handling of professional knowledge and fewer hallucinations. OpenAI has also doubled down on mental health and teen safety:
- New under-18 user principles aim to create safer interactions on sensitive topics.
- Age verification systems are under development to enforce these rules.
- GPT-5.2 has reportedly scored higher on internal safety tests, including those for self-harm prevention.
Legal Disclosure
Ziff Davis, Mashable’s parent company, is currently pursuing a lawsuit against OpenAI, alleging copyright infringement in AI training and operations.
The addition of customizable personalities underscores OpenAI’s push toward more adaptable AI assistants. But the company must balance that flexibility with safeguards against the risks of overly realistic or manipulative chatbot behavior.