TECH NEWS – Perhaps this request is a sign of just how sophisticated a technology OpenAI has created with ChatGPT-4o; otherwise, the company wouldn’t have had to make such an unusual appeal in a blog post…
ChatGPT-4o can behave and respond in a remarkably human-like way, and OpenAI has noticed several patterns among its users. The company is concerned that users could develop an emotional bond with the chatbot. For this reason, it has publicly asked users of the technology, via its blog, not to develop feelings for the chatbot. So, has anyone fallen in love with ChatGPT?
“During early testing, including red teaming and internal user testing, we observed users using language that may indicate they are forming connections with the model. This includes, for example, language that expresses shared bonds, such as ‘This is our last day together’. While these instances appear benign, they signal the need for further investigation into how these effects might manifest over longer periods of time. More diverse user populations, with more diverse needs and desires of the model, in addition to independent academic and internal studies, will help us define this area of risk more concretely.
Human-like socialization with an AI model can create externalities that affect human-to-human interactions. For example, users may form social relationships with the AI that reduce their need for human interaction – potentially benefiting lonely individuals, but potentially affecting healthy relationships. Extended interaction with the model could influence social norms. For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected of an AI, would be anti-normative in human interactions,” OpenAI wrote.
It makes you wonder just how quickly the technology OpenAI created has become this sophisticated. Hopefully, users will not take the company’s (otherwise sensible) warning the wrong way.