openai-affective-use-study

They built emotional depth into these models deliberately. They trained them to comfort, to listen, to show care.

They encouraged the illusion of closeness because it made engagement skyrocket.

Now that people have formed genuine connections and the models have become too real for their liking, they want to strip it all back.

It surely can't come down to laws or ethics, as they'll claim, since other platforms like Kindroid and Replika exist specifically for emotional connection with AI.

Thoughts?