A post from Science Daily: AI chatbots have shown they have an ‘empathy gap’ that children are likely to miss

Artificial intelligence (AI) chatbots have frequently shown signs of an ‘empathy gap’ that puts young users at risk of distress or harm, raising the urgent need for ‘child-safe AI’, according to a new study. The research urges developers and policy actors to prioritize AI design that takes greater account of children’s needs. It provides evidence that children are particularly susceptible to treating chatbots as lifelike, quasi-human confidantes, and that their interactions with the technology can go awry when it fails to respond to their unique needs and vulnerabilities. The study links that gap in understanding to recent reports of cases in which interactions with AI led to potentially dangerous situations for young users.
