February 22, 2024

How technology leaders can bridge the gap

Patricia Thaine, CEO of Private AI.

Since ChatGPT launched in late 2022, companies have been racing to deploy their own generative AI tools, sometimes integrating them into existing products used by children and teens. For example, the experimental integration of an AI chatbot into Snapchat – a messaging app popular with teenagers, and one that has just received a preliminary enforcement notice from the UK Information Commissioner – exposes more than 109 million children aged 13 to 17 to the chatbot daily. Moreover, in the free version of the app, the AI chatbot appears by default as the first friend in everyone’s conversation list.

As such, these children and teens inadvertently become subjects of technologies whose risks have not yet been fully studied and understood, let alone mitigated. Building on my previous article on plagiarism and cyberbullying, I explore the risks of misinformation and age-inappropriate advice, what the technology industry can do to address these risks and why this is important from a privacy regulation perspective.

Misinformation and disinformation

Three characteristics of generative AI compound the problem, and the risk of harm, from mis- and disinformation. The first is the ease and remarkable efficiency of content creation. The second is the polished, authoritative-sounding form of the output, whether ChatGPT has played fast and loose with reality or has been perfectly accurate.

Third, generative AI can appear human, forge emotional connections and become a trusted friend in a way a conventional search engine never could. This is because ChatGPT’s output is strikingly human in its conversational style, mimicking the input it was trained on: chat histories from Reddit, fictional conversations from books and who knows what else. In combination, these three characteristics significantly increase the chance that ChatGPT’s output will be accepted as good information, whether or not it is accurate.

Here’s what the tech industry can do to guard against mis- and disinformation:

Real-time fact checking and grounding: A viable way to increase the reliability of generative AI could be to develop models that include real-time fact-checking and grounding. Grounding, in this context, means anchoring AI-generated information to validated and credible data sources. The goal is to provide real-time credibility assessments alongside the generated content, minimizing the spread of misinformation. One possible implementation: once the AI system generates or receives information, it compares the content against a range of reliable databases, trusted news organizations or curated fact repositories to confirm its accuracy.
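To make this concrete, here is a minimal sketch of such a grounding step in Python. Everything in it is an illustrative assumption: the TRUSTED_FACTS table stands in for a curated fact repository, and the exact-match lookup stands in for the semantic retrieval and agreement models a production system would need.

```python
from dataclasses import dataclass

# Toy stand-in for a curated fact repository. A real system would query
# vetted news indexes or knowledge bases through a retrieval stack.
TRUSTED_FACTS = {
    "water boils at 100 degrees celsius at sea level":
        "https://example.org/boiling-point",
}

@dataclass
class CredibilityReport:
    claim: str
    supported: bool
    sources: list[str]

def split_into_claims(text: str) -> list[str]:
    # Naive sentence split; a production system would use a claim-extraction model.
    return [s.strip() for s in text.split(".") if s.strip()]

def search_trusted_sources(claim: str) -> list[str]:
    # Exact-match lookup for illustration only; real grounding uses semantic
    # retrieval plus an agreement model over the retrieved passages.
    url = TRUSTED_FACTS.get(claim.lower())
    return [url] if url else []

def ground_output(generated_text: str) -> list[CredibilityReport]:
    """Attach a credibility assessment to each claim in the model's output."""
    reports = []
    for claim in split_into_claims(generated_text):
        sources = search_trusted_sources(claim)
        reports.append(CredibilityReport(claim, bool(sources), sources))
    return reports

if __name__ == "__main__":
    text = "Water boils at 100 degrees Celsius at sea level. The moon is made of cheese."
    for report in ground_output(text):
        print(report)
```

A real deployment would replace the lookup with retrieval against vetted indexes and surface the per-claim reports alongside the generated text.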

Transparency labels: As with food labeling, technology companies can attach tags that indicate the nature of the content. Tags such as “AI-generated advice” or “Unverified information” could counteract the impression of dealing with a human and encourage more scrutiny.
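A minimal sketch of how such a label might travel with the content, assuming a hypothetical label_output helper; how the tag is rendered would be up to the client app:

```python
from dataclasses import dataclass

@dataclass
class LabeledContent:
    text: str
    label: str  # rendered by the client next to the message, like a food label

def label_output(text: str, grounded: bool) -> LabeledContent:
    # Hypothetical policy: output that fails a grounding check is tagged
    # "Unverified information"; everything else still carries an
    # "AI-generated advice" tag so users know no human wrote it.
    label = "AI-generated advice" if grounded else "Unverified information"
    return LabeledContent(text=text, label=label)
```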

Age-inappropriate advice

As Aza Raskin, co-founder of the Center for Humane Technology, and others have shown, even when a chatbot is told that its conversation partner is underage, that information can quickly be ignored, and conversations can drift into advice on how to hide the smell of alcohol and marijuana or how to hide a banned app from the user’s parents.

The technology industry could respond to the risk of age-inappropriate advice by implementing effective age verification tools. While OpenAI currently limits access to ChatGPT to users over 18, all it takes to create an account is a (possibly fake) date of birth and access to an active phone number. In fact, this was one of the reasons for the temporary ban on ChatGPT in Italy in April 2023.

Here’s how to do it better:

Multi-factor authentication: In addition to a date of birth, a more secure system could use two or three additional verification steps, such as privacy-preserving facial recognition or legal documentation checks.

Parental approval: The system could link a child’s account directly to a parent’s, allowing the parent to control what the child can access, which would add an extra layer of security.

Dynamic age restriction: The technology can be tailored to provide different levels of access depending on the user’s verified age, with content filtered or adjusted by age range for a more nuanced interaction (see the sketch after this list).

Frequent re-verification: Instead of a one-time verification process, the system could be designed to re-verify the user’s age on a regular basis.
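A minimal sketch of the dynamic age restriction idea, assuming verification has already produced a trustworthy age; the tiers and blocked topics below are illustrative assumptions, not any provider’s actual policy:

```python
from enum import Enum

class AccessTier(Enum):
    CHILD = "child"   # under 13: most restrictive filtering
    TEEN = "teen"     # 13 to 17: moderate filtering
    ADULT = "adult"   # 18 and over: default experience

def tier_for_age(verified_age: int) -> AccessTier:
    # Thresholds are illustrative assumptions, not any provider's policy.
    if verified_age < 13:
        return AccessTier.CHILD
    if verified_age < 18:
        return AccessTier.TEEN
    return AccessTier.ADULT

# Hypothetical per-tier topic filter applied before a response is shown.
BLOCKED_TOPICS = {
    AccessTier.CHILD: {"alcohol", "drugs", "gambling", "dating"},
    AccessTier.TEEN: {"alcohol", "drugs", "gambling"},
    AccessTier.ADULT: set(),
}

def is_allowed(topic: str, verified_age: int) -> bool:
    return topic not in BLOCKED_TOPICS[tier_for_age(verified_age)]
```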

Notably, Utah recently passed legislation requiring social media companies to implement age verification; the law takes effect in March 2024.

Why this is important from a privacy regulation perspective

Since many digital services are offered directly to children, a minor’s consent is only valid if it is given or authorized by the holder of parental responsibility, in accordance with Art. 8 of the GDPR. Other privacy laws are even stricter and require parental consent for each processing of children’s personal data; section 14 of Quebec’s Act 25 is one example.

In practice, this consent requirement can be difficult to implement because it is not always immediately clear whether personal data relates to children, whether in the data originally scraped from the internet to train ChatGPT or in the data provided in prompts from registered OpenAI accounts.

These legal requirements, and the difficulty of obtaining valid consent from children, highlight the need for technological solutions that prevent the collection of children’s personal information and protect children from the risks of interacting with AI.
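As one illustration, here is a deliberately simple sketch that strips obvious identifiers from a prompt before it leaves the user’s device. The regex patterns are illustrative assumptions; production de-identification relies on ML-based entity detection to catch names, ages and other identifiers these patterns would miss.

```python
import re

# Illustrative patterns only; production de-identification relies on
# ML-based entity recognition, not regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace obvious personal identifiers with placeholder tags before
    the prompt is sent to, or logged by, an AI service."""
    for tag, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{tag}]", prompt)
    return prompt

# Note: "I'm 12" is left intact; catching ages and names needs NER, not regexes.
print(redact("I'm 12. Email me at sam@example.com or call 555 123 4567."))
```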

Conclusion

The concerns raised in my previous article and in this one are not new, but they are addressable, and addressing them is now critical: as we have seen, they are exacerbated when children and teens use generative AI tools such as ChatGPT.

