When a new technology hits the market, it's often the younger generations who embrace it.
A recent study by UK regulator Ofcom has confirmed this is the case with generative AI, having found that almost 80% of UK teenagers have now used the technology.
The findings raise questions about whether the rapid integration of generative AI into the digital lives of young users is healthy, given that we know so little about its potential effects.
Generative AI: a youthful affair
Perhaps the most surprising thing about Ofcom's latest Online Nation survey is the generational divide in the use of generative AI. It's easy to assume that adults would be the biggest users, but in fact only 31% of adults have embraced the tech, compared with almost 80% of 13 to 17-year-olds, more than double the rate.
This youthful engagement has led experts to call for more regulation until we know more about the tech and its potentially damaging effects.
It makes sense. After all, we know that gambling from an early age can harm a teenager's development, which is why under-18s are banned from gambling, including promotional offers such as a no deposit bonus or a free bet.
Viewing adult material such as pornography is likewise believed to give adolescents distorted ideas of what sex should be like, which is part of the reason it's regulated.
Many believe that generative AI should face similar regulation for the time being, until we know its behavioural effects.
ChatGPT is the frontrunner
The rise of such technology has led to an explosion of AI tools available to young people, some of which are much more widespread than others.
OpenAI's ChatGPT emerges as the clear frontrunner, securing its position as the most widely used tool among participants aged 16 and above. The reasons for embracing it were diverse. 58% engage with the technology for sheer enjoyment, mirroring the playfulness of the younger demographic. A third use it for professional purposes, underlining the tool's growing relevance in the workplace, while 25% of users integrate it into their study routines, signalling its potential as an educational aid.
Finally, 22% turn to generative AI for advice, which is one of the main reasons behind the push for regulation. If an AI tool gives bad advice on sensitive topics, the consequences for the user could be severe.
Online safety: the fight back
Amidst this surge in generative AI adoption, Ofcom's study also delves into online safety. Following concerns raised by safety experts, it outlines preventative AI-powered measures that governments and online authorities should adopt.
New AI-powered apps and controls, for example, could aim to enhance online safety. Recent studies indicate that almost 50% of teens experience online bullying or harassment, while more than half believe they are addicted to their mobile phones.
Popular platforms like TikTok and YouTube, catering to users aged 13 and over, are introducing features such as restricted screen time and family pairing to address parental concerns.
Additionally, innovative tools like Take It Down by the National Center for Missing & Exploited Children use AI to help teens remove explicit content, offering a unique digital fingerprint to detect and prevent online sharing.
Family Keeper, another AI-based app, assists parents in monitoring social media chats and tracking their child's location. Despite the challenges, these initiatives signal a positive step towards enhancing online safety for young users.
A roadmap for the future
Ofcom's comprehensive study not only highlights the meteoric rise of generative AI, particularly among young users, but also shows just how quickly the online world is changing.
As generative AI becomes woven into the fabric of digital interactions, concerns about online safety, false identities, and exposure to harmful content call for a vigilant approach.
The study serves as a crucial roadmap for regulators and users, as we seek to balance innovation with user safety in a world of rapidly evolving technology.