Mitigating Online Social Harm: Why Enterprises Need to Start Prepping for Stricter Age Verification Laws Now

One in five U.S. minors who regularly access the internet say they have received an unwanted sexual solicitation online. That figure underscores the epidemic of bullying, violence, self-harm and predatory interactions on social media platforms, which comes at a time when such harm is at an all-time high. Over 50% of adolescents and teens have been bullied online, and roughly the same number have engaged in cyberbullying themselves. Another 33% of adolescents have received online threats. Inappropriate content is also readily available to minors and can have traumatic effects, as seen in the 2017 suicide of 14-year-old Molly Russell after she viewed graphic content about depression and suicide on Instagram.

It’s clear social harm is accelerating, and children are at heightened risk now that going online has become essential for learning and for connecting with friends, family and classmates amid the pandemic.

Currently, there are no age or identity verification requirements to engage in online chat via social media platforms and most group chat and gathering sites, and it is effectively impossible to confirm a user is who they claim to be, leaving minors vulnerable to predators, bullies and fraudsters. Regulatory bodies have tried to manage the issue, and only 15% of parents are aware of their children’s social networking habits and how those behaviors can lead to cyberbullying, so accountability ultimately rests with the technology companies themselves.

In response to this ongoing social harm, the UK appointed its communications regulator Ofcom to serve as the country’s first internet watchdog, with the authority to police social media and mitigate social harm online. The U.S. is poised to follow in its footsteps with new legislation aimed at mitigating social harm, enforcing age verification and removing legal protections from tech companies that fail to police illegal content. Enterprises need to start preparing for these laws now to keep minors safe.

The need for more secure authentication

Websites selling age-restricted products such as fireworks, tobacco and alcohol often “authenticate” users with an “are you of age?” pop-up button, which offers no real proof of age. This lets underage users view restricted websites and order products that could cause them physical harm. Self-reported age verification is simply not enough to keep users safe.

Beyond age-restricted products, harm can also occur through channels that are not age-restricted, such as social media platforms, dating sites and social group apps. Setting up a profile requires no real proof of age. Users can select any birth date, use any profile photo and fabricate their location and occupation without the platform ever confirming that the information is accurate or that the person is real. Once an account is created, users can pose as someone else in conversations with minors, arrange meetups that can ultimately end in physical harm, engage in cyberbullying and even solicit personal information that can be used to unlock other accounts and commit fraud. Seventy-five percent of children are willing to share personal information about themselves and their families online in exchange for goods and services.

Even worse, fraudsters can easily obtain the credentials needed to access someone else’s social media profile, such as usernames, passwords and birthdays (exposed in data breaches and then bought and sold on the dark web), and act as that user. Even if a person blocks an offending user on a platform, the attacker can simply create a new profile under a different name, or take over someone else’s account, and target the victim again.

In addition, minors can view harmful content on social platforms without submitting proof of age or parental consent. For example, anyone can view social media accounts glorifying suicide, violence, eating disorders and self-harm. As minors spend more time online than ever, with education and social activities shifting online amid the pandemic, they are increasingly exposed to this content and can easily find more of it by following hashtags and recommended pages, which recommendation systems continue to surface once a user has viewed related content.

As with age-restricted products, it’s time for online platforms to take responsibility, protect minors from social harm and prevent inappropriate content from reaching them.

Digital identity verification: the path forward

To keep minors safe from social harm, social platforms must ensure all users are who they claim to be, so they can verify users are old enough to view specific content and prevent adults from engaging in conversations with underage users. To do this, organizations should tether online profiles to a government-issued ID, such as a driver’s license or passport, along with a real-time photo of the user. This prevents people from leveraging fake identities on the web to create fake accounts that enable predatory behavior and fraud. Implementing this advanced form of digital identity verification would also let the organization know the exact age of every user, preventing adults from engaging with minors on the platform. Offensive content can also be reviewed by the platform before it is posted and restricted to users above a certain age so minors cannot view it.
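To make the idea concrete, here is a minimal TypeScript sketch of the enforcement logic such verification would enable, assuming a hypothetical verification provider has already matched a government-issued ID to a real-time selfie and returned a verified date of birth. All type and function names here are illustrative, not a real API.

```typescript
// Hypothetical result of an ID-plus-selfie verification step.
interface VerifiedIdentity {
  userId: string;
  dateOfBirth: Date;        // extracted from the ID document
  documentChecked: boolean; // ID authenticity check passed
  selfieMatched: boolean;   // live photo matched the ID portrait
}

// Compute a user's age in whole years, accounting for whether
// their birthday has occurred yet this year.
function ageInYears(dob: Date, now: Date = new Date()): number {
  const hadBirthdayThisYear =
    now.getMonth() > dob.getMonth() ||
    (now.getMonth() === dob.getMonth() && now.getDate() >= dob.getDate());
  const age = now.getFullYear() - dob.getFullYear();
  return hadBirthdayThisYear ? age : age - 1;
}

// Gate age-restricted content on a verified, not self-reported, age.
function canViewContent(user: VerifiedIdentity, minimumAge: number): boolean {
  if (!user.documentChecked || !user.selfieMatched) return false;
  return ageInYears(user.dateOfBirth) >= minimumAge;
}

// Block direct messages from verified adults to verified minors.
function canMessage(sender: VerifiedIdentity, recipient: VerifiedIdentity): boolean {
  const senderIsAdult = ageInYears(sender.dateOfBirth) >= 18;
  const recipientIsMinor = ageInYears(recipient.dateOfBirth) < 18;
  return !(senderIsAdult && recipientIsMinor);
}
```

The point of the sketch is that once age comes from a verified document rather than a self-reported birth date, both content gating and adult-to-minor messaging restrictions become simple, enforceable policy checks.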

If an underage user has to provide a government-issued ID and a selfie to enter an adult site, they simply won’t do it, because the ID would immediately give them away. The same goes for a fraudster trying to log in to someone else’s social media account: asked for a photo and ID of the account holder, the fraudster will not move forward with the account takeover, since they do not have the right credentials and they run the risk of being caught. If dating sites implement this technology, users can be confident that the person they’re connecting with is the same person in their photos. For these social sites, platforms can offer a badge of authenticity, certifying users who have completed the proper identity verification process. That way, other people on the site can self-select whom they want to connect with based on whether or not they’re certified.

If users are unwilling to complete the identity verification process, the organization can restrict their access to the platform to protect verified users, or warn others that the individual they’re connecting with is unverified.
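As a sketch of how the certification badge and unverified-user warnings might work in practice, the hypothetical TypeScript below drives a connection decision off a profile’s verification status; the names and the policy are assumptions for illustration, not any platform’s actual implementation.

```typescript
// Verification status determined by the ID-plus-selfie process described above.
type VerificationStatus = "verified" | "unverified";

interface Profile {
  userId: string;
  displayName: string;
  status: VerificationStatus; // drives the authenticity badge in the UI
}

// Either allow the connection (possibly with a warning) or refuse it.
type ConnectionDecision =
  | { allowed: true; warning?: string }
  | { allowed: false; reason: string };

// Policy: verified requesters connect freely; unverified requesters are
// either blocked or allowed with a warning, depending on the target
// user's preference.
function decideConnection(
  requester: Profile,
  targetAcceptsUnverified: boolean
): ConnectionDecision {
  if (requester.status === "verified") {
    return { allowed: true };
  }
  if (targetAcceptsUnverified) {
    return {
      allowed: true,
      warning: `${requester.displayName} has not completed identity verification.`,
    };
  }
  return { allowed: false, reason: "Unverified accounts cannot contact this user." };
}
```

Letting verified users opt in or out of contact from unverified accounts keeps the platform usable during a gradual rollout while still surfacing the risk to the people being contacted.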

Looking to the future

The social harm epidemic cannot be ignored any longer. With children being bullied, targeted by predators and influenced by harmful content at an alarming rate, technology companies need to take responsibility for protecting minors on their platforms. By implementing digital identity verification connected to a government-issued identity document, organizations can confirm users are who they claim to be, regulate age-restricted content accordingly, protect minors from harmful content and ultimately take a stand against social harm.