Why you’ll soon need an ID to use social media in Kenya
Kenyans will soon be required to verify their ages using national identification documents before accessing popular social media platforms, in a sweeping government move meant to improve the online safety of minors.
The Communications Authority of Kenya (CA), in new guidelines for child online protection, has ordered ICT product and service providers to implement “age verification” mechanisms in efforts to boost child online safety in the country.
“Develop, use and implement age verification mechanisms in the deployment of ICT products and services, with a view to facilitate children’s right to freedom of expression and access to information,” the regulator has directed in the new guidelines.
An official from the CA, who requested anonymity as they are not allowed to speak to the media, confirmed that this directive will ultimately be enforced through verification of government-issued IDs.
“At the beginning, we will allow the service providers to accept user-entered ages, but ultimately we will require everyone to verify that and there’s only one way of doing age verification, and that’s through an ID,” they said.
Age verification is most accurately achieved by uploading a government-issued ID, which allows a platform to clearly establish that an individual is above the legal age to consume certain content on the internet.
Despite mounting debates to moderate children’s time and activity online across the globe, there is no country in the world that has successfully implemented age verification mechanisms for social media users.
Currently, anyone in Kenya can open and use a social media account without having to prove their age, meaning that children using these platforms are exposed to the same content as adults.
The guidelines, which are due to take effect in six months, aim to minimise “exposure of children to online risks and vulnerabilities,” and come amid increased scrutiny of online platforms in Kenya.
Further, the new rules aim to prevent an outright ban on children’s use of social media platforms, a measure that some jurisdictions around the world are considering.
The regulator said that the guidelines were developed through public consultation and form part of its constitutional mandate to ensure that ICT consumers, including children, have a safer internet experience.
Kenya currently ranks poorly in global benchmarks for child online protection, largely due to a weak regulatory framework and a lack of adequate infrastructure to safeguard children online.
The latest survey by international digital intelligence think-tank DQ Institute revealed that while Kenya excels in school education and ICT company responsibility with regard to child online safety, it performs below average in regulation, infrastructure, family support and ensuring safe use of technology by children.
The new guidelines come just a few months after the Interior ministry ordered all providers of social media services to establish a physical presence in Kenya and appoint local representatives.
The move, according to the ministry, is intended to curb “increasing abuse of social media, including harassment, hate speech, and incitement to violence” by ensuring local accountability of the social media tech giants.
The ministry did not give timelines for this directive, and so far, no social media firm has set up an office locally.
The headwinds
In Australia, a push to ban users under 16 from social media faces headwinds, despite endorsement from top leaders, after the government was advised that age assurance technology has not been implemented anywhere in the world and would be extremely difficult to enforce.
Most social media platforms, like Facebook, TikTok, Instagram, WhatsApp and Pinterest, currently set their minimum age at 13 or higher, but users can easily bypass the sign-up check by entering a false date of birth.
Age verification would require access to an identity document like a national ID card, a passport, or even a birth certificate to ensure compliance with the minimum age requirements for the social media platforms.
Uploading an ID is already required for onboarding with financial technology apps such as digital lending, banking, mobile money, crypto exchange and money transfer platforms, as part of Know-Your-Customer (KYC) requirements.
However, the use of IDs to prove identity or age raises privacy concerns and risks data breaches, and could lock out undocumented individuals from accessing the platforms.
Other age verification mechanisms, which have limited accuracy, include AI-based facial analysis, verification through a third party, and self-declaration with parental consent for minors.
In addition to implementing an age verification mechanism, ICT industry players will also need to develop and publish their online protection policies for children and implement measures to combat child sexual abuse material.