In this year’s State of the Union Address, President Biden called for bipartisan legislation “to stop Big Tech from collecting personal data on kids and teenagers online, ban targeted advertising to children, and impose stricter limits on personal data these companies collect on all of us.”
Both Republicans and Democrats have also underscored the urgency of strengthening protections for children and teenagers online, citing concerns about privacy, exposure to certain types of content, and the detrimental impact of too much time online.
Despite the broad consensus that protecting youth online is critical, the 118th Congress has yet to pass federal data privacy legislation: neither a comprehensive bill that protects children, teens, and adults, nor a narrower bill that specifically protects minors.
One of the key issues delaying legislation is an ongoing debate about age-based data privacy and online content moderation requirements and the controversial "age assurance" technologies that online platforms may use to comply with those requirements. Opinions diverge on whether age assurance requirements, and the artificial intelligence (AI) technologies deployed to satisfy them, are more beneficial than harmful. Several advocacy groups emphasize that youth require special protection and that prioritizing youth-focused legislation is necessary. Others stress that it would be better to pass data privacy, AI governance, and online content moderation legislation that protects internet users of all ages. There is bipartisan cooperation on both sides of this debate.
This blog explains age assurance technologies and contextualizes their benefits and risks in ongoing legislative discussions. By analyzing this issue at the center of the “technology policy trifecta,” this piece aims to help policymakers evaluate the pros and cons of advancing data privacy and online safety legislation with and without age-based requirements and restrictions.
Which Technologies Can Enable Age Assurance?
Age assurance technologies can enable online platforms to restrict access to users above a given age and to create user account settings that differ based on age. For example, some social media platforms require users to satisfy age assurance checks before creating accounts; others require users to satisfy such checks before accessing certain features or services.
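To make the two uses described above concrete, the sketch below shows, in hedged form, how a platform might act on an assured age: one check gates account creation, and another returns age-dependent default settings. All thresholds, function names, and setting names are hypothetical illustrations, not drawn from any statute or real platform.

```python
# Hypothetical sketch of age-gated account creation and age-based
# account settings. Thresholds and setting names are invented for
# illustration only.

MIN_ACCOUNT_AGE = 13        # assumed platform minimum for accounts
ADULT_FEATURE_AGE = 18      # assumed threshold for adult-only features

def can_create_account(assured_age: int) -> bool:
    """Allow account creation only at or above the platform minimum."""
    return assured_age >= MIN_ACCOUNT_AGE

def default_settings(assured_age: int) -> dict:
    """Return account settings that differ based on assured age,
    applying stricter defaults to minors."""
    is_minor = assured_age < ADULT_FEATURE_AGE
    return {
        "private_by_default": is_minor,
        "messages_from_strangers": not is_minor,
        "targeted_advertising": not is_minor,
    }
```

A platform would call `can_create_account` once at signup and `default_settings` when initializing the new account; how the "assured age" itself is obtained is the subject of the approaches discussed below.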
Age Assurance Risks and Rewards
Policymakers, civil society organizations, academic researchers, and other stakeholders recognize that young people may be especially vulnerable to harm from particular online experiences. Proponents of age assurance requirements argue that declaring, estimating, or verifying age can help limit young people's exposure to harmful content and interactions. Opponents stress that these requirements can curtail the free-expression benefits of online anonymity and restrict access to helpful information.
Determining which content is age-appropriate is no easy task. Age is not the only characteristic relevant to predicting the impact that content may have on an individual; some children may be more mature than others, and even equally mature children may have different content sensitivities. Age assurance technologies can also create data privacy risks by requiring additional data processing, though specific privacy risks vary depending on the technological age assurance solutions deployed and the website or platform operator’s data governance practices.
Moving forward with much-needed federal legislation likely will require compromises between age assurance proponents and opponents on both sides of the aisle. Exploring the ways in which each approach to age assurance poses risks and rewards relative to alternative approaches can help policymakers consider which compromises may be most feasible and effective.
Legislation with Age-Based Data Privacy and Online Content Moderation Requirements
Federal and state lawmakers have recently introduced legislation with age-based data privacy and online content moderation requirements. Across jurisdictions, the scope of the bills and their implications for age assurance technologies vary significantly.
At the federal level, the Republican-sponsored Social Media Child Protection Act and the Democrat-sponsored Kids PRIVACY Act would explicitly require the use of age assurance technologies to access a wide range of social media platforms and other websites. In contrast, the Children’s and Teens’ Online Privacy Protection Act (COPPA 2.0) and the Kids Online Safety Act (KOSA), which recently passed out of the Senate Committee on Commerce, Science, and Transportation with bipartisan support, do not explicitly require age assurance. However, COPPA 2.0 and KOSA would both establish age-based data privacy and content moderation requirements that could encourage websites and apps to implement age assurance technologies.
While Congress continues to deliberate over youth-focused data privacy and online safety legislation, state legislatures have introduced and enacted legislation that is already creating a legal patchwork impacting age assurance technologies. Several state-level bills that recently became law, like Virginia’s SB1515 and Arkansas’s SB66, explicitly require the use of age assurance technologies to restrict access to pornographic websites. Other new state laws, like Utah’s SB152 and HB311, establish age assurance requirements to limit minors’ access to social media platforms. California’s Age-Appropriate Design Code Act gives a choice to each “business that provides an online service, product, or feature likely to be accessed by children.” Such businesses can either “estimate the age of child users with a reasonable level of certainty appropriate to the risks that arise from the data management practices of the business or apply the privacy and data protections afforded to children to all consumers.” Some states have also recently enacted comprehensive consumer data privacy, biometric data privacy and technology, and AI or automated decision system laws, which have implications for age assurance technologies.
In the absence of preemptive federal legislation, analyzing the complex interactions of these state laws is critical for websites and apps that are considering or actively working to implement age assurance technologies. Understanding how this legal patchwork impacts compliance costs, technological innovation, individual internet use, and civil rights is also important as Congress considers whether to pass preemptive data privacy, AI governance, and online content moderation legislation.
Kids’ and teens’ data privacy and online safety is a source of bipartisan concern in Congress and state legislatures. However, whether and how to conduct age assurance to facilitate data privacy, content moderation, and online safety requirements remains a contested topic at the center of the technology policy trifecta.
Each approach (age verification, age estimation, and age declaration) leverages different technologies and has pros and cons. Giving users a choice of method can empower them to weigh the relative risks and benefits and select the option with which they are most comfortable, but no option is risk-free. Neither is continuing to operate in a legal vacuum at the federal level.
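The trade-offs among the three approaches, and the idea of letting users choose among them, can be sketched as follows. This is a hedged illustration only: every function name and signature is hypothetical, and the estimation and verification bodies are placeholders standing in for vendor systems that would process biometric or identity-document data.

```python
# Hypothetical sketch of the three age assurance approaches and a
# user-selected dispatch among them. Names and signatures are invented
# for illustration.

def age_by_declaration(stated_age: int) -> int:
    """Age declaration: trust the user's self-reported age. Lowest
    friction and least data processing, but easiest to evade."""
    return stated_age

def age_by_estimation(face_image: bytes) -> int:
    """Age estimation: an AI model predicts an approximate age from a
    signal such as a selfie. Probabilistic; processes biometric data."""
    raise NotImplementedError("placeholder for a vendor's ML model")

def age_by_verification(id_document: bytes) -> int:
    """Age verification: check a government ID or similar record.
    Most accurate; processes the most sensitive personal data."""
    raise NotImplementedError("placeholder for a document check")

def assure_age(method: str, **signal) -> int:
    """Dispatch to the method the user chose."""
    dispatch = {
        "declaration": age_by_declaration,
        "estimation": age_by_estimation,
        "verification": age_by_verification,
    }
    return dispatch[method](**signal)
```

The dispatch structure reflects the point in the text: each method consumes a different kind of data, so a user's choice of method is also a choice about which personal data the platform processes.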
Members of Congress recognize the urgency of advancing youth-focused or generally applicable legislation to protect privacy, promote responsible AI governance, and facilitate more effective online content moderation, but debates over age assurance and other issues have slowed legislative progress. Building policymakers' understanding of the risks and rewards of each age assurance approach can help facilitate the dialogue needed to reach compromises and advance one or more types of legislation that can help protect kids and teens online.
Support Research Like This
With your support, BPC can continue to fund important research like this by combining the best ideas from both parties to promote health, security, and opportunity for all Americans.