Protecting kids on social media

Meta calls for parental control laws
Updated: Nov. 15, 2023 at 9:40 PM CST
MOBILE, Ala. (WALA) - Depending on what state you’re in, it’s already happening: children and teenagers are losing access to social media apps like TikTok if they don’t have parental consent.

Tech giants like Facebook, Google and TikTok have enjoyed unbridled growth for over a decade, but amid concerns over user privacy, hate speech, misinformation and harmful effects on teens’ mental health, lawmakers have begun trying to rein them in. Hence the growing calls for parental-consent requirements on social media companies.

Meta is now calling for that consent as well, backing laws that would force app stores to get parental approval when a child downloads an app. The proposal would put app stores, like those run by Google and Apple, on the hook for implementing the rule rather than social media companies.

“Parents should approve their teen’s app downloads, and we support federal legislation that requires app stores to get parents’ approval whenever their teens under 16 download apps,” said Meta’s global head of safety, Antigone Davis, in a blog post published Wednesday.

The announcement comes after a federal judge delivered a major blow to tech companies. On Tuesday, the court denied their motion to toss out a lawsuit claiming that Google, Meta, Snap and TikTok addicted teen users and may have contributed to other mental health issues.

The judge ruled the tech companies must face at least some of the allegations.

YouTube to roll out AI-generated content labels

YouTube will soon require disclosures on videos that contain content generated by artificial intelligence, because such content could mislead viewers.

The Google-owned platform already has a similar policy, but the new update will now require creators to add labels when they upload content that includes “manipulated or synthetic content that is realistic, including using AI tools.”

The policy is meant to help prevent users from being confused by synthetic content amid an increase in AI tools that make it quick and easy to create compelling text, images, video, and audio that can often be hard to distinguish from the real thing.

Digital information integrity experts have expressed concern that the growth of generative AI tools could lead to a boom in convincing but misleading content being shared on social media and across the internet.

The technology could also pose a threat ahead of elections in the United States and elsewhere in 2024.