On Wednesday, Google agreed to pay a record $170 million fine. Regulators said YouTube, the company’s enormously popular video site, had illegally collected personal information from children without the consent of their parents and used it to make millions of dollars from targeted ads.
Natasha Singer and Kate Conger reported in The Times that Google would pay $136 million to the Federal Trade Commission and $34 million to New York after a joint investigation found that YouTube had violated the federal Children’s Online Privacy Protection Act, known as COPPA.
That $136 million is the largest civil penalty the F.T.C. has levied in a children’s privacy case (the previous record was a $5.7 million fine), but critics said it was a pittance for a tech giant like Google.
The settlement is another indication that the Trump administration is willing to take action against Big Tech. In July, Facebook agreed to pay $5 billion to settle a privacy case with the trade commission. But on Vox, Peter Kafka wrote that both the Google and Facebook settlements showed that the government wasn’t ready to regulate the internet, arguing that there was no guarantee the companies would obey the law in the future.
“In both cases, the government is relying on the five-person F.T.C. to rein in the most powerful forces on the internet by asking them to interpret and enforce laws that are, in internet terms, prehistoric,” Mr. Kafka wrote.
The settlement requires YouTube to ask people who upload videos whether they are sharing content for children and to change its data-gathering and ad-targeting behavior accordingly. But Mr. Kafka and Rebecca Kelly Slaughter, an F.T.C. commissioner who voted against the settlement, believe the company should be required to proactively identify content that children will view.
“A cynical observer might wonder whether in the wake of this order YouTube will be even more inclined to turn a blind eye to inaccurate designations of child-directed content in order to maximize its profit,” Ms. Slaughter wrote in her dissent.
Don’t misbehave in Britain anytime soon — at least not in public. Adam Satariano reported that a British court had ruled that governments could use live facial recognition technology without violating human rights. In other words, the police are free to use cameras in public spaces to identify people in real time.
Facial recognition technology is improving rapidly, and many legal and ethical questions loom as police departments and other government organizations increase their use of it in countries across the globe, from Britain to the United States to China.
Brought by a resident of South Wales, where the police have deployed live facial recognition, the High Court case is one of the first of its kind. The South Wales police and crime commissioner hailed the court’s decision. Ed Bridges, the man who brought the suit, vowed to appeal.
“This sinister technology undermines our privacy, and I will continue to fight against its unlawful use to ensure our rights are protected and we are free from disproportionate government surveillance,” Mr. Bridges said.
In China, according to reporting by my colleague Paul Mozur, the government is using the technology to track ethnic minorities. Other issues abound. Public advocates — and many experts in artificial intelligence — are concerned that facial recognition technology can be biased against women and minorities.
Election Day is still 14 months away. And Big Tech is already planning security. Bloomberg reported that Facebook, Google, Twitter and Microsoft met with government officials in Silicon Valley on Wednesday with an eye to reducing online disinformation and foreign interference in the run-up to the next American presidential election.