Meta will begin labeling political ads in 2024 that use AI-generated images

by MMC

WASHINGTON (AP) — Facebook and Instagram will require political ads running on their platforms to disclose whether they were created using artificial intelligence, their parent company announced Wednesday.

Under Meta’s new policy, labels recognizing the use of AI will appear on users’ screens when they click on ads. The rule will come into effect in the new year and will be applied worldwide. No specific date has been set.

Microsoft on Tuesday unveiled its own election-year initiatives, including a tool that will allow campaigns to insert a digital watermark into their ads. These watermarks are intended to help voters understand who created the ads, while ensuring that the ads cannot be digitally altered by others without leaving evidence.

The development of new AI programs has made it easier than ever to quickly generate realistic sounds, images and videos. In the wrong hands, the technology could be used to create fake videos of a candidate or frightening images of voter fraud or violence at polling places. When linked to powerful social media algorithms, these fakes could mislead voters and sow confusion on a scale never seen before.

Meta Platforms Inc. and other tech companies have been criticized for not doing more to address this risk. Meta’s announcement Wednesday — which came on the same day that House lawmakers held a hearing on deepfakes — is unlikely to allay those concerns.

As European officials work on comprehensive regulations for the use of AI, time is running out for U.S. lawmakers to pass their own rules before the 2024 election.

Earlier this year, the Federal Election Commission began a process to potentially regulate AI-generated deepfakes in political ads ahead of the 2024 election. Last week, President Joe Biden’s administration released an executive order intended to encourage the responsible development of AI. Among other provisions, it will require AI developers to provide the government with safety data and other information about their programs.

Democratic U.S. Rep. Yvette Clarke of New York is the sponsor of legislation that would require candidates to label any advertising created with AI and run on any platform, as well as a bill that would require watermarks on synthetic images and would make it a crime to create unlabeled deepfakes inciting violence or depicting sexual activity. Clarke said Meta and Microsoft’s actions are a good start, but not enough.

“We are on the brink of a new era of disinformation warfare, aided by the use of new AI tools,” she said in an emailed statement. “Congress must establish guardrails not only to protect our democracy, but also to curb the tide of deceptive AI-generated content that can potentially mislead the American people.”

The United States isn’t the only country holding high-profile elections next year: National elections are also planned in countries like Mexico, South Africa, Ukraine, Taiwan, India and Pakistan.

AI-generated political ads have already appeared in the United States. In April, the Republican National Committee released an entirely AI-generated ad intended to show the future of the United States if Biden, a Democrat, is re-elected. It used fake but realistic photos showing shuttered storefronts, armored military patrols in the streets and waves of immigrants spreading panic. The ad was labeled to inform viewers that AI was being used.

In June, Florida Gov. Ron DeSantis’ presidential campaign shared an attack ad against his Republican primary opponent, Donald Trump, that used AI-generated images of the former president hugging the infectious disease expert Dr. Anthony Fauci.

“It becomes very difficult for a casual observer to understand: What do I believe here?” said Vince Lynch, AI developer and CEO of AI company IV.AI. Lynch said a combination of federal regulation and voluntary policies from tech companies is needed to protect the public. “Companies need to take responsibility,” Lynch said.

Meta’s new policy will cover any ad about a social issue, election or political candidate that includes a realistic image of a person or event altered using AI. More modest use of the technology, to resize or sharpen an image, for example, would be permitted without disclosure.

In addition to labels informing the viewer when an ad contains AI-generated images, information about the ad’s use of AI will be included in Facebook’s online ad library. Meta, based in Menlo Park, California, says any content that violates the rule will be removed.

Google revealed a similar AI labeling policy for political ads in September. Under the rule, political ads running on YouTube or other Google platforms will have to disclose the use of AI-edited voices or images.

Along with its new policies, Microsoft released a report highlighting that countries like Russia, Iran and China will try to harness the power of AI to interfere with elections in the United States and elsewhere, and warning that the United States and other countries must prepare.

Groups working for Russia are already at work, the Redmond, Washington-based tech giant’s report concludes.

“Since at least July 2023, Russian-affiliated actors have used innovative methods to engage audiences in Russia and the West with inauthentic, but increasingly sophisticated, media content,” the report’s authors wrote. “As the election cycle progresses, we expect the expertise of these actors to improve while the underlying technology becomes more capable.”
