Elon Musk gave up policing political disinformation. The rest of tech did, too

Social media companies are retreating from their role as watchdogs against political misinformation, abandoning their most aggressive efforts to police online falsehoods in a trend expected to profoundly affect the 2024 presidential election.

Several factors are fueling the retreat: Mass layoffs at Meta and other major tech companies have gutted teams dedicated to promoting accurate information online. An aggressive legal battle over claims that the Biden administration pressured social media platforms to silence certain speech has blocked a key avenue for detecting election interference.

And X CEO Elon Musk has reset industry standards, rolling back strict rules against misinformation on the site formerly known as Twitter. In a sign of Musk’s influence, Meta last year drew up a plan to ban all political advertising on Facebook. The company shelved it after Musk announced plans to turn rival Twitter into a haven for unrestricted speech, according to two people familiar with the effort who spoke on the condition of anonymity to describe sensitive matters.

The retreat comes just months before the 2024 primaries, as GOP front-runner Donald Trump continues to rally supporters with false claims that voter fraud drove his 2020 defeat to President Biden. Multiple election investigations have found no evidence of fraud, and Trump now faces federal criminal charges related to his efforts to overturn the election. Yet YouTube, X and Meta have stopped labeling or removing posts that repeat Trump’s claims, even as many voters get their news on social media.

Trump took advantage of those relaxed rules in his recent interview with former Fox News host Tucker Carlson, which was hosted on X. The former president peppered the conversation, which aired Wednesday night opposite the first Republican primary debate of the 2024 campaign, with false claims that the 2020 election was “rigged” and that Democrats had “cheated” to elect President Biden.

On Thursday night, Trump posted on X for the first time since leaving the site, then known as Twitter, after the Jan. 6, 2021, attack on the U.S. Capitol. Musk had reinstated his account in November. The former president posted his mug shot from Fulton County, Ga., where he was booked Thursday on charges related to his efforts to overturn the 2020 election. “NEVER GIVE UP!” read the caption.

The evolution of the companies’ practices was described by more than a dozen current and former employees, most of whom spoke on the condition of anonymity to share sensitive details. The new approach marks a stark shift from the 2020 election, when social media companies ramped up their efforts to combat misinformation, fearing a repeat of 2016, when Russian trolls meddled in the U.S. presidential campaign and turned the platforms into tools of manipulation and political division.

The weakened commitments come as covert influence campaigns from Russia and China grow more aggressive and as advances in generative artificial intelligence create new tools for deceiving voters.

Disinformation experts say the landscape heading into 2024 calls for more aggressive efforts to combat false claims, not fewer.

“Musk took the helm and ran it into the ground,” said Emily Bell, a professor at Columbia University’s Tow Center for Digital Journalism who studies the relationship between platforms and news publishers. Heading into the 2024 presidential election, misinformation about voting “is going to be even worse,” she added.

Social media platforms say they still have safeguards in place to prevent the spread of misinformation.

“We remove content that misleads voters about how to vote or that encourages interference in the democratic process,” YouTube spokeswoman Ivy Choi said in a statement. “In addition, we connect people to authoritative election news and information through recommendations and information panels.”

Meta spokeswoman Erin McPike said in a statement that “protecting the 2024 U.S. election is one of our top priorities, and our integrity efforts continue to lead the industry.”

Still, the shift is already changing what some users see online. Earlier this month, the founder of a music cruise line posted a screenshot on Facebook showing Illinois Gov. J.B. Pritzker (D) signing a bill that allows certain immigrants to become police officers and sheriff’s deputies. “In Illinois, U.S. citizens will be arrested by illegal immigrants,” read the post, which was shared more than 260 times.

Fact-checkers at USA Today, one of dozens of media organizations Meta pays to debunk viral conspiracies, deemed the post false, and the company labeled it as such. But Meta has quietly begun offering users new controls to opt out of the fact-checking program, allowing debunked posts such as the Pritzker one to spread to participating users without a warning label. Conservatives have long criticized Meta’s fact-checking system, arguing that it is biased against them.

Meta Global Affairs President Nick Clegg has said the ability to opt out represents a new direction that empowers users and reduces pressure on the company. “We’ve taken a big step toward giving people greater control over even the most controversial and sensitive content,” Clegg said. McPike added that the new fact-checking policy comes “in response to users telling us they want a greater ability to decide what they see.”

YouTube has also pulled back from policing misleading claims, announcing in June that it would no longer remove videos that falsely claim the 2020 presidential election was stolen from Trump. Continuing to enforce the ban would curtail political speech without “meaningfully reducing the risk of violence or other real-world harm,” the company argued in a blog post.

The changes reflect the thinking of social media executives who, battered by battles over content moderation, concluded there is “no winning,” said Katie Harbath, a former director of public policy at Facebook, where she managed the company’s global elections strategy.

“For Democrats, we were never taking down enough, and for Republicans, we were always taking down too much,” she said. The result was a general feeling that “after doing all this, they still yell at us . . . It’s just not worth it anymore.”

– – –

The “Big Lie” Test

For years, Meta’s trust and safety teams operated like a university. Driven by curiosity, employees were encouraged to study the platform’s thorniest problems, including fraud, abuse, bias and voter suppression, and to build systems to address them.

But over the past year and a half, some say, that proactive posture has been abandoned. Instead, employees are now asked to spend more time simply keeping up with a growing list of global regulations, according to four current and former employees.

That is a departure from the strategy tech companies adopted after Russia manipulated social media in an attempt to tilt the 2016 election in Trump’s favor. The episode turned Mark Zuckerberg into a symbol of corporate recklessness, and Meta’s CEO vowed to do better.

He embarked on a public apology tour and pledged to devote the company’s nearly unlimited resources to protecting democracy. “The most important thing I care about right now is making sure that no one interferes with elections around the world,” Zuckerberg told senators in 2018, the same year a Wired cover depicted him with a bruised and bloodied face.

In the run-up to the 2020 presidential election, social media companies beefed up their research teams to crack down on foreign influence campaigns and paid thousands of content moderators to debunk viral conspiracies. Ahead of the 2018 midterms, Meta gave reporters a tour of its “war room,” where employees monitored election threats in real time.

Civil rights groups pressed the platforms, including in meetings with Zuckerberg and Meta COO Sheryl Sandberg, to strengthen their election policies, arguing that the pandemic and the popularity of mail-in voting created an opening for bad actors to confuse voters about the electoral process.

“These platforms were making all kinds of commitments on content moderation and racial justice and civil rights in general,” said Color of Change President Rashad Robinson, whose racial justice organization helped organize an advertiser boycott by more than 1,000 companies, including Coca-Cola, the North Face and Verizon, after the police killing of George Floyd.

The platforms instituted strict rules against posts that could lead to voter suppression. When Trump questioned the validity of mail-in ballots in 2020, Facebook and Twitter took the unprecedented step of attaching labels such as “This claim about voter fraud is disputed” to dozens of his misleading posts. Google restricted election-related ads and promoted its work with government agencies, including the FBI’s Foreign Influence Task Force, to thwart election interference campaigns.

In early January 2021, rioters incited by Trump stormed the U.S. Capitol after organizing, in part, on Facebook and Twitter. In response, Meta, Twitter, Google and other tech companies suspended Trump, forcing him off their platforms.

It was the high-water mark of social media’s confrontation with political disinformation.

But as the tech giants grappled with shrinking profits, that proactive posture began to fade.

In the summer of 2021, Meta’s Clegg began a campaign to persuade Zuckerberg and the company’s board to end all political advertising on its platforms, a policy Twitter had already adopted. Meta’s refusal to fact-check politicians’ speech had fueled years of controversy, and critics accused the company of profiting from misinformation in some campaign ads. Clegg argued that the ads caused Meta more political trouble than they were worth.

Though Zuckerberg and some board members were initially skeptical, the company eventually embraced the idea. Meta even planned a date to announce the new policy, according to two people.

By July 2022, the proposal had been shelved indefinitely. The internal push for the new rule appeared to crumble after Musk trumpeted his plan to turn Twitter into a haven for “free speech,” a principle Zuckerberg and some board members had praised, one of the people said.

After Musk officially acquired the company that fall, Twitter rescinded its own ban on political ads.

“Elon’s stance on this issue changed the way the board and the industry viewed [the policy],” said one person briefed on the board’s discussions about Meta’s proposed ad ban. “He came in and blew it all up.”

– – –

The Musk factor

Almost immediately, Musk’s reign at Twitter pushed his peers to reconsider other industry standards.

On his first night as owner, Musk fired Vijaya Gadde, the company’s head of trust and safety, whose job was to protect users from fraud, harassment and offensive content. Soon after, just days before the midterm elections, the company laid off more than half of its 7,500 employees, gutting the teams that made high-stakes decisions about what to do with falsehoods.

The budget cuts and the shifts in content moderation drove advertisers to flee. But even as advertisers left, other tech companies largely followed in Musk’s footsteps.

In a June interview with right-wing tech podcast host Lex Fridman, Zuckerberg said Musk’s decision to drastically cut Twitter’s head count, including by removing non-engineers who worked on issues like public policy but weren’t building products, encouraged other tech leaders, himself included, to make similar changes.

“It was probably a good thing for the industry that he made those changes,” Zuckerberg said. (Meta has since laid off more than 20,000 workers, part of an industry-wide trend.)

Musk reinstated high-profile conservative Twitter accounts, including that of Jordan Peterson, a professor who had been banned for misgendering a trans person, and the Babylon Bee, a conservative satire site. Musk also brought back Republican politicians, including Trump and Rep. Marjorie Taylor Greene (R-Ga.), whose personal account had been banned for violating the platform’s COVID-19 misinformation policies. And he suspended the accounts of journalists who covered him, including Washington Post reporter Drew Harwell and CNN reporter Donie O’Sullivan.

A rise in hate speech followed as users tested the new boundaries.

The political winds facing Silicon Valley were also shifting. Trump’s 2020 voter-fraud allegations prompted scores of Republican candidates to echo his rhetoric, cementing election denial as a top Republican talking point. In a May CNN poll, six in 10 Republican voters said they believed Trump’s false claim that the 2020 election was rigged.

Shortly after Musk’s acquisition of Twitter, many Republican candidates and right-wing influencers tested the resolve of Meta, Twitter and other social media platforms to fight election disinformation. In the months leading up to the midterm elections, far-right figures and Republican candidates spread election denialism on social media, largely unchecked.

Mark Finchem, the Republican candidate seeking to oversee Arizona’s elections as secretary of state, launched a fundraising pitch on the eve of the 2022 election, falsely claiming on Facebook and Twitter that his Democratic opponent, Adrian Fontes, was a member of the Chinese Communist Party and part of a “criminal cartel” that had already organized “rigged elections.”

When Twitter, in response to reporters’ questions, appeared to limit his account, Musk said he was “investigating” complaints that Finchem was being censored. Later that night, Finchem was tweeting again, thanking Musk “for stopping the communist who suspended me from Twitter a week before the election.”

Last year, Meta disbanded its Responsible Innovation team, a small group that assessed the potential harms of its products, according to a person familiar with the matter, and shut down Facebook’s much-vaunted journalism project, which was designed to promote quality information.

“What was once promoted as an indispensable part of Meta’s role in safeguarding democracy, election integrity and a healthy information ecosystem is now deemed expendable,” said Jim Friedlich, executive director of the Lenfest Institute for Journalism, who spent two years helping administer Facebook’s journalism grants.

Now, Meta is trying to shrink its role as arbiter of contentious political content on its new Twitter-like app, Threads. Instagram head Adam Mosseri, who led the creation of Threads, said earlier this year that the platform would not encourage politics and “hard news” because the extra user engagement was not worth the added scrutiny.

But even as Meta tries to sit out the political culture wars, it cannot hide from the coming election.

Shortly after the company launched Threads, Meta began warning users who sought to follow Donald Trump Jr. on the new network that his account had repeatedly posted false information reviewed by independent fact-checkers. Trump Jr. posted a screenshot of the warning on rival Twitter, complaining that the new app had not gotten off to a good start.

A Meta spokesperson responded: “This was a mistake and shouldn’t have happened. It has been corrected.”

After the incident, Clegg told The Post that he hoped such politically charged disputes would fade over time.

“I hope that over time we’ll talk less about our big, blunt algorithmic choices and more about whether the individual controls we give you in Threads feel meaningful to you,” he said.
