Hearing on Social Media and Voting Security

Google, Meta, and Microsoft leaders testify on threats to U.S. elections and voting security.
Chairman Warner (00:00):

… Chairman presumed that and that he’d have a chance to vote and stuff. So we’re going to [inaudible 00:00:07] and we’re going to roll through. There are two votes this afternoon, colleagues, and we will roll through these.

(00:13)
I’m going to call this [inaudible 00:00:14] to order, and I want to welcome today’s witnesses, Mr. Kent Walker, President of Global Affairs and Chief Legal Officer at Alphabet; Mr. Nick Clegg, President of Global Affairs at Meta; and Mr. Brad Smith, Vice Chair and President of Microsoft.

(00:28)
Today’s hearing builds on this committee’s long-standing practice of educating the public about the intentions and practices of foreign adversaries seeking to manipulate our country’s electoral process.

(00:43)
I do know we’ve all come a long way since 2017 when, as folks may remember, there was a lot of skepticism that our adversaries might have utilized American social media platforms for [inaudible 00:00:57]. It was only seven years ago that, in response to inquiries from this committee, Facebook shared the first evidence of what would become an expansive discovery documenting Russia’s use of tens of thousands of inauthentic accounts across Facebook, Instagram, YouTube, Twitter, Reddit, LinkedIn, and even smaller platforms like Gab and Tumblr and Medium and Pinterest, all to try to divide Americans and influence their votes.

(01:31)
And through this committee’s bipartisan investigation into the Russian interference in the 2016 election, we learned that Russia had devoted millions to wide-ranging influence campaigns that [inaudible 00:01:43] generated hundreds of millions of online impressions, which [inaudible 00:01:47] political division, racial division, and impersonated social, political and faith groups of all stripes across all ends of the political spectrum to infiltrate and manipulate our debate.

(01:59)
This committee’s bipartisan efforts also resulted in a set of recommendations for government, for the private sector and for political campaigns; recommendations for which I hope today’s hearing will serve [inaudible 00:02:15].

(02:15)
These recommendations included greater information sharing between the US government and the private sector about foreign malicious activity. Not domestic, foreign malicious activity; greater [inaudible 00:02:30] measures by platforms to inform users about that malicious activity, as well as more information on the origin and authenticity of information presented to them; and, this was something [inaudible 00:02:44] a lot of attention, facilitation of open source research by academics and civil society organizations to better assist platforms here and others and the public in identifying malicious use of social media and bad faith actors.

(02:57)
[inaudible 00:03:00] side, we’ve also seen some significant progress. And let me state right now that the 2020 election, I think, was the most secure in United States history. That’s verified by election security experts, and I want to commend the Trump administration for helping that come about.

(03:16)
Now it came about because the progress has been made through a combination of both bipartisan appropriation of funding for election upgrades, things that folks on both sides have been calling for for a long time; paper records; risk-limiting audits to verify results; a better-postured, frankly, national security community that we have oversight on to track and expose and disrupt foreign adversarial election threats; and I think a pretty successful effort to share threat information about foreign influence activity with the private sector.

(03:50)
US tech companies have made progress as well, although, as I’ve warned all of our witnesses, it has been uneven since 2016. These include …

(04:00)
And I want to [inaudible 00:04:02] because many of you were present when the three companies in front of us, 26 [inaudible 00:04:08], including companies [inaudible 00:04:12], a lot of this has taken place now; X, formerly known as Twitter, wouldn’t even send a representative today; where 27 companies signed in Munich what was called the Tech Accord to combat deceptive use of AI in 2024 elections, not just in America, but around the world.

(04:28)
And while I appreciate [inaudible 00:04:31] that were made there, I think it has been uneven about where [inaudible 00:04:36] has actually been done. Recently, I sent letters to all 27 of those companies. Some came back with specificity, some of you; unfortunately others simply ignored even responding.

(04:49)
And [inaudible 00:04:52] there are new factors that I think have raised my concern dramatically.

(04:57)
First is our adversaries realize this is effective and cheap. Putin clearly understands [inaudible 00:05:07] want to undermine American support for Ukraine; weighing in and frankly putting up fake information can help him in that matter. [inaudible 00:05:19] we’ve seen since the conflict between Israel and Hamas post-October 7th, this has also been a ripe area for foreign misinformation and disinformation. And again, we’ve seen Iran dramatically increase their efforts to stoke social discord in the US while again potentially seeking to shape elections.

(05:42)
We’ve seen less there from China, but there have been some efforts by China on … Not at the national level, but on down-ballot races where candidates may not be taking a pro-CCP position.

(05:55)
Recently, and literally in the last eight weeks, we’ve seen a covert influence project led by RT to bankroll unwitting US political influencers on YouTube. We’ve seen a wide-ranging Russian campaign that frankly hasn’t gotten [inaudible 00:06:12] media attention because I think they focused on the guys in Tennessee and not some of the slides that we’re going to put up later. [Inaudible 00:06:18] questioning were major institutions like the Washington Post and Fox News. The bad guys have basically put out false information under those banners with the goal of spreading credible-sounding narratives [inaudible 00:06:34] shape American voters’ opinions of candidates and campaigns.

(06:37)
And [inaudible 00:06:39] this committee has called this out: efforts to infiltrate American protests over the conflict in Gaza by [inaudible 00:06:50] influence operatives, who again seek to stoke division, and in many cases in terms of these efforts, denigrate former President Trump.

(06:58)
I do want to acknowledge that in these recent efforts, you all have played a positive role. I want to thank Meta, and I hope our committee’s interest in this subject helped move you yesterday when you guys decided to take down RT and related influence operations. I want to thank Microsoft for being forward-leaning and publicly sharing information on some of the Russian activities. And I want to thank Alphabet, and I’m going to call you guys more by Facebook and Google, for the lesson [inaudible 00:07:32] when you were the first to come forward on the [inaudible 00:07:36] attacks. So compliments to all that.

(07:39)
On an overall basis, though, we’ve also seen the scale and sophistication of these kinds of attacks escalate. When we think about AI tools, we all know about that, I think we originally thought this would be in the form of deepfakes, video and alteration. You see AI-type tools being used to create what appears, and virtually any American voter would think, is a real Fox News or Washington Post [inaudible 00:08:08] when in reality it isn’t.

(08:09)
And unfortunately, Congress has not been able to take on this issue. But I would point out that a pretty broad swath of individual states, ranging across the political spectrum, have really pretty significant guardrails in place, at least in terms of deepfake manipulation in their state elections. And that’s Alabama, Texas, Michigan, Florida, and California. [inaudible 00:08:34] we get the best ideas from the states and bring them to the national level.

(08:39)
Most of you have indicated that you have not seen, and I think the good news so far is we’ve not seen the kind of massive AI interference that we might have expected, particularly in the British or French elections. But [inaudible 00:08:49] as we know from past times, the real time that this will gear up will be closer to the election.

(08:57)
And the truth is, [inaudible 00:08:59] 2016, Russia had to create fake personas to spin wild stories. Unfortunately, we now have a case where too many Americans frankly don’t trust key US institutions, from federal agencies to local law enforcement to social media. There’s increased reliance on the internet. I think most of us would try to tell our [inaudible 00:09:30] it’s true, but the ability of the adversary to amplify things that are stated by Americans goes up dramatically.

(09:39)
And finally, we’ve seen a concerted litigation [inaudible 00:09:43] that has sought to undermine the federal government’s [inaudible 00:09:46] share this vital threat information between you guys and the government and vice versa. And frankly, a lot of those independent academic third-party checkers have really been bullied in some cases or litigated into silence. For instance, we’ve seen the shuttering of the election disinformation [inaudible 00:10:06] Stanford’s Internet Observatory, as well as the termination of a key research project at Harvard’s Shorenstein Center. We need those academic research [inaudible 00:10:16] as that independent source.

(10:18)
And again, this is a question that really bothers me and we will … We may litigate this [inaudible 00:10:27], but too many of the companies have automatically cut back on their own efforts to prohibit false information. Again, we’re talking about foreign source [inaudible 00:10:37]. And we’ve seen the rise in [inaudible 00:10:39] of a foreign-owned platform with huge reach, in the case of TikTok, that raises huge national security concerns. And I’m very glad that over 80% of both the House and the Senate voted to say a creative platform shouldn’t be ultimately controlled by the CCP. Now, the last open hearing we had on this topic, we heard about what the federal government’s doing to [inaudible 00:11:08]. We’re going to continue to get with law enforcement and [inaudible 00:11:13] before election day. But this is really our effort to try to urge [inaudible 00:11:18] to do more to alert the public that this problem has not gone away. Lord knows we have enough differences between Americans that those differences don’t need to be exacerbated by our foreign adversaries. And again, [inaudible 00:11:32] cherry-picking these adversaries. These nations, under the law of our country, China, Russia, Iran, North Korea, others, have been designated as foreign adversaries.

(11:41)
[inaudible 00:11:43] we’re 48 days away from an election and [inaudible 00:11:45] to do all we can before the election. But I also don’t think that at the end of election night, particularly considering how close the election will be, this will be over. One of my greatest concerns is that the level of misinformation and disinformation that may come from our adversaries after the polls close could actually be as significant as anything that happens up to the closing of the polls on election night.

(12:18)
With that, appreciate [inaudible 00:12:20].

(12:19)
And let me just, before [inaudible 00:12:23]; when we do the open hearings, and I appreciate Senator Cornyn, Cotton and a lot of my colleagues getting here early, we are going to go by seniority rather than at the gavel.

(12:31)
With that, Senator Rubio.

Vice Chairman Rubio (12:33):

Thank you for holding this hearing. Thank you all for [inaudible 00:12:35] to be here. It’s important.

(12:35)
This is a … It’s actually a tricky and difficult topic because I think there are two kinds of things we’re trying to [inaudible 00:12:44].

(12:41)
First, generated disinformation. And that I think describes some of those efforts, but that is a foreign adversary, Iran, China, Russia; they create or make something up and they amplify it. They basically make it up, they push it out there, and they hope people believe it. It’s actually something … I remember giving a speech back in 2018 or 2019 warning about AI-generated videos that were going to be the way of the future in terms of trying to influence what people think and see. And we’ve seen some of that already [inaudible 00:13:16].

(13:15)
That’s pretty straightforward. Let me tell you where it gets complicated. Where it gets complicated is their pre-existing view [inaudible 00:13:25] because I generally agree with it, but this is an important example. There are people in the United States who believe that we shouldn’t have gotten involved in Ukraine. We shouldn’t have gotten involved in the conflict in Europe. Vladimir Putin so happens to believe and hope that that’s what we will conclude. And so now there’s someone out there saying something that, whether you agree with him or not, is a legitimate political voice [inaudible 00:13:47] pre-existing, and some Russian bot decides to amplify the views of an American citizen who happens to hold those views. And the question becomes, is that disinformation or is that misinformation [inaudible 00:14:00] influence operation because it’s an existing view being amplified?

(14:04)
It’s easy to say, “Well, just take down the amplifiers.” But the problem is it stigmatizes the person whose view it is. Now the accusation is that that person isn’t simply holding a view, they’re holding the same view that Vladimir Putin happens to have on that one topic, or something similar to what he has, and as a result, they themselves must be an asset. And that’s problematic and it’s complicated.

(14:24)
And we tried to manage all of this. We recall that in 2020, this is now known obviously, it’s been well-discussed, there was a laptop, Hunter Biden’s laptop. That was a story in the New York Post. And 51 former (and I say former because I have people all the time saying intelligence officers; these are former intelligence officials) went out and said, “This has all of the attributes of a Russian disinformation campaign.”

(14:49)
And as a result, the New York Post, who posted the original story, had the story [inaudible 00:14:55] and taken down, their accounts [inaudible 00:14:57]. There was a concerted effort on the basis of [inaudible 00:14:59] letter to silence a media outlet in the United States on something that actually turned out not to be Russian disinformation. Even though I imagine maybe the Russians wanted to [inaudible 00:15:09] that story. They might have amplified it, but it also happened to be factual.

(15:11)
We know based on the letter from the CEO [inaudible 00:15:14] with him during the COVID pandemic to certain views, and he expressed regret about agreeing to some of that. And so there are people in this country that had their accounts locked, or even in some cases canceled out, because they questioned the efficacy of masks, something we now know Dr. Fauci agreed with, that masks were not a solution to the problem; or they questioned whether there was a lab leak or put out the lab leak theory, which at one time was considered a conspiracy and a flat-out lie, and now intelligence agencies are saying it’s 50% likely, just as likely as a natural origin.

(15:51)
So this is a tricky [inaudible 00:15:53], and it’s even trickier now because Russia’s still doing it more than anybody else. But you don’t need a big expensive operation to pursue some of this. I think we should anticipate that in the years to come, and it’s happening already, the Iranians are going to get into this. They already are. The Chinese are going to get into this business. They already are. And you see them using that in other countries to sow discord and division. It’s coming. It’s also North Korea. Multiple. And maybe even friendly [inaudible 00:16:23] who have a preference on how American public opinion turns.

(16:27)
So I do think it’s important to understand what our policies are today in terms of identifying what is disinformation; what is actually generated by an adversary versus the amplification of pre-existing belief in America? Because a lot of people [inaudible 00:16:42] being labeled collaborators when in fact they just hold views that on that one issue do align with what some other country hopes we believe as well.

(16:50)
And I’m very interested to learn what our internal policies are in these companies because I think it’s a minefield that we need to … And it may end up sowing … In an effort to prevent discord … I don’t want to sow discord, and that’s one of the dangers that we’re now dealing with.

(17:10)
So thank you for being here. I look forward to hearing your testimony.

Chairman Warner (17:14):

And before I go, I just want to emphasize I agree with Senator Rubio.

(17:16)
Americans have got the right to say whatever they … It’s their First Amendment right to say what they will, agree or disagree, no matter how crazy. I do think there’s a difference when foreign intelligence services cherry-pick information and amplify it. It in many ways stokes division.

(17:38)
And that’s again where the core of this debate is, and we’re anxious to hear testimony.

(17:44)
Who drew the short straw to go first?

Mr. Kent Walker (17:49):

Happy to launch.

(17:51)
Chair Warner, Vice Chair Rubio, members of the committee, thank you all for the opportunity to be with you today.

(17:57)
Google and Alphabet are in the business [inaudible 00:18:00] the trust of our users. We take seriously the importance of protecting the [inaudible 00:18:05] and access to a range of viewpoints while also maintaining and enforcing responsible policy frameworks.

(18:11)
A critical aspect of that responsibility is doing our part to protect the integrity of democratic processes around the world. That’s why we’ve long invested in significant new capabilities, updated our [inaudible 00:18:24], and introduced tools to address [inaudible 00:18:26] integrity. We recognize the importance of enabling people who use our services in America and abroad to speak freely about the political issues that are most important to them. At the same time, we continue to take steps to prevent the misuse of our tools and our platforms, particularly their exploitation by foreign bad-faith actors to undermine democratic elections.

(18:45)
[inaudible 00:18:48] we created the Google Threat Intelligence Group, which combines our Threat Analysis Group, or TAG, and Mandiant Intelligence. Mandiant Threat Intelligence identifies, monitors and [inaudible 00:19:01] coordinated influence operations and cyber espionage campaigns. We disrupt activity on a regular basis and we publish our findings, and we provide expert analysis on threats originating from the kinds of countries you’re talking about: Russia, China, Iran and North Korea, as well as from the criminal underground.

(19:20)
This year alone, we’ve seen a variety of malicious activity, including cyber attacks, efforts to compromise personal email accounts of high-profile political actors, and [inaudible 00:19:31] operations both on and off our platforms that are seeking to sow discord among Americans the way you are both describing.

(19:39)
We remain on the lookout for new tactics and techniques in both cyber security and disinformation campaigns. We are seeing some [inaudible 00:19:47] actors experimenting with generative AI to improve existing cyber attacks, like probing for vulnerabilities or creating [inaudible 00:19:57]. Similarly, we see generative AI being used to efficiently create websites, misleading news articles and robotic social media posts. We have not yet seen [inaudible 00:20:07] change in these attacks, but we do remain alert to new attack vectors.

(20:12)
To help us all stay ahead, we continue to invest in state-of-the-art capabilities to identify AI-generated content. We’ve launched a [inaudible 00:20:22] tool that watermarks and identifies AI-generated content in text, in audio, in images and in video. We were also the first tech company to require election advertisers to prominently disclose ads that include realistic-looking content that is synthetic or digitally altered.
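
Mr. Walker does not describe the mechanism here, but one published approach to statistical text watermarking, sketched below as a toy illustration only and not as a description of Google’s actual tool, has a generator pseudorandomly prefer a “green” subset of the vocabulary at each step, while a detector checks whether green tokens are overrepresented:

```python
import hashlib

def green_set(prev_token: str, vocab, fraction: float = 0.5):
    """Pseudorandomly split the vocabulary based on the previous token.
    A watermarking generator would nudge sampling toward this 'green'
    subset; a detector only needs the same hash to rebuild it."""
    return {
        tok for tok in vocab
        if hashlib.sha256(f"{prev_token}|{tok}".encode()).digest()[0]
        < 256 * fraction
    }

def watermark_z_score(tokens, vocab, fraction: float = 0.5) -> float:
    """How far the observed count of 'green' tokens deviates from chance.
    A large positive z-score suggests watermarked (AI-generated) text."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(
        cur in green_set(prev, vocab, fraction)
        for prev, cur in zip(tokens, tokens[1:])
    )
    mean = n * fraction
    var = n * fraction * (1 - fraction)
    return (hits - mean) / var ** 0.5
```

On unwatermarked text the z-score hovers near zero; text from a generator that consistently favored the green sets scores well above it.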

(20:40)
On YouTube, when creators upload content, we now require them to indicate whether it contains altered or synthetic material [inaudible 00:20:48], which we then label appropriately. And we will soon begin to use content credentials. That’s a new form of tamper-evident metadata coming out of the C2PA program that we’ll discuss, I’m sure, by the [inaudible 00:21:04] Search and YouTube, and to help our users identify AI-generated material.
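
The C2PA specification defines its own manifest format and certificate-based signatures; purely to illustrate what “tamper-evident metadata” means, here is a minimal sketch using an HMAC as a stand-in for a real certificate signature. The key and field names are hypothetical, not the C2PA schema:

```python
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # hypothetical; real C2PA uses certificate-based signatures

def attach_credentials(content: bytes, claims: dict) -> dict:
    """Bind provenance claims to a hash of the content, then sign the
    whole manifest. Editing either the content or the claims afterward
    invalidates the signature, which is what makes it tamper-evident."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "claims": claims,  # e.g. {"issuer": "...", "ai_generated": True}
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest
```

Because the signature covers a hash of the content together with the claims, changing either one afterward breaks verification.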

(21:09)
We, our users, industry [inaudible 00:21:13] and civil society all play important roles in safeguarding election integrity. We encourage our high-risk users, including elected officials and candidates, to protect their personal and official email accounts, and offer the strongest set of cyber protections [inaudible 00:21:28] our Advanced Protection Program. We also work across the tech industry, including through the Tech Accord you mentioned, Chair Warner, and the Coalition for Content Provenance and Authenticity, the C2PA group I mentioned, to identify [inaudible 00:21:43] and [inaudible 00:21:45].

(21:44)
We’re committed to doing our part to keep the digital ecosystem safe, reliable, and open to free expression. We appreciate the committee convening this important hearing and we look forward to answering your questions.

Mr. Brad Smith (21:57):

Thank you, Chairman Warner. Thank you, Vice Chairman Rubio. It’s a pleasure to be here.

(22:02)
And I first want to say, many days we are competitors, but [inaudible 00:22:09] when it comes to protecting the American public, all three of us and all of us across the tech sector [inaudible 00:22:16] need to be colleagues committed to a common cause of protecting our elections.

(22:22)
We have to start by recognizing that there are real serious threats, including in this election. We’ve all been reporting on them, we’ve been seeing them, and you talked about them.

(22:35)
We know that there is a presidential race between Donald Trump and Kamala Harris, but this has also been an election of Iran versus Trump and Russia versus Harris. And it is an election in which Russia, Iran and China are united with a common interest in discrediting democracy in the eyes of our own voters and even more so in the eyes of the world.

(23:04)
So what do we do? What is the responsibility of the tech sector? That’s the fundamental question you have put to us.

(23:11)
First, I think we should always adhere to [inaudible 00:23:16]. The first is to preserve the fundamental right to free expression that is enshrined in our Constitution that Vice Chairman Rubio spoke about. That is and needs to be our north star. And the second is to defend the American electorate from foreign nation states who are seeking to deceive the American public.

(23:39)
How do we do this? I think we have three roles.

(23:42)
The first is really to prevent foreign nation-state adversaries from exploiting American products and platforms to deceive our public. We do that with [inaudible 00:23:58] systems around AI-generated content, but we also do it by identifying and addressing content on our platform, especially AI-generated content, created by foreign states.

(24:10)
I think our second role is to protect the people who are putting themselves up for office, their campaign staffs, the political parties, the county and state election officials on which we all rely. And we do that in part by providing them with technology and know-how. Google, Microsoft, we all do that. And we do it by getting out there and working with them. Microsoft, we’ve now worked across 23 countries this year. We’ve had 150 training sessions reaching more than 4,700 people. And we do it by responding immediately in real-time [inaudible 00:24:53] to work with campaigns to help [inaudible 00:24:55].

(24:56)
And the third role we play, quite possibly the most important, is to build on your leadership and have this [inaudible 00:25:05]: to prepare the American public for the risks ahead. We do that by informing them, encouraging them to check what they see, to check before they vote. And we do it by, I think, recognizing that there is [inaudible 00:25:23] of peril ahead.

(25:24)
Today we are 48 days away from this election. As you said, Chairman Warner, the most perilous moment will come I think 48 hours before the election. That’s the lesson to be learned from, say, the Slovakian election last fall and other races we [inaudible 00:25:41].

(25:41)
I think above all else, even in a country that has so many divisions, I do hope we can all remember one thing. If Google and Microsoft and Meta can get together, if Republicans and Democrats and independents can work together, then I think we have an opportunity as a country to stand together to ensure that we the people of the United States will choose the people who lead us and we will protect ourselves from foreign interference.

(26:15)
Thank you very much.

Mr. Nick Clegg (26:19):

Chairman Warner, Vice Chairman Rubio, distinguished members of the committee, thank you for the opportunity to appear before you today.

(26:26)
At Meta, we are committed to free expression. Each day, more than three billion people around the world use our apps to make their voices heard. By the end of this year, more than two billion people will have voted in elections around the world, and we are proud that our apps help people participate in that process.

(26:47)
No tech company does or invests more tech [inaudible 00:26:50] elections online than Meta; not just during election seasons, but at all times. We have around 40,000 people overall working on safety and security, and we’ve invested more than 20 billion dollars in safety and security since 2016.

(27:07)
Meta has developed a comprehensive approach to protect the integrity of elections based on several key principles.

(27:14)
First, we have strict policies designed to prevent voter interference and intimidation. Second, we connect people to reliable voting information. Third, we work tirelessly to combat foreign interference and the spread of misinformation. And finally, we lead the industry in transparency for political advertisements.

(27:38)
Our approach reflects the knowledge gained from prior elections, and we [inaudible 00:27:42] to adapt to stay ahead of emerging challenges.

(27:46)
One of the most pressing challenges for the industry is people seeking to interfere with elections to undermine the democratic process. We constantly work to find and stop these campaigns across our platforms. This isn’t adversarial [inaudible 00:28:01] and we are often responding to [inaudible 00:28:03] with imperfect information. We may not always be right. So we need to be cautious, and in each case we need to conduct our own independent investigation to identify what is and is not interference.

(28:18)
Where we identify coordinated inauthentic behavior, we remove the networks at issue. In fact, we have removed 200 such networks since 2017, including networks from Russia, Iran and China. We remain committed to stopping these threats and we are [inaudible 00:28:36] improving and evolving our [inaudible 00:28:39] to stay ahead of our adversaries.

(28:39)
I am pleased to appear beside other industry leaders today, and it underscores an important point. People trying to interfere in elections rarely target a single platform. [inaudible 00:28:54] industry collaboration, transparency and reporting are essential to tackle these networks across the internet. And that is why we publicize these takedowns for all to see and share the relevant information we learn [inaudible 00:29:08] academics, including [inaudible 00:29:12].

(29:09)
[inaudible 00:29:12] elections are also taking place as more people are using generative AI tools. To date, we have not seen generative AI-enabled tactics used to subvert elections in ways that have [inaudible 00:29:27] our ability to disrupt them. However, we remain vigilant and will continue to adapt as the technology does as well.

(29:36)
We know that AI progress and responsibility can and must go hand-in-hand. That is why we are working internally and externally to address the risks of AI. We have implemented industry-leading efforts to label AI-generated content, giving people greater context for what they are seeing, and of course we are working across industry to develop common AI standards. And we are proud to have signed onto the White House’s voluntary AI commitments and the Tech Accord to combat deceptive use of AI in 2024 elections, both of which will help guide the industry towards safer, more secure and more transparent development of AI.

(30:16)
Every election brings its own challenges and complexities. We are confident our comprehensive approach can help protect the integrity not only of this year’s elections in the United States, but elections everywhere.

(30:28)
Thank you, and I look forward to your questions.

Chairman Warner (30:31):

Well, thank you, gentlemen, for your … I’m going to put up the first two presentations.

(30:37)
Let me add to what Mr. Smith said. I concur on the 48 hours before the election, but I would argue that the 48 hours after the polls close, particularly if we have as close an election as we anticipate, could be equally, if not more, significant in terms of spreading false information and disinformation

Chairman Warner (31:00):

And literally undermining the tenets of our democracy. Now, there was a lot of press attention recently on the Department of Justice in terms of the [inaudible 00:31:15] who were using, paying off influencers knowingly or unknowingly. What didn’t get much attention is the first slide, where, under the banner of Fox News and the Washington Post, these look exactly like the Washington Post and Fox News. In fact, it might not be what we thought of as AI, but these are the kind of AI techniques to make this look real; matter of fact, they’ve even got real office bylines, and the balance of the ads and other things are totally reflective. This came out of this DOJ indictment. I guess the question in these, you mentioned comprehensive: they appeared on your site. They also appeared on Twitter’s site, X’s site. I think it is a real shame that in the previous investigations, Twitter was a very collaborative entity. They are absent while some of the most egregious activities are taking place. But I’m [inaudible 00:32:29], even a technology-savvy American is not going to figure out that these are fake. So, where does that responsibility lie? Shouldn’t your efforts have been able to spot that? How do we make sure, because only after the fact in 2016, we didn’t have real-time numbers of how many Americans were viewing the fake sites, and it literally ended up being hundreds of millions. I still remember both the Tennessee Republican Party and Black Lives Matter sites; the real sites had less viewership than the Russian-based sites did. How does this get through? How do we know to which extent this is happening? And we have many, many more of these in these next 48 days to make sure that Americans are informed to beware. Mr. Clegg?

Mr. Nick Clegg (33:22):

Firstly, Senator, you’re absolutely right that it is a hallmark of Russian foreign interference in the democratic process to generate AI stories resembling real media. As it happens, since those appeared on our site, we have over the last 48 hours banned the organization that spawned all of this activity, the disinformation. [inaudible 00:33:49] not least after the editor-in-chief, and this is in effect a media organization owned and run out of The Kremlin, gave an interview where she said publicly, and I quote, that she and her team are conducting what she called a “guerrilla project” in the heart of American democracy. And the slide behind you is a manifestation of that. That is one of the reasons why-

Chairman Warner (34:15):

I want to make sure I get to my signal. I need to know how many views this and other Russian-generated Facebook sites that appear to be media sources received; I want that information as soon as possible. I also want to indicate, there is still an effort, and this is more over here, of targeting by the Russians towards specific groups. In this case, it was the Doppelganger gang, and it was both Americans and then the Latino community. They’re very sophisticated; I guess it wouldn’t be [inaudible 00:34:55] states that everybody else is focused on. And this again goes more towards Mr. Clegg and Mr. Walker. They’re still targeting paid advertising. We remember in 2016 when we didn’t even have controls, when Russia was paying with rubles for paid advertising. I would’ve thought eight years later we would be better at at least screening the advertising. Again, in the case of YouTube and in the case of Facebook, what are we doing to stop the paid advertising targeting by these adversaries?

Mr. Kent Walker (35:33):

Since Nick took the last one. We have an extensive series of checks and balances in our advertising networks that are designed to identify problematic advertising, particularly around elections, where we effectively have registration. And in the 2016 situation, I remember we did an extensive review of our systems; in fact, less than $4,000 had been spent, by our numbers.

Chairman Warner (36:00):

Respectfully, in January, I wrote the Treasury Department that said both of your companies have still repeatedly allowed Russia [inaudible 00:36:12] to use your ad tools. We will get that specific information to you, and we are going to really need, as soon as possible, the content, the bad actors, how much content they have purchased on both of your sites and frankly others. And we’re going to need it extremely fast, because I think they are getting through in many, many more ways than they had been.

Mr. Kent Walker (36:41):

I certainly appreciate the concern, and we have taken down, as we’ve indicated earlier, something like 11,000 different efforts by Russian-associated entities to post content on YouTube and the like.

Chairman Warner (36:53):

We’re just going to need it as quickly as possible, both in terms of the number of Americans viewing what they think is Fox News or the Washington Post, and the advertisements. We need to make sure again that we inform the public.

Mr. Rubio (37:07):

Thank you. The area I want to focus on is where political speech is involved, and it’s sort of the talk of my own state. So I want to understand what the current policies and practices are as we speak regarding content moderation, specifically in speech. So, just for [inaudible 00:37:27], to stop the spread of misinformation and disinformation, we have built the largest independent fact-checking network of any platform, nearly a hundred partners from around the world to review and rate viral misinformation in more than 60 languages. Stories that they, this platform or this group of people, rate as false are shown lower in feed, and if some page repeatedly creates or shares misinformation, they significantly reduce their distribution and remove their advertising rights.

(37:55)
Let me explain, we’re not talking about the stuff that was up here. That’s fake content. That’s just purely fake content. It’s generated to look like Fox News or the Wall Street Journal or the New York Times. No one’s arguing that. That’s fake, that should be taken down. Those companies want them down; that’s their copyrighted logo. Like we talked about, so you’ve got a group of people that are your fact-checkers from all over the [inaudible 00:38:17] take to a real-world scenario tied to what the CEO brings, that is people on point saying maybe, “I believe that the pandemic began in a lab. I believe there was an accident at a lab and it leaked out.” At one time, it was considered not factual. There was a whole lot of pressure. [inaudible 00:38:41]. Whether that’s true or not, because it wasn’t for them. It is 50% likely. How would something like that, because there are people that are caught up.

(38:54)
I imagine that under the policies described, [inaudible 00:39:00] they’re raising the specter of a potential lab leak. It would run through these fact-checkers from all over the world. They would decide whether it’s true or not, and you could have your page diminished, potentially deplatformed if I write too much about it. So how does this policy deal with that problem, which is a real-world one?

Mr. Nick Clegg (39:21):

Indeed it is. And as I said in my… We all inhabit a world of imperfect, and crucially, I think the pandemic was a very good example of that, information. With the benefit of hindsight, we now understand the epidemiology of the pandemic, which we didn’t at the time. When we were in the middle of the pandemic, the map being rolled out, [inaudible 00:39:44] the trajectory was of this global pandemic. We as an engineering tech firm, of course, we’re not specialists in-

Mr. Rubio (39:53):

Yeah, but I’m not asking what happened. I want to know how your policy today would prevent that from happening. Because if the government is telling you this is a lie, proof that it’s a lie, lock it down, and your fact-checkers say it’s a lie, then my account gets [inaudible 00:40:07].

Mr. Nick Clegg (40:10):

So two things. Firstly, we do continue to rely on these independent fact-checkers. We don’t employ them; they’re not part of Meta. They’re independently vetted by a third-party organization.

Mr. Rubio (40:21):

Who are they?

Mr. Nick Clegg (40:22):

Oh, there’s a variety of organizations which specialize in examining, in what they think is a reliable way, whether something is misleading.

Mr. Rubio (40:34):

Is there a way to know who those vetters are, or a list of someone-

Mr. Nick Clegg (40:40):

Absolutely. We have a full list. Absolutely, and we can provide them to you, and they obviously work in multiple languages, including in the United States. I think there are 11 fact-checkers in the United States, and we can provide you with all the information. Second, and Mark Zuckerberg did indeed explain this in his recent letter to the House Judiciary Committee, we learned our lesson, certainly as far as Meta is concerned, that in the heat of the moment, when governments around the world exert particular pressure on us on particular classes of content, we need to act in ways, and we strongly do, and coordinate [inaudible 00:41:19] bits of content, particularly in the case where people were in effect panicked.

Mr. Rubio (41:32):

Let me ask about a different context: the exact same system. A story at the top appears, and 51 people sign a letter saying, “We used to work in the intelligence community, this is Russian,” and your fact-checkers say, “You’ve got to listen to the experts. They would know.” Does anybody… Does the New York Post get their account taken down again?

Mr. Nick Clegg (41:49):

So to be very clear, we did not take down the account or the content. I think X, they did.

Mr. Rubio (41:58):

[inaudible 00:41:57] true because it’s disinformation, because some guys signed a letter saying that it was, it would lower them in the feed and potentially reduce their distribution. And if they post the story too many times, you may actually lock them out, under this policy.

Mr. Nick Clegg (42:12):

Sorry, in this instance, Senator, that story was demoted… I mean, it was available. Millions of people saw it, but its prominence on our services was temporarily reduced. And we used to do that to allow the fact-checkers, to give them the space and the time, to choose to then examine that content. In this instance, with the Hunter Biden story, they did do so, so after that temporary demotion of a few days, it was at least circulated as normal-

Mr. Rubio (42:41):

Did the fact-checkers reduce or demote the 51 that signed the letter, or the letter they signed, that turned out to be not true?

Mr. Nick Clegg (42:48):

I don’t think they did so. At the-

Mr. Rubio (42:48):

Thank you.

Mr. Heinrich (42:54):

I want to stay on this same topic of the sort of fraudulent news sites that look like something people would recognize from their own news preferences. Do each of your companies have a policy of removal once you become aware that it is clearly a fraudulent version of a legitimate site?

Mr. Brad Smith (43:18):

I think the answer is yes. And Vice Chairman Rubio, I thought, captured it very well. It actually might not matter whether the topic had anything to do with politics. Those are counterfeit sites which are using the trademarks of Fox News and The Washington Post without their permission and in a way that deceives the public and diminishes the value of those companies. And so yes, and I think you’d see pretty universally across the industry terms of use that prohibit that.

Mr. Heinrich (43:50):

It doesn’t seem to take long for those sites to be identified, but they often remain up longer than I think most of us would hope or expect. And then, have you been able to use AI proactively to identify some of those fake news outlets?

Mr. Brad Smith (44:09):

I think increasingly, we are using AI to detect these problems. I think AI is especially good at detecting the use of AI content. That’s one of the things we do. At our end, we see things faster. You always have to in a race, but for example, just this morning, we saw a Russian group put online an AI-generated anti-Israel video, putting in Vice President Harris’s mouth, at a rally, words she never spoke. So I think that is one of the goals for us to keep pursuing, to identify these things faster and then, where appropriate, take action.

Mr. Heinrich (44:49):

Yeah, I’m encouraged, because obviously AI is being used offensively, and we need to be on our game responding with those same tools to be able to identify and appropriately deal with these things at a faster rate. At a hearing of the US House Committee on House Administration last week, New Mexico’s Secretary of State testified that, “Years of false election claims and attempts to discredit our voting systems and processes have led to increased threats and harassment to election workers.” How have you sought to improve your platforms’ ability to detect and remove content that actually threatens or harasses people who are part of the democratic process and are tasked with fairly administering elections?

Mr. Kent Walker (45:46):

I’m happy to take that, and I suspect it is true for all of us; there are two elements of that. One is making sure that we are trying to safeguard our election officials against threats that may be posted online. And I’m confident that all of our companies have policies against incitements to violence, direct threats, bullying, cyber attacks, et cetera. So that kind of material would come down. The second half is helping our election officials be more protected themselves: the use of some of the [inaudible 00:46:18] advanced protection programs, so information is not being hacked or doxxed, et cetera, personal information is not being made public and the like. So between the various companies here, including, I know, our Mandiant group, which has worked with a number of election officials and agencies to make them more cyber-resilient, if you will, so more robust against the threat.

Mr. Heinrich (46:38):

Mr. Clegg?

Mr. Nick Clegg (46:40):

Again, I’m sure this is true of all of those represented here, but we also encourage local election officials to use our platforms to communicate with voters. So we established a system called Voting Alerts, and I think since we established that program in 2020, around 650 million voting alerts have been issued by local and state officials on Facebook’s apps and services, so that voters are properly informed about where and when to vote.

Mr. Heinrich (47:18):

I’m going to give the rest of my time back. Very uncharacteristic of me, but nonetheless.

Ms. Collins (47:25):

I’ll take it.

Chairman Warner (47:26):

Senator Collins?

Ms. Collins (47:29):

Thank you, Mr. Chairman. Mr. Clegg, we’ve received briefings from the intelligence community that indicate that China is not focused on the presidential race, but rather on down-ballot races at the state level, county level, local level. And that concerns me, because officials at those levels are far less likely to receive the kinds of briefings that we receive, or to get information from Homeland Security or the FBI on how to be on alert. China is attempting to build relationships with state and local officials. We see the Sister City programs, we see the Confucius Institutes at educational institutions. So how are your platforms attempting to help safeguard the down-ballot races? The presidential race, I think everybody’s aware of the risk there, but the down-ballot is what really concerns me.

Mr. Nick Clegg (48:55):

And so, I think you’re right to be concerned, and that’s why our vigilance needs to be constant. It can’t just peak at the time of the presidential elections. It’s something where we need to deploy our policies and our enforcement around the world, and around the clock. And you’re also right, Senator, to point out that what we have seen, and my colleagues, we have seen from what we call coordinated inauthentic behavior networks conducted by China, quite specifically targeted at particular communities. So for instance, recently we’ve disabled a dozen Facebook and Instagram accounts which were targeting the Sikh community in the United States. That is one of the reasons why the central signals that we look for aren’t related to the content or even the person, but the behavioral patterns that we see, and the telltale patterns are most especially the use of a network of fake accounts.

(50:02)
And that of course then manifests itself in lots of different ways. It’s targeted at different communities, but the underlying analysis that our teams conduct is about the behavior rather than the individual bit of content. Because, as Vice Chairman Rubio said, sometimes the content can be actually consistent with things that are circulated by ordinary folks in the normal day-to-day course of business.
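
A highly simplified sketch of one such behavioral signal, and not Meta’s actual detection pipeline: flag content pushed by many distinct accounts inside a narrow time window, regardless of what the content says. All thresholds here are hypothetical:

```python
from collections import defaultdict

def coordinated_content(posts, window_secs=300, min_accounts=10):
    """posts: iterable of (account_id, content_hash, unix_timestamp).
    Returns content hashes pushed by many distinct accounts in a short
    burst -- a behavioral pattern, independent of the content itself."""
    by_content = defaultdict(list)
    for account, chash, ts in posts:
        by_content[chash].append((ts, account))
    flagged = []
    for chash, events in by_content.items():
        events.sort()
        start = 0
        for end in range(len(events)):
            # shrink the window until it spans at most window_secs
            while events[end][0] - events[start][0] > window_secs:
                start += 1
            if len({acct for _, acct in events[start:end + 1]}) >= min_accounts:
                flagged.append(chash)
                break
    return flagged
```

The design reflects the point Mr. Clegg makes: the detector never inspects the message, only the coordination pattern around it.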

Ms. Collins (50:26):

Thank you. Mr. Smith, you talked about the need for the American people to be prepared and to be on the alert. Why isn’t part of the answer, so that we’re not getting into suppressing dissenting views or criticism of public officials, for example, to watermark posts to indicate not whether they’re AI-generated, but rather where they originate? Why couldn’t you do an R if it came from Russia? The person who’s looking at the post can make his or her own determination, but they would be on alert that this isn’t Joe down the street who’s posted this; this is someone who’s in Russia.

Mr. Brad Smith (51:23):

I do think that’s a really interesting idea, and it’s one that across the industry people have been giving a lot of thought to. A couple of things. First, I think actually it starts with also picking up on the idea you just described and putting Americans and American organizations in a position to put the credentials in place, in metadata in effect, so it’s clear where their content has come from. We worked, for example, with the Republican National Convention; they used that on more than 4,000 images that were released in Milwaukee, so that it would protect their content from being distorted. I do think one can then go further. An important question, as you raised, is if we find something that is coming from somewhere else, how and when should we identify it? I frankly think the most important thing is that we address content where that kind of protection has been removed, and that’s been the subject of legislation being proposed, including from members of this committee, to protect it from tampering; then we can think about other forms of identification for the public.
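
Continuing the earlier toy manifest sketch, and again not the real C2PA API, the verification side, including the stripped-credentials case Mr. Smith highlights, might look like this:

```python
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # same hypothetical key as the earlier sketch

def check_credentials(content: bytes, manifest) -> str:
    """Classify content as verified, tampered with, or missing credentials."""
    if manifest is None:
        # Credentials absent -- possibly stripped in transit, the case
        # Mr. Smith suggests platforms should treat as a warning sign.
        return "no credentials"
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(manifest.get("signature", ""), expected):
        return "tampered: manifest signature invalid"
    if unsigned["content_sha256"] != hashlib.sha256(content).hexdigest():
        return "tampered: content no longer matches its manifest"
    return "verified"
```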

Ms. Collins (52:35):

Thank you.

Chairman Warner (52:36):

Thank you. Senator Kelly?

Mr. Kelly (52:43):

Thank you, Mr. Chairman. Thank you, all of you, for being here for this very important hearing. I just got back from visiting our allies in the Baltics, who all border Russia, and also Finland. They have been targeted by information attacks at a pretty high level, and they come pretty quickly, and they have put in place ways to try to equip their citizens and their institutions to counter disinformation campaigns. They feel somewhat successful, though it’s a big problem for them. But I do think we can learn something from partners in the Baltics about just exactly how adversaries use media and internet platforms as a key vector for these campaigns that they have against us, and they are increasingly employing tools, we’ve talked about this, bots, generative AI. So it’s my hope that we can also count on the partnership of the American tech industry to aggressively counter these threats.

(53:45)
I want to turn to a specific problem that’s of great concern to me, and as my constituents learn about this, I’m sure it will be to them as well. Behind me, you can see screen captures of web pages designed to look like major American news outlets, Fox News and The Washington Post. But going through the headlines, I went through these the other day, I know I think the chairman showed some very similar ones, so apologies if we’re being a little bit redundant here. But these pages were created by Russian cyber operatives to distribute Russian messages by co-opting the brand of a real news website that Americans trust, both Fox News and The Washington Post. [inaudible 00:54:36] And these are really well done. I mean, it would be hard; unless you were looking specifically at the URL and noticed that something was not exactly right, where there was no .com, or a .pm, or .something else at the end, you wouldn’t always know, and you would think this is the news source.

(54:56)
They’ve also spoofed the official NATO website. Well, and they use these sites to push messages that cast doubt on Russian atrocities that we know are real. They lie about NATO suppressing peaceful protests, controversy where [inaudible 00:55:15] don’t exist. So an additional concern is they have specifically targeted swing-state voters. So my constituents, Arizona and others; they seek to influence the outcome of these elections. This is [inaudible 00:55:32], we’ve got to do something about it. So I’m curious from each of you, and I have about two minutes here, just what are you doing about it, and specifically with these websites? If we were to go and look for them now, have they been taken down, as the Fox News website was, or would we still… Is there a way to see? We’ll start with you, Mr. Walker. If we search on Google and try to find this through a Google search engine for The Washington Post, could we navigate from your website to these websites?

Mr. Kent Walker (56:08):

So, we’re obviously concerned about the larger problem. I haven’t searched these specific sites, but I can tell you we’ve launched what’s called About This Image and About This Result, which tell you the first time we saw an image appear on the internet. So in many cases, disinformation may not be AI-generated, it may be [inaudible 00:56:26]. Most of the disinformation we see coming out of Gaza is not AI-generated; it’s pictures from a different war. So that kind of context is valuable. And then, just quickly to say, if content is AI-generated, the increased ability to watermark it or [inaudible 00:56:45] through the C2PA, a cross-industry group that I mentioned before, will help all of us do a better job identifying and removing this kind of content.

Mr. Kelly (56:51):

But once you find the content and you know it is fake, at that point, do you take action to make sure that your customers cannot navigate to that content?

Mr. Kent Walker (57:07):

The search context is somewhat different than other contexts where we’re hosting information. So let’s take YouTube, which would be our hosted-content example here. If something is demonstrably false and harmful, we remove it, in addition to all the policies; that’s been consistent for many years. We also have a general manipulated media policy, whether it’s AI manipulation or, you may remember the deepfakes that came out some time ago, which were just slowing down videos to make a politician look as though they were intoxicated. We will remove that kind of content, yes.

Mr. Kelly (57:39):

You said if it’s false or harmful. How about if it’s just them co-opting somebody else’s website, like Fox News or the Washington Post?

Mr. Kent Walker (57:47):

I’d go back to Brad’s earlier comments with regard to the notion of trademark infringement and copyright infringement. As we get complaints about that, we will remove that content, yes.

Mr. Kelly (57:56):

All right, thank you.

Mr. Kent Walker (57:56):

Yes, sir.

Chairman Warner (57:57):

I would quickly note, I think most of your companies do a pretty good job on trademark protection. I just feel like Fox News and the Washington Post should have gotten that same level of protection. Frankly, they should be weighing in as well. Senator Cotton.

Mr. Cotton (58:11):

Thank you, gentlemen. Thank you for appearing. I want to bring a little perspective to the topic today. I think this committee’s own report, more than a thousand pages, said that Twitter users alone produced more election-related content in about three hours in 2016 than all Russian agents working together. Russia and China and Iran and North Korea are all doing these things, up to no good. And if you don’t know what they’re doing, it’s probably no good. And there’s lots of things they could do that are very bad to influence politics. Russian intelligence spent millions of dollars in the early 1980s to promote the nuclear freeze movement, which Joe Biden bought hook, line, and sinker. And Russian intelligence under Vladimir Putin has spent millions of dollars to oppose fracking, which Kamala Harris has bought, hook, line, and sinker, trying to ban fracking. And there’s plenty of things they could do in our election infrastructure as well.

(59:05)
They could hack into campaigns and leak their strategy or steal their voter contact information. Even worse, they could hack into county clerks’ offices or Secretary of State offices and voter registration files, or try to manipulate votes. They don’t even have to get into the election machinery. They could turn off the electricity in a major American city on election day. There’s a lot of threats that our adversaries could pose to us and our elections. I just don’t think that memes and YouTube videos are among the top, especially when we have an example of election interference here in America that was so egregious: some of your companies’ efforts, in collusion with Joe Biden’s campaign, led by the current Secretary of State, to suppress the factual reporting about Hunter Biden’s laptop. Mr. Clegg, you acknowledged earlier that Facebook demoted that story after it was published by The New York Post. Is that right?

Mr. Nick Clegg (01:00:08):

Correct. But I should clarify, we don’t do that anymore.

Mr. Cotton (01:00:11):

Mr. Zuckerberg has said that you demoted it. He expressed regret. I assume you share that regret with your boss?

Mr. Nick Clegg (01:00:18):

Yes.

Mr. Cotton (01:00:20):

And you share what he said, that you’re not going to do it anymore, right?

Mr. Nick Clegg (01:00:22):

Correct. So that demotion does not take place today.

Mr. Cotton (01:00:25):

Mr. Walker, what about Google? Did Google suppress results about the Hunter Biden laptop?

Mr. Kent Walker (01:00:30):

We did not, sir. We ran an independent investigation, and the story did not meet our standards for taking any action, so it remained up on our services.

Mr. Cotton (01:00:39):

Okay. Okay. And Twitter, under the old regime, was, as someone said, even more egregious than Facebook or other platforms. And again, this is domestic information operations, if you’d like to say, far more influence on our elections than some memes or YouTube videos or articles Russian intelligence agents or Chinese intelligence agents posted, which no doubt they do. I mean, just look today: The New York Times the other day had a fit that social media was awash, awash they said, in AI-generated memes of Donald Trump saving ducks and geese. I mean, are AI-generated memes of Donald Trump saving ducks and geese really all that dangerous to our election? Mr. Smith, you laughed, for the record. Do you want to answer my question? Are you worried about-

Mr. Brad Smith (01:01:28):

I think that’s over Twitter.

Mr. Cotton (01:01:29):

… ducks and geese, memes of Donald Trump saving them?

Mr. Brad Smith (01:01:32):

When I create a list of the greatest worries for this election, they do not involve ducks or geese.

Mr. Cotton (01:01:37):

I wouldn't think so either; doesn't seem like that to me either. Mr. Walker, Google famously did not autofill results for people searching for the assassination attempt on Donald Trump a few weeks ago. What happened there? Why was that the result on your company's platform?

Mr. Kent Walker (01:01:55):

We've had a long-standing policy of not associating terms of violence

Mr. Walker (01:02:00):

associated with political officials unless they had become a historic event. So "assassination of Abraham Lincoln" would have been allowed. Up until the weeks prior to the assassination attempt, it would've been deeply problematic, I think, to autocomplete "assassination" after a search for Donald Trump. Those terms are periodically updated. The assassination attempt occurred in between one of those periodic updates. It has subsequently been updated and now autocompletes appropriately.
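
To make the mechanism concrete, here is a minimal sketch of the kind of periodically refreshed denylist check Mr. Walker describes; the term lists, refresh interval, and function names are assumptions for illustration, not Google's actual system.

```python
import time

# Illustrative sketch, not Google's system: suppress autocomplete suggestions
# that pair a violence-related term with a living political figure, using
# lists that are refreshed only on a periodic schedule.
REFRESH_INTERVAL_SECS = 7 * 24 * 3600   # hypothetical weekly update cycle
violence_terms = {"assassination", "assassination attempt", "shooting"}
living_figures = {"donald trump", "kamala harris"}   # hypothetical entries
last_refresh = time.time()

def refresh_if_due(now: float) -> None:
    """Fetch new term/figure lists only when the update window has elapsed.
    Events that occur between refreshes are missed -- the gap described in
    the testimony above."""
    global last_refresh
    if now - last_refresh >= REFRESH_INTERVAL_SECS:
        # ...pull updated lists from a policy service here...
        last_refresh = now

def allow_suggestion(suggestion: str) -> bool:
    """Block a suggestion only when it pairs a violence term with a figure."""
    s = suggestion.lower()
    has_violence = any(term in s for term in violence_terms)
    has_figure = any(name in s for name in living_figures)
    return not (has_violence and has_figure)

# Historical pairings pass because Lincoln is not on the living-figures list.
assert allow_suggestion("assassination of abraham lincoln")
assert not allow_suggestion("donald trump assassination attempt")
```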

Mr. Cotton (01:02:29):

Let me ask both your companies, this is primarily for Mr. Walker, for Google, and Mr. Clegg, for Facebook. Gavin Newsom just signed into law, three laws actually, in California that will criminalize the use of so-called deepfakes before an election. How do you plan to comply with that law? Are you going to go arrest people who are making AI-generated memes of Donald Trump running away with ducks and geese?

Mr. Walker (01:02:57):

Senator, it’s early. Just to understand, we are just receiving the laws, which were signed very recently, and we’re looking at how we might best comply with a number of laws. There are quite a few.

Mr. Cotton (01:03:09):

Mr. Clegg, a lot of ducks and geese memes on your website. Mr. Funny is laughing again. It's fine. People laugh at them. Satire and political humor are as old as our country. It's fine. I'm glad that you're not going to do again what you did in 2020, but I don't envy either of your companies dealing with what Gavin Newsom has done in California, or what many in this Congress propose to do, criminalizing and censoring core political speech. Mr. Clegg, do you have any idea how you're going to comply with California's law?

Mr. Clegg (01:03:39):

Well, it's only just been signed, so again, we would probably need to look at it more closely. But I take, Senator, your central point that there is a lot of playful and innocent and innocuous use of AI, and there's duplicitous and egregious and dangerous use of AI. That is exactly why, as I think-

Mr. Cotton (01:03:57):

And I have to ask, in my time, who's going to draw that line? Who's going to decide what's playful, innocuous, and harmless and what is misinformation and disinformation? I've got to say, some of the people you go to, like PolitiFact and the Southern Poverty Law Center, don't strike me as quite neutral sources, and I don't think you're going to find neutral sources in the government of California or this administration either.

Mr. Warner (01:04:20):

And I would just add, when we look at the California law, I'd like your analysis as well of the laws on deepfakes used in political advertising that were passed and signed into law in Alabama, Texas, and Florida. Senator King.

Mr. King (01:04:34):

Thank you, Mr. Chairman. I think the bright line here should be foreign, the word foreign. As has been pointed out, as the ranking member, the vice chair, pointed out in his opening remarks, it becomes very problematic when you're talking about domestic content that's then being amplified by foreign actors, but that should be the line. I mean, I don't want you all, or the government certainly, to be the arbiters of truth, because one man's truth is another man's propaganda. I mean, I think we should have that kind of flexibility.

(01:05:10)
It seems to me what's happening here is foreign governments are engaged in political judo, where they're using our own strength against us. Our strength is our democracy and our regular elections, plus freedom of expression. And that's what's being taken advantage of in order to try to manipulate our fundamental way of making decisions, which is through elections. I think that's… But the issue should always be, is there a nexus, is there a foreign influence in this matter? And I guess the question is, in this day and age, can you determine that? Given that we've got very sophisticated adversaries in St. Petersburg or Moscow or wherever, or in Tehran, who may be coming in via a server in Georgia, can you technically tell when something is of foreign origin? Mr. Walker?

Mr. Smith (01:06:13):

I would say the answer is not always, but often, yes. And I do think that there are some threats we should take seriously, and we should start with the word foreign. But if you want to see the risks that we should be thinking about, I will go back to Slovakia. Their parliamentary election was last year, September 30th. Two days before, on September 28th, a Russian group released a deepfake audio. It purported to be an audio of a conversation between a mainstream journalist and the leader of the pro-European Union political party, one of the two largest political parties in that race. That reflected what we see from Russia: one, a good content creation strategy.

(01:07:05)
The second [inaudible 01:07:06] that same day is they released it on Telegram, which tends to be the Russians' favorite distribution channel to get things going. They did it from what was the private account of the spouse of a major official in Slovakia. The third thing they did is they pursued an amplification strategy, where one of the most senior officials in the Russian government, as they tend to, came out the very same day and accused the United States of doing what that audio recording purported to capture in Slovakia, namely a plot to buy votes and steal the election.

Mr. King (01:07:51):

In other words, it was a very sophisticated operation.

Mr. Smith (01:07:54):

It is, and this is what we need to remember, you can’t have a great play without a great playwright. The Russian government is very capable, very sophisticated, not just in technology, but in social science. Yes.

Mr. King (01:08:07):

Very determined, are they not?

Mr. Smith (01:08:09):

Absolutely. And that’s what we… There are many things, it’s right, I think, to focus on the things that should unite us and say, let’s not worry about what we’re seeing over in one direction, but let’s not close our eyes to what we could see in the other as well.

Mr. King (01:08:25):

Well, I think the question… Number one, it's happening. You've all testified to that. It's happening, and it's not a minor project on the part of Iran, and to some extent China. So the question is, then, what do we do? And I know Senator Collins asked about watermarking, some kind of way to determine the source of the information, attribution. But I had a formative experience about eight or nine years ago in this building, before the 2016 election, meeting with a group of politicians, political figures from Estonia, who were under bombardment all the time from Russian propaganda and Russian disinformation. I said, "How do you deal with it? You can't cut off the internet, cut off your TV stations." Their interesting answer was, "We deal with it by educating the public that it's happening, and they say, 'Oh hell, it's just the Russians again.'" And that's why I think what we're doing here today is so important and your testimony is so important, so the American people can be alerted to the fact that they may be being misled and they should check. Is that a reasonable approach?

Mr. Smith (01:09:33):

Absolutely. And what I hope we can take away from this is, first of all, there's something very important in what Senator Cotton said: not everything's a threat. And as Senator Rubio said, we should always honor the rights of our fellow citizens to say what's on their mind. But Senator Kelly captured something that's critical, and you're pointing to the same: when you go to Estonia, when you go to Finland, when you go to Sweden, when you meet people who have lived their entire lives in the shadow of Russia, they are on the alert. They know, as we've discovered, that not everything on the internet is true. They just remember that when they read something that's new.

Mr. King (01:10:15):

My wife and I have a sign in our kitchen that says, "The difficulty with quotes on the internet is determining their authenticity. Abraham Lincoln." Mr. Clegg, you were going to respond.

Mr. Walker (01:10:30):

Yes. Thank you. Just very briefly. In addition to those very good points, with which I agree, I do think we are increasingly able to use AI to detect some of these patterns. As we've discussed previously, YouTube has gone from having one view in a hundred violating our policies to one view in a thousand. And that's in large part because we are using AI to detect some of the patterns of mis- and disinformation that are out there and take action against them.

Mr. King (01:10:54):

You can either take action or alert your customers that this has been manipulated in some way.

Mr. Walker (01:11:01):

Agreed. And also provide high-quality, authoritative information. Bottom line, the best remedy for bad information is good information. So the more we promote accurate information about when the polls are going to be open, people's eligibility to vote, whatever else it might be, that's an important part of the democratic process.

Mr. King (01:11:19):

Thank you. Thank you, Mr. Chairman.

Mr. Warner (01:11:21):

I agree on the [inaudible 01:11:23] around memes, but I will recall that this committee exposed in 2016 the effort by the Russians to incite violence between a pro-Muslim group in Texas and a kind of pro-Texas-separatist group, which, before law enforcement stepped in, could have resulted in harm to Americans. And echoing [inaudible 01:11:41], I don't know, when these slides were up, how a normal American consumer, even a relatively sophisticated one, would have the expertise to read the URL closely when everything else looks so much like Fox or the Washington Post. Senator Cornyn.
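
For a sense of what "reading the URL closely" would take, here is a hedged sketch of one way a platform might flag lookalike news domains automatically, a tactic the Doppelganger operation used; the trusted-domain list and distance threshold are assumptions, not any company's stated method.

```python
# Illustrative sketch only: flag URLs whose registrable domain closely
# resembles, but does not match, a known news brand.
from urllib.parse import urlparse

TRUSTED = {"foxnews.com", "washingtonpost.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete
                           cur[j - 1] + 1,               # insert
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur
    return prev[-1]

def looks_spoofed(url: str, max_dist: int = 3) -> bool:
    """True if the host is near-identical to a trusted brand but not equal."""
    host = (urlparse(url).hostname or "").removeprefix("www.")
    if host in TRUSTED:
        return False
    return any(edit_distance(host, t) <= max_dist for t in TRUSTED)

print(looks_spoofed("https://washingtonpost.pm/article"))  # True (lookalike TLD)
print(looks_spoofed("https://www.washingtonpost.com/a"))   # False (genuine)
```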

Mr. Cornyn (01:11:57):

I would like to ask each of you to respond to this question. Do you believe that [inaudible 01:12:02] should be required to divest TikTok in order for TikTok to operate in the United States? Mr. Walker?

Mr. Walker (01:12:13):

Sir, I would defer to Congress. I know you have legislated on this very question and that there was a-

Mr. Cornyn (01:12:18):

So for social media companies owned by foreign governments that are adversaries of the United States, that are known to use information warfare against the United States, do you believe they should be able to operate freely in the United States?

Mr. Walker (01:12:36):

As a technology company, our area of expertise is making sure that they are not distributing malware. We have found situations where such companies were distributing malware, at which point we removed them from our services. But on the broader question of accessibility, I think that’s a question for Congress.

Mr. Cornyn (01:12:52):

I’ll put you down as undecided. Mr. Smith?

Mr. Smith (01:12:58):

You can put me down as: I think you all have already decided. The Congress has passed a law, the President has signed it, the courts will adjudicate it, but assuming it's upheld, then clearly it needs to be followed. And I'm not going to try to substitute my judgment for the judgment you all have already brought to bear.

Mr. Cornyn (01:13:14):

Mr. Clegg?

Mr. Clegg (01:13:15):

In addition to that, I would just point out that there isn't [inaudible 01:13:19] globally. Our services, for instance, are not available to people in China. So Chinese social media apps are available here; American social media apps are not available in China. It's been that state of affairs for some time.

Mr. Cornyn (01:13:35):

What I’m looking for is some guiding principles here and Mr. Clegg, it sounds like you think reciprocity should be perhaps one of those principles.

Mr. Clegg (01:13:46):

I think the First Amendment principle of voice for the maximum number of people for the maximum amount of time, wherever they reside around the world, is a good principle.

Mr. Cornyn (01:13:54):

Well, the problem I think we're having is trying to figure out what the appropriate framework is for thinking about what you all do day in and day out, because it has presented a bunch of novel, difficult questions. But before social media companies existed, it seems to me we had doctrines, laws that governed the way we dealt with the subject matter we're talking about here today. Of course, what's so different today is that you are private entities. So presumably the Constitution, the First Amendment, can't be directly applied, and the Supreme Court is wrestling with how to figure out the right way to view social media companies. You have your terms of use, which strike me as a pretty powerful tool to be able to regulate what's on your site, but there are also legitimate concerns about censorship of views. And of course, Mr. Clegg, you talked a little bit about Mr. Zuckerberg's letter and the fact he regrets being influenced by and cooperating with the federal government.

(01:15:10)
And then we had regulations that usually help us in this area or as a [inaudible 01:15:18]. So I'm wondering, is there anything about the way we operated, the legal framework we operated under before your companies existed, that should inform the way we view your operations today? It strikes me that we are often dealing with adversaries that view this information warfare as a legitimate tool, and obviously the Russians and their active measures campaigns existed before your companies did. But we're an open society and we believe in freedom of exchange and free speech. Is there anything about the way we regulated, the framework under which we understood that newspapers, radio, movies, and other means of communication were handled pre-social media, that should guide us here? Or are we just trying to make this up from scratch?

Mr. Smith (01:16:37):

The one thing I would say, without getting into your very important question about the history of regulation of communications in the country, and one could have, I'm sure, a vibrant debate about Section 230 and the like, is this. It's easy to spend all our time on the issues where we disagree. I think the most important thing is that we identify where we actually do agree, across the political aisle and across the industry, because if we can act based on common consensus to address the foreign adversaries, emphasizing again that word foreign, and nation states, we could… That's the most important thing I think we need to do this year, and I think that can build the foundation for the future, and then we'll deal with the rest, your very important question among them.

Mr. Cornyn (01:17:32):

Time’s up.

Mr. Warner (01:17:34):

And again, I want to commend Senator Cornyn for raising this, because we did actually do that on the question about CCP control of a platform that candidly is even more popular at this point than your platforms, and 80% of the Congress, both parties, said that's not in our national security interest. I appreciate you raising it. Senator Bennet.

Mr. Bennet (01:17:55):

Thank you, Mr. Chairman. I appreciate your having this hearing, and I appreciate you coming to testify. Very grateful for that. I think what we are struggling with a little bit, in terms of answering the question Senator Cornyn just posed, is the sheer scale of the enterprises that you represent; that presents something new to us. And as I sit here listening to us have this conversation, I'm thinking about the people that are going to be sitting in your chairs 30 years from now, and the people that are going to be sitting in our chairs 30 years from now, and what are the incentives that are leading us to have the conversation we're having right now, and whether the answers that we're giving in this minute, for all the right reasons, are the ones we would have wished for 30 years into the future. I really wish, on behalf of the American people, that the American people had had a negotiation with Mark Zuckerberg, just to pick him as an example, around our privacy and around our data and around our economics.

(01:19:11)
I don't believe we have had that negotiation. I don't think we have had it with any of these social media platforms, different, Mr. Smith, than your company, about our privacy, our data, our economics, the way we want our children's bedrooms invaded or not invaded. And for better or for worse, they're looking to us to try to begin to have that conversation. So first, we haven't had it, and here we sit, having to deal with the very, very severe consequences across our society. I say that partly as a capitalist, but also as a former school superintendent who [inaudible 01:19:54] of mental health on our kids, and as members of the intelligence committee who are trying to protect the country from invasion of our democracy across your social media platforms and tech platforms. When I read your CapEx numbers, it staggers my mind. I can't even get my head around the idea that you're going to spend $170 billion over 18 months on AI investments.

(01:20:22)
I mean, that annual expenditure for your three companies is more than we had for roads and bridges in the first infrastructure bill we passed since Eisenhower was president, and for all the telecom and broadband infrastructure across the entire United States of America. Those things together are dwarfed by your annual CapEx expenditure on AI. And I feel like we're being asked to just sort of hope for the best. I do think it's [inaudible 01:21:00] American capitalism that you have those resources to invest in the future, but you'd better be making the right decisions. And part of that, I think, is a question of whether you've really made the commitment on the front end to safeguard America's democracy, to make sure our elections are protected, to not say that it's up to our citizens to try to figure it out in the hailstorm of propaganda that has almost been perfected by adversaries and every day is being used by them to divide one American from the next, from the next, from the next, because they see that division as a potential benefit to them and a huge detriment to us.

(01:21:54)
How much money are you investing to make sure that you are protecting our elections? Is that your responsibility, or is this just an approach that says let a thousand flowers bloom? I am a strong believer in the First Amendment, but I don't think there's anything about the First Amendment that obviates your need to be able to say to the American people, "We believe we have a responsibility to you because, among other things, we are creatures of this unique society and this unique democracy, and we have an obligation here." So I don't know if anybody would like to respond. Mr. Clegg.

Mr. Clegg (01:22:40):

Yeah.

Mr. Bennet (01:22:42):

Yeah, please.

Mr. Clegg (01:22:43):

So Senator, in answer to your specific question, we have around 40,000 people working on security and integrity of our services. In fact, that number is slightly up from what it was back-

Mr. Bennet (01:22:55):

I am deeply, deeply skeptical of the numbers, because the numbers don't tell you what the investment really is, and we know they go up and we know they go down. And Mr. Walker said earlier, maybe the AI tools themselves are better. And I don't doubt that; that may be true. So I'm more interested in what the total capital expenditure is.

Mr. Clegg (01:23:14):

Capital expenditure is about $20 billion over the last several years, around $5 billion in the last year. And to your wider point, Senator, I strongly agree with you that the scale we're dealing with, whether from the tech companies' point of view or from legislatures and governments around the world, is clearly unprecedented, because of the network effects created by the internet. On our surfaces alone, you have, what, a hundred billion messages around the world on WhatsApp every day. You've got now, I think, about three and a half billion reshares of short-form videos, reels, every single day. And so cooperation between companies at this table, and indeed companies that are not represented at this table, is crucial to deal with the scale of all of that. I would also suggest that cooperation between different jurisdictions in the democratic world globally is important, particularly between the United States, Europe, India, and so on, because I think one of the greatest risks is fragmentation of different regulatory approaches around the world for technologies which by definition are borderless.

Mr. Smith (01:24:22):

I would just go-

Mr. Warner (01:24:25):

Quickly, we’ve got a couple more minutes.

Mr. Smith (01:24:28):

I would say, first of all, I believe that the American tech sector is an engine of economic growth and frankly is the envy of the world, and we should at least remember that. Number two, we do have a very high responsibility to protect elections, to think about the impact on others, on our societal responsibility in so many areas. Number three, if there is a foundational principle for this country, I believe it's straightforward: no one should be above the law. No individual, no company, no leader, no government. But then number four, look, let's recognize the obvious. We need laws. And I would just put it slightly differently: we haven't had a shortage of debate in this country about an issue like privacy. We've had a shortage of decision making. So instead of always worrying about where we can't reach agreement, why don't we get something done by taking more action, by calling [inaudible 01:25:27], maybe being more supportive, as we could and should be certain days, and helping you all so that this Congress can pass the laws we need. I think that's the recipe that we need for the future.

Mr. Warner (01:25:43):

I can’t, I’ll bite my tongue. Senator?

Mr. Lankford (01:25:49):

Sure. And thank you. Thank you for showing up. We invited several more tech companies and they declined, chose not to be here in the national conversation. So I do appreciate you being willing to be here. Let me just outline some of the challenges that we face in this, which become obvious to all of us when we get a chance to be able to look at it. This is not picking on Meta, but it's going to be a side-by-side with TikTok, who's not here. This is just an example, a side-by-side of content delivery from a company. When there was a comparison done of content delivery to individuals that were 35 and younger, Instagram to TikTok, on Uyghur content, it was 11 to one, Instagram. So TikTok hardly delivered it: 11 to one, that if someone was talking about Uyghurs, Instagram was delivering it and TikTok wouldn't. On Tibet, any conversations about Tibet, 41 to one, Instagram to TikTok. TikTok just screened it out.

(01:26:52)
Tiananmen, Tiananmen Square: 80 to one content on Tiananmen Square. This is among Americans, by the way. Hong Kong protests: 180 to one. That seemed to be a conversation that was discussed on Instagram but just didn't show up on TikTok, for whatever reason. Ukraine: 12 to one. And this one was interesting to me: there's 50 times more pro-Palestinian content on TikTok than pro-Israel content. Now, I say that to you to say there's a sense of an outside foreign influence, in this case owned by a foreign entity, trying to be able to deliver content to the United States to affect the national conversation. That's the challenge that we have, because there's not a challenge on what Americans want to be able to talk about. The challenge is a foreign entity reaching into the United States and saying, "Hey, I want to try to influence you by delivering content to your box that may try to sway opinions on this."

(01:27:55)
So two things I would say on this. The first of those is that the concern is not just a TikTok, or a foreign entity like Russia or Iran, trying to be able to put bad content in, misinformation, disinformation; it's also the feeding of the algorithm. This is an area where Americans have got to be able to rebuild trust, and I would say there's a lot of suspicion, because the delivery of what content is actually coming to your feed is an area of skepticism, whether that is in Google search or whether that's whatever they're getting from a social media network. How do we actually set in front of the American people enough transparency that there's a trust that it's neutral in what is delivered, when your task is looking at trying to feed them information they want to see more of? How do we hit that rhythm on it? That'll be important just for Americans, period, in our own dialogue. Anybody want to try that one?

Mr. Clegg (01:28:53):

I'll try, Senator. I think, Senator, you pinpoint a very important issue, which is that algorithms in a sense deal with a practical problem: there's an almost infinite amount of content that you can show people, but of course people have only got a limited amount of time as they're scrolling on their feed. So you have to somehow rank and funnel it. And I believe the way to square the circle, the tension you quite rightly allude to, is giving people confidence that these algorithms are working for them and not against their interests. First, give people real control. So for instance, on our services, you can just turn the algorithm off. You can just have it chronologically delivered instead. You can click on the three dots and you see exactly why you're seeing a post. You can say you don't want to see certain ads; you can prioritize certain content and not other content.

(01:29:41)
I think the user controls are crucial. And secondly, we need to be transparent. We need to be transparent about what the signals are that we use in the algorithms. We publish, alongside our financial results every 12 weeks, for instance, a full transparency report showing how we act on content that violates our policies. We have that audited by EY, so we're not sort of marking our own homework, if I can put it like that. So I think user agency and control, and a maximum amount of transparency from the companies, are the key ingredients here.
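
A minimal sketch of the ranked-versus-chronological control Mr. Clegg describes, with invented scoring fields; real feed ranking uses far richer signals than a single interest prediction.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    timestamp: float            # seconds since epoch
    predicted_interest: float   # assumed model score in [0, 1]
    reasons: list[str] = field(default_factory=list)  # "why am I seeing this"

def rank_feed(posts: list[Post], algorithmic: bool) -> list[Post]:
    """Order the same inventory two ways: personalized or chronological."""
    if not algorithmic:
        # Algorithm "turned off": newest first, no personalization at all.
        return sorted(posts, key=lambda p: p.timestamp, reverse=True)
    # Ranked mode: order by predicted interest, and record a human-readable
    # reason that a "three dots" explanation could surface to the user.
    ranked = sorted(posts, key=lambda p: p.predicted_interest, reverse=True)
    for p in ranked:
        p.reasons.append(f"predicted interest {p.predicted_interest:.2f}")
    return ranked

feed = [Post("alice", 100.0, 0.2), Post("bob", 50.0, 0.9)]
print([p.author for p in rank_feed(feed, algorithmic=False)])  # ['alice', 'bob']
print([p.author for p in rank_feed(feed, algorithmic=True)])   # ['bob', 'alice']
```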

Mr. Lankford (01:30:17):

Mr. Walker?

Mr. Walker (01:30:18):

Just to follow up on that, we take seriously the point about maintaining and building trust in services. So some of the ways we do that are anchoring our results throughout the United States, in rural and urban areas, 49 states at last count. That's the ground truth for many of our services. But beyond that, we do things like, for example, on YouTube, not just promoting the most popular videos, but the videos that users have found the most valuable. We will survey our users the day after: "Did you have a good experience on the service? Did you find this a valuable use of your time?" And we make sure we are consistently and clearly and transparently enforcing our policies, which we also publish. It's a responsibility we take very seriously.

Mr. Lankford (01:31:03):

It is, and it's something that's incredibly important. If I could just make one quick comment as well, Mr. Smith, on a comment that you made earlier, that Iran is fighting against [inaudible 01:31:17]. We see the noise of this and awareness of it. I do think it's important to have this conversation, to be able to make Americans well aware that not everything that they see is accurate or correct, and that that is very deliberate. But one of the challenges that we have, that we've got to figure out both as a committee and with you, is attribution: when something goes up, how to be able to designate it.

(01:31:43)
Here's where it originated. Because by the time they hear it 50 times through different places, people don't know where it originated anymore. So there's one challenge of taking off content, Russian content, Iranian content, that really means to attack and to disturb Americans in whatever way that may be. But another one is to be able to make sure that when it gets out there, people are well aware of it. We can't tell the story of this disinformation, misinformation unless we get fast attribution on that. And that's something we've got to be able to work out.

Mr. Warner (01:32:15):

And again, I've got critiques of all three of these companies and I'll come back to some of those. But on this one, they have been more forward-leaning, because if they don't share that, by the time the IC or law enforcement picks it up… Senator Ossoff?

Mr. Ossoff (01:32:31):

Thank you, Mr. Chairman, and thank you all for joining us. On that point of attribution and identification of foreign covert influence, Mr. Walker, give us a sense of your independent capacity, absent case-by-case warning or notification from the US government, to identify content on your platforms that is foreign covert influence.

Mr. Walker (01:32:58):

It is challenging, as was talked

Mr. Kent Walker (01:33:00):

… talked about earlier, how Russia has moved beyond paying for things in rubles and only working between 9:00 and 5:00 Moscow time. So they're increasingly making it more difficult to identify things. That said, we have more than 500 analysts and researchers working on our Mandiant team, Google Threat Intelligence. We're tracking between 270 and 300 different foreign state actor cyberattack groups at any given point: tracking activities, metadata, et cetera; feeding that through our services; sharing that with the security teams that are represented here and elsewhere in the industry; and also working with the FBI's [inaudible 01:33:36] task force-

Mr. Ossoff (01:33:36):

Let me put it this way: Do you think you’re mostly across it and playing whack-a-mole or do you think you fundamentally lack the ability to know how much you don’t know?

Mr. Kent Walker (01:33:47):

I think the humble and probably accurate statement would be the latter because the adversaries are always moving forward and it is a constant cat and mouse game.
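
For context on what tracking those groups and sharing indicators with industry peers and the FBI task force can look like in practice, here is an illustrative record; the field names and group alias are invented, and real programs use richer standards such as STIX 2.1.

```python
# Illustrative only: a minimal indicator record of the kind security teams
# exchange when tracking state-backed groups.
import json
from dataclasses import dataclass, asdict

@dataclass
class ThreatIndicator:
    actor: str          # tracked group, e.g. an internal alias
    kind: str           # "domain", "ip", "account", ...
    value: str
    confidence: float   # analyst confidence in attribution, 0-1
    first_seen: str     # ISO 8601 date

def to_sharing_payload(indicators: list[ThreatIndicator]) -> str:
    """Serialize indicators for hand-off to peer platforms or a government
    task force; plain JSON keeps the example dependency-free."""
    return json.dumps([asdict(i) for i in indicators], indent=2)

batch = [ThreatIndicator("TRACKED-GROUP-042", "domain",
                         "example-spoofed-outlet.invalid", 0.85, "2024-09-01")]
print(to_sharing_payload(batch))
```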

Mr. Ossoff (01:33:56):

And when you use… You mentioned earlier using machine learning or algorithmic tools to try to identify it. Is that on the basis of network activity and posting tactics, as opposed to content, where there's a risk of collateral damage? You might suppress bona fide American speech-

Mr. Kent Walker (01:34:12):

Yes.

Mr. Ossoff (01:34:12):

… because oftentimes what the foreign actors are finding resembles perhaps extreme or polarizing speech that’s happening organically in the country?

Mr. Kent Walker (01:34:22):

Yeah. It's a deep and important question, and the answer differs to some degree across different platforms, because a pure social network, as Mr. Clegg was referring to, will have more behavioral information. We may have more content-related or metadata-style information. We do try and share across the different platforms where we can, but inherently there is some sort of assessment of the nature of the content. We talked a little bit about provenance in AI, or metadata and AI; that's going to be a component of it. Network activity is a component of it, and then behavioral signals will also be a component of it.
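
To make Mr. Walker's point concrete, here is a hedged sketch of combining network, behavioral, content, and provenance signals into a single triage score; the features, weights, and threshold are assumptions, not any platform's real model.

```python
# Hedged illustration only: weight per-channel risk signals into one score
# for triaging possible coordinated inauthentic behavior.
ASSUMED_WEIGHTS = {
    "network":    0.35,  # shared infrastructure, synchronized posting times
    "behavioral": 0.30,  # burst activity, identical reshare patterns
    "content":    0.20,  # near-duplicate text across many accounts
    "provenance": 0.15,  # stripped or missing media metadata
}

def influence_op_score(signals: dict[str, float]) -> float:
    """Weighted sum of per-channel scores, each in [0, 1]."""
    return sum(ASSUMED_WEIGHTS[k] * signals.get(k, 0.0) for k in ASSUMED_WEIGHTS)

def triage(signals: dict[str, float], review_at: float = 0.5) -> str:
    """Route to human review rather than auto-removal: leaning on behavior
    and network signals, not viewpoint, limits collateral damage to
    authentic domestic speech."""
    return "human review" if influence_op_score(signals) >= review_at else "monitor"

print(triage({"network": 0.9, "behavioral": 0.8, "content": 0.4, "provenance": 0.6}))
# -> "human review" (score 0.725)
```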

Mr. Ossoff (01:34:56):

Okay. In addition to attribution, let's talk about authentication. Mr. Smith, you mentioned the Slovakian example, I believe. Let's game it out. Right? I think we need to be able to discuss in the open how this might unfold in the United States and who bears responsibility for handling it. There'd be some very compelling, seemingly authentic deepfake audio clip, which is in fact fake and defamatory, implicating a candidate for office in the United States in the hours or days or weeks before an election. How confident are you that either you or another private sector actor or somebody else has the capacity to identify it as fake? Particularly where we can't rely on one campaign or the other necessarily to, in good faith, acknowledge that something which is useful to them, because it deliberately defames and mischaracterizes the statements or conduct of their political opponent, isn't real?

Mr. Smith (01:35:57):

Well, I'd say first, I think Kent had a word of wisdom in saying we always have to act with a sense of humility, and hence I think we should require of ourselves an extraordinarily high level of confidence, approaching certainty, before we take action. Having said that, I do think, especially given our ability to use AI to identify the creation of a fake, and just the good old human judgment that comes from crowdsourcing, especially for video, we can identify a great deal. I then think it translates into another part of your question: great, what do we do about it? And there will be days, or it could be hours, when the most important thing we'll need to do is alert the public so that there is a well-informed conversation. But I also think this points, more broadly, to what is a systemic strategy to try to address the problem that we're worried about here? And that's that we need to focus on well-

Mr. Ossoff (01:36:58):

Well, look, because time is short, let me try this question and ask it of each of you. What will you do? What is your policy if, in that critical time period before an election, there's deepfake content attacking a candidate for office which can be demonstrated to be inauthentic but cannot be decisively attributed to a foreign actor? How will you handle it?

Mr. Clegg (01:37:23):

We would label it.

Mr. Smith (01:37:25):

Yeah.

Mr. Clegg (01:37:25):

We would label it so that the users would see that the veracity and the truth of it is under real question. So we would label it.

Mr. Ossoff (01:37:32):

What about how it’s handled by the algorithm and its amplification or suppression?

Mr. Clegg (01:37:36):

We would avail ourselves of the ability to demote the circulation of it.

Mr. Ossoff (01:37:41):

Mr. Smith?

Mr. Smith (01:37:43):

Yeah. We don’t have the same issue in terms of a consumer platform, but I think that the notification to the public, the labeling, I do think that’s the essence of what we all need to be prepared to do very quickly.

Mr. Kent Walker (01:37:55):

And I would add to that, that we would notify the Foreign Influence Task Force so that there was government awareness of the situation.

Mr. Ossoff (01:38:03):

Thank you.

Chairman Warner (01:38:05):

Thank you, gentlemen. I've got a few more comments. I guess where I start is, I was there with all three of you in Munich when companies like TikTok and X signed on to that agreement. Again, I am amazed and disappointed, particularly at X's failure to participate, or failure to in any way, it appears, adhere to that document. But if what you just said is… I want to make sure we didn't get off just on the Fox and Washington Post examples, but just Publication Forward, another example. If we've got a watermarking system, the fact is this is content that didn't originate with you but was placed on your platform, and these are not watermarked. I'm not sure there's a way that anyone that's a normal consumer, because you've got a byline, you've got authentic ads on the other side, is going to find that. And… Again, since, Mr. Clegg, they ended up on yours, I'm going to… You know. You want to protect your brand, these are brand clients. Why didn't we catch this?

Mr. Clegg (01:39:19):

So I think the key challenge here is to disrupt and remove the underlying networks of fake accounts that generate this content.

Chairman Warner (01:39:27):

And we appreciate what you did yesterday.

Mr. Clegg (01:39:29):

That's the only foolproof way that we can deal with this, because otherwise, as you quite rightly say, Senator, we're just playing whack-a-mole with individual pieces of content. The companies at this table, and other companies besides, have, I think, made real material progress since we assembled together in Munich, for instance to agree on interoperable standards of not only visible watermarking but also so-called metadata and invisible watermarking, so that as we, for instance as a social media platform, ingest content from elsewhere, we can then detect those invisible signals and alert our users to them. But, of course, bad actors… In this case, foreign actors, Russian networks, are not going to introduce those-

Chairman Warner (01:40:13):

Right. They’re not going to put the watermark on.

Mr. Clegg (01:40:14):

Correct. Which is why, for us, the overriding objective is always disrupt the wider networks-

Chairman Warner (01:40:21):

But again, at the end of the day, what I don't understand… And whether this was on Facebook or appeared on Google, or, I'm sorry, on YouTube, or appeared on X, the URL is the distinguishing characteristic. A consumer's not going to get that. Should that be simply the government's responsibility to spot? Don't we need you leaning in on that issue?

Mr. Clegg (01:40:45):

Yeah. Yes, of course. Absolutely.
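
As a rough illustration of the ingest-time check Mr. Clegg describes, here is a sketch of labeling uploads from provenance metadata; the metadata keys are placeholders rather than the real C2PA schema, and, as the testimony notes, absence of a credential proves nothing.

```python
# Sketch of provenance checking at upload time, loosely in the spirit of
# C2PA-style content credentials discussed by the Munich accord signatories.
from dataclasses import dataclass

@dataclass
class Upload:
    media_bytes: bytes
    metadata: dict

def provenance_label(upload: Upload) -> str | None:
    """Return a user-facing label, or None when nothing can be concluded.
    Absence of a credential proves nothing: adversaries simply omit it,
    which is why network-level disruption remains the primary defense."""
    cred = upload.metadata.get("content_credential")  # hypothetical key
    if cred and cred.get("generator_class") == "ai":
        return "Made with AI"
    if cred:
        return f"Captured with {cred.get('device', 'unknown device')}"
    return None  # unlabeled, not verified authentic

print(provenance_label(Upload(b"...", {"content_credential": {"generator_class": "ai"}})))
```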

Chairman Warner (01:40:47):

So one of the things… Because we keep coming back. We're 49, 48 days away… I'm going to hit you, Mr. Smith, as well, but let me start with Mr. Walker and Mr. Clegg. I need to know… Starting with these kinds… And we will share all of the ones that have come out of the Justice Department report: how many Russian-manipulated images that are completely false, that sow dissension, that undermine campaigns, how many Americans have seen those? Because clearly your whole metrics model is based on how many eyeballs you get. We've got to have that information.

(01:41:33)
I also believe that there are a series of ads, and we will share those again with the companies in more detail, that are getting through the protections at this point. We need to know how many of those ads. Because my concern is when people either undermine this and say, "This is only memes," or, "This isn't a serious issue"… Again, Americans have the right to say anything, no matter how out there it is. But I echo what Senator Cornyn said: the notion, even around reciprocity, the idea that Russia or China would allow this kind of manipulation on their social media, is beyond the pale. Of course they wouldn't.

(01:42:16)
So we need that, because the one thing we do know, I think most all of us would agree, is that in the next 49, 48 days, it's only going to get worse. And having that data now, not to embarrass anyone about what happened, at least on Facebook, but to say, "Hey, X millions of Americans saw this kind of fake content. Just beware." Because chances are, no matter what we do in the last 48, we're not going to stop all of this coming down. But that measure would help identify it. I also think on the ads… I know it's gotten better. Mr. Walker, you mentioned the fact that we don't take the payment… You don't take payment in rubles anymore from 9:00 to 5:00 Moscow time, but there's still a ton of this getting through, and we need better data at this point. So I'll expect that very shortly. And if you still have colleagues or friends at X, I sure as heck would invite them to actually be part of the solution as opposed to simply, sometimes, being part of exacerbating the problem. And we have those who don't play: I mean, X, TikTok, this whole set of others, the Discords, the Telegrams, that… They almost, in some cases, pride themselves on giving the proverbial middle finger to governments all around the world, which I think raises huge issues as well. So I'd like to have that information, and as soon as possible. I think Senator Ossoff has got one more, and I have one last closing comment. Senator Ossoff.

Mr. Ossoff (01:44:12):

I'll be brief, Mr. Chairman. Just to note, the committee has made public some of the underlying information, which was contained, I believe, in the charging documents related to this specific recent Russian effort, for which there were 32 domain seizures, Doppelganger, whose planning documents specifically identified "swing states whose voting results impact the outcomes of the election more than other states," and named in particular Georgia as a destination for this covert Russian influence. We talked about attribution; we talked about authentication. I think we've also been discussing the importance of having a society that is resilient, that takes a skeptical, critical approach to information. One of the challenges that we have is that for some avid consumers of political content, anything which seems to affirm one's partisan perspective is deemed credible without that kind of critical scrutiny.

(01:45:27)
For my constituents in Georgia, who have recently been targeted by this foreign covert influence campaign, but for the whole nation, how do you think about your role, and I invite you to comment on the role of public leaders, elected leaders. How do we build that kind of resilience across society, such that we don't just accept anything that seems to affirm our worldview or denounce our enemies, but we recognize that, foreign and domestic, there's a lot of folks telling lies, a lot of folks who have an interest in manipulating us? Mr. Clegg, why don't you take a shot at that?

Mr. Clegg (01:46:02):

Well, the first thing, I think, as has been mentioned by a number of senators already, is we can learn a lot from countries like the Baltics… Moldova, I think, is a country right at the front of the line now, facing a lot of Russian interference. Taiwan, the Taiwanese election recently. All of these countries in different situations were dealing with major adversaries who were trying to interfere in their elections. And public skepticism, voter skepticism, is probably the greatest antidote to a lot of this. And I do think political leadership can play a role in fostering that. The other thing which is crucial, and that's on us, is every time we find networks like that, we need to share that as widely as possible with researchers, with our colleagues in the tech industry, with government. So for instance, we now publish an adversarial threat report every 12 weeks, and have done so for the last few years. And Doppelganger… Senator, you mentioned Doppelganger; it was our threat intelligence team that first identified Doppelganger two years ago.

(01:46:58)
We blocked around 5,000 accounts and pages in three months… in a three-month period this year. We've placed a lot of the signals that we were able to detect on GitHub, so that everybody can look at them, everyone can learn from the experience that we've got. People can then scrutinize it, tell us what we've got right, what we've got wrong. I think that interchange of research and data is crucial to develop public and societal resilience in the long run.

Mr. Ossoff (01:47:24):

And education plays a role as well. Let me ask this final question… Oh, Mr. Walker, go ahead.

Mr. Kent Walker (01:47:28):

Just very briefly, I wanted to give an example, because it's obviously a deep democratic question at a time when trust in institutions of all kinds is going down. But one specific case study that might be helpful: YouTube has launched a program called Hit Pause, a series of short videos designed to remind people not to believe everything they see. If facts are one-sided, if it's an overly emotional pitch, et cetera; there are a series of these sorts of framings that are often used by people pushing false information. And we've found in independent research that the effect of some of those short exposures can actually last months. People become more resistant to fake news.

Mr. Smith (01:48:06):

And I would just underscore that. I think that's an excellent initiative. We've been doing similar work at Microsoft. We really sharpened our ability in the European Union parliamentary elections: we ran a paid media advertising campaign around checking and rechecking before people make up their mind and vote. It reached 150 million people outside the United States. That's what we're bringing to the United States. Certainly, the swing states are critical, and it's not just advertising; it's getting out on drive-time radio, local press, to help bring this message so that the American public has the information it needs.

Mr. Ossoff (01:48:46):

Thank you. Final question. Mr. Clegg, putting aside law and regulation: when you think about, for example, your employer's social obligations, and how you meet those social obligations in the decisions that you make about how content is labeled or how your algorithms treat content, in a society where sharp-elbowed political debate is part of the process and free speech is cherished as a value in addition to being a constitutional right, what is the distinction between the role that your teams are fulfilling in making those calls and the traditional editorial judgment that a traditional news organization would make?

Mr. Clegg (01:49:31):

The fundamental difference is that we don't generate the content. It's user-generated content that circulates on our apps and services. It's almost an inversion of the top-down way in which information is selected and handpicked by editors sitting in editorial suites for newspapers-

Mr. Ossoff (01:49:49):

But you do… But you decide what’s on the page?

Mr. Clegg (01:49:52):

We decide… as I said earlier… We have systems which seek to ensure that every person's feed is, in a sense, unique to them. It reflects their interests; it reflects what they enjoy spending time on. As it happens, the vast majority of people don't use Facebook and Instagram, for instance, to argue about politics. So news and news links now constitute around 3% of the total content on Facebook. Most people use our services for much more playful, innocent purposes: connecting with family and friends, family holidays, family birthdays, bar mitzvahs, barbecues, you name it. And that's reflected in the overwhelming majority of the content on our services.

Mr. Ossoff (01:50:34):

Thank you.

Chairman Warner (01:50:41):

That sounds, to me, like a backhanded description of the protection around Section 230, which I fundamentally disagree with you on. And, again, I don't accept that characterization. That was the same characterization that people initially made about TikTok: what could be so wrong about people sharing cat videos? Although cat videos may take on a political stripe now. Yet now the number's 30%, 40%, 50% of 18-to-24-year-olds who get all of their news, or the vast majority of their news, from TikTok, and I just do not accept the notion that it's all just independent creators. There are algorithms that shift what you see and how much you see. Tech colleagues we've both talked to have said there's never been a more creative, addictive, crack-like tool than TikTok in terms of attracting and keeping users. And, again, per the effort that Senator Cornyn raised, and the vast majority of us here, when ultimately the dials can be turned by the CCP leadership in terms of what content you receive, that is, I believe, a huge national security concern.

(01:51:49)
I also want to just point out, on the independent reviewers: I agree that's good, and I do think there is a role for the academic reviewers. I think we are less safe today because many of those independent academic reviewers have been litigated, bullied, or chased out of the marketplace. That concerns me. I also hope, and I'd like to see not just one-off answers, but something to the committee from all three of you that Senator Rubio and I will then review and share with our colleagues. I think this point about the 48 hours, Brad, that you raised, I think we have put attention on that, but I think the post-election 48 hours is going to be equally important, and I'd like to hear with specifics what kind of surge capacity each of your institutions is going to have as we get closer. Because I'm not going to litigate here whether you've cut back or not your content moderation… And again, not content moderation on a political bent, but content moderation in terms of whether your users actually adhere to your own terms of service.

(01:53:07)
I would simply state for the record, the overwhelming majority of outside observers, I think across political stripes, have said most of you have cut back. But you've made your points. We don't have to re-litigate. So, again, I want to know how many folks have seen these, and, especially in these targeted states, how many ads have gotten through, and what we're going to move forward on. I would also… I bit my tongue earlier before Senator Lankford got on, and I do think… Listen, I've met with each of you and each of your companies, and I think there are places we agree and places we disagree. And I do believe Congress's batting record on social media platforms and on AI is virtually zero in terms of laws being passed, maybe with the exception of TikTok. I would point out that when we had the largest AI dog and pony show, in the emergence of AI, when your CEO colleagues and everybody else were there, and Senator Schumer at that point asked, "How many of you think we need regulation?", everybody raised their hand.

(01:54:16)
And I’ve got a half a dozen bipartisan AI laws… or bills, some of them doing things like how do we avoid those entities that circumvent the watermarks that you and others may put in? But for the most part, and since I get the last word, I’ll leave this without contradiction. Everybody’s for it in theory until you see words on the page. And there’s always a reason why… “Oh, we can’t really do that,” or, “Oh my gosh, if we do that we’re going to slow down innovation,” or, “We do that, China’s going to leap ahead.” And this is not the topic for today, but there’s a lot of parents in America today who would say, “A few guardrails on social media in 2014, we might have a heck of a lot healthier kids in this country in terms of mental health issues.”

(01:55:07)
Not the subject for today, but something that the vast majority of Americans believe, including me. So we've made… As I went through… I won't re-go through my statement. We've made some progress. I do worry that this is not going to lead the news tonight, the fact that Russia and Iran… We don't have the kind of visuals yet; I hope we will get the visuals on what Iran has done. But Russia, using brands that most Americans on either end of the political spectrum respect, Fox News, the Washington Post… People are seeing things that look like that content. It's not. It's coming from Moscow. And anyone who thinks that is appropriate, I just don't think that reflects where we are in this democracy. I'll end with where I started. We have more than enough differences amongst Americans. We have a constitutionally given First Amendment that allows us to say anything, no matter how stupid, unless it is the equivalent of fire in a crowded theater. We can have those debates, but we sure as heck should be concerned about foreign government spy services.

(01:56:22)
This is not some one-off entity. These are foreign spy services who, by definition, want to undermine our country. When they are trying to sway an already very close election, we all should be concerned about that. I appreciate you all being here. I wish more of your colleagues in the sector would be as engaged. I've given you all some to-do work, and my hope is we will have some of that information. Because the clock is ticking, as you've all said, I would hope we could get some preliminary information back even by the middle of next week. Let's see if we can get this as we go into October. With that… I did promise Senator Rubio I wouldn't go off on some other tangent, so I will respect that right now and say we are adjourned.
