(Washington, DC) Today at a House Committee on Science, Space, and Technology Subcommittee on Investigations and Oversight hearing entitled “The Disinformation Blackbox: Researching Social Media Data,” Subcommittee Ranking Member Jay Obernolte emphasized the importance of combatting misinformation while cautioning against irresponsible policies that could promote biased censorship and threaten user privacy, free speech, intellectual property, and the open discourse essential to individual freedoms.
“The need for data transparency and access is inherently in tension with the protection of user privacy. We must endeavor to strike a healthy balance between data transparency on the one hand, and the protection and preservation of individual privacy on the other.”
“While social media platforms have rightfully taken steps to thwart the spread of misinformation, they must also protect against overcorrection that results in censorship. Competing hypotheses about the origins of COVID-19 are compelling examples. For almost a year, the suggestion that COVID-19 could have originated from anything other than natural zoonosis was summarily dismissed as a conspiracy theory by traditional and social media alike. However, data now suggests that other hypotheses are in fact more plausible, and only recently did mainstream and social media platforms cease to censor these theories. The censorship of competing explanations has unquestionably impeded important efforts to investigate the virus’ origins.”
“Good morning. Thank you, Chairman Foster, for convening this hearing. And thanks to our witnesses for appearing before us today.
Misinformation is not a new phenomenon. Disinformation campaigns have been used throughout history to spread state propaganda and influence geopolitics. It is no secret that misinformation can change hearts and minds and influence perceptions. What is new is the impact that modern advances in information and communications technologies have had on the ability of misinformation to spread. It is easier now than ever to reach global audiences, communicate instantaneously with friends and family worldwide, and follow every move of politicians, athletes, and Hollywood stars alike.
The same technologies that facilitate and democratize global access to information also enable data dissemination at a scale and speed that we have never experienced before. This has made it more challenging to determine the accuracy, provenance, and objective truth of the information we consume. More information is presented to individual consumers than ever before and from a myriad of different sources.
The tremendous growth in the popularity of social media platforms over the past decade has resulted in the consumption of more personalized information. The information we read and view online is now tailored to our preferences, biases, and beliefs. We receive an individualized, curated data feed whenever we visit our social media platforms. And it would not be a stretch to say that, at times, we are each drinking from our own information firehose.
In this golden age of information, there are many outstanding questions about how we can assess and ultimately combat the spread of falsehoods, untruths, “fake news,” and misinformation. I’m pleased that each of the witnesses testifying before us today has undertaken research to learn more about how misinformation spreads and what we can do to combat it. This is an admirable goal, and we in Congress must take steps to facilitate further research on this critical topic. However, these efforts cannot be undertaken without ensuring appropriate constraints, limitations, and safeguards are in place.
The need for data transparency and access is inherently in tension with the protection of user privacy. We must endeavor to strike a healthy balance between data transparency on the one hand and the protection and preservation of individual privacy on the other.
We must also respect and protect the intellectual property rights of the platforms whose data researchers seek to access and analyze. Social media and technology platforms have invested significantly in developing their processes, technologies, and algorithms, which, in many ways, distinguish the user experience of one platform from that of the others. Each platform is in a race to do it better, faster, and for less than its competitors. And they rightfully take great pains to police and protect their trade secrets from public disclosure. An appropriate balance must be reached between the intellectual property rights of platforms and the desire to access and analyze their technologies, processes, data, and algorithms for the public benefit. I’m not suggesting that it’s an easy balance to strike, but merely asserting that we must keep this in mind as we move forward.
Undoubtedly, misinformation can have harmful and even deadly real-world consequences. State-sponsored actors from Russia and China have recently engaged, and continue to engage, in coordinated disinformation campaigns. From Russia’s efforts to foment discord and chaos around American elections to China’s efforts to lay blame for COVID-19 at the feet of the American government, state-sponsored disinformation campaigns have real consequences.
While social media platforms have rightfully taken steps to thwart the spread of misinformation, they must also protect against overcorrection that results in censorship. Competing hypotheses about the origins of COVID-19 are compelling examples. For almost a year, the suggestion that COVID-19 could have originated from anything other than natural zoonosis was summarily dismissed as a conspiracy theory by traditional and social media alike. However, data now suggests that different hypotheses are more plausible, and only recently did mainstream and social media platforms cease to censor these theories. The censorship of competing explanations has unquestionably impeded essential efforts to investigate the virus’ origins.
Similarly, we must leave room for parody, satire, and commentary in our social and political discourse. An appropriate balance is necessary to ensure that such commentary is not discouraged or inappropriately dismissed as conspiracy theory or misinformation. Just as misinformation can have real-world consequences, so too can overcorrection that censors public debate about competing ideas.
Combatting misinformation is not an easy endeavor. And the many researchers looking at how misinformation spreads online and how to successfully thwart it should be praised for their efforts. But if we ever expect to truly solve this problem, we must recognize that social media platforms must have a seat at the table. We cannot expect them to go it alone, but without their participation, we likewise cannot hope to stop the spread of harmful misinformation.
We must also determine how to balance our societal goal of minimizing the spread of misinformation with the competing goal of avoiding censorship. This balance is critical because, as history has often shown, empowering our media with the unchecked ability to censor would lead our country down a dark path.
I look forward to learning more from our witnesses about how we can work to combat the spread of misinformation on social media while simultaneously protecting users’ privacy and platforms’ intellectual property, preventing overcorrection, and preserving public discourse.
Thank you, Chairman Foster, for convening this hearing. And thanks again to our witnesses for appearing before us today. I look forward to our discussion.
I yield back the balance of my time.”
Source: Press Release
Date: September 28, 2021
Contact: Heather Vaughan
(202) 680-8577
