In today’s world, information is a valuable currency. While the age of social media has inarguably broadened our horizons and our access to information, society now faces a massive crisis of counterfeiting: a flood of misinformation. The spread of misinformation threatens and disrupts our society, especially in the emotionally charged socio-political landscape we now live in.
The Washington Post reported that, during the 2020 US elections, election-related posts from purveyors of misinformation on Facebook, now governed by the rebranded technology conglomerate Meta, got six times more clicks than posts from factual, credible news sources. Since 2016, the instigators who traffic in misinformation have continued to garner major audiences on Facebook, and leading digital platforms are profiting from this wave of deception. The anti-science undercurrents of misinformation since the pandemic’s beginning add a considerably more sinister touch: this year, anti-vaxxers generated over $1 billion in revenue for social media companies, and anti-vaccine content accounts for a sizable portion of engagement on Facebook and Facebook-owned Instagram.
How Bad Information Crowds Out Good Information
Gresham’s Law, the monetary principle that “bad money drives out good,” operates in the context of political communication and information today, especially in what many consider the “post-truth” world marred by fake news and misinformation. The law holds that when two currencies circulate at the same face value, the more valuable (“good”) money eventually disappears from circulation as it is hoarded, melted down, or exported, leaving the debased (“bad”) money behind.
This phenomenon is occurring with news, too: verified, relevant reporting tends to be crowded out by unverified, “fake” news. Just as good money is more costly to produce than bad, diligent fact-checking and truthful reporting demand a greater investment of resources, and rarely yield the lucrative outrage that misinformation does.
Who Is To Blame?
Purveyors of misinformation who spread anti-vaccine rhetoric, for instance, generate revenue successfully because Big Tech fails to regulate them or enforce its own policies against them. From failing to screen foreign governments’ attempts to influence the 2016 US presidential election to its lax handling of user data in the Cambridge Analytica scandal, which exposed the data of 87 million individuals, Facebook continues to make headlines for its lack of oversight on matters of privacy and misinformation. The circulation and propagation of “bad” information take hold largely because of this gross negligence and the accompanying lack of transparency and accountability.
Last month, whistleblower Frances Haugen, a former data scientist at Facebook, testified before a Senate subcommittee that the company continues to sow discord and undermine democracy “in pursuit of breakneck growth and astronomical profits.” Haugen also testified that Facebook prioritizes profit over safety and ignores the propagation of toxic content to the masses. More recently, she urged a European Parliament committee to broaden the scope of the proposed Digital Services Act (DSA), specifically by forcing platforms such as Facebook to assume full responsibility for harms beyond illegal content, such as election misinformation.
Big Tech: Broken Beyond Repair?
While companies have expanded their efforts to remove content and flag illegal activity on their respective platforms, companies like Facebook continue to rely on artificial intelligence and algorithms that are innately profit-driven, tuned to advance advertising revenue. The failure to staunch misinformation and problematic content is largely rooted in these systems and in surveillance-based business models. Self-regulation has reached an impasse for the same reason: while making small, surface-level changes to appease Congress, company representatives fail to enact significant internal reform. Yet, at this juncture, do we need more censorship to contain misinformation?
The Center for Countering Digital Hate in Washington, DC is calling on the tech giants to de-platform the influencers and independent organizations driving misinformation online. The Center found that a “disinformation dozen” of influencers was primarily responsible for nearly two-thirds of all anti-vaccine content shared on social media between February and March of this year. Some staunch proponents of free speech argue that such de-platforming is a greater ethical breach than allowing misinformation to spread. However, misinformation and fake news also greatly imperil an individual’s freedom and access to accurate, reliable information.
At present, the ongoing antitrust lawsuits brought against Facebook and Google may not even succeed in curbing the companies’ control. The cases could take many years to litigate, and most courts continue to take a conservative approach to antitrust enforcement. Because most content and advertising is offered to audiences for free or at little monetary cost, US antitrust law does not fully address the gamut of non-monetary harms: how social media companies collect vast amounts of personal information and control misinformation in the online space.
Moving Forward: What’s Next?
While data privacy laws and international organizations have imposed significant regulations to check the power of big technology companies, accountability frameworks remain weak at present.
Take the example of the DSA, currently being deliberated by Members of the European Parliament (MEPs): it is important to reshape and revise legislation on these matters. Strengthening the DSA will hopefully prevent regulatory negligence in the future and add safeguards that hold the social media giants more accountable. Rather than banning advertisements outright, MEPs are pushing for privacy-safe alternatives such as contextual ads, noting the harmful impacts that behavioral advertising can have on users. Crucial safeguards for the future include greater transparency when companies hand over data for review. Haugen has also suggested broadening regulatory oversight by casting a wider net of experts and individuals to deliver a gold standard of accountability and enforcement.
Overall, Big Tech’s power continues to have sobering impacts on our access to information and speech today. Even as companies take up the effort to eliminate the propagation of misinformation, online spaces remain some of the most effective mediums for influencing the masses. Unless big tech’s profit machine can be altered from within, our guard needs to stay up: we must remain cognizant of these companies’ impact on our online and public spaces and the threat they may pose to our communities.