Big Tech vs. Free Speech: The end of Section 230 may be the key!
Written by Peter Boykin on May 15, 2019

Please Donate to https://fundly.com/stopthebias
Together we can bring attention to social media censorship and hold these monopolies accountable for the exemption they have hidden behind.
It's no longer a question of whether the giant social media companies (Google, Twitter, Facebook, Instagram, etc.) have become too powerful. They've matured to the point that they can actually affect what people see, read, listen to, and even what they think. To make matters worse, they've decided to use these powers to change voting patterns and to censor speech that opposes their political beliefs.
It's time to stop them before all is lost. Harmeet Dhillon (an attorney suing Google and a Republican Party official) has been on Tucker Carlson's show frequently of late, and she warns:
“Trump won’t win in 2020 and we will never win another election if we don’t stop this!”
One of the most likely ways for Congress to stop them would be to revise Section 230 of the Communications Decency Act (CDA), which provides a special exemption from liability for content posted on their platforms. This exemption was originally extended to them because they claimed their platforms would be a place for people of all points of view to post their ideas. Given their current censorship actions, we all know that is no longer the case.
Consequently, since the social media platforms selectively publish just as the New York Times or the Washington Post does, they should face the same risk of liability for the content posted on their sites as any other publisher.
This move would, of course, destroy their business model, so they would likely abandon the censorship tactics they use against conservatives in order to avoid any changes to Section 230 of the CDA.
Alternatively, the threat of antitrust litigation is another avenue that may get their attention. The government should apply the same techniques to these social media giants that it used to bring Microsoft to heel.
Our goal is to see our leaders pursue these remedies before it's too late!
Reprint from:
https://www.fastcompany.com/90273352/maybe-its-time-to-take-away-the-outdated-loophole-that-big-tech-exploits
The 1996 law that made the web is in the crosshairs
Internet companies have long been shielded from legal responsibility for toxic user content by the Section 230 statute. Now that they're huge, rich, and behaving badly, that gift could be taken away.
In the face of that toxic content's intractability and the futility of the tech giants' attempts to deal with it, it's become a mainstream belief in Washington, D.C., and a growing realization in Silicon Valley, that it's no longer a question of whether to, but how to, regulate companies like Google, Twitter, and Facebook to hold them accountable for the content on their platforms. One of the most likely ways for Congress to do that would be to revise Section 230.
UNDERSTANDING SECTION 230
Section 230 remains a misunderstood part of the law. As its co-author, Senator Ron Wyden, explained it to me, the statute provides both a "shield" and a "sword" to internet companies. The "shield" protects tech companies from liability for harmful content posted on their platforms by users. To wit:
(c)(1) No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Specifically, it relieves web platform operators of liability when their users post content that violates state law by defaming another person or group, painting someone or something in a false light, or publicly disclosing private facts. Section 230 does not protect tech companies from federal criminal liability or from intellectual property claims.
"Because content is posted on their platforms so rapidly, there's just no way they can possibly police everything," Senator Wyden told me.
The "sword" refers to Section 230's "Good Samaritan" clause, which gives tech companies legal cover for the choices they make when moderating user content. Before § 230, tech companies were hesitant to moderate content for fear of being branded "publishers" and thus made liable for toxic user content on their sites. Per the clause:
(c)(2)(A) No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.
"I wanted to make sure that internet companies could moderate their websites without getting clobbered by lawsuits," Wyden said on the Senate floor back in March. "I think everybody can agree that's a better scenario than the alternative, which means websites hiding their heads in the sand out of fear of being weighed down with liability."
Many lawmakers, including Wyden, feel the tech giants have been slow to detect and remove harmful user content, and that they've used the legal cover provided by § 230 to avoid taking active responsibility for user content on their platforms.
And by 2016 the harmful content wasn't just hurting individuals or businesses, but whole societies. Social sites like YouTube became unwitting recruiting platforms for violent terrorist groups. Russian hackers weaponized Facebook to spread disinformation, which caused division and rancor among voters and eroded confidence in the outcome of the 2016 U.S. presidential election.
As Wyden pointed out on the floor of the Senate in March, the tech giants have even profited from the toxic content.
"Section 230 means they [tech companies] are not required to fact-check or scrub every video, post, or tweet," Wyden said. "But there have been far too many alarming examples of algorithms driving vile, hateful, or conspiratorial content to the top of the sites millions of people click onto every day, companies seeming to aid in the spread of this content as a direct function of their business models."
And the harm may get a lot worse. Future bad actors may use machine learning, natural language processing, and computer vision technology to create convincing video or audio footage depicting a person doing or saying something provocative that they never really did or said. Such "deepfake" content, skillfully created and deployed with the right subject matter at the right time, could cause serious harm to individuals, or even calamitous damage to whole nations. Imagine a deepfaked president taking to Twitter to declare war on North Korea.
It's a growing belief in Washington in 2018 that tech companies might become more focused on keeping such harmful user content off their platforms if the legal protections provided in § 230 were taken away.
SHIELDING GIANTS
There's a real question over whether Wyden's "shield" still fits. Section 230 says web companies won't be treated as publishers, but they look a lot more like publishers in 2018 than they did in 1996.
In 1996, websites and services often looked like digital versions of real-world things. Craigslist was essentially a digital version of the classifieds. Prodigy offered an internet on-ramp and some bulletin boards. GeoCities let "homesteaders" build pages that were organized (by content type) in "neighborhoods" or "cities."
Over time, the dominant business models changed. Many internet businesses and publishers came to rely on interactive advertising for income, a business model built on browser tracking and the collection of users' personal data to target ads.
To increase engagement, internet companies began "personalizing" their sites so that each user would have a different and unique experience, tailor-made to their interests. Websites became highly curated experiences served up by algorithms. And the algorithms were fed by the personal data and browsing histories of users.
Facebook came along in 2004 and soon took user data collection to the next level. The company provided a free social network, but harvested users' personal data to target ads to them on Facebook and elsewhere on the web. And the data was very good. Not only could Facebook capture all kinds of data about a user's tastes, but it could capture the user's friends' tastes too. This was catnip to advertisers because the social data proved to be a powerful indicator of what sorts of ads the user might click on.
Facebook also leveraged its copious user data, including the user's clicks, likes, and shares, to inform the complex algorithms that curate the content in users' news feeds. It began showing users the posts, news, and other content that the user, based on their personal tastes, was most likely to respond to. This put more attention-grabbing stuff in front of users' eyeballs, which pumped up engagement and created more opportunities to show ads.
This sounds a lot like the work of a publisher. "Our goal is to build the perfect personalized newspaper for every person in the world," Facebook CEO Mark Zuckerberg said in 2014.
But Facebook has always been quick to insist that it's not a publisher, just a neutral technology platform. There's a very good reason for that: publishers are liable for the content they publish.
Follow @PeterBoykin on Social Media
Twitter: Suspended
Facebook: https://www.facebook.com/Gays4Trump
Instagram: https://www.instagram.com/peterboykin/
Youtube: https://www.youtube.com/c/PeterBoykin
Reddit: https://www.reddit.com/user/peterboykin
Telegram: https://t.me/PeterBoykin
PolitiChatter: https://politichatter.com/PeterBoykin
Patreon: https://www.patreon.com/peterboykin
PayPal: https://www.paypal.me/magafirstnews
Cash App: https://cash.me/app/CJBHWPS
Cash ID: $peterboykin1