Tucker Carlson recently pointed out the hypocrisy in the mainstream media’s coverage of the Justin Trudeau blackface scandal by contrasting how CNN’s Don Lemon handled the matter with his past coverage.
On Tucker Carlson Tonight on Fox News, Tucker wasted no time singling out CNN’s Don Lemon, playing a clip that showed how the liberal host covered Trudeau’s apology by redirecting the issue into an attack on President Trump. Tucker also brought up an older clip from October 2018 of Don Lemon discussing the Megyn Kelly blackface controversy.
“Megyn is 47 years old, she’s our age. There has never been a time in her 47 years that blackface has been acceptable… I wonder how much diversity she has on her staff. I don’t know, I’m not there but I would imagine there is not a lot,” “This is what people of the larger culture don’t understand about racism and about privilege.”
– Don Lemon
Now compare this to what Lemon said about the Prime Minister…
“He says he didn’t think it was racist at the time, now he knows better,”
“Think about it however you want to think about it. When someone apologizes, wow, we don’t often see that here. Especially a world leader who’s saying, ‘I should have known better and I’m sorry.’”
– Don Lemon
Double Standards Much?
Unless you have had your internet and TV turned off recently, you will have seen the revelation that the Canadian prime minister was caught up in his own “blackface” scandal, for which he attempted to apologize during a news conference in Winnipeg, Manitoba. Tucker Carlson, however, did not buy this apology, and neither did most of the right online. The impression is that Trudeau didn’t actually take responsibility but rather “transferred it” by blaming privilege.
“He’s moving responsibility for what he did from himself to the rest of us… it’s not a confession, it’s a justification and the only people dumb enough to buy it are the news media, of course. The dumbest people in the world,”
“Or could it be that because Justin Trudeau is a powerful leader on the left, he gets a pass no matter what he does? I think you know the answer,”
“What a difference a year makes… Megyn Kelly never wore blackface, she wouldn’t do that. She’s not Justin Trudeau,”
“She just made the mistake of not seeming quite offended enough in a TV segment about certain Halloween costumes.”
– Tucker Carlson
Follow – Peter Boykin
at Telegram – https://www.T.Me/PeterBoykin
at Facebook – https://www.Facebook.com/PeterRobertBoykin
at YouTube – https://www.YouTube.com/PeterBoykin
at Reddit https://www.reddit.com/user/peterboykin/
at Instagram https://www.instagram.com/peterboykin
Support at Patreon https://www.patreon.com/peterboykin
Support at Paypal https://www.paypal.com/paypalme2/magafirstnews
Purchase Peter Boykin’s Book #RussiaGate: Truth, Post-Truth, or Damned Lies?!
Peter Boykin needs more people to join his Patreon.
It takes hundreds of dollars a month to run MagaOneRadio.net and MagaFirstNews.com and the other networks TheMagaNetwork.com is involved with.
If people want to make a one-time donation, it’s PayPal.me/MagaFirstNews. Thanks for your support.
Remember, the MarchForTrump.net rally is June 15 in Greensboro, NC. We are still looking for speakers, sponsors, and participants.
Also, on July 6 Peter Boykin is speaking at Demand Free Speech in Washington DC.
Peter Boykin has a donation page for that at Fundly.com/StopTheBias, where he is raising funds for the trip and for more frequent visits to reach out to Congress about changing the way free speech is handled on social media.
If you can help we will appreciate it. Please share to your groups and friends.
Follow @PeterBoykin on Social Media
Support Peter Boykin’s Activism by Donating
Cash App: https://cash.me/app/CJBHWPS
Cash ID: $peterboykin1
Listen to #MagaOneRadio
Join the #MagaNetwork
Read the Latest #MagaFirstNews
Support Donald Trump
Join Our Groups on Facebook:
Americans With Trump
North Carolina MAGA Network
NC Trump Club
Vote For DJ Trump
Trump Loves Winning
Straights For Trump
Grab them by the P***Y
Join Our Pages on Facebook:
Big Tech VS Free Speech?
The end of Section 230 may be the key!
Please Donate to https://fundly.com/stopthebias
Together we can bring attention to social media censorship and hold these monopolies to account for the exemption they have hidden behind.
It’s no longer a question of whether the Giant Social Media Companies – Google, Twitter, Facebook, Instagram, etc. – have become too powerful. They’ve matured to the point that they can actually affect what people see, read, listen to and even what they think. To make matters worse, they’ve decided that they will use these powers to change voting patterns and to Censor speech that opposes their political beliefs.
It’s time to stop them before all is lost. Harmeet Dhillon (Attorney Suing Google and Republican Party Official) has been on Tucker Carlson’s show frequently of late and she warns,
“Trump won’t win in 2020 and we will never win another election if we don’t stop this!”
One of the most likely ways for Congress to stop them would be to revise Section 230 of the Communications Decency Act (CDA), which provides a special exemption from liability for content that is posted on their platforms. This exemption was initially extended to them because they claimed that their platforms would be a place for people from all points of view to post their ideas. Given their current Censorship actions, we all know that is no longer the case.
Consequently, the Social Media Platforms should face the possibility of being held responsible for all content posted on their sites, since they selectively publish just as the New York Times or the Washington Post do. In fairness, then, the Social Media Platforms should bear the same risk of liability for their content as other publishers.
This move would, of course, destroy their business model so they would be likely to change the Censorship tactics they use against Conservatives in order to avoid any changes to Section 230 of the CDA.
Alternatively, the threat of Antitrust Litigation is another avenue that may get their attention. The government should apply the same techniques against these Social Media Giants as they used to bring Microsoft to heel.
Our goal is to see our leaders pursue these remedies before it’s too late!
The 1996 law that made the web is in the crosshairs
Internet companies have long been shielded from legal responsibility for toxic user content by the Section 230 statute. Now that they’re huge, rich, and behaving badly, that gift could be taken away.
In the face of that toxic content’s intractability and the futility of the tech giants’ attempts to deal with it, it’s become a mainstream belief in Washington, D.C.–and a growing realization in Silicon Valley–that it’s no longer a question of whether to, but how to, regulate companies like Google, Twitter, and Facebook to hold them accountable for the content on their platforms. One of the most likely ways for Congress to do that would be to revise Section 230.
UNDERSTANDING SECTION 230
Section 230 remains a misunderstood part of the law. As Wyden explained it to me, the statute provides both a “shield” and a “sword” to internet companies. The “shield” protects tech companies from liability for harmful content posted on their platforms by users. To wit:
(c) (1) No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Specifically, it relieves web platform operators of liability when their users post content that violates state law by defaming another person or group, or painting someone or something in a false light, or publicly disclosing private facts. Section 230 does not protect tech companies from federal criminal liability or from intellectual property claims.
“Because content is posted on their platforms so rapidly there’s just no way they can possibly police everything,” Senator Wyden told me.
The “sword” refers to Section 230’s “good samaritan” clause, which gives tech companies legal cover for choices they make when moderating user content. Before § 230, tech companies were hesitant to moderate content for fear of being branded “publishers” and thus made liable for toxic user content on their sites. Per the clause:
(c) (2) (a) No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
“I wanted to make sure that internet companies could moderate their websites without getting clobbered by lawsuits,” Wyden said on the Senate floor back in March. “I think everybody can agree that’s a better scenario than the alternative, which means websites hiding their heads in the sand out of fear of being weighed down with liability.”
Many lawmakers, including Wyden, feel the tech giants have been slow to detect and remove harmful user content, that they’ve used the legal cover provided by § 230 to avoid taking active responsibility for user content on their platforms.
And by 2016 the harmful content wasn’t just hurting individuals or businesses, but whole societies. Social sites like YouTube became unwitting recruiting platforms for violent terrorist groups. Russian hackers weaponized Facebook to spread disinformation, which caused division and rancor among voters, and eroded confidence in the outcome of the 2016 U.S. presidential election.
As Wyden pointed out on the floor of the Senate in March, the tech giants have even profited from the toxic content.
“Section 230 means they [tech companies] are not required to fact-check or scrub every video, post, or tweet,” Wyden said. “But there have been far too many alarming examples of algorithms driving vile, hateful, or conspiratorial content to the top of the sites millions of people click onto every day – companies seeming to aid in the spread of this content as a direct function of their business models.”
And the harm may get a lot worse. Future bad actors may use machine learning, natural language, and computer vision technology to create convincing video or audio footage depicting a person doing or saying something provocative that they didn’t really do or say. Such “Deepfake” content, skillfully created and deployed with the right subject matter at the right time, could cause serious harm to individuals, or even calamitous damage to whole nations. Imagine a deep-faked president taking to Twitter to declare war on North Korea.
There’s a growing belief in Washington in 2018 that tech companies might become more focused on keeping such harmful user content off of their platforms if the legal protections provided in § 230 were taken away.
There’s a real question over whether Wyden’s “shield” still fits. Section 230 says web companies won’t be treated as publishers, but they look a lot more like publishers in 2018 than they did in 1996.
In 1996 websites and services often looked like digital versions of real-world things. Craigslist was essentially a digital version of the classifieds. Prodigy offered an internet on-ramp and some bulletin boards. GeoCities let “homesteaders” build pages that were organized (by content type) in “neighborhoods” or “cities.”
Over time the dominant business models changed. Many internet businesses and publishers came to rely on interactive advertising for income, a business model that relied on browser tracking and the collection of users’ personal data to target ads.
To increase engagement, internet companies began “personalizing” their sites so that each user would have a different and unique experience, tailor-made to their interests. Websites became highly curated experiences served up by algorithms. And the algorithms were fed by the personal data and browsing histories of users.
Facebook came along in 2004 and soon took user data collection to the next level. The company provided a free social network, but harvested users’ personal data to target ads to them on Facebook and elsewhere on the web. And the data was very good. Not only could Facebook capture all kinds of data about a user’s tastes, but it could capture the user’s friends’ tastes too. This was catnip to advertisers because the social data proved to be a powerful indicator of what sorts of ads the user might click on.
Facebook also leveraged its copious user data, including that on the user’s clicks, likes, and shares, to inform the complex algorithms that curate the content in users’ news feeds. It began showing users the posts, news, and other content that the user–based on their personal tastes–was most likely to respond to. This put more attention-grabbing stuff in front of its users’ eyeballs, which pumped up engagement and created more opportunities to show ads.
This sounds a lot like the work of a publisher. “Our goal is to build the perfect personalized newspaper for every person in the world,” Facebook CEO Mark Zuckerberg said in 2014.
But Facebook has always been quick to insist that it’s not a publisher, just a neutral technology platform. There’s a very good reason for that: publishers are liable for the content they publish.