BitChute

1.8K posts

BitChute
@Bitchute

Video hosting and sharing platform https://t.co/o1i4RPCA5O Dangerous ideas welcome. No manipulation. No apology. For support @bitchutesupport

Worldwide · Joined February 2021
8 Following · 21.1K Followers

Pinned Tweet
BitChute @Bitchute ·
This week @Apple confirmed that UK iPhone owners must now verify their age to access restricted services, following pressure from the UK government and Ofcom. The on-screen notice tells users that UK law requires this. The Financial Times notes that app stores are not covered by the Online Safety Act. The legal basis for that statement is unclear.

This is how it works. This is governance without legislation. The government pressures. The company complies. The compliance is presented as legal obligation. No specific law is cited because no specific law clearly applies. The infrastructure of child protection becomes the infrastructure of adult surveillance, built not through legislation but through regulatory relationships that cannot be documented, appealed, or meaningfully challenged.

Once age verification exists, it can be expanded administratively. A record of which verified adult accessed which service exists, is held by someone, and is subject to government access. That is not speculation. It is the logical endpoint of a system that requires proving identity to access legal content.

BitChute made a different choice. We exited the UK market rather than operate in an environment where the line between legal requirement and regulatory pressure is deliberately blurred, where compliance is achieved through informal regulatory pressure, and where honest governance becomes impossible. You cannot tell creators that a restriction is legally compelled when the legal basis is a regulator working closely with a platform under implied pressure. You cannot document the specific law when the specific law does not clearly apply.

That decision cost us a significant market. We made it anyway. Leadership is not about optimizing for revenue in every jurisdiction regardless of what that requires. It is about knowing which lines you will not cross and acting on them before you are pushed. Principles over profits is not a slogan. It is a test. We took it.
5 replies · 12 reposts · 31 likes · 27.7K views

BitChute retweeted
The Wernick Files @thewernickfiles ·
"They demand that you prove to the state that you deserve your rights, instead of demanding that the state respect them." "They do not want to save the elections. They want to save themselves from you." The SAVE Act. What it actually does. 👇
5 replies · 46 reposts · 146 likes · 73.5K views

BitChute @Bitchute ·
@RT_com Yup, we also wrote about it: x.com/Bitchute/statu…
Quoted tweet from BitChute @Bitchute:

Today a Los Angeles jury found @Meta and @Google liable for designing their platforms in ways that harm young users, awarding compensatory damages with punitive damages still to be determined after finding the companies acted with malice, oppression, and fraud. Yesterday a New Mexico jury ordered Meta to pay $375 million for failing to warn users about dangers and protect children from sexual predators. Two juries in two days reached the same conclusion. The dominant platforms prioritized monetization over user wellbeing.

These verdicts are not about content. They are about design. The liability is moving from speech to system design. Meta and YouTube were not held liable for what users posted. They were held liable for infinite scroll engineered to prevent stopping, autoplay designed to extend viewing time indefinitely, and algorithmic amplification that maximizes emotional response regardless of whether it is true or healthy. These are deliberate design choices made in the service of engagement metrics and advertising revenue.

This is a correct application of the harm principle. Mill's foundational argument holds that the only legitimate basis for restricting liberty is to prevent harm to others. The harm here is not offensive speech. It is not uncomfortable ideas. It is platform architecture deliberately designed to exploit psychological vulnerabilities for profit. That is a meaningful distinction and two juries drew it.

BitChute's governance framework prohibits exactly this by design. Our algorithmic neutrality commitment is explicit. Your data will not be used to suppress, prioritize, or manipulate content. You control your feed. We do not optimize it for you. We do not optimize for engagement at the expense of user wellbeing. We do not have infinite scroll engineered to keep you from stopping. We do not have autoplay designed to maximize watch time regardless of harm.

The platforms found liable are also the platforms that spend the most time telling regulators they need broad content removal powers to protect users. Today's verdicts suggest the protection users actually needed was from the platforms themselves.

1 reply · 0 reposts · 2 likes · 160 views

RT @RT_com ·
❗️ Meta, YouTube liable for user addiction, jury finds

Case was filed by a 20-year-old woman who became addicted to YouTube at 6 and Instagram at 9, later developing depression and self-harm.

The companies are to pay $6 million in damages, the first such case in the US to reach trial.
11 replies · 25 reposts · 101 likes · 11.6K views

CR1337 @CR1337 ·
De-Google your phone and achieve more mobile privacy. Take control of your Android phone by de-Googling and maximizing mobile privacy; the following resources will help you:

1. Privacy Guides Android section gives unbiased OS and app recommendations: privacyguides.org/en/android/
2. GrapheneOS is the top privacy and security focused mobile OS for Pixels: grapheneos.org
3. F-Droid is the best source for free and open source Android applications: f-droid.org
4. CalyxOS offers a polished de-Googled experience with strong privacy tools built-in: calyxos.org
5. Aurora Store allows anonymous access to Android apps without a Google account: auroraoss.com

Don't just bookmark this. Do it & share information like this with others!
28 replies · 203 reposts · 821 likes · 27.4K views
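
A concrete first step toward the de-Googling the list above describes, short of reflashing the OS, is auditing which Google packages are installed and disabling them per-user over adb. Below is a minimal Python sketch, not from the original thread: it assumes the adb tool is installed and on PATH, USB debugging is enabled, and a single device is connected; the com.google. prefix filter and the script name de_google_audit.py are illustrative.

    # de_google_audit.py -- list Google packages on a connected Android
    # device and optionally disable them for the current user via adb.
    # Assumes: adb on PATH, USB debugging enabled, one device connected.
    # Sketch only; review the list before disabling, since some Google
    # packages are load-bearing for other apps.
    import subprocess
    import sys

    def adb(*args: str) -> str:
        """Run an adb command and return its stdout, raising if adb fails."""
        result = subprocess.run(
            ["adb", *args], capture_output=True, text=True, check=True
        )
        return result.stdout

    def google_packages() -> list[str]:
        """Return installed package names matching the com.google. prefix."""
        out = adb("shell", "pm", "list", "packages")
        # Each output line looks like "package:com.google.android.gms"
        names = [line.removeprefix("package:").strip() for line in out.splitlines()]
        return sorted(n for n in names if n.startswith("com.google."))

    def disable(package: str) -> None:
        """Disable a package for user 0; reversible with 'pm enable'."""
        adb("shell", "pm", "disable-user", "--user", "0", package)

    if __name__ == "__main__":
        pkgs = google_packages()
        print(f"Found {len(pkgs)} Google packages:")
        for p in pkgs:
            print("  " + p)
        if "--disable" in sys.argv:
            for p in pkgs:
                disable(p)
                print("disabled " + p)

Because pm disable-user is reversible with adb shell pm enable, this is a lower-risk experiment than reflashing to GrapheneOS or CalyxOS, and a reasonable way to see how dependent a given phone is on Google services before committing to a full de-Googled OS.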
BitChute @Bitchute ·
@JonHaidt You might also want to check what we wrote about it: x.com/Bitchute/statu…
Quoted tweet from BitChute @Bitchute:
[BitChute's statement on the Meta and YouTube verdicts, reproduced in full above]

0 replies · 0 reposts · 1 like · 431 views

Jonathan Haidt @JonHaidt ·
Victory in the social media trial in LA! As of today, we are in a new world: a new era in the fight to protect children from online harms. A jury sided with Kaley and therefore with millions of children: Big Tech is harming kids on an industrial scale.

For years, parents were told these harms were exaggerated, anecdotal, or simply the unavoidable cost of growing up online. Today, a jury affirmed what parents have long known: Meta and YouTube were designed to exploit young people, with devastating consequences.

For the first time, the law aligns with common sense: social media companies no longer have a special exemption to harm children with impunity. Their shield is gone. They will be treated like any industry that knowingly harms children and lies about it. History will judge them as harshly as the tobacco industry.

This bellwether case tested a new legal theory: the harm is not just what algorithms show children, but rather that these products were designed to foster addiction. The companies knew they were harming children by the millions, and did it anyway. They were negligent and dishonest.

This outcome belongs first and foremost to the families, especially the many parents who, in the face of unimaginable loss, chose to speak out, demand accountability, and endure a painful legal process so that other children might be spared.

This is just the beginning. Thousands of cases will follow, bringing Meta, Snap, TikTok, and YouTube to court. Much work remains in courts, legislatures, schools, and communities. But for now, let us all just savor the long-awaited arrival of justice. nytimes.com/2026/03/25/tec…
24 replies · 505 reposts · 1.8K likes · 201K views

BitChute @Bitchute ·
@CNN An important step towards social responsibility; we wrote about it here: x.com/Bitchute/statu…
Quoted tweet from BitChute @Bitchute:
[BitChute's statement on the Meta and YouTube verdicts, reproduced in full above]

0 replies · 0 reposts · 1 like · 31 views

CNN @CNN ·
Jury finds Meta and YouTube liable in landmark social media trial that accused the tech giants of harming a woman's mental health cnn.it/47TlGqg
208 replies · 416 reposts · 1.1K likes · 187.7K views

BitChute @Bitchute ·
@AP We wrote an article about it: x.com/Bitchute/statu…
Quoted tweet from BitChute @Bitchute:
[BitChute's statement on the Meta and YouTube verdicts, reproduced in full above]

0 replies · 0 reposts · 4 likes · 714 views

BitChute @Bitchute ·
[BitChute's statement on the Meta and YouTube verdicts, reproduced in full above]
3 replies · 8 reposts · 19 likes · 2.3K views

BitChute @Bitchute ·
Control is moving down the stack. The infrastructure of the open internet is being brought progressively under executive control, jurisdiction by jurisdiction, layer by layer, always under a security justification, always outside formal legislation.
0 replies · 0 reposts · 0 likes · 179 views

BitChute @Bitchute ·
For BitChute, this matters beyond the headline. Our governance framework commits to infrastructure neutrality and user sovereignty. Those commitments operate at the platform layer. The router is the physical layer beneath every platform commitment we have made. When the physical infrastructure of internet access is subject to executive discretion, with exemptions granted to favored manufacturers and withheld from others, the neutrality we guarantee at the platform level becomes structurally constrained.
1 reply · 0 reposts · 0 likes · 198 views

BitChute @Bitchute ·
The FCC has moved to ban foreign-made consumer routers unless the Department of Defense or the Department of Homeland Security grants a conditional approval. The FCC is acting on an executive branch determination. No new legislation was passed. No defined standards govern exemptions. The approval authority sits entirely within the executive branch, at the discretion of its agencies.
2 replies · 0 reposts · 13 likes · 5.1K views

BitChute retweeted
Rick Sanchez @RickSanchezTV ·
We are going LIVE today on @Bitchute at 12:00 PM EST/4:00 PM GMT. Link to livestream shortly! Please join -- we are taking questions!
3 replies · 8 reposts · 35 likes · 2.6K views

BitChute retweeted
The Wernick Files @thewernickfiles ·
My name is Jeffrey Wernick. I am Jewish. I answer to no institution. And I am asking Governor Evers to veto AB 446. A government that decides which political opinions are acceptable is not fighting hatred. It is practicing a different kind of it. The lead drafter of the IHRA definition opposes this bill. So do I. 608-266-1212. evers.wi.gov. Veto AB 446.
2 replies · 11 reposts · 52 likes · 2.8K views