

BitChute
@Bitchute
Video hosting and sharing platform https://t.co/o1i4RPCA5O Dangerous ideas welcome. No manipulation. No apology. For support @bitchutesupport



Today a Los Angeles jury found @Meta and @Google liable for designing their platforms in ways that harm young users, awarding compensatory damages with punitive damages still to be determined after finding the companies acted with malice, oppression, and fraud. Yesterday a New Mexico jury ordered Meta to pay $375 million for failing to warn users about dangers and protect children from sexual predators. Two juries in two days reached the same conclusion: the dominant platforms prioritized monetization over user wellbeing.

These verdicts are not about content. They are about design. Liability is moving from speech to system design. Meta and YouTube were not held liable for what users posted. They were held liable for infinite scroll engineered to prevent stopping, autoplay designed to extend viewing time indefinitely, and algorithmic amplification that maximizes emotional response regardless of whether it is true or healthy. These are deliberate design choices made in the service of engagement metrics and advertising revenue.

This is a correct application of the harm principle. Mill's foundational argument holds that the only legitimate basis for restricting liberty is to prevent harm to others. The harm here is not offensive speech. It is not uncomfortable ideas. It is platform architecture deliberately designed to exploit psychological vulnerabilities for profit. That is a meaningful distinction, and two juries drew it.

BitChute's governance framework prohibits exactly this by design. Our algorithmic neutrality commitment is explicit: your data will not be used to suppress, prioritize, or manipulate content. You control your feed. We do not optimize it for you. We do not optimize for engagement at the expense of user wellbeing. We do not have infinite scroll engineered to keep you from stopping. We do not have autoplay designed to maximize watch time regardless of harm.

The platforms found liable are also the platforms that spend the most time telling regulators they need broad content removal powers to protect users. Today's verdicts suggest the protection users actually needed was from the platforms themselves.
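To make the design distinction concrete, here is a minimal Python sketch of the two feed architectures contrasted above. It is hypothetical, not BitChute's or anyone's production code; the Video fields predicted_watch_time and predicted_emotional_response are stand-ins for the kinds of learned engagement signals the verdicts describe.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Video:
    title: str
    uploaded_at: datetime
    predicted_watch_time: float          # model estimate in seconds (hypothetical signal)
    predicted_emotional_response: float  # 0..1 arousal score (hypothetical signal)

def engagement_ranked_feed(videos: list[Video]) -> list[Video]:
    # The pattern at issue in the verdicts: order the feed by a learned
    # engagement score, surfacing whatever maximizes watch time and
    # emotional response, regardless of whether it is true or healthy.
    return sorted(
        videos,
        key=lambda v: v.predicted_watch_time * v.predicted_emotional_response,
        reverse=True,
    )

def chronological_feed(videos: list[Video]) -> list[Video]:
    # The alternative described above: no per-user optimization,
    # just a fixed, user-predictable ordering (newest first).
    return sorted(videos, key=lambda v: v.uploaded_at, reverse=True)

The engagement-ranked feed shifts as the model learns what keeps each user watching; the chronological feed is stable and predictable to the user, which is the property an algorithmic neutrality commitment points at.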
