Safety
@Safety

1.7K posts

Providing the latest safety tools, resources, and updates from X.

X HQ · Joined December 2009
134 Following · 3.2M Followers
Safety @Safety
As we observe Self Harm Awareness month, X’s Safety team continues to strengthen protections against harmful content. Key changes we’ve made to our Self-harm policy include:

› Enhanced detection: X is enhancing detection to spot predatory behavior targeting minors (especially those showing self-harm signs). Predatory accounts will be suspended and reported to law enforcement, where appropriate. Minors exhibiting self-harm behaviors and interacting with predatory accounts may also have their accounts suspended as a protective measure.

› Tougher rules on graphic self-harm content: Graphic or gory images and videos showing cutting, self-injury, or self-harm without recovery context are not allowed, as this content can normalize harm or attract predators.

› Improved signals: We've improved signals to ensure supportive conversations about mental health are protected when they focus on recovery.

Read more here: help.x.com/en/rules-and-p…
Safety @Safety
We’re proud to share that X has achieved our TAG Brand Safety Recertification for the third year in a row.

Mike Zaneis, President & CEO, TAG, on X’s new milestone: “By maintaining the rigorous standards of the TAG Brand Safety Certified program, X has demonstrated a meaningful commitment to protecting advertisers from ad misplacement and brand safety risks across its platform. TAG's certification standards give brands the confidence they need to invest in the digital advertising ecosystem, and X continues to show its leadership in upholding those high standards. We look forward to continuing to work with X to build a safer and more trustworthy environment for advertisers and users alike.”

Read more here: tagtoday.net/certifications
Safety @Safety
An update on how X maintains safety in times of crisis

As part of X’s incident response protocol, X initiated proactive manual sweeps to identify and remove violative content less than 3 hours after the initial strikes. These sweeps have been running 24/7 since the response was initiated, and are supplemented by working group meetings bringing together experts across our company.

X is actively scaling its enforcement by building heuristics and Grok-based defenses that can detect and enforce against new forms of violative content that emerge on the platform. These defenses allow us to scale at speed, ensuring our users are protected in real time.

Additionally, conflict-related Community Notes have been shown on more than 20K posts and have been seen more than 119M times so far, and that number is still growing. X applies media matching to ensure notes written on misleading videos and images are automatically applied to other posts with matching media.

X’s incident and crisis responses are robust protocols designed and tested for events that may lead to widespread proliferation of potentially violative content on the platform. We continue to monitor trends on X and ensure content is authentic and users are engaging safely.
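The post above mentions media matching, which applies a Community Note written on one image or video to other posts carrying the same media. X doesn't publish its matching implementation; a common technique for this kind of near-duplicate detection is perceptual hashing. Below is a minimal difference-hash (dHash) sketch in Python, operating on toy grayscale pixel grids; the function names and threshold are illustrative, not X's code.

```python
# Conceptual sketch of perceptual media matching (not X's actual system).
# A difference hash (dHash) fingerprints an image so near-duplicates --
# e.g. a re-encoded or slightly brightened copy -- hash to nearly the
# same bits, and a small Hamming distance counts as a match.

def dhash(pixels):
    """Compute a dHash from a grayscale pixel grid (list of rows).

    Each bit records whether a pixel is brighter than its right
    neighbour, so the hash captures gradients, not exact values.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_match(a, b, threshold=2):
    """Treat two pieces of media as matching if their hashes differ
    in at most `threshold` bits (threshold chosen arbitrarily here)."""
    return hamming(dhash(a), dhash(b)) <= threshold

original = [
    [10, 80, 30, 90],
    [200, 40, 120, 60],
    [15, 15, 220, 10],
]
# A lightly altered copy: every pixel brightened a little, so the
# left/right gradients (and hence the hash) are unchanged.
reencoded = [[p + 5 for p in row] for row in original]
# An unrelated image with different gradients.
unrelated = [
    [90, 10, 200, 5],
    [5, 250, 5, 250],
    [240, 10, 10, 240],
]

print(is_match(original, reencoded))  # True: survives the brightness shift
print(is_match(original, unrelated))  # False: gradients differ
```

Real systems hash downscaled frames and index the fingerprints for fast lookup, but the core idea is the same: small edits leave the fingerprint nearly unchanged, so one note can propagate to every matching copy.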
Safety @Safety
X is committed to protecting the real-time global public conversation and safeguarding the platform for all of our users. As part of our crisis response protocol, we are taking additional measures to enforce our X Rules and policies and take action as fast as possible.

As a reminder, abuse, harassment, hate, and threats of violence have no place on X. If you find posts or accounts that you believe violate the X Rules, report them in the app. Our Safety team will review and take any necessary action, including removing content or suspending accounts.

X’s Community Notes complement our Safety team during these critical moments by enabling contributors worldwide to add evidence-based context quickly to potentially misleading posts, helping to combat misinformation in real time, ensuring accountability, and building trust for all our users.

For more information on our X Rules, policies, and range of enforcement options, please refer to our help pages.
help.x.com/rules-and-poli…
help.x.com/rules-and-poli…
help.x.com/using-x/commun…
Safety @Safety
We’re excited to host our 2026 Brand Suitability Webinar - empowering brands with new tools for safety & reach on X. Join us Feb 26, 2pm - 3pm EST: Playbook deep dive + live demo. Register now👇 lnkd.in/eupThgXj
Business @XBusiness

Transparency. Controls. Measurement. We’re excited to host our 2026 Brand Suitability Webinar - empowering brands with new tools for safety & reach on X. Join us Feb 26, 2pm - 3pm EST: Playbook deep dive + live demo.

Safety @Safety
Child safety is our top priority. While X, as a service, is not primarily for children, we take our responsibility very seriously for the young users who are here. We’re continuously improving our tools and policies to keep them safer. Here are some of the ways we’re doing it.

We have strict measures in place to protect minors:
- Users under 13 cannot create accounts
- Minor accounts are defaulted to 'Protected', which allows for more control of who can see their content
- Sensitive media is restricted from minors
- Location sharing is turned off by default
- Advertisers cannot specifically target users under 18 years old
- Where X is legally required to do so, we take a multi-faceted age assurance approach to verify or estimate user age and restrict prohibited content from known minors

What ‘Protected’ means for 13-17 year olds (the default for young people):
- DMs are restricted by default to accounts they follow
- A follow request is sent for them to approve or deny when someone new wants to follow them
- Posts are only visible to approved followers
- Followers cannot repost or repost with comment
- Protected posts do not appear in third-party search engines such as Google; they are only searchable on X by the poster and their followers
- Replies sent to accounts that are not following the minor will not be seen (only followers can see posts from protected accounts)

We work closely with the National Center for Missing & Exploited Children (NCMEC) to report suspected CSAM through their CyberTipline, enabling rapid investigation, takedown, and law enforcement action as necessary.

X is for everyone. Report any issues or concerns via our in-app tools or Help Center.
Safety @Safety
X supports #SaferInternetDay and remains committed to protecting children on the platform. We maintain zero tolerance for child sexual exploitation—including AI-generated content—and enforce strict policies to keep minors safe and ensure a positive experience for everyone. Let’s keep building a better internet together.
Safer Internet Day @safeinternetday

Tomorrow is Safer Internet Day! 🌍 Join schools and organisations around the world in promoting a safer, more positive digital experience for children and young people. Find out how you can get involved and connect with activities near you via your Safer Internet Centre: better-internet-for-kids.europa.eu/en/saferintern… #SaferInternetDay #SID2026 #BIK

Safety @Safety
We’re excited to share that, starting today, advertisers can now measure ad placements on X Profiles using post-bid measurement from @integralads (IAS). Brand safety remains a top priority at X, and we’re proud to make this product enhancement widely available to all X and IAS clients.

Expanding IAS measurement across X’s Profiles further strengthens transparency, advertiser confidence, and measurement consistency across the platform. This launch represents another step forward in delivering industry-leading brand safety performance at scale!
Safety @Safety
We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they had uploaded illegal content.

For more information on our policies, please refer to our help pages for our full X Rules and range of enforcement options.
help.x.com/en/rules-and-p…
help.x.com/en/rules-and-p…
Elon Musk@elonmusk

@cb_doge Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content

Safety @Safety
Starting 10 December 2025, you must be at least 16 years old to use an X account in Australia. It’s not our choice - it's what Australian law requires following amendments to the Online Safety Act 2021 to introduce a social media minimum age. We’re drawing on our existing age assurance processes to make this as smooth, private, and secure as possible.

Here's what this means:

If you're under 16: Unfortunately, you won't be able to use an X account until you are 16 or over. If you think we've got this wrong, or if you wish to reinstate your account once you are 16 or over, you can follow the instructions to verify your age within the account lockout notice you received.

If you're 16 or older: You can follow the in-app instructions to verify your age. We'll send in-app reminders - it's fast, and we delete the data within 30 days.

Creating a new account: If your date of birth shows you're under 16, creation will be blocked. Otherwise, you'll go through the same quick verification steps.

For more information on the Australian social media minimum age law, including how we're handling the required minimum age checks and your privacy, please visit our Help Center page at help.x.com/en/rules-and-p….

Other helpful resources:
- Affected accounts can submit a request to access their account information here: help.x.com/forms/privacy/…
- Report an underage account here: help.x.com/en/forms/safet…
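The gating rules in the post above reduce to a simple date-of-birth check at the enforcement date. As a worked example, here is a minimal Python sketch of that flow; the function names and outcome strings are illustrative placeholders, not X's implementation, and real age assurance involves verification or estimation beyond a stated birthdate.

```python
# Illustrative sketch of the Australian minimum-age gating flow
# described above (simplified; not X's actual system).
from datetime import date

MIN_AGE = 16  # minimum age under the amended Online Safety Act 2021

def age_on(dob, today):
    """Whole years of age on a given date."""
    years = today.year - dob.year
    # Subtract one year if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years

def account_action(dob, today=date(2025, 12, 10)):
    """Gating outcome for an existing Australian account on the
    enforcement date (10 December 2025)."""
    if age_on(dob, today) < MIN_AGE:
        # Locked until the user turns 16 or successfully verifies age.
        return "locked_until_16_or_age_verified"
    # 16+ users are prompted for the in-app age verification steps.
    return "prompt_in_app_age_verification"

print(account_action(date(2011, 6, 1)))  # a 14-year-old is locked out
print(account_action(date(2008, 1, 1)))  # a 17-year-old is asked to verify
```

New-account creation follows the same check: a stated date of birth under 16 blocks signup outright, while older birthdates proceed to verification.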
Safety @Safety
X has zero tolerance for users abusing our platform to facilitate criminal conduct. Pursuant to that policy, X has cooperated with the U.S. Federal Bureau of Investigation (@FBILosAngeles) in connection with an investigation into DDoS attacks and other hacking activities. As part of this effort, X has suspended @NoName05716 for its facilitation of criminal conduct. X will continue to collaborate with law enforcement entities worldwide to protect our users and the integrity of the X platform.
Safety @Safety
Update on X’s Civic Integrity Policy

We believe civic participation starts with access to accurate information, and we’re empowering the community to keep election conversations open and informed.

What’s changed: We’re introducing a new Civic Integrity report flow that will route your submission as a request for a Community Note, allowing people to flag potentially misleading content and add context and credible sources in real time. This update gives the community tools to provide context and credible sources that traditional enforcement tools don’t, and ultimately keeps users better informed.

Learn more about our Civic Integrity Policy and Community Notes 👇
help.x.com/en/rules-and-p…
communitynotes.x.com/guide/en/about…
Safety reposted
Kylie @safety (@kyliem)
I’m proud to share an important update on the work X’s child safety team has done to protect minors on and off the platform. When X is made aware of content depicting or promoting child sexual exploitation, including links to third-party sites where this content can be accessed, the accounts sharing this content are reported to the National Center for Missing & Exploited Children (NCMEC).

As a result of X’s efforts, in 2024, 309 reports made by X to NCMEC led to arrests and subsequent convictions in 10 cases confirmed by law enforcement agencies. And in the first half of 2025, 170 reports led to arrests. X made a total of 686,176 reports in 2024, and approximately 46% of the reports made to NCMEC were submitted without requiring a human moderator at X to review potentially traumatic content.

We remain committed to protecting children on our platform, and we’ll continue to invest in and improve detection of child sexual exploitation through automation.
Safety @Safety
To clarify: this change is not related to any security concern, and only impacts YubiKeys and passkeys - not other 2FA methods (such as authenticator apps).

Security keys enrolled as a 2FA method are currently tied to the twitter[.]com domain. Re-enrolling your security key will associate it with x[.]com, allowing us to retire the Twitter domain.

If this applies to you, you'll be prompted automatically to re-enroll. You can also do this proactively by clicking “Add another key” and re-enrolling your current key at x.com/settings/accou….
Safety@Safety

By November 10, we’re asking all accounts that use a security key as their two factor authentication (2FA) method to re-enroll their key to continue accessing X. You can re-enroll your existing security key, or enroll a new one. A reminder: if you enroll a new security key, any other security keys will stop working (unless also re-enrolled).

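The domain binding behind this re-enrollment comes from WebAuthn: each security key or passkey credential is scoped at creation to a Relying Party ID (a registrable domain), and a credential created for one RP ID is never offered to a different one. A minimal Python sketch of that scoping rule, with illustrative names (not X's code or the browser's actual matching logic):

```python
# Why re-enrollment is needed: WebAuthn credentials are scoped to the
# Relying Party ID (a domain) fixed at enrollment. A credential bound
# to twitter.com cannot be used to sign in on x.com, so each key must
# be re-enrolled under the new domain. Conceptual sketch only.

from dataclasses import dataclass

@dataclass
class Credential:
    rp_id: str       # domain the key was enrolled against
    key_handle: str  # opaque identifier for the stored credential

def usable_for(credential, request_domain):
    """A credential is offered only if the requesting domain equals
    its RP ID or is a subdomain of it (the WebAuthn scoping rule,
    simplified: no public-suffix checks here)."""
    return (request_domain == credential.rp_id
            or request_domain.endswith("." + credential.rp_id))

old_key = Credential(rp_id="twitter.com", key_handle="abc123")
new_key = Credential(rp_id="x.com", key_handle="def456")

print(usable_for(old_key, "x.com"))        # False: old enrollment fails on x.com
print(usable_for(new_key, "x.com"))        # True: re-enrolled key works
print(usable_for(old_key, "twitter.com"))  # True: only valid on the old domain
```

This is also why the change affects only security keys and passkeys: TOTP authenticator apps share a secret that is not bound to any domain, so they keep working unchanged.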
Safety @Safety
After November 10, if you haven’t re-enrolled a security key, your account will be locked until you: re-enroll; choose a different 2FA method; or elect not to use 2FA (but we always recommend you use 2FA to protect your account!).
Safety @Safety
By November 10, we’re asking all accounts that use a security key as their two factor authentication (2FA) method to re-enroll their key to continue accessing X. You can re-enroll your existing security key, or enroll a new one. A reminder: if you enroll a new security key, any other security keys will stop working (unless also re-enrolled).
Safety @Safety
Update on X's Efforts to Combat Terrorism on the Platform

On the anniversary of the October 7 attack, we're sharing an update on X's ongoing work to tackle terrorist activity across our platform. X has zero tolerance for terrorist organizations, violent extremist groups, those responsible for violent attacks, or anyone who supports or promotes their illicit activities.

From the onset of the conflict, we activated our crisis protocol and stood up the company to address the rapidly evolving situation with the highest level of urgency. As a result, we've suspended over 22,500 accounts linked to violent entities in the region, including those affiliated with Hamas. We’ve also proactively identified and acted on more than 3.5 million pieces of content that breached our X Rules, such as Violent Content and Hateful Conduct. This is consistent with our policies and enforcement being built on broad principles of violence and criminal behavior, enabling a comprehensive approach to addressing harmful content.

Automation plays a central role in proactively identifying and suspending violating accounts and content, stopping terrorists and their propaganda at the gates. The vast majority of suspensions for terrorism promotion come from a mix of automated detection and purpose-built internal proprietary tools. In fact, more than 90% of our actions against designated terrorist entities are proactive. We're dedicated to continuing to invest in advanced technologies to better detect and remove terrorist and violent extremist content before it can harm users.

We also maintain a highly trained team for countering terrorism on X. This group includes experts in policy, counterterrorism, law enforcement, legal matters, product development, and engineering.

Anyone, whether they have an X account or not, can report suspected policy violations through our tools.