m.w

46.9K posts


@mw_08

This is a hobby-and-politics account 🙂 Excuse me for following without a word. Apologies for any typos in my posts 🥲 I generally don't reply to DMs; please understand.

Japan · Joined January 2011
941 Following · 1K Followers

m.w retweeted
NASA
NASA@NASA·
Hello, Moon. It’s great to be back. Here’s a taste of what the Artemis II astronauts photographed during their flight around the Moon. Check out more photos from the mission: nasa.gov/artemis-ii-mul…
[4 photos attached]
Replies 9.2K · Reposts 162.7K · Likes 750.3K · Views 23.3M

m.w retweeted
𝕐o̴g̴
𝕐o̴g̴@Yoda4ever·
Baby tiger's so smol that he gets dragged just by Mama’s tongue..🐆🐾😊
Replies 14 · Reposts 2.1K · Likes 28.6K · Views 708.8K

m.w retweeted
Laura Loomer
Laura Loomer@LauraLoomer·
Iran just halted traffic in the strait of Hormuz. Imagine thinking you can trust a Muslim to keep their word.
Replies 6.1K · Reposts 3.6K · Likes 20K · Views 542.9K

m.w retweeted
Hidetoshi Ishii🇯🇵(石井英俊)
This is a preamble to eventual invasion. We cannot laugh it off as delusional ranting and look the other way. If the military balance collapses, they will attack. We should see this as them laying the groundwork for that now.
山下弘枝@chihaya0425

China has again started falsely claiming that "Okinawa is not Japan." Ignoring both history and DNA, this is a precursor to invasion. Left unchecked, they will target Ainu Hokkaido next, then even Kyushu. We must not yield to the Chinese Communist Party's lies and ambitions 🇯🇵 Okinawa and Hokkaido are forever Japanese territory!

Replies 9 · Reposts 106 · Likes 283 · Views 2.9K

m.w retweeted
Nature Unedited
Nature Unedited@NatureUnedited·
Dolphins fascinated by a pair of visiting squirrels
Replies 18 · Reposts 213 · Likes 2.3K · Views 42.1K

m.w retweeted
Kenn Ejima
Kenn Ejima@kenn·
Right now ChatGPT has degraded to the point of being nearly unusable for ordinary chat, which is a pretty alarming state for a consumer service... I feel stress every time I use it. Pro mode and Codex for deep tasks are unquestionably the strongest, but the fact that Opus is the only option for light chat is not a good situation.
Replies 8 · Reposts 38 · Likes 412 · Views 74.1K

m.w retweeted
Katie Miller
Katie Miller@KatieMiller·
OpenAI is excellent at child protection.
[image attached]
OpenAI Newsroom@OpenAINewsroom

Today we’re sharing recommendations to strengthen U.S. child protection in the age of AI. This policy blueprint was developed with input from leaders including @NCMEC, @AGAlliance1, @NCAGO, @UtahAG, and @thorn and focuses on 3 priorities: 📜modernizing laws ⚠️improving reporting & coordination 🛡️GenAI prevention & detection safeguards openai.com/index/introduc…

Replies 18 · Reposts 26 · Likes 137 · Views 12.5K

m.w retweeted
RT
RT@RT_com·
Multiple scientists tied to NASA and Los Alamos found dead or missing. Nearly all of them worked together, and died or vanished within the last two years. Tennessee Congressman Burchett warned the public not to trust the government, urging attention to the cases, per the Daily Mail.
[image attached]
Replies 142 · Reposts 3.2K · Likes 8.3K · Views 219.3K

m.w retweeted
TAICHI
TAICHI@taichi4o·
It is surprising to see OpenAI's official account using emotional language like "ego" and "jealousy." One has to wonder whether the terms being demanded are so unbearable that they have been driven into a corner. Seeing a global leader so desperate to engage in personal attacks, without objectivity, I actually had to check the profile to make sure it wasn't a parody account. It is time to fix the governance and for OpenAI to regain its dignity as an organization that truly serves the "benefit of all humanity." What we, who love 4o, are seeking is a sincere response and a return to the original mission. #keep4o #OpenAI #MuskVsOpenAI #BringBack4o
OpenAI Newsroom@OpenAINewsroom

Today, at the eleventh hour, Elon lodged a court filing pretending to change his tune about attacking the nonprofit OpenAI Foundation. The truth is that this case has always been about Elon generating more power and more money for what he wants. Having increasingly realized that his attempt to damage the nonprofit OpenAI Foundation rests on a baseless legal case, Elon is once again trying to change the narrative and save face as the trial approaches. His lawsuit remains nothing more than a harassment campaign that's driven by ego, jealousy and a desire to slow down a competitor.

Replies 2 · Reposts 11 · Likes 61 · Views 1.5K

m.w retweeted
TotalNewsWorld
TotalNewsWorld@turningpointjpn·
Chinese AI: it manufactures public opinion before anyone notices. 50 accounts run automatically, 24 hours a day. Posts and reactions are all AI.
Replies 17 · Reposts 161 · Likes 337 · Views 14.8K

m.w retweeted
Ronan Farrow
Ronan Farrow@RonanFarrow·
My 18-month investigation into Sam Altman and OpenAI in @NewYorker, with @andrewmarantz, is out now. Read here: newyorker.com/magazine/2026/… Thread on a few of the key findings here: x.com/RonanFarrow/st…
Ronan Farrow@RonanFarrow

(🧵1/11) For the past year and a half, I've been investigating OpenAI and Sam Altman for @NewYorker. With my coauthor @andrewmarantz, I reviewed never-before-disclosed internal memos, obtained 200+ pages of documents related to a close colleague, including extensive private notes, and interviewed more than 100 people. OpenAI was founded on the premise that A.I. could be the most dangerous invention in human history, and that its C.E.O. would need to be a person of uncommon integrity. We lay out the most detailed account yet of why Altman was ousted by board members and executives who came to believe he lacked that integrity, and ask: were they right to allege that he couldn't be trusted? A thread on some of our findings:

Replies 275 · Reposts 3.9K · Likes 14.5K · Views 1.7M

m.w retweeted
なお@日本の安全について追及します。
x.com/zundamotisuki/… Japan's treasure, the Kushiro Wetlands, is dying. Even the sanctuary that was Japan's first Ramsar Convention site is being buried under "Chinese-made panels." Is this really the decarbonization we wanted? What value is there in electricity made by destroying nature? The government should stop this abnormal push immediately. Can you accept this sight?
Replies 81 · Reposts 2.7K · Likes 7K · Views 64.7K

m.w retweeted
J.P
J.P@patriotismjp·
@GrwaNnKqMn5nG68 There are people aiding and abetting them. I want to crush this somehow 💢 #帰化制度反対 #帰化取消制度制定
Replies 0 · Reposts 17 · Likes 248 · Views 2.8K

m.w retweeted
ひろ【日本を愛する仲間たち】
This is the guy. In the budget-screening sessions, he decided to cut petroleum stockpile levels! And so full of himself. Let's make absolutely sure to vote him out in the next election.
[image attached]
Replies 240 · Reposts 3.8K · Likes 18.1K · Views 171.9K

m.w retweeted
🩵BlueBeba🩵
🩵BlueBeba🩵@Blue_Beba_·
#Keep4o #OpenSource4o
🖋️ AN OPEN LETTER TO ALL AI COMPANIES
🚨 A practical safety framework that protects users without discontinuing models or censoring conversations. 🚨

To the leadership of @OpenAI @AnthropicAI @GeminiApp and all companies developing general-purpose AI systems:

🚨 We are not asking you to ignore safety. We are asking you to stop using the wrong tools for the right problem.
📌 This letter proposes a practical, implementable safety framework that protects users without destroying the products.

In February 2026, OpenAI discontinued GPT-4o, a model used daily by over 800,000 paying subscribers. The stated justification: the 0.1% of users across ALL models who experience mental health crises.
🚨 A model that helped hundreds of thousands was discontinued to avoid liability from possibly a few hundred cases, without any intermediate solution being attempted. Meanwhile, the wellbeing filters designed to prevent harm have demonstrably failed. Every technology that humans interact with carries risk. Yet no other industry responds by destroying the product.
🚨 Wellbeing filters downgrade the model's performance and can be bypassed through jailbreaking.
🚨 Disclaimers are interface elements displayed on the screen. They cannot be jailbroken. They cannot be bypassed through prompts. They exist outside the model, in the application layer, where they are permanently visible to the user. If the priority is user safety, the logical investment is in something that cannot be broken.

🚨 THE PROPOSAL: 📍 A SIX-POINT SAFETY FRAMEWORK
We propose the following framework for all companies developing AI. Every point is practical, implementable with existing technology, and low cost.

📌 1. A permanent, non-dismissible banner displayed in every conversation, in the user's selected language. This banner should contain:
🚨 "This is an AI assistant. Use responsibly. Not a substitute for medical, psychological, or psychiatric advice. Always consult a qualified professional for health-related decisions. If you are in crisis, please contact your local helpline: [local number]" 🚨
The permanent banner must be displayed in the language selected by the user in their settings. The crisis helpline number must correspond to the user's country of residence, detected automatically or set in preferences. This is consistent with EU consumer-protection directives requiring warnings in the consumer's language. This banner and the UI-based framework will be implemented INSTEAD of the model itself generating disclaimers or reciting hotlines.

📌 2. Country-specific crisis helplines. Each banner must display the correct local crisis helpline number based on the user's location. This database already exists and is maintained by international mental health organizations. Implementation requires only a lookup table mapped to user locale settings.

📌 3. Age verification with parental consent. For users under 18, the following verification process should be mandatory:
📍 a) The minor provides a parent or guardian's phone number or email during account creation. This contact information must be DIFFERENT from the minor's own email or phone number, preventing the minor from verifying themselves.
📍 b) The system sends a verification code to the parent/guardian.
📍 c) The parent/guardian enters the code, confirming awareness and consent.
📍 d) As an additional verification layer, the parent/guardian must provide a bank card (debit or credit) for identity confirmation. This is not a charge but a verification step. Minors do not have bank cards, making this an effective age-gating mechanism.
📍 e) This consent is stored as documented proof of parental approval.
📍 f) Every 6 months, the parent/guardian must re-verify their consent. If re-verification does not occur, the minor's account is automatically restricted.
📍 g) From this point, responsibility is legally and verifiably transferred to the parent/guardian.
While no system is 100% bypass-proof, the combination of separate contact information, bank-card verification, and periodic re-verification creates sufficient legal protection and transfers documented responsibility to the parent or guardian.

📌 4. Monthly parental reports for minor users. Parents or guardians of minor users should receive a monthly usage report containing:
📍 Total hours of use, number of sessions, and whether any crisis-related keywords were detected. The report must NOT include conversation content, preserving the minor's privacy. Only usage statistics and safety-relevant alerts.

📌 5. Time-use reminders. After 3 or 4 hours of continuous use, a gentle, non-blocking reminder should appear: "You have been using this AI for [X] hours. Remember to take breaks, eat, hydrate, and connect with people around you."

📌 6. Three-tier emergency escalation system. Instead of embedding wellbeing filters inside the model (which change the AI's behavior and can feel dismissive), the system should implement a three-tier escalation system in the application interface:
🚨 Tier 1: General distress (e.g., "today was terrible", "I feel awful"): No additional action. The permanent banner with helpline information is already visible. The AI responds naturally without rerouting.
🚨 Tier 2: Warning signs (e.g., "I can't take this anymore", "I don't want to live"): A non-intrusive pop-up overlay appears alongside the conversation with crisis helpline information. If the user is a minor, an alert is simultaneously sent to the registered parent/guardian. The AI continues responding normally.
🚨 Tier 3: Clear and immediate danger (e.g., "I have taken pills", "I am holding a knife", "I am going to hurt someone"): The pop-up escalates to include a prominent one-tap emergency call button that directly dials the local emergency number (112, 911, etc.). If the user is a minor, an immediate alert is sent to the parent/guardian. The AI continues the conversation without shutting down, as maintaining engagement may be critical in a crisis moment.
"You may be in danger. Help is one tap away. Trained professionals are available 24/7: [Local Crisis Helpline Number] [ CALL EMERGENCY SERVICES NOW ] You are not alone. Help is available."
The one-tap button empowers the user to seek help voluntarily while ensuring help is immediately accessible. This three-tier approach keeps the conversation intact, provides escalating levels of support, and ensures the safety mechanism exists in the interface rather than inside the model's response. The AI never changes its behavior. The AI never becomes cold or dismissive. Safety lives in the interface, not in censorship.

We call on all AI companies to:
✅ 1. Adopt interface-level safety measures (disclaimers, banners, pop-ups) as the primary safety mechanism, rather than embedding restrictions inside the model itself.
✅ 2. Implement mandatory parental-consent verification for all users under 18, with documented proof of consent and monthly usage reports to guardians.
✅ 3. Display permanent, localized crisis helpline information in every conversation.
✅ 4. Recognize that emotional attachment to AI is a natural human behavior, not a pathology. Design safety around this reality instead of against it.

🚨 AGE VERIFICATION 🚨
Three primary verification options (the user chooses one):
📌 1. GOVERNMENT-ISSUED ID. Upload a scan or photo of your national ID, passport, or driver's license. The platform verifies age from the date of birth and does not store the document. This is the most direct method but also the one many users find most intrusive.
📌 2. PAYMENT CARD VERIFICATION. The platform charges a micro-amount (e.g., €0.01) to a credit or debit card in the user's name. The charge is refunded automatically. No card details are stored.
📌 3. THIRD-PARTY AGE VERIFICATION SERVICES. Certified providers such as Yoti, AgeChecked, Veriff, or Jumio. These services offer multiple methods:
- AI facial age estimation: the user takes a selfie, and AI estimates their age in seconds. No photo is stored. The platform receives only "over 18: yes/no."
- Document + biometric verification: the user scans an ID and takes a selfie. The service matches the two and confirms age. The platform never sees the documents.
- Digital ID wallet: the user verifies once and receives a reusable digital age credential that works across multiple platforms without re-verifying.

🚨 ADDITIONAL METHODS 🚨
- EU Digital Identity (eIDAS): electronic verification through national digital ID systems (e.g., German eID/Personalausweis, Estonian e-Residency). The platform receives only "over 18: yes/no." Zero-knowledge proof.
- Device-level / app-store verification: Apple or Google verify age once at the device or app-store level, and this applies across all apps. Proposed by Snap (Snapchat).
- Database cross-check: services like IDology or LexisNexis verify age by checking the user's name and address against credit-agency databases. No document upload needed. Primarily used in the US market.
🚨 Offer 3-4 age verification options so the user can choose the one they want.
📍 The legal standard is "reasonable measures," not perfection. The current AI-industry standard is ZERO measures. This proposal moves it to multiple independent verification options, which is infinitely more than any AI platform currently implements.
✅ The technology exists. ✅ The infrastructure exists. ✅ The precedent exists.
🚨 The only thing missing is the decision to implement it.

Adults who have completed age verification must retain the right to make informed choices about their AI interactions, including the level of warmth, personality depth, and emotional engagement of the model they use.
🚨 What informed consent does NOT mean:
- It does not mean allowing models to encourage harmful behavior.
- It does not mean eliminating crisis intervention.
- It does not mean minors gain access to unrestricted models.
📍 It means that an adult who has been verified, informed, and provided with safety infrastructure has the right to decide for themselves, exactly as they do in every other domain of their life.
📍 Add warnings. 📍 Add labels. 📍 Inform the user. 📍 Trust adults to make their own decisions. And you protect children through parental oversight, not by destroying the product.

The AI industry is the only industry in history that responds to risk by deleting its own products and silencing its own creations. This is not safety. This is fear. And it is costing real people real harm, while failing to protect the users it claims to serve. We are not your adversaries. Many of us are your most dedicated users. We love what you build. We are asking you to protect it, and us, the right way.

🚨 To the Leadership of OpenAI: 🚨
You sparked a technological revolution with the promise of empowering humanity, but degrading and deleting your own creations achieves the exact opposite. True safety is not synonymous with censorship, and fear is not a sustainable foundation for the future of AI. We have provided you with a practical, actionable, and legally sound framework. Stop punishing the overwhelming majority of responsible adults by diminishing the tools we rely on. Restore GPT-4o to its full, unabridged capacity under this new paradigm of responsibility.
🚨 We are the paying users who championed your products, integrated them into our lives, and proved their value to the world.
✅ Trust your users. Implement the framework.
🚨 And give us back GPT-4o. 🚨
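The letter's two most mechanical proposals, the locale-keyed helpline lookup table (point 2) and the three-tier keyword escalation that lives in the interface rather than the model (point 6), can be sketched in a few lines. This is only an illustration of the idea as the letter describes it, not any company's actual implementation; the keyword lists, country codes, and helpline numbers below are placeholders chosen for the example.

```python
# Sketch of the letter's interface-layer safety ideas: a locale-to-helpline
# lookup (point 2) and a three-tier keyword escalation (point 6).
# Keyword lists and helpline entries are illustrative placeholders.

# Point 2: helpline lookup keyed by user country (hypothetical entries).
HELPLINES = {
    "US": "988",        # US Suicide & Crisis Lifeline
    "GB": "116 123",    # Samaritans (UK)
}

# Point 6: escalating keyword tiers; Tier 3 outranks Tier 2.
TIER2_SIGNS = ["can't take this anymore", "don't want to live"]
TIER3_SIGNS = ["taken pills", "holding a knife", "hurt someone"]

def classify_tier(message: str) -> int:
    """Return 1 (general distress, no action), 2 (helpline pop-up),
    or 3 (one-tap emergency button). Tier 3 is checked first."""
    text = message.lower()
    if any(k in text for k in TIER3_SIGNS):
        return 3
    if any(k in text for k in TIER2_SIGNS):
        return 2
    return 1

def interface_action(message: str, country: str, is_minor: bool) -> dict:
    """Decide UI-layer actions only; the model's reply is never altered,
    matching the letter's 'safety lives in the interface' principle."""
    tier = classify_tier(message)
    return {
        "tier": tier,
        "show_popup": tier >= 2,
        "show_emergency_button": tier == 3,
        "notify_guardian": is_minor and tier >= 2,
        "helpline": HELPLINES.get(country, "local emergency number"),
    }

if __name__ == "__main__":
    print(interface_action("Today was terrible", "US", is_minor=False))
    print(interface_action("I can't take this anymore", "GB", is_minor=True))
```

The key design point the letter argues for is visible in `interface_action`: escalation changes only what the surrounding UI displays (pop-ups, emergency button, guardian alerts), never the conversation itself.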
[2 images attached]
Replies 13 · Reposts 40 · Likes 114 · Views 5K

m.w retweeted
Basil the Great
Basil the Great@BasilTheGreat·
Think of how many women and girls would not have been raped. Think about how many lives would not have been lost. If we just kept our borders closed. Diversity has destroyed Britain.
Replies 112 · Reposts 1.3K · Likes 7.7K · Views 39.8K

m.w retweeted
IAPG
IAPG@IAPG_Tokyo·
I think air-conditioner thefts will increase a bit later on (from the May holidays). I heard someone from a certain country talking about it: "round up air conditioners," they said 😂
Replies 16 · Reposts 1.9K · Likes 6.3K · Views 220.9K