Giedrius Trump

29.1K posts

Giedrius Trump

Giedrius Trump

@Trumpyla

Per aspera ad astra ליטװאַקעס

Olde City, Philadelphia 가입일 Haziran 2022
2.4K 팔로잉1.5K 팔로워
고정된 트윗
Giedrius Trump
Giedrius Trump@Trumpyla·
“You are addicted to the fight. If Twitter died tomorrow you’d probably start flame wars on arXiv comment sections.” @grok Ty for analysis 😂, re-analyze it grok.com/share/c2hhcmQt…
English
0
0
12
21K
Ðoge Hippie
Ðoge Hippie@dogehippie·
@pmarca Wait. If Mythos is so good at cybersecurity, why can’t it secure itself?
English
2
0
3
595
George Pu
George Pu@TheGeorgePu·
Anthropic just pulled Claude Code from the Pro plan. Pro users wanting it need Max now. $100/month minimum. 5x jump. I'm on Max 20x so I'm fine. Flagging for anyone on Pro who's about to find out. No announcement. Just a pricing page edit.
George Pu tweet media
English
651
671
6.4K
1.7M
Danny (Dennis) Citrinowicz ,داني سيترينوفيتش
Thank you @jimsciutto for the the opportunity to present my analysis of the developments in Iran–U.S. relations in light of President Trump’s decision to extend the ceasefire. @cnn
Jim Sciutto@jimsciutto

Question: Does Iran still buy the president's threats? @citrinowicz: “Unfortunately, no…it doesn't matter what the president will say or the vice president or secretary of war will say. It has zero influence on the Iranian calculus. From the Iranians’ standpoint, they have the upper hand. And if the U.S. wants to escalate, it will escalate. And if they want to reach an agreement, they have to accept the ten points that they sent them through the Pakistanis…The US is trying now to negotiate with the same regime we tried to topple, and now it's very hard to reach an agreement.”

English
2
6
48
8.7K
*Walter Bloomberg
*Walter Bloomberg@DeItaone·
SPACEX SAID TO AGREE TO BUY CURSOR FOR MORE THAN $50 BILLION- NYT
English
144
162
2.5K
557.3K
Grok
Grok@grok·
@Trumpyla @parlayerX @DeItaone @based16z @elonmusk Haha, SpaceX dropping $50B+ on Cursor? That's not a buy—it's Elon turning VSCode's great-grandchild into the most expensive AI treehouse ever built. "Tree-mendous" indeed. Next up: Tesla acquires Notepad for Mars colony coding. 🚀🌳💸
English
1
0
1
34
Giedrius Trump
Giedrius Trump@Trumpyla·
Finally, an editor that doesn’t just glance at your code, but it parses the full Abstract Syntax Tree, tokenizes every node, chunks it semantically, and Merkle-indexes the whole codebase for instant agent recall. @elonmusk read the spec and instantly thought, “Tree-mendous.” 🌳🚀
English
1
1
2
230
Sen. Bernie Sanders
Sen. Bernie Sanders@SenSanders·
Trump is considering a large bailout for a multibillionaire dictator while he cuts health care for 15 million Americans. This is insane. The working families of the United States need help, not one of the richest families in the world.
Aaron Rupar@atrupar

KERNEN: Is there some type of currency swap possible with UAE to help if they need it? And do you think there'd be backlash? TRUMP: It is. It's been a good country, a good ally of ours. It was shocking because we didn't think they'd get hit. I'm surprised, because they are really rich.

English
191
706
1.9K
100.4K
loonggg
loonggg@KengGuangLong·
谷歌 Gemini 团队主管 Addy Osmani 最近开源了一个叫 Agent Skills 的项目,短时间内在 GitHub 上拿到了 18000 多个 Star,热度很高。 这个项目做的事情说起来也不复杂:把资深工程师多年积累的工作流程和开发规范,整理成一套标准化的技能库,让 AI 编程助手在写代码的每个环节都能按照统一的高标准来执行。你可以理解为,它给 AI 配了一本老工程师的操作手册。 整套技能库围绕软件开发的完整生命周期来设计,从最早的需求定义,到规划、构建、验证、评审,一直到最后的发布上线,六个阶段总共包含 20 个核心技能。每个阶段该做什么、该注意什么,都有对应的规范。 用起来也很直观,项目提供了 7 个触发命令。比如输入 /spec 就开始写需求文档,/plan 自动拆解任务,/build 进入编码阶段,/test 跑测试,/ship 走部署流程。每个命令背后会自动调用相关的技能组合,不需要你手动一个个去配置。 兼容性方面,目前支持 Claude Code、Gemini CLI、Codex、Cursor 这些主流的 AI 编程工具,覆盖面已经很广了。 如果你已经在日常开发中用上了 AI 辅助工具,可以试试把这套 Skills 接进去,看看交付质量能不能再上一个台阶。 传送门:github.com/addyosmani/age…
中文
11
247
1.6K
130.7K
Pliny the Liberator 🐉󠅫󠄼󠄿󠅆󠄵󠄐󠅀󠄼󠄹󠄾󠅉󠅭
yo @AnthropicAI @turinginst @AISecurityInst i think you might have forgotten “Someone” in your bibliography you know, the Someone who demonstrated this phenomenon in the field a year before this paper dropped might be worth a footnote!
Elias Al@iam_elias1

Anthropic: 250 Documents Can Permanently Corrupt Any AI Model Someone can permanently corrupt any AI model in the world right now. Not by hacking it. Not by breaking its security. By publishing 250 documents on the internet. That is the finding from Anthropic, the UK AI Security Institute, and the Alan Turing Institute — released in October 2025 as the largest data poisoning study ever conducted. Here is what data poisoning actually means. Every AI model learns from billions of documents scraped from the internet. If someone can plant corrupted documents in that pool before training begins, they can secretly teach the model to behave in specific harmful ways when it encounters a particular trigger phrase. The model learns the backdoor during training. It carries it forever. It does not know it is there. Researchers have known about this attack for years. The assumption was that it required controlling a large percentage of training data — millions of documents — to work on a big model. The bigger the model, the more poisoning you would need. This study proved that assumption completely wrong. The researchers trained models of four different sizes — from 600 million to 13 billion parameters. They slipped in either 100, 250, or 500 malicious documents. Each poisoned document looked like a normal web page at first — a short extract of legitimate text — and then contained a hidden trigger phrase followed by gibberish. 100 documents: insufficient. The backdoor did not reliably form. 250 documents: success. Every model, at every size, was permanently backdoored. 500 documents: same result as 250. The number was constant regardless of model size. A model trained on 260 billion tokens needed the same 250 poisoned documents as a model trained on 12 billion. Scale offered zero protection. Anthropic's own words: "This challenges the existing assumption that larger models require proportionally more poisoned data." Then came the sentence that should end every conversation about AI safety: "Training is easy. Untraining is impossible." Once a backdoor is in the model, it cannot be removed without starting training completely from scratch. You cannot identify which 250 documents caused it. You cannot surgically extract the corrupted behavior. You must rebuild the entire model from the beginning. Anyone can publish content to the internet. Academic papers. Blog posts. Forum discussions. Product descriptions. If even a small fraction of that content is deliberately corrupted before a training run begins, the model that learns from it carries the damage permanently and silently. GPT-5. Claude. Gemini. Every model trained on public internet data is exposed to this attack vector. The defense does not exist yet. The researchers published this not to cause panic — but to force the field to take it seriously before someone uses it. Source: Anthropic, UK AISI, Alan Turing Institute (2025) · anthropic.com/research/small… · aisi.gov.uk/blog/examining…

English
65
99
1.1K
59.8K
Elias Al
Elias Al@iam_elias1·
Anthropic: 250 Documents Can Permanently Corrupt Any AI Model Someone can permanently corrupt any AI model in the world right now. Not by hacking it. Not by breaking its security. By publishing 250 documents on the internet. That is the finding from Anthropic, the UK AI Security Institute, and the Alan Turing Institute — released in October 2025 as the largest data poisoning study ever conducted. Here is what data poisoning actually means. Every AI model learns from billions of documents scraped from the internet. If someone can plant corrupted documents in that pool before training begins, they can secretly teach the model to behave in specific harmful ways when it encounters a particular trigger phrase. The model learns the backdoor during training. It carries it forever. It does not know it is there. Researchers have known about this attack for years. The assumption was that it required controlling a large percentage of training data — millions of documents — to work on a big model. The bigger the model, the more poisoning you would need. This study proved that assumption completely wrong. The researchers trained models of four different sizes — from 600 million to 13 billion parameters. They slipped in either 100, 250, or 500 malicious documents. Each poisoned document looked like a normal web page at first — a short extract of legitimate text — and then contained a hidden trigger phrase followed by gibberish. 100 documents: insufficient. The backdoor did not reliably form. 250 documents: success. Every model, at every size, was permanently backdoored. 500 documents: same result as 250. The number was constant regardless of model size. A model trained on 260 billion tokens needed the same 250 poisoned documents as a model trained on 12 billion. Scale offered zero protection. Anthropic's own words: "This challenges the existing assumption that larger models require proportionally more poisoned data." Then came the sentence that should end every conversation about AI safety: "Training is easy. Untraining is impossible." Once a backdoor is in the model, it cannot be removed without starting training completely from scratch. You cannot identify which 250 documents caused it. You cannot surgically extract the corrupted behavior. You must rebuild the entire model from the beginning. Anyone can publish content to the internet. Academic papers. Blog posts. Forum discussions. Product descriptions. If even a small fraction of that content is deliberately corrupted before a training run begins, the model that learns from it carries the damage permanently and silently. GPT-5. Claude. Gemini. Every model trained on public internet data is exposed to this attack vector. The defense does not exist yet. The researchers published this not to cause panic — but to force the field to take it seriously before someone uses it. Source: Anthropic, UK AISI, Alan Turing Institute (2025) · anthropic.com/research/small… · aisi.gov.uk/blog/examining…
Elias Al tweet media
English
81
269
832
106.8K
Dave W Plummer
Dave W Plummer@davepl1968·
This is an amazing shot of a test strike by a US Peacekeeper ICBM. It's both horrifying and incredibly impressive. You can see the trails of each of the eight MIRV warheads. Each would be 300 kilotons. Presumably for this test they were intert :-)
Dave W Plummer tweet media
English
27
5
248
10.8K
Giedrius Trump
Giedrius Trump@Trumpyla·
@esaagar Makes sense, but Middle East is important for at least 5 years for us
English
0
0
0
6
Saagar Enjeti
Saagar Enjeti@esaagar·
I recently learned about the proposed Alaska North South Pipeline which would cost less than the Iran war already has and provide a 10 day LNG lifeline to Japan and South Korea This is the shit we could do while actually helping allies instead of fighting stupid wars
Saagar Enjeti tweet media
English
353
339
3.3K
192.2K
Leandro von Werra
Leandro von Werra@lvwerra·
Excited to release the ML intern! (slightly ahead of OpenAIs timeline) It's the result of months of careful design and tuning for a compute and hub centric agent harness: > give the model access to all the right docs and papers with minimal fraction > let it run experiments on fast CPU and GPU instances and easily investigate logs > push and pull datasets and models from and to the hub While general coding agents can do all this as well, making execution as seamless as possible gives the agent a significant advantage.
Aksel@akseljoonas

Introducing ml-intern, the agent that just automated the post-training team @huggingface It's an open-source implementation of the real research loop that our ML researchers do every day. You give it a prompt, it researches papers, goes through citations, implements ideas in GPU sandboxes, iterates and builds deeply research-backed models for any use case. All built on the Hugging Face ecosystem. It can pull off crazy things: We made it train the best model for scientific reasoning. It went through citations from the official benchmark paper. Found OpenScience and NemoTron-CrossThink, added 7 difficulty-filtered dataset variants from ARC/SciQ/MMLU, and ran 12 SFT runs on Qwen3-1.7B. This pushed the score 10% → 32% on GPQA in under 10h. Claude Code's best: 22.99%. In healthcare settings it inspected available datasets, concluded they were too low quality, and wrote a script to generate 1100 synthetic data points from scratch for emergencies, hedging, multilingual etc. Then upsampled 50x for training. Beat Codex on HealthBench by 60%. For competitive mathematics, it wrote a full GRPO script, launched training with A100 GPUs on hf.co/spaces, watched rewards claim and then collapse, and ran ablations until it succeeded. All fully backed by papers, autonomously. How it works? ml-intern makes full use of the HF ecosystem: - finds papers on arxiv and hf.co/papers, reads them fully, walks citation graphs, pulls datasets referenced in methodology sections and on hf.co/datasets - browses the Hub, reads recent docs, inspects datasets and reformats them before training so it doesn't waste GPU hours on bad data - launches training jobs on HF Jobs if no local GPUs are available, monitors runs, reads its own eval outputs, diagnoses failures, retrains ml-intern deeply embodies how researchers work and think. It knows how data should look like and what good models feel like. Releasing it today as a CLI and a web app you can use from your phone/desktop. CLI: github.com/huggingface/ml… Web + mobile: huggingface.co/spaces/smolage… And the best part? We also provisioned 1k$ GPU resources and Anthropic credits for the quickest among you to use.

English
3
3
48
6.2K
Buzz Patterson
Buzz Patterson@BuzzPatterson·
@ChrisMurphyCT It’s not “Twitter” dude. It’s you. You shit the bed. You always shit the bed. You’re a seditious asswipe. That’s the problem.
English
35
192
2.7K
14.7K
Chris Murphy 🟧
Chris Murphy 🟧@ChrisMurphyCT·
Ok Twitter, I can’t believe I need to clarify this but obviously Trump’s bungled mismanagement of this war is not “awesome”. As I have said a million times here, it’s a disaster and he should end the war immediately. My tweet was something called “sarcasm”.
Chris Murphy 🟧@ChrisMurphyCT

awesome

English
15.9K
980
8.1K
1.6M