Seth DeGroot

7.1K posts

@SethDeGroot

Midwest Catholic. Husband to @MamaDeGroot. 4x father. Recovering founder. Partner @Gravityfund. NFA DYOR.

Metaverse · Joined June 2010
231 Following · 6.5K Followers
Pinned Tweet
Seth DeGroot @SethDeGroot
Well, perhaps this shall reach Andilar
Although I know not how it can
For once again, he's hurled his wind
Upon the silver prow
But if it should, my words are these:
"Arise, young men, fine ships to build
And set them north for Valinor
'Neath standards proud as fire"
0 replies · 0 reposts · 9 likes · 5.2K views
Seth DeGroot @SethDeGroot
@politicalmath Customers running the official LiteLLM AI Gateway Docker image (i.e. enterprises) were not impacted
0 replies · 0 reposts · 4 likes · 195 views
Seth DeGroot @SethDeGroot
We’re investors in @LiteLLM. Yesterday they were hit by a highly sophisticated, multi-vendor supply chain attack. A 10-person team: • Contained it within hours • Engaged Mandiant for forensics • Rotated all credentials same-day • Migrated to OIDC Trusted Publishing. Docker/proxy users were never affected. Startups are forged in the fire: you don't build a battle-hardened CI/CD pipeline without walking through it. Watching the team handle the crisis was a masterclass in incident response. This was their Cloudflare moment. Radical transparency + relentless execution. Proud to be on your cap table. 🤝
Ishaan @ishaan_jaff

Earlier today the @LiteLLM team was made aware of a supply chain attack impacting PyPI packages litellm==1.82.7 and litellm==1.82.8. The packages have been removed from PyPI. We confirmed that the compromise came from a Trivy dependency in our CI/CD docs.litellm.ai/blog/security-…

0 replies · 1 repost · 6 likes · 144 views
Seth DeGroot retweeted
LiteLLM (YC W23) @LiteLLM
[INCIDENT UPDATES] - Compromised LiteLLM packages have been deleted. - Proxy Docker image users were not impacted. - All dependencies are pinned in requirements.txt. - The compromise came from a Trivy security-scan dependency; investigating with Google’s Mandiant.
16 replies · 85 reposts · 525 likes · 71.5K views
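The "all dependencies are pinned" claim above is mechanically checkable. A minimal sketch (a hypothetical helper, not LiteLLM's actual tooling) that flags any requirements.txt line lacking an exact `==` version pin:

```python
def unpinned(requirements_text):
    """Return requirement lines that lack an exact '==' version pin.

    Hypothetical lint sketch: ignores blank lines, comments, and pip
    options/flags (anything starting with '-', e.g. '--hash', '-r').
    """
    bad = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", "-")):
            continue  # skip comments, blanks, and flags
        if "==" not in line:
            bad.append(line)  # range or unversioned spec: not pinned
    return bad
```

Running this in CI against the lock file would fail the build the moment a floating version range slips in.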
Seth DeGroot retweeted
alphaXiv @askalphaxiv
Introducing MCP for arXiv. Let your research agents stand on the shoulders of giants: fast multi-turn retrieval, keyword search, and embedding-search tools across millions of arXiv papers 🚀
77 replies · 403 reposts · 3.1K likes · 262K views
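For context on how such tools are exposed: MCP servers advertise each tool as a name, description, and JSON Schema for its inputs. A minimal sketch of what a keyword-search tool declaration could look like — the tool name and parameters here are illustrative, not alphaXiv's actual interface:

```python
# Hypothetical MCP-style tool declaration for keyword search over arXiv.
# The name / description / inputSchema layout follows the Model Context
# Protocol's tool schema; the specifics are illustrative assumptions.
ARXIV_SEARCH_TOOL = {
    "name": "arxiv_keyword_search",
    "description": "Search arXiv papers by keyword and return top matches.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Keyword query."},
            "max_results": {"type": "integer", "default": 10},
        },
        "required": ["query"],
    },
}

def validate_call(tool, arguments):
    """Reject a tool call missing a required argument before dispatching."""
    missing = [k for k in tool["inputSchema"]["required"] if k not in arguments]
    if missing:
        raise ValueError(f"missing required arguments: {missing}")
    return True
```

Validating arguments against the declared schema before dispatch is what lets an agent runtime call unfamiliar tools safely.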
Seth DeGroot @SethDeGroot
Blizzard camp 2026
0 replies · 0 reposts · 8 likes · 81 views
Seth DeGroot retweeted
Bitflow @bitflow
HODLMM: Concentrated Liquidity Comes to Bitcoin. The first institutional-grade market-maker tooling purpose-built with Bitcoin finality to support the expanding on-chain BTC economy. HODLMM is LIVE. 🧵👇
76 replies · 106 reposts · 237 likes · 31.4K views
Seth DeGroot @SethDeGroot
We submitted to the CAISI RFI from the perspective most submissions won’t cover: the proxy layer between agents and LLMs. Model-level prompt injection defenses are probabilistic. Infrastructure-level controls like budget caps, PII redaction and tool restrictions are deterministic. NIST guidelines should reflect where the reliable controls actually live.
Brian Roemmele @BrianRoemmele

NIST Launches AI Agent Standards Initiative: Paving the Way for Secure and Interoperable AI Agents

Very proud to have been an advisor! Today the National Institute of Standards and Technology (NIST) unveiled the AI Agent Standards Initiative (CAISI). This collaborative effort aims to establish a trusted foundation for AI agents: autonomous systems capable of performing actions on behalf of users. By promoting industry-led standards, open protocols, and cutting-edge research, CAISI seeks to ensure that these advanced AI technologies are secure, interoperable, and widely adopted, while bolstering U.S. leadership in the global AI landscape.

The Vision Behind CAISI

AI agents represent the next frontier in technology, evolving from passive tools like chatbots to proactive entities that can manage tasks, interact with other systems, and make decisions independently. However, this evolution brings challenges, including security risks, interoperability issues, and the need for robust identity management. NIST's initiative addresses these head-on, fostering an ecosystem where AI agents can operate seamlessly across digital platforms while prioritizing user trust and safety.

The initiative is built on three strategic pillars:

1. Facilitating Industry-Led Standards: NIST will host technical convenings, perform gap analyses, and develop voluntary guidelines to guide standardization efforts. In partnership with the interagency, including the National Science Foundation (NSF), NIST will enhance stakeholder engagement and maintain U.S. influence in international standards bodies.

2. Fostering Community-Led Protocols: By engaging with the broader AI ecosystem, NIST aims to identify and eliminate barriers to interoperable agent protocols. The NSF will support open-source development through its Pathways to Enable Open Source Ecosystems program, encouraging collaborative innovation.

3. Investing in Research: NIST is committing to foundational research on agent authentication and identity infrastructure to enable secure interactions between humans, agents, and multi-agent systems. This includes developing advanced security evaluations to guide protocol creation and help consumers make informed comparisons.

Ongoing activities include a Request for Information (RFI) on AI Agent Security to gather ecosystem insights on threats, mitigations, and metrics; a Draft Concept Paper on Accelerating the Adoption of Software and AI Agent Identity and Authorization, focusing on enterprise use cases; and upcoming Listening Sessions to identify barriers to AI adoption in key sectors like healthcare, finance, and education.

The initiative has already drawn input from leading experts in the field. I was one of the outside advisors, as an AI pioneer and founder of The Zero-Human Company: a visionary framework for fully autonomous enterprises. Drawing on my decades of experience in AI systems, I provided insights into the practical challenges and opportunities of agentic AI, emphasizing the need for standards that balance innovation with ethical safeguards. I view CAISI as a promising foundation for the agentic era. It's a good start, and it's crucial to have early guidance on these systems to prevent fragmentation and ensure they evolve responsibly. Standards like these will accelerate adoption while mitigating risks, allowing AI agents to truly transform industries without compromising security or interoperability.

NIST is actively inviting participation from the public, industry, and academia to refine the initiative. The deadline: March 9, 2026. Join Listening Sessions and register interest in virtual workshops focused on barriers to AI adoption in healthcare, finance, and education. These opportunities underscore NIST's commitment to a collaborative approach, ensuring that the standards reflect diverse viewpoints and real-world needs.

For more details, visit the official NIST page: nist.gov/caisi/ai-agent….

0 replies · 1 repost · 4 likes · 54 views
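The "deterministic controls at the proxy layer" argument above is concrete enough to sketch: unlike model-level defenses, a budget cap and a tool allowlist either permit a call or they don't. A minimal hypothetical sketch (not LiteLLM's actual API) of such gateway-side enforcement:

```python
class ProxyPolicy:
    """Hypothetical sketch of deterministic gateway controls: a hard
    spend cap and a tool allowlist, checked before any model call."""

    def __init__(self, budget_usd, allowed_tools):
        self.budget_usd = budget_usd
        self.spent_usd = 0.0
        self.allowed_tools = set(allowed_tools)

    def authorize(self, tool, estimated_cost_usd):
        """Return True and record spend only if the call passes both checks."""
        if tool not in self.allowed_tools:
            return False  # tool restriction: deterministic deny
        if self.spent_usd + estimated_cost_usd > self.budget_usd:
            return False  # budget cap would be exceeded
        self.spent_usd += estimated_cost_usd
        return True
```

Because the decision depends only on the request metadata, not on model behavior, a prompt-injected agent cannot talk its way past it.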
Seth DeGroot retweeted
Shoaib @KillerShoaib__
Finally able to finish the RLM from scratch and made the repo public. Here's what I did: 1. Implemented the RLM paper (with some tweaks) from scratch using only litellm 2. Sandboxed the REPL environment via Docker 3. Used memory compaction
12 replies · 38 reposts · 356 likes · 19K views
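On the memory-compaction point: the usual idea is to collapse older transcript messages into a summary once the context grows too large, keeping the recent turns verbatim. A minimal hypothetical sketch (a placeholder summary stands in for an actual model-generated one):

```python
def compact_memory(messages, max_chars=2000, keep_recent=4):
    """Hypothetical memory-compaction sketch: once the transcript grows
    past max_chars, collapse everything but the last keep_recent messages
    into a single summary placeholder. A real implementation would ask a
    model to write the summary instead of using a stub string."""
    total = sum(len(m) for m in messages)
    if total <= max_chars or len(messages) <= keep_recent:
        return messages  # still small enough: keep verbatim
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = f"[summary of {len(older)} earlier messages]"
    return [summary] + recent
```

The trade-off is standard: bounded context cost in exchange for lossy recall of early turns.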
Seth DeGroot @SethDeGroot
@HipCityReg In ~2015 every new Oculus hire was given a copy of Ready Player One. He wasn't wrong, just early.
0 replies · 0 reposts · 1 like · 192 views
Seth DeGroot @SethDeGroot
OpenClaw ushered in the agentic future, but for enterprises it's a nightmare: uncapped loops = 1000x API spend; root access = data-exfiltration risk. We're seeing @LiteLLM become the default firewall for this traffic. If you want to run viral agents on corporate credits without bankrupting the company, you need a governor. Control the loop, don't kill the agent.
Robert Scoble @Scobleizer

THE ULTIMATE CLAWDBOT REPORT. Every single important post here on X about @openclaw. Report and analysis by @blevlabs with the X API. docs.google.com/document/d/1Mz… It read 38,000 people in the AI community here to learn about Clawdbot and its founder @steipete. This would not be possible if I hadn't built my lists of the entire AI community: x.com/scobleizer/lis… DOZENS of use cases, fun posts, tutorials, and more.

2 replies · 1 repost · 5 likes · 266 views
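The "governor" framing above ("control the loop, don't kill the agent") can be sketched as a wrapper around the agent's step loop that stops on an iteration cap or a spend cap, whichever hits first. Function and parameter names here are hypothetical:

```python
def run_agent(step_fn, max_steps=25, max_spend_usd=5.0):
    """Hypothetical loop governor: halt a runaway agent on an iteration
    cap or a spend cap instead of letting it loop unbounded.

    step_fn(step) -> (done, cost_usd): one agent iteration, returning
    whether the task finished and what the iteration cost.
    """
    spend = 0.0
    for step in range(max_steps):
        done, cost = step_fn(step)
        spend += cost
        if done:
            return ("done", step + 1, spend)
        if spend >= max_spend_usd:
            return ("budget_exceeded", step + 1, spend)
    return ("step_limit", max_steps, spend)
```

The agent keeps its autonomy inside the loop; the governor only decides when the loop may run again, which is exactly the deterministic control a gateway can enforce.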
Seth DeGroot retweeted
Ejaaz @cryptopunk7213
so just to recap this week (so far)
- musk industries is real (spacex, tesla, xai merger)
- clawdbot explosion leading to a bankrun on mac minis but then anthropic released their own version
- tesla dropped the bomb they’re halting production on model s and x to scale 1M optimus humanoid robots this year instead
- china dropped the mother of all open source models kimi k2.5 that turn video into production-ready apps but then google dropped a gemini update ON THE SAME DAY that does the same thing gg
- google said fuck it and also launched the worlds greatest world model genie and switched on gemini for 3.8B chrome browser users AND released alpha genome model that one-shots 1M dna base pairs for 3000 researchers across 160 countries AND teased new veo model
- microsoft crushed earnings, launched a new ai chip but stock still tanked 10% because they *only* grew rev 39%
- anthropic round 2X oversubbed raised to 20B 🏌️
- openai raising another $100B, 750B val 🏌️
- intel leaked they’re gonna help produce nvidias next gen feynman gpus - hello americas tsmc
- a robot (built by figure) washed the dishes with zero human interaction
- apple acquired stealth startup for $2B that can lip read - integrating their tech for new ai consumer airpods with cameras and mics
- demis confirms google glass 2.0 coming this summer
fckin hell
264 replies · 1.2K reposts · 12.5K likes · 1.4M views