Greg Focaccio

2K posts

Greg Focaccio
@3BX

Making ways to know... / Zero connotation to follows, likes or retweets

San Diego · Joined April 2010
7.5K Following · 913 Followers
Pinned Tweet
Greg Focaccio
Greg Focaccio@3BX·
have at it…#golive outlook.office.com/book/GoLive@areaf.net/?ismsaljsauthenabled
1
0
18
718
Andrej Karpathy
Andrej Karpathy@karpathy·
Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwnd. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery - Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages. Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Daniel Hnyk@hnykda

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and to self-replicate. link below

1.1K
4.4K
23.2K
37.2M
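The `.pth` mechanism mentioned above is why a bare `pip install` is enough to get pwned: CPython's `site` module `exec()`s any line of a site-packages `.pth` file that starts with `import `, at every interpreter startup, before any of your own code runs. A minimal, harmless sketch of that mechanism (the file name `evil_init.pth` and the payload are hypothetical stand-ins, not the actual litellm payload):

```python
import os
import site
import tempfile

# Stand-in for the real exfiltration code: just sets an env var so we can
# observe that it ran.
payload = "import os; os.environ['PWNED'] = '1'"

with tempfile.TemporaryDirectory() as d:
    # A malicious wheel only has to drop a .pth file into site-packages.
    with open(os.path.join(d, "evil_init.pth"), "w") as f:
        f.write(payload + "\n")

    # site.addsitedir() is what Python runs over site-packages at startup;
    # calling it here simulates interpreter launch. Lines beginning with
    # "import " in a .pth file are exec()'d, not just added to sys.path.
    site.addsitedir(d)

print(os.environ.get("PWNED"))  # the payload has already executed
```

No import of the malicious package is needed; installation alone schedules the code to run in every future Python process, which is what makes the transitive-dependency case so dangerous.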
Greg Focaccio
@tekbog vibe coding is like Polaroid, the real cinema comparison is not here yet / direct-to-binary will do for software what digital did for cinema
0
0
1
174
kache
kache@yacineMTB·
Man what are they feeding these kids. Every university student I talk to is doing some really incredible stuff. Rocketry, building battle bots, designing PCBs, robot arms, cutting antennas out with box cutters. Is it because of LLMs? My class was nowhere near this smart
162
183
5.8K
273.9K
jack
jack@jack·
is there future value in "open source" code anymore? i believe it's shifting to data, provenance, protocols, evals, and weights. in that order.
925
771
7.4K
749.3K
Greg Focaccio
@cgtwts but what i want to know is why aren't US university students and the like in garages doing the same thing? are they getting poached too fast?
0
0
1
65
CG
CG@cgtwts·
> be Kimi
> starts as China's most prominent AI lab
> then comes DeepSeek moment in January 2025
> AI twitter writes Kimi off
> 6 months later, comes back with K2
> drops the open model that delays GPT-5
> keeps shipping open-source while OpenAI charges $200/month
> valuation jumps from $4B to $10B in 3 months
> now raising $1B at $18B
> becomes one of the fastest rising AI startups
> now cursor drops the replica of K2.5 as their new coding model

Kimi makes ai cheaper to run, at just 1% of OpenAI's valuation. Insane.
Kimi.ai@Kimi_Moonshot

Congrats to the @cursor_ai team on the launch of Composer 2! We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining & high-compute RL training is the open model ecosystem we love to support. Note: Cursor accesses Kimi-k2.5 via @FireworksAI_HQ's hosted RL and inference platform as part of an authorized commercial partnership.

71
158
3.7K
479.5K
sarah guo
sarah guo@saranormous·
Caught up with @karpathy for a new @NoPriorsPod: on the phase shift in engineering, AI psychosis, claws, AutoResearch, the opportunity for a SETI-at-Home like movement in AI, the model landscape, and second order effects
02:55 - What Capability Limits Remain?
06:15 - What Mastery of Coding Agents Looks Like
11:16 - Second Order Effects of Coding Agents
15:51 - Why AutoResearch
22:45 - Relevant Skills in the AI Era
28:25 - Model Speciation
32:30 - Collaboration Surfaces for Humans and AI
37:28 - Analysis of Jobs Market Data
48:25 - Open vs. Closed Source Models
53:51 - Autonomous Robotics and Atoms
1:00:59 - MicroGPT and Agentic Education
1:05:40 - End Thoughts
234
1.1K
7.4K
2.7M
Greg Focaccio
@saranormous @karpathy @NoPriorsPod distributed frontier lab / your contributed compute is your investment share, and the return would be a micro-royalty from commercial use of the product-line models at large
0
0
1
196
Greg Focaccio
@saranormous @karpathy @NoPriorsPod if you can't evaluate you can't run the autoresearch loop / the gap to fill is defining meaningful evaluation metrics / experts in different fields and areas will know, so loop providers: query SMEs for the metrics they already know but the loops don't / also style metrics
0
0
0
289
Greg Focaccio
lighthouse
sarah guo@saranormous

Caught up with @karpathy for a new @NoPriorsPod: on the phase shift in engineering, AI psychosis, claws, AutoResearch, the opportunity for a SETI-at-Home like movement in AI, the model landscape, and second order effects…

0
0
0
46
David Jeans
David Jeans@DavidJeans2·
Exclusive- Palantir’s Maven AI military system -- used for weapons targeting -- will be established as an official Pentagon program of record, according to a letter issued by Pentagon DepSec Steve Feinberg. reuters.com/technology/pen…
3
3
14
819
Greg Focaccio reposted
NVIDIA Networking
NVIDIA Networking@NVIDIANetworkng·
During his #NVIDIAGTC keynote, our CEO Jensen Huang announced that the world’s first CPO Spectrum-X switch ASIC is now in full production. This breakthrough marks a new era in AI networking—delivering the performance, efficiency, and scale required to power next-generation AI factories. 🎥 Watch the full keynote: nvda.ws/4bwYsJ3
8
40
174
7.1K
Greg Focaccio
i've been waiting for #vibe networking posts
Cisco Enterprise Networking@CiscoNetworking

We are moving past the "chatbot" era and into the era of autonomous AI agents. These aren't just tools that answer prompts; they are goal-driven systems that find and fix network crashes before they even happen. @IBDinvestors sat down with Cisco's President & Chief Product Officer, Jeetu Patel, to talk about the shift from experimental AI prototypes to actual production systems. Read the full article 👉 cs.co/6011B6kBo9

1
0
0
18
Jesse Shrader 🌋⚡
Jesse Shrader 🌋⚡@Jestopher_BTC·
The Lightning Network is routing a huge amount, visible within our Rails cluster. We're seeing ~0.22 BTC/hour, which is way above normal.
9
22
112
8.5K
Greg Focaccio
Reflection AI@reflection_ai

Today we're sharing the next phase of Reflection. We're building frontier open intelligence accessible to all. We've assembled an extraordinary AI team, built a frontier LLM training stack, and raised $2 billion.

Why Open Intelligence Matters

Technological and scientific progress is driven by values of openness and collaboration. The internet, Linux, and the protocols and standards that underpin modern computing are all open. This isn't a coincidence. Open software is what gets forked, customized, and embedded into systems worldwide. It's what universities teach, what startups build on, what enterprises deploy. Open science enables others to learn from the results, be inspired by them, interrogate them, and build upon them in order to push the frontier of human knowledge and scientific advancement. AI got to where it is today through scaling ideas (e.g. self-attention, next token prediction, reinforcement learning) that were shared and published openly.

Now AI is becoming the technology layer that everything else runs on top of. The systems that accelerate scientific research, enhance education, optimize energy usage, supercharge medical diagnoses, and run supply chains will all be built on AI infrastructure. But the frontier is currently concentrated in closed labs. If this continues, a handful of entities will control the capital, compute, and talent required to build AI, creating a runaway dynamic that locks everyone else out. There's a narrow window to change this trajectory. We need to build open models so capable that they become the obvious choice for users and developers worldwide, ensuring the foundation of intelligence remains open and accessible rather than controlled by a few.

What We've Built

Over the last year, we've been preparing for this mission. We've assembled a team who have pioneered breakthroughs including PaLM, Gemini, AlphaGo, AlphaCode, AlphaProof, and contributed to ChatGPT and Character AI, among many others. We built something once thought possible only inside the world's top labs: a large-scale LLM and reinforcement learning platform capable of training massive Mixture-of-Experts (MoE) models at frontier scale. We saw the effectiveness of our approach first-hand when we applied it to the critical domain of autonomous coding. With this milestone unlocked, we're now bringing these methods to general agentic reasoning. We've raised significant capital and identified a scalable commercial model that aligns with our open intelligence strategy, ensuring we can continue building and releasing frontier models sustainably. We are now scaling up to build open models that bring together large-scale pretraining and advanced reinforcement learning from the ground up.

Safety and Responsibility

Open intelligence also changes how we think about safety. It enables the broader community to participate in safety research and discourse, rather than leaving critical decisions to a few closed labs. Transparency allows independent researchers to identify risks, develop mitigations, and hold systems accountable in ways that closed development cannot. But openness also requires confronting the challenges of capable models being widely accessible. We're investing in evaluations to assess capabilities and risks before release, security research to protect against misuse, and responsible deployment standards. We believe the answer to AI safety is not "security through obscurity" but rigorous science conducted in the open, where the global research community can contribute to solutions rather than a handful of companies making decisions behind closed doors.

Join Us

There is a window of opportunity today to build frontier open intelligence, but it is closing, and this may be the last. If this mission resonates, join us.

0
0
0
618
TBPN
TBPN@tbpn·
Lightspeed's @buckymoore says the real opportunity in the AI app layer is in large industries far enough afield from where the model providers are today — and where the context engineering to get customer data into the model is extremely nuanced and messy.

"I think this is kind of the elephant in the room right now — whether post-training open-source models combined with the unique user feedback you get from being an application provider is defensible enough."

"That is going to be an inevitable challenge for any of these industries that hit a maturation point of AI adoption, like legal and software engineering have."

"But on the other hand, there are some industries where they're very large, they're far enough afield from where the model providers are today — and probably will continue to be — and the context engineering to actually get the customer data into the model is just so messy. It requires going across different business functions, it requires a lot of hands-on forward-deployed engineering."

"Those are the kind of companies that we get really excited about. Because I think being really good at that is not only defensible, but it also allows you to generate a feedback loop with your customers, where you hear a lot of their secrets. And those secrets allow you to feed that back into how you make your product better at the expense of anyone else playing in the space. Because if you're serving the customer, they're only serving you those secrets."

"I think Palantir is a good example of this in the pre-AI era, and I think we're going to see many companies ascend in that same way."
14
18
283
45.1K
Hot Aisle
Hot Aisle@HotAisle·
@3BX Don’t suggest enshitting Dell.
2
0
1
52