

Aayush Jain
@aayushjain
I scale platforms where products grow. // Hypercuriosity.


1/ today we're releasing muse spark, the first model from MSL. nine months ago we rebuilt our ai stack from scratch. new infrastructure, new architecture, new data pipelines. muse spark is the result of that work, and now it powers meta ai. 🧵

- Is Meta betting that multi-agent orchestration at inference time is the next scaling paradigm? imo it's most likely replacing the "train a bigger model" approach with "coordinate smarter agents"..




If you have a spare 25 minutes I wholeheartedly recommend you watch Nicholas Carlini - Black-hat LLMs. Link in the comment below. Amazing talk on the way LLMs are making it easier to find critical software vulnerabilities: Anthropic's LLM discovered a non-trivial heap buffer overflow in the Linux kernel that had been there since 2003..! The future is both exciting and scary.

LLMs and AI should be used, as demonstrated here, as a force multiplier for analysts, researchers and developers. I also think LLMs are a good way for people to learn, so long as they don't just copy-paste AI output blindly, and instead treat the model as a pair programmer / colleague they converse with to learn and grow. LLMs are also pretty good at hunting through documentation, cutting through it like a knife through butter; you can then verify what the model comes back with and use that as a jumping-off point. A tool in your toolbox, not someone's sole skill. And remember: always validate the output.

Personal take: hopefully over the coming months and years we see LLMs grow to make software more secure through QA such as the vulnerability hunting in the video, and see LLMs used in cyber security to help identify and detect threats from logs sooner, assisting analysts.

Great question at the end (simplified): how do we prevent threat actors from abusing this? A: security is dual use. Historically, security tooling has favoured the defender over the attacker; maybe that will change. The good people should have access to the software, since you want the good people using it to find the bugs, but putting the right safeguards in place is hard and nuanced. They think it's currently OK, but there's still room for change.






