Jason Nocco

466 posts

@JasonNocco

AI Cofounder @ Stealth Mode Startup | Leading Innovation in Generative AI | #AI #GenAI

Los Angeles, CA · Joined April 2015
168 Following · 91 Followers
Pinned Tweet
Jason Nocco@JasonNocco·
The most sophisticated phishing attack ever seen... Link in the comments for "Gmail Security Warning For 2.5 Billion Users—AI Hack Confirmed" - Ping me if you are an enterprise and need to be prepared.
2
1
9
614
Matthew Berman@MatthewBerman·
Is there anyone shipping faster than @AnthropicAI? Seriously, it's insane. Every. single. day. They ship something new and incredible. I've never seen this kind of velocity before. What are they doing differently?
194
18
899
63.9K
Matthew Berman@MatthewBerman·
"OpenClaw is the most popular open source project in the history of humanity" - Jensen (NVIDIA CEO)

But most people are using it wrong... Here's everything I've learned from 10 billion tokens and 200+ hours of using OpenClaw every single day. Watch this now:
0:00 Intro
0:32 Threaded Chats
3:17 Voice Memos
4:43 Agent-Native Hosting (Sponsor)
6:49 Model Routing
11:18 Subagents & Delegation
14:02 Prompt Optimizations
17:22 Cron Jobs
19:15 Security Best Practices
24:03 Logging & Debugging
25:43 Self Updating
26:28 API vs Subscription
27:52 Documentation/Backup
31:19 Testing
33:11 Building
48
70
492
73.6K
Matthew Berman@MatthewBerman·
.@nvidia hand delivered a pre-production unit of the @Dell Pro Max with GB300 to my house. 100lbs beast with 750GB+ of unified memory to power the best open-source models in the world. What should I test first?
297
102
1.9K
253.6K
0xMarioNawfal@RoundtableSpace·
Somebody created a tool literally called “OBLITERATUS” that removes censorship from any open-weight LLM with a single click. 13 obliteration methods, 116 models, 837 tests, and it gets SMARTER every time someone runs it. Terrifying.
101
92
1.2K
143.8K
Andrej Karpathy@karpathy·
There was a nice time when researchers talked about various ideas quite openly on twitter (before they disappeared into the gold mines :)).

My guess is that you can get quite far even in the current paradigm by introducing a number of memory ops as "tools" and throwing them into the mix in RL. E.g. current compaction and memory implementations are crappy, first, early examples that were somewhat bolted on, but both can be fairly easily generalized and made part of the optimization as just another tool during RL.

That said, neither of these is fully satisfying because clearly people are capable of some weight-based updates (my personal suspicion - mostly during sleep). So there should be even more room for more exotic approaches to long-term memory that do change the weights, but exactly how - the details are not obvious. This is a lot more exciting, but also more into the realm of research, outside of the established prod stack.
Awni Hannun@awnihannun

I've been thinking a bit about continual learning recently, especially as it relates to long-running agents (and running a few toy experiments with MLX). The status quo of prompt compaction coupled with recursive sub-agents is actually remarkably effective. Seems like we can go pretty far with this. (Prompt compaction = when the context window gets close to full, the model generates a shorter summary, then starts from scratch using the summary. Recursive sub-agents = decompose tasks into smaller tasks to deal with finite context windows.)

Recursive sub-agents will probably always be useful. But prompt compaction seems like a bit of an inefficient (though highly effective) hack. There are two other alternatives I know of: 1. online fine-tuning and 2. memory-based techniques.

Online fine-tuning: train some LoRA adapters on data the model encounters during deployment. I'm less bullish on this in general. Aside from the engineering challenges of deploying custom models / adapters for each use case / user, there are some fundamental issues:
- Online fine-tuning is inherently unstable. If you train on data in the target domain you can catastrophically destroy capabilities that you don't target. One way around this is to keep a mixed dataset with the new and the old. But this gets pretty complicated pretty quickly.
- What does the data even look like for online fine-tuning? Do you generate Q/A pairs based on the target domain to train the model? You also have the problem of prioritizing information in the data mixture given finite capacity.

Memory-based techniques: basically a policy for keeping useful memory around and discarding what is not needed. This feels much more like how humans retain information: "use it or lose it". You only need a few things for this to work:
- An eviction/retention policy. Something like "keep a memory if it has been accessed at least once in the last 10k tokens". The policy needs to be efficiently computable.
- A place for the model to store and access long-term memory. Maybe a sparsely accessed KV cache would be sufficient. But for efficient access to a large memory, a hierarchical data structure might be better.
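The compaction loop described above (summarize when the window nears capacity, then restart from the summary) can be sketched roughly as follows. This is a toy illustration, not any framework's real API: `count_tokens`, `summarize`, and the message format are all stand-in assumptions.

```python
# Sketch of prompt compaction: when the history nears its token budget,
# replace it with a model-generated summary and continue from that.
# `count_tokens` and `summarize` are crude stand-ins for real model calls.

def count_tokens(messages):
    # Rough heuristic: ~4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def summarize(messages):
    # Stand-in for an LLM call that compresses the history.
    text = " ".join(m["content"] for m in messages)
    return "Summary of prior context: " + text[:200]

def compact_if_needed(messages, budget=8000, headroom=0.8):
    """If the history exceeds `headroom` of the budget, restart from a summary."""
    if count_tokens(messages) < budget * headroom:
        return messages  # still room; leave the history untouched
    summary = summarize(messages)
    # Fresh context seeded only with the summary.
    return [{"role": "system", "content": summary}]
```

The "inefficient hack" framing is visible here: every compaction is lossy and costs an extra model call, which is what motivates the alternatives discussed above.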

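The eviction rule in the thread above ("keep a memory if it has been accessed at least once in the last 10k tokens") can be sketched as a small store that tracks the token position of each memory's last access. The class and method names here are hypothetical, invented for illustration only.

```python
# Sketch of a "use it or lose it" memory store: each memory records the
# token position at which it was last accessed, and anything not touched
# within the retention window is evicted. All names are illustrative.

class MemoryStore:
    def __init__(self, window=10_000):
        self.window = window      # retention window, in tokens
        self.tokens_seen = 0      # running token position of the stream
        self.last_access = {}     # memory key -> token position of last use
        self.values = {}

    def advance(self, n_tokens):
        """Move the token stream forward and evict stale memories."""
        self.tokens_seen += n_tokens
        cutoff = self.tokens_seen - self.window
        for k in [k for k, pos in self.last_access.items() if pos < cutoff]:
            del self.last_access[k]
            del self.values[k]

    def write(self, key, value):
        self.values[key] = value
        self.last_access[key] = self.tokens_seen

    def read(self, key):
        if key in self.values:
            # Touching a memory renews its retention.
            self.last_access[key] = self.tokens_seen
            return self.values[key]
        return None
```

The policy is efficiently computable (one comparison per stored key), matching the requirement in the thread; the hierarchical-index question for large stores is left open, as it is in the original post.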
274
299
4.6K
588.3K
Jason Nocco@JasonNocco·
@tbpn Most data is locked behind firewalls, where that advantage doesn’t matter.
0
0
0
291
TBPN@tbpn·
Cloudflare CEO Matthew Prince says Googlebot sees 3.2x more of the web than OpenAI, and 4.8x more than Microsoft. And he worries this advantage will allow Google to run away with the AI race, with no one else being able to catch them. "For every one page that OpenAI sees, Google is seeing 3.2." "What I worry about is, because Google has this unique access to the web that nobody else has, the game might just go to them. Because at the end of the day, whoever has the most data wins in the era of AI."
25
50
508
83.4K
the VC almanac@theVCalmanac·
Chamath: Work-life balance is the worst thing that happened to young people.

"The first and most important thing is you have to be on Broadway."

"If you're into politics, you need to be in Washington D.C. If you want to be in finance, you need to get to New York or London. If you want to be in crypto, you probably need to be in Abu Dhabi. If you want to be in tech, you just need to be in Silicon Valley."

"There is no shortcut for any of these decisions. You have to be where the fish are."

"The number of young people I encounter who talk about all of these idiotic things like work-life balance. I don't even understand what that means."

Remote work is convenient. Being where it happens is how you win.

video: @chamath
Chamath Palihapitiya@chamath

Launching a YouTube channel to share business stories, lessons I've learned, and important topics I’m actively exploring… youtu.be/0-LAT4HjWPo

215
213
3.3K
1.4M
Som Mohapatra@Som_Mohapatra·
@bhalligan if you’re interested in companies below $10-20M ARR, I’m a fan of what Outtake (cybersecurity) is doing. I’ve learned some interesting things about how they’re decreasing deal cycles for 6-7 figure deals.
3
0
5
2.8K
Brian Halligan@bhalligan·
What are the most innovative plays going in enterprise sales (GTM) these days? Who is doing them?
37
9
182
36.5K
Matthew Berman@MatthewBerman·
I've spent 2.54 BILLION tokens perfecting OpenClaw. The use cases I discovered have changed the way I live and work. ...and now I'm sharing them with the world. Here are 21 use cases I use daily:
0:00 Intro
0:50 What is OpenClaw?
1:35 MD Files
2:14 Memory System
3:55 CRM System
7:19 Fathom Pipeline
9:18 Meeting to Action Items
10:46 Knowledge Base System
13:51 X Ingestion Pipeline
14:31 Business Advisory Council
16:13 Security Council
18:21 Social Media Tracking
19:18 Video Idea Pipeline
21:40 Daily Briefing Flow
22:23 Three Councils
22:57 Automation Schedule
24:15 Security Layers
26:09 Databases and Backups
28:00 Video/Image Gen
29:14 Self Updates
29:56 Usage & Cost Tracking
30:15 Prompt Engineering
31:15 Developer Infrastructure
32:06 Food Journal
432
1.6K
14K
3.3M
NVIDIA@nvidia·
⚡New data shows NVIDIA Blackwell Ultra delivers up to 50x better performance and 35x lower cost for agentic AI. Cloud providers are deploying NVIDIA GB300 NVL72 systems at scale for low-latency and long-context use cases including agentic coding and coding assistants. Learn how the #NVIDIABlackwell platform is maximizing inference performance: nvda.ws/4rTyDbi
90
172
985
256.1K
Peter Steinberger 🦞@steipete·
We should make EnterpriseClaw just for the lolz. Java 21, Spring Boot, 14 abstract factory beans, 2GB Docker image, takes 45 seconds to start, AbstractSingletonProxyFactoryAgentClawResponseHandlerBeanDefinitionRegistryPostProcessorImpl.java
408
225
6.3K
584.2K
Brian Armstrong@brian_armstrong·
The next unlock for AI agents just launched. @CoinbaseDev released agentic wallets, the first wallet infrastructure designed for AI agents. Now agents can spend, earn, and trade autonomously and securely.
Coinbase Developer Platform🛡️@CoinbaseDev

Introducing Agentic Wallets, our first ever wallet infrastructure built specifically for autonomous agents. Give your agent the power of a wallet. Let your agent manage funds, hold identity, and transact onchain without human intervention. 🧵

473
449
3.9K
724.4K
Palmer Luckey@PalmerLuckey·
Let's raise the stakes and go All-in. If Jason lied about visiting the island, he resigns from the podcast. If not, I move on and never again bring up his name, words, or deeds.
David Sacks@DavidSacks

@PalmerLuckey @Jason @mttgrmm JCal did not visit the island. Time to move on.

654
844
23.4K
2.7M
Forward Future@ForwardFuture·
Will software transition to fully end-to-end neural networks? @amasad, CEO of @replit, says no: “Purely neural systems aren’t enough. We’re already seeing that neurosymbolic systems work better.” “What we actually need is more deterministic environments for neural networks to write programs in.” “Programs are useful tools. LLMs will need them too.” “Solving all computation inside probabilistic models isn’t realistic.” “And it’s not desirable.” “We need determinism. Correctness. Provability.”
11
9
128
8.5K
Riley Brown@rileybrown·
Hahaha omg Opus 4.6 is TOKEN HUNGRY! I’ve never seen anything like this.
128
12
1K
132.2K