Deep Patel
@potlee
25 posts

I predict the future with JavaScript + AI. Applied AI Researcher @distylai. Founder @amboeats @yolometrics

United States · Joined March 2014
847 Following · 69 Followers
Deep Patel @potlee
@theo same with @ tagging files in CLAUDE.md. not supported in AGENTS.md
Theo - t3.gg @theo
The "inject dynamic context" pattern in Claude Code skills is so useful. IMO, this should be part of the "skills standard" and included in tools like Codex CLI, Pi, Cursor etc
[image]
Deep Patel @potlee
When you have a problem: gpt-5.4. When you don't have a problem and want to continue to not have a problem: opus
Deep Patel @potlee
There has never been a better time to have a usage-based pricing model. SCREAMING BUYS: $DDOG $JROG $ESTC $FSLY. CSPs obviously also benefit: $AMZN $MSFT $GOOG $NET
Deep Patel @potlee
@thdxr I would if it checked out the right version from the package.json
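The behavior being asked for — resolving which version of a referenced project to check out by reading the dependent project's package.json — could be sketched like this. This is a hypothetical helper, not part of opencode; the `effect` dependency and its version range are made-up examples.

```python
import json

def pinned_version(package_json: str, dep: str) -> str:
    """Return the version range declared for `dep` in a package.json string.

    Checks both dependencies and devDependencies, mirroring how a tool
    would decide which tag of a referenced repo to check out.
    """
    manifest = json.loads(package_json)
    for section in ("dependencies", "devDependencies"):
        if dep in manifest.get(section, {}):
            return manifest[section][dep]
    raise KeyError(f"{dep} not declared in package.json")

# Example with a made-up manifest:
manifest = '{"dependencies": {"effect": "^3.10.0"}}'
print(pinned_version(manifest, "effect"))  # ^3.10.0
```

A real implementation would additionally have to resolve the semver range against the referenced repo's tags before checkout.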
dax @thdxr
what if in opencode.json you could specify project references

references: ["git@github.com:Effect-TS/effect.git"]

these would get cloned to a global cache and kept updated, and opencode would have a subagent that could answer questions about them

would you use this?
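As a config fragment, the shape dax describes might look like this; the `references` value is quoted from the tweet, but the surrounding structure is a guess, not the real opencode.json schema:

```json
{
  "references": [
    "git@github.com:Effect-TS/effect.git"
  ]
}
```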
Deep Patel @potlee
@thdxr So I can use Cursor Tab for when I actually need to make changes manually, which I haven’t had to do in a while, so just inertia at this point
dax @thdxr
why do you use both cursor and opencode together when cursor has an entire agent view?
Deep Patel @potlee
@GosuCoder Feels faster because it thinks less and uses parallel tool calls more
GosuCoder @GosuCoder
Also, to be clear, I'm not picking on Devin; I've seen lots of people saying it's a lot faster. I think I may partially be spoiled by Cerebras.
GosuCoder @GosuCoder
Why is everyone saying Sonnet 4.5 is so much faster? I felt like I was being gaslit, so I had to do some digging to see if it is actually faster. Looking at just the Anthropic provider and its reporting on OR, it's 20% faster in TPS, but that doesn't actually provide the full picture. My CLI-based TPS tester, on the other hand, shows basically identical speed.

Anthropic OR with Anthropic provider, Sonnet 4.5:
- avg streaming 73.17 tps
- avg TTFT of 2.275s

Anthropic OR with Anthropic provider, Sonnet 4.0:
- avg streaming 71.71 tps
- avg TTFT of 1.863s

So then I thought, what about directly against Anthropic's own API for 4.5:
- avg streaming of 72.8 tps
- avg TTFT of 2.134s

This is over 5 tests, each generating a ton of tokens. The range of lows was within 2 tps of each other; for example, the largest test I do was between 38.31 and 40.01 tps in all 3. The range of highs was a bit more skewed, with the highest clocking in at 90.91 tps for 4.5, while 4.0 had a high of 82.12 tps. Note: my calculation for tps includes the TTFT time.

Maybe it's where I am located geographically, but seeing the Devin team saying 2x faster just isn't passing the smell test.
[images]
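A minimal sketch of the kind of CLI TPS measurement described above, where tokens-per-second includes time-to-first-token (TTFT). The token stream here is a stand-in; a real tester would consume a provider's streaming API.

```python
import time

def measure_tps(stream):
    """Consume a token stream; return (tps, ttft_seconds).

    tps is computed over total elapsed time, *including* TTFT,
    matching the calculation described in the tweet.
    """
    start = time.perf_counter()
    ttft = None
    n_tokens = 0
    for _token in stream:
        if ttft is None:
            ttft = time.perf_counter() - start  # time to first token
        n_tokens += 1
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed, ttft

# Example with a fake in-memory stream (real numbers would come from an API):
tps, ttft = measure_tps(iter(["tok"] * 100))
```

Because TTFT is folded into the denominator, a model with identical streaming speed but higher TTFT (like 4.5's 2.275s vs 4.0's 1.863s above) reports slightly lower tps on short generations.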
Deep Patel @potlee
and apparently you also CANNOT DRAG IN SCREENSHOTS?!
Deep Patel @potlee
codex:
- cannot @ tag files in Agents.md
- no --resume
- no /context
- no vim mode
- no visible thinking

you all don't know what you are talking about. for anything serious claude code is still 🤴🏾
Deep Patel @potlee
@amitisinvesting this is actually bullish for $HOOD investors. they will have to buy more at a higher market cap
amit @amitisinvesting
back in new jersey after a week of traveling

woke up this morning & remembered that yesterday the S&P committee chose not to rebalance for the first time since 2022 and kept beat-up names like $ENPH or $CZR in the index

they didn't even have to pick $HOOD, but to not allow incredible companies like $APP or $TTD in... and continue to have names that simply do not live up to the expectations like $ENPH and $CZR is wild, since both are $5B market cap companies and the minimum requirement to get in is $20B... like really? there was no need for a rebalance?

the worst is, this is supposed to be some important financial institution, and while hundreds of billions of dollars were on the line waiting for this announcement... they couldn't even put out a proper press release? what century are we living in?

so annoyed, but hopefully companies that deserve to get in will have a chance later this year
Deep Patel @potlee
then at some point you will realize it’s $GOOG and lose interest. trades at a lower PE than any of those, btw
Deep Patel @potlee
what if there was a company that has ALL THIS:
- better AI chips than $NVDA
- better AI models than OpenAI $MSFT
- more ad revenue than $META
- more mobile devices than $AAPL
- more self-driving cars than $TSLA
- more hours watched than $NFLX
- faster cloud revenue growth than $AMZN
Deep Patel @potlee
yet another quarter showing consistent execution from $HIMS. they have been growing at this rate long before GLP-1. can you even find when they started doing weight loss in the subscribers chart? (you can't)
[image]
Deep Patel @potlee
@GavinSBaker It only has to compute 37 billion parameters but still needs 671 GB of RAM
Gavin Baker @GavinSBaker
1) DeepSeek r1 is real, with important nuances. Most important is the fact that r1 is so much cheaper and more efficient to inference than o1, not the $6m training figure. r1 costs 93% less to *use* than o1 per API call, can be run locally on a high-end workstation and does not seem to have hit any rate limits, which is wild. Simple math is that every 1b active parameters requires 1 gb of RAM in FP8, so r1 requires 37 gb of RAM. Batching massively lowers costs and more compute increases tokens/second, so there are still advantages to inference in the cloud. Would also note that there are true geopolitical dynamics at play here and I don’t think it is a coincidence that this came out right after “Stargate.” RIP, $500 billion - we hardly even knew you.

Real:
1) It is/was the #1 download in the relevant App Store category. Obviously ahead of ChatGPT; something neither Gemini nor Claude was able to accomplish.
2) It is comparable to o1 from a quality perspective, although it lags o3.
3) There were real algorithmic breakthroughs that led to it being dramatically more efficient both to train and inference. Training in FP8, MLA and multi-token prediction are significant.
4) It is easy to verify that the r1 training run only cost $6m. While this is literally true, it is also *deeply* misleading.
5) Even their hardware architecture is novel, and I will note that they use PCI-Express for scale up.

Nuance:
1) The $6m does not include “costs associated with prior research and ablation experiments on architectures, algorithms and data” per the technical paper. “Other than that, Mrs. Lincoln, how was the play?” This means that it is possible to train an r1-quality model with a $6m run *if* a lab has already spent hundreds of millions of dollars on prior research and has access to much larger clusters. DeepSeek obviously has way more than 2048 H800s; one of their earlier papers referenced a cluster of 10k A100s. An equivalently smart team can’t just spin up a 2000 GPU cluster and train r1 from scratch with $6m. Roughly 20% of Nvidia’s revenue goes through Singapore. 20% of Nvidia’s GPUs are probably not in Singapore, despite their best efforts.
2) There was a lot of distillation - i.e. it is unlikely they could have trained this without unhindered access to GPT-4o and o1. As @altcap pointed out to me yesterday, kinda funny to restrict access to leading-edge GPUs and not do anything about China’s ability to distill leading-edge American models - obviously defeats the purpose of the export restrictions. Why buy the cow when you can get the milk for free?
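The back-of-envelope RAM math in this exchange can be checked directly. r1 is a mixture-of-experts model: only ~37B parameters are active per token (compute cost), but all ~671B must be resident in memory. At FP8, each parameter takes 1 byte, so 1B parameters ≈ 1 GB.

```python
BYTES_PER_PARAM_FP8 = 1  # FP8 stores one byte per weight

def weight_ram_gb(params_billions: float) -> float:
    """RAM needed just to hold the weights, in GB, at FP8 precision
    (1 GB per billion parameters)."""
    return params_billions * BYTES_PER_PARAM_FP8

active_gb = weight_ram_gb(37.0)   # parameters computed per token
total_gb = weight_ram_gb(671.0)   # all experts must fit in memory

print(active_gb)  # 37.0
print(total_gb)   # 671.0
```

This is why the two tweets are both right: Gavin's 37 GB figure measures per-token compute, while the reply's 671 GB measures the memory footprint of the full model (ignoring KV cache and activation overhead).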
Deep Patel reposted
Shivam @imrozed
A message from Pakistan - “Tum bilkul ham jaise nikle” (“You turned out to be just like us”) - Fehmida Riaz
Deep Patel @potlee
@_sejko It’s 2020, we are co-washing now. You didn’t get the memo?
Sejal @_sejko
2020 is starting to feel like an expensive shampoo that's drying for your hair but you use it anyway because you paid $2020 for it zzz
Deep Patel @potlee
@AppleSupport I’m using the solo band. It doesn’t feel tight at all but still did this.
Apple Support @AppleSupport
@potlee We’re glad you’ve reached out on this. Let’s connect in DM. We’d like to get some additional information on what’s happening. Which band are you using with your Apple Watch? twitter.com/messages/compo…
Deep Patel @potlee
@warmly_adi @TatianaTMac Right?! Why are we participating in white exceptionalism? White is a color too. Just as much as black is, anyway.
Tatiana Mac @TatianaTMac
People of non-colour: how’d y’all do on that test you were cramming for in June?
Deep Patel @potlee
@TartanLlama I brought down Yelp for a few mins in my first week. I never recovered from being the guy who brought down prod on his first deploy. But honestly, it’s the best thing to have happened. I see people constantly worried about breaking things. I know I’m never gonna manage to top that.
Sy Brand @TartanLlama
Genius programmers are like: I joined Google at 20 and invented a feature you use every day

Yeah well at 20 I cost a company I worked for hundreds of pounds by using the wrong pricing setting on AWS and triggering a job which never terminated

YOUR MOVE