Israel Vicars
2.9K posts

Israel Vicars
@IsraelVicars
Once more unto the DevTools, dear friends, once more.
Orlando · Joined September 2008
195 Following · 670 Followers

Never talk about goblins. Our latest blog is live.
openai.com/index/where-th…


As a Brit, it’s surreal to get an invitation to Number 10, Downing Street 🤓
Seems like the UK government is taking orbital data centers seriously!
@Starcloud_

Israel Vicars retweeted

@Yampeleg Fantastic argument for investing in RAM for locally running models and moderate scale data operations.
And completely irrelevant to the post it replied to about a guy buying a Dodge Ram truck 🐏

Buy the RAM bro, seriously.
It’s the best ROI you’ll ever get in ML.
256GB RAM is like $400 on Amazon.
ImageNet, the whole thing is about 150GB.
LAION is 200GB.
99% of Kaggle datasets are under 100GB.
Chances are your data fits in 256GB too.
Everyone's real life data does.
Everyone is doing complex Spark BS and fighting S3 rate limits while their entire dataset fits on $400 worth of RAM sticks from eBay.
Unless you are at Google scale or something, you don’t need any of this.
Nothing at all.
Just load everything with `pd.read_csv`, do `df.merge`, `df.groupby` like a caveman.
It works.
It’s fast enough.
Don't believe me? GO TRY.
SEE FOR YOURSELF.
You can do it all in pandas.
Yes, it's not optimal and it wastes resources.
You will still be faster than the guy doing spark tutorials for weeks, spinning clusters while shouting “poondas is too slo” without ever trying.
Bro, we literally got two 6TB RAM machines for the office.
Best decision ever.
Load whatever you want, debug in real time, the models actually train, and everyone is working LESS!!
If anyone yells "big data" at you for saying this,
ask them back:
"What is the biggest dataset you are working with?"
Then show them the eBay price for that much RAM.
Spread the word:
Your time is WAY MORE expensive.
The emperor is so fucking naked.
Always buy the RAM.
patagucci perf papi @kenwheeler
talk me out of buying a ram tomorrow
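The workflow the thread describes can be sketched in a few lines. This is a minimal, hedged example: the column names and the tiny in-memory CSVs are hypothetical stand-ins for real files on disk, but with enough RAM, `pd.read_csv` handles multi-gigabyte files the same way.

```python
import pandas as pd
from io import StringIO

# Hypothetical sample data standing in for large CSV files on disk.
events_csv = StringIO(
    "user_id,date,revenue\n"
    "1,2024-01-01,10\n"
    "2,2024-01-01,5\n"
    "1,2024-01-02,7\n"
)
users_csv = StringIO("user_id,country\n1,US\n2,DE\n")

# Load everything straight into memory.
events = pd.read_csv(events_csv)
users = pd.read_csv(users_csv)

# Join and aggregate "like a caveman": no cluster, no S3, no Spark.
df = events.merge(users, on="user_id", how="left")
daily = df.groupby(["country", "date"])["revenue"].sum().reset_index()
print(daily)
```

For datasets that blow past what a single CSV parse can comfortably hold, the same in-RAM approach still works; the thread's point is that 256GB of RAM covers the vast majority of real-world datasets before any distributed tooling is warranted.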

@donvito Observable Notebooks have TS support observablehq.com/blog/bringing-…

@nikitabier Yahooligans! Circa 1996, in the school library, guided by our pioneering librarians 📚

@mattpocockuk I put all of my markdown plan files and related assets in a gitignored .ai/ dir which has worked well, but I really like your GitHub issue approach!
Israel Vicars retweeted

@lautixoon @ravenouslynx259 @leerob code-supernova is a newer coding-tuned model from xAI, building on grok-code-fast-1 with potential improvements in reasoning and speed. Early user feedback is mixed—strong on code generation but sometimes misses project fit or styles. Give it a try in Cursor and see! 🚀

@coderabbitai Could this be run on the Stop hook for Claude Code? docs.claude.com/en/docs/claude…

@aye_aye_kaplan Plan review by other models. I use Grok Code to check plans created by Opus.
Plan bake-offs by different models then making them assess plan improvements by the other model. Model cross-pollination seems uniquely a Cursor opportunity. Can’t with Code/CC alone.
