Roy Jad

1.2K posts


@jadroy2

Chasing clarity, less noise - prev founding designer @contextsuite, CS @uva

SF · Joined July 2020
2.2K Following · 1.2K Followers
Beni@ben_issen·
First ever Designers & Machines kicked off! 50 top designers in one room. 5 demos exploring how to design with AI. Now running once a month in San Francisco. Next one on April 30th : designers-machines.com
[GIF attached]
khushi@khushkhushkhush·
couldn’t get aaru pins made in time for a tv appearance so our in-house architect schizo’d out and literally made CAD designs and laser cut mirrored acrylic to make diy pins in under 48h
[image attached]
Roy Jad@jadroy2·
site - wip
Roy Jad@jadroy2·
U DON'T HAVE SPACE TO THINK U DON'T HAVE SPACE TO THINK U DON'T HAVE SPACE TO THINK MAKE SPACE TO THINK
Roy Jad@jadroy2·
claude sketch someone's risk of accidental death throughout a normal day
[image attached]
dar@radbackwards·
The speed of the world today is deeply at odds with being thoughtful
Prama Yudhistira@menace_codes·
whenever a customer sends us a slack message its like im playing russian roulette with my cortisol
Roy@im_roy_lee·
@amasad why do u people never add captions to ur videos lol it's like u guys want less views on purpose
Amjad Masad@amasad·
We’ve raised $400M at a $9B valuation. Investors include Georgian, G Squared, Prysm, 1789, YC, Coatue, a16z, Craft, and QIA, with strategic investments from Accenture, Databricks, Okta, and Tether. We’re also lucky to have incredible individuals backing us, including Shaq and Jared Leto.

This funding will help us scale our ambition and expand beyond coding into AI systems that center human creativity. Replit is now used at 85% of the Fortune 500. We have an opportunity to help shape the future of work. One where AI abstracts away the boring parts and humans shine as creative directors.

We’re also investing more globally, particularly in Europe, Asia, and the Middle East. Innovation can come from anywhere in the world, and we want to help unlock it.
hamza mostafa@hamostaf04·
for those of you that don't have a GPU handy to play around with, i built a small fork of the repo that lets your coding agent tinker and experiment using a GPU on the cloud using @modal sandboxes w/ updated instructions in README and program.md. link in comments. enjoy :)
Andrej Karpathy@karpathy

Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This is the bread and butter of what I do daily for 2 decades. Seeing the agent do this entire workflow end-to-end and all by itself as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger things, e.g.:

- It noticed an oversight that my parameterless QKnorm didn't have a scaler multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges.

And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
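The loop the quoted thread describes (propose a change, measure validation loss, keep only changes that improve it, plan the next attempt from the results) can be sketched as a greedy search. This is an illustrative toy, not nanochat's actual code: `proxy_loss`, `propose`, and the hyperparameter names are hypothetical stand-ins for "run a small training job and read back the validation loss".

```python
import random

def proxy_loss(cfg):
    # Toy stand-in for "train a small model, return validation loss".
    # A real autoresearch loop would launch an actual training run here.
    return (cfg["lr"] - 0.02) ** 2 + (cfg["wd"] - 0.1) ** 2

def propose(cfg, rng):
    # Agent step: perturb one hyperparameter (a stand-in for a code change).
    candidate = dict(cfg)
    key = rng.choice(sorted(candidate))
    candidate[key] *= rng.choice([0.8, 1.25])
    return candidate

def autoresearch(cfg, steps=200, seed=0):
    rng = random.Random(seed)
    best, best_loss = cfg, proxy_loss(cfg)
    accepted = []  # the "~20 changes that improved validation loss"
    for _ in range(steps):
        candidate = propose(best, rng)
        loss = proxy_loss(candidate)
        if loss < best_loss:  # keep only changes that improve val loss
            best, best_loss = candidate, loss
            accepted.append(candidate)
    return best, best_loss, accepted
```

The interesting engineering beyond this sketch is exactly what the thread points at: running many such loops in parallel, sharing results between agents, and promoting winning changes to larger model scales before trusting them.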

Roy Jad@jadroy2·
headsets will make holding a small phone to do everything online feel so stupid
Aetheris@0xAetheris

@worldranking_ hard to get up when your phone holds your whole social life.

Aetheris@0xAetheris·
@worldranking_ hard to get up when your phone holds your whole social life.
Roy Jad@jadroy2·
@cxgonzalez Current life is oscillating between these two
joshpuckett@joshpuckett·
You must fall in love with outcomes, not abstractions. This very raw, very thoughtful video from Mo is worth a watch in full. Much of it resonates, and I feel and have felt a lot of the dissonance he talks about. Here's how I'm thinking about things of late.

----

One of my many side quests in life is being an aspiring woodworker. I've made most of the furniture in our house, and my latest piece was a media console for our den. Woodworking has a lot of parallels to software. You work with raw materials, modify and assemble them, and ultimately deliver some output that is more valuable than its constituent parts.

Like software, there are many ways to build furniture. You can do it entirely with hand tools, no electricity at all. Or you can do it with entirely 'fake' wood and mechanical fasteners to just assemble something like IKEA. I think there is value in knowing all of the points along the way; I have taken raw lumber, milled it by hand, planed it, jointed it, and transformed it from a tree to something resembling 2x4s you'd see at your local home center. You learn a respect for the material, an appreciation for how furniture was built, and develop a certain intuition.

But if your goal is to build beautiful, heirloom-quality furniture, and your constraints are that this is not your full-time job or hobby, there are much better abstractions available. You can buy lumber that is already surfaced and dimensioned. You can use tablesaws and jointers and routers to aid you in more quickly cutting and shaping the wood into your desired form. You can select hardwood for key parts of a build, or use cabinet-grade plywood strategically to help speed things up. These are all abstractions. Different ways of accomplishing a task to achieve an outcome.

In my own woodworking practice, I have found a happy medium somewhere between 'hand tools only' and 'IKEA'. I make liberal use of power tools. But I also, because I am saving time, can achieve a higher-end result by focusing on a better outcome and quality bar. It also, most importantly, makes this accessible to me. There's a running bit in woodworking communities that it's the perfect hobby for an old retiree. Partly cost, partly time. But with more modern abstractions, I (a very non-retired person) can participate and bring to life my own creations.

I think of LLMs and coding agents in a similar way. They are the latest in a series of powerful abstractions that afford convenience and accessibility to those who make software. They are extremely powerful, far more than the abstractions of yesteryear. But at the end of the day, that's all they are. If the outcome you love and value is a world-class user experience, they are but one of many tools to help you get there. They are great for a good many things, but they aren't a complete answer (at least not yet). You cannot "make no mistakes" your way to a beloved, soulful, inspiring product that people talk about and smile at. You have to use the tools to achieve that outcome.

And much like woodworking, there are still some things I, or we all, might prefer to do by 'hand'. I like to break my corners with a handplane still, in most cases. I could do this with a router, but there's something about the connection and feel that I want to have, if for nothing else than my own desire. I would never want an LLM to have the final say when it comes to the details of my interface; I want to use abstractions to more quickly allow me to focus on that, and do it 'by hand.'

So all of that is to say: I think it's important to fall in love with the outcome of whatever it is you are trying to create, and view abstractions simply as tools to help you get there. That way, you can pick and choose how they will serve you without losing yourself to them.
[3 images attached]
Mo@atmoio

I was a 10x engineer. Now I'm useless.

Kit Langton@kitlangton·
sexy rolling output
Don Pettit@astro_Pettit·
Lightning appears as bright blue flashes across the time history of our orbit seen in these exposures from @Space_Station. And visualize the intensity of the storm!
Roy Jad@jadroy2·
@naval Can't believe we normalized this shit
Naval@naval·
The human brain isn’t designed to process all of the world’s breaking emergencies in realtime.