Mike Morearty

5.3K posts

@mmorearty

Freelance software engineer. Previously: Asana (AI, text editor); Box; Adobe (Flex Builder, Dreamweaver); Microsoft (Sidewalk, Visual C++).

Joined March 2008
1.1K Following · 305 Followers
Mike Morearty@mmorearty·
@_ashleypeacock I constantly ran into the same problem with Claude Code. I finally added instructions in my CLAUDE.md to tell it to use Workers, and also to start any Cloudflare project by reading the web for Cloudflare best practices, since its training is out of date.
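For reference, a CLAUDE.md addition of the kind described might look something like this (a hypothetical sketch, not the actual file from the thread; the heading and wording are placeholders):

```markdown
## Cloudflare projects

- Always use Cloudflare Workers (with static assets); never create new
  Cloudflare Pages projects.
- Before starting any Cloudflare project, search the web for current
  Cloudflare best practices -- your training data is out of date.
```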
1 reply · 0 reposts · 0 likes · 29 views
Ashley Peacock@_ashleypeacock·
Don't create new projects with Cloudflare Pages; use Static Asset Workers instead. I was brainstorming an app idea with ChatGPT, using Cloudflare of course, and it consistently recommended Pages. All future feature releases will be on Workers, not Pages.
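As a sketch of what "Static Asset Workers" means in practice: a Worker can serve a directory of static files via the `assets` setting in its Wrangler config. A minimal hypothetical `wrangler.toml` (the name, date, and directory are placeholders):

```toml
name = "my-app"
compatibility_date = "2025-01-01"

# Serve everything in ./public as static assets from the Worker.
[assets]
directory = "./public"
```

With no `main` script this is a static-only Worker; adding a `main` script later lets the same project handle dynamic routes as well.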
39 replies · 19 reposts · 347 likes · 26.2K views
Mike Morearty@mmorearty·
@KentonVarda Haha when you said “an old blog post,” I was not expecting 2024 😂
1 reply · 0 reposts · 1 like · 219 views
Kenton Varda@KentonVarda·
It's kind of crazy how much of the way we've been designing Workers over the past 9 years unexpectedly turns out to be so relevant to AI and agents. Durable Objects and lightweight isolate sandboxes are obvious big things. But there are subtler things.

Consider "bindings". In Workers, our environment (`env` object) doesn't just contain strings. It can contain live objects, which we often call "bindings". For instance, a Workers KV binding is a live object representing a Workers KV storage namespace. Once you've configured it, you can just do:

let val = await env.MYKV.get("foo");
await env.MYKV.put("foo", "new value");

Notice: there's no connection string. No secret token that you have to pass to talk to your KV namespace. The Workers Runtime handles it for you. You just get an already-initialized client object, on which you can call methods. You can still do everything you want to do. But you know what you can't do? Leak the secret token. Because there isn't one. A KV namespace binding fundamentally cannot be "leaked" because it's not bytes.

But over the years, a lot of people have questioned whether this really mattered. I've had people inside and outside the team say: "Why are you so weird, Kenton? Yeah sure it can't leak but now I have to learn this new way of thinking about things. No other runtime works this way so writing portable code takes extra work. I'd rather just stick to what I'm used to, and anyway I know better than to leak my environment variables."

Well, now we have AI agents writing the code and... suddenly everyone is worried about agents leaking keys. People are creating convoluted schemes to intercept the outbound traffic and inject keys in a proxy, or trying to issue very-short-lived keys so that if the agent leaks them the window of attack is short.

Ahem. Welcome, folks! We solved this 8 years ago!
Here's an old blog post -- written when I personally was still very much Not Thinking About AI -- which seems so much more relevant now: blog.cloudflare.com/workers-enviro…
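The binding idea can be sketched in TypeScript. Everything here is illustrative: `MYKV` is a made-up binding name, and the `KVLike` interface merely mimics the two KV methods the post uses; in a real Worker the runtime builds `env` from your Wrangler config and hands it to your handler.

```typescript
// Minimal sketch of "bindings": the handler only ever sees a live object
// on `env` -- there is no connection string or secret token in scope to leak.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

interface Env {
  // In a real Worker this is declared in the Wrangler config and
  // injected by the runtime; the name MYKV is hypothetical.
  MYKV: KVLike;
}

export async function handleRequest(req: Request, env: Env): Promise<Response> {
  await env.MYKV.put("foo", "new value");
  const val = await env.MYKV.get("foo");
  return new Response(val ?? "not found");
}
```

An agent editing `handleRequest` can read and write the namespace freely, but there are no credential bytes anywhere in the code for it to print or exfiltrate.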
28 replies · 48 reposts · 524 likes · 81.5K views
Mike Morearty@mmorearty·
@rektmando Although this looks like Jack Dorsey, it is not him. It's someone named Troy Casey (certifiedhealthnut on Instagram)
0 replies · 0 reposts · 0 likes · 137 views
Mando@rektmando·
Jack Dorsey after firing 4000 people and getting to blame it on AI
446 replies · 1.1K reposts · 17.8K likes · 2.4M views
Mike Morearty reposted
Peer Richelsen@peer_rich·
10,000 employees → 6,000 employees: 40% smaller
254 replies · 942 reposts · 18.9K likes · 1.5M views
Mike Morearty@mmorearty·
@bcherny Boris, you are ever-present here and on other social media, spending a lot of time helping people and answering their questions. Given how productive Claude is for coding, I'm starting to think maybe you're just using Claude to respond to people on social media 😉
0 replies · 0 reposts · 1 like · 178 views
Boris Cherny@bcherny·
I hope this was helpful! What are your tips for using Claude Code? What do you want to hear about next?
283 replies · 43 reposts · 2.6K likes · 335.1K views
Boris Cherny@bcherny·
I'm Boris and I created Claude Code. Lots of people have asked how I use Claude Code, so I wanted to show off my setup a bit. My setup might be surprisingly vanilla! Claude Code works great out of the box, so I personally don't customize it much. There is no one correct way to use Claude Code: we intentionally build it in a way that you can use it, customize it, and hack it however you like. Each person on the Claude Code team uses it very differently. So, here goes.
1.3K replies · 7K reposts · 54.4K likes · 8.1M views
Robleh@robjama·
claude code is insane. just built the most important mac app of our generation in 1 hour (sound on 🔊)
45 replies · 41 reposts · 1.1K likes · 224K views
Mike Morearty@mmorearty·
@karpathy Interesting, I feel a little the opposite: even though I am way behind, I feel less need to keep up because the AI-driven tools keep up for me, so I don’t have to learn things as in-depth as before.
0 replies · 0 reposts · 1 like · 24 views
Andrej Karpathy@karpathy·
I've never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year and a failure to claim the boost feels decidedly like skill issue. There's a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering. Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. Roll up your sleeves to not fall behind.
2.6K replies · 7.5K reposts · 55.9K likes · 16.8M views
Mike Morearty@mmorearty·
@brandenflasch @Rivian When I got my Gen 2 in August of last year, I decided to lease (for the first time), because the automotive world, and especially the EV world, is changing so fast these days.
0 replies · 0 reposts · 1 like · 99 views
Branden Flasch@brandenflasch·
It's great that @Rivian made a ton of improvements in Gen 2, but man did they leave Gen 1 behind. Glad I sold mine when I did.
41 replies · 2 reposts · 155 likes · 98.8K views
Mike Morearty@mmorearty·
@__apf__ Centering a window must be as hard as centering a div!
0 replies · 0 reposts · 0 likes · 35 views
Adriana Porter Felt@__apf__·
no one warns you that if you spend too many years as a frontend engineer, you will spend the rest of your life repulsed by minor misalignments in the physical world. my neighbor's slightly uncentered window haunts me
7 replies · 6 reposts · 209 likes · 6.6K views
Jose | RivianTrackr@RivianTrackr·
Here’s what LiDAR looks like on R2 ❤️
6 replies · 2 reposts · 88 likes · 3.4K views
Mike Morearty reposted
Andrej Karpathy@karpathy·
My most amusing interaction was where the model (I think I was given some earlier version with a stale system prompt) refused to believe me that it is 2025 and kept inventing reasons why I must be trying to trick it or playing some elaborate joke on it. I kept giving it images and articles from "the future" and it kept insisting it was all fake. It accused me of using generative AI to defeat its challenges and argued why real wikipedia entries were actually generated and what the "dead giveaways" are. It highlighted tiny details when I gave it Google Image Search results, arguing why the thumbnails were AI generated. I then realized later that I forgot to turn on the "Google Search" tool. Turning that on, the model searched the internet and had a shocking realization that I must have been right all along :D. It's in these unintended moments where you are clearly off the hiking trails and somewhere in the generalization jungle that you can best get a sense of model smell.
214 replies · 326 reposts · 5.3K likes · 1M views
Mike Morearty@mmorearty·
@davepl1968 What I loved was that almost every office had a whiteboard. Such a small thing, but it made discussions so much easier.
0 replies · 0 reposts · 7 likes · 388 views
Dave W Plummer@davepl1968·
Offices at Microsoft in the 90s: For me, it was the ideal environment. I had an interior (no window) office, about 8x12, and a desk and a bookshelf and a halogen pole lamp for indirect lighting. I'd turn off the fluorescent lights. A MIPS box for dev and a couple of 386/486-class machines for testing and remoting. A NEC 4Fg if you were lucky enough to get one, or a Sony Trinitron.

When I didn't want to be disturbed, I could put a "SEND EMAIL" sticker on my door and close it. But 95% of the time, my door was open for anyone to wander in. And people did; it was a very collaborative environment. I used to wander twice a day, mid-morning and mid-afternoon. I'd plunk down in your spare chair and have many "Whatcha working on, how's it going" conversations. But those are valuable, and I learned a ton I wouldn't have otherwise by asking the silverbacks questions.

As a person with ASD, the office itself allowed me control over my environment that I'd never get in an open floorplan...
Gergely Orosz@GergelyOrosz

@davepl1968 Hold up: did you work back in the day in the Age Of The Private Offices at Microsoft?? (there are legends about that time!)

84 replies · 87 reposts · 2.1K likes · 130.9K views
Mike Morearty@mmorearty·
@RivianTrackr Adaptive headlights: Why they're cool and how they work. Mostly they are "set and forget", but I still get a little confused about what happens if I push the steering column stalk away from me or pull it toward me
0 replies · 0 reposts · 2 likes · 16 views
Jose | RivianTrackr@RivianTrackr·
I want to make some Rivian videos but I’m having creator’s block. Any ideas?
24 replies · 0 reposts · 17 likes · 4.7K views
Mike Morearty reposted
GDP@bookwormengr·
It is year 2030, Nvidia is 5 generations past Blackwell. Andrej: excited to release new repo: NanoAGI. As for training runs: if you spend $100 (run it while finishing a meal), it will be as smart as a PhD. If you spend $1000 (basically overnight), it will help solve nuclear fusion. Did not use OpenAI AGI to write the code for NanoAGI. It was too out of distribution.
Andrej Karpathy@karpathy

Excited to release new repo: nanochat! (it's among the most unhinged I've written). Unlike my earlier similar repo nanoGPT which only covered pretraining, nanochat is a minimal, from scratch, full-stack training/inference pipeline of a simple ChatGPT clone in a single, dependency-minimal codebase. You boot up a cloud GPU box, run a single script and in as little as 4 hours later you can talk to your own LLM in a ChatGPT-like web UI.

It weighs ~8,000 lines of imo quite clean code to:

- Train the tokenizer using a new Rust implementation
- Pretrain a Transformer LLM on FineWeb, evaluate CORE score across a number of metrics
- Midtrain on user-assistant conversations from SmolTalk, multiple choice questions, tool use
- SFT, evaluate the chat model on world knowledge multiple choice (ARC-E/C, MMLU), math (GSM8K), code (HumanEval)
- RL the model optionally on GSM8K with "GRPO"
- Efficient inference the model in an Engine with KV cache, simple prefill/decode, tool use (Python interpreter in a lightweight sandbox), talk to it over CLI or ChatGPT-like WebUI
- Write a single markdown report card, summarizing and gamifying the whole thing

Even for as low as ~$100 in cost (~4 hours on an 8XH100 node), you can train a little ChatGPT clone that you can kind of talk to, and which can write stories/poems, answer simple questions. About ~12 hours surpasses GPT-2 CORE metric. As you further scale up towards ~$1000 (~41.6 hours of training), it quickly becomes a lot more coherent and can solve simple math/code problems and take multiple choice tests. E.g. a depth 30 model trained for 24 hours (this is about equal to FLOPs of GPT-3 Small 125M and 1/1000th of GPT-3) gets into 40s on MMLU and 70s on ARC-Easy, 20s on GSM8K, etc.

My goal is to get the full "strong baseline" stack into one cohesive, minimal, readable, hackable, maximally forkable repo. nanochat will be the capstone project of LLM101n (which is still being developed).
I think it also has potential to grow into a research harness, or a benchmark, similar to nanoGPT before it. It is by no means finished, tuned or optimized (actually I think there's likely quite a bit of low-hanging fruit), but I think it's at a place where the overall skeleton is ok enough that it can go up on GitHub where all the parts of it can be improved. Link to repo and a detailed walkthrough of the nanochat speedrun is in the reply.

34 replies · 84 reposts · 2.2K likes · 329.9K views
Mike Morearty reposted
Riley Walz@rtwlz·
The plan? At dusk, 50 people went to San Francisco's longest dead-end street and all ordered a Waymo at the same time. The world's first: WAYMO DDOS
303 replies · 1.5K reposts · 31.1K likes · 2.1M views