will harris

7.1K posts

@boujeehacker

fullstack swe prev @linkedin, @joinodf OD50-1, @opensea, 💻 diaspora ✈ #atl, #aus, #sf, #mia, #nyc, 🛬

Atlanta, GA · Joined November 2020
1.2K Following · 791 Followers
Pinned Tweet
will harris@boujeehacker·
kinda shy about public stuff like twitter for a long time, but here it is: mostly me talkin about tech, remote, building great things, working with awesome people, #mia, #atl and #atx along with the other places in my bio, music, whiskey, the NBA, and whatevs comes to mind.
2 · 0 · 22 · 0
will harris retweeted
James Cowling@jamesacowling·
Y'all clowning on GitHub but the real lesson is that agents suck at scaling infra. Even the labs are struggling with their DBs. We're seeing the highest workloads in history but good architecture is currently still a human-bottlenecked activity. Make good infra choices.
32 · 36 · 437 · 35.3K
will harris retweeted
Aakash Gupta@aakashgupta·
The reason software eats RAM is the same reason factories used to dump chemicals in rivers. The cost is externalized. Every mass of inference compute shows up on an engineering manager's AWS bill, broken down to the cent, reviewed quarterly. Every mass of RAM consumed on YOUR machine shows up nowhere in anyone's budget.

Chrome could cut memory usage by 60% tomorrow and Google's revenue wouldn't move a single basis point. Docker's 2GB idle footprint costs Docker Inc. exactly $0. Electron's 500MB todo list costs the Electron team exactly $0. The user paid for the RAM. The user pays the electricity. The user deals with the fan noise. The company ships faster because they chose the laziest possible runtime.

The token-optimization obsession makes this even clearer. Companies optimize inference cost because inference cost hits their margins. They'll spend six months shaving 200ms off a model response. They won't spend six days reducing a desktop client's memory footprint because that memory belongs to someone else's hardware.

This is why the 16GB vs 32GB debate is a trap. You're asking consumers to buy more expensive hardware to subsidize the software industry's refusal to optimize for a resource they never have to pay for. The market will never fix this on its own. The people writing the checks and the people running out of RAM are on opposite sides of the transaction.
Chayenne Zhao@GenAI_is_real

unpopular opinion: 16GB is plenty if software engineers actually cared about memory efficiency. chrome eating 4GB for 12 tabs is not a hardware problem, it's a software disgrace. docker consuming 2GB idle is not a feature, it's laziness. we live in an era where people optimize every single token to save $0.001 on API costs but happily ship electron apps that eat 500MB to display a todo list. if the industry treated RAM the way we treat inference compute - obsessively measuring every byte - 16GB would feel luxurious. the hardware isn't the problem, the software is @adxtyahq

93 · 1.1K · 6.8K · 268.8K
will harris retweeted
Malte Ubl@cramforce·
To quote from my keynote at Vercel's internal offsite: Software is free as in puppies. It will pee in your bedroom and eat your furniture. The weight of every line of code is real. We will need to maintain it. We will need to port it. It goes into the context window. And somebody in this room will get paged at 2am because it did something unexpected.
Garry Tan@garrytan

Absolutely insane week for agentic engineering. 37K LOC per day across 5 projects. Still speeding up.

46 · 108 · 1.6K · 141.2K
will harris@boujeehacker·
@BrodieOnLinux There has been very little selection pressure for more efficient resource usage in software since the early smartphone days. Just cheaper to get better hardware. Macbook Neo might bring it back some, for access to that userbase.
0 · 0 · 8 · 665
will harris retweeted
こりま@korimakorima·
For Japanese people, NY-style cynicism, San Francisco-style romance obsession, and LA's "improve your body and mind and stay healthy" doctrine all get a reaction of "other countries have that stuff too." So it's the giant trucks and lavish BBQs of the places that urban America sneers at as "worthless backwater nowheres" that become "the America-like America we love."
Melissa Chen@MsMelChen

The best part is that the America that Japanese people adore the most… is the same one that coastal elites call “flyover trash” They’re not autistically LARPing NY cynicism or SF polycule / LA wellness culture. They’re drawn to the heartland of the American South and all its trappings - the jacked-up trucks, backyard BBQs, country radio, big skies and the friendly "yes ma'am" drawl.

214 · 1.8K · 19K · 2M
will harris retweeted
Nikita Bier@nikitabier·
“They’re losing faith in humanity. Release the wholesome Japanese posts.”
936 · 2.4K · 43.1K · 3.9M
will harris retweeted
Diffractor@Diffractor1·
@i_zzzzzz The Social Network directed by Wes Anderson vibes
0 · 2 · 92 · 2.3K
will harris retweeted
sarajo@SaraJChipps·
There was a moment in 2013 when your team was using jQuery UI, Trello, React 0.3.0, Coda, Basecamp, and Express. Peak software in every category. wonderful stack, you were young and happy and in love.
Steve Ruiz@steveruizok

there was a moment in 2023 when your team was using Figma, Linear, VS Code, Typescript, React 18, esbuild or Next.js with the pages router. Peak software in every category. wonderful stack. you were young and happy and in love

10 · 4 · 116 · 9.7K
will harris retweeted
Patrick Collison@patrickc·
When @karpathy built MenuGen (karpathy.bearblog.dev/vibe-coding-me…), he said: "Vibe coding menugen was exhilarating and fun escapade as a local demo, but a bit of a painful slog as a deployed, real app. Building a modern app is a bit like assembling IKEA furniture. There are all these services, docs, API keys, configurations, dev/prod deployments, team and security features, rate limits, pricing tiers."

We've all run into this issue when building with agents: you have to scurry off to establish accounts, clicking things in the browser as though it's the antediluvian days of 2023, in order to unblock its superintelligent progress. So we decided to build Stripe Projects to help agents instantly provision services from the CLI.

For example, simply run:

$ stripe projects add posthog/analytics

And it'll create a PostHog account, get an API key, and (as needed) set up billing.

Projects is launching today as a developer preview. You can register for access (we'll make it available to everyone soon) at projects.dev. We're also rolling out support for many new providers over the coming weeks. (Get in touch if you'd like to make your service available.)
184 · 275 · 3.6K · 1.4M
will harris retweeted
Aaron Levie@levie·
Jevons paradox is happening in real time. Companies, especially outside of tech, are realizing that they can now afford to take on software projects that they wouldn't have been able to tackle before, because AI now lets them do so. We're going to start to use software for all new things in the economy because it's incrementally cheaper to produce.

Marketing teams at big companies will have engineers helping to automate workflows. Engineers in life sciences and healthcare will automate research. Small businesses will hire engineers for the first time to build better digital experiences.

And as long as AI agents still require a human who understands what to prompt, how to review when an agent goes off the rails, how to guide it back, how to maintain the system that was built, how to fix the ongoing bugs, and more, we will still have humans managing these agents.

This is why all the advice you get about not going into engineering is wrong. The world is going to increasingly be made up of software, and the people who understand it best will be in a strong economic position. This will happen in other roles as well where output goes up and demand increases.
Lenny Rachitsky@lennysan

Engineering job openings are at the highest levels we've seen in over 3 years. There are over 67,000 (!!!) eng openings at tech companies globally right now, with 26,000 just in the U.S. We don't know if there would have been more open roles if not for AI, or if AI is actually leading to more open roles, but since the start of this year, the increase in open eng roles is accelerating even more.

224 · 655 · 4.7K · 1M
will harris retweeted
Jen Zhu@jenzhuscott·
Agree w Terence Tao - LLMs' limitations are structural. I've always said the usefulness of current AI correlates w the user's expertise, so the illusion of creativity can impress/fool non-experts. The current LLMs excel at Keplerian work (empirically testing many combinations via brute/compute scaling) but not Newtonian unification or genuine leaps. They act as a "super-assistant" for literature search, candidate generation, formalization, and exposition - freeing us for the creative core - but there is no evidence yet of autonomous originality at the frontier. Solving a Millennium Prize problem de novo w a genuinely novel technique (not latent in the corpus) would constitute such evidence; it has not occurred.
Valerio Capraro@ValerioCapraro

Terence Tao put it plainly: there is no evidence that LLMs exhibit genuine creativity. Yes, they have solved some Erdős problems. But these are low-hanging fruit, questions that attracted little attention and that yield once the right existing techniques are applied. That is not creativity. That is search plus recombination.

Yes, LLM outputs can look impressive. But look at who is impressed: typically non-experts. Experts know very well that LLM performance gets terrible when you approach the frontier of human knowledge. And this is not a temporary gap. It reflects a structural limitation.

We do not fully understand human creativity. But we do know a key property: conceptual leaps, the ability to generate new representations, not just recombine existing ones. LLMs do not do this. They interpolate in representation space. They operate within existing conceptual frameworks; they do not create new ones. This is why we haven't "yet seen them take the next step".

33 · 64 · 376 · 88.1K