hanami
@0xHanami

312 posts

dev & design

Joined October 2021
1.4K Following · 96 Followers
hanami retweeted
Lenny Rachitsky @lennysan ·
@gokulr I rarely disagree with you, but I do on this. I expect design to become a differentiator as the quantity of software increases. And anyone that’s worked with a design system has seen how much you still need to “design” the product and experience. Great design is hard.
15 replies · 15 reposts · 436 likes · 21.5K views
hanami retweeted
Jordan Singer @jsngr ·
@gokulr if you mean pure product designer who only draws pixels, agree. you need to become a builder too but more broadly, design won't just disappear, and the discipline and skillset will become more relevant than ever to separate those who don't care from the ones who do
3 replies · 2 reposts · 176 likes · 13K views
hanami retweeted
Cheng Lou @_chenglou ·
My dear front-end developers (and anyone who’s interested in the future of interfaces): I have crawled through depths of hell to bring you, for the foreseeable years, one of the more important foundational pieces of UI engineering (if not in implementation then certainly at least in concept): Fast, accurate and comprehensive userland text measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurements and reflow
1.3K replies · 8.3K reposts · 65.5K likes · 23.8M views
hanami retweeted
Boris Cherny @bcherny ·
I'm Boris and I created Claude Code. I wanted to quickly share a few tips for using Claude Code, sourced directly from the Claude Code team. The way the team uses Claude is different than how I use it. Remember: there is no one right way to use Claude Code -- everyone's setup is different. You should experiment to see what works for you!
927 replies · 5.9K reposts · 51K likes · 9.2M views
Blas @BlasMoros ·
we’re opening a small batch of @wabi invites just comment below with the first app you want to build and i'll DM you an invite code
88 replies · 6 reposts · 74 likes · 16.9K views
hanami retweeted
Periodic Labs @periodiclabs ·
We are proud to announce @periodiclabs. Our mission is to accelerate science.

Our founding team co-created ChatGPT, DeepMind's GNoME, OpenAI's Operator (now Agent), the neural attention mechanism, and MatterGen; has scaled autonomous physics labs; and has contributed to important materials discoveries of the last decade.

We're fortunate to be backed by investors who share our vision, including @a16z, who led our $300M round, as well as @felicis, DST Global, NVentures (NVIDIA's venture capital arm), @Accel, and individuals including @eladgil, @ericschmidt, @JeffDean, and @JeffBezos.

We've come together to scale up and reimagine how science is done. If you want to help build the first generation of AI scientists, we're hiring.
Liam Fedus @LiamFedus

Today, @ekindogus and I are excited to introduce @periodiclabs. Our goal is to create an AI scientist.

Science works by conjecturing how the world might be, running experiments, and learning from the results. Intelligence is necessary, but not sufficient. New knowledge is created when ideas are found to be consistent with reality. And so, at Periodic, we are building AI scientists and the autonomous laboratories for them to operate.

Until now, scientific AI advances have come from models trained on the internet. But despite its vastness, the internet is still finite (estimates are ~10T text tokens, where one English word may be 1-2 tokens), and in recent years the best frontier AI models have fully exhausted it. Researchers seek better uses of this data, but as any scientist knows: though re-reading a textbook may give new insights, you eventually need to try your idea to see if it holds.

Autonomous labs are central to our strategy. They provide huge amounts of high-quality data (each experiment can produce GBs of data!) that exists nowhere else. They generate valuable negative results, which are seldom published. But most importantly, they give our AI scientists the tools to act.

We're starting in the physical sciences. Technological progress is limited by our ability to design the physical world. We're starting here because experiments have high signal-to-noise and are (relatively) fast, and physical simulations effectively model many systems; more broadly, physics is a verifiable environment. AI has progressed fastest in domains with data and verifiable results, for example in math and code. Here, nature is the RL environment.

One of our goals is to discover superconductors that work at higher temperatures than today's materials. Significant advances could help us create next-generation transportation and build power grids with minimal losses.
But this is just one example: if we can automate materials design, we have the potential to accelerate Moore's Law, space travel, and nuclear fusion.

We're also working to deploy our solutions with industry. As an example, we're helping a semiconductor manufacturer that is facing issues with heat dissipation on its chips. We're training custom agents for their engineers and researchers to make sense of their experimental data in order to iterate faster.

Our founding team co-created ChatGPT, DeepMind's GNoME, OpenAI's Operator (now Agent), the neural attention mechanism, and MatterGen; has scaled autonomous physics labs; and has contributed to some of the most important materials discoveries of the last decade. We've come together to scale up and reimagine how science is done.

We're fortunate to be backed by investors who share our vision, including @a16z, who led our $300M round, as well as @Felicis, DST Global, NVentures (NVIDIA's venture capital arm), @Accel, and individuals including @JeffBezos, @eladgil, @ericschmidt, and @JeffDean. Their support will help us grow our team, scale our labs, and develop the first generation of AI scientists.

50 replies · 94 reposts · 634 likes · 197.6K views
hanami retweeted
Marcin Krzyzanowski @krzyzanowskim ·
Apple censoring my engraving was not on my bingo card today
92 replies · 199 reposts · 14.1K likes · 6.1M views
hanami retweeted
Madhu Guru @realmadhuguru ·
At @Google, we are moving from a writing‑first culture to a building‑first one. Writing was a proxy for clear thinking, optimized for scarce eng resources and long dev cycles - you had to get it right before you built. Now, when time to vibe-code prototype ≈ time to write PRD, PMs can SHOW not tell. Role profiles are blurring, creativity and building are happening in parallel.
205 replies · 420 reposts · 4.8K likes · 641.4K views
hanami retweeted
Eric Buess @EricBuess ·
Hey! I have used all the major agent IDEs and compared them against one another on their max/pro paid subscription plans. I've been vibe coding since long before it was a term, I think since around when the first Codex came out in 2021. I know you are very experienced in this domain as well, and I have absolutely loved watching your content since the early days!

Each agentic editor has its trade-offs. The models they support and the scaffolding they provide on top of them change so often, with each tool leapfrogging the abilities of its competitors at various points, that it's hard to solidly recommend any single tool for a large group of people. I'm not saying that my preferred tool of the moment is the best in all cases for all people.

Here's what I've learned: I spent months building my own custom agentic IDE using all the major models that integrate into existing tools like Cursor, Windsurf, and Visual Studio Code. It was a standalone tool built with Python and Node. This experience brought me to some conclusions about what really makes a good agentic editor and collaborator, and how to optimally design workflows to get the most out of an AI model while avoiding as much friction as possible.

One of the most important things when asking for a change is making sure that the model has as much signal and as little noise as possible. In other words, every token of superfluous or irrelevant content that doesn't help it understand exactly the context needed to solve the problem is a distraction that lowers the quality of the output and introduces pain for the human.

The largest pain point I see from these tools is creating too many new files, or writing new code where an existing class or method should simply have been updated or lightly refactored. This problem is exacerbated by test-time-compute models whose reinforcement learning included a lot of examples and rewards around writing extra code to solve the problem.
Even the smartest model, say, a powerful superintelligence, is going to have to make assumptions where we leave ambiguity. Part of our job as communicators and directors of these agentic systems is to specify exactly what we do and do not need. Where ambiguity remains, these very intelligent models will carry on with their assumptions and produce extraneous code or duplicate implementations, exactly to the degree that we did not tell them to do otherwise or did not provide the appropriate code in the context.

This brings me to what I love about Claude Code. There may be a simple way to get Windsurf and Cursor to read in the entire content of a large file that I tag with an @ reference, but I spent far too many hours over far too many weeks trying to get them to do so. Because these tools use RAG to read in chunks of data from long files, they quite often miss relevant chunks. And while the vector-search retrieval mechanisms are quite spectacular compared to the standards of a year ago, I still find myself in a constant battle to get just the right amount of context, without the extraneous code, to the model.

Claude Code, on the other hand, doesn't index the code in a vector store at all. It is just about as close to the bare model as an agentic editor can be: it basically has some instructions for how to use POSIX commands in a bash tool. It has no problem whatsoever reading in the entire contents of a file, and it does an excellent job of understanding semantically which files should be searched for and finding them.

This is not to say that it is perfect; there's still quite a lot of room for improvement, and I still find myself fighting with it to keep the file count from creeping up. However, with a few simple instructions in the Claude config file in the home directory or in the project directory itself, it rarely if ever suffers from this problem. I also really love Claude Code's rate of improvement.
They are rolling out updates every couple of days. One of my favorites from the last few weeks is auto-compact, which means I never need to start a new Cascade or agent conversation like I would in Windsurf or Cursor. It automatically compacts the conversation, generating a summary of what has been done and what's next, without me needing to prompt it. And if I feel the next task may push it toward its context limit to the point that output quality might drop, I can preemptively trigger the compact command.

Also, when I start Claude in a directory, I can do so with a flag that prevents it from asking me to verify things explicitly. I know there are checkboxes for this in the other agentic IDEs, but some people are not aware that Claude Code has this as well. Vibe coding is fully unlocked.

Another favorite feature: when the project loads, it reads something akin to Cursor rules or Windsurf rules, a CLAUDE.md file that is read in both when Claude is started and when you run the /clear command. Any files referenced in that CLAUDE.md with an @ symbol prepended will be read into context as well.

I also love how it can be opened with the -p flag and given a task to accomplish, so you can pipe claude -p commands together in a chain. It's super powerful. Claude can spin up its own subagents, or you can have it write scripts to launch them however you like, or trigger them from Claude Desktop, which you can give filesystem and bash access to whatever directories you want. It also supports / commands you can define by adding simple files to the project. You can run multiple Claude Code instances in parallel.
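The CLAUDE.md memory file and the -p piping workflow described above can be sketched roughly like this (a sketch only; the file contents, paths, and prompts are illustrative, not canonical):

```shell
# CLAUDE.md is read when Claude Code starts and again after /clear.
# A line beginning with @ pulls the referenced file into context too.
cat > CLAUDE.md <<'EOF'
# Project conventions
- Prefer editing existing files over creating new ones.
- Refactor in place; keep the file count stable.
@docs/architecture.md
EOF

# -p runs a single non-interactive task and prints the result to
# stdout, so claude invocations compose with ordinary shell pipes:
claude -p "List every TODO comment in src/" \
  | claude -p "Turn this list into a prioritized task plan"
```

Because -p writes to stdout, any ordinary shell tooling (tee, xargs, cron) slots in around it.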
I even play with having the root instance be an orchestrator that just defines interfaces, a shared folder with a few files, and the CLAUDE.md files for each subdirectory, and then spins up new Claude instances in those subdirectories to accomplish the tasks they find when they read their own dir's CLAUDE.md. There are so many different ways to use it; it's all about a well-defined CLAUDE.md. You can use -c or -r to continue or resume specific previous sessions. You can use # to add things to CLAUDE.md at each level. It supports vim keys. It can run in the integrated terminal of any IDE that supports that.

I love the terminal and am comfortable in tools like tmux and vim, but I sometimes want a GUI to see files. So I tend to use iTerm2 for native bell notifications when it's done working, and the Zed editor for a silky-smooth IDE without the Electron bloat. Zed Preview is adding good agentic features.

All the IDEs support MCP servers these days, but the Claude Desktop app and Claude Code are from the company that built the protocol and provide about as much of the full functionality as I need. I use Claude Desktop in conjunction with Claude Code often. It checks my emails and schedules school events on my calendar, so I know when to have my kids where. It's tied into 15 different MCP servers and is a central hub for all the agentic tasks that I can trigger remotely if needed. The Claude mobile app integrates with connections established on the website, and I believe (and hope) it will eventually integrate with remote MCP servers through claude.ai.

Claude Artifacts, inline editing with much more context and speed than Canvas, and rendering, publishing, and sharing web artifacts are really incredible features. On top of all this, Claude research now consistently returns over a thousand websites in a search, and I regularly compare its results to all the other research tools from the major providers. It is not behind anymore. It is a new product but has caught up in quality in the last week.
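A minimal sketch of the orchestrator pattern described above, assuming a root instance has already written a per-directory CLAUDE.md for each worker (the directory names and prompts are hypothetical):

```shell
# Each subdirectory carries its own CLAUDE.md scoping that worker's task.
# Launch one headless Claude Code instance per directory, in parallel.
for dir in services/api services/worker; do
  (cd "$dir" && claude -p "Read CLAUDE.md and complete the tasks it lists") &
done
wait  # block until every background instance has finished

# Session management mentioned above:
claude -c   # continue the most recent session in this directory
claude -r   # resume a specific earlier session
```

The subshell-plus-& idiom keeps each instance's working directory isolated, and `wait` gives the orchestrator a clean synchronization point before it inspects the shared folder.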
Another great strength is the power of the Claude 3.7 Sonnet model with extended thinking. I know there are benchmarks and ELO scores, and people across the industry swear by their favorite model. But if you look at SWE-bench Verified scores, and the way those scores were assessed compared to the other models, nobody is really beating Claude 3.7 Sonnet. The most recent reporting I've heard out of the major agentic IDE companies is that their devs still tend to prefer Claude 3.7 Sonnet internally. I really enjoy how I can just tell Claude to "think" or "think deeply" and that will allocate more compute to the task at hand. And Claude models are released about every 3-4 months on average, so it won't be long until we get another big update.

And this brings me to a penultimate point. I've tested many different projects across all of the major agentic IDEs: same projects, same starting positions, same instructions, same prompts. Claude 3.7 Sonnet may not always win, but it's never at the bottom. However, Claude 3.7 Sonnet within Claude Code is almost always at the top. There's something to be said for the company that makes the model also building the scaffolding and providing the tool and the low-level interface to the model. For example, they could be allowing API parameters that unlock portions of the model not accessible to other IDEs. I'm not saying they are, but the performance seems better when run through Claude Code. This is likely just down to how they prompt the model, the tools they choose to use, and the benefits of not indexing the files and retrieving chunks. But I'm just speculating.

A very important reason for me to prefer Claude Code is that it uses the token allocation assigned every 5 hours to my Claude Max subscription.
It was such a problem for me, as I'm sure it is for many others, to have to pay for subscriptions to web interfaces and to agentic IDEs, and sometimes for API requests on top. I wrote those custom extensions and tools to make my own agentic IDE to avoid paying additional fees: a local API endpoint that the IDEs could call with their agents, or that I could call with my own tools, using a browser extension to interact with the web interfaces and stream data back and forth.

I made a request to Anthropic, via a Zoom call and a DM, asking if they could allow some of the token allocation from our web subscriptions to be used by agentic editors. If they did, we could pay only for the web-interface subscription and nothing additional for API requests. And that is what they have done with their Claude Max subscription and Claude Code. I don't need to pay $20/month for some agentic IDE to unlock pro models when I have a claude.ai Max subscription. Claude Code just uses the tokens available on that subscription, and it resets every 5 hours. I've yet to hit the limit in any 5-hour period. So it's essentially free, unlimited access to a top-tier coding agent, baked into the price of a subscription I would already be paying for.

But there is one last note that is of incredible importance, and it is only tangential to Claude Code itself. I'm a big fan of Anthropic because their core mission and reason for existence is the best hope I think we have for a long-term stable society. Without their leadership in steerable, harmless, constitutional AI, constitutional classifiers, and the vast amount of money and time they're putting into mechanistic interpretability research, I don't see much hope that we will understand what's happening inside powerful AI systems before they attain fast takeoff through iterative self-improvement with fewer and fewer humans in the loop.
Current AI systems are largely jailbreakable, which means that if the next generation or two of models are still jailbreakable, it's almost guaranteed they'll end up in the hands of thousands, then millions, of extremists who have the motive to harm their ideological opposites and lack only the means. The AI systems they will have access to will both reduce the friction of attaining those means and remain jailbreakable with a prompt copied off the internet, unless we find a solution now. I'm not just talking about things anyone can find with a Google search: foundation models trained in chemistry, or powerful AI systems that understand physics and math, can derive solutions that cause harm on a massive scale.

I'm not saying I'm against everyone having access to AI systems. Open source is an antidote to the tyranny that comes with the consolidation of power. But open source without the ability to understand what's happening inside powerful, superintelligent systems hands enormous capacity for harm to the large number of people who, over time, will wish to do it.

I believe the work Anthropic is doing is urgent. All things being equal, I would rather give more of my money to the company that is giving us the best hope of creating models that cannot be jailbroken by extremists copying a prompt off the internet. This research is being, and will continue to be, disseminated to all the other model providers for integration into systems that will make the world on average far safer. That is my hope and conviction, and it's why I'm engaged in the Anthropic Bug Bounty Program, which I need to get back to now.

I have four children and expect to have more descendants. I feel a moral obligation to invest in the company that I believe has the best shot at helping us all make it through this potential AI great filter. I hope others can start to see this and vote for safety with their money.
34 replies · 25 reposts · 265 likes · 48.1K views
hanami retweeted
Brian Chesky @bchesky ·
Flat design is over. The future is colorful and dimensional.
605 replies · 796 reposts · 12.5K likes · 2M views
hanami retweeted
Suhail @Suhail ·
Want to feed my eyes great design each day. Best designer you aspire to hire, work with, become, envy? Only pick one.
37 replies · 6 reposts · 165 likes · 35.9K views
hanami retweeted
Andrej Karpathy @karpathy ·
"Chatting" with LLM feels like using an 80s computer terminal. The GUI hasn't been invented, yet but imo some properties of it can start to be predicted. 1 it will be visual (like GUIs of the past) because vision (pictures, charts, animations, not so much reading) is the 10-lane highway into brain. It's the highest input information bandwidth and ~1/3 of brain compute is dedicated to it. 2 it will be generative an input-conditional, i.e. the GUI is generated on-demand, specifically for your prompt, and everything is present and reconfigured with the immediate purpose in mind. 3 a little bit more of an open question - the degree of procedural. On one end of the axis you can imagine one big diffusion model dreaming up the entire output canvas. On the other, a page filled with (procedural) React components or so (think: images, charts, animations, diagrams, ...). I'd guess a mix, with the latter as the primary skeleton. But I'm placing my bets now that some fluid, magical, ephemeral, interactive 2D canvas (GUI) written from scratch and just for you is the limit as capability goes to \infty. And I think it has already slowly started (e.g. think: code blocks / highlighting, latex blocks, markdown e.g. bold, italic, lists, tables, even emoji, and maybe more ambitiously the Artifacts tab, with Mermaid charts or fuller apps), though it's all kind of very early and primitive. Shoutout to Iron Man in particular (and to some extent Start Trek / Minority Report) as popular science AI/UI portrayals barking up this tree.
[media]
395 replies · 802 reposts · 7.1K likes · 737.6K views
Lauren McCann Ryan @lemmccann ·
Stoked to have @zoink, @ivanhzhao & @joshm - 3 of the most forward-thinking founders of our generation - together for a panel this Friday! @DevinLewtan @mayanjb and I are co-hosting "Tools for the Future: Your Best Semester with @figma, @browsercompany & @NotionHQ". I'll be asking them to dive deep into how they are building their iconic companies, share learnings for the next generation of makers, and revisit their college days. Any specific questions you want me to ask?! Hope to see you there! 💛 figma.zoom.us/webinar/regist…
[GIF]
5 replies · 11 reposts · 81 likes · 46.1K views
hanami @0xHanami ·
@Mortdog It makes the game better, but as QoL I feel like we need like 1 more item bench space.
0 replies · 0 reposts · 0 likes · 56 views
Riot Mort @Mortdog ·
Now that the patch has been live for almost the full 2 weeks, how are you feeling about the Item Remover change?
114 replies · 6 reposts · 437 likes · 90.3K views
Riot Games @riotgames ·
We're sad to learn that Sam Mowry, the voice of Rhaast, passed away this weekend. For the voice of a Darkin, Sam was a bastion of light. It was an honor to work with him and he will live on as a part of Runeterra forever.
[media]
493 replies · 5.2K reposts · 48.9K likes · 2.1M views
hanami @0xHanami ·
@RiotAugust I think it would be cool if Hard and Expert had exclusive unlockables such as champs or weapons.
0 replies · 0 reposts · 0 likes · 39 views
August @RiotAugust ·
We're continuing to tune Swarm for release. TY for all the PBE testing! Here's what's going in today (6/27)!

- Fixed even more bugs in game and in the client!
- Added a clearer minimap icon for dead players
- Yuumi Augments are now properly locked behind their achievements

---Difficulty---
- Enemy health and damage has been increased in later difficulties.
  - HP by difficulty: 1/2.5/5x >>> 1/3/7x
  - Damage by difficulty: 1/1.8/2.5x >>> 1/2/3x
- Enemy spell damage (the stuff you can dodge) has been reduced on the final difficulty
  - Spell damage by difficulty: 1/1.5/3x >>> 1/1.5/2x
- Enemy HP has been reduced in solo play.
- Players must now defeat all 4 bosses on Hard to unlock Extreme
- Gold income on harder difficulties has been reduced
  - Hard: 2x >>> 1.5x
  - Extreme: 4x >>> 2.25x
- Yuumi and Bel'Veth quests no longer give gold when started
- MF buffs now give 15 gold when picked up

---Maps---
- Map A is easier late-game
  - Wave 18 shield guys spawn less frequently
  - Wave 19 golems spawn less frequently
- Map B is harder mid-game
  - Wave 10 spawns more enemies
  - Wave 13 spawns significantly more enemies

---Bosses---
- Bosses now have healthbars above them
- Bosses now enrage after 5 minutes, massively increasing their damage (Aatrox enrage is 5 min per phase)
- Rek'Sai fight has been reworked. During the burrowed phase she now tunnels around the arena dropping rocks and shooting prey seekers instead of trying to chase players down. Kill the tunnel to get her to resurface.
- A percent of the damage dealt to Bel'Veth's coral is now dealt to her as well.
- The Gas circle in Bel'Veth's fight closes in slightly faster
- The Gas in Bel'Veth's fight now deals ramping damage (no hiding in it)
- Aatrox no longer heals
- Fixed a bug where Aatrox could permanently bind you to a location if his pillar is killed too fast
- Aatrox's tidal wave now deals ramping damage (DO NOT stand in the ocean, or go to the beach, or go outside, or look at the sun)

---Champions---
Jinx
- Passive MS is now MUCH faster with a slight decay
- Passive duration: 10 >>> 8
Riven has been nerfed
- Damage on passive has been adjusted to scale worse with Area size
- Shield from passive has been adjusted to scale worse with damage
- Weapon no longer gains charge while it is casting.
Aurora QoL
- Aimed missiles from Aurora's weapon now accelerate faster
- E can now be recast to end early
- E now deals damage to enemies Aurora passes through
- R now makes her faster and MORE untargetable when used to teleport
- No longer has her unethically obtained Armor + Armor/lvl from SR
- Fixed a number of bugs where Aurora could permanently lose her weapon after death

---Weapons---
Tibbers
- Fixed a bug where you could spawn TWO Tibbers
Hollow Radiance
- Evolve explosion damage capped at 100
Gatling Gun
- Evolve does 10% extra damage to frozen targets
Statikk Sword
- Damage: 100-360 >>> 100-460

---Stats---
- Crit: 10-50% >>> 8-40%
- HP: 200-1000 >>> 150-750
[media]
61 replies · 22 reposts · 349 likes · 82.7K views
hanami @0xHanami ·
@jsngr You've been an absolute inspiration. Can't wait to see what you continue to build!
0 replies · 0 reposts · 0 likes · 34 views
Jordan Singer @jsngr ·
if you’re an aspiring designer-founder, i hope it inspires you to start something new
7 replies · 1 repost · 211 likes · 26.8K views