Rob
@robirwinengr

942 posts

Designing and building robots at Standard Bots 🦾🇺🇸

CA · Joined April 2009
447 Following · 430 Followers

Pinned Tweet
Rob @robirwinengr ·
Finally able to share one of the things we have been working on for the past year. Designed and assembled in the USA 🇺🇸 you can see it in action at Automate!
[image]

0 replies · 0 reposts · 15 likes · 864 views
Edward Mehr @EdwardMehr ·
There’s a founder in manufacturing who apparently goes around telling people not to work with us. I keep hearing it from investors, vendors, media… enough times that it’s now a pattern. Obviously it doesn’t work on customers, because they just want something that helps them!

Part of me is amused, eating popcorn and watching how it unfolds. The other part feels bad for my fellow brother. It reads a lot like insecurity, especially coming from someone who’s never actually built hardware himself.

Anyone who has done manufacturing knows this industry is far too interconnected for that kind of behavior to work. It is just noise. You don’t win by blocking others. You win by helping others. Manufacturing is big and complex.

Founders in the space know who I’m talking about! I am not the only one he targets 😆
25 replies · 11 reposts · 362 likes · 62.9K views
Rob @robirwinengr ·
@tunguz Not yet
0 replies · 0 reposts · 0 likes · 70 views
Andrej Karpathy @karpathy ·
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
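[Editor's note: the "small and naive search engine over the wiki" mentioned above could look roughly like the sketch below. This is a minimal illustration, not Karpathy's actual tool; the function names and the TF-IDF scoring choice are assumptions. It indexes a set of markdown documents and ranks them against a query, which is the kind of CLI tool an LLM agent could call.]

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; crude but adequate for a personal wiki
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(docs):
    # docs: {filename: markdown text}
    # Returns per-document term counts plus document frequencies.
    index = {name: Counter(tokenize(text)) for name, text in docs.items()}
    df = Counter()
    for counts in index.values():
        df.update(counts.keys())
    return index, df

def search(query, index, df, top_k=3):
    # Rank documents by a simple TF-IDF overlap with the query terms.
    n_docs = len(index)
    scores = {}
    for name, counts in index.items():
        score = 0.0
        for term in tokenize(query):
            if counts[term]:
                idf = math.log((1 + n_docs) / (1 + df[term]))
                score += counts[term] * idf
        if score > 0:
            scores[name] = score
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

In practice the `docs` dict would be loaded by globbing the wiki's `.md` files; keeping the index in memory is fine at the ~100-article scale described above.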
2.8K replies · 7K reposts · 58.2K likes · 20.9M views
Rob @robirwinengr ·
@emm0sh Congrats but I was expecting CEO role tbh
1 reply · 0 reposts · 1 like · 257 views
em m0shouris @emm0sh ·
i’m happy to announce i’ll be joining dassault as their director of product management for solidworks! i’ll be working on features users love, such as lack of interoperability, increasing yearly license costs, and feature tree breaking after changing one fillet
37 replies · 8 reposts · 443 likes · 13K views
Rob @robirwinengr ·
@bcherny @teja2495 I’ve seen my app freeze and turn all black after the machine sleeps, and it no longer responds
0 replies · 0 reposts · 0 likes · 27 views
Boris Cherny @bcherny ·
@teja2495 Curious, what bugs are you hitting on desktop? Would love a short list
131 replies · 3 reposts · 220 likes · 103.9K views
Teja Karlapudi @teja2495 ·
With the Claude desktop app being very buggy for coding, I switched completely to the Claude Code CLI. It is excellent. I now understand why many people prefer the CLI over the desktop app.
53 replies · 7 reposts · 382 likes · 92.8K views
Rob reposted
Standard Bots @standardbots ·
Why do we think modular beats #humanoid?

We actually built a humanoid from scratch with the roboticists behind NASA’s Valkyrie and Robonaut. We learned that the number of unique parts is enormous, costs rise quickly, and lifespan shortens. Bottom line: ROI suffers.

So we designed five modular joint sizes that deploy as one arm or two, stationary, on a mobile base, or on a lift. Modular means you can reach 3x higher than a human, move faster, and operate longer, without paying for technology you don’t actually need. Who wants to spend an extra $13K on legs if the job doesn’t require them?

Manufacturing robots this way: standardbots.com

#manufacturing #robotics #madeinusa
4 replies · 9 reposts · 50 likes · 3.3K views
Rob @robirwinengr ·
@emm0sh Wow, ngl you got wrecked
0 replies · 0 reposts · 1 like · 82 views
Rob @robirwinengr ·
@Tibbzzee @deanwball Aren’t the AI labs doing the same thing as Karpathy?
1 reply · 0 reposts · 4 likes · 556 views
Steven Tibbs @Tibbzzee ·
@deanwball I feel like we have definitely reached the first version of AGI: v1, if you will. The speed of development from here to v2, v3, and v4 will be faster than we anticipated. Especially with automated research literally happening right now under Karpathy
1 reply · 2 reposts · 53 likes · 18.4K views
Zane Hengsperger @zanehengsperger ·
I am hiring a Head of Software Engineering at @noxmetals. The perfect role for someone who cares deeply about reindustrializing America, wants to supply every factory in America in <24 hours, and wants to write code integrated with machines/hardware.

You get:
- to hire a team
- a large compute budget
- hardware + vision systems to integrate

150-250k + equity. Detroit, in person, full time. DM me or apply via the next tweet.
29 replies · 34 reposts · 399 likes · 43.6K views
Rob @robirwinengr ·
@emm0sh Scope creep... Now you need a pic of it in space
English
1
0
3
192
em m0shouris @emm0sh ·
there’s been a lot of chatter in this space, but by far the best head on shoulders among the founders seems to be @akarshaurora, working on this: trymechai.com
English
1
2
13
1.4K
Rob @robirwinengr ·
@jordannoone Incredible stuff happening
English
0
0
0
7
Rob reposted
a16z @a16z ·
"Speed wins." "You have to be willing to commit to being fast. You can't have long bureaucratic processes. You can't have a risk-averse posture."

@pmarca explains the OODA loop and why the fastest operator controls the narrative in business, media, and politics:

"There's a framework called the OODA loop, originally developed for fighter pilots and later for broader military strategy."

"It stands for observe, orient, decide, act. It's basically the decision-making cycle."

"If speed is the thing that matters, then the person who gets through that cycle the fastest is the one who's going to win."

"If you can have a sustainably faster OODA loop processing cycle than the next guy — think about what happens… You operate and make a decision within an hour. The other guy is still inside his own OODA loop when you make your decision. He's only halfway through his process and now has to start over. You've changed the parameters of what's going on."

"This is also a big explanation for what's happened in traditional media."

"The New York Times has its own OODA loop, and it's like 24 hours to go through its process."
144 replies · 545 reposts · 4.3K likes · 346.1K views
Rob @robirwinengr ·
@FilArons Series E oversubscribed
0 replies · 0 reposts · 1 like · 77 views
Fil Aronshtein @FilArons ·
Just vibecoded this new AI CAD tool with Claude in 2 hours and check out what it gave me. Software is so over
[image]

7 replies · 0 reposts · 79 likes · 6.9K views