Bryan Altman

2.3K posts

@altmbr

angel investor and startup builder | prev chainvine (backed by slow), setter (backed by sequoia, sold to thumbtack), mckinsey

San Francisco · Joined October 2011
957 Following · 1.7K Followers
Bryan Altman @altmbr
Insane partnership with WealthSimple. Any Canadian gets prompted to trade on their platform. Curious about the magnitude/structure of the affiliate fee.
0 replies · 0 reposts · 1 like · 27 views
Bryan Altman @altmbr
Testing new feature: bitcoin:native
1 reply · 0 reposts · 0 likes · 32 views
Bryan Altman @altmbr
@robjama Herman Miller Sayl chairs. LG ultrawide monitors. Extraordinary number of phone booths. Culture of co-education.
0 replies · 0 reposts · 1 like · 48 views
Robleh @robjama
thinking about what the dream builder space in Toronto looks like.
- probably in the spadina area
- curated coworking for startups and free agents
- regular educational workshops and meetups
- half court that doubles as event space
- media studio for podcasts and video production
- gym with squat rack and pull up bars
- premo coffee and a majlis for the vibes
what's missing?
59 replies · 5 reposts · 204 likes · 15.1K views
Bryan Altman @altmbr
Excited to be a day 1 angel in @botsnbezels's @FoundryRobotics. Foundry is rebuilding American manufacturing: AI-first, assembly-focused, dual-use, end-to-end, and robotics-native from the ground up. A timely mission.

And Adarsh is the man to lead it: trained at GRASP, early at Ghost Robotics, then a leader of robotics at Scale. He's also son to a manufacturing father and believed so strongly in the mission that he leased an office and rented equipment prior to raising a dime. Visiting him at that loft prior to the closing of this fundraise, where we built Vention workcells and modeled the future, was when I formed conviction on the potential.

It's been a couple months and the team and the demand are incredible. Thanks @botsnbezels for having me along for the ride. Big things coming!
Foundry Robotics@FoundryRobotics

American manufacturing is one of the hardest problems of our generation. We're here to solve it. Today we announce $19M in seed funding, backed by @khoslaventures, @hanabicapital, @redglassvc, @ZeroShotFund, and all our other incredible investors. AI-first. Software-defined. Just getting started. We're hiring.

1 reply · 0 reposts · 6 likes · 375 views
Claude @claudeai
Introducing Claude Managed Agents: everything you need to build and deploy agents at scale. It pairs an agent harness tuned for performance with production infrastructure, so you can go from prototype to launch in days. Now in public beta on the Claude Platform.
2.1K replies · 6K reposts · 57K likes · 21.3M views
0xSero @0xSero
One thing I've done this year is:
- Download all my X data from settings/account
- Download all my youtube, gmaps, gmail, google from takeout.google.com
- Download all my personal data from Claude, ChatGPT
- Export a copy of every AI session on Cursor, Claude Code, Codex, Droid, Opencode, etc.
- Take pictures of every legal document over my entire life
- Search and download every online public record of myself
- Export all my apple health data
- Every line of code and diff I could get locally and via github

I then RAGged it all and connected my openclaw to it. Now I just ask super basic questions and get some deep knowledge of myself:
"How much did I spend on trading cards in 2022"
"How many kms did I walk in the last 3 months"
"What are the most common dumb mistakes I make while coding?"
"How many tokens did I spend on these 5 repos this year"
Amazing
38 replies · 47 reposts · 824 likes · 59.6K views
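The workflow above boils down to: export everything to local files, index it, then query. A minimal sketch of the "index and query" half, using plain word-overlap scoring as a stand-in for the real embedding-based RAG setup; the file names and contents here are invented for illustration:

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens; good enough for a toy index."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(docs):
    """docs: {doc_name: text}. Returns per-document token counts."""
    return {name: Counter(tokenize(text)) for name, text in docs.items()}

def search(index, query, k=3):
    """Rank documents by how often they mention the query terms."""
    terms = tokenize(query)
    scored = sorted(
        ((sum(counts[t] for t in terms), name) for name, counts in index.items()),
        reverse=True,
    )
    return [name for score, name in scored[:k] if score > 0]

# Simulated exports: in practice these would be files pulled from X,
# Google Takeout, Apple Health, chat-history dumps, etc.
docs = {
    "apple_health_2025.md": "walked 412 km total over the last 3 months",
    "card_ledger_2022.md": "spent $1,840 on trading cards in 2022",
    "coding_notes.md": "common mistakes: off-by-one errors, stale imports",
}
index = build_index(docs)
print(search(index, "trading cards 2022"))
```

A real version would chunk documents, embed the chunks, and hand the top hits to an LLM as context; the retrieval shape stays the same.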
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.8K replies · 6.7K reposts · 56.2K likes · 19.9M views
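The "data ingest" step described above — incrementally compiling raw/ documents into a wiki/ of .md articles plus an index — can be sketched roughly as the loop below. The `summarize` stub stands in for the actual LLM call, and the file layout is an assumption, not the author's exact setup:

```python
import tempfile
from pathlib import Path

def summarize(text):
    """Stand-in for the LLM call that writes a wiki article.
    A real setup would prompt a model with the full raw document."""
    first_line = text.strip().splitlines()[0]
    return f"Summary: {first_line}"

def compile_wiki(raw_dir, wiki_dir):
    """Incrementally 'compile' raw/*.md into wiki/ articles plus an index.
    Only (re)writes articles whose source is newer than the compiled copy."""
    raw, wiki = Path(raw_dir), Path(wiki_dir)
    wiki.mkdir(parents=True, exist_ok=True)
    index_lines = ["# Index", ""]
    for src in sorted(raw.glob("*.md")):
        article = wiki / src.name
        if not article.exists() or article.stat().st_mtime < src.stat().st_mtime:
            body = summarize(src.read_text(encoding="utf-8"))
            article.write_text(
                f"# {src.stem}\n\n{body}\n\n[source](../raw/{src.name})\n",
                encoding="utf-8",
            )
        index_lines.append(f"- [[{src.stem}]]")  # Obsidian-style wikilink
    (wiki / "index.md").write_text("\n".join(index_lines) + "\n", encoding="utf-8")

# Tiny demo in a temp directory
root = Path(tempfile.mkdtemp())
(root / "raw").mkdir()
(root / "raw" / "attention.md").write_text("Transformers use attention.\nMore text.")
compile_wiki(root / "raw", root / "wiki")
```

The mtime check is what makes the compile incremental: re-running it only touches articles whose raw sources changed, which is also what keeps repeated LLM passes cheap.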
Pete Florence @peteflorence
Today, we announce GEN-1 and tell more of our story. It is truly an amazing time to be alive. The level of creativity in intelligence we are seeing is crossing into new levels.

Not every task we try can be mastered today. But many can, and we now have multiple cases where the models are coming up with entirely new strategies to solve tasks. These improvisations feel more like "ideas", rather than slight adjustments.

As one example, we did a task shoving plushy toys into bags on a conveyor belt. In finetuning the model, it was trained to use one hand to open the bag, the other hand to shove in the plushy – nothing fancy. But once when testing the model, the plushy didn't make it all the way in, and the model had the idea to simply pick up the bag with both hands and shake it so the plushy settled down into the bag. In testing other skills too, we've seen similar emergent strategies where the model coordinates both hands to solve tasks – see the examples with the metal washers in the GEN-1 videos.

For language models, it was these glimmers of creativity that lit the spark for many in the GPT-3 era. We are very excited about what's ahead. Amazing work by the whole Generalist team on this model.
Generalist@GeneralistAI

Introducing GEN-1. Our latest milestone in scaling robot learning. We believe it to be the first general-purpose AI model to master simple physical tasks. 99% success rates, 3x faster speeds, adapts in real time to unexpected scenarios, w/ only 1 hour of robot data. More🧵👇

19 replies · 16 reposts · 201 likes · 23.5K views
Bryan Altman @altmbr
@gavinpurcell Why not tell the story after you've run the $3m/day secret money printing machine a bit longer?
2 replies · 0 reposts · 5 likes · 646 views
Gavin Purcell @gavinpurcell
@altmbr counter-point: it's one guy who isn't operating like a corporation and... maybe he sees the end in sight. Why not get your story out there now?
2 replies · 0 reposts · 1 like · 1.1K views
Bryan Altman @altmbr
So I don't think this can be real. It would make no sense that he'd want to share this data, including that he works with CareValidate as his telemedicine provider, if this were true. If you really have this secret $3m/day money printing machine, the absolute worst thing you can do is tell the NYT how you're doing it!!
nic carter@nic_carter

first vibecoded billion-dollar company?

16 replies · 4 reposts · 123 likes · 31K views
Elijah Moore @elijahmoore28
@altmbr There seems to be some truth stretching in this article for sure
1 reply · 0 reposts · 10 likes · 1.1K views
Bryan Altman reposted
Chelsea Finn @chelseabfinn
Agents like Claude Code do amazing things relying critically on hand-engineered harnesses.
We show how agents can optimize the harness w.r.t. end performance.
Key idea: store all experience in a filesystem & allow agents to selectively inspect it.
Paper: yoonholee.com/meta-harness/
Yoonho Lee@yoonholeee

How can we autonomously improve LLM harnesses on problems humans are actively working on? Doing so requires solving a hard, long-horizon credit-assignment problem over all prior code, traces, and scores. Announcing Meta-Harness: a method for optimizing harnesses end-to-end

26 replies · 44 reposts · 402 likes · 60K views
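The key idea in the thread above — persist every episode to the filesystem so an agent can scan a cheap overview and load full traces only when needed — can be illustrated with a toy store. The class, file layout, and JSON schema here are invented for illustration and are not the paper's actual format:

```python
import json
import tempfile
from pathlib import Path

class ExperienceStore:
    """Append agent episodes as files; inspect selectively instead of
    stuffing every trace into the context window."""

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def append(self, task, trace, score):
        """Persist one episode (trace + outcome score) as a JSON file."""
        n = len(list(self.root.glob("ep_*.json")))
        path = self.root / f"ep_{n:05d}.json"
        path.write_text(json.dumps({"task": task, "trace": trace, "score": score}))
        return path

    def listing(self):
        """Cheap one-line-per-episode overview the agent scans first."""
        out = []
        for p in sorted(self.root.glob("ep_*.json")):
            ep = json.loads(p.read_text())
            out.append(f"{p.name}: task={ep['task']} score={ep['score']}")
        return out

    def inspect(self, name):
        """Full trace, loaded only for episodes the agent decides matter."""
        return json.loads((self.root / name).read_text())

store = ExperienceStore(tempfile.mkdtemp())
store.append("fix-bug", ["read file", "edit", "run tests"], 0.4)
store.append("fix-bug", ["run tests first", "edit", "run tests"], 0.9)
# Scan the overview, then pull full traces to find the best-scoring episode
best = max(
    (store.inspect(line.split(":")[0]) for line in store.listing()),
    key=lambda ep: ep["score"],
)
```

The two-tier access pattern (cheap listing, on-demand inspect) is what lets prior code, traces, and scores accumulate without blowing up the context given to the agent.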
Bryan Johnson @bryan_johnson
I think peptides are popular because they give people a feeling of power and control. One feels helpless when they can't sleep, stop scrolling, eat well or exercise consistently. A few injections wrestle back a feeling of control. Evidence shows that injections amplify perceived agency (the ritual potency of administration). This creates a dangerous situation where powerful compounds are being used less for biomarker improvement and more for psychological wellbeing.

This is what you want: closed loop.
> intervention (peptide) > biological change > measured biomarker > adjustment

How most people are using peptides: open loop.
> intervention (peptide) > subjective feeling > more intervention

The open loop compounds over time. Without biomarker feedback, dose escalation is driven by subjective feelings, which creates increased risk of doses with no clinical precedent.

I am pro peptide and pro experimentation. Some peptides such as GLP-1s and similar are among the most effective in the world. Peptides (without clinical data) are among the most promising therapies available. They also need more clinical work so that we can characterize their effects, both good and bad. Nothing is free in biology.
162 replies · 74 reposts · 2K likes · 383.6K views