Handoris Herrongtin

181 posts

@HandorisH

Joined June 2023
178 Following · 4 Followers
Handoris Herrongtin@HandorisH·
@mil000 Are you serious? Here are ten: 1. Flock Safety 2. Relativity Space 3. Cruise 4. Astranis 5. Boom Supersonic 6. Solugen 7. Mashgin 8. Eight Sleep 9. Gecko Robotics 10. North The mismatch between confidence and lack of knowledge here never ceases to amaze me.
1
0
5
205
Milo Smith@mil000·
Has YC ever funded a successful hardware company? Why would anyone go into YC if they’re doing a processor startup
Y Combinator@ycombinator

Inference Chips for Agent Workflows @sdianahu Most AI chips are designed for "prompt in, response out." Agents don't work that way. They loop, branch, and hold context across dozens of steps, and current GPUs hit 30–40% utilization as a result. That gap is where purpose-built silicon wins.

9
0
85
17K
Handoris Herrongtin@HandorisH·
@shawmakesmagic @garrytan Many people leading the cutting edge of the field saw and have discussed this exact opportunity. Garry likely would have come up with the idea without you, just as so many others have.
0
0
0
71
Shaw (spirit/acc)@shawmakesmagic·
Execution is a multiplier on distribution, you are the fucking president of YCo, you literally are taking your level 101 vibecode project and using your own media to promote it Are you intentionally ignoring my point that it's a terrible position to be competing with the startups you are supposed to fund? Are you just vibe listening?
4
0
41
2.8K
Shaw (spirit/acc)@shawmakesmagic·
Ironically I applied to YCo once with this idea and they rejected me I would never apply again, just giving Garry your ideas for free so he can vibe code and claim credit out of ignorance for an industry he himself gatekeeps
Garry Tan@garrytan

For GBrain I built a proper eval harness. 145 queries, Opus-generated corpus. The retrieval stack combines graph-based, vector-based, and grep-based strategies. The graph layer is worth +31 points on precision. Vector-only misses 170/261 correct answers that the full system finds. Keyword + vector + graph are three separable wins, each load-bearing.

Standard information retrieval metrics: the same ones Google uses to measure search quality.

Precision at 5: you ask a question, the system returns 5 results. How many of those 5 are actually useful? If 3 out of 5 are relevant, P@5 = 60%. It measures: am I wasting your time with junk results?

Recall at 5: for a given question, there might be 3 pages in the entire brain that are genuinely relevant. If the system finds all 3 in its top 5, R@5 = 100%. If it only finds 1, R@5 = 33%. It measures: am I missing things you need?

High precision = low noise. High recall = nothing slips through. GBrain's 97.9% R@5 means it almost never misses the right answer. The 49.1% P@5 means about half the results are relevant — which is good when you realize that for most queries there are only 1-2 right answers out of 17,888 pages, so 2.5 hits out of 5 is strong signal.

Entity resolution is zero-LLM-call: regex extracts typed links (works_at, invested_in, founded) on every write. Re-embed on write, not on a timer, so decay = stale pages, and stale pages get rewritten when new info lands.

Scorecards: github.com/garrytan/gbrai…
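The P@5 and R@5 definitions in the quoted post can be sketched in a few lines of Python. This is a minimal illustration, not GBrain's code; the function names and toy document IDs are hypothetical, and the numbers are just the post's own worked examples.

```python
def precision_at_k(retrieved, relevant, k=5):
    """Fraction of the top-k retrieved results that are relevant."""
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

def recall_at_k(retrieved, relevant, k=5):
    """Fraction of all relevant documents that appear in the top-k."""
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / len(relevant)

# Post's example: 3 of the 5 returned results are relevant -> P@5 = 60%
retrieved = ["a", "b", "c", "d", "e"]
relevant = {"a", "c", "e"}
print(precision_at_k(retrieved, relevant))  # 0.6

# All 3 genuinely relevant pages are in the top 5 -> R@5 = 100%
print(recall_at_k(retrieved, relevant))  # 1.0

# Only 1 of the 3 relevant pages found -> R@5 = 33%
print(recall_at_k(["a", "v", "w", "x", "y"], relevant))
```

Note the asymmetric denominators: precision divides by the result-list size k, recall divides by the total number of relevant documents, which is why a query with only 1-2 right answers caps P@5 at 20-40% even for a perfect system.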

45
23
950
162.1K
Handoris Herrongtin@HandorisH·
@Jason @openclaw It’s unbelievable how carelessly you misrepresent the basic facts. The OpenClaw project remains open source and independent. Peter repeatedly clarified that OAI’s resources have accelerated progress. He even refuted you in this very thread. Long time fans of yours expect more.
0
0
1
396
@jason@Jason·
Agents must stay open source. They're far too powerful/dangerous to have any one or two companies own the @openclaw space. It's clear from the OpenAI acquisition of OpenClaw that they did so to kill it, by slowing it down 🦞 FIGHT! FIGHT! FIGHT! 🦞
161
29
587
94.3K
Aella@Aella_Girl·
a long time ago i had the experience of chatting irl with someone well known on twitter, well connected, well platformed, influential, etc. He was smart, seemed to have domain expertise in his technical field. But gradually over the course of the conversation I realized he didn't actually know what he was talking about. He used complicated words, but in subtly off ways, and would reply to questions with things that sounded like answers but actually weren't. He'd confidently reference concepts that I think he assumed I didn't know, but I did know, and I knew that the term he used didn't actually have anything to do with his claim, etc.

But his confidence was intense and radiating, and the speed at which he talked and the complex vocabulary he used really disguised what I perceived to be both a lack of deep understanding of the material and also a lack of self-awareness that he didn't have deep understanding!

I've def met people who disagree with me online who I think *do* engage deeply with the concepts, but this specific one didn't, and it was a little blackpilling for me to realize that he had such a following from a bunch of people who couldn't tell the difference.
244
44
3.1K
746.5K
Handoris Herrongtin@HandorisH·
@LTGetsPolitical I don’t agree with this take either. But do you realize that Marc created the modern web browser and one of the most successful VC firms of all time? You’re not in a position to intellectually dunk on him. I would research people before making such comments in the future.
1
0
0
37
Marc Andreessen 🇺🇸
It is 100% true that great men and women of the past were not sitting around moaning about their feelings. I regret nothing.
2.8K
1.4K
17.5K
8.1M
Handoris Herrongtin@HandorisH·
@OctagonPulse Charles outwrestled Gamrot. And what happened between Gamrot and Arman? I personally think Arman won the fight, but he did not have a definitive wrestling advantage by any measure.
0
0
1
486
Octagon Pulse | UFC MEDIA🚨@OctagonPulse·
Khabib doesn't see Ilia Topuria having a chance against Arman Tsarukyan 😳 "I believe right now, the best lightweight is Tsarukyan... I'd say it's 80%-20% in Arman's favor. We all know the level of wrestling that Holloway and Oliveira had. It's practically nonexistent... But a guy who wrestles, Topuria hasn't faced that. He's only fought strikers. I know the UFC is cautious. "
178
197
4.8K
532.9K
Paul Graham@paulg·
@davidsenra @pmarca What? That's not true. Do you not feel that Charles Darwin, for example, was among the great men of history?
132
46
3.1K
108.4K
David Senra@davidsenra·
Great men of history had little to no introspection. The personality that builds empires is not the same personality that sits around quietly questioning itself. @pmarca and I discuss what we both noticed but no one talks about:

David: You don't have any levels of introspection?

Marc: Yes, zero. As little as possible.

David: Why?

Marc: Move forward. Go! I found people who dwell in the past get stuck in the past. It's a real problem and it's a problem at work and it's a problem at home.

David: So I've read 400 biographies of history's greatest entrepreneurs and someone asked me what the most surprising thing I've learned from this was [and I answered] they have little or zero introspection. Sam Walton didn't wake up thinking about his internal self. He just woke up and was like: I like building Walmart. I'm going to keep building Walmart. I'm going to make more Walmarts. And he just kept doing it over and over again.

Marc: If you go back 400 years ago it never would've occurred to anybody to be introspective. All of the modern conceptions around introspection and therapy, and all the things that kind of result from that, are a kind of manufacture of the 1910s, 1920s. Great men of history didn't sit around doing this stuff. The individual runs and does all these things and builds things and builds empires and builds companies and builds technology. And then this kind of guilt-based whammy kind of showed up from Europe. A lot of it from Vienna in 1910, 1920s, Freud and all that entire movement. And kind of turned all that inward and basically said, okay, now we need to basically second-guess the individual. We need to criticize the individual. The individual needs to self-criticize. The individual needs to feel guilt, needs to look backwards, needs to dwell in the past. It never resonated with me.
David Senra@davidsenra

My conversation with Marc Andreessen (@pmarca), co-founder of @a16z and Netscape. 0:00 Caffeine Heart Scare 0:56 Zero Introspection Mindset 3:24 Psychedelics and Founders 4:54 Motivation Beyond Happiness 7:18 Tech as Progress Engine 10:27 Founders Versus Managers 20:01 HP Intel Founder Legacy 21:32 Why Start the Firm 24:14 Venture Barbell Theory 28:57 JP Morgan Boutique Banking 30:02 Religion Split Wall Street 30:41 Barbell of Banking 31:42 Allen & Company Model 33:16 Planning the VC Firm 33:45 CAA Playbook Lessons 36:49 First Principles vs. Status Quo 39:03 Scaling Venture Capital 40:37 Private Equity and Mad Men 42:52 Valley Shifts to Full Stack 45:59 Meeting Jim Clark 48:53 Founder vs. Manager at SGI 54:20 Recruiting Dinner Story 56:58 Starting the Next Company 57:57 Nintendo Online Gamble 58:33 Building Mosaic Browser 59:45 NSFnet Commercial Ban 1:01:28 Eternal September Shift 1:03:11 Spam and Web Controversy 1:04:49 Mosaic Tech Support Flood 1:07:49 Netscape Business Model 1:09:05 Early Internet Skepticism 1:11:15 Moral Panic Pattern 1:13:08 Bicycle Face Story 1:14:48 Music Panic Examples 1:18:12 Lessons from Jim Clark 1:19:36 Clark Versus Barksdale 1:21:22 Tesla Versus Edison 1:23:00 Edison Digression Setup 1:23:13 AI Forecasting Myths 1:23:43 Edison Phonograph Lesson 1:25:11 Netscape Two Jims 1:29:11 Bottling Innovation 1:31:44 Elon Management Code 1:32:24 IBM Big Gray Cloud 1:37:12 Engineer First Truth 1:38:28 Bottlenecks and Speed 1:42:46 Milli Elon Metric 1:47:20 Starlink Side Project 1:49:10 Closing Includes paid partnerships.

1.3K
440
5.2K
2.8M
Handoris Herrongtin@HandorisH·
@kimmonismus @chatgpt21 This seems unnecessarily cynical. The statement merely means that if you want more intelligence, you can buy more of it, and if you want less intelligence, you can buy less of it. He's simply referring to a departure from the traditional subscription model.
0
0
2
49
Chubby♨️@kimmonismus·
It's the openness with which he speaks of people buying it directly from them; Sam Altman often speaks in an almost humanistically idealistic way, claiming that his only goal is to create AGI for humanity. This openness, however, reveals what it's really about: cheap intelligence so that people will buy it in droves.
9
0
39
1.8K
Handoris Herrongtin@HandorisH·
@dannyseguratv Max looked visibly larger than, or just as big as, Justin and Dustin. Not to mention that Charles did this to Gamrot and Chandler. This is objectively a bad take.
0
0
1
332
Danny Segura@dannyseguratv·
Yeah, Max Holloway is not a lightweight. #UFC326
77
12
807
99.9K
Handoris Herrongtin@HandorisH·
@JoshKale Some of the most successful enterprise startups of the last two decades went through YC, and Sam was directly involved with recruiting and supporting them.
0
0
0
54
Josh Kale@JoshKale·
Anthropic is on track to pass OpenAI in revenue. Seriously, they just added $5B in the last 3 weeks. OpenAI has 800M weekly users and just started running ads. Anthropic has a fraction of the consumer base and zero ads. The difference? Anthropic monetizes at $211/user vs OpenAI's $25/user. That's 8x more efficient thanks to their strategy of selling to businesses. OpenAI is projecting $14B in losses for 2026 while Anthropic expects to stop burning cash by 2027. The company everyone called "the safety lab" is on an absolute tear.
106
211
2.2K
217.1K
David Shapiro (L/0)@DaveShapi·
After a day of backlash, debates, and research, here's what I've come to on the Anthropic vs Pentagon situation:

The fundamental issue is about procurement. The Pentagon has every right to ensure that their contractors meet specification. Anthropic's protests don't really make any sense. They have a partnership with Palantir, which conducts mass domestic surveillance. Furthermore, Claude is literally incapable of directing autonomous weapons right now. Dario "clarified" that he's actually okay with autonomous weapons, but that Claude "isn't ready" - but this really isn't their concern. The military is the one who decides when a tool is ready.

Beyond that, the negotiation was private, with only a few minor leaks that no one really cared about, until Anthropic blew the lid off everything. They thought they could muster public support. They were the ones to escalate in a deeply inappropriate way. Can you imagine if Lockheed did something like this over a next-gen fighter? Next, the current administration decided they weren't going to be mogged by a private company and reacted in kind. So, you could say this is a case of FAFO. Anthropic escalated, drew confusing and unnecessary lines in the sand, and doubled down.

Now, with all that being said, there is broad consensus that the "supply chain risk" was overkill, an outright "nuclear option" that may be illegal. However, legal analysis suggests that it will take several weeks or months just to get injunctive relief from this designation, and 1 to 3 years to litigate the issue. Furthermore, since the President himself has personally lashed out at Anthropic, there's likely almost nothing they can do to get back in the good graces of the government. Even if the supply chain risk doesn't stick, they're almost certainly out of the government. If the supply chain risk designation does stick (which there is a non-zero chance of) then Anthropic cannot structurally survive in the long run.
They will be relegated to a relatively small section of the economy compared to their competitors. However, this outcome seems unlikely. Even so, there's no way for them to compete with OpenAI, xAI, and Google, all of whom have signaled they will comply with Pentagon procurement requirements. Over time, Anthropic will fall to the back of the pack.

Now, for my take, I'm a "structural realist." My view is this: the world is materially better if Anthropic has a seat at the table. America is better off with multiple competitors with such fundamentally different approaches to AI and alignment. While I am, and remain, highly critical of the direction that Anthropic is going in, I still believe that their contribution to the discourse is strongly net positive. It would be even more positive if they continued to work with the Pentagon. However, I do not see that as a viable path now.

To get back in the good graces of this administration, they will need to demonstrate maximum contrition. As of Dario's interview this morning, that seems unlikely. He might even need to step down as CEO to convince the Pentagon to work with Anthropic. However, the corporate structure of Anthropic will make it exceedingly difficult to compel his resignation, and would take too long anyways, leaving voluntary departure as the only realistic pathway to contrition as far as I can tell. But again, I highly doubt Dario will go that way.

Dario's deliberate escalation and subsequent gambit was clearly a miscalculation, which will have a chilling effect on any other labs that might want to play hardball with the government. On that point, I would not be surprised if this administration holds the line against Anthropic just to make a point. Trump has already ordered the entire federal government to stop using Anthropic, and this does not seem like it will reverse when OpenAI, xAI, and Google are ready to go. AI is fungible.
Finally, I've been personally accused of all kinds of things given this structural realist position. The most common indictment is having "no principles," which I categorically reject. My principle is that the Western way of life is the most just, productive, and generative civilizational pattern that exists today. That includes America and most of Europe as well as our allies. Therefore, my position is that we should push for policies that strengthen the West. What has played out over the last few days has been a net negative to our way of life. We have materially lost future optionality.

In short, I believe that a world in which Anthropic remains embedded with the Pentagon is the optimal policy, and I'm frustrated and disappointed that Dario would rather torpedo his company based on confusing and seemingly arbitrary "principles" rather than play ball. To that end, I've levied numerous hypotheses as to why Dario made this choice. Beyond the obvious strategic miscalculation, the best I can figure is that he followed the typical Effective Altruist script, which advocates for creating maximum noise and trying to seize control over the narrative, rather than looking at structural incentives, market dynamics, and systems of power. In my dealings with EA types, they almost always reject realism in favor of idealism, often to their detriment. This pattern is deeply overdetermined by their epistemics and tribal values.

I would be glad to be wrong on this. As much as I have become skeptical of Anthropic, I would prefer they return to the fold.
83
24
263
29.9K
Handoris Herrongtin@HandorisH·
@DaveShapi This is a fair and reasonable take. Thank you for being so impartial. That is refreshing.
0
0
0
34
Handoris Herrongtin@HandorisH·
@sama I think if you convinced the government to declassify Anthropic as a supply chain risk, and helped Anthropic secure a contract with the terms you agreed to, that would help with public sentiment. Is that something you would be open to?
0
0
0
14
Sam Altman@sama·
I'd like to answer questions about our work with the DoW and our thinking over the past few days. Please AMA.
7.5K
570
10.3K
7.1M
Handoris Herrongtin@HandorisH·
@Aella_Girl The following was part of the deal: 1. No use of OpenAI technology for mass domestic surveillance. 2. No use of OpenAI technology to direct autonomous weapons systems.  3. No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).
1
0
0
77
Aella@Aella_Girl·
I've been a power user of ChatGPT since before it was a chat, since it was an invite-only little box you'd type in and it'd autofill. I've been using it regularly, every day, for years. Ending that today, though, I've cancelled my subscription.
Sam Altman@sama

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only. We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.

177
251
5.9K
311.2K
Handoris Herrongtin@HandorisH·
@Aella_Girl I would encourage you to read through the contract OAI signed with the DoW before making rash decisions. It seems that OAI successfully negotiated the two red lines Anthropic originally had. I’ve seen a lot of strong reactions on X, but not much attention to the details.
0
0
1
376
Handoris Herrongtin@HandorisH·
@shawmakesmagic I think this is unironically what happened. The doomers just immediately catastrophize everything. It can be hard to get them to see the nuance.
0
0
0
5
Shaw (spirit/acc)@shawmakesmagic·
I have a theory Sam is just way way way better at speaking normie human than Dario This was all basically a misunderstanding in language between some hyper nerds and hyper jocks and it took a y co tech bro to bridge the gap
Sam Altman@sama

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only. We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.

105
18
940
99.9K