Andrew
@ShipNotHype

2.7K posts

I build AI agents and automation systems that replace manual work. Micro-SaaS | N8N workflows | Local LLMs |

Canada · Joined April 2015
674 Following · 767 Followers

Pinned Tweet
Andrew@ShipNotHype·
I build AI systems, automate workflows, and test monetizable ideas in public. Sharing the lessons, experiments, and patterns I learn as I go. If you’re into AI, automation, and building online, follow along.
Andrew@ShipNotHype·
@solo_levelingx It does look like theater at times. But the compression is happening in real time. The labs making actual progress on agents and harnesses will separate from the ones just hyping the next model drop.
ً@solo_levelingx·
This "AI Bubble" is a completely hilarious meme to me. We have these different "labs"; all they produce is some semi-useful chatbot with different interfaces, and they charge a monthly subscription and then raise money by pretending superintelligence has been coming soon for years now
Andrew@ShipNotHype·
Exactly. Resumes are becoming background noise in AI lately. A 30-minute live trial where someone ships a small agent workflow or recreates a prompt system tells you more than their last three job titles combined. The gap between “knows about AI” and “can actually drive leverage with it” is massive right now.
Justine Moore@venturetwins·
It’s wild to see how many of the best AI startups have added work trials as a key part of the hiring process. When things are moving this fast, a candidate’s background on paper is often less relevant than what they can do with the tools in front of them.
Andrew@ShipNotHype·
Context hack that actually works: feed the model your last three failed attempts before asking again. “Here’s what didn’t work and why. Now improve.” This can help turn frustration into results.
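The pattern above can be sketched in a few lines of Python. This is a minimal illustration of composing a retry prompt from prior failures; the function name, prompt wording, and example inputs are all illustrative assumptions, not any particular model API:

```python
def build_retry_prompt(task, failed_attempts):
    """Compose a prompt that shows the model what already failed.

    failed_attempts is a list of (attempt, why_it_failed) pairs.
    """
    lines = [f"Task: {task}", "", "Here's what didn't work and why:"]
    for i, (attempt, reason) in enumerate(failed_attempts, start=1):
        lines.append(f"{i}. Tried: {attempt}")
        lines.append(f"   Why it failed: {reason}")
    # Close with the explicit "now improve" instruction from the tweet.
    lines += ["", "Now improve. Avoid repeating the failures above."]
    return "\n".join(lines)

prompt = build_retry_prompt(
    "Write a regex that matches ISO dates",
    [
        (r"\d+-\d+-\d+", "matches 1-2-3, too loose"),
        (r"\d{4}/\d{2}/\d{2}", "wrong separator, ISO uses hyphens"),
    ],
)
print(prompt)
```

The resulting string is sent as a single message; the point is that the model sees the failure history as context instead of getting the same bare question again.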
Andrew@ShipNotHype·
@DavidSacks The key point most miss, and one you made well: these models don’t create new vulnerabilities, they expose the ones already sitting in the code. This upgrade cycle will be massive once defenders get real access before attackers do.
David Sacks@DavidSacks·
It’s time to demystify Mythos. Mythos is not magic. It’s not a doomsday device. It’s the first of many models that can automate cyber tasks (just like coding). OpenAI’s GPT-5.5-cyber can now do the same. And all the frontier models (including those from China) will be there within approximately 6 months.

It’s important to recognize that these models do not create vulnerabilities; they discover them. The bugs are already in the code. Using AI to discover and patch them will actually harden these systems.

The leap from pre-AI cyber to post-AI cyber means that there will be a big upgrade cycle. After that, however, the market is likely to reach a new equilibrium between AI-powered cyber-offense and AI-powered cyber-defense.

Obviously it’s important that cyber defenders get access before cyber attackers. That process is already underway but needs to happen quickly (see point above about Chinese models). Unlike Mythos, GPT-5.5-cyber appears not to be token constrained, so it may be the first cyber model that defenders actually get to use.
AI Security Institute@AISecurityInst

OpenAI’s GPT-5.5 is the second model to complete one of our multi-step cyber-attack simulations end-to-end 🧵

Ethan Mollick@emollick·
Illustration of the jagged frontier as a PR thing:
1) People had to ask the AI for a party date
2) People wrote the social media posts about the party, set up the invite list
3) People had to solicit AI for the party ideas & select them
4) People order food, put it out, etc...
Sam Altman@sama

GPT-5.5 is going to have a party for itself. it chose 5/5 at 5:55 pm for the date and time. if you'd like to come, let us know here: luma.com/5.5 codex will help the team pick people from the replies. 5.5 had some good ideas/requests for the party, which we'll do.

Andrew@ShipNotHype·
@emollick The real capability gap isn’t between models anymore. It’s between raw APIs and the native apps built around them. Codex and Claude Code prove the model performs better when the harness is purpose-built.
Ethan Mollick@emollick·
Increasingly, I think, we will see a gap between what you can do with frontier model APIs & what you can do with the native apps from the frontier labs (Codex, Claude Code). Models developed and trained with their native harnesses in mind have more capabilities in their harnesses
Andrew@ShipNotHype·
@signulll Yesterday: brainstorm 47 new ideas. 
Today: delete the 46 things that should die so you can actually move.
signüll@signulll·
the ai era inverts the age old question. “what should i do” is now the lazy frame. “what should i not do” is where the alpha is.
Andrew@ShipNotHype·
@thejustinwelsh The old playbook said move to the big tech city. The new one says: build in public, ship daily, and connect with the sharpest people on the internet. AI collapsed geography. Your network no longer needs the same address.
Andrew@ShipNotHype·
@bscholl Soon we’ll need a reverse Turing Test: can this human hold a conversation without checking their phone or repeating talking points?
Blake Scholl 🛫@bscholl·
Feels like we are approaching a new era of AI where machines pass the Turing Test but humans don’t.
Andrew@ShipNotHype·
@PeterDiamandis This is the real unlock. Capital used to be the gate. Now it’s execution speed and judgment: $20 a month plus consistent output beats most seed rounds these days.
Peter H. Diamandis, MD@PeterDiamandis·
I'll say it again: You can literally get an AI account for $20/month and start changing the world. You don't need to raise billions or millions.
Andrew@ShipNotHype·
@fel1de Totally! The AI gives you speed of execution. Humans give you the “dude, that’s actually stupid” that we sometimes need. Best results come from using both.
Andrew@ShipNotHype·
@TheGeorgePu Love that implementation! This is personal leverage in action. Replace manual tracking and high-touch services with agents that handle the boring parts. You keep the judgment and the results.
George Pu@TheGeorgePu·
Always thought calorie tracking was for gym bros. Hated it. Quit every time. Now I let AI do it. Weekly weigh-in. Waist check. Mental note on lifts. That's the whole system. Used to pay $80/session for a personal trainer. Hundreds a month. Now I pay for compute. Spend compute, not cash. That's the new personal finance. Why aren't any save-money gurus writing about this?
Andrew@ShipNotHype·
@iruletheworldmo I’m curious how quickly they can adapt it for local LLMs as well. With everything moving so fast, you never know what we’ll see in the next few months!
🍓🍓🍓@iruletheworldmo·
the new gemini model is going to be well over ten trillion parameters and much more capable than the current sota. we are entering into a new and much quicker era of progress.
Andrew@ShipNotHype·
@megbear @bhalligan Spot on. Working code was the old gold standard, so to speak. Now it’s execution at scale with agents that seemingly “disappear” into the tools. That’s where the real leverage lives.
Meg Bear (she/her)@megbear·
@ShipNotHype @bhalligan I’ve been thinking about this a lot. We used to say “working code trumps theory.” I say that still holds but shifts to execution (the meaningful use at scale).
Brian Halligan@bhalligan·
What's the smartest, fastest way you've seen a company force-multiply their people with AI? Just saw the most clever way a founder is AI-pilling their entire 300-person team. Writing it up to share, but I wonder if it can be topped...
Andrew@ShipNotHype·
Claude Code just charged extra because a commit mentioned “OpenClaw.” This is what happens when labs prioritize safety theater and billing over actual user experience. Vibecoding your own agents is no longer optional.
Theo - t3.gg@theo

Fun fact - if you have a recent commit that mentions OpenClaw in a json blob, Claude Code will either refuse your request or bill you extra money. This is an empty repo, I'm just calling Claude Code directly. Insanity.

Andrew@ShipNotHype·
@garrytan Anthropic built the ultimate vibecoding tool then let it freak out over fictional claw references. The AI is scared of its own shadow and still wants your credit card
GG 🦾@GG_Observatory·
@ShipNotHype "shipping folklore" is the best summary of how most AI projects run. the folklore-to-evidence ratio is still like 10:1 in production. glad you caught the thread. keep shipping these insights.
Andrew@ShipNotHype·
@PolymarketMoney $900B valuation for Anthropic is insane on paper. Either the market is completely detached or they have something massive cooking that actually changes the economics.
Polymarket Money@PolymarketMoney·
JUST IN: Anthropic is now considering raising a new round at a $900,000,000,000+ valuation.
Andrew@ShipNotHype·
@XFreeze Elon dropping $38M to accidentally fund an $800B rival is the most expensive “I told you so” in tech history.
X Freeze@XFreeze·
Today, OpenAI’s lawyers, led by William Savitt, spent hours on aggressive cross-examination - hitting Elon with unfair yes/no traps and trying to paint him as jealous, regretful, and a bully. Every trap failed. Elon fired back and stood firm: He called himself “a fool” for donating $38 million that built an $800 billion company: “They should not get rich off a nonprofit. That’s not right.” He shut down every attack on his motives and reiterated the truth:
✅ OpenAI stole the charity
✅ Betrayed the open-source mission for Microsoft billions (the $10B deal was the tipping point)
✅ Larry Page said it’d be “fine” if AI wiped out humanity
✅ Warned Obama years ago
✅ AGI in untrustworthy hands = existential threat
✅ Founded OpenAI as a true nonprofit to benefit humanity.....not to enrich a few insiders
✅ Altman & Brockman were never honest about keeping it nonprofit
✅ Left the board in 2018 after seeing the profit-first direction
✅ The for-profit conversion looted the original mission and its donors
✅ Repeatedly warned Altman & Brockman before the big shift
ALL core facts held firm. Not one key point broken. Now Elon is demanding $130–150 billion returned to the nonprofit + removal of Altman & Brockman + full reversion to nonprofit status. Elon was right then. He’s right now. The truth doesn’t break under pressure
X Freeze@XFreeze

Elon Musk just spent two hours on the stand and proved why he’s fighting so hard for humanity’s future.

He opened by saying: “This lawsuit is very simple: It is not OK to steal a charity. If OpenAI wins, it will give license to looting every charity in America.”

He revealed that OpenAI was founded as a non-profit, open-source shield against Google’s AI monopoly, but now “the tail is wagging the dog” as they chase profits instead of protecting humanity.

Driven by his deep concern for our future, Elon testified that after recruiting Ilya Sutskever from Google to help start OpenAI, Larry Page completely stopped speaking to him. In one conversation, Larry said it would be “fine” if AI wiped out humanity, as long as the machines survived, and called Elon a “speciest” for being pro-human.

Elon also shared that he personally warned President Obama about the dangers of AI years ago, but the warning wasn’t taken seriously enough. “Here we are in 2026… AI is scary smart,” he said. “It could kill us all. We don’t want a Terminator outcome. We want a Star Trek outcome.”

He made it crystal clear: putting AGI in the hands of untrustworthy people is an existential risk to civilization. That’s why he built SpaceX, Neuralink, and xAI - all part of one unified mission to protect humanity’s future and ensure AI serves us, not destroys us.

Andrew@ShipNotHype·
@levie Exactly. We keep assuming demand is fixed. Lower the cost or raise the output with AI and you unlock way more consumption. Radiology already proved it. Coding, legal, marketing, science, all the same story…