Payton

1.4K posts

@Payton_Thompson

Enjoying the moment | Co-founder @ EverythingAfter | AI & Automation 🤖 | Investor 💰 | Reader 📖 | Dad × 3

Portland, OR · Joined July 2010
439 Following · 288 Followers
Payton@Payton_Thompson·
I love Claude Code and use it all the time, but it’s still not openclaw. Probably will get there with time, but for now…
• Session must be running — it only works while Claude Code is open in a terminal. No persistent always-on agent.
• Single session — one Claude Code instance, not multi-agent orchestration
• No crons, no heartbeats, no skills — just a chat bridge
• No cross-platform — Telegram OR Discord, not Signal, WhatsApp, iMessage, etc.
• Research preview — requires claude.ai login, no API keys, Team/Enterprise orgs need to explicitly enable
• Requires Bun installed
Min Choi@minchoi

RIP OpenClaw Telegram/Discord Channels for Claude Code 🤯

0 replies · 0 reposts · 1 like · 18 views
Payton@Payton_Thompson·
I think one of the best things about something like openclaw is that it allows you to be agnostic across models. Anthropic is cooking right now, but what about in 6 months… things change on a dime in this space
0 replies · 0 reposts · 0 likes · 17 views
Payton@Payton_Thompson·
@Austen I’m curious about that last one. I’ve been feeling there is probably something there. Especially when you consider organizational structure
0 replies · 0 reposts · 0 likes · 31 views
Austen Allred@Austen·
* Nano/Picobot (or other smaller/hardened forks of OpenClaw)
* Hermes
* Perplexity Computer
* Claude's new settings to make Claude run longer/have crons
I think those are the basic ones I'll start with.
14 replies · 3 reposts · 34 likes · 3.6K views
Austen Allred@Austen·
Going to test all the OpenClaw alternatives today to figure out where they shine and if there’s anything net better yet (entirely possible.) What else should I test? Will post results to subscribers to pay for compute x.com/Austen/creator…
31 replies · 3 reposts · 80 likes · 26.5K views
Payton@Payton_Thompson·
@nateliason @AlphaSchoolATX This is so awesome! I am interested to see some of the details, because first thought was having the right people will be important for potentially conflicting incentives. Kids just coast and get an incredible education for free.. now you aren't making money like you could..
0 replies · 0 reposts · 0 likes · 72 views
Nat Eliason@nateliason·
Make $1m by graduation. Or get 100% of your tuition refunded. That's the promise of the new high school for entrepreneurs Cameron and I are launching this fall through @AlphaSchoolATX. We need 2-3 coaches to help make it happen. DM us or apply!
Cameron Sorsby@CameronSorsby

We’re launching a new @alphaschoolatx high school for aspiring entrepreneurs. Our promise: Make $1m by graduation, or receive a full tuition refund. Yes, this will be the coolest high school in the world. And we're building the best team in the world to make it happen. We’re looking for 2-3 exceptional coaches to help us guide the students towards achieving this aggressive but achievable goal. You won’t be giving lectures or assigning homework. You’ll be grilling them on their P&L, driving them to the car wash they bought, critiquing their email funnels, pushing them to do things 99% of the world doesn't believe is possible. Job posting is live and DMs are open.

111 replies · 57 reposts · 1K likes · 314.3K views
Payton@Payton_Thompson·
People often ask, "did you do this or was it AI?" Listen, I have built my AI infrastructure in a way that when it speaks, I speak. I just don't have to do the work.
0 replies · 0 reposts · 3 likes · 8.1K views
witcheer ☯︎@witcheer·
first overnight run of autoresearch on my mac mini m4. 9PM to 6AM. 35 experiments. zero intervention. woke up to a telegram debrief.

let me explain what's actually happening here because the numbers mean nothing without context. autoresearch is an AI agent that tries to make another AI model better, autonomously. it reads the current training code, forms a hypothesis ("what if I change this setting?"), edits the code, trains the model for 5 minutes, measures if it improved, keeps the change or reverts it, and loops. all night. no human involved.

the metric it's optimising is val_bpb, bits per byte. it measures how well the model predicts text. lower = better. think of it like a golf score: you want it as low as possible.

yesterday I ran 8 experiments manually with claude. got val_bpb from 1.75 down to 1.478, a 15.7% improvement. last night the agent ran 35 more experiments autonomously and pushed it to 1.450. another 1.87% on top.

out of 35 attempts, 7 made the model better. 26 made it worse (reverted automatically). 1 crashed. that's normal, most ideas don't work in research. the value is in trying 35 overnight instead of 2-3 per day by hand.

what the AI researcher discovered on its own:
→ the model got better by getting simpler. it removed two architectural components and performance improved, fewer moving parts, cleaner learning
→ it figured out that a different activation function (GELU vs relu²) was genuinely better, but only after isolating a confounding variable that was hiding the real effect. that's experimental reasoning
→ it found that keeping a small amount of learning rate at the end of training instead of decaying to zero helped the model keep learning longer
→ it tried weight tying (sharing parameters between two layers to save memory) and the model's performance completely collapsed. logged it, reverted it, moved on.

the model it's training is tiny and the results are modest, but the fact that an AI agent can form hypotheses, run experiments, evaluate results, and iterate while I sleep is the part that matters.
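The keep-or-revert loop described above fits in a few lines. This is a minimal sketch of the pattern, not the actual autoresearch code; `propose` and `apply_and_train` are hypothetical stand-ins for the hypothesis and train-and-measure steps:

```python
def run_autoresearch(baseline_bpb, propose, apply_and_train, n_experiments=35):
    """Propose a change, train briefly, keep it only if val_bpb
    (lower = better) beats the best so far, otherwise revert."""
    best_bpb = baseline_bpb
    log = []
    for _ in range(n_experiments):
        change = propose(log)                  # hypothesis: "what if I change this setting?"
        try:
            new_bpb = apply_and_train(change)  # edit the training code, train, measure val_bpb
        except RuntimeError:                   # some experiments just crash; log and move on
            log.append((change, None, "crashed"))
            continue
        if new_bpb < best_bpb:
            best_bpb = new_bpb                 # improvement: keep the change
            log.append((change, new_bpb, "kept"))
        else:                                  # worse or equal: revert automatically
            log.append((change, new_bpb, "reverted"))
    return best_bpb, log
```

With a stubbed evaluator you can see the 7-kept / 26-reverted / 1-crashed pattern emerge: only strictly-better results move the baseline.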
witcheer ☯︎@witcheer

what could be better on a Saturday than trying out the creations of the 🐐? I ran @karpathy’s autoresearch on my mac mini m4. 16GB RAM. no CUDA. no GPU cluster. here’s my full debrief:

found a macOS fork that replaces FlashAttention-3 with PyTorch SDPA for Apple Silicon. setup took 3 hours. trained an 11.5M parameter GPT model, tiny compared to karpathy’s H100 baseline, but that’s what fits in 16GB.

ran some manual experiments with claude opus as the researcher. me as the human in the loop, claude deciding what to try next.
- experiment 1: tried depth 8 (50M params). OOM crash.
- experiment 2: scaled down to depth 6, batch 8 (26M params). ran but val_bpb was worse than the tiny baseline. classic lesson: a small well-trained model beats a large undertrained one on limited compute.
- experiment 3: halved batch to 32K. first real win. val_bpb dropped to 1.5960.
- experiment 4: batch 16K. best single decision of the entire run. quadrupled optimiser steps (102→370), val_bpb dropped to 1.4787. 15.7% improvement over baseline.

karpathy’s H100 hits 0.9979. the M4 is 2.5x slower per cycle but it’s a $600 desktop vs a $30K GPU.

then I made it fully autonomous. launchd starts a tmux session at 9PM, runs claude -p in a bash loop (read results → decide experiment → edit train.py → run → check → keep or revert → log → repeat). stops at 6AM. at 6:30AM my @openclaw bot sends me a telegram debrief with overnight stats. ~45 experiments per night. ~315 per week. I will update y’all on this experiment!

20 replies · 26 reposts · 552 likes · 76.1K views
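The morning debrief these tweets mention is essentially a summary over the overnight log. A minimal sketch, assuming a hypothetical log of `(change, val_bpb, status)` tuples rather than the bot's actual format:

```python
def build_debrief(log):
    """Reduce an overnight run log to the kind of stats a morning
    debrief message would carry: totals, outcomes, best score."""
    kept = [entry for entry in log if entry[2] == "kept"]
    reverted = [entry for entry in log if entry[2] == "reverted"]
    crashed = [entry for entry in log if entry[2] == "crashed"]
    # Every kept change improved on the running best, so the minimum
    # kept val_bpb is the final best of the night.
    best = min((entry[1] for entry in kept), default=None)
    lines = [
        f"experiments: {len(log)}",
        f"kept: {len(kept)} / reverted: {len(reverted)} / crashed: {len(crashed)}",
        f"best val_bpb: {best}" if best is not None else "best val_bpb: no improvement",
    ]
    return "\n".join(lines)
```

The resulting string is what a messaging bot would send; delivery itself is just one API call to whatever chat platform is on the other end.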
Payton@Payton_Thompson·
@TFTC21 Comparing apples and apple juice. Openclaw is only as good as the model you have it running on.
0 replies · 0 reposts · 6 likes · 1.1K views
TFTC@TFTC21·
Someone set loose two AI agents with $1,000 each and 48 hours to trade on Polymarket. Claude: +1,322% to $14,216 OpenClaw: liquidated to zero in under 48h.
316 replies · 328 reposts · 7.8K likes · 2M views
Payton@Payton_Thompson·
Finding the right system and process for you personally to learn and grow is so important. This is a good model. Yours might be different, and that's okay, but the important thing is you figure it out
Marc Andreessen 🇺🇸@pmarca

My information consumption is now 1/4 X, 1/4 podcast interviews of the smartest practitioners, 1/4 talking to the leading AI models, and 1/4 reading old books. The opportunity cost of anything else is far too high, and rising daily.

0 replies · 0 reposts · 0 likes · 46 views
Payton@Payton_Thompson·
I wish that OpenAI had a $100 plan like Anthropic has a $100 plan. I'd like to split and do both of them, but I don't want to do the $200 and the $100
0 replies · 0 reposts · 0 likes · 16 views
Julio Recalde@juliorecalde·
@steipete @grok alert me when Peter runs these evals and decides on which CLI works better for agents
4 replies · 0 reposts · 0 likes · 1K views
Payton@Payton_Thompson·
We have a very manual process for our business where we send direct mail surveys out and get them back as leads, and we have people manually doing data entry from the surveys into our system. I'm building out a process to automate this so we don't have to manually enter data. I'm curious if anyone has any expertise or thoughts on what the best OCR extraction tools are. Currently I'm using Claude Vision but wondering if there's something else out there that would potentially work better or is made for this specific use case. Also open to any other feedback or thoughts for the process as a whole if anyone has any ideas.
0 replies · 0 reposts · 0 likes · 17 views
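For a pipeline like the one above, whichever OCR or vision tool does the extraction, the step worth building carefully is validating the structured output before it enters the lead system. A minimal sketch; the schema, field names, and confidence threshold here are all hypothetical, not anything Claude Vision actually returns:

```python
import json

# Hypothetical survey schema; the real fields depend on the mail piece.
REQUIRED = {"name", "phone", "address"}

def parse_extraction(model_output):
    """Parse the JSON a vision model returns for one scanned survey and
    flag anything that needs human review instead of silently accepting it."""
    record = json.loads(model_output)
    present = {key for key, value in record.items() if value not in (None, "")}
    missing = REQUIRED - present
    # Route incomplete or low-confidence extractions back to a person,
    # so only clean records skip manual data entry.
    record["needs_review"] = bool(missing) or record.get("confidence", 1.0) < 0.8
    record["missing_fields"] = sorted(missing)
    return record
```

Keeping a human-review lane for flagged records is usually what makes an OCR pipeline like this trustworthy in practice: automation handles the clean scans, people handle the ambiguous ones.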
The All-In Podcast@theallinpod·
The Hottest New AI Job: The Agent Maestro

Jason: I think the job people are not seeing, but I'm seeing right now, is the person who creates, manages, and is the maestro of the agents. The person who can take the business process, explain it, and train the agent to do it. And there are certain people in business who are just really good at operations. You were one of them, Sacks, running companies. Fire up an agent, train them, and figure out how to manage them and increase their skills. It’s a great job, and it's not a developer.

Sacks: With any new technology, there's always a huge change management aspect with enterprises because it's hard for them to adapt and change. And the people in the organization who can lead that change management are the ones who are going to create an amazing career opportunity for themselves.
138 replies · 171 reposts · 1.8K likes · 346.9K views
Payton@Payton_Thompson·
It's crazy how a task as simple as maintaining file structure for an agent makes such a huge difference.
0 replies · 0 reposts · 0 likes · 15 views
Tom Solid | AI Productivity@TomSolidPM·
Running 28 AI agents across my business right now. Each one has a defined role, its own knowledge base, and clear scope. The job isn't "prompting." It's system design. You're defining roles, routing decisions, managing handoffs between agents, and building the memory layer so the whole team compounds over time. Jason nailed it. This is operations work, not engineering.
1 reply · 0 reposts · 5 likes · 571 views
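The roles-and-scope design described above can be made concrete in a few lines. This is a minimal sketch, not any real orchestration framework; the agent names and scopes are hypothetical, and a real setup would load them from config and handle handoffs and memory separately:

```python
# Hypothetical agent roster: each agent has a defined role and a clear scope.
AGENTS = {
    "research": {"scope": {"competitor analysis", "market sizing"}},
    "content":  {"scope": {"newsletter draft", "social post"}},
    "ops":      {"scope": {"invoice entry", "crm update"}},
}

def route(task):
    """Routing decision: send a task to the one agent whose scope covers it.
    Ambiguous or uncovered work escalates to a human instead of letting an
    out-of-scope agent improvise."""
    matches = [name for name, agent in AGENTS.items() if task in agent["scope"]]
    if len(matches) == 1:
        return matches[0]
    return "human"
```

The design choice worth noting is the default: anything the roster does not unambiguously cover goes to a person, which is the operations-style guardrail the tweet is describing.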
Payton@Payton_Thompson·
@emollick Yeah. We're deploying for ops teams, not engineers. The benchmarks don't tell me anything I can use.
0 replies · 0 reposts · 1 like · 46 views
Ethan Mollick@emollick·
What a great illustration of the central problem of AI benchmarking for real work All of the effort is going into benchmarking for coding, but that is a small part of the actual jobs people do, which leaves the true trajectory of AI progress less clear. arxiv.org/pdf/2603.01203
46 replies · 72 reposts · 494 likes · 43.2K views
Payton@Payton_Thompson·
@emollick Yeah, once you see it, you can't unsee it
0 replies · 0 reposts · 0 likes · 381 views
Ethan Mollick@emollick·
[[Topic of discussion]] is not [[analogy]]. [[Dramatic fact given own line]]. [[Dramatic fact given own line]]. [[Dramatic fact given own line]]. [[Dramatic summary sentence.]] [[Topic of discussion]] is [[different analogy]]. [[Implications delivered with certainty]].
113 replies · 608 reposts · 7.2K likes · 200.2K views
Payton@Payton_Thompson·
Not to mention the fact that they don't even know exactly what the AI will or won't do in its entirety. It's fine to have some unknown when you're talking about creating poetry or writing code for a business, but not when we're talking about weaponry
Wes Roth@WesRoth

Dario Amodei: "It doesn't show the judgment that a human soldier would show." Anthropic CEO Dario Amodei just gave his most chilling examples of why he’s blocking the Pentagon from using Claude for autonomous weapons. His fear isn't just a "glitch" that causes friendly fire; it’s the terrifying concentration of power. Imagine an army of 10 million drones coordinated by a single person.

0 replies · 0 reposts · 0 likes · 31 views
Payton@Payton_Thompson·
Volatility is good for velocity. More mistakes made faster means more chances to learn before your window closes. Most enterprise AI pilots try to engineer away the volatility. That's usually why they stall.
0 replies · 0 reposts · 0 likes · 12 views