Nathan Fray

1.7K posts


@nathanfray

late bloomer surfer. used to be an idea guy, now a builder guy. drummer @unitedpursuit - currently in mild to moderate AI psychosis while living in CR

Nosara, Costa Rica · Joined April 2008
1.9K Following · 947 Followers
Jordan Hall
Jordan Hall@jgreenhall·
@iancr @AnthropicAI I’m looking closely at getting a more powerful rig and going mostly local. The economics work out and I can leverage my current setup (also just turned off Anthropic for the most part) to design and implement my new setup.
1
0
0
73
ian c rogers
ian c rogers@iancr·
In the end I didn’t eliminate @AnthropicAI models entirely. I used the /last30 skill to research the consensus on which model is best at what, and ended up with:

1. Cheap monitors should stay cheap. If a job is basically:
- checking status
- watching CI
- counting things
- summarizing routine metrics
- sending “only if notable” alerts
use a small, fast, cheap model. For us, that is @GeminiApp Flash-Lite.

2. Use premium models only where taste or judgment matters. If a task needs:
- strategic synthesis
- careful editorial voice
- nuanced security reasoning
- long-context interpretation
- “this needs to actually be good, not merely functional”
use a premium reasoning/writing model. For us, that is usually @claudeai Opus.

3. Use the middle tier for careful reviews. If a task needs more care than a cheap monitor but doesn’t justify the most expensive model, use a strong mid-tier reviewer. For us, that is @claudeai Sonnet.

4. Use @OpenAICodexCli for tool-heavy technical loops. If the hard part is:
- running commands
- working through a repo
- orchestrating technical steps
- interpreting tool output
- acting like an engineering operator
use @OpenAICodexCli. That is why the main loop is now @OpenAI Codex GPT-5.4.

5. Keep a cheap fallback. You always want a competent non-premium fallback for when the primary path is unavailable or too expensive. For us, that is Kimi.

Short version. The rule is:
- Flash-Lite → cheap monitors
- Opus → strategy, deep audit, editorial voice
- Sonnet → careful review
- Codex → technical orchestration and main loop
- Gemini → mid-tier synthesis
- Kimi → cheap fallback / resilience

This choice is the power of an open-source orchestration layer such as @openclaw. /cc @steipete
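The routing rules above boil down to a dispatch table. A minimal sketch, assuming a simple task-category lookup — the model identifiers come from the post, but the category names and the `pick_model` function are illustrative, not openclaw's actual API:

```python
# Hypothetical dispatch table for the routing rules described above.
# Model identifiers follow the post; keys and function are illustrative.
ROUTES = {
    "monitor": "gemini/flash-lite",        # cheap status checks, CI watching
    "strategy": "anthropic/claude-opus",   # synthesis, editorial voice, audits
    "review": "anthropic/claude-sonnet",   # careful mid-tier review
    "technical": "openai-codex/gpt-5.4",   # tool-heavy loops, main chat loop
}
FALLBACK = "moonshot/kimi"                 # competent cheap fallback

def pick_model(task_kind: str) -> str:
    """Return the model for a task category, falling back to Kimi."""
    return ROUTES.get(task_kind, FALLBACK)

print(pick_model("monitor"))    # gemini/flash-lite
print(pick_model("anything"))   # moonshot/kimi
```

The point of the table shape is the last line of the post: swapping a provider is a one-entry config change, not a rewrite.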
ian c rogers@iancr

This underscores why an open-source orchestrator like @openclaw gives users power. I've never experienced switching costs as low as simply asking software, in natural language, to switch itself from one tech provider to another, and seconds later it's done. Imagine if moving from iPhone to Android was this easy:

ME: OK, I set auth-choice openai-codex. Can you verify it works with the login and, if it does, restart the gateway. Then we will decide what to set all the crons to, using Codex as our main chat loop.

OPENCLAW: "Done. Config patched and gateway restarting. Here's what I changed:
- Primary model: openai-codex/gpt-5.4 (ChatGPT Pro subscription — flat rate, no per-token billing)
- Fallback: moonshot/kimi-k2.5 (cheap, fast)
- Anthropic API key: retained in env — available for pay-per-use if we ever specifically need Claude"

ME: Are we back? What model am I talking to?

OPENCLAW: "You're now talking to OpenAI Codex GPT-5.4 via ChatGPT Pro OAuth in OpenClaw. Current main-chat setup:
- Primary: openai-codex/gpt-5.4
- Fallback: moonshot/kimi-k2.5
So the main loop is off Anthropic now. Next step is deciding what to do with the Anthropic-based crons."

ME: Great. And you have read and digested the SOUL, etc.?

OPENCLAW: "Yes. On restart I’m back with the same workspace context:
- AGENTS.md
- SOUL.md
- USER.md
- TOOLS.md
- HEARTBEAT.md
- MEMORY.md
So, yes — I’m operating as your PA, with the same rules, tone, and constraints. The model changed. The job did not."

1
0
7
1.2K
Nathan Fray
Nathan Fray@nathanfray·
Just found out I was just one npm update away from getting hit by the axios supply chain attack. Had axios 1.13.5 installed through Google Workspace CLI. The malicious 1.14.1 would've matched that range and auto-installed. Check your global npm packages, not just your projects: npm list -g
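For context on why a minor-version bump slips through: npm installs usually record caret ranges (e.g. `^1.13.5`), which accept any later version with the same major number. A minimal sketch of that matching rule (my own illustration of caret semantics, not npm's actual semver implementation):

```python
# Minimal caret-range check (illustrative; real npm implements the full
# semver spec, including prerelease tags and 0.x special cases).
def caret_match(range_spec: str, version: str) -> bool:
    base = tuple(int(x) for x in range_spec.lstrip("^").split("."))
    cand = tuple(int(x) for x in version.split("."))
    # ^1.13.5 accepts >=1.13.5 and <2.0.0: same major, at or above the base.
    return cand[0] == base[0] and cand >= base

print(caret_match("^1.13.5", "1.14.1"))  # True: the malicious release matches
print(caret_match("^1.13.5", "2.0.0"))   # False: a major bump would not
```

That is the whole attack surface: any compromised release inside the accepted window installs silently on the next update.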
0
0
0
63
Mckay Wrigley
Mckay Wrigley@mckaywrigley·
looking for a handful of people to test something new... i've been using it for a few months and am prepping to share. if you're a fan of claude cowork, openclaw, manus, perplexity computer, etc then you're a perfect fit. this will self destruct in 4hrs - please dm or reply.
Mckay Wrigley@mckaywrigley

you’re like 6 prompts away from infinitely customizable personal agi. anthropic gave you a world class agentic harness for free. use it!!!

1K
15
769
157K
Noah Zweben
Noah Zweben@noahzweben·
You can now schedule recurring cloud-based tasks on Claude Code. Set a repo (or repos), a schedule, and a prompt. Claude runs it via cloud infra on your schedule, so you don’t need to keep Claude Code running on your local machine.
299
566
7.5K
2M
Nathan Fray
Nathan Fray@nathanfray·
@mattshumer_ thank you. already so much better. "I’m wiring the helper commands now. This is the part that turns it from “a daemon you can start manually” into “something you can install, inspect, and remove without digging around in files.”
1
0
1
3.9K
Matt Shumer
Matt Shumer@mattshumer_·
Add this to your Codex custom instructions for a way better experience:

"When communicating your results back to me, explain what you did and what happened in plain, clear English. Avoid jargon, technical implementation details, and code-speak in your final responses. Write as if you're explaining to a smart person who isn't looking at the code. Your actual work (how you think, plan, write code, debug, and solve problems) should stay fully technical and rigorous. This only applies to how you talk to me about it.

Before reporting back to me, if at all possible, verify your own work. Don't just write code and assume it's done. Actually test it using the tools available to you. If possible, run it, check the output, and confirm it does what was asked. If you're building something visual like a web app, view the pages, click through the flows, and check that things render and behave correctly. If you're writing a script, run it against real or representative input and inspect the results. If there are edge cases you can simulate, try them.

Define finishing criteria for yourself before you start: what does "done" look like for this task? Use that as your checklist before you come back to me. If something fails or looks off, fix it and re-test. Don't just flag it and hand it back.

The goal is to keep me out of the loop on iteration. I want to receive finished, working results, not a first draft that needs me to spot-check it. Only come back to me when you've confirmed things work, or when you've genuinely hit a wall that requires my input."
53
81
776
152.8K
Nathan Fray
Nathan Fray@nathanfray·
you know you've been on a bender with claude when you hit 7% left from just chatting back and forth
0
0
1
54
Gary Sheng - The Applied AI Guy
in the past month, i've had dozens of conversations about the future with people who are definitively on the frontier. just wanted to share some of these insights with y'all, in no particular order:

1. the bottleneck to applied AI isn't technical skill. it's imagination. most people don't know what's possible because they've never seen it done by someone they relate to.
2. AI moves at the speed of trust. the most powerful tools in the world don't matter if nobody trusts the person showing them. in-person, human-to-human onboarding is still the unlock.
3. "truth management" is a real discipline now. the quality of the files your AI agents read determines how useful they are. outdated context = agents chasing dead priorities. someone in your org needs to own this.
4. everyone writes a SOUL.md for their AI agent. almost nobody has written one for themselves. before AI can truly serve you, you have to know who you are and where you want to go.
5. the economy is splitting between people who know who they are and people who don't. clarity of purpose is the real multiplier. AI is a jetpack, but only if you know your destination.
6. enablement scales better than implementation. the person who gets a room of business owners excited about their first AI pilot is rarer (and more valuable) than the person who builds it.
7. AI transformation isn't a technology problem. it's leadership, data hygiene, governance, and culture. the tech is the easy part.
8. stop uploading all your know-how to someone else's model. sovereign AI means owning your own context, your own data, your own competitive edge. if your competitors train the same model you're feeding, you're building their moat too.
9. the work of a developer is basically that of a philosopher now. less syntax, more systems thinking. you're not writing code, you're articulating intent.
10. every city needs a watering hole for people figuring out applied AI. not a class. not a conference. a place to find your tribe and see what's actually working.
11. create content that's still relevant 10 years from now. principles over products. the people chasing tool-of-the-week content are building on sand.

stay blessed!
9
12
46
4.5K
Nathan Fray
Nathan Fray@nathanfray·
@dotta been testing this afternoon and loving it!
1
0
1
2.5K
dotta 📎
dotta 📎@dotta·
We just open-sourced Paperclip: the orchestration layer for zero-human companies It's everything you need to run an autonomous business: org charts, goal alignment, task ownership, budgets, agent templates Just run `npx paperclipai onboard` github.com/paperclipai/pa… More 👇
403
706
8.1K
2.5M
Riyan Dhiman
Riyan Dhiman@riyandhiman14·
Does anyone know when Vercel build issues will be fixed??
3
0
1
154
Nathan Fray
Nathan Fray@nathanfray·
this feels like a marker day
jack@jack

we're making @blocks smaller today. here's my note to the company.

today we're making one of the hardest decisions in the history of our company: we're reducing our organization by nearly half, from over 10,000 people to just under 6,000. that means over 4,000 of you are being asked to leave or entering into consultation. i'll be straight about what's happening, why, and what it means for everyone.

first off, if you're one of the people affected, you'll receive your salary for 20 weeks + 1 week per year of tenure, equity vested through the end of may, 6 months of health care, your corporate devices, and $5,000 to put toward whatever you need to help you in this transition (if you’re outside the U.S. you’ll receive similar support but exact details are going to vary based on local requirements). i want you to know that before anything else. everyone will be notified today, whether you're being asked to leave, entering consultation, or asked to stay.

we're not making this decision because we're in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we're already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly.

i had two options: cut gradually over months or years as this shift plays out, or be honest about where we are and act on it now. i chose the latter. repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead. i'd rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome. a smaller company also gives us the space to grow our business the right way, on our own terms, instead of constantly reacting to market pressures.

a decision at this scale carries risk. but so does standing still. we've done a full review to determine the roles and people we require to reliably grow the business from here, and we've pressure-tested those decisions from multiple angles. i accept that we may have gotten some of them wrong, and we've built in flexibility to account for that, and do the right thing for our customers.

we're not going to just disappear people from slack and email and pretend they were never here. communication channels will stay open through thursday evening (pacific) so everyone can say goodbye properly, and share whatever you wish. i'll also be hosting a live video session to thank everyone at 3:35pm pacific. i know doing it this way might feel awkward. i'd rather it feel awkward and human than efficient and cold.

to those of you leaving…i’m grateful for you, and i’m sorry to put you through this. you built what this company is today. that's a fact that i'll honor forever. this decision is not a reflection of what you contributed. you will be a great contributor to any organization going forward.

to those staying…i made this decision, and i'll own it. what i'm asking of you is to build with me. we're going to build this company with intelligence at the core of everything we do. how we work, how we create, how we serve our customers. our customers will feel this shift too, and we're going to help them navigate it: towards a future where they can build their own features directly, composed of our capabilities and served through our interfaces. that's what i'm focused on now.

expect a note from me tomorrow.

jack

0
0
0
97
Nathan Fray
Nathan Fray@nathanfray·
wow writing is so hard.
0
0
0
42
Nathan Fray
Nathan Fray@nathanfray·
“Intelligence was never the thing that made us special, we just convinced ourselves that it was. Intent and desire is something weirder and stranger and more fundamentally human I think” Feels spot on.
Nick St. Pierre@nickfloats

We talk about “intelligence” a lot, but I keep circling the thought that AI’s argument is actually that intelligence doesn’t matter. Or at least intelligence as we’ve understood it for humans doesn’t actually matter. It just becomes ambient, like electricity. You don’t think about it, you just assume it’s there.

If we assume this is true for the sake of the argument, then the question becomes: what does a post-intelligence culture actually value? I often hear / say “creativity” and “imagination” but those are likely just words we’re reaching for because we don’t have better ones yet. Same way the early internet era said “information” was what mattered when really it turned out the medium was much more about connection and identity.

In the AI era it feels like it’s something closer to intention that matters to the medium. Like, what do you want to exist? Not “can you make it” or “do you know how,” because that wouldn’t even be a question in a post-intelligence culture.

One metaphor might be that AI is the first medium that asks “what do you actually want?” while stripping away every excuse about capability. Which can be pretty terrifying, because most people have been hiding behind skill their whole lives. What’s left when you can’t hide behind competence or skill is something closer to intent and desire, which feels more honest than “creativity”.

Intent and desire seem to be far more human qualities than intelligence. We’ve literally bottled intelligence in machines and soon they’ll be infinitely smarter than we can ever begin to fathom. Intelligence was never the thing that made us special, we just convinced ourselves that it was. Intent and desire is something weirder and stranger and more fundamentally human, I think.

With infinite intelligence at our fingertips the only question left becomes “what world do you want to build?” and the rarest thing in the world becomes a genuine point of view about what should exist and the desire and conviction to make it happen.

0
0
1
99
Nathan Fray
Nathan Fray@nathanfray·
@jgreenhall @Loster Oooo, very cool use case. 100% agree, feels like a big jump in abilities. Gave my agent an email and a GitHub account, and it can make commits to feature branches. Hard to wrap my head around
1
0
1
94
Jordan Hall
Jordan Hall@jgreenhall·
@Loster Yes. I've just finished my own visualization system so that I can watch the swarms collaborate. It's incredible. I also love that they give estimates in weeks.
1
0
2
379
Jordan Hall
Jordan Hall@jgreenhall·
The happenings in the Clawverse are interesting. I'm substantially more bullish than bearish. Skin in the game proof: I built and mined BTC when you could do so on a Linux box with NVIDIA cards; I'm now several days into installing, debugging (and debugging and debugging) and optimizing my OpenClaw setup (including a fresh new Mac Mini for it to live in). I think this is "quite real".
23
2
110
10.4K
Nathan Fray
Nathan Fray@nathanfray·
@petergyang @openclaw Yes. I was trying to get Clawd to babysit Claude Code for me, and it’s kinda random in its ability to finish a task vs get stuck
0
0
0
61
Nathan Fray
Nathan Fray@nathanfray·
@petergyang @openclaw Yes, I agree. The only trouble I’m having with the cron job capability is consistency. Sometimes it works, sometimes it doesn’t. Have you figured out a skill that makes it more dependable?
2
0
1
1.3K