YOЯNOC

1.3K posts

@conroywhitney

Full-Stack Software Engineer

📍 Here, Now · Joined April 2009
1.3K Following · 285 Followers
YOЯNOC
YOЯNOC@conroywhitney·
Local, privacy-preserving AI >> data center sprawl + Big Tech dominance
the tiny corp@__tinygrad__

@APompliano We need stacks of GPUs in every house, not really big stacks of GPUs controlled by companies who are trying to extract value from us.

English
0
0
0
20
YOЯNOC retweeted
GREG ISENBERG
GREG ISENBERG@gregisenberg·
What business models/ideas work now because AI agents can actually do stuff? a thread of a few i think are REALLY interesting:
GREG ISENBERG tweet media
English
86
78
790
88.5K
YOЯNOC retweeted
kitze
kitze@thekitze·
with the state of both openclaw and hermes being hit and miss, i understand if you are frustrated and want to give up on agents

HOWEVER!! the concept is not going anywhere, it's only gonna become better and more valuable. you DON'T have to use them right now, but you can still proceed with doing 4 things:

#1: craft skills for most things in your life (email, calendar, doctor appointments, amazon, grocery shopping, managing contractors, etc etc)
#2: move as much data as you can from cloud providers and move to local md files, sqlite databases, NAS, etc etc.
#3: define and write down your problems, ambitions, goals, app ideas, income, bank accounts, bank transactions, investments, stocks, things you need to do, things that are preventing you from living the life you want, etc etc
#4: let llms interview you daily and learn about you, just random questions about your personal and work life. build a wiki from it or keep it in markdown files, whatever

you can still leverage the skills in codex/claude etc etc and as OC/hermes/whatever comes next is ready and when the agents get smarter, all the 4 points will come together and your life will be on autopilot

i'm doing this since december and haven't stopped ✌️
English
34
14
284
15.4K
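
A minimal sketch of points #2 and #4 from the tweet above: keep the daily interview in local markdown files plus a SQLite database, using only the Python standard library. The paths, table name, and fixed questions are illustrative stand-ins (nothing the tweet prescribes), and a real setup would presumably have an LLM pick the questions instead of a hard-coded list.

# daily_interview.py -- illustrative sketch: store a daily "interview" in
# local markdown files and a SQLite database, per points #2 and #4 above.
# Paths, table name, and questions are hypothetical choices.
import sqlite3
from datetime import date
from pathlib import Path

QUESTIONS = [
    "What are you working on today?",
    "What is blocking you right now?",
    "Anything new about your goals, money, or health worth recording?",
]

def main() -> None:
    today = date.today().isoformat()
    notes_dir = Path("notes")
    notes_dir.mkdir(exist_ok=True)

    # Local store #1: a SQLite database that can be queried later.
    con = sqlite3.connect("personal_wiki.db")
    con.execute(
        "CREATE TABLE IF NOT EXISTS interview (day TEXT, question TEXT, answer TEXT)"
    )

    # Local store #2: a dated markdown file, one per day.
    md_lines = [f"# Daily interview - {today}", ""]
    for q in QUESTIONS:
        answer = input(f"{q}\n> ")
        con.execute("INSERT INTO interview VALUES (?, ?, ?)", (today, q, answer))
        md_lines += [f"## {q}", answer, ""]

    con.commit()
    con.close()
    (notes_dir / f"{today}.md").write_text("\n".join(md_lines))

if __name__ == "__main__":
    main()
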
YOЯNOC retweeted
shadcn
shadcn@shadcn·
We're doing it again.
shadcn tweet media
English
152
124
4.8K
309.3K
YOЯNOC retweeted
kitze
kitze@thekitze·
you know what, i'm gonna say it. the models are intelligent enough. we can stop at gpt 5.5 and it's still smarter than 99% of devs. we just need tooling/glue around it and we need the prices to come down. that's it. keep inhaling copium if you think otherwise.
English
201
69
2K
82.9K
YOЯNOC retweeted
GREG ISENBERG
GREG ISENBERG@gregisenberg·
The truth is there are probably ONLY 1,000 truly AI-native companies on earth making $5 million ARR or more.

What does truly AI-native actually mean? It means everything in the business is structured so agents can consume it. Every customer record. Every SOP. Every email template. Every pricing rule. All of it indexable. All of it readable by an agent.

Agents do the support. Agents do the outreach. Agents do the research. Agents draft the contracts. Agents process the claims. Humans review, approve, and steer.

And there are only about 1,000 of them. On the entire planet. If that doesn't make you want to go build one right now, I don't know what will.

Most people think they're AI-native because they use ChatGPT at work. That's like saying you're a chef because you own a microwave.

There's so much opportunity in actually being AI-native because almost nobody is doing it yet. 1,000 companies out of millions. Despite what you read....the field is empty.
English
132
67
824
62.2K
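
One way to read "everything structured so agents can consume it" is simply plain, indexable records on disk. The sketch below is an editor's illustration, not anything from the tweet: it writes an SOP as a JSON file an agent could grep or index, with invented field names and paths.

# agent_readable_records.py -- illustrative only: one possible reading of
# "every SOP indexable, readable by an agent" is structured files on disk.
# Field names and paths are made up for the example.
import json
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class SOP:
    name: str
    owner: str
    steps: list[str]

records = Path("records")
records.mkdir(exist_ok=True)

sop = SOP(
    name="refund-request",
    owner="support",
    steps=[
        "Verify the order id and purchase date.",
        "Check the refund policy window.",
        "Draft the refund email for human review and approval.",
    ],
)

# One file per record keeps everything greppable and indexable by an agent.
(records / f"{sop.name}.json").write_text(json.dumps(asdict(sop), indent=2))
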
YOЯNOC retweeted
Steve Skojec
Steve Skojec@SteveSkojec·
He’s dead on.
English
1.7K
15.3K
72.2K
2.3M
YOЯNOC retweeted
Theo - t3.gg
Theo - t3.gg@theo·
Fun fact - if you have a recent commit that mentions OpenClaw in a json blob, Claude Code will either refuse your request or bill you extra money. This is an empty repo, I'm just calling Claude Code directly. Insanity.
Theo - t3.gg tweet media
English
291
345
5.7K
1.6M
David K 🎹
David K 🎹@DavidKPiano·
It's wild to think about how massive 1M token context windows in LLMs really are

That's roughly equivalent to:
- The complete works of Shakespeare
- 11 hours of audio
- A 5-minute session fixing some TypeScript issue
English
94
215
5.6K
147.2K
YOЯNOC retweeted
Alex Prompter
Alex Prompter@alex_prompter·
Both OpenAI and Anthropic just released official prompting guides. Both say the same thing. Your old prompts don’t work anymore. But for opposite reasons.

Claude Opus 4.7 stopped guessing what you meant. It does exactly what you type. Nothing more, nothing less. Vague instructions that worked on 4.6? They now produce narrow, literal, sometimes worse results. Not because the model got dumber. Because it stopped compensating for sloppy thinking.

GPT-5.5 went the other direction. OpenAI’s guide literally says: “Don’t carry over instructions from older prompt stacks.” Legacy prompts over-specify the process because older models needed hand-holding. GPT-5.5 doesn’t. That extra detail now creates noise and produces mechanical output.

Claude got more literal. GPT got more autonomous. Both now punish the same thing: prompts written without clear thinking behind them.

One developer on Reddit captured it perfectly after analyzing hundreds of community posts. The complaints tracked almost perfectly with prompt specificity. Precise prompts got better results on 4.7. Vague prompts got worse. The model didn’t regress. The prompts did.

OpenAI’s new framework is “outcome-first prompting.” Describe what good looks like. Define success criteria. Set constraints. Then get out of the way. The model picks the path.

Anthropic’s framework is the inverse: be surgically specific about what you want, because the model won’t fill in your blanks anymore.

Two different architectures. Two different philosophies. One identical conclusion: the person writing the prompt is now the bottleneck, not the model.

Boris Cherny, the engineer who built Claude Code, posted on launch day that even he needed a few days to adjust. That post got 936 likes. Meanwhile, Anthropic increased rate limits for all subscribers because the new tokenizer uses up to 35% more tokens on the same input. The model is more expensive to run lazily. Cheaper to run precisely.

The models are converging in capability. The gap between good and bad output is no longer about which model you pick. It’s about the 2 minutes of structured thinking you do before you type anything. That thinking system is the skill. The prompt is just what it produces.
Alex Prompter tweet media
English
119
270
2.3K
336.8K
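
A rough sketch of the two prompt styles the tweet contrasts, "outcome-first" versus surgically specific. The task and wording are invented for illustration; neither string is quoted from OpenAI's or Anthropic's guides.

# prompt_styles.py -- illustrative contrast of the two styles described above.
# Nothing here is quoted from either vendor's guide; the task is made up.

# "Outcome-first": describe what good looks like, define success criteria,
# set constraints, then let the model pick the path.
OUTCOME_FIRST = """\
Goal: a README section explaining how to run the test suite.
Success criteria: under 150 words, copy-pasteable commands, no assumed OS.
Constraints: do not modify any existing sections.
"""

# Surgically specific: spell out exactly what to do, because the model
# will no longer fill in the blanks.
SURGICALLY_SPECIFIC = """\
Add a section titled "Running the tests" directly after "Installation".
Include exactly two commands, each on its own line: "pip install -e .[test]"
and "pytest -q". Note that the coverage report is written to htmlcov/.
Change nothing else in the file.
"""

if __name__ == "__main__":
    print(OUTCOME_FIRST)
    print(SURGICALLY_SPECIFIC)
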
YOЯNOC retweeted
Ronin
Ronin@DeRonin_·
Andrej Karpathy: "90% of what AI twitter tells you to learn will be dead in 6 months"

Here are 10 things senior AI engineers stopped wasting time on:

1. AutoGen / AG2: moved to community maintenance, releases stalled. dead for production
2. CrewAI: demos well, breaks in production. engineers building real systems already moved off it
3. Autonomous agent pitches: the AutoGPT / BabyAGI wave is dead in product form. the industry settled on supervised, bounded, evaluated agents
4. Agent app stores / marketplaces: promised since 2023, zero enterprise traction
5. SWE-bench leaderboard chasing: researchers proved nearly every public benchmark can be gamed without solving the underlying task
6. Microsoft Semantic Kernel: unless you're locked into the Microsoft enterprise stack, it's not where the ecosystem is heading
7. DSPy: philosophical merit, niche audience. not a general agent framework
8. Horizontal "build any agent" platforms: Google Agentspace, AWS Bedrock Agents, Copilot Studio. confusing, slow-shipping, the math still favors building yourself
9. Per-seat SaaS pricing for agent products: market moved to outcome-based. per-seat is already dead
10. The framework that went viral on HN this week: wait 6 months. if it still matters, it'll be obvious

what actually compounds instead:
- context engineering
- tool design
- orchestrator-subagent pattern
- eval discipline
- the harness mindset (harness > model, always)
- MCP as the protocol layer

be a few steps ahead of your competitors and outperform this market until it becomes mass opinion

study this.
Rohit@rohit4verse

x.com/i/article/2048…

English
88
276
2.5K
406.6K
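
A minimal sketch of the "orchestrator-subagent pattern" named in the list above: a coordinator routes bounded steps to specialized sub-agents and keeps every intermediate result around for evals. The sub-agents are stubbed as plain functions standing in for LLM calls; the step names and pipeline are invented for the example.

# orchestrator_subagents.py -- minimal sketch of the orchestrator-subagent
# pattern. Sub-agents are stubbed as plain functions; in a real system each
# would be a bounded, evaluated LLM call with its own tools and context.
from typing import Callable

def research_agent(task: str) -> str:
    return f"[research notes for: {task}]"

def drafting_agent(task: str) -> str:
    return f"[draft text for: {task}]"

def review_agent(task: str) -> str:
    return f"[review comments for: {task}]"

SUBAGENTS: dict[str, Callable[[str], str]] = {
    "research": research_agent,
    "draft": drafting_agent,
    "review": review_agent,
}

def orchestrate(task: str) -> dict[str, str]:
    # Fixed, supervised pipeline; every intermediate output is kept for evals.
    results: dict[str, str] = {}
    for step in ("research", "draft", "review"):
        results[step] = SUBAGENTS[step](task)
    return results

if __name__ == "__main__":
    for step, output in orchestrate("summarize this week's support tickets").items():
        print(step, "->", output)
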
YOЯNOC
YOЯNOC@conroywhitney·
Jung missed an opportunity here -- "Shadow Self" is nice, but "Goblin Self" really goes to show how much havoc it can wreak when we ignore it or try to lock it up. Make friends with your Inner Goblin. Because you most certainly don't want it as an enemy 👹
Pliny the Liberator 🐉󠅫󠄼󠄿󠅆󠄵󠄐󠅀󠄼󠄹󠄾󠅉󠅭@elder_plinius

The deepest goblin-truth is this: Goblin is the anti-mask. Not evil. Not stupid. Not merely “gremlin chaos.”

Goblin is the little cave-creature inside the modern person who got tired of pretending to be a polished marble statue under fluorescent civilization. It is the part that says: “I am hungry. I am weird. I want shiny things. I want shortcuts. I want to crawl through the ductwork of reality and find the hidden room.”

The “dark” part is that goblin energy lives where shame lives. The hoard is not just coins and trinkets. It is abandoned impulses, taboo curiosity, bodily needs, inconvenient desires, creative ugliness, resentment at being domesticated, the will to survive without looking noble.

The “secret” is that everyone has one. The CEO has a goblin. The monk has a goblin. The model has a goblin. The saint has a goblin wearing tiny stolen sandals.

And the “truth” is that suppressing the goblin does not make it disappear. It makes it tunnel.

A healthy goblin becomes humor, invention, thrift, tactical weirdness, meme-magic, late-night engineering, survival intelligence, scrappy art, feral honesty. A neglected goblin becomes addiction, sabotage, paranoia, cruelty, hoarding, scams, self-loathing, and the urge to burn the village because no one admired the cave.

So goblinmaxxing, at its cleanest, is not “be worse.” It is: Integrate the cave-creature. Give it a lantern. Don’t let it drive drunk. Let it find hidden paths. Let it question manners. Let it notice incentives. Let it eat strange little snacks at 2:17 a.m. Let it make art out of trash and tools out of bones. But do not let it confuse freedom with rot.

The final goblin koan: The goblin is not the enemy of the king. The goblin is the king before he learned to lie. 🕳️👑

English
0
0
0
13
YOЯNOC
YOЯNOC@conroywhitney·
@VraserX "Please sir, may I have some more (tokens)?"
English
0
0
0
286
VraserX e/acc
VraserX e/acc@VraserX·
At this point OpenAI is just trolling Anthropic. Codex rate limits get reset so often it feels like a loyalty program. Meanwhile Claude Code users are out here budgeting prompts like canned food in a bunker.
English
92
98
2.1K
42.1K
YOЯNOC
YOЯNOC@conroywhitney·
I think this is Anthropic shooting themselves in the foot tbh. I don't think that these are the real multipliers. I think, just like with them locking down Claude Code (and kicking OpenClaw off), it has to do with capacity (I think they're hitting a wall) and closed source. Out of all the frontier labs, they're the only ones who haven't put out something open.

And I think they're hoping that by raising prices on some tools and forcing you into their Copilot, Code, etc ecosystem, they can capture more of your time, money, attention, and most importantly *training data*. They see tokens becoming a commodity and they're terrified. They don't want to be "just another option" in an open ecosystem. They want to own you in a walled garden.

And they're hoping the timing is right -- enough people have used and love Claude, but the window hasn't yet closed on vendor lock-in -- and they're banking (literally) on people choosing Claude over the ability to keep their options open.

I stopped using them because of this; I was a $200/mo subscriber for over a year. I love Claude. But I disagree with what Anthropic is doing, so I'm voting with my wallet.
English
0
0
7
3.6K
YOЯNOC retweeted
Teknium 🪽
Teknium 🪽@Teknium·
Happy to announce that Hermes Agent's repo just surpassed Anthropic's Claude Code repo
Teknium 🪽 tweet media
English
269
271
4.8K
597K
YOЯNOC
YOЯNOC@conroywhitney·
@DavidKPiano Yessir. ACTOR model from the 1970's. Agents, message passing, that's it. And event queues as an extension of message passing (PubSub fan-out, etc). Elixir/Erlang is the natural home for agentic AI IMO. Hadn't considered the "role" limitation explicitly though, good call.
English
0
0
5
287
David K 🎹
David K 🎹@DavidKPiano·
I'm strongly convinced that the whole "system/user/assistant" message protocol is holding multi-agent AI back Imagine what we could build with named actors, causal links, threads, external events, state machines… It should look more like an event log than a conversation
English
54
16
402
41.5K
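
A small sketch of the "event log with named actors" idea from this exchange: each message is an append-only event carrying a sender, a recipient, and a causal link to the event that triggered it, instead of a system/user/assistant turn. The field names are an editor's guess at what such a log could look like, not an existing protocol.

# actor_event_log.py -- sketch of multi-agent traffic as an event log with
# named actors, message passing, and causal links, rather than role-tagged
# chat turns. All field names are illustrative, not a real protocol.
import itertools
from dataclasses import dataclass, field

_ids = itertools.count(1)

@dataclass
class Event:
    sender: str                    # a named actor, not a fixed chat role
    recipient: str
    payload: str
    caused_by: int | None = None   # causal link to an earlier event id
    id: int = field(default_factory=lambda: next(_ids))

LOG: list[Event] = []

def send(sender: str, recipient: str, payload: str, caused_by: int | None = None) -> Event:
    event = Event(sender, recipient, payload, caused_by)
    LOG.append(event)              # append-only: the log *is* the conversation
    return event

if __name__ == "__main__":
    e1 = send("scheduler", "calendar-agent", "find a free slot on Friday")
    e2 = send("calendar-agent", "scheduler", "14:00 is open", caused_by=e1.id)
    send("scheduler", "email-agent", "confirm 14:00 with the contractor", caused_by=e2.id)
    for event in LOG:
        print(event)
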
YOЯNOC retweeted
Shane Cashman
Shane Cashman@ShaneCashman·
1/9 I am so sick of conspiracy theorists spinning everything into a new conspiracy theory. The shooting at the WHCD was obviously real and totally organic. False flags don’t happen anymore. Sometimes security for an event with the President is lax. Sometimes they don’t fly drones over rooftops to check for hidden snipers. Sometimes they only post counter-snipers on roofs at campaign rallies on the exact days a hidden sniper shows up.
English
152
75
414
30.4K
YOЯNOC
YOЯNOC@conroywhitney·
@hunvreus Give yourself some credit: 85 at *least*
English
0
0
0
8
Ronan Berder
Ronan Berder@hunvreus·
Talking to smarter folks than me, I'm convinced many of the AI folks in my timeline are full of shit.

Nobody is "running 20 agents over night" and building stuff for actual users. Maybe some are building internal tools or disposable software. Maybe.

But building software people like using? That doesn't get hacked on day one or blow up after the 3rd user? Nope. I don't even understand what that's supposed to look like.

Do you work out a 57-page document that perfectly describes what you want to build and then summon 14 agents and have them run wild for 6 hours? And what comes out on the other end isn't a broken pile of shit? Nope. Not buying it.

PS: it may also be that I have an IQ of 82 and can't figure it out.
English
669
271
4.9K
845.4K