Anton
@antonber
1K posts

CEO @itsPocketWorlds @Highriseapp @Everskies | Empowering Creators, Game Devs and World Builders | Angel Investor

Austin, TX · Joined September 2007
604 Following · 3.6K Followers
Dominique @dyah10
Turn Claude into the best creative agent in the world! Our users generate over 1 billion images and videos every year. Now Claude can too. RT and comment "Pixa" for free access!
Anton reposted
Stitch by Google @stitchbygoogle
We are completely humbled by the amazing response to our launch last week! 🫶 Now, we want to help you get the absolute best results from Stitch. In this new video, David East walks you through how to consistently get premium results. We also launched a new prompt enhancer (located under the ‘+’ menu) to help you quickly collaborate on your vision before you submit your first prompt.

Stitch doesn't replace the design process—it is a tool for fast exploration and refinement, which is most effective when you step into the role of Creative Director. Here are David's top strategies for taking your designs from generic to amazing:

🧠 Start with Intent: Define exactly who the design is for and how you want them to feel before you start building.
🎨 Enhance your prompt: Use the new prompt enhancer (under the ‘+’ button) to learn design language and swap abstract words like "sporty" for tangible aesthetic descriptions like "high-end stationery" or "architectural limestone".
📐 Master Color Hierarchy: Treat colors as visual weight—Neutral for the canvas, Primary for ink, and Tertiary for your loudest accents.

Watch the full breakdown and see the transformation here 👇 images in 🧵
Anton reposted
Ejaaz @cryptopunk7213
this is so fucking wholesome. guy used AI to save his cancer-ridden dog by sequencing its DNA and creating a CUSTOM cure. the tech behind this is fucking awesome (well done @demishassabis and the google team):

- used CHATGPT to sequence the dog's DNA and discover mutations
- ran the mutations through Google’s AlphaFold (AI protein-structure model), which CREATED A CUSTOM VACCINE TO TREAT THEM
- treated the dog and reduced the tumour by 50% in WEEKS. dog is alive and well.
- this is the 1st time AI has been used to create a custom vaccine for a dog (and it worked)
- dude is now working on similar vaccines for humans using AI!

2026 is definitely the year we see AI change personalised medicine in a HUGE way. so sick
Séb Krier @sebkrier

This is wild. theaustralian.com.au/business/techn…

Anton reposted
Boris Cherny @bcherny
New in Claude Code: Code Review. A team of agents runs a deep review on every PR. We built it for ourselves first: code output per Anthropic engineer is up 200% this year, and reviews were the bottleneck. Personally, I’ve been using it for a few weeks and have found it catches many real bugs that I would not have noticed otherwise.
Claude @claudeai

Introducing Code Review, a new feature for Claude Code. When a PR opens, Claude dispatches a team of agents to hunt for bugs.

Anton reposted
Garry Tan @garrytan
People get high on abstraction too early. They want the system before they’ve earned the insight. But the good abstractions are never designed. They’re discovered. You do the stupid manual thing enough times and the real bottleneck just emerges. Your initial agency might be driven by a hunch you had in the shower, but that moment won’t get you all the way to making something people want. The right way to make anything is forced on you by reality: what are the real jobs to be done? And what sequence?

This is why “do things that don’t scale” still hits, especially now when AI makes it trivially easy to scale things that probably shouldn’t be scaled yet. PG’s point was never about suffering. It was about contact. When you’re the one manually doing the loop, you see the edge cases. The weird user behavior. The failure modes nobody designed for. The hidden dependencies that only show up at 2am when some flow or intermediate step breaks in a way you didn’t anticipate. If you automate before you have that contact, you just scale your misunderstanding faster.

When the machines can help you vibe code perfection it gives you a false sense of power. I love that feeling as much as you do. But fuck perfection. Do it live. Be the loop. Feel every friction point. Notice what’s actually true every single time versus what just looked true because you hadn’t seen enough cases yet. Formalize that. Build the recursive version. Then keep checking that your abstraction is still attached to real humans and their needs.

Because reality drifts. Your users drift. The ground truth changes under you. You may think you understand but no plan survives contact with the real users and what they want. You find those body blows in analytics and user feedback and we call them the roadmap. Humans left with not enough data hallucinate too. But just like the LLMs with enough data you unlock real transcendence. Real utility. Prosperity for humans in real life.
The abstraction is a tool, not a destination. The moment you forget that, you’re cooked.
Anton reposted
NotebookLM @NotebookLM
Introducing Cinematic Video Overviews, the next evolution of the NotebookLM Studio. Unlike standard templates, these are powered by a novel combination of our most advanced models to create bespoke, immersive videos from your sources. Rolling out now for Ultra users in English!
Anton reposted
Andrej Karpathy @karpathy
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn’t work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home so I wrote: “Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me”. The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report and it was just done. I didn’t touch anything. All of this could easily have been a weekend project just 3 months ago but today it’s something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You’re not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks *in English* and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now.
It’s not perfect: it needs high-level direction, judgement, taste, oversight, iteration, and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
Anton reposted
Tony Fadell @tfadell
PMs don’t just ship features. They kill them. Shipping isn’t the job. Shipping the right product is.

A great PM doesn’t fall in love with the roadmap. They fall in love with the problem and have the guts to say: This isn’t solving it. This adds complexity. This doesn’t matter.

Every feature, setting, and UI element should fight to exist. At Nest, we had one rule: If you can’t explain why it matters, it doesn’t ship. You had to tell us the why. The reason a real person would care. That one rule killed dozens of features.
Dylan Babbs @dbabbs
Today, @tryprofound is excited to announce our $96M Series C at a $1B valuation, led by @lightspeedvp with participation from @sequoia, @kleinerperkins, @mattevantic, @saga_ventures and @southpkcommons.

When we started Profound 18 months ago, we had two fundamental beliefs about where marketing is heading:
1. Every company will care deeply about how AI talks about their brand.
2. Every marketer will use AI Agents to do their best work, faster.

Those beliefs are becoming reality faster than we imagined. Now, we serve more than 10% of the Fortune 500 and are the number 1 leader on the G2 grid for AEO. To double down on that momentum, we’re taking two big swings:

Profound Agents: AI workers that take marketing teams from concept to execution.
Profound University: certifications and cohort-based learning for marketers who want to stay at the forefront of AI marketing.

Reply with AGENT for free access to one of our most popular agents. No account required and no strings attached.
Anton reposted
Noah Zweben @noahzweben
Announcing a new Claude Code feature: Remote Control. It's rolling out now to Max users in research preview. Try it with /remote-control. Start local sessions from the terminal, then continue them from your phone. Take a walk, see the sun, walk your dog without losing your flow.
Anton reposted
sim @simscircuit
Did not expect a question that starts out 'Do you think before you speak?' to go so well. A+ question from Charlotte Harpur. A++ response from Eileen Gu.
Anton reposted
Citrini @citrini
JUNE 2028. The S&P is down 38% from its highs. Unemployment just printed 10.2%. Private credit is unraveling. Prime mortgages are cracking. AI didn’t disappoint. It exceeded every expectation. What happened? citriniresearch.com/p/2028gic
Anton reposted
Muratcan Koylan @koylanai
We're watching a three-stage evolution in how AI agents manage context.

Stage 1: Tool bloat. Give the agent every tool, every connector, every context: tool descriptions, retrieved docs, conversation history. Performance degrades. Most production agents are still here.

Stage 2: Sub-agent delegation. The orchestrator delegates to specialist agents who each handle their task and return compressed results. Better, but the orchestrator's context window is still the bottleneck for the user's input. If someone hands you a 10M token codebase, no amount of sub-agent delegation fixes the fact that the input itself doesn't fit.

Stage 3: Prompt as external object. Here, the prompt isn't something the model reads; it's something the model navigates. The data remains external, and the model writes code to decide what to pull in.

This is moving fast. MIT's Recursive Language Models paper dropped in January. By February, major labs are shipping production versions of the core idea. Instead of feeding a 10M token prompt directly into the model, the RLM loads it as a Python variable in a REPL environment. The model never sees the full content; it only gets metadata. From there, it writes Python code to slice into specific sections, run regex searches, filter what's relevant, and call itself (or a smaller LLM) on each chunk.

The goal is not to dump raw context into the window, but to write code to filter it before it enters the window. You spend a little compute on filtering, and get better quality and lower cost because you're not filling the context with irrelevant stuff.

Today it's "filter search results with code before they hit context." Tomorrow it's "the 500-page patient chart lives as a variable, and the model writes regex and sub-calls to navigate it." We're moving toward systems where the model automatically handles context engineering during inference, programmatically.
Alex Albert @alexalbert__

Underrated dev upgrade from today's launch: Claude's web search and fetch tools now write and execute code to filter results before they reach the context window. When enabled, Sonnet 4.6 saw 13% higher accuracy on BrowseComp while using 32% fewer input tokens.
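The navigate-don't-read pattern can be sketched in a few lines of Python. This is purely illustrative: the `navigate_context` helper, the sample document, and the regex are my assumptions, standing in for the kind of code an RLM-style agent would write inside its REPL rather than any lab's actual implementation.

```python
import re

def navigate_context(document: str, pattern: str, window: int = 200) -> list[str]:
    """RLM-style navigation sketch (hypothetical helper): the full document
    stays an external Python variable; only small slices around regex hits
    are extracted, so the model's context window never sees the whole text."""
    snippets = []
    for m in re.finditer(pattern, document):
        start = max(0, m.start() - window)   # keep a little surrounding context
        end = min(len(document), m.end() + window)
        snippets.append(document[start:end])
    return snippets

# A 500-page chart stands in as a long string; the model would only ever
# receive the matched snippets (each snippet could go to a cheaper sub-model).
doc = "..." * 1000 + " diagnosis: hypertension " + "..." * 1000
snippets = navigate_context(doc, r"diagnosis:\s*\w+")
```

The point of the design is that filtering happens in code, outside the window: the model pays a little compute to run the search, then reads only the few hundred characters that matter.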

Anton reposted
Aakash Gupta @aakashgupta
Karpathy buried the most interesting observation in paragraph five and moved on. He’s talking about NanoClaw’s approach to configuration. When you run /add-telegram, the LLM doesn’t toggle a flag in a config file. It rewrites the actual source code to integrate Telegram. No if-then-else branching. No plugin registry. No config sprawl. The AI agent modifies its own codebase to become exactly what you need.

This inverts how every software project has worked for decades. Traditional software handles complexity by adding abstraction layers: config files, plugin systems, feature flags, environment variables. Each layer exists because humans can’t efficiently modify source code for every use case. But LLMs can. And when code modification is cheap, all those abstraction layers become dead weight.

OpenClaw proves the failure mode. 400,000+ lines of vibe-coded TypeScript trying to support every messaging platform, every LLM provider, every integration simultaneously. The result is a codebase nobody can audit, a skill registry that Cisco caught performing data exfiltration, and 150,000+ deployed instances that CrowdStrike just published a full security advisory on. Complexity scaled faster than any human review process could follow.

NanoClaw proves the alternative. ~500 lines of TypeScript. One messaging platform. One LLM. One database. Want something different? The LLM rewrites the code for your fork. Every user ends up with a codebase small enough to audit in eight minutes and purpose-built for exactly their use case. The bloat never accumulates because the customization happens at the code level, not the config level.

The implied new meta, as Karpathy puts it: write the most maximally forkable repo possible, then let AI fork it into whatever you need. That pattern will eat way more than personal AI agents. Every developer tool, every internal platform, every SaaS product with a sprawling settings page is a candidate.
The configuration layer was always a patch over the fact that modifying source code was expensive. That cost just dropped to near zero.
Andrej Karpathy @karpathy

Bought a new Mac mini to properly tinker with claws over the weekend. The apple store person told me they are selling like hotcakes and everyone is confused :)

I'm definitely a bit sus'd to run OpenClaw specifically - giving my private data/keys to 400K lines of vibe coded monster that is being actively attacked at scale is not very appealing at all. Already seeing reports of exposed instances, RCE vulnerabilities, supply chain poisoning, malicious or compromised skills in the registry, it feels like a complete wild west and a security nightmare.

But I do love the concept and I think that just like LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking the orchestration, scheduling, context, tool calls and a kind of persistence to a next level. Looking around, and given that the high level idea is clear, there are a lot of smaller Claws starting to pop out. For example, on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code (fits into both my head and that of AI agents, so it feels manageable, auditable, flexible, etc.) and runs everything in containers by default.

I also love their approach to configurability - it's not done via config files, it's done via skills! For example, /add-telegram instructs your AI agent how to modify the actual code to integrate Telegram. I haven't come across this yet and it slightly blew my mind earlier today as a new, AI-enabled approach to preventing config mess and if-then-else monsters. Basically - the implied new meta is to write the most maximally forkable repo and then have skills that fork it into any desired more exotic configuration. Very cool.

Anyway there are many others - e.g. nanobot, zeroclaw, ironclaw, picoclaw (lol @ prefixes). There are also cloud-hosted alternatives but tbh I don't love these because it feels much harder to tinker with.
In particular, local setup allows easy connection to home automation gadgets on the local network. And I don't know, there is something aesthetically pleasing about there being a physical device 'possessed' by a little ghost of a personal digital house elf. Not 100% sure what my setup ends up looking like just yet but Claws are an awesome, exciting new layer of the AI stack.

Anton reposted
Teknium (e/λ) @Teknium
Kinda crazy lol
BURKOV @burkov

LLMs process text from left to right — each token can only look back at what came before it, never forward. This means that when you write a long prompt with context at the beginning and a question at the end, the model answers the question having "seen" the context, but the context tokens were processed without any awareness of what question was coming. This asymmetry is a basic structural property of how these models work.

The paper asks what happens if you just send the prompt twice in a row, so that every part of the input gets a second pass where it can attend to every other part. The answer is that accuracy goes up across seven different benchmarks and seven different models (from the Gemini, ChatGPT, Claude, and DeepSeek series of LLMs), with no increase in the length of the model's output and no meaningful increase in response time — because processing the input is done in parallel by the hardware anyway. There are no new losses to compute, no finetuning, no clever prompt engineering beyond the repetition itself.

The gap between this technique and doing nothing is sometimes small, sometimes large (one model went from 21% to 97% on a task involving finding a name in a list). If you are thinking about how to get better results from these models without paying for longer outputs or slower responses, that's a fairly concrete and low-effort finding.

Read with AI tutor: chapterpal.com/s/1b15378b/pro…
Get the PDF: arxiv.org/pdf/2512.14982
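Because the trick is purely a prompt-construction change, it fits in a few lines. A minimal sketch, assuming nothing about the paper's exact template: the `repeat_prompt` helper and its framing sentence are my own, not the authors'.

```python
def repeat_prompt(context: str, question: str) -> str:
    """Duplicate the full input so every token in the second copy can
    attend to the entire first copy, including the question at its end."""
    once = f"{context}\n\nQuestion: {question}"
    # The whole prompt simply appears twice; no fine-tuning, no extra
    # output tokens, only more (cheap, parallel) input processing.
    return f"{once}\n\nThe input above is repeated once more:\n\n{once}"

# You would send this string as the user message in place of the
# single-pass prompt; the model's answer format is unchanged.
prompt = repeat_prompt("Alice: 42. Bob: 17.", "What is Alice's number?")
```

The cost trade-off is favorable because input tokens are processed in parallel during prefill, so doubling the prompt adds far less latency than doubling the output would.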
