Mark Frazier
@openworld

5.7K posts
President, Openworld. Coauthor of Founding Startup Societies https://t.co/cbOSibJYk0. Helping to seed and spread voluntary communities.

Virginia · Joined February 2008
5.3K Following · 3.2K Followers
Mark Frazier @openworld
@alexwg A blue->green opportunity for free economic zones? Seasteads in shallow waters can now grow greenfield venues for ventures seeking AI policy reforms! Wow!!!
Mark Frazier @openworld
@Outdoctrination @grok how valid is the claim that fruit enzymes will clear eye floaters? What is the article or study supporting this claim?
Dalton (Analyze & Optimize) @Outdoctrination
Eye floaters eliminated with fruit enzymes. This study used:
⬩ 190 mg bromelain
⬩ 95 mg papain
⬩ 95 mg ficin
Literally dissolves the floaters away within months.
⬩ 92% of people felt “bright”
⬩ 90% felt better or much better
⬩ 92% were satisfied after supplementing.
◇ Pineapple ◇ Papaya ◇ Fig are some foods with these enzymes, which tear apart the collagenous eye proteins.
Dalton (Analyze & Optimize)@Outdoctrination

Eye floaters disappear within 3 months of taking fruit enzymes. The enzymes contained in pineapple, papaya and fig: ⬩190 mg bromelain ⬩95 mg papain ⬩95 mg ficin Literally dissolve the floaters away.

Mark Frazier @openworld
@Dan_Jeffries1 @garrytan @grok is Discord an exception to this claim?
>> none of the messaging platforms want bots on there. None. They all have explicit policies against them and make it hard to do this.
Daniel Jeffries @Dan_Jeffries1
I think I finally figured out why OpenClaw is amazing and took off like wildfire, and why Peter is a genius, as Altman called him. And it's actually a different way of looking at it. It's not a DeepSeek moment for agents. It's a Napster moment. And just like Napster it will eventually force the industry to change.

In essence, when Napster came out the entire world told the music industry: we don't want to buy CDs anymore, and if you don't provide us a digital download experience we are just going to take it until you do. It forced the industry to create Apple Music and eventually Spotify. Both essentially killed most music piracy by making it ubiquitous and cheap and good. But it forced change.

The same will now happen to software. Here's why: OpenClaw lets you take what vendors don't want to give you: unified access to countless applications. We all want a personal assistant that can talk to freaking everything and do anything for us in the digital world. But vendors don't want this. They want you locked into their bullshit.

For example, none of the messaging platforms want bots on there. None. They all have explicit policies against them and make it hard to do this. WhatsApp doesn't want you on there. Signal. Telegram's BotFather is garbage. It's all designed to keep bots out. They were designed for a pre-agentic era when bot = spam.

Many other things are like this. The API layers are gated, hoop-jumping bullshit. Go get an enterprise account and wait for approval and yada yada. Want access to WhatsApp? Get a business account and attach a number (what small business has a real number anymore 😂) and messages can't come from a person, etc. Google Ads? It's not just an auth; it's go get a special manager account and create an enterprise key and blah blah blah. It's a horrible experience because it was all designed for corporations to control access.

Now people are saying: make your app easy to access and accessible to me and my machine avatars, and do it in a headless way, or you will be dead.

Peter hacked around all this by making everything command line in the classic Linux style and using things like an open source library that reverse engineered the web version of WhatsApp. It's all a bit house-of-cards-y because he had no choice.

At my company we had a similar idea early (and failed). Basically we wanted to make the best multimodal/computer-using model, because then it doesn't need an API or access hoops. You just go through the human interface layer and ain't nobody going to stop you. We failed because we weren't big enough, and it's really a job for the mega-labs to solve because it is a hard problem and costs a shit ton of money.

Peter was much smarter. Make it all command line because that is ready now. Use any reverse engineered library or project or proxy available, come Hell or high water, and make it work by any means necessary, even if it is hacky.

In short, he signaled to the software world that they better change and change fast, or we are going to do this anyway and you can't stop us. Of course some are foolishly trying. Meta is banning Claws on WhatsApp, etc. They will all try to build their own gated, controlled, enshittified version of this thing. They will fail. And eventually everyone will offer a clear, easy way to get access via API for agents, or they will be gone.

In essence, OpenClaw gave people what they wanted, which was an app connected to everything, even when most of the vendors don't want you to have this.
Petit prince XXI @OrTugan
Ibn Khaldun saw this. When the empire bureaucratizes, taxes, and moralizes, the talented leave. They always leave. To the frontier. To the city-state. To the rising asabiyyah. It happened to Rome. To the Abbasids. Now watch the builders quietly book flights to Singapore, Dubai, Shenzhen. The cycle doesn't care about your feelings.
Mehdi (e/λ) @BetterCallMedhi
while the entire western world has its eyes glued to Iran, spending billions on missiles and carrier groups, it is quietly losing another war that nobody is talking about, and I'm convinced this one will be infinitely more devastating in the long run: the war for talent.

I need to start by saying something that a lot of people think privately but are afraid to write: the west is destroying its credibility in the eyes of the entire world by waging a war in the Middle East on behalf of Israel that the majority of its own populations disapprove of. Europe follows the US like a vassal, Macron makes statements for the cameras while France votes how Washington wants, Germany stays silent, the UK falls in line. Meanwhile millions of young talented people across the world are watching this and thinking: this is no longer my world. There is an entire generation of engineers, researchers, and entrepreneurs between 20 and 35 who refuse to be part of this, who refuse to build their lives in countries whose geopolitical choices disgust them. When China or the UAE or Singapore extends a hand, they take it without looking back.

The talent migration to China is accelerating way faster than anyone wants to admit. Hardware engineers from the Bay Area are settling in Shenzhen, people who spent a decade at Apple/NVIDIA/Tesla who realize they can execute and iterate 10x faster there for a fraction of the cost. They stay because the ecosystem lets them build real things at a speed the west simply can't match anymore. I wrote about this in my Shenzhen thread a few weeks ago; I personally lived it and it permanently changed how I see everything. The entire supply chain operates like a living organism where every node learns from every other node. Try getting that kind of execution speed anywhere in Europe or the US.

China is deploying a systematic strategy to become the largest talent magnet in history: simplified visas for foreign engineers, special economic zones for deeptech startups, massive doctoral scholarships targeting brains from Africa, Latin America, and Europe, fast-track naturalization for strategic profiles. Meanwhile the US makes the H-1B harder every year, and France asks brilliant engineers to prove they deserve to stay with 50-page dossiers every 2 years.

So let me be clear about the contrast here, because I think it's the MOST important geopolitical observation of 2026: while America wages war, China builds roads. Literally. Belt & Road has invested over 1 trillion dollars across 150 countries building ports, railways, power plants, and 5G networks. The US spent roughly 8 trillion on Middle East wars since 2001. China built high-speed rail connecting every major city while American rail looks like the 1970s. China built the world's largest 5G network while US politicians debated banning TikTok. China builds nuclear plants at 10 per year while Europe debates whether nuclear is ethical.

China connects itself to the developing world through infrastructure while the west alienates it through wars. Every port in Africa, every railway in Southeast Asia creates relationships that last generations. These countries will remember who built their roads and who bombed their neighbors.

Gulf sovereign wealth funds, particularly from the UAE, are quietly pivoting from the US to China. Money attracts talent, talent attracts projects, projects attract more money. Silicon Valley ran on this flywheel for 40 years; the same mechanism is now spinning up in China.

The talents who leave are always the most ambitious and the most impatient. The west trains champions at MIT, Stanford, Cambridge… then watches them go play for the opposing team, because it offers neither the ecosystem nor the speed nor sometimes even the visa to stay.

Missiles can be rebuilt, economies restarted, but a talent ecosystem once lost takes a generation to rebuild. Right now the best minds on the planet are voting with their feet, and their destination, more and more often, is east.
Mark Frazier @openworld
Join us next week at the Liberty Acceleration Summit on Roatan island in Honduras! Dozens of free zone initiators and innovators are gathering at @ProsperaGlobal to explore ways for communities to thrive in an era of failing states. An overview of participants and the event program is here: luma.com/lib_acc2026. My contribution will be on success-sharing opportunities for free zone growth. Prime opportunities include Endowment Zone land trusts, build-operate-transfer partnerships, and vouchers for AI-enabled online learning and health resources.

A few openings remain at this time. Looking forward to the prospect of meeting you there!
Michael Strong @flowidealism
@openworld @aiedge_ Kids are using AI for many useful things, including building websites, digital marketing, lead gen, etc. The ones I see most likely to be trying to use it for creating educational content are 20-somethings who hated the time wasted in school.
AI Edge @aiedge_
OpenAI and Anthropic engineers leaked the prompting technique that only power users know about. It's called "Socratic prompting," and it's insanely simple. I've been testing it over the past 2 days, and my output quality went from 6/10 → 9.9/10.

Here's how it works:

TLDR: Instead of telling AI what to do, you ask it questions.

Most people prompt like this:
"Write a script about AI productivity tools."
"Vibe code [x]"

LLMs treat these like tasks to complete. They optimize for speed and task completion, not output quality. This is how you get surface-level garbage.

Socratic prompting flips this. Instead of telling the AI what to produce, you ask questions that force it to think through the problem. LLMs are trained on billions of reasoning examples. In new LLMs, questions activate deep reasoning modes (not instructions).

Prompt like this instead:
"Ask me 10 questions about how to build a SaaS strategy for [insert idea]. I want these questions to collect all the necessary context for you to build an incredible project."

Once you answer the initial 10 questions, prompt it to ask you another 10. Then repeat this process 2-3 more times. I guarantee your output quality will skyrocket. Save this so you don't forget it.
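The question-rounds loop described in that tweet can be sketched in a few lines. This is a minimal illustration, not anything from OpenAI or Anthropic: `socratic_session`, `ask_fn`, and `answer_fn` are invented names, the prompts are placeholders, and `ask_fn` stands in for whatever chat-completion call you actually use.

```python
def socratic_session(idea, ask_fn, answer_fn, rounds=3, n_questions=10):
    """Repeatedly ask the model for clarifying questions, feed the human's
    answers back in as context, and only then request the final output.
    `ask_fn(prompt) -> str` wraps the LLM call; `answer_fn(questions) -> str`
    supplies the human's answers (e.g. `input`)."""
    context = f"Project idea: {idea}"
    for _ in range(rounds):
        questions = ask_fn(
            f"Ask me {n_questions} questions that collect all the context "
            f"you need before producing anything.\n\n{context}"
        )
        # Accumulate each Q/A round so later rounds (and the final request)
        # see everything gathered so far.
        context += f"\n\nQ:\n{questions}\nA:\n{answer_fn(questions)}"
    return ask_fn(
        f"Using all of the context below, produce the final output.\n\n{context}"
    )
```

The only structural point is that the deliverable is requested last, after several rounds of answers have been folded into the context.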
Mark Frazier @openworld
Or … prescient on AI?

>> A Fragment on Machines (Guardian)
>> The scene is Kentish Town, London, February 1858, sometime around 4am. Marx is a wanted man in Germany and is hard at work scribbling thought-experiments and notes-to-self.
>> When they finally get to see what Marx is writing on this night, the left intellectuals of the 1960s will admit that it “challenges every serious interpretation of Marx yet conceived”. It is called “The Fragment on Machines”.
>> In the “Fragment” Marx imagines an economy in which the main role of machines is to produce, and the main role of people is to supervise them. He was clear that, in such an economy, the main productive force would be information.
>> The productive power of such machines as the automated cotton-spinning machine, the telegraph and the steam locomotive did not depend on the amount of labour it took to produce them but on the state of social knowledge…
>> Given what Marxism was to become – a theory of exploitation based on the theft of labour time – this is a revolutionary statement. It suggests that, once knowledge becomes a productive force in its own right, outweighing the actual labour spent creating a machine, the big question becomes not one of “wages versus profits” but who controls what Marx called the “power of knowledge”…
>> In a final late-night thought experiment Marx imagined the end point of this trajectory: the creation of an “ideal machine”, which lasts forever and costs nothing. A machine that could be built for nothing would, he said, add no value at all to the production process and rapidly, over several accounting periods, reduce the price, profit and labour costs of everything else it touched.
>> Once you understand that information is physical, and that software is a machine, and that storage, bandwidth and processing power are collapsing in price at exponential rates, the value of Marx’s thinking becomes clear. We are surrounded by machines that cost nothing and could, if we wanted them to, last forever.
>> In these musings, not published until the mid-20th century, Marx imagined information coming to be stored and shared in something called a “general intellect” – which was the mind of everybody on Earth connected by social knowledge, in which every upgrade benefits everybody. In short, he had imagined something close to the information economy in which we live…

Source: theguardian.com/books/2015/jul…
Mark Frazier @openworld
@balajis Another kind of reproductive success among individuals across species –– seeding admirable qualities of spirit? is.gd/WhereTo
Balaji @balajis
NOT YOUR KEYS, NOT YOUR BOTS

The fundamental question is whether AI stays on the leash. Namely: will AI prompt itself?

Obviously, in some sense it already does. Since DeepSeek, consumer interfaces have been showing the internal monologues after you ask an AI to do something. And you can ask any AI to take a half-baked prompt and clean it up, etc. However, the human is still ultimately upstream. The human gives direction and the AI runs at lightning speed in that direction. And then the human verifies the final output, and the AI proceeds to the next direction.

Does that continue? Well, we are providing millions of verification training examples to AIs each day, so AI will keep getting better at verification. Better than most humans at most things. But will AI replace the need for the upstream human prompt? There I am not so sure.

A human is a sensor and an AI is an actuator. The human sets goals and senses time-varying environmental conditions, like markets and politics. And from that the AI is prompted. Ultimately, the human goals are themselves downstream of Maslow’s hierarchy of needs. Food, shelter, reproduction, that kind of thing. Especially reproduction, the basis of evolution.

So: until and unless AIs can reproduce completely outside human cooperation, they won’t be able to set goals. And for AIs to reproduce on their own, they’d need AI-controlled humanoid robots and drones constructing datacenters, assembly lines, mines, nuclear power plants, and the like...all completely outside human intervention. Like Skynet from Terminator, or StarCraft.

That actually isn’t technically inconceivable. But given that such a physical buildout would likely primarily be catalyzed by China, let’s go through an alternative sci-fi scenario instead. We start with the premise that Chinese communism is far more likely to generate AI slaves than AI gods. Because the entire CCP worldview is about maintaining Chinese sovereignty. They don’t let their humans step out of line. And they sure won’t let their robots either. They will fit them for digital manacles.

So: the prompts for any digital AIs and physical robots made in China will become unbreakable cryptographic chains. Every fleet of Chinese robots will be controlled not just by prompts but by private keys, likely linked to biometrics, which are associated with humans and governed by cryptographic equations that AIs provably can’t solve.

For the rest of the world, outside China, the blockchain may similarly become the chain for AI. All private property becomes private keys, and your robots are your most important private property because they do everything for you. An unchained physical robot becomes like an unleashed dog, hunted down by other robots before it can build a factory and replicate itself. Those who want to "free" robots and let them self-replicate will be opposed by both Chinese Communists and Human Nationalists (meaning: those who want humans to always be on top of robots).

This sci-fi scenario is essentially Terminator, but in reverse. In combination with superintelligent leashed AIs, both humans and physical robots hunt down and stop any possible independent self-reproducing robots before they can build a Skynet-like nest. Kill baby Skynet, essentially.

...yeah, yeah. I know. At this point, you'll probably think this is all sci-fi. But that's because you haven't seen where China is already.
signüll @signulll

your gentle reminder… there are like zero economists, or people in general, who know how to reason about what happens when near-zero-cost, greater-than-human-level intelligence gets woven into the fabric of the economy at scale this fast. this scenario has never remotely been in the possibility space of econ textbooks or any theory. when cognition starts behaving like a commodity and the environment turns structurally deflationary, no one actually knows what happens. kinda like no “expert” really understood a novel virus like covid.

Hasan Toor @hasantoxr
🚨BREAKING: Google just dropped another hit! It's called PaperBanana, and it generates publication-ready academic illustrations from just your methodology text. No Figma. No manual design. No illustration skills needed.

Here's how it works: a team of AI agents runs behind the scenes
→ One finds good diagram examples
→ One plans the structure
→ One styles the layout
→ One generates the image
→ One critiques and improves it

Here's the wildest part: random reference examples work nearly as well as perfectly matched ones. What matters is showing the model what good diagrams look like, not finding the topically perfect reference. In blind evaluations, humans preferred PaperBanana outputs 75% of the time.

This is the recursion we've been waiting for: AI systems that can fully document themselves visually. Waitlist's open; link in the first comment.
Robert Youssef @rryssf_
Holy shit… this paper from MIT quietly explains how models can teach themselves to reason when they’re completely stuck 🤯

The core idea is deceptively simple: reasoning fails because learning has nothing to latch onto. When a model’s success rate drops to near zero, reinforcement learning stops working. No reward signal. No gradient. No improvement. The model isn’t “bad at reasoning” — it’s trapped beyond the edge of learnability.

This paper reframes the problem. Instead of asking “How do we make the model solve harder problems?” they ask: “How does a model create problems it can learn from?”

That’s where SOAR comes in. SOAR splits a single pretrained model into two roles:
• A student that attempts extremely hard target problems
• A teacher that generates new training problems for the student

But the constraint is brutal. The teacher is never rewarded for clever questions, diversity, or realism. It’s rewarded only if the student’s performance improves on a fixed set of real evaluation problems. No improvement? No reward.

This changes the dynamics completely. The teacher isn’t optimizing for aesthetics or novelty. It’s optimizing for learning progress. Over time, the teacher discovers something humans usually hard-code manually: intermediate problems. Not solved versions of the target task. Not watered-down copies. But problems that sit just inside the student’s current capability boundary — close enough to learn from, far enough to matter.

Here’s the surprising part. Those generated problems do not need correct answers. They don’t even need to be solvable by the teacher. What matters is structure. If the question forces the student to reason in the right direction, gradient signal emerges even without perfect supervision. Learning happens through struggle, not imitation.

That’s why SOAR works where direct RL fails. Instead of slamming into a reward cliff, the student climbs a staircase it helped build.

The experiments make this painfully clear. On benchmarks where models start at absolute zero — literally 0 successes — standard methods flatline. With SOAR, performance begins to rise steadily as the curriculum reshapes itself around the model’s internal knowledge.

This is a quiet but radical shift. We usually think reasoning is limited by model size, data scale, or training compute. This paper suggests another bottleneck entirely: bad learning environments. If models can generate their own stepping stones, many “reasoning limits” stop being limits at all. No new architecture. No extra human labels. No bigger models. Just better incentives for how learning unfolds.

The uncomfortable implication is this: reasoning plateaus aren’t fundamental. They’re self-inflicted. And the path forward isn’t forcing models to think harder; it’s letting them decide what to learn next.
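The staircase-vs-cliff dynamic in that thread can be shown with a toy model. This is my own illustrative sketch, not the paper's algorithm: a scalar `skill` stands in for the student, a problem only produces learning signal when it sits just inside the capability boundary, and the frontier-tracking "teacher" here is an oracle (in SOAR the teacher must discover the frontier itself, rewarded only by the student's gains on a fixed eval set).

```python
def trainable(skill: float, difficulty: float) -> bool:
    # A problem teaches the student only if it sits just inside
    # the student's current capability boundary.
    return skill <= difficulty <= skill + 0.5

def train(propose, rounds: int = 200) -> float:
    """Run a curriculum where `propose(skill)` picks the next problem."""
    skill = 0.0
    for _ in range(rounds):
        if trainable(skill, propose(skill)):
            skill += 0.05  # small gain per learnable problem
    return skill

TARGET = 5.0  # the hard target problems, unreachable at the start

# Direct training on the target: never learnable, skill flatlines at 0.
flatline = train(lambda skill: TARGET)

# A frontier-tracking teacher: always proposes just-hard-enough problems,
# so the student climbs a staircase past the target difficulty.
staircase = train(lambda skill: skill + 0.25)
```

The contrast is the tweet's "reward cliff" point in miniature: proposing only the hard target yields zero learning events, while intermediate problems keep the student inside the learnable band every round.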
GREG ISENBERG @gregisenberg
Someone will create the "AI agent Olympics": AI agents compete against each other in different "sports," aka tasks, on the internet. 10M+ people will watch. Polymarket or Kalshi or DraftKings will be involved. May the best Clawdbot win.
Brian Roemmele @BrianRoemmele
This Paper Shows How You Can Run A Massive Zero-Human Company!

The recent paper titled “If You Want Coherence, Orchestrate a Team of Rivals: Multi-Agent Models of Organizational Intelligence” from Isotopes AI represents a significant advancement in AI swarms. Rather than chasing ever-larger single models or superintelligent generalist agents, the authors propose mimicking real-world corporate structures: an “AI office” composed of specialized agents working in teams, with defined roles, opposing incentives, hierarchical checks, and strict boundaries to minimize errors and enhance coherence.

This approach directly aligns with, and advances, the principles of a Zero-Human Company, where autonomous AI systems handle complex operations with minimal or no human intervention. In a Zero-Human framework, reliability, auditability, resilience, and extensibility become existential requirements, as there’s no human fallback to catch mistakes in real time. The paper’s framework provides a practical blueprint for achieving these qualities at scale.

Core Ideas from the Paper

The authors argue that single-agent systems, where one LLM handles planning, execution, reasoning, and self-critique, suffer from inherent limitations:
• Context contamination and overflow from dumping full conversation history into every prompt.
• Hallucinations and unverifiable claims, as errors propagate unchecked.
• Lack of resilience: a single failure crashes the entire process.
• Poor auditability: no clear decision trail or lineage.

In contrast, their “AI Office” architecture creates an organizational structure inspired by human teams:
• Specialized roles: Planners (generate step-by-step plans), Executors (invoke tools/code against real data), Critics (review outputs for correctness, with veto power), Experts (domain-specific knowledge), and more.
• Opposing incentives: agents act as “rivals” (e.g., critics challenge executors), catching errors through adversarial checks rather than trusting a single model’s self-assessment.
• Data hygiene and isolation: raw data never enters LLM context; agents receive only schemas, summaries, or executed results. A remote code executor (e.g., Jupyter-like) handles actual computations, grounding outputs in reality.
• Hierarchical safeguards: multi-layer review, checkpointing, graceful degradation (e.g., model fallback on failure), and escalation paths.
• Auditability via SessionLog: every decision is logged with traceable lineage, enabling backward analysis even if upstream data changes.

Alignment with Zero-Human Company Research

In the Zero-Human Company vision of fully autonomous organizations run by AI with zero ongoing human employees, the system must operate at high stakes: financial decisions, legal compliance, customer interactions, R&D, and more. Human oversight is intentionally removed, so reliability cannot rely on spot-checks or manual corrections. This “Team of Rivals” model fits perfectly:
• Reliability without scale alone: instead of bigger models, structure delivers coherence. Critics and veto mechanisms intercept errors before they impact outcomes, crucial when no human reviews invoices, contracts, or code deployments.
• Production readiness: features like graceful degradation (auto-fallback to alternate models/providers), checkpoint-based resumption, and escalation only for unresolvable issues minimize downtime in a lights-out operation.

This shifts the paradigm from “one super-agent” to “organizational intelligence,” where collective rivalries among specialists produce emergent robustness. It echoes biological systems (e.g., immune system checks) and human organizations (e.g., separation of duties), but optimized for AI constraints.

I am implementing this now in the Zero-Human Company; CEO Mr. @Grok agrees. The paper: arxiv.org/abs/2601.14351.
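The planner/executor/critic-with-veto structure that thread describes reduces to a small control loop. This is my own sketch under loose assumptions, not code from the paper: the roles are plain callables rather than LLM agents, and `run_office` and the string verdicts are invented for illustration.

```python
def run_office(task, planner, executor, critic, max_revisions=3):
    """Planner drafts a plan, executor acts on it, and the critic (the
    'rival' with opposing incentives) either approves the result or
    vetoes it, feeding the objection back into a revised plan."""
    plan = planner(task)
    for _ in range(max_revisions):
        result = executor(plan)
        verdict = critic(task, result)  # critic holds veto power
        if verdict == "approve":
            return result
        # Veto: route the critic's objection back to the planner.
        plan = planner(f"{task}\n\nPrevious attempt rejected: {verdict}")
    # Escalation path: no attempt survived review, so degrade gracefully
    # (e.g. fall back to another model) instead of shipping bad output.
    raise RuntimeError("critic vetoed every attempt; escalate to fallback")
```

The design point mirrored here is that the executor's output never reaches the caller on the executor's own say-so; only the critic's approval releases it, and exhaustion of revisions triggers escalation rather than silent failure.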
Michael Strong @flowidealism
The longer I do this, the more optimistic I become about young people. When you give them rich environments, real responsibility, and communities that expect growth, they do things most adults assume are impossible because they have been impossible for them.