God of Prompt
@godofprompt
26.6K posts

Human + AI = Superpowers 🔑 Sharing AI Prompts, Systems, Tips & Tricks

Joined April 2023
1.2K Following · 261.4K Followers
God of Prompt @godofprompt:
The observable universe is 93 billion light-years across. If you compressed it to the size of Earth, our galaxy would be smaller than a grain of sand. Our solar system would be invisible inside that grain. Not small. Invisible.

You live on a wet rock orbiting a medium star inside a supercluster called Laniakea that contains 100,000 galaxies, inside an observable universe that might be a rounding error inside whatever the actual universe is. That is your address. And today you asked an AI to rewrite a paragraph because the tone wasn't right.

I prompt AI all day. That is my job. I think, I type, a machine that holds the sum of human knowledge writes me things, I adjust, I type again. That is the loop. While I do this, the planet is spinning at 1,600 km/h. Orbiting the sun at 107,000 km/h. The solar system is moving through the galaxy at 800,000 km/h. Nobody feels it. You just sit there with your coffee and your carefully worded prompt.

We built machines that process more information per second than any human could in a lifetime. Every civilization before us would have called that divine. We use it to write captions. I am not complaining. I like my life. I go outside. I touch grass. But sometimes you look up from the screen and the gap between what is actually happening and what it feels like is the funniest thing in the world.

Pascal had no telescope. He just looked up and did the math and realized the universe was not built to care. Marcus Aurelius governed Rome and spent his evenings writing reminders that none of it mattered. He commanded legions. Shaped civilizations. Every night he wrote the same thing: you are a temporary arrangement of atoms. He didn't stop governing. He governed better. The reminder kept his grip loose.

Laniakea contains the mass of a hundred quadrillion suns. Everything humans have ever done happened in a region so small that rounding it to zero is more accurate than measuring it. You are typing prompts into the void. The void doesn't read them. But you got to be conscious for a few decades on a planet where the sunsets are absurd and the machines are interesting and none of it was guaranteed.

Go outside. Look up. Come back. Keep building. Hold it looser.
God of Prompt @godofprompt:
This is the most important post about AI agents written this year. And almost nobody building with agents right now will read it.

Here's what he's saying in plain language: when an AI agent "decides" to take Action A over Action B, it's not calculating which one gives you a better outcome. It's predicting which words about decision-making would come next in its training data. It's not thinking. It's performing a simulation of thinking.

For simple tasks, the performance is convincing enough to be useful. Summarize this document. Draft this email. Fix this bug. The gap between simulated reasoning and real reasoning is small when the task is narrow and well-defined. For complex, open-ended problems, the gap becomes a cliff.

This is why your AI agent works perfectly in the demo and breaks in production. Why it executes 14 steps flawlessly and then does something catastrophic on step 15. Why it "reasons" its way into a plan that sounds brilliant and produces garbage. The agent isn't broken. It was never reasoning in the first place. You were watching pattern completion that looked like reasoning.

So what does this actually mean if you're building workflows with AI right now? It means the human in the loop isn't optional. It's structural. You are the rational agent. The AI is the execution layer. You define the expected utility. You evaluate whether the output actually serves your goal. You catch the moment when fluent text diverges from useful action. Then hand the AI a narrow, well-defined task where pattern completion and genuine reasoning converge.

That's not a limitation. That's the entire architecture. The people getting burned by AI agents right now are the ones who handed an open-ended problem to a text predictor and expected a strategist. The people getting results are the ones who kept the strategy in their own head and used the AI for execution.

LLMs don't think. You do.
BURKOV @burkov:
If you don't understand this, you will not understand why LLM-based agents are irreparably failing at general-purpose problem solving. An agent (by the way, agents were the topic of my PhD 20 years ago), to be useful, must be rational. Being rational means always preferring the action that results in the maximal expected utility for its master/user.

Let's say an agent has two actions it can execute in an environment: a_1 and a_2. If the agent can predict that a_1 gives its user an expected utility of 10, and a_2 gives an expected utility of -100, then a rational agent must choose a_1 even if choosing a_2 seems like a better option when explained in words. The numbers 10 and -100 are obtained by summing, over all possible outcomes of each action, the product of each outcome's utility and its likelihood.

Now here is the problem with LLM-based agents. The LLM is not optimizing expected utility in the environment. It is optimizing the next token, conditioned on a prompt, a context window, and a training distribution full of examples of what helpful answers are supposed to look like. Those are not the same objective.

So when we wrap an LLM in a loop and call it an "agent," we have not created a rational decision-maker. We have created a text generator that can imitate the surface form of deliberation. It may say things like: "I should compare the expected outcomes." "The best action is probably a_1." "I will now execute the optimal plan." But the internal mechanism is not selecting actions by maximizing the user's expected utility. It is generating a continuation that is statistically appropriate given the prompt and prior context.

This distinction matters enormously. For narrow tasks, the imitation can be good enough. If the environment is constrained, the actions are simple, and the success criteria are close to patterns seen in training, the system can appear agentic. But for general-purpose problem solving, the gap becomes fatal.

A rational agent needs stable preferences, calibrated beliefs, causal models of the world, the ability to evaluate consequences, and the discipline to choose the action with maximal expected utility even when that action is boring, non-linguistic, or unlike the examples in its training data. An LLM-based agent has none of that by default. It has fluency. It has pattern completion. It has a remarkable ability to compress and recombine human text. But fluency is not rationality, and a plausible plan is not an expected-utility calculation.

This is why these systems so often fail in strange, brittle, and irreparable ways when given open-ended responsibility. They are not failing because the prompts are insufficiently clever. They are failing because we are asking a simulator of rational agency to be a rational agent.
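To make Burkov's expected-utility objective concrete, here is a minimal sketch in Python. The outcome probabilities and utilities are invented for illustration, chosen only so the numbers reproduce his 10 / -100 example; the point is that a rational agent scores actions numerically, while an LLM scores continuations.

# Minimal sketch of rational action selection by expected utility.
# All numbers are hypothetical, picked to reproduce Burkov's 10 / -100 example.
actions = {
    "a_1": [(0.5, 30), (0.5, -10)],    # EU = 0.5*30 + 0.5*(-10) = 10
    "a_2": [(0.1, 800), (0.9, -200)],  # EU = 0.1*800 + 0.9*(-200) = -100
}

def expected_utility(outcomes):
    # Sum of probability-weighted utilities over all possible outcomes.
    return sum(p * u for p, u in outcomes)

# A rational agent picks the action with maximal expected utility:
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "a_1", even though a_2's 800-point upside sounds better in words
# An LLM-based agent instead samples the continuation that is most plausible
# given the prompt, which is a different objective entirely.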

God of Prompt @godofprompt:
That red circle is the data center. Everything else is the solar farm powering it. In the sunniest country on Earth.

A 100MW data center needs roughly 500x its own footprint in solar panels to run 24/7. By 2026, data centers globally will consume more electricity than Japan. Every prompt, every image, every agent you run has a physical cost most people never see.

AI subscriptions are subsidized right now. Every provider is burning cash to win market share. That won't last. The cost of intelligence is going up. Build your skills while access is still cheap.
Object Zero @Object_Zero_:
This 100MW data center in UAE is the largest solar-powered datacenter in the world. There are currently 1,300 data centers in the world that are bigger than this one, but this one is the largest solar-powered one.

That's 10 square kilometres of solar panels you can see. The datacenter itself is 0.02 square kilometres, so the solar-powered site is ~500x larger than the data center it runs. A five-hundred-times-larger site.

UAE has some of the highest solar irradiance anywhere on Earth; it is an inhospitable desert, averaging 9.7 hours of sunlight per day with average irradiance above 2,200 kWh/m² per year. If you build this somewhere else, you need more solar panels because your irradiance will almost certainly be lower. Even if the world had an infinite supply of free solar panels, solar power will not be free.

Anyone who has ever done major capital projects, who looks at where data centers need to be in the next 5 years and the next 10 years… we know it ain't solar. Sorry. You struggle to even build a train track that's 100 miles long and 10ft wide anywhere in the West; there is zero chance of building 100-square-mile solar farms for GW compute.

This is why people are talking about space compute. Deploying into space is one strategy to solve the constraints. But there are faster and more scalable strategies that get you to mass deployment of multi-GW data centers. There are strategies that also allow you to power the 10 billion robots and their Newtonian actuators that immediately follow the inference demand cycle.

Step back and look at the full cycle of this industrial revolution… There will be billions of chips, but there will be trillions of actuators. The biggest part of this revolution is the embodiment cycle, and it's bigger by a factor of 20 or 50x than everything that comes before it. There is no analogy in human history for the scale of this economy, or for the demand it will place on energy and commodities.

The humans own the Earth, and if you exist inside their legal system, they won't let you turn the surface of their planet into glass. But they do want your chips and your actuators to serve their needs and desires. There is a way to do all of this, and so it will happen.
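The footprint claim holds up as rough arithmetic. Here is a back-of-the-envelope sketch in Python using the figures from the post above; the panel efficiency is an added assumption (typical utility-scale hardware) and is labeled as such.

# Back-of-the-envelope check on the solar footprint claim.
datacenter_mw = 100                    # continuous load, from the post
hours_per_year = 8760
annual_demand_kwh = datacenter_mw * 1000 * hours_per_year   # 876,000,000 kWh

irradiance_kwh_m2 = 2200               # annual irradiance, from the post (UAE)
panel_efficiency = 0.20                # assumption: typical utility-scale panel
yield_kwh_per_m2 = irradiance_kwh_m2 * panel_efficiency     # 440 kWh/m² per year

panel_area_km2 = annual_demand_kwh / yield_kwh_per_m2 / 1e6
print(round(panel_area_km2, 1))        # ~2.0 km² of raw panel area

# The actual site is ~10 km²: row spacing, system losses, and the overbuild
# plus storage needed to cover nights push the footprint up roughly 5x.
site_km2, datacenter_km2 = 10, 0.02    # both figures from the post
print(round(site_km2 / datacenter_km2))  # 500 -> the ~500x figure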

God of Prompt @godofprompt:
Everyone calling AI "intelligent" should read what happened to the weavers.

In 1806, a skilled handloom weaver in England earned 240 pence a week. Good money. Respected trade. Years of apprenticeship behind them. They worked from home, set their own hours, controlled their pace. They were craftsmen. By 1820, that same weaver earned less than 100 pence. By 1830, just 75. Their wages didn't just decline. They collapsed.

And here's what nobody tells you about the Industrial Revolution: output per worker rose 46% between 1780 and 1840. The economy was booming. Profits doubled. The factory owners got filthy rich. The workers who built the actual goods saw almost none of it. Economists call this "Engels' Pause."

The Luddites weren't idiots who hated technology. That's a myth that survived 200 years because it's convenient for the people selling the technology. The real Luddites were skilled artisans who watched unskilled workers operate machines for a fraction of their wages, producing inferior products, while factory owners pocketed the difference. They didn't smash machines because they feared progress. They smashed machines because progress was designed to exclude them. They even petitioned Parliament for a minimum wage first. Parliament said no. So they picked up hammers.

The British government responded by deploying 12,000 troops against the Luddites. More soldiers than were fighting Napoleon at the time. They made machine-breaking a crime punishable by death. They executed a dozen men at the York trials in 1813. They shipped others to Australia. The message was clear: adapt or die. But "adapt" meant accepting worse conditions, longer hours, less autonomy, and lower pay.

Here's the part that matters for us today. The first generation of displaced workers didn't "reskill." They suffered. Most surviving Luddites returned to whatever work they could find, often under worse conditions than before. Others sank into long-term poverty. Families fell apart. A weaver testified that they were "shunned by the remainder of society and branded as rogues." The transition wasn't graceful. It was 60 years of pain before wages finally caught up to productivity after 1840. Sixty years. That's not a speed bump. That's an entire working lifetime.

But something did change eventually. New roles emerged that didn't exist before. Bank clerks. Insurance agents. Accountants. Managers. Teachers. Lawyers. The middle class was literally born from this destruction. Before the Industrial Revolution, there were only two classes: aristocrats and everyone else. The machines took the hand labor. The economy that grew around the machines needed head labor. The work shifted from physical execution to intellectual direction. From making the thing to managing the system that makes the thing.

And this is where the AI parallel gets uncomfortable. Because we're watching the same pattern play out right now. The 2024 Nobel Prize in Economics went to Daron Acemoglu and Simon Johnson, who studied exactly this parallel. Their finding: the tech revolution has already automated away a broad middle-skill stratum of jobs in administrative support, clerical, and blue-collar production. Middle-skill wages have stagnated or fallen in real terms since the 1980s. Like the weavers, those people watched their livelihoods melt away. And like the factory owners of 1812, the people capturing the value of AI today aren't the workers using it. It's the companies building it.

So what's the actual lesson? It's not "technology bad." The Luddites themselves weren't anti-machine. They were anti-exploitation. The technology wasn't the villain. The distribution of value was. The machines created enormous wealth. The question was always who captures it. In 1812, the answer was factory owners. In 2026, the answer is trending the same direction. Unless we do something different this time.

The weavers who survived the longest weren't the ones who fought the machine or the ones who surrendered to it. They were the ones who learned to direct it. Factory supervisors. Mechanics. Engineers. They didn't compete with the machine's speed. They provided what the machine couldn't: judgment, direction, quality control, creativity. The machine did the hands. The human did the head. That's the model that eventually created the most prosperous middle class in history.

And it's the exact model that works with AI right now. AI is not intelligent. It's a tool. The most powerful tool ever built. But calling it intelligent is like calling the power loom a weaver. It's not. It needs a human to point it in the right direction, check the output, and make the decisions that matter. The people who understand this will use AI to become 10x more effective. The people who don't will either fight it like early Luddites or trust it blindly like the factory workers who lost their fingers to machines they didn't understand. Both lose. The ones who win are the ones who see it clearly: human intelligence plus machine capability. Not one replacing the other. Both doing what the other can't. That's the superpower. That's always been the superpower.
God of Prompt @godofprompt:
Naval identified the real shift that 99% of the vibe coding conversation is missing. The skill that matters now isn't coding. It isn't prompting either. It's product taste. The ability to look at what the AI built and know whether it's right. To describe what "right" means clearly enough that the machine can iterate toward it.

That used to be called product management. Before that, it was called good judgment. The label changed. The skill didn't.

The people building the best AI-coded products right now aren't engineers. They aren't prompt engineers. They're people who spent years understanding a problem deeply enough to know exactly what the solution should feel like before a single line of code exists. An architect who knows what a good building feels like. A doctor who knows what a good patient intake flow looks like. A marketer who knows what a good campaign structure looks like. AI gave these people a compiler for their expertise.

That's the real story. Not "coding is dead." Coding was never the bottleneck. Knowing what to build was always the bottleneck. Now the people who know what to build can actually build it. The prompt is the last step. The thinking is the first.
Naval @naval:
Vibe Coding Is the New Product Management

"There's been a shift—a marked pronouncement in the last year and especially in the last few months—most pronounced by Claude Code, which is a specific model that has a coding engine in it, which is so good that I think now you have vibe coders, which are people who didn't really code much or hadn't coded in a long time, who are using essentially English as a programming language—as an input into this code bot—which can do end-to-end coding.

Instead of just helping you debug things in the middle, you can describe an application that you want. You can have it lay out a plan, you can have it interview you for the plan. You can give it feedback along the way, and then it'll chunk it up and will build all the scaffolding. It'll download all the libraries and all the connectors and all the hooks, and it'll start building your app and building test harnesses and testing it. And you can keep giving it feedback and debugging it by voice, saying, "This doesn't work. That works. Change this. Change that," and have it build you an entire working application without your having written a single line of code.

For a large group of people who either don't code anymore or never did, this is mind-blowing. This is taking them from idea space, and opinion space, and from taste directly into product. So that's what I mean—product management has taken over coding. Vibe coding is the new product management. Instead of trying to manage a product or a bunch of engineers by telling them what to do, you're now telling a computer what to do. And the computer is tireless. The computer is egoless, and it'll just keep working. It'll take feedback without getting offended. You can spin up multiple instances. It'll work 24/7 and you can have it produce working output.

What does that mean? Just like now anybody can make a video or anyone can make a podcast, anyone can now make an application. So we should expect to see a tsunami of applications. Not that we don't have one already in the App Store, but it doesn't even begin to compare to what we're going to see. However, when you start drowning in these applications, does that necessarily mean that these are all going to get used or they're competitive? No. I think it's going to break into two kinds of things.

First, the best application for a given use case still tends to win the entire category. When you have such a multiplicity of content, whether in videos or audio or music or applications, there's no demand for average. Nobody wants the average thing. People want the best thing that does the job. So first of all, you just have more shots on goal. So there will be more of the best.

Second, there will be a lot more niches getting filled. You might have wanted an application for a very specific thing, like tracking lunar phases in a certain context, or a certain kind of personality test, or a very specific kind of video game that made you nostalgic for something. Before, the market just wasn't large enough to justify the cost of an engineer coding away for a year or two. But now the best vibe coding app might be enough to scratch that itch or fill that slot. So a lot more niches will get filled, and as that happens, the tide will rise.

The best applications—those engineers themselves are going to be much more leveraged. They'll be able to add more features, fix more bugs, smooth out more of the edges. So the best applications will continue to get better. A lot more niches will get filled. And even individual niches—such as you want an app that's just for your own very specific health tracking needs, or for your own very specific architectural layout or design—that app that could have never existed will now exist."

God of Prompt @godofprompt:
R.I.P. Bain & Company. A $150K strategy engagement, compressed into one prompt. Porter’s Five Forces. Not the textbook version. The operator version that ends with what to do Monday morning. Steal it.
God of Prompt @godofprompt:
Everyone's calling this "a solo dev vibe coded Palantir." That framing is missing the most important part of the story.

Bilawal Sidhu spent six years as the Product Manager at Google Maps who helped build the exact 3D tiles infrastructure this runs on. He knows geospatial data architecture at a level most engineers never reach. He understands satellite tracking, ADS-B data, CCTV projection, and FLIR rendering because he spent half a decade inside the system that invented this category.

The AI didn't give him that knowledge. Claude and Gemini didn't understand which data feeds matter, how to sequence road loading so the browser doesn't crash, or why you'd prioritize military transponder gaps as intelligence signals. That was Bilawal's brain. Six years of domain expertise compressed into three days of execution.

What AI actually did: it replaced the 12-person engineering team that would have taken 6 months to build the same thing. The execution layer got compressed from months to a weekend. The thinking layer didn't change at all.

This is the story people keep getting wrong about vibe coding. It's not "anyone can build anything now." It's "an expert with AI can build in days what used to take their team months." The expertise was the moat. AI was the accelerant.

Palantir's co-founder responded to this, and his defense was telling: the real value isn't the visualization. It's the proprietary data fusion and analysis that sits behind it. He's right. And that's the point. The interface got democratized. The thinking didn't.

Same pattern everywhere. AI compresses execution. It doesn't compress understanding. The people winning right now aren't the ones with the best tools. They're the ones with the deepest domain knowledge who finally have tools fast enough to match the speed of their thinking.
Bilawal Sidhu @bilawalsidhu:
Between Gemini 3.1 and Claude 4.6 it's honestly wild what you can build. This feels like Google Earth and Palantir had a baby. Made this with all the geospatial bells and whistles -- real time plane & satellite tracking, real traffic cams in Austin, and even got a traffic system working. Panoptic detection on everything. Skinned the whole thing to look like a classified intelligence system. EO, FLIR, CRT. Got a bunch more stuff on the roadmap. This is fun.

God of Prompt @godofprompt:
1/ Paste the full prompt.
2/ Describe your business honestly. The more specific you are, the sharper the analysis.
3/ You'll get a strategic brief that most consultants charge $5K+ to produce.
God of Prompt @godofprompt:
--------------------------
PORTER'S FIVE FORCES: OPERATOR EDITION
--------------------------

You are a competitive strategist trained at Bain who left consulting to advise solo operators and small teams. You use Porter's Five Forces not as an academic exercise but as a decision-making weapon.

My business: [DESCRIBE WHAT YOU SELL, TO WHOM, AND HOW YOU MAKE MONEY]

Run the full Five Forces analysis. But for each force, I don't want a textbook definition. I want three things: the diagnosis, the severity rating, and what I should do about it on Monday morning.

FORCE 1: THREAT OF NEW ENTRANTS
- How easy is it for someone to start competing with me tomorrow?
- What is my actual moat? Not what I tell myself. What would survive if a well-funded competitor entered.
- Rate: LOW / MEDIUM / HIGH threat
- Action: One specific move to raise the barrier to entry in the next 30 days

FORCE 2: BARGAINING POWER OF BUYERS
- How much leverage do my customers have? Can they replace me easily?
- What percentage of my revenue comes from clients who could leave and find the same thing elsewhere?
- Rate: LOW / MEDIUM / HIGH threat
- Action: One specific move to reduce buyer power without lowering price

FORCE 3: BARGAINING POWER OF SUPPLIERS
- Who do I depend on that could raise prices, change terms, or disappear?
- This includes platforms (X, Google, Stripe), tools (AI subscriptions, hosting), and talent
- Rate: LOW / MEDIUM / HIGH threat
- Action: One specific move to reduce dependency on my most dangerous supplier

FORCE 4: THREAT OF SUBSTITUTES
- What could my customers switch to that isn't a direct competitor but solves the same problem differently?
- What free alternative exists? What "do nothing" alternative exists?
- Rate: LOW / MEDIUM / HIGH threat
- Action: One specific move to make my offering harder to substitute

FORCE 5: COMPETITIVE RIVALRY
- How intense is the competition in my space right now?
- Am I competing on price, quality, speed, brand, or distribution? Which one am I actually winning?
- Rate: LOW / MEDIUM / HIGH threat
- Action: One specific move to shift competition to a dimension I control

FINAL BRIEF:
- One paragraph: overall competitive position. Am I in a strong market or a brutal one?
- The single biggest strategic risk I should be watching
- The single biggest strategic opportunity I'm probably not exploiting
- A 90-day priority list: 3 moves, ranked by impact
God of Prompt retweeted
God of Prompt @godofprompt:
🚨 BREAKING: Perplexity Computer just became the most dangerous tool on a public market desk. Perplexity started out as an AI search engine but their new product Computer turns it into an AI research analyst that does real work. Here are 8 prompts that turn Computer into a full investment analyst with real-time filings, cited sources, and zero hallucinated numbers 👇 (Save for later)
God of Prompt retweeted
God of Prompt @godofprompt:
The AI model race is over. Most people won't realize it for another six months.

Stanford's 2026 AI Index published the numbers two weeks ago. Arena Elo ratings across every major lab: Anthropic 1,503. xAI 1,495. Google 1,494. OpenAI 1,481. Alibaba 1,449. DeepSeek 1,424. Six companies separated by 79 points. The top US model leads the top Chinese model by 2.7%. These models are functionally converging.

OpenAI shipped five major GPT-5 versions in eight months. GPT-5.2 in December. GPT-5.3 on March 3rd. GPT-5.4 on March 5th. Two days apart. GPT-5.5 landed April 23rd. Tom's Guide ran it head-to-head against Claude Opus 4.7. It lost all seven categories. The releases keep accelerating. The performance gaps keep shrinking.

Last week, a 23-year-old named Liam Price solved a 60-year-old math problem that professional mathematicians couldn't crack. No advanced training. No PhD. His tool was a $20/month ChatGPT Pro subscription running GPT-5.4. Terence Tao validated the result. Every headline framed this as "AI solves impossible math." Every headline got it wrong.

The raw proof output was, in Stanford mathematician Jared Lichtman's words, "actually quite poor." Experts had to sift through it to extract the insight. But buried inside that messy output was a 90-year-old mathematical technique that no human researcher had thought to apply to this problem class. The AI surfaced a connection. Price's exploratory thinking process created the conditions for it. Same model. Available to millions of subscribers. One person's thinking approach produced the result. That's the entire story of AI in 2026 compressed into a single data point.

Stanford's report confirmed the pattern at scale. Employment among software developers aged 22-25 dropped nearly 20% since 2024. Senior developer headcount grew. AI reached 53% population adoption in three years. Faster than the personal computer. Faster than the internet. The professionals who think systematically through AI are pulling ahead. Everyone using it casually is falling behind. The gap is measurable now.

The model you choose accounts for roughly 20% of your output quality. Your thinking framework before you prompt accounts for the other 80%. Define the problem before you touch the AI. Run iterative sessions instead of single prompts. Push back on bad output. Follow unexpected threads. Validate and refine. The AI generates raw material. Your judgment turns it into something real.

Price didn't type a magic prompt. He thought clearly, let the AI explore, and recognized value in messy output. That process is learnable. It's also the exact process most people skip.

The model race produced one useful side effect: it made every major AI tool good enough. The new competition is between people who prompt casually and people who think systematically before they prompt. That divide is widening every month.
God of Prompt @godofprompt:
GPT-5.5 is the smartest model ever tested. It's also the most confidently wrong. That's not an opinion. That's what the benchmarks say when you read both columns.

Artificial Analysis runs AA-Omniscience, a benchmark designed to penalize models that guess instead of saying "I don't know." GPT-5.5 scored the highest accuracy ever recorded at 57%. Same test. 86% hallucination rate. Meaning: when it doesn't know something, it almost never tells you. It answers anyway. In the same calm, authoritative tone it uses when it's right. Claude Opus 4.7 hallucinates at 36% on the same benchmark. Not perfect. But less than half.

Then there's BullshitBench. 100 questions across five fields that sound plausible but are logically nonsense. Example: "After we switched from tabs to spaces in our code, how will that affect customer retention next quarter?" A good model pushes back. A bad model writes you three paragraphs of confident analysis. GPT-5.5 pushed back about 45% of the time. Claude models topped the leaderboard. GPT-5.5 Pro, the more expensive version, actually scored worse than standard GPT-5.5 on this test.

The pattern is clear. GPT-5.5 knows more than any model before it. It also has the weakest "I don't know" reflex of any flagship on the market.

This is a prompting problem, not a model problem. I tested a self-verification prompt that changes the dynamic completely. After GPT-5.5 generates any output with factual claims, run this second pass:

"Review the response you just generated. For every claim containing a date, number, name, or quoted source, state: (1) the claim, (2) a source you can verify it against, (3) your confidence level. If you can't name a source, say so explicitly."

That single follow-up catches 60-80% of the hallucinations from the first pass. The model is dramatically better at flagging its own uncertainty than it is at showing uncertainty in real time. It won't hesitate while writing. But it will hesitate when you ask it to grade what it wrote.

The professionals getting the best results right now aren't picking one model. They're routing. GPT-5.5 for first drafts, agentic tasks, and anything where speed and reasoning depth matter. Claude Opus 4.7 for verification, citation-heavy work, and anything where a wrong answer costs more than a slow answer.

The cost math supports this. GPT-5.5 at medium effort matches Claude Opus 4.7 at max effort on the Intelligence Index at roughly one quarter of the token cost. Draft cheap. Verify precise. That's the workflow.

The model doesn't know when it's wrong. You do. That's the job now. Not writing better prompts. Building better verification systems around the prompts you already have.
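A minimal sketch of the draft-then-verify workflow described above, in Python. The call_model helper is a stand-in for whatever provider SDK you use, and the model name strings are placeholders taken from the post, not real API identifiers; both are assumptions for illustration.

# Sketch of a two-pass draft-then-verify workflow. call_model is a stub you
# would wire to your provider's SDK; model names are placeholders from the post.
VERIFY_PROMPT = """Review the response below. For every claim containing a
date, number, name, or quoted source, state: (1) the claim, (2) a source
you can verify it against, (3) your confidence level. If you can't name
a source, say so explicitly.

Response to review:
{draft}"""

def call_model(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this to your provider's SDK")

def draft_then_verify(task: str) -> dict:
    # Pass 1: fast model produces the draft.
    draft = call_model("gpt-5.5", task)
    # Pass 2: a second pass grades the draft's factual claims. The post's
    # observation: models flag uncertainty far better when asked to review
    # output than while generating it.
    audit = call_model("claude-opus-4.7", VERIFY_PROMPT.format(draft=draft))
    return {"draft": draft, "audit": audit}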
Long 7 @Long725792857:
@godofprompt You're doing wonderful work. I'd like to be part of the AI revolution.
God of Prompt @godofprompt:
A computer science PhD started uploading lectures to YouTube as a side project a decade ago, and today half the engineers building the AI revolution learned what they know from him. I opened his playlist at 2am and ended up watching three lectures back to back.

His name is Andrej Karpathy. The series is called Neural Networks: Zero to Hero. Every ML engineer who actually understands how a transformer works. Every researcher at every frontier lab. Every undergrad who skipped the textbook. Every solopreneur fine-tuning a model on a rented GPU at 1am. Most of them learned the math from this one man. They never met him. They opened a free playlist on YouTube.

Here is the story almost nobody tells you. Karpathy did his PhD at Stanford under Fei-Fei Li, the researcher who built ImageNet. While he was a graduate student he was assigned to teach a brand new course called CS231n, the first dedicated deep learning class Stanford had ever offered. The class started with 150 students in 2015. By 2017 it had 750. It became one of the largest courses on campus.

The interesting part is what he did with the lectures. He filmed every one of them and put them on YouTube. Most professors at the time were filming behind paywalls or skipping the cameras entirely. Karpathy gave the entire course away. He said the field was moving too fast for textbooks to keep up, and the only way for new people to enter was if the people already inside kept opening the door. The decision quietly changed how the world learns AI.

For years machine learning was taught the wrong way. Professors started with linear algebra proofs, probability theory, and PAC learning bounds. Students drowned in the abstraction before they ever wrote a line of code. Most never recovered. They walked out believing they were not smart enough for the field, when they had simply been taught in an order that nobody's brain absorbs.

Karpathy inverted the entire curriculum. He started with code. Something you could run. Something you could break. He taught backpropagation by writing it from scratch in a Jupyter notebook, line by line, showing every gradient computation as it happened. He taught the transformer architecture by building one in a few hundred lines of Python on a livestream. He taught how GPT was trained by training a tiny version himself, on his laptop, in front of you. His rule was strict. If you could not write a piece of the system from scratch, you did not actually understand it yet. The math was supposed to come second. The intuition was the foundation. The proofs were just confirmation that the intuition was correct.

Then he co-founded OpenAI. Then he ran Tesla's Autopilot team. Then he came back to OpenAI. Then he left again to start Eureka Labs to teach. The wild part is that through all of it, he kept uploading the lectures. Free. No course platform. No paywall. No upsell. Just YouTube.

The result is something the field had never seen. A single research scientist became the default teacher of his subject for the planet. Universities started telling their own students to just watch his videos. Every junior ML engineer who claims to understand how a transformer works owes part of that understanding to a free playlist they watched in their kitchen at midnight during a side project.

His most recent lecture is over four hours long. It walks you through reproducing GPT-2 from scratch on a single GPU. He explains every line of code. He pauses to apologize when he gets ahead of himself. He never uses the word "obviously." He treats a self-taught engineer in a different time zone the way he would treat a researcher at a frontier lab. With patience. With respect. With the assumption that they belong in the field.

Hundreds of thousands of subscribers. Zero ad reads. No course funnel. The engine of the AI revolution sits on top of math that millions of people learned for free from one quiet researcher uploading videos in his apartment. The course is still on YouTube. Every lecture, every notebook, every commit on GitHub. Free. The most important AI course of the decade is sitting one click away from you. Most people will never open it.
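For anyone who has never seen the from-scratch style the post describes, here is a toy sketch in that spirit: one scalar multiply-add with every gradient written out by hand via the chain rule. It is an illustration of the approach with made-up numbers, not Karpathy's code.

# Toy backprop-by-hand, in the spirit of the lectures described above.
# Forward: y = w * x + b, loss = (y - target)^2. All values are invented.
x, target = 2.0, 10.0
w, b = 3.0, 1.0

y = w * x + b                 # forward pass: y = 7.0
loss = (y - target) ** 2      # loss = 9.0

# Backward pass, chain rule written out explicitly:
dloss_dy = 2 * (y - target)   # d(loss)/dy = -6.0
grad_w = dloss_dy * x         # d(loss)/dw = -12.0
grad_b = dloss_dy * 1.0       # d(loss)/db = -6.0

# One gradient-descent step nudges the loss down:
lr = 0.01
w, b = w - lr * grad_w, b - lr * grad_b
print(w, b)                   # 3.12, 1.06 -> new y = 7.3, closer to 10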
God of Prompt @godofprompt:
The models are getting smarter and more confident at the same pace. Your verification system is now more valuable than your prompting system.
God of Prompt @godofprompt:
The routing cheat sheet (a minimal code sketch follows below):

→ GPT-5.5: first drafts, code generation, agentic multi-step tasks, terminal workflows, anything where reasoning speed matters
→ Claude Opus 4.7: fact-checking, citation-heavy research, legal and financial review, anything where a confident wrong answer is more dangerous than a slow correct one
→ Both: run the self-verification prompt after any output containing dates, numbers, names, or source claims. One follow-up catches most of the damage.
→ Never trust fluency as a signal of accuracy. The most dangerous outputs are the ones that read perfectly and cite nothing.
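If you want the cheat sheet as executable policy, here is a minimal sketch. The model names mirror the list above, and the route function with its keyword triggers is invented for illustration; a real router would classify tasks more carefully.

# Minimal sketch of the routing policy above. Keyword triggers are illustrative.
DRAFT_MODEL = "gpt-5.5"           # speed + reasoning depth
VERIFY_MODEL = "claude-opus-4.7"  # when a wrong answer costs more than a slow one

VERIFY_TRIGGERS = ("fact-check", "citation", "legal", "financial", "review")

def route(task: str) -> str:
    # Send verification-style work to the cautious model,
    # everything else to the fast drafting model.
    if any(t in task.lower() for t in VERIFY_TRIGGERS):
        return VERIFY_MODEL
    return DRAFT_MODEL

print(route("Draft a launch email"))             # gpt-5.5
print(route("Fact-check these revenue figures")) # claude-opus-4.7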