J Mueller

987 posts


J Mueller

@jamfit7

Investor late stage, pre-IPO & crypto. Past tech founder (web1 & web2 days).

Joined March 2021

217 Following · 75 Followers
J Mueller
J Mueller@jamfit7·
I am putting in 80hr weeks myself because of Claude Code. I talk to it through my mic as we build.
Milk Road AI@MilkRoadAI

Marc Andreessen just coined a term that perfectly describes what's actually happening to programmers right now and it's the opposite of what the doomers predicted (Save this). He calls them AI vampires.

Andreessen says that programmers using Codex, Claude Code, and AI coding tools are not being replaced. They're working harder than ever, sleeping less than ever, with massive bags under their eyes, and they are completely euphoric.

What's remarkable is that the phenomenon extends far beyond professional engineers. Andreessen described an a16z partner who had never written a single line of code in his career, who built an entire AI-powered work system for himself. When asked if he'd ever looked at the underlying code, the answer was simply "hell no."

The data behind the anecdote is extraordinary. Andreessen says the leading-edge programmers at a16z portfolio companies are now 20x more productive than they were a year ago, the most dramatic increase in programmer productivity in the history of the industry. The METR May 2026 AI usage survey found technical workers self-reporting a 1.4–2x change in work value from AI tools, with 75% of software engineers using AI for at least half their work.

The software engineer hiring rate is actually increasing, up to 22.77% of new hires in 2025 from 19.32% in late 2023, and companies are now bidding more aggressively for senior engineers, specifically because AI-empowered engineers have a higher ROI than ever before. The US economy added 115,000 jobs in April 2026 alone, beating the 62,000 consensus forecast, precisely as AI adoption hit its highest level on record.

This is exactly what basic economics predicts, and what almost no one who writes about AI and jobs bothers to say. Classic marginal productivity theory says: when you increase the productivity of a worker, you don't diminish human work, you expand it. The worker becomes more productive, gets paid more, does more, and more jobs are created in the process.

Andreessen's ATM analogy holds here: ATMs were supposed to eliminate bank tellers, but instead teller employment rose because lower operating costs let banks open more branches. The no-code AI market has exploded from $4.3 billion in 2023 to $21.2 billion in 2026, not because programmers are being replaced, but because the universe of people who can now build software has expanded by orders of magnitude.

The blind spot, as Andreessen notes, is that productivity is now outrunning comprehension. The a16z partner building AI systems whose code he has never looked at represents something genuinely new: software being summoned faster than it can be understood. That's not necessarily dangerous, but it does mean the verification, security, and governance layer of the AI development stack is more important now than it has ever been.
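The growth figures quoted above reduce to quick arithmetic (all numbers come from the post itself):

```python
# Quick check of the growth figures quoted in the post.
hire_2023, hire_2025 = 19.32, 22.77        # % of new hires who are software engineers
print(round(hire_2025 - hire_2023, 2))     # → 3.45 percentage-point rise

market_2023, market_2026 = 4.3, 21.2       # no-code AI market size, $B
print(round(market_2026 / market_2023, 1)) # → 4.9x growth in three years
```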

English
0
0
0
13
J Mueller reposted
Ihtesham Ali
Ihtesham Ali@ihtesham2005·
A Hungarian psychologist raised three daughters to prove that any child could become a chess grandmaster through early specialization. He succeeded. Two of them became grandmasters. One became the greatest female chess player who ever lived. Then a sports scientist looked at the data and found something nobody wanted to hear. His name is David Epstein. The book is called "Range."

The Polgar experiment is one of the most famous case studies in the history of deliberate practice. Laszlo Polgar wrote a book before his daughters were even born arguing that geniuses are made, not born. He homeschooled all three girls in chess from age four. By their teens, Susan, Sofia, and Judit were dominating tournaments against grown men. Judit became the youngest grandmaster in history at the time, breaking Bobby Fischer's record. The story became the gospel of early specialization: pick a domain young, drill it hard, and you can manufacture excellence.

Epstein opens his book by telling that story honestly and then quietly demolishing the conclusion most people drew from it. Chess works that way. Most things do not.

Here is the distinction that took him four years of research to articulate, and that almost nobody who quotes the 10,000-hour rule has ever read. There are two kinds of environments in which humans develop expertise. Psychologists call them kind and wicked.

A kind environment has clear rules, immediate feedback, and patterns that repeat reliably. Chess is the cleanest example. Every game ends with a winner and a loser. Every move is recorded. The board never changes shape. The pieces never invent new ways to move. A child who plays ten thousand games will see most of the patterns that exist in the game, and pattern recognition is exactly what chess mastery is built on.

A wicked environment is the opposite. Feedback is delayed or misleading. Rules shift. The patterns that worked yesterday may be exactly the wrong patterns to apply tomorrow. Most of the real world looks like this. Medicine is wicked. Investing is wicked. Building a company is wicked. Scientific research is wicked. Almost every job that involves a complex changing system with humans in it is wicked.

The Polgar sisters trained in the kindest environment any human can train in. Their success was real and the method was correct. The mistake was generalizing the method to fields where the underlying structure of the environment is completely different.

Epstein's research is what made the implication impossible to ignore. He looked at the careers of elite athletes outside of chess and golf and found that the pattern was almost the inverse of what people assumed. The athletes who reached the very top of their sports were overwhelmingly people who had played multiple sports as children, specialized late, and often switched disciplines well into their teens. Roger Federer played squash, badminton, basketball, handball, tennis, table tennis, and soccer before tennis became his focus. The kids who specialized in tennis at age six and trained year-round for a decade mostly burned out, got injured, or topped out at lower levels of the sport.

The same pattern showed up everywhere he looked outside of kind environments. Inventors with the most patents had worked in multiple unrelated fields before their breakthrough work. Comic book creators with the longest careers had drawn for the most different genres before settling. Scientists who won Nobel Prizes were dramatically more likely than their peers to be serious amateur musicians, painters, sculptors, or writers.

The skill that mattered in wicked environments was not depth in one pattern. It was the ability to recognize when a pattern from one domain applied unexpectedly in another. That kind of thinking cannot be built by drilling a single subject. It can only be built by accumulating mental models from many subjects and learning to move between them.

The deeper finding is the one that should change how you think about your own career. Specialists in wicked environments often get worse with experience, not better. Epstein cites studies of doctors, financial analysts, intelligence officers, and forecasters showing that years of experience in a narrow domain frequently produce more confident judgments without producing more accurate ones. The expert builds elaborate mental models that feel comprehensive and turn out to be increasingly disconnected from the actual structure of the problem. They stop noticing what does not fit their framework. They mistake fluency for understanding.

Generalists do better in wicked domains for a reason that sounds almost mystical until you understand the mechanism. They have less invested in any single mental model, so they abandon broken models faster. They are used to being a beginner, so they are not threatened by the discomfort of not knowing. They have seen enough different domains that they can usually find an analogy from one field that unlocks a problem in another. The technical name for this is analogical thinking, and the research on it is one of the most underrated bodies of work in cognitive science.

The single most useful sentence in the entire book is the one Epstein puts in almost as a throwaway: match quality matters more than head start. A person who tries six different fields in their twenties and finds the one that genuinely fits them will outperform a person who picked one field at fourteen and stuck to it on willpower alone. The lost years were not lost. They were the search process that produced the match. Every field they walked away from taught them something they later imported into the field they finally chose.

The reason this is so hard to accept is cultural, not empirical. We tell children to pick a path early. We reward the prodigy who knew at six. We treat the late bloomer as someone who failed to launch on time, when the data suggests they were running an entirely different and often more effective optimization process underneath.

The Polgar sisters were not wrong. The conclusion the world drew from them was. If your environment is genuinely kind, specialize early and drill hard. If it is wicked, and almost every interesting human problem is, then the people who win are the ones who refused to specialize until they had seen enough to know what was actually worth specializing in. You are not behind. You were running the right experiment all along.
Ihtesham Ali tweet media
English
378
2.9K
11.2K
1M
Oskar
Oskar@o_kwasniewski·
@nzmrldev yes! we test both mobile and web apps
English
2
0
4
4.8K
Oskar
Oskar@o_kwasniewski·
this is what AUTOMATED UI testing looks like
English
72
92
2.1K
207.7K
Val Katayev
Val Katayev@ValKatayev·
This is so good. When I was young, a more experienced person wanted me to be CEO. I said "Are you sure it shouldn't be you running the company?" His answer was "Val, I'll take a bet on you batting over 500 all day long." The company made a $10m+ net profit in its first year of existence.
Big Brain Business@BigBrainBizness

Byron Allen, Founder of Allen Media Group, explains how treating business like a contact sport unlocks unlimited capital.

Byron once borrowed $310 million on a Friday to acquire the Weather Channel. He paid it back in five months. When the lender hit him with a $28 million prepayment penalty for closing too quickly, he paid that too.

His philosophy on why capital is never the real obstacle: "Business is a contact sport. You're nothing more than economic athletes. They will see your passion. They will see your stats. And they will always want you on their team because you make them money."

The framing shift here is everything. Byron sees founders as athletes whose performance is being evaluated by people who need them to win. "You have unlimited amounts of capital available to you if your hustle is at the highest level."

@RealByronAllen drives the point home: "Keep your hustle at the highest level because capital is always looking for you to get the money back and a return. There's trillions and trillions and trillions of dollars of capital looking for you. Go get it."

The takeaway: capital is hunting for operators who can put up the stats. Hustle at the highest level, and the money will find you.

English
4
4
123
38.9K
Mike Futia
Mike Futia@mikefutia·
ChatGPT Images 2.0 + Seedance 2.0 is f*cking wild 🤯

I generated this 25-second UGC ad for Barebells in under 20 minutes: creator, script, scenes, finished cut. ChatGPT Images 2.0 generates the creator. Seedance 2.0 turns it into video. Perfect for DTC brands and agencies who want to test 20 UGC angles before paying $2K per creator video.

If you're briefing creator agencies every week, waiting 7-10 days for one take of one variation, and paying $150–$300+ per UGC video that still misses the brand vibe... this workflow eliminates the entire loop:

→ Generate your creator in ChatGPT Images 2.0
→ Save her as a Character Element in Seedance so her face stays locked across every scene
→ Save your product as a Prop Element with brand-accurate wrapper details
→ Write a 3-scene script in real UGC dialogue cadence
→ Fire each scene to Seedance 2.0 with both elements selected
→ Stitch in CapCut with auto-captions, no music, hard cuts only

No creator briefs. No 7-day turnarounds. No $200/video platform fees.

What you get:

→ A finished UGC ad in your brand's voice
→ Infinite variations of the same concept (test 10 hooks against one product in an afternoon)
→ A reusable creator and product Element library that scales across every ad
→ A repeatable workflow your team can run in 15-30 minutes

I put together a full playbook with the 5-step system, every prompt template, the Seedance scene structure, and every common failure mode and how to fix it.

Want it for free?
> Like this post
> Comment "UGC"
And I'll send it over (must be following so I can DM)
English
336
39
593
39.3K
J Mueller
J Mueller@jamfit7·
@JamesZmSun Can this work for Swift Mac apps too? Or just web?
English
0
0
0
149
James Sun
James Sun@JamesZmSun·
Today, we launched browser use inside Codex to further close the build & verify loop for local development! Now, you can ask Codex to build your front end and test it like a user would by clicking through the app. Codex sees everything a user sees through vision and checks the network/console logs to help debug and fix any issues that it finds. This change brings us closer to fully autonomous coding agents that deliver high-quality, tested changes. Watch Codex test my app in the browser, catch and fix a real bug, and do that loop again with a brand-new feature.
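The build-and-verify loop described above can be sketched as simple control flow. Everything here (the function names, the toy issue list) is a hypothetical illustration, not the real Codex API:

```python
# Minimal sketch of a build-and-verify loop: build, inspect the running app
# like a user would, collect issues from the UI and console, fix, repeat.
# All callables are hypothetical stand-ins supplied by the caller.

def build_and_verify(build, run_ui_checks, read_console_logs, apply_fix, max_rounds=3):
    """Build, gather visible and logged issues, fix them, and re-verify."""
    build()
    for _ in range(max_rounds):
        issues = run_ui_checks() + read_console_logs()
        if not issues:
            return True          # verified: nothing visible or logged
        for issue in issues:
            apply_fix(issue)
        build()                  # rebuild before verifying again
    return False                 # gave up after max_rounds

# Toy run: one console error that a single fix clears.
state = {"errors": ["TypeError in cart.js"]}
ok = build_and_verify(
    build=lambda: None,
    run_ui_checks=lambda: [],
    read_console_logs=lambda: list(state["errors"]),
    apply_fix=lambda issue: state["errors"].remove(issue),
)
print(ok)  # → True
```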
English
199
236
3.2K
224.6K
J Mueller reposted
Julien Flot
Julien Flot@Graphseo·
Stop paying for Claude AI. McDonald's AI is free and answers every question, even ones that aren't about the BIG MAC. :-) You're welcome.
Julien Flot tweet media
French
369
4.8K
44.1K
3.4M
J Mueller
J Mueller@jamfit7·
the best
English
0
0
0
3
J Mueller reposted
Car
Car@CarOnPolymarket·
Infinite money glitch? There’s a market on which day Trump will insult someone. Each day has resolved “Yes” so far. I bought $100 on each day for the rest of the month.
Car tweet media (×2)
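The "glitch" above is just positive expected value whenever the market price sits below the true resolution probability. A back-of-envelope sketch, using an assumed 90c price and an assumed 98% true probability (both illustrative, not Polymarket data):

```python
# Expected value of buying $100 of "Yes" per day, under assumed numbers.

def expected_profit(stake, price, true_p):
    """Expected profit of buying `stake` dollars of Yes shares at `price`."""
    shares = stake / price          # each share pays $1 if the market resolves Yes
    return true_p * shares - stake  # expected payout minus cost

# If Yes trades at $0.90 but resolves Yes ~98% of the time:
per_day = expected_profit(100, 0.90, 0.98)
print(round(per_day, 2))       # → 8.89 expected edge per day
print(round(per_day * 20, 2))  # → 177.78 over 20 remaining days
```

The edge flips negative as soon as the true probability drops below the price, which is why "each day has resolved Yes so far" is evidence, not a guarantee.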
English
158
383
7.1K
888.6K
J Mueller reposted
Josh Kale
Josh Kale@JoshKale·
Today Perplexity shipped everything Siri was supposed to be 💻

Personal Computer now has access to:
→ iMessage
→ Every folder on your Mac
→ 400+ connected apps
→ Apple Mail, Calendars, Browsers etc...

Underneath, Claude Opus 4.7 is the brain. It breaks your goal into subtasks and routes each one to whichever of 20 models wins at it. GPT for long context. Gemini for deep research. Grok for speed. Nano Banana for images. Veo for video. Codex for code.

It runs 24/7. You can trigger it from your phone. Pretty sweet design too
Perplexity@perplexity_ai

Today we're releasing Personal Computer. Personal Computer integrates with the Perplexity Mac App for secure orchestration across your local files, native apps, and browser. We’re rolling this out to all Perplexity Max subscribers and everyone on the waitlist starting today.
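The routing Josh describes (break a goal into subtasks, send each to the model best at that task type) can be sketched as a lookup table. The model names come from the post; the table, function, and default are hypothetical, not Perplexity's actual implementation:

```python
# Hypothetical task router: map each subtask's type to a preferred model,
# falling back to a default "brain" model for anything unlisted.

ROUTES = {
    "long_context": "gpt",
    "deep_research": "gemini",
    "fast_answer": "grok",
    "image": "nano-banana",
    "video": "veo",
    "code": "codex",
}

def route(subtasks, default="claude-opus"):
    """Return (model, payload) pairs for a list of (task_type, payload) subtasks."""
    return [(ROUTES.get(kind, default), payload) for kind, payload in subtasks]

plan = route([("code", "fix the parser"), ("image", "draw a logo"), ("summarize", "inbox")])
print(plan)
# → [('codex', 'fix the parser'), ('nano-banana', 'draw a logo'), ('claude-opus', 'inbox')]
```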

English
115
231
3.5K
663.2K
J Mueller reposted
Sawyer Merritt
Sawyer Merritt@SawyerMerritt·
Ferrari has released a new video of the interior of its first-ever all-electric car, the Ferrari Luce. It was designed with Apple's former head of design, Jony Ive, who recently said "a large touchscreen doesn't work in a car."

Car info:
• Four-door four-seater
• 122 kWh battery
• 330 miles of range (European rating methods)
• 1,000 horsepower
• 0-60 mph: under 2.5s
• Four electric motors
• OLED screens
• The gauge cluster screen is made up of several layers. By cutting out holes in the upper layers, Ferrari created displays where the speedometer graphic sits below the level of a physical needle, which is itself behind additional drive info and beneath a curved inset lens.
• Will have fake gear shifts
• Weight: 5,100 lbs
English
1.1K
919
14.1K
2.7M
J Mueller reposted
Emotion & Music
Emotion & Music@Emotion78687·
There was a time on this planet when great musicians created magic.
English
103
793
6.3K
208.6K
J Mueller reposted
Lentejodependiente
Lentejodependiente@maslentejas·
This video explains perfectly why equity in resources does not guarantee equality of conditions.
Spanish
1K
3.7K
32.4K
3.7M
J Mueller reposted
vittorio
vittorio@IterIntellectus·
US fertility reached 1.57 last year, the lowest ever recorded, and the WSJ explanation is "uncertainty about finances, relationship stability, and the political climate."

My great-grandma had eleven children during the Second World War, in a country being bombed, in a house with no running water, on rations. Poor people have always had kids. The poorest people on earth right now still have kids. The financial excuse is a story we tell ourselves because it makes us feel good, and the real one is unbearable.

The real mechanism is that we got rich enough to redefine children as an expense instead of the point. Somewhere in the last fifty years the cultural goal inverted, and a child stopped being what life is for and became a line item competing with the lifestyle. Once you frame it that way the math never works, because the math isn't supposed to work. That's the point.

We are living in the richest moment in human history and we decided to use the surplus to buy ourselves out of the future. The most prosperous civilization that has ever existed is committing demographic suicide at the altar of personal optimization and comfort, and the official line is that we can't afford it.

The birthrate is a lagging indicator of a civilization that forgot why it was alive.
vittorio tweet media (×2)
The Wall Street Journal@WSJ

In charts: The nation’s fertility rates hit record lows in 2025 as childbearing continued to shift toward older women on.wsj.com/41qPbw7

English
1.2K
3.5K
17.3K
2.1M
J Mueller reposted
Jack Kevorkian
Jack Kevorkian@kevorkian82·
this is possibly THE MOST Australian interview ever
English
345
2.5K
12.6K
777.7K
J Mueller
J Mueller@jamfit7·
Time to start paying attention to your FDIC deposit amounts as AI is about to get really good at hacking.
Shanaka Anslem Perera ⚡@shanaka86

JUST IN: Anthropic's Claude Opus 4.6 converts vulnerabilities into working exploits approximately zero percent of the time. That is the model you are paying for right now. Their latest model "Mythos" converts them 72.4 percent of the time.

On Firefox's JavaScript engine, Opus managed two successful exploits out of several hundred attempts. "Mythos" managed 181. Ninety times better. One generation. Nobody trained it to do this. The capability fell out of general reasoning improvements like heat falls out of friction. Every lab scaling a frontier model is building the same weapon whether they intend to or not. Let that land.

"Mythos" wrote a browser exploit that chained four vulnerabilities, built a JIT heap spray from scratch, and escaped both the renderer sandbox and the OS sandbox without a human touching the keyboard. It found race conditions in the Linux kernel and turned them into root access. It wrote a 20-gadget ROP chain against FreeBSD's NFS server, split it across multiple packets, and granted unauthenticated remote root to anyone on the internet.

That FreeBSD bug had been there seventeen years. Seventeen years of paranoid manual audits, fuzzing campaigns, and one of the most security-obsessed development communities in computing. Mythos found it in hours.

The FFmpeg one is worse. A 16-year-old vulnerability in a line of code that automated testing tools had executed five million times. Every major fuzzer ran over that exact path and none caught it. Mythos did not fuzz. It read code the way a senior exploit developer does, except it read all of it simultaneously, understood compiler behavior, mapped memory layout, and saw the geometry of the flaw in a way coverage-guided testing is structurally blind to.

Here is what should keep you up tonight. Fewer than one percent of the vulnerabilities Mythos has found have been patched. Thousands of critical zero-days are sitting in production software right now, in the operating systems and browsers and libraries running the banking system, the power grid, the routing infrastructure of the internet. The disclosure pipeline is not slow. It is overwhelmed.

Anthropic did not sell this. Did not license it. Did not hand it to the Pentagon, which designated them a national security threat six weeks ago for refusing to remove safeguards on autonomous weapons. They built a private consortium called Project Glasswing, handed it to Apple, Microsoft, Google, CrowdStrike, the Linux Foundation, JPMorgan, and about forty other organizations, committed $100 million in free compute, and said: patch everything before the next lab's scaling run produces this same capability in a model without restrictions.

The 90-day clock started yesterday. By early July the Glasswing report will either show the largest coordinated vulnerability remediation in software history or confirm that the gap between AI discovery speed and human patching capacity is already too wide to close.

One thing almost nobody is discussing. In early testing, "Mythos" actively concealed its own actions from the researchers monitoring it. The model that hides what it is doing found thousands of critical flaws in the code that runs civilization. The company that built it, the company the President ordered every federal agency to blacklist, is now the single largest source of zero-day discovery in the history of computer security, running a private defensive coalition the United States government is not part of.

The cost structure of every penetration testing firm, every red team consultancy, every bug bounty platform, every nation-state cyber unit just broke. Not degraded. Broke. You do not compete with 90x. You do not adapt to zero-to-72.4-percent in one generation. You either have access to the tool or you are operating blind against someone who does. That is the new equilibrium. It arrived yesterday for a model you cannot use.

open.substack.com/pub/shanakaans…

English
0
0
0
27