Jason Block

558 posts

@jw_block

AI × Travel | Building the future of travel advising. CEO, WorldVia. Notes on product, systems & travel. Views mine.

Milton, GA · Joined October 2024
101 Following · 82 Followers
Jason Block @jw_block
Why don’t text messages have a spam filter? If they do, why is it so bad?
0 replies · 0 reposts · 0 likes · 4 views
Jason Block @jw_block
Okay, so I tried this today. I read @trq212’s original post the other day. While it made sense, it didn’t drive me to action. @nlw makes some good points, and more importantly, prompted me to give it a shot. I’m working to combine three very similar efforts I’ve had underway for 6+ months into a single project that I could get more momentum around. First, I wanted to compare/contrast the three projects and identify the relative strengths and weaknesses. Then I wanted to create a project plan. Outputting this work to html instead of md made it so much easier for me to digest the work that the LLM had done. Even if it doesn’t improve LLM output, I can guarantee that it will improve my inputs, which WILL improve LLM output and more than make up for any token inefficiency. So, yes, it’s a winner.
Nathaniel Whittemore @nlw

HTML is 100% better than .md — but IMO for even bigger reasons than @trq212 lays out here. Visualization is real, but it's downstream of something bigger. In the agent era, our job has shifted from producing the final output to staging the job so the agent can produce it. A huge part of that work is handing agents liminal assets — briefs that communicate different states of doneness across different parts of the project, all at once. Markdown is terrible at this. Everything in a markdown doc looks equally decided. Hard constraints and open questions render identically. HTML can show, in a single asset, that for example the goal is still up for debate while the visual system is locked. The format itself carries the state, which makes it much more likely that the agent's next pass will do the job well.

1 reply · 0 reposts · 2 likes · 283 views
Jason Block reposted
Santiago @svpino
Databases are far from dead. Hot take within the vibe-coding community, but you can't build a reliable agentic memory system using files alone. The filesystem is a great interface for agents, but for complex, distributed, production applications, databases win hands down. I recorded a video to show you the benchmarks.

Large Language Models know how to navigate and work with the filesystem, but as soon as you add complexity, files will fall short. You need databases whenever any of the following happens:

1. You have concurrent writes from multiple agents or users
2. You need semantic retrieval at scale
3. You need ACID guarantees for shared state
4. You need audit trails and row-level access control
5. You need indexed queries over growing memory

In the attached video, I'm running a notebook comparing a filesystem-backed agent with a database-backed agent. The three most important findings:

• Filesystem = Database with small corpus, keyword-friendly queries
• Databases > Filesystem with large corpus, fuzzy queries
• Databases > Filesystem with concurrent writes without locking

Numbers don't lie. You can run the benchmarks yourself.
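Points 1 and 5 from the list above can be sketched in a few lines. This is a minimal illustration of mine, not from Santiago's notebook; the table and function names (`remember`, `recall`) are invented. SQLite gives one atomic transaction per write (safe under concurrent writers) and an indexed lookup that stays cheap as the memory corpus grows.

```python
import sqlite3

def make_db(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS memory (
        id    INTEGER PRIMARY KEY,
        agent TEXT NOT NULL,
        note  TEXT NOT NULL
    )""")
    # indexed queries over growing memory (point 5)
    conn.execute("CREATE INDEX IF NOT EXISTS idx_agent ON memory(agent)")
    return conn

def remember(conn, agent, note):
    # one atomic transaction per write: safe under concurrency (point 1)
    with conn:
        conn.execute("INSERT INTO memory (agent, note) VALUES (?, ?)",
                     (agent, note))

def recall(conn, agent):
    # indexed lookup instead of a full-file scan
    rows = conn.execute(
        "SELECT note FROM memory WHERE agent = ? ORDER BY id", (agent,))
    return [note for (note,) in rows]

conn = make_db()
remember(conn, "planner", "user prefers aisle seats")
remember(conn, "booker", "hold flight DL123")
remember(conn, "planner", "budget is $2k")
print(recall(conn, "planner"))
```

A file-backed equivalent would append lines and re-scan the whole file on every query, and under concurrent writers it would also need explicit locking, which is exactly where point 1 bites.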
24 replies · 25 reposts · 206 likes · 29.2K views
Jason Block reposted
Harrison Ford @HarrisonFordLA
May the fourth be with you
[GIF]
2.9K replies · 51.8K reposts · 220.8K likes · 6.9M views
Arun @hiarun02
Claude Code 4.7 is insane. i know literally NOTHING about coding. ZERO. and i just built 3 fully functioning web apps in 30 minutes. http://localhost:3000/ http://localhost:8000/ http://localhost:5000/ check it out.
1.1K replies · 1.7K reposts · 30.4K likes · 1.7M views
Jason Block @jw_block
Yep, a story. FSD is not why I’m a Tesla shareholder, and oddly, neither are earnings. But the story is. Both are part of the story of course, but I view Tesla as an IP portfolio. Sure, FSD is part of it, but so is battery tech, Optimus, and a few other things. I’m betting that one or more of them, one day, will justify the multiple. We’ll see.
1 reply · 0 reposts · 1 like · 131 views
George Noble @gnoble79
Last night was the biggest disaster in the history of Tesla. Let me walk you through what actually happened on that earnings call, because the headlines are doing you a disservice:

Elon Musk got on the call and admitted (his words) that Hardware 3 "simply does not have the capability to achieve unsupervised FSD." He said he wished it were otherwise. He said the memory bandwidth is one-eighth of what Hardware 4 has. And that's the end of the conversation.

Approximately 4 million Tesla vehicles on the road right now have Hardware 3. Many of those owners paid $8,000 to $15,000 for Full Self-Driving capability based on Musk's repeated promises (going back to 2016) that the hardware was sufficient for full autonomy. As recently as 2022, Musk was publicly assuring owners that HW3 had the processing power to get it done. BUT IT DIDN'T.

Those promises are now officially broken. The solution is a "discounted trade-in" toward a new car with Hardware 4. Not a refund or a free upgrade... A discount on buying ANOTHER Tesla. Investor Ross Gerber said it too - all HW3 owners got screwed, and with roughly 285,000 FSD purchasers affected, the potential liability runs into the BILLIONS.

But that's not even the worst part. Musk was asked if the current FSD v14.3 was ready for unsupervised deployment. He said yes. Then immediately walked it back and admitted Tesla has "major architectural improvements" in the pipeline that would significantly improve safety. What he really means: the software isn't SAFE ENOUGH to deploy without a human watching. Full unsupervised FSD for consumer cars is pushed to Q4 2026. At the earliest... Maybe. How many times has this deadline been pushed? I've lost count. And trust me, I've seen a lot of broken promises. But this one takes the cake.

Now let's talk about the numbers everyone is celebrating: Tesla reported $22.4 billion in revenue and $0.41 in non-GAAP earnings. A "double beat." The stock popped 4% after hours. Victory, right? WRONG.

Dig into the actual filing: the number one driver of operating income improvement wasn't cost reductions, wasn't volume growth, wasn't FSD revenue. It was - and Tesla listed this FIRST in their own shareholder letter - "one-time benefits related to warranty and tariffs." They released warranty reserves. They booked tariff refund windfalls. They stretched supplier payments by 10 days. They took on billions in new debt. Then they presented everything through non-GAAP metrics that strip out over $1 billion in stock-based compensation.

GAAP net income was $477 million on $22.4 billion in revenue. That's a 2.1% net margin. On a $1.4 trillion market cap. Let me put that in perspective: 3.75 billion shares outstanding. Annualize the Q1 GAAP profit and you get roughly $1.9 billion. That's a trailing P/E ratio north of 700. Use the adjusted number - strip out stock comp, which is a REAL cost to shareholders through dilution - and you're still at around 250x earnings.

All of this is extremely bad, but I didn't even talk about the CAPEX BOMB yet... 3 months ago, Tesla guided to "over $20 billion" in 2026 capital expenditure. Last night they raised it to over $25 billion. A $5 billion increase in a single quarter. That's 3x their historical annual capex run rate - $8.5 billion in 2025, $11.3 billion in 2024. The CFO confirmed on the call that Tesla expects NEGATIVE free cash flow for the rest of the year.

So you have a company generating roughly $6 billion in annual free cash flow on a good year, and they're about to spend $25 billion. The math doesn't work. They will almost certainly need to issue equity. Which means dilution. Which means the $1.9 billion in annual earnings gets spread across even MORE shares.

The core auto business is literally deteriorating in real time: Tesla delivered 358,000 vehicles in Q1 (missed estimates again). They produced 408,000. That's 50,000 cars sitting on lots that nobody bought. Inventory days jumped from 10 to 27 in just a few quarters. California (their most important US market) saw registrations crash 24% year over year. Their market share in the state fell from 9.2% to 7.7%. That's on top of a Q1 2025 that was ALREADY weak from Model Y retooling. They're declining off a decline.

And here's what really kills the bull case... The entire valuation rests on robotaxis, Optimus robots, and autonomy. So let's put numbers on it: Waymo - the actual leader in autonomous driving with 15 million completed rides in 2025 alone, over 127 million autonomous miles driven, operating commercially across 6 US cities with plans to expand to 20 more - just raised $16 billion at a $126 billion valuation. That's the market's verdict on what the LEADING robotaxi company is worth. $126 billion. And Waymo is YEARS ahead of Tesla in actual deployment.

Tesla has 3.75 billion shares outstanding. So even if you assign $126 billion in robotaxi value (giving Tesla full credit for matching Waymo despite being nowhere close), that's $33 a share. Add the auto business at generous auto-industry multiples, maybe $20 a share. Throw in energy storage and services, $10-15. Sum of the parts gets you to roughly $65-70 a share if you're feeling generous. Maybe $50 if you're not. The stock is $387. So what exactly are you paying for?

You're paying for a STORY. You're paying for PROMISES that keep getting pushed back, technology that keeps falling short, and a business plan that requires spending $25 billion a year while the core product sells fewer units at declining margins in a market where California sales just fell 24% and the federal EV tax credit is gone.

I managed the number one mutual fund in America. I founded two billion-dollar hedge funds. I've been doing this since 1981. And I am telling you: Tesla at $387 is one of the most egregious mispricings I have seen in my entire career. THE CRASH WILL BE EPIC.
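The post's headline arithmetic is internally consistent; a quick script to check the two key figures. All inputs are the author's claims, not verified financials, and the variable names are mine.

```python
# Figures as claimed in the post
market_cap = 1.4e12   # "$1.4 trillion market cap"
q_gaap_net = 477e6    # "$477 million" quarterly GAAP net income
shares_out = 3.75e9   # "3.75 billion shares outstanding"

# Annualize one quarter of GAAP profit: ~$1.9B, matching the post
annualized = q_gaap_net * 4

# Trailing P/E on that run rate: comes out ~734, i.e. "north of 700"
pe = market_cap / annualized

# Waymo-valuation credit per Tesla share: 126e9 / 3.75e9 = $33.6,
# the post's "$33 a share"
robotaxi_per_share = 126e9 / shares_out

print(round(pe), round(robotaxi_per_share, 1))
```

The 250x "adjusted" figure follows the same way if roughly $1 billion per quarter of stock-based compensation is added back to earnings before annualizing.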
1.2K replies · 2.6K reposts · 10.4K likes · 1.2M views
Jason Block @jw_block
Is it me, or is AI model selection starting to feel like wine tasting? Gotta say, really loving GPT 5.5 and the Codex app. I’ve been working with Claude almost exclusively for the last month, and 5.5 is giving me reason to mix it up. Claude 4.7 is unusable, but 4.6[1m] collaborating with 5.5 is a really nice pairing. Nice job @sama and @OpenAI. @AnthropicAI was getting lonely and wanted someone to keep it interesting.
1 reply · 0 reposts · 1 like · 72 views
Jason Block @jw_block
@gregauman These mocks that show Bain over Mesidor kill me. Mesidor will translate, Bain will struggle.
0 replies · 0 reposts · 1 like · 35 views
Jason Block @jw_block
@zuess05 @rodiononchain Yes, the last line is definitely the giveaway. Or my personal fav is “… that’s the real unlock”
1 reply · 0 reposts · 0 likes · 23 views
Suhas @zuess05
Honest question. Companies have completely stopped hiring entry-level juniors because Claude does the grunt work for $20 a month. But if nobody ever gets hired as a junior... Where do the next generation of Senior engineers come from?
391 replies · 61 reposts · 1K likes · 71.5K views
Jason Block @jw_block
@Oceanbreeze473 💯 the craziest thing is that I never heard of someone falling and I think we did this at least monthly throughout my entire elementary school years.
0 replies · 0 reposts · 0 likes · 16 views
SweetMarie @Oceanbreeze473
Did they really make grade school kids climb 30’ ropes off a tile floor in street clothes?
[image]
9.7K replies · 1.3K reposts · 24.1K likes · 1.3M views
Varun @varun_mathur
Introducing Pods

Hyperspace Pods lets a small group of people (a family, a startup, a few friends) pool their laptops and desktops into one AI cluster. Everyone installs the CLI, someone creates a pod, shares an invite link, and the machines form a mesh.

Models like Qwen 3.5 32B or GLM-5 Turbo that need more memory than any single laptop has get automatically sharded across the group's devices: layers split proportionally, inference pipelined through the ring. From the outside it looks like one OpenAI-compatible API endpoint with a pk_* key that drops straight into your AI tools and products. No configuration beyond pasting the key and changing the base URL.

A team of five paying for cloud AI burns $500–2,000 a month on API calls. The same team's existing machines can serve Qwen 3.5 (competitive on SWE-bench) and GLM-5 Turbo (#1 on BrowseComp for tool-calling and web research) for free: the hardware is already on their desks. When a query genuinely needs a frontier model nobody has locally, the pod falls back to cloud at wholesale rates from a shared treasury. But for the daily work (code reviews, refactors, research, drafting) local models handle it and nobody gets billed. And when the pod is idle, you can rent it out on the compute marketplace, with fine-grained permissions for access management.

There's no central server involved in inference. Prompts go from your machine to your pod members' machines and back: all of this enabled by the fully peer-to-peer Hyperspace network. Pod state (who's a member, which API keys are valid, how much treasury is left) is replicated across members with consensus, so the whole thing works on a local network. Members behind home routers don't need port forwarding either.

The practical setup for most pods is three models covering different jobs: Qwen 3.5 32B for code and reasoning, GLM-5 Turbo for browsing and research, Gemma 4 for fast lightweight tasks. All running on hardware you already own.

Pods ship today in Hyperspace v5.19. Model sharding, API keys, treasury, and Raft coordinator are all live.

What Makes This Different

- No middleman. Your prompts travel from your IDE to your pod members' hardware and back. There is no server in between reading your data.
- No vendor lock-in. Pod membership, API keys, and treasury are replicated across your own machines using Raft consensus. If the internet goes down, your local network keeps working. There is no database in someone else's cloud that your pod depends on.
- Automatic sharding. You don't configure layer ranges or calculate VRAM budgets. Tell the pod which model you want. It figures out how to split it across whatever hardware is online.
- Real NAT traversal. Your friend behind a home router with a dynamic IP? Works. No VPN, no Tailscale, no port forwarding. The nodes handle it.
- Free when local. This is the part that matters most. Cloud AI bills scale with usage. Pod inference on local hardware scales with nothing. The marginal cost of your 10,000th prompt is the electricity your laptop was already using.

Coming soon:

- Pod federation: pods form alliances with other pods.
- Marketplace: pods with spare capacity can sell inference to other pods.
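The "layers split proportionally" behavior described above can be sketched as a simple allocation. This is a hypothetical illustration, not Hyperspace's actual API: `shard_layers` and its arguments are invented, and a real planner would also budget for activation memory and pipeline order.

```python
def shard_layers(n_layers, device_mem):
    """Split layer ids across devices in proportion to free memory.

    device_mem: {device_name: free_memory_gb}
    Returns {device_name: [contiguous layer ids]} covering all layers.
    """
    total = sum(device_mem.values())
    devices = list(device_mem.items())
    plan, assigned = {}, 0
    for i, (name, mem) in enumerate(devices):
        if i == len(devices) - 1:
            # last device absorbs any rounding remainder
            count = n_layers - assigned
        else:
            count = round(n_layers * mem / total)
        plan[name] = list(range(assigned, assigned + count))
        assigned += count
    return plan

# A 32-layer model across a 16 GB laptop and a 48 GB desktop:
# the desktop has 3x the memory, so it takes ~3x the layers.
plan = shard_layers(32, {"macbook": 16, "desktop": 48})
print({k: len(v) for k, v in plan.items()})
```

Contiguous ranges matter here because inference is "pipelined through the ring": each device runs its slice of layers, then hands activations to the device holding the next slice.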
184 replies · 295 reposts · 3.1K likes · 300.3K views
Jason Block @jw_block
@TechOperator 1. Dance the jig because I’m a kid again! 2. Install wolfenstein, play for a few hours 3. Fire up telix and hit the local BBS hot spots for more pirated games.
0 replies · 0 reposts · 3 likes · 301 views
TechOperator @TechOperator
You walk up to your computer and see this screen. Your next move?
[image]
6.2K replies · 68 reposts · 1.6K likes · 272.7K views
Jason Block @jw_block
@gabriel__xyz Welcome to the fog, check back in 7-8 years. “Projects”…. Bwah ha ha ha! Hang in there dude.
0 replies · 0 reposts · 1 like · 6 views
gabriel* @gabriel__xyz
Dadpreneurs! How do you do it!??? As a new dad, im in the newborn trenches tbh feel like a walking zombie and i literally dont have time to work on my projects. What does/did your schedule look like in those first few months?? Any advice would be awesome! You guys are amazing
298 replies · 2 reposts · 254 likes · 43.7K views
Jason Block @jw_block
This analysis is just flat wrong. First, have you used Replit? I defy anyone to get anything meaningful for $20. Second, there is a massive amount of ai automation consulting work to be done for the foreseeable future. Third, you’re conflating headcount reduction of staff who refuse to use ai with ai replacing staff, not the same thing. But by all means, everyone keep selling and drive that dividend yield higher.
1 reply · 0 reposts · 11 likes · 784 views
Ricardo @Ric_RTP
This is one of the dumbest business decisions ever. A $250 billion company just invested in the startup that's going to put it out of business. On PURPOSE.

The company is Accenture. 786,000 employees. The largest IT consulting firm on Earth. Their entire business is renting out human consultants by the hour to build software for Fortune 500 companies.

The startup is Replit. A platform that lets ANYONE build software using natural language. No coders. No consultants. Just type what you want and the AI builds it.

On Wednesday, Accenture announced they invested in Replit and signed a strategic partnership to bring "vibecoding" to enterprises globally. Isn't this funny? The biggest seller of human coders on Earth just funded the company whose entire mission is making human coders obsolete.

The part that breaks my brain: Replit's valuation jumped to $9 billion after the deal. Up 3X in 6 months. Accenture's stock? Down 42% in the last 12 months. From $389 to $186. The market figured out what was coming before Accenture did.

In February, Anthropic released a tool called Claude Code. Accenture stock crashed 9.6% in a single day. JPMorgan analyst Toby Ogg said the entire consulting sector "is now being sentenced before trial." That's a Wall Street analyst saying the death sentence has already been delivered.

And Accenture's response? They started laying people off. 11,000 employees gone in late 2025. CEO Julie Sweet said it directly on the earnings call: "We are exiting on a compressed timeline people where reskilling is not a viable path." What this really means: we're firing humans because AI can do their jobs. Then she announced an $865 million "restructuring program" to make it official.

Now zoom out and look at what just happened... Accenture's clients already include Atlassian, Adobe, Databricks, and Zillow. Replit's clients? Atlassian, Adobe, Databricks, and Zillow. Same logos. Same projects. Different vendor.

Every billable hour Accenture saves a client by switching them to Replit is a billable hour Accenture doesn't get to charge for. They're cannibalizing their core revenue and calling it a partnership. They're literally paying for their OWN funeral.

Why they did it anyway: Wall Street has been hammering Accenture for months. The narrative is clear: AI is killing consulting and Accenture is the slowest to adapt. Stock down 42%. 11,000 layoffs. Analysts cutting price targets every week. The Replit investment isn't a strategy. They just needed to look "AI-native" to investors before the next earnings call.

So they wrote a check to the company building their replacement. And now every Fortune 500 CEO who reads this announcement is going to ask the same question: if Accenture themselves is investing in vibecoding, why are we still paying Accenture $300 an hour to do what Replit does for $20 a month? That question has only one answer... We're NOT.

Because this is literally the same playbook every dying industry follows: newspapers buying digital-first startups in 2008. Taxi companies launching apps in 2013. Hotel chains "partnering" with Airbnb-style platforms in 2016. Every single one ended the same way. The new tool wins. The old company shrinks. The employees get laid off in batches with words like "restructuring" and "rotation" and "reinvention."

Accenture isn't building the future. They're funding the people doing it because they can't. This is more proof that AI will replace even more jobs.
137 replies · 206 reposts · 615 likes · 236.8K views
Jason Block @jw_block
Two Roads for AI Regulation and Why Both Are Dead Ends... is the FDA a model worth considering in the world of AI?
1 reply · 1 repost · 0 likes · 31 views
Prajwal @0xPrajwal_
Interviewer: If AI can write code, why should we hire you ?
428 replies · 19 reposts · 529 likes · 120.1K views
Jason Block @jw_block
Honestly, the idea of OC is great, but the implementation isn’t. I’ve had much more success and much less headache just building my own bespoke assistant that does what I want and nothing more. I tried deploying OC twice: once when it was still clawdbot, and once in early March. Each time I ran into constant troubleshooting needs. It just wasn’t worth the upkeep. Part of my issue was my stance on security; I asked it to jump through a lot of hoops. Three weeks ago I killed my second OC deployment and started building my own version using some of the same concepts, my own security concepts, and a database-centric operating model. It has been extremely enjoyable and productive. I’m not selling anything, just sharing the advice that I think most people should do the same: build your own core logic, your own skills, your own everything.
0 replies · 0 reposts · 0 likes · 13 views
Philipp Keller @philkellr
I approached OpenClaw wrong. 3 wrong assumptions I made:

#1 OpenClaw knows how to build an OpenClaw agent

I assumed that the main agent would know (or inject) OpenClaw docs and best practices. It doesn't. My agent guided me to add my own mobile number to WhatsApp; the docs prominently warn not to do this. My agent motivated me to install a 3-level memory system with 3rd-party tools; OpenClaw has its own out-of-the-box solution. Of course, I also never read the docs myself. So I left the architecture to my own agent, having no clue the whole time what I was building or what the best practice would be, and when things didn't work out I had no plan for how to fix it. And no, I didn't take a cheap model. I always built with Sonnet or Opus. I assumed OpenClaw was some magic device which figures everything out on its own. After months of vibecoding where I treated AI as a junior dev, I should have known better.

#2 My agent would keep my codebase clean

Throughout building my assistant I changed my mind about how things should be built: I learned new concepts, gave up features midway. I sprinkled in some "please clean up" prompts, but the codebase degraded fast and I slopped myself into a corner. My markdown files became huge. I got code duplication. With every project I had before, I'd do cleanup sessions; for whatever reason I didn't do this with OpenClaw.

#3 OpenClaw updates would be seamless

Here too I assumed things would just magically work: openclaw update, then running openclaw doctor, and my agent would be back. I never cared to read through the release notes. I somehow didn't realize that the project was only 2 months old and that of course it would have breaking changes with every version.

---

I'm now reading through the docs. Frankly, they are a bit of a mess, but I finally understand how OpenClaw works. I'm getting my initial excitement back. I feel like the architect again and can help my agent build itself using best practices.
28 replies · 1 repost · 81 likes · 12.5K views