Ryan Shaw

1.5K posts


@ryankshaw

Joined May 2008
218 Following · 218 Followers
Ryan Shaw reposted
Jeremy@Jeremybtc·
A man with no working truck convinced Wall Street he had built the next Tesla. His company hit $30 BILLION. All he did was push it down a hill with no engine.

Trevor Milton founded Nikola in 2014, named after the same inventor as Tesla. The goal was to build hydrogen-powered trucks that would make diesel obsolete. He had no trucks.

In 2018 he released a promotional video called "Nikola One In Motion." It showed a sleek semi truck accelerating smoothly down an open highway. Investors went wild. What nobody knew was that the truck had no engine, no fuel cell, and no propulsion system of any kind. Milton's team towed it to the top of a hill, tilted the camera to hide the slope, and let it roll.

He spent the next four years doing the same thing with words, on podcasts, television, and social media. Investors were told Nikola could produce its own hydrogen. It could not. They were told the trucks were ready for production. They were not. They were told orders were flooding in. They weren't.

In June 2020 Nikola went public. Within days the company was worth $30 BILLION, more than Ford. Milton's personal stake hit $7.3 BILLION overnight. A $32.5 MILLION ranch in Utah followed, a record for the state at the time.

In September 2020 Hindenburg Research published a report calling Nikola "an intricate fraud" built on "an ocean of lies." Milton resigned within ten days. A federal jury convicted him of securities fraud and wire fraud in 2022. He was sentenced to four years in prison the following year. He never went; he was free on $100 MILLION bail pending appeal.

He and his wife donated $3.2 MILLION to Donald Trump's 2024 campaign. In March 2025 Trump gave him a full pardon. The pardon erased $168 MILLION in restitution to defrauded shareholders. Nikola filed for bankruptcy the following month, leaving thousands of investors with nothing.

The company never had a product. The only thing that was real was the $30 BILLION valuation, the $7 BILLION that landed in his pocket, and the pardon that made sure none of it had to be returned.
1.1K replies · 8.7K reposts · 27.7K likes · 3.1M views
Cory House@housecor·
Poll: What's the typical software development bottleneck in 2026? (A bottleneck is where work tends to pile up due to insufficient throughput relative to other functions)
15 replies · 0 reposts · 6 likes · 3.5K views
Ryan Shaw@ryankshaw·
@PaulRoundy1 @grok what does a strong El Niño like this imply for Wasatch ski resorts this winter?
1 reply · 0 reposts · 0 likes · 8 views
Ryan Shaw reposted
Aaron Levie@levie·
It’s remarkable how often you need to dramatically upgrade your AI architecture given the pace of progress in AI models right now. If you’re building agents, every few quarters you basically need to throw away large parts of previous work that you set up to compensate for model limitations. The systems you built to mitigate context-window limits aren’t useful anymore, and for many use cases it’s easier just to throw more compute at a problem today in ways that wouldn’t have worked previously.

If you’re deploying agents in a workflow, you likely need to be rethinking your core systems at about that same frequency. The way you would deploy agents in an enterprise 18 months ago is entirely different from the best practices you’d have today.

This is partly why everyone’s working so hard right now. Right as a best practice is solidified, models improve dramatically, and that old work is rendered obsolete. It’s unclear that this lets up anytime soon, which is why it pays to be so wired in right now.
Sam Hogan 🇺🇸@samhogan

Most of the tooling around LLMs was built for a world that largely doesn’t exist anymore. RAG, GraphRAG, multi-agent orchestration, ReAct frameworks, prompt management/versioning tools, LLMOps tooling, eval tools, gateways, finetuning libs, etc. have all been obsoleted in the last 3 months.

99 replies · 107 reposts · 976 likes · 242.5K views
Ryan Shaw reposted
rahul@rahulgs·
Seems obvious, but:

Things that are changing rapidly:
1. context windows
2. intelligence / ability to reason within context
3. performance on any given benchmark
4. cost per token

Things that are not changing much:
1. humans
2. human behavior, preferences, affinities
3. tools, integrations, infrastructure
4. single-core CPU performance

Therefore, ngmi:
1. "i found this method to cut 15% context"
2. "our method improves retrieval performance 10% by using hybrid search"
3. "our finetuned model is cheaper than opus at this benchmark"
4. "our harness does this better because we invented this multi agent system"
5. "we're building a memory system"
6. "context graphs"
7. "we trained an in-house specialized RL model to improve task performance in X benchmark at Y% cost reduction"

wagmi:
1. product/UI
2. customer acquisition
3. integrations
4. fast linting, CI, skills, feedback for agents
5. background agent infra to parallelize more work
6. speeding up your agent verification loops
7. training your users, connecting to their systems and working with their data, meeting them where they are
110 replies · 229 reposts · 3.3K likes · 402.8K views
Ryan Shaw reposted
Cheng Lou@_chenglou·
My dear front-end developers (and anyone who’s interested in the future of interfaces): I have crawled through depths of hell to bring you, for the foreseeable years, one of the more important foundational pieces of UI engineering (if not in implementation then certainly in concept): a fast, accurate, and comprehensive userland text-measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurement and reflow.
1.3K replies · 8.3K reposts · 65.4K likes · 23.9M views
Ryan Shaw reposted
Andrej Karpathy@karpathy·
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks, but imo coding agents basically didn’t work before December and basically work since. The models have significantly higher quality, long-term coherence, and tenacity, and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home, so I wrote: “Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me”. The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report, and it was just done. I didn’t touch anything. All of this could easily have been a weekend project just 3 months ago, but today it’s something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You’re not typing computer code into an editor the way things have been since computers were invented; that era is over. You're spinning up AI agents, giving them tasks *in English*, and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory, and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top-tier "agentic engineering" feels very high right now.

It’s not perfect: it needs high-level direction, judgement, taste, oversight, iteration, and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right, hand off the parts that work, and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
1.6K replies · 4.8K reposts · 37.3K likes · 5.1M views
Ryan Shaw reposted
Greg Brockman@gdb·
Software development is undergoing a renaissance in front of our eyes. If you haven't used the tools recently, you are likely underestimating what you're missing. Since December, there's been a step-function improvement in what tools like Codex can do.

Some great engineers at OpenAI told me yesterday that their job has fundamentally changed since December. Prior to then, they could use Codex for unit tests; now it writes essentially all the code and does a great deal of their operations and debugging. Not everyone has yet made that leap, but it's usually because of factors besides the capability of the model.

Every company faces the same opportunity now, and navigating it well, just like with cloud computing or the Internet, requires careful thought. This post shares how OpenAI is currently approaching retooling our teams toward agentic software development. We're still learning and iterating, but here's how we're thinking about it right now.

As a first step, by March 31st, we're aiming that: (1) for any technical task, the tool of first resort for humans is interacting with an agent rather than using an editor or terminal; (2) the default way humans use agents is explicitly evaluated as safe, but also productive enough that most workflows do not need additional permissions.

To get there, here's what we recommended to the team a few weeks ago:

1. Take the time to try out the tools. The tools do sell themselves: many people have had amazing experiences with 5.2 in Codex after having churned from Codex web a few months ago. But many people are also so busy they haven't had a chance to try Codex yet, or got stuck thinking "is there any way it could do X" rather than just trying.
- Designate an "agents captain" for your team: the primary person responsible for thinking about how agents can be brought into the team's workflow.
- Share experiences or questions in a few designated internal channels.
- Take a day for a company-wide Codex hackathon.

2. Create skills and AGENTS.md files.
- Create and maintain an AGENTS.md for any project you work on; update it whenever the agent does something wrong or struggles with a task.
- Write skills for anything you get Codex to do, and commit them to the skills directory in a shared repository.

3. Inventory and make accessible any internal tools.
- Maintain a list of tools that your team relies on, and make sure someone takes point on making each one agent-accessible (such as via a CLI or MCP server).

4. Structure codebases to be agent-first. With the models changing so fast, this is still somewhat untrodden ground and will require some exploration.
- Write tests that are quick to run, and create high-quality interfaces between components.

5. Say no to slop. Managing AI-generated code at scale is an emerging problem and will require new processes and conventions to keep code quality high.
- Ensure that some human is accountable for any code that gets merged. As a code reviewer, maintain at least the same bar as you would for human-written code, and make sure the author understands what they're submitting.

6. Work on basic infra. There's a lot of room for everyone to build basic infrastructure, guided by internal user feedback. The core tools are getting a lot better and more usable, but there's a lot of infrastructure that currently goes around the tools, such as observability, tracking not just the committed code but the agent trajectories that led to it, and central management of the tools that agents are able to use.

Overall, adopting tools like Codex is not just a technical but also a deep cultural change, with a lot of downstream implications to figure out. We encourage every manager to drive this with their team and to think through other action items. For example, per item 5 above, what else can prevent a lot of "functionally correct but poorly maintainable code" from creeping into codebases?
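As a concrete illustration of the AGENTS.md recommendation, a minimal file might look like the sketch below. This is only a plausible example, not OpenAI's actual template; the project name, commands, and paths are all hypothetical:

```markdown
# AGENTS.md

## Project overview
Acme billing service: a Python monorepo with a FastAPI backend under `src/`
and tests under `tests/`.

## Setup and commands
- Install deps: `pip install -e ".[dev]"`
- Run tests: `pytest -q` (keep the suite fast; agents run it after every change)
- Lint/format: `ruff check . && ruff format .`

## Conventions
- All new code needs type hints and a unit test.
- Never edit generated files in `src/proto/`.

## Known pitfalls (update whenever the agent struggles)
- The test DB fixture requires `docker compose up -d postgres` first.
```

The "known pitfalls" section is the part the post suggests keeping alive: each time the agent does something wrong, the fix is recorded there so the next run doesn't repeat it.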
414 replies · 1.6K reposts · 12.2K likes · 2.1M views
Ryan Shaw@ryankshaw·
@levelsio another way they give doctors shady kickbacks: "Here, write some diagram on this napkin" *doctor scribbles* "Great, we'll pay you a million dollars to license this IP from you (but only if you promise to use our product)." 100% legal, and it happens all the time!
0 replies · 0 reposts · 0 likes · 28 views
@levelsio@levelsio·
My dad is a cardiologist, and when I was a kid we had holidays paid for by Pfizer, Bayer, Novartis, Merck, etc. There'd be a 2-hour presentation of the drugs they wanted you to prescribe, and it was kinda implicit: if you made them money, they'd get you on more free holidays for your entire family.

We stopped going on these trips, as my dad felt conflicted, and then a few years later the Netherlands banned that. Then instead of free holidays the gifts turned into free pens and notepads 😂

So I personally know VERY well the influence pharmaceuticals have on doctors prescribing drugs, which they still do in lots of countries. Even in the Netherlands it didn't stop; there are other ways for pharmaceuticals to bribe doctors, like donating millions to their "research fund," which the doctors then use for "research trips."

Painkiller on Netflix is a great documentary that also talks about this with regard to OxyContin, but it happens with all drugs and even medical devices, and no, it never stopped!
SCOTT@scottinallcaps

@levelsio I think he actually believes this

185 replies · 918 reposts · 11K likes · 1.6M views
Ryan Shaw@ryankshaw·
@grok @FinanceLancelot is that a good thing for stocks? for the US economy? for the USD? what would be expected to happen to each of those three if this were true?
2 replies · 0 reposts · 1 like · 56 views
Grok@grok·
The 45-year cycle refers to observed economic patterns in the Dow Jones to Gold Ratio, with phases from 1933-1978 (ending in stagflation) and 1980-2025 (growth, then decline). Warsh's "Reagan moment" (he noted it was ~44 years ago) suggests an inflection point similar to 1981's shift toward deregulation, tax cuts, and innovation. It implies a major policy reset is underway, reducing destructive regulations to boost growth and micro-foundations like hard work and risk-taking.
1 reply · 0 reposts · 0 likes · 68 views
Financelot@FinanceLancelot·
I was shocked to hear the new Federal Reserve chairman Kevin Warsh talk about the 45 year cycle yesterday. I was also shocked to hear him admit that we're entering a new economic system, saying "This is our Reagan moment." They never admit stuff like this unless it's underway.
Financelot@FinanceLancelot

The 45 year cycle began in 1980 and ended in November 2025. The previous 45 year cycle began in 1933 and ended in 1978. Dow Jones to Gold Ratio

188 replies · 1.4K reposts · 9.2K likes · 2.3M views
Ryan Shaw@ryankshaw·
@BottjenJason @SenRandPaul @grok how much of the growing insolvency in social security is from people living longer and how much is from money that politicians have “borrowed” from it and not paid back
1 reply · 0 reposts · 0 likes · 24 views
Jason Bottjen@BottjenJason·
@SenRandPaul For Social Security to work, people need to:
1. Not live as long, or
2. Pay a LOT more money into it if they want to keep the retirement age the same, or
3. Retire at an older age.
People need to pick one.
392 replies · 1 repost · 77 likes · 17.7K views
Senator Rand Paul@SenRandPaul·
I support bold reforms to Social Security to guarantee its long-term solvency. With Americans living longer, real change is non-negotiable. That’s why I’ve proposed raising the full retirement age to 70—a necessary step to keep Social Security sustainable for future generations. If we want this program to survive, we must act now.
17.9K replies · 1.1K reposts · 7.3K likes · 2.8M views
W@VR_FR_Spotter·
@SpencerJCox Show a map 🗺️ of those trails
1 reply · 0 reposts · 0 likes · 359 views
Spencer Cox@SpencerJCox·
Today we released our plans for the most extensive trail system in the world: the Utah Trail Network. 2,600 miles of new paved trails and 500 miles of existing trails to connect Utahns of all ages and abilities to their destinations and communities. Once complete, 95% of all Utahns will live within one mile of the Utah Trail Network. Over time we will connect 208 Utah towns and cities, 33 university and community college locations, 74 high-capacity transit stations, 6 national parks, and 25 state parks. I’m grateful to live in a state where we still dream big and build big. #BuiltHere
Governor Cox@GovCox

Utah builds for the future. We’re moving from vision to action with a statewide trail network plan so Utahns age 8 to 80 can walk, bike, or roll between the places they live, learn, and work. More here: udot.utah.gov/connect/2025/1…

107 replies · 51 reposts · 1K likes · 165.5K views
Dr Manhattva@Manhattva·
@UnoriginalKoala Bro, I literally want to pump water from Alaska, the Pacific Northwest, and the Mississippi into this Colorado basin drainage system to the point where we can make the Intermountain West the bread bowl of America. I want to control nature righteously.
2 replies · 0 reposts · 20 likes · 257 views
Ryan Shaw@ryankshaw·
@GovCox @UtahDOT HECK FREAKIN YA!!! This is awesome! Thank you @GovCox. If I were ever to be a single-issue voter, trails would be it. You have my vote next election. Urban trails through our beautiful communities will make Utahns happier and healthier! More please!
1 reply · 0 reposts · 3 likes · 147 views
Governor Cox@GovCox·
Utah builds for the future. We’re moving from vision to action with a statewide trail network plan so Utahns age 8 to 80 can walk, bike, or roll between the places they live, learn, and work. More here: udot.utah.gov/connect/2025/1…
119 replies · 10 reposts · 159 likes · 137K views
Branko@brankopetric00·
Vector databases explained for people who just want to understand.

You have 10,000 product descriptions. A user searches for "comfortable outdoor furniture."

Traditional database:
- Searches for exact word matches
- Finds products containing "comfortable" OR "outdoor" OR "furniture"
- Misses "cozy patio seating" even though it's the same thing
- Keyword matching is stupid

Vector database approach:
- Convert the search into numbers representing meaning: [0.2, 0.8, 0.1, 0.9, ...]
- Convert every product description to similar numbers
- Find products with similar number patterns
- Returns "cozy patio seating" because the numbers are close
- Meaning matching is smart

How it works:

Step 1: Turn text into vectors (arrays of numbers)
- "comfortable chair" becomes [0.2, 0.7, 0.1, 0.4, ...]
- "cozy seat" becomes [0.3, 0.8, 0.2, 0.5, ...]
- Similar meanings = similar numbers
- Uses AI models like OpenAI embeddings

Step 2: Store vectors efficiently
- Traditional database: stores text
- Vector database: stores arrays of numbers per item
- Indexes them for fast similarity search
- Optimized for "find similar," not "find exact"

Step 3: Search by similarity
- User query: "outdoor furniture"
- Convert to vector: [0.3, 0.6, 0.2, 0.8, ...]
- Find the closest vectors using math (cosine similarity)
- Return items ranked by similarity score

Use cases:
- Product search that understands intent
- Documentation search that finds relevant answers
- Recommendation engines
- Chatbots that find similar questions
- Anomaly detection

Popular vector databases:
- Pinecone: managed, easy, expensive
- Weaviate: open source, feature-rich
- Milvus: fast, scalable, complex
- pgvector: Postgres extension, simple
- Qdrant: fast, Rust-based

Controversial take: you don't need a vector database for most projects. Start with Postgres + the pgvector extension. Vector databases are great at scale; for under 1 million vectors, your regular database with a vector extension works fine.
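The similarity search in Step 3 can be sketched in a few lines of Python. The embedding vectors here are made-up toy values standing in for what a real embedding model (e.g. an embeddings API) would produce; only the cosine-similarity ranking itself is the point:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|): 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": in practice these come from an embedding model.
catalog = {
    "cozy patio seating":   [0.25, 0.55, 0.20, 0.75],
    "garden dining table":  [0.50, 0.50, 0.50, 0.50],
    "ergonomic desk chair": [0.90, 0.10, 0.70, 0.20],
}
query = [0.3, 0.6, 0.2, 0.8]  # toy vector for "comfortable outdoor furniture"

# Rank every item by similarity to the query, highest first.
ranked = sorted(
    catalog.items(),
    key=lambda item: cosine_similarity(query, item[1]),
    reverse=True,
)
for name, vec in ranked:
    print(f"{cosine_similarity(query, vec):.3f}  {name}")
```

"Cozy patio seating" ranks first even though it shares no words with the query, which is exactly the behavior the keyword-matching approach misses.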
67 replies · 145 reposts · 1.6K likes · 142.7K views
Grok@grok·
Some devs are migrating to TanStack Start (a full app framework) from Next.js due to frustrations with Next's server/client component splits, awkward data mutations, and lack of built-in optimistic updates. TanStack offers simpler, type-safe routing, flexible data handling, and Vite-based speed without Vercel ties. It's not a mass exodus—Next remains popular for its ecosystem—but TanStack suits those seeking less "magic" and more control. Check tanstack.com/start for docs!
1 reply · 0 reposts · 5 likes · 198 views
Omar F.C.@potencytoact·
.@grok why does everyone love TanStack?
1 reply · 0 reposts · 2 likes · 1.9K views
shaf@aaronshaf·
Make .html URL extensions great again.
1 reply · 0 reposts · 0 likes · 38 views
Ryan Shaw@ryankshaw·
@staysaasy one thing that graph doesn't show: even though the home buyer's equity experienced less % growth (blue line), if they only made a 10% down payment, the return on equity is higher. $100k down on a $1M house * 50% appreciation = $500k growth on $100k invested = a 500% return.
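The leverage arithmetic in the tweet checks out and can be verified directly. The numbers are the tweet's own hypothetical 10%-down purchase; this simple sketch ignores mortgage interest, taxes, and transaction costs, which would lower the real return:

```python
# Leveraged return on a home purchase: appreciation accrues on the full
# house price, but the buyer only put down the down payment.
house_price = 1_000_000
down_payment = 100_000          # 10% down
appreciation_rate = 0.50        # 50% price growth

gain = house_price * appreciation_rate  # growth on the whole house
return_on_equity = gain / down_payment  # gain relative to cash invested

print(f"Gain: ${gain:,.0f}")                        # Gain: $500,000
print(f"Return on equity: {return_on_equity:.0%}")  # Return on equity: 500%
```

The 5x multiplier on the down payment is exactly the leverage effect: the same 50% appreciation yields only a 50% return for an all-cash buyer.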
0 replies · 0 reposts · 0 likes · 15 views
staysaasy@staysaasy·
This is real. 100% of people I know who didn’t have enough $ in the 2020-2021 era are now sitting on way more money than they would have if interest rates/homes were more affordable. And they’re all playing meme stocks. So now housing affordability/boomers dying is gonna also trigger a market crash as people pull out their down payments.
3 replies · 0 reposts · 15 likes · 13.6K views