Ryan Topps
@RyanJTopps
Software Engineer
658 posts
New York · Joined August 2017
114 Following · 87 Followers
Ryan Topps@RyanJTopps·
They aren't general intelligence; they are just compressed networks of their in-distribution training. Granted, it's remarkable that we have a 'general' algorithm to achieve that compression and generalization within the distribution, but it's far from picking up novel tasks without being retrained with them in distribution.
0
0
1
47
Krasnal Wojtek@KrasnalWojtek·
@kimmonismus I don't know what to think about this one. Either this is not a good task for LLMs, or current models have zero general intelligence. But I am sure they will get better in the coming months. But will it mean anything?
4
0
16
1.4K
Chubby♨️@kimmonismus·
Back to work, friends. Frontier models achieve below 1% on ARC-AGI-3. Let's see if this will be saturated by end of year.
Chubby♨️ tweet media
64
43
922
39.6K
Ryan Topps@RyanJTopps·
@SHOTGUNSUNN Well, humans react before an event occurs. I am not even lying, look it up: there were studies where drivers reacted to IEDs before the event occurred.
1
0
3
15K
Melissa@SHOTGUNSUNN·
The concept of Kirk doing this face 2 milliseconds before getting kirked
Melissa tweet media
929
12.6K
272.5K
8.3M
Chase@chaseposts·
Oil spiking hard on the Iran conflict = classic petrodollar boost. Importers need more USD to buy expensive crude → dollar demand surges → DXY pushing toward 2026 highs around 99.7. A stronger USD makes anything paying in dollars (Treasuries, stocks, etc.) way more appealing to foreign buyers. Simple but powerful right now.
8
0
5
7.8K
Ramp Capital@RampCapitalLLC·
The fuck is going on here
Ramp Capital tweet media
400
230
3.6K
815.4K
Shay Boloor@StockSavvyShay·
Jensen Huang says every company will need an OpenClaw agentic system strategy, calling it "the new computer." He claims OpenClaw became the most popular open-source project in $NVDA history within weeks and compares its impact to Linux reshaping the software stack.
176
342
3.7K
1.4M
Pedro Domingos@pmddomingos·
Anthropic will go to any lengths to prove that AI is dangerous, including making it dangerous.
48
10
192
11.9K
Ihtesham Ali@ihtesham2005·
🚨 Holy shit... A developer on GitHub just built a full development methodology for AI coding agents, and it has 40.9K stars on GitHub.

It's called Superpowers, and it completely changes how your AI agent writes code.

Right now, most people fire up Claude Code or Codex and just… let it go. The agent guesses what you want, writes code before understanding the problem, skips tests, and produces spaghetti you have to babysit. Superpowers fixes all of that.

Here's what happens when you install it:

→ Before writing a single line, the agent stops and brainstorms with you. It asks what you're actually trying to build, refines the spec through questions, and shows it to you in chunks short enough to read.
→ Once you approve the design, it creates an implementation plan so detailed that "an enthusiastic junior engineer with poor taste and no judgement" could follow it.
→ Then it launches subagent-driven development. Fresh subagents per task. Two-stage code review after each one (spec compliance, then code quality). The agent can run autonomously for hours without deviating from your plan.
→ It enforces true test-driven development. Write failing test → watch it fail → write minimal code → watch it pass → commit. It literally deletes code written before tests.
→ When tasks are done, it verifies everything, presents options (merge, PR, keep, discard), and cleans up.

The philosophy is brutal: systematic over ad hoc. Evidence over claims. Complexity reduction. Verify before declaring success.

Works with Claude Code (plugin install), Codex, and OpenCode.

This isn't a prompt template. It's an entire operating system for how AI agents should build software. 100% open source. MIT License.
Ihtesham Ali tweet media
208
686
6.2K
923.4K
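The red-green loop described above (write a failing test, watch it fail, write minimal code, watch it pass, commit) can be sketched in plain Python. The `slugify` function and its test are hypothetical stand-ins for illustration, not part of the Superpowers project:

```python
# Step 1 (red): write the failing test first, before any implementation exists.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 2 (green): write the minimal code that makes the test pass.
import re

def slugify(title: str) -> str:
    # Lowercase, keep alphanumeric runs, join them with single hyphens.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Step 3: run the test; only commit once it passes.
test_slugify()
```

Running the file before step 2 would raise `NameError`, which is the "watch it fail" half of the loop the tweet insists on.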
Ryan Topps@RyanJTopps·
@AlecLace Ironically, the best marketing he got in a year.
0
0
0
752
Alec Lace@AlecLace·
🚨 Ben Stiller has been pushing his soda brand hard for nearly a year. Yet the Stiller Soda account on X sits at just 822 followers. Yikes. 😬🥤
Alec Lace tweet media
1.3K
563
12.8K
1.7M
Ryan Topps@RyanJTopps·
@nypost I know the right man for the right mission
Ryan Topps tweet media
0
0
0
49
New York Post@nypost·
Trump briefed that Iran's new supreme leader Mojtaba Khamenei is probably gay - and president has priceless reaction trib.al/MugVpH6
New York Post tweet media
3.9K
3.6K
24.4K
18.8M
Ryan Topps@RyanJTopps·
@edandersen Eh, yes and no. Over the years I've gotten quite good at PR reviews, so my AI flow is just me running a team of juniors full time.
0
0
1
48
Ed Andersen@edandersen·
Reading code, especially code you didn't write, is 10x harder than writing code. These people AI-generating 90%+ of their code *are* reading it all, right… or are they just dumping the difficult verification work on their colleagues in PRs?
209
86
1.4K
51.8K
Chen Avnery@MindTheGapMTG·
@ThePrimeagen Except when the markdown IS the architecture. Our agent pipeline runs on .md files - scope definitions, task queues, coordination state, memory. The RPC call reads markdown to know what it can touch and writes markdown for the next agent. The format became the system.
5
0
1
2K
Ryan Topps@RyanJTopps·
@trikcode Well, you're paying for the knowledge to make the token spend useful.
0
0
1
24
Wise@trikcode·
Software engineers are the happiest people on Earth now. They pay $100/month for Claude Code to do the work. Their employer pays them $10,000/month for the results. $9,900 profit for sipping coffee and talking to AI. The funniest part? Not a single dev with a full-time job will ever admit this publicly. What a time to be alive.
708
239
5.2K
620.3K
Gergely Orosz@GergelyOrosz·
As AI coding tools went mainstream, Amazon decided it's not worth supporting their Zoom clone, called Chime (which has paying customers!). And yet startups are assuming it's worth rebuilding and supporting their own JIRA clones (with no paying customers). Who is mistaken?
Gergely Orosz tweet media
96
34
858
95.6K
Dwayne@CtrlAltDwayne·
It's the Metaverse all over again. Why can't they just fork a China-made open-source model and build on top of it? I don't get why these labs are doing everything from scratch. They could fork Qwen and just make it better or something instead.
First Squawk@FirstSquawk
META HAS DELAYED THE RELEASE OF ITS NEW AI MODEL "AVOCADO" AFTER INTERNAL TESTING REVEALED IT LAGGED BEHIND RIVAL MODELS FROM GOOGLE, OPENAI, AND ANTHROPIC IN REASONING, CODING, AND WRITING PERFORMANCE.
25
1
254
21.7K
Ryan Topps@RyanJTopps·
@atmoio That's been my life in big tech before AI
0
0
1
26
Mo@atmoio·
Software engineering was once fulfilling and deterministic. Now it's managing a fleet of junior engineers who constantly lie to you. Meanwhile your ability and willingness to code atrophy exponentially. At this rate you won't lose your job to AI, but to someone *not* using AI.
Rohan Paul@rohanpaul_ai
New Harvard Business Review research reveals that excessive interaction with AI is causing a specific type of mental exhaustion (or "AI brain fry"), which is particularly hitting high performers who use the tech to push past their normal limits.

A survey of 1,500 workers reveals that AI is intensifying workloads rather than reducing them, leading to a new form of mental fog. While AI is generally supposed to lighten the load, it often forces users into constant task-switching and intense oversight that actually clutters the mind. This mental static happens because you aren't just doing your job anymore; you are managing multiple digital agents and double-checking their work, which creates a massive cognitive burden.

The study found that 14% of full-time workers already feel this fog, with the highest impact seen in technical fields like software development, IT, and finance. High oversight is the biggest culprit: supervising multiple AI outputs leads to a 12% increase in mental fatigue and a 33% jump in decision fatigue.

This isn't just a personal health issue; it directly impacts companies, because exhausted employees are 10% more likely to quit. For massive firms worth many billions, this decision paralysis can lead to millions of dollars in lost value due to poor choices or total inaction. Essentially, we are working harder to manage our tools than we are to solve the actual problems they were meant to fix.

hbr.org/2026/03/when-using-ai-leads-to-brain-fry
72
83
1K
129.1K
Ryan Topps@RyanJTopps·
@mSanterre That's not how it works, but okay: you get the shares on the vest date at that price and owe tax on the dollar amount you receive.
0
0
4
3.1K
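The mechanics in this exchange reduce to simple arithmetic: tax is owed on the vest-date value of the shares, regardless of where the price goes before lockup ends. A worked example with assumed numbers (100 shares, a 30% ordinary-income rate; both are illustrative, not from the thread):

```python
shares = 100
vest_price = 120.0   # price on the vest date: this is the taxed value
tax_rate = 0.30      # assumed ordinary-income tax rate

taxable_income = shares * vest_price   # $12,000 of income recognized at vest
tax_owed = taxable_income * tax_rate   # $3,600 owed on the vest-date value

lockup_price = vest_price * 0.25       # stock falls 75% before lockup ends
sale_proceeds = shares * lockup_price  # only $3,000 if sold at lockup

print(f"tax owed: ${tax_owed:,.0f}, proceeds at lockup: ${sale_proceeds:,.0f}")
```

With these numbers the tax bill ($3,600) exceeds what the shares are worth at lockup ($3,000), which is exactly the scenario the original tweet is pointing at.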
max@mSanterre·
Imagine paying taxes on vested RSUs at $120 and it falls by 75% by the time lockup ends.
max tweet media
57
18
1.2K
211.3K
Oliver@olvrgln·
Controversial: Git and GitHub were designed for a world where humans write code. That's not the world we're moving into.

Here's why:
1. Git assumes a human writes a diff.
2. A human opens a pull request.
3. Another human reviews it.
4. Then it gets merged.

That mental model has worked for 15+ years. But today:
- Engineers prompt agents to generate entire features
- AI refactors 5,000 lines in minutes
- Bots auto-fix tests, lint, and security issues
- Background agents continuously modify codebases

And we're still pushing all of that through a PR workflow built in 2008.

Here's the tension: When an agent edits 5k lines across 37 files, what does "review" even mean? When 60% of a commit was AI-generated, who's the author? When changes happen continuously in the background, does a pull request still make sense?

Git tracks changes. It doesn't track intent. GitHub optimizes for human discussion. It doesn't optimize for human + agent collaboration. We've basically duct-taped AI onto a system built for human-only contributors. It works, but it's starting to creak.

The real shift won't just be better coding assistants. It'll be:
- Intent-based diffs instead of line-based diffs
- Agent-aware reviews
- Policy-driven merges
- Provenance tracking (who prompted what, with which context)
- Trust systems for machine contributors

The next generation of dev tooling will treat agents as first-class collaborators. That's why we're building Mesa: infrastructure for human + agent collaboration, not just AI autocomplete.
13
1
37
5.5K
Ryan Topps@RyanJTopps·
@thdxr In big tech there is so much debt that it's easy to crank out simple refactors or clean-ups that don't take much thought but require effort. AI has made the marginal cost near zero.
0
0
0
291
dax@thdxr·
us: we are struggling to figure out the best way to use coding agents, we don't have clarity yet
everyone else: our team is moving at speeds unheard of, all our PRs are ai generated, we've cleared 6 years of backlog
man we must really suck huh
167
105
3.1K
183.9K
Skye Priestley@eternalmagi·
@astuyve I'm skeptical about this mythical past when all engineers were 10x and no breaking change ever got pushed to prod.
2
0
3
118