Jeremy
@linuxquestions · 5.6K posts

Founder of https://t.co/ELk0TeZdbf, VP Open Source and Technical Community @datadoghq, CNCF Board, Linux Fund, ardent but realistic open source advocate.

Joined February 2007
199 Following · 5.7K Followers
Jeremy @linuxquestions:
For you fellow weather nerds: I just released an ad-free privacy-first Chrome weather extension. Precipitation alerts, rain probability, hourly/daily forecasts, and multiple locations. You can bring your own Pirate Weather API key, or use the default. chromewebstore.google.com/detail/pirate-…
0 replies · 0 reposts · 1 like · 92 views
Jeremy retweeted
Datadog, Inc. @datadoghq:
Learn how you can more easily stay up to date and collaborate with new Incident Management releases, as well as decouple deployments from releases with Feature Flags: youtube.com/watch?v=iMbYjk…
[YouTube video]
0 replies · 1 repost · 3 likes · 809 views
Jeremy @linuxquestions:
It doesn't seem sustainable for Claude to be down this often. It's at the point that it's impacting which tool I reach for first... and paying for multiple coding agents long term isn't a given.
1 reply · 0 reposts · 4 likes · 649 views
Jeremy @linuxquestions:
@astuyve @inerati Really happy with the Pixel Buds Pro 2, and they work very well with Pixel phones.
0 replies · 0 reposts · 1 like · 60 views
AJ Stuyvenberg @astuyve:
@inerati Fine if your software stuff is mostly Google. Google Photos is excellent. AirPods are more annoying to use. AirDrop works between your Pixel and MacBook now.
2 replies · 0 reposts · 15 likes · 2.2K views
Jeremy retweeted
staysaasy @staysaasy:
About to buy some SaaS for my team next week. It has little scale or compliance or data moats. But it’s thoughtfully built a bunch of stuff that makes it a great product that will accelerate my team. Not a single person on my team has suggested we try to vibe code it ourselves. Feeling proud about that.
5 replies · 1 repost · 34 likes · 1.9K views
Jeremy retweeted
Datadog, Inc. @datadoghq:
This Month in Datadog ➡️ Detect data and pipeline issues early with Data Observability, protect agentic AI applications with AI Guard, release software with confidence using Feature Flags, and more. Tune in to the full episode: bit.ly/4ugsgke
2 replies · 2 reposts · 5 likes · 1.4K views
Jeremy retweeted
Datadog, Inc. @datadoghq:
AI is reshaping how engineers build software, acting as a force multiplier rather than a replacement. In this fireside chat, Datadog CPO Yanbing Li and Block’s Manik Surtani share how they’re using AI and open source tools like Goose and Toto to ship faster and more reliably: events.datadoghq.com/summits/2026-s…
[image attached]
1 reply · 1 repost · 1 like · 903 views
Jeremy @linuxquestions:
@GergelyOrosz I agree with the sentiment, but Chime is a weird example to use. When they shut down Kiro, it won't be a signal that agentic AI coding didn't go mainstream.
0 replies · 0 reposts · 0 likes · 72 views
Gergely Orosz @GergelyOrosz:
As AI coding tools went mainstream, Amazon decided it's not worth them supporting their Zoom clone, called Chime (that has paying customers!). And yet startups are assuming it's worth rebuilding and supporting their own JIRA clones (with no paying customers). Who is mistaken?
[image attached]
96 replies · 35 reposts · 858 likes · 95.5K views
Jeremy retweeted
Chris Laub @ChrisLaubAI:
BREAKING: Alibaba tested 18 AI coding agents on 100 real codebases, spanning 233 days each. They failed spectacularly.

Turns out passing tests once is easy; maintaining code for 8 months without breaking everything is where AI completely collapses. SWE-CI is the first benchmark that measures long-term code maintenance instead of one-shot bug fixes. Each task tracks 71 consecutive commits of real evolution.

75% of models break previously working code during maintenance. Only Claude Opus 4.5 and 4.6 stay above a 50% zero-regression rate. Every other model accumulates technical debt that compounds with every single iteration.

Here's the brutal part:
- HumanEval and SWE-bench measure "does it work right now"
- SWE-CI measures "does it still work after 8 months of changes"

Agents optimized for snapshot testing write brittle code that passes tests today but becomes completely unmaintainable tomorrow. They built EvoScore to weight later iterations more heavily than early ones, so agents that sacrifice code quality for quick wins get punished when the consequences compound.

The AI coding narrative just got more honest: most models can write code; almost none can maintain it.
[image attached]
89 replies · 316 reposts · 1.5K likes · 520.2K views
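The tweet above describes EvoScore only at a high level: later iterations count more than early ones, so late regressions hurt the score most. As a loose illustration of that idea (the function name, weighting formula, and parameter here are hypothetical, not taken from the SWE-CI benchmark), a recency-weighted maintenance score could look like:

```python
def recency_weighted_score(passed, alpha=2.0):
    """Hypothetical recency-weighted maintenance score.

    `passed` holds one boolean per consecutive iteration: True if the
    agent's code still passed all tests at that point. Later iterations
    get larger weights ((i+1)**alpha), so a regression that appears late
    in a task's evolution costs more than an early one.
    """
    n = len(passed)
    weights = [(i + 1) ** alpha for i in range(n)]
    total = sum(weights)
    return sum(w for w, ok in zip(weights, passed) if ok) / total

# An agent that regresses on the final iteration scores lower than one
# that failed only the first iteration, even though both failed once.
early_fail = recency_weighted_score([False] + [True] * 9)
late_fail = recency_weighted_score([True] * 9 + [False])
```

Under this sketch, `late_fail` is well below `early_fail`, which captures the tweet's point that compounding late-stage breakage should be punished harder than an early stumble.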
Jeremy retweeted
Robert Lange @RobertTLange:
AgentLens: A WebUI for local observability of agent traces 🔎 Happy to share a side-project for detailed real-time inspection of what all your coding agents are up to.

npx -y @roberttlange/agentlens --browser

It locally monitors session traces and lets you interact with and terminate sessions in Codex, CC, OpenCode, Pi, Cursor Agents & Gemini CLI 🧑‍💻
14 replies · 20 reposts · 110 likes · 8.1K views
Jeremy @linuxquestions:
Everyone thinks they’re behind on AI. But after speaking with hundreds of engineers and leaders, I’m seeing something interesting: AI adoption inside organizations is becoming K-shaped.

I posted my thoughts here: jeremyg.dev/k-shaped-ai-ad…

Curious if others are seeing this same divide emerging.
0 replies · 0 reposts · 1 like · 116 views
Jeremy retweeted
Gregor Ojstersek @gregorojstersek:
Saying "I don't know" is a sign of seniority for me.
0 replies · 1 repost · 5 likes · 357 views
Jeremy @linuxquestions:
The absolute irony...
[image attached]
0 replies · 0 reposts · 2 likes · 118 views
Jeremy retweeted
Joe Weisenthal @TheStalwart:
What's the best evidence of a hard compute/energy constraint for AI? Are there some examples of companies just not accepting new users, or new companies just not being able to operate?
30 replies · 4 reposts · 96 likes · 24.3K views
Jeremy retweeted
staysaasy @staysaasy:
Talk to me Twitter - what is your AI tool token/cost/budget allowance per engineer? How much Claude/Codex $ can people spend per day? What happens when they go over their budget?
10 replies · 1 repost · 13 likes · 7.3K views