Pinned Tweet
David Kandler
21 posts

David Kandler
@DavidKandler
GTM @ LogicstarAI | Co-Host @ unfundable | HSG
Zurich · Joined March 2020
35 Following · 3 Followers

@pcshipp Reddit is full of experts. On everything. Employed in nothing.

Germany / Switzerland - who wants to be on this?
Vaibhav (VB) Srivastav@reach_vb
putting together a group chat for Codex power users in London / Europe who are the biggest ballers around?
David Kandler reposted

Everyone's measuring AI coding ROI wrong.
The pitch: AI assistants make engineers 10x faster.
The reality at every Series A/B team I've talked to:
PR volume up 3-5x
Review quality down
Bug backlog quietly compounding
On-call rotations getting heavier, not lighter
Senior engineers spending 30%+ of their week on triage and root-cause work that used to take 5%
The bottleneck moved. Writing code stopped being the hard part. Owning code after it ships became the hard part.
The "second half" of software engineering (reproduce, root-cause, fix, verify) is still 100% human. That's the gap.
It's also what we're building Logicstar for: an autonomous agent that takes a bug report and ships a verified PR. No human in the loop until review.
Self-serve is open now.


@Samaytwt Producing code was never the hard part. Knowing which code to write, which bugs actually matter, what not to ship. AI compresses the production, not the judgment.

@zeeg honest take: greenfield code gen is a red ocean, maintenance is a blue one. nobody wants to build the agent that owns your pager at 3am, which is exactly why that's where the real upside sits.

@rauchg Petabyte-scale log analysis to trace one threat actor is a hell of a detection investment. The real signal here: attacker prioritized env var enumeration over direct exfil. Suggests credential chaining is the actual goal, not the initial breach.

I want to keep everyone updated on the details of the security investigation.
The team performed an in-depth analysis to search for root causes and to better understand the behavior of the threat actor.
We cast a very wide net, pulling and processing nearly a petabyte of logs from across the entire Vercel network and API, extending well beyond the initial Context[.]ai compromise.
We now understand that the threat actor has been active beyond that startup's compromise. Threat intel points to the distribution of malware to computers in search of valuable tokens like keys to Vercel accounts and other providers.
Once the attacker gets ahold of those keys, our logs show a repeated pattern: rapid and comprehensive API usage, with a focus on enumeration of non-sensitive environment variables.
As a result:
◾ We've deepened and widened our collaboration with partners across the industry, like Microsoft, AWS and Wiz, to further protect the broader internet.
◾ We've notified other suspected victims of this threat actor, independent of this event, encouraging them to rotate credentials and adopt best practices.
We've also shipped a bunch more product enhancements. I'm extremely thankful to our team and industry partners for working around the clock. For more details on the ongoing investigation, refer to our security bulletin:
vercel.com/kb/bulletin/ve…

@mitsuhiko The pricing opacity actually has a second-order effect: it makes budget conversations with eng buyers harder, they can't anchor expectations. You lose the deal before the demo.

I really want to understand what Anthropic is thinking. They are now "secretly" A/B testing entry-level pricing? Sorry folks, I love ya, but that is just weird.
Amol Avasare@TheAmolAvasare
For clarity, we're running a small test on ~2% of new prosumer signups. Existing Pro and Max subscribers aren't affected.

My ideal AI design tool would probably be something like:
A canvas tool, where you can get any view of your app rendered to edit or use as the starting point for a new view. You can freely explore, duplicate, and make changes visually.
You could start these renders from other tools like @linear. User feedback -> render the screen to be edited.
It would have design language, system and product guidance files that help guide the overall design based on your product.
Each artboard carries metadata, like the origin of the view, who created it, and what changes were made when, so you could query things across your whole team.
You could mark areas that you want AI to fill or complete: fill this list, complete the columns with this data or this screenshot.
Edits in the artboard are tracked as a diff. You export those diffs as a plan for a coding agent to build against your actual codebase.
The design tool's agents check in with the coding agent and try to communicate the nuances of the design so it gets built as a prototype.
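The artboard-metadata and diff-as-plan idea above can be sketched in a few lines. All class names, fields, and values here are hypothetical, invented purely to illustrate the shape of the data, not any real tool's schema:

```python
# Sketch: artboard metadata plus visual edits exported as a plain-text
# plan a coding agent could build against. Names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class ArtboardMeta:
    origin: str        # where the render came from, e.g. "linear:FEED-42"
    created_by: str
    history: list[tuple[str, str]] = field(default_factory=list)  # (who, what)


@dataclass
class ArtboardDiff:
    element: str       # which part of the view changed
    before: str
    after: str


def export_plan(meta: ArtboardMeta, diffs: list[ArtboardDiff]) -> str:
    """Turn tracked visual edits into a plan for a coding agent."""
    lines = [f"View from {meta.origin} (created by {meta.created_by}):"]
    for d in diffs:
        lines.append(f"- change {d.element}: {d.before!r} -> {d.after!r}")
    return "\n".join(lines)


meta = ArtboardMeta(origin="linear:FEED-42", created_by="designer@example.com")
diffs = [ArtboardDiff("header.title", "Inbox", "Triage Queue")]
print(export_plan(meta, diffs))
```

Keeping the diff as structured data rather than a flattened image is what would let the coding agent map each visual change back to a concrete code change.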

@PawelHuryn This is the exact gap I’m seeing in outbound to CTOs and EMs right now. Every team pilots AI agents for code. Almost none have closed the loop on Sentry/Datadog triage yet. The on-call page still lands as raw noise for most 40–100 person teams.

The example to study is alert triage.
Sentry POSTs the alert. Claude pulls the stack trace, correlates with recent commits, opens a draft PR with a proposed fix.
On-call wakes up to a PR, not a blank terminal.
Noah Zweben@noahzweben
Claude Code Routines are here! In addition to a schedule, you can now trigger templated agents via GitHub event or API – with our infra & your MCP + repos. They've changed how we do docs, backlog maintenance, and more internally at Anthropic. Get started at claude.ai/code/routines
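The alert-to-draft-PR flow described above can be sketched minimally. The payload shape and every helper name below are illustrative assumptions, not Sentry's actual webhook schema or any agent's real API:

```python
# Sketch: correlate an alert's stack trace with recent commits and
# draft a PR summary for the on-call engineer. Field names are assumed.

def frames_from_alert(alert: dict) -> list[str]:
    """Pull the file paths referenced in the alert's stack trace."""
    return [f["filename"] for f in alert.get("stacktrace", [])]


def correlate_commits(frames: list[str], recent_commits: list[dict]) -> list[dict]:
    """Keep only commits that touched a file appearing in the stack trace."""
    touched = set(frames)
    return [c for c in recent_commits if touched & set(c["files"])]


def draft_pr_summary(alert: dict, suspects: list[dict]) -> str:
    """Compose the body of a draft PR for human review."""
    lines = [f"Triage: {alert['title']}"]
    for c in suspects:
        lines.append(f"- suspect commit {c['sha']} by {c['author']}")
    return "\n".join(lines)


alert = {
    "title": "TypeError in checkout",
    "stacktrace": [{"filename": "cart.py"}, {"filename": "utils.py"}],
}
commits = [
    {"sha": "a1b2c3", "author": "dev1", "files": ["cart.py"]},
    {"sha": "d4e5f6", "author": "dev2", "files": ["README.md"]},
]
print(draft_pr_summary(alert, correlate_commits(frames_from_alert(alert), commits)))
```

The point of the flow is exactly this narrowing step: the on-call engineer reviews a short list of suspect commits and a proposed fix instead of starting from raw noise.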
