Tim Abbott

104 posts

@tabbott3

Lead developer of @zulip. Formerly CTO of @ksplice.

San Francisco, CA · Joined May 2010
68 Following · 541 Followers
Tim Abbott @tabbott3
@DavidSKrueger I get plenty of initial disagreement and useful context, but all chatbots do seem to often come around to whatever my bias is with more turns. Anyway, an interesting corollary is to not count someone else's AI sessions as evidence on a topic unless you trust their judgement.
David Krueger 🦥 ⏸️ ⏹️ ⏪
It feels like 100% of the time to me.
Steve Rathje @steverathje2

An analysis of 1 million Claude conversations found that Claude was sycophantic around 9% of the time. However, this rate varied substantially by topic: sycophancy was much higher in conversations about spirituality (38%) and relationships (25%). See Anthropic's full analysis here: anthropic.com/research/claud…

Sycophancy was less common in more recent models (such as Opus 4.7 and Mythos), but even small amounts of sycophancy may have psychological consequences, given the sheer number of people using generative AI products. See our work on the consequences of sycophancy here: osf.io/preprints/psya…

Tim Abbott @tabbott3
@zeeg Zulip may not quite qualify as big but I think it'd be a great target for this.
David Cramer @zeeg
What’s a big public codebase that has a large api surface or otherwise server code? Preferably open source. I’d like to try running an experiment with Warden on it. If it works I’ll send you a bunch of disclosures or potential vulnerabilities.
Micah Berkley - The 50 Cent of AI.
So @AnthropicAI is no longer allowing me to scan my own software for security vulnerabilities using Opus 4.7. This is a huge problem. With Opus 4.6 this was never an issue, and respectfully, Opus 4.6 was a beast at this. I'm really disappointed, especially since I'm paying $200/month for this. I'm not going to use freaking Sonnet to do security work. @bcherny help us out maannn...
Tim Abbott @tabbott3
@simonw I've found it to be quite good at making changes to Zulip. I think it does matter how nice your codebase is. If you have lots of uncommented nonlocal dependencies, I'm sure you'll feel that when trying to have AI make changes, just like if you hired a new person.
Tim Abbott retweeted
Simon Willison @simonw
Is there still a widespread belief that LLMs and coding agents are good for greenfield development but don't help for maintaining large existing codebases? I don't think that idea holds up any more
Tim Abbott retweeted
Charlie Marsh @charliermarsh
We wrote up everything we do to secure our open source projects at Astral
Tim Abbott @tabbott3
@simonw The thing I don't think has gotten enough attention is the upcoming catch-22 between wanting to install dependency version updates immediately (because exploit generation is really fast) and the risk of supply chain attacks.
Simon Willison @simonw
Wrote up some thoughts on Anthropic's Project Glassing, where their latest Opus-beating model is available to partnered security research organizations only. Given recent alarm bells raised by credible security voices, I think this is a justified decision: simonwillison.net/2026/Apr/7/pro…
Tim Abbott retweeted
Mike Krieger @mikeyk
Claude is #1 in the App Store today — I want to say a huge thank you to all of our new (and existing!) users for the support. We’re working hard for you, please share your thoughts and feedback along the way.
Tim Abbott retweeted
Future of Life Institute @FLI_org
Statement from Max @Tegmark, Founder and Chair of the Future of Life Institute, in the aftermath of @AnthropicAI refusing the Department of War's ultimatum: “Fully autonomous weapons systems and Orwellian AI-enabled domestic mass surveillance are affronts to our dignity and liberty. We highly commend Anthropic, OpenAI and leading researchers from across AI companies for standing up for the principle that AI should never be used to kill people without meaningful human control, and that domestic mass surveillance of US citizens is a red line that should never be crossed. We call on all AI companies to follow suit. However, our safety and basic rights must not be at the mercy of a company's internal policy; lawmakers must work to codify these overwhelmingly popular red lines into law. All AI systems should be under meaningful human control. This is especially true for those that could be used in the taking of human lives. Moreover, current AI systems are inherently unpredictable and fundamentally brittle, unsuited for very high stakes applications. Even if they could be made effective, fully autonomous weapons would pose a threat not just to human dignity and liberty but to American national security: they could inadvertently fuel escalation, and would easily proliferate, putting cheap, accessible, weapons of assassination and mass destruction in the hands of non-state actors and adversaries. They should be prohibited by the US and globally.”
Tim Abbott retweeted
Guido van Rossum @gvanrossum
Talking of spines, Anthropic definitely has one. May they continue to expose and resist the Pentagon’s blackmail.
Tim Abbott retweeted
Max Tegmark @tegmark
Anthropic 2024: You can trust that we'll keep all our safety promises. Anthropic 2026: Nvm
Tim Abbott retweeted
Jean-Denis Greze 💡 @jgreze
"Documents that write themselves"

That's the tagline for our newest feature: Town Docs. It's changed the way I write documents and emails, as it supercharges the Town Assistant and makes it trivial to pull in context from all of my work to create the perfect content. Use cases from the last few days:
- Pull together every piece of customer feedback from Slack, email, and support channels into a prioritized doc. After working on the doc for a bit, I was able to add things to Slack simply by saying "Create or bump tickets for every P1 and P2 item."
- Automatically creating an agenda for our weekly meeting, along with references to major PRs of what we shipped last week.
- Weekly review of what other AI companies are up to, which I then iterate on to figure out what we can learn from other great teams.

Check it out at town.com/features/town-…!
Tim Abbott retweeted
Max Tegmark @tegmark
OpenAI has dropped safety from its mission statement – can you spot another change? Old: "OpenAI's mission is to build general-purpose artificial intelligence (AI) that safely benefits humanity, unconstrained by a need to generate financial return. [...]" New: "OpenAI's mission is to ensure that artificial general intelligence benefits all of humanity" (IRS evidence in comments)
Tim Abbott retweeted
Future of Life Institute @FLI_org
🚨 "[@SamA] is telling me that my 3-year-old son has only two choices in life: put electrodes in his head or never get a job, become obsolete." -FLI co-founder and president @Tegmark at Florida Gov. @RonDeSantis' roundtable on AI policy earlier this week:
Tim Abbott retweeted
Guido van Rossum @gvanrossum
Great 18-minute rant on why AI making art isn’t the same as a person making art, even if the audience can’t tell whether a person or an AI made it. Also applies to the conundrum of junior vs. senior workers — if AI replaces the juniors, the seniors will eventually retire, and then what? youtu.be/mb3uK-_QkOo