Austin Tackaberry

969 posts

@AETackaberry

Founder ShareCal | prev @databricks @uber

San Francisco, CA · Joined June 2017
1.2K Following · 699 Followers
Ankur Goyal@ankrgyl·
I've noticed that models are easily persuaded that a human programmer's theories are right—e.g., if I suspect something is buggy, the model will quickly agree and start generating conspiracy theories for why. This is dangerous because programmers feel like they have evidence without the burden of objective proof.

In "old school" programming, you prove your claims with a test—a reproducible demonstration of the alleged behavior. The process is illuminating: more often than not, you discover the underlying issue, or that you're straight up wrong.

As we graduate from handwriting code, it's critical to preserve objectivity. A rule of thumb at Braintrust: human↔human discussion among programmers cannot include LLM-generated explanations. Use LLMs to debug and investigate, but when you rope in someone else, you should share objective proof.

I'm curious how software tools (e.g., coding agents) will evolve to enforce this. The best AI-supercharged programmers I know artfully create verification loops for their agents—essentially performing the repro task already. I wonder what a more opinionated, productized version of this could look like for sharing information with others.
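To make "prove your claims with a test" concrete: here is a minimal, hedged sketch of a reproducible demonstration of a suspected bug. The `slugify` function and its double-hyphen bug are invented stand-ins for illustration only; nothing here comes from Braintrust or the tweet.

```python
# Hypothetical repro script: instead of sharing an LLM's theory about
# a bug, share a small script that demonstrates the alleged behavior.
# `slugify` and its bug are invented for illustration.

def slugify(title: str) -> str:
    # Suspected-buggy implementation: replaces every space, so a run
    # of spaces becomes a run of hyphens.
    return title.lower().replace(" ", "-")

# Reproducible demonstration of the claim
# "double spaces yield double hyphens":
repro_input = "Hello  World"
actual = slugify(repro_input)
expected = "hello-world"
bug_confirmed = actual != expected  # actual is "hello--world"
```

Either the script reproduces the behavior (objective proof you can hand to a teammate) or it doesn't, in which case the theory was wrong—no speculation required either way.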
13
2
45
4.5K
Austin Tackaberry@AETackaberry·
@chrisman new hire first oncall this week, TIL he is apparently trying the "cry it out" strategy with pagerduty
1
0
2
1.9K
Chrisman@chrisman·
We tried cry it out with my oldest. After 5 minutes my wife said “anything that feels this bad can’t possibly be good for the baby.” And that was the end of that.
Dr Danish@operationdanish

What if the "Cry It Out" sleep training (aka extinction-based sleep training) has contributed to mental health issues in young people? In some ways, it's the most insane thing to do to a child (and is based on incredibly poor science). For centuries, families co-slept without issues, but in modern times it has become increasingly taboo… why?

How can repeated emotional non-response to a baby be healthy? What does it do to their stress calibration, attachment expectations, and self-regulation? How does it play out in their long-term relationships and social connections?

I've read the studies and they are poorly designed and weakly supported. Yet we have an entire generation of parents that blindly follow this insane protocol without reviewing the data themselves. To be fair, the data supporting co-sleeping is weak as well, but it has centuries of precedent, so I feel much more comfortable supporting that than a new approach that was largely instituted since the 1920s.

For some context: in the 20th century, behaviorist John Watson (1928), interested in making psychology a hard science, took up the crusade against affection as president of the American Psychological Association. He applied the paradigm of behaviorism to childrearing, warning about the dangers of "too much mother love". The 20th century was the time when "science" was assumed to know better than mothers, grandmothers, and families about how to raise a child. Too much kindness to a baby would result in a whiny, dependent, failed human being. A government pamphlet from the time recommended that "mothering meant holding the baby quietly, in tranquility-inducing positions" and that "the mother should stop immediately if her arms feel tired" because "the baby is never to inconvenience the adult." A baby older than six months "should be taught to sit silently in the crib; otherwise, he might need to be constantly watched and entertained by the mother, a serious waste of time."

The truth is the opposite. We now know that ignoring a child raises cortisol levels and hurts trust and attachment. Yet every young parent I know today has been brainwashed to let their child cry in silence. It's truly wild.

413
566
19.4K
1.9M
Austin Tackaberry@AETackaberry·
why is everyone suddenly freaking out about Opus 4.5. how can you say this is the moment SWE changes forever when it still adds code comments like a drunken sailor
0
0
3
118
Neddy@restocc·
had to cop this shit to celebrate my last day working in the metaverse
Neddy tweet media
8
0
35
2.4K
Austin Tackaberry@AETackaberry·
If you think you can't add another AI PR reviewer, you're wrong. You can always add another AI PR reviewer. And another. And another. There's no limit of AI PR reviewers you can add to your PR.
0
0
3
73
Austin Tackaberry@AETackaberry·
Really impressed with all the new Gmail features coming out lately, but I gotta say... The embedded Google Translate suggestions in my email don't seem to be working very well.
Austin Tackaberry tweet media
0
0
2
124
Mitchell Hashimoto@mitchellh·
@donvito @AmpCode Not at all. I require AI disclosure but said in the same breath that I love and use AI myself. I don’t even know how people came to the conclusion I dislike AI. Ironically, people are hallucinating lol.
2
0
21
1.5K
Mitchell Hashimoto@mitchellh·
My favorite part about @AmpCode is that you can share your whole session globally. PRs with Amp threads attached make me very, very happy as a maintainer. here is one from a bug fix this morning: ampcode.com/threads/T-24dc…
8
17
286
87.9K
darren@darrenjr·
someone offered me a job because they saw me on the wakatime leaderboard

i didnt even know wakatime had a leaderboard
3
1
14
927
Robert Eng@rengrenghello·
NEW: Customer Count for Teams Channels

You can now automatically check how many customers are in your Microsoft Teams channels 🧵
Robert Eng tweet media
3
1
0
363
Juan@JuanRezzio·
What do you guys think about Plan Mode so far? How can we make this better? I'm curious to see how you are using this
Juan tweet media
360
16
1K
155.4K
Austin Tackaberry@AETackaberry·
@ericzakariasson "Everything begins with the prompt." If two years ago you had predicted this would be the catchphrase of 2025, you would have been ridiculed endlessly
1
0
0
649
eric zakariasson@ericzakariasson·
How to get the most out of AI coding tools, from what I've seen working for myself and with other people.

Topics:
- Prompting
- Context
- Self verification
- Models (and selection)
- Background Agents
- Parallel Agents

Prompting

Everything begins with the prompt. This is the primary way you shape how an AI agent behaves. Prompting is how you communicate intent, share your mental model, and provide the state of the world as you see it. A good prompt does not just ask for an outcome; it frames the problem clearly and gives the agent enough signal to proceed in the right direction.

When writing prompts, consider the environment the agent is operating in. Does it have access to tools? Can it perform actions to gather additional information? If not, you are responsible for frontloading that context. For example, if an agent cannot run tests on its own, you will need to manually execute them and provide the results. But if the agent does have terminal access, it can handle the entire loop by itself, including running and iterating on test cases.

Take the function add(a, b) as a simple case. If tool access is unavailable, your prompt might be, "Write a test for a function add(a, b) that sums two positive numbers." You would then manually run it and share the results. If tool access is available, you can say, "Write a test for add(a, b) that sums two positive numbers and run the test. Iterate until it passes." This small change in environment leads to a significant change in how you prompt and how much work you offload to the agent. This principle applies to many environments, such as access to CI logs or browser runtimes. The more context the agent can independently gather, the more useful and capable it becomes.

Context

Every new agent session begins with minimal memory. Think of it as onboarding a new engineer to your team. If you want them to deliver high-quality work, you need to provide the right background. When you give an agent clear instructions, workflows, and edge cases, it behaves more autonomously and makes fewer mistakes. Providing proper context reduces back-and-forth interactions and avoids unnecessary errors. For example, if your team always runs a linter or compiles code after changes, include that expectation in your prompt. This allows the agent to verify its own work and prevents small issues from slipping through.

Your prompts should vary based on the size of the task. For small tasks, a brief prompt such as "verify tests and fix failures" may be sufficient, especially if the surrounding code gives the agent enough to reason about. For larger efforts, more detailed prompts are necessary. These often take the form of structured plans that outline goals, steps, and constraints. Creating and reviewing such a plan with the agent can help establish a clear path before execution, allowing the agent to work more independently once the scope is aligned.

Self verification

Allowing agents to verify their own work is one of the most effective ways to improve quality and reduce manual oversight. The simplest form of this is unit tests, but it can also include compiling code, running integration tests in the browser, or checking lint rules. If the agent has access to these tools, you should instruct it to use them regularly.

It is also valuable to formalize the techniques that work well. Define rules that capture recurring expectations or common edge cases. These act as behavioral guidelines the agent can consistently follow. You can also define commands that automate routine flows. For example, a command like /add-new-service can set up boilerplate and provide instructions for adding a new microservice. By turning proven strategies into reusable prompts, you create a system that gets smarter and faster over time.

Models (and how to select)

Choosing the right model plays a big role in shaping the development experience. There are generally two categories to consider: one is slower but more intelligent, the other is faster but slightly less capable. I used to favor the more intelligent models, but I have found more satisfaction in faster iteration cycles. This insight is closely related to something I wrote about in my post on avoiding the copilot pause (anyblockers.com/posts/avoid-th…). The moment you lose momentum because a model is taking too long, your attention takes a hit. This is why Tab feels so powerful: it keeps you in flow by offering rapid, context-aware completions without the wait.

If you do use a slower model, the key is to let it run in the background with detailed instructions. You need to give it a solid plan, clear validation steps, and enough autonomy so it can make progress without constant input. Once the agent has enough information, it can work uninterrupted for long stretches and return with higher-quality results.

Background Agents

Once you master detailed prompting, you can begin treating agents as background workers. This is where things become truly scalable. You provide the objective, describe the environment, explain how the agent should verify its work, and let it execute on its own. This frees up your time to focus on higher-level decisions while the agent completes a scoped, well-defined task in parallel. The better your planning, the less hand-holding the agent requires. With strong prompts and scoped autonomy, the agent becomes a reliable part of your workflow instead of a tool that needs constant attention.

Parallel Agents

Running multiple agents at the same time is a powerful way to accelerate your output. However, it introduces a new challenge: context switching. You now have to track several threads of work in your head, just as you would with human teammates. My personal sweet spot is running between one and three agents in parallel. This gives me a strong boost in throughput while still allowing me to keep track of what each one is doing.

To make this work, each agent needs to be scoped to a clearly defined, non-overlapping task. If responsibilities are unclear or shared across agents, you will run into the same coordination issues you would with a team of engineers. Isolating work streams allows agents to operate independently and prevents conflicts. With good scoping, you get the benefits of parallelism without the overhead of micromanagement. This can be done both remotely and locally!

---

There's probably a lot missed here, but it's been top of mind lately.
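The add(a, b) case from the thread above can be sketched concretely. This is a hedged illustration of what an agent might write for the prompt "Write a test for add(a, b) that sums two positive numbers and run the test. Iterate until it passes"; the test name and structure are assumptions, not anything from the thread.

```python
# Hypothetical sketch of the self-verification loop for add(a, b):
# the agent writes the function and a test, then runs the test itself
# (e.g., via pytest, if it has terminal access) and iterates on failures.

def add(a: int, b: int) -> int:
    """Sum two numbers."""
    return a + b

def test_add_sums_two_positive_numbers():
    # The test is the verification step the agent iterates against.
    assert add(2, 3) == 5
    assert add(10, 7) == 17

# An agent with terminal access runs this itself; without tool access,
# you run it manually and paste the results back into the conversation.
test_add_sums_two_positive_numbers()
```

The difference between the two prompts in the thread is exactly who executes the last line: the agent (closing the loop on its own) or you (frontloading the results into the next prompt).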
26
48
610
57.2K
Austin Tackaberry@AETackaberry·
. @rauchg I'm choosing a new Macbook Pro. Which option should I choose?
Austin Tackaberry tweet media
0
0
1
127
Austin Tackaberry@AETackaberry·
ya know when you put it in quotes like that, it does seem a lil sus
Austin Tackaberry tweet media
0
0
2
106