Anthony Accomazzo

413 posts


@accomazzo

engineering @notionhq | prev founder @sequinstream (acq by Notion)

San Francisco, CA · Joined June 2008
275 Following · 1.3K Followers
Pinned Tweet
Anthony Accomazzo @accomazzo
My fiancé and I have kept our relationship humming with a simple morning ritual. Every morning over coffee, we ask each other these three magic questions: 1. Are your CPUs saturated? 2. How's back-pressure? 3. Is your I/O optimally batched?
4
1
43
2.6K
ℏεsam @Hesamation
it’s staring at a wall, guys. he only stared at a wall 10-20 mins/day and limited unnecessary screen time, and the results were noticeable:
> easier maintaining focus
> more flow state and creativity
> mental clarity and presence
seems like all you need to do is just do nothing.
20
122
1.9K
31.1K
Anthony Accomazzo @accomazzo
@pkayokay It is a novel way of organizing programs. It takes seriously the fact that most programs need to go horizontal sooner or later. Concurrency primitives in other languages will feel bolted-on in comparison. (Because they often are!)
0
0
6
321
Paul Kim @pkayokay
I'm happy for the Elixir community with the recent hype, though a lot of times I don't know what half of you all are referring to when you talk about it (processes, BEAM, messaging, what?). I'm not a technologist and don't care much as long as the tech enables me to build products for end users with good DX and good enough performance. Maybe that's why I landed on Rails.

Though for some time, while I was working at a high-scale Rails startup, I became frustrated at some of the scaling challenges we faced so early. That's when I heard about Elixir/Phoenix. I wanted to learn it to build more performant apps, but there are a lot more factors than that when choosing a stack... Tbh, I barely scratched the surface in the last year.

The recent hype is encouraging me to look again though. I'm not going to learn Elixir because I see it as more performant, but will learn because I'm genuinely curious.
6
2
57
4.8K
Anthony Accomazzo @accomazzo
Yes. There’s a reason you so rarely see the word “actor” in the Erlang/Elixir communities. The deeper, more general abstraction is the BEAM’s preemptive, reduction-based scheduler. Then you layer on processes with isolated memory. *Then* inter-process communication.
Zach Daniel @ZachSDaniel1

A common reductive take on what actually makes the BEAM good. People do not understand that porting the API of an actor model is not even remotely the same thing as adopting the properties of the BEAM into another lang.

2
3
82
26.8K
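The layering described above (isolated memory first, then message passing) can be sketched as a loose analogy in Python. This is not the BEAM and has none of its scheduling properties; it only illustrates the shape of processes that share nothing and communicate solely through messages. The `worker`/`run_pipeline` names and the doubling "work" are illustrative inventions.

```python
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue) -> None:
    # Receive messages until a None sentinel arrives; reply with results.
    # The worker's memory is isolated: it can only be reached via messages.
    while True:
        msg = inbox.get()
        if msg is None:
            break
        outbox.put(msg * 2)  # stand-in for real work

def run_pipeline(items):
    # Spawn an isolated worker process and talk to it only through queues.
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    for n in items:
        inbox.put(n)
    results = [outbox.get() for _ in range(len(items))]
    inbox.put(None)  # sentinel: ask the worker to exit
    p.join()
    return results

if __name__ == "__main__":
    print(run_pipeline([1, 2, 3]))  # [2, 4, 6]
```

The contrast with threads-plus-locks is the point: with no shared memory there is nothing to lock, which is the property that tends to feel bolted-on when retrofitted into other languages.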
Anthony Accomazzo @accomazzo
LLMs are so good, even AGI will find them useful.
0
0
1
214
Anthony Accomazzo @accomazzo
AGI will not emerge from scaling LLMs any more than computers emerged from bigger power grids.
0
1
0
198
Anthony Accomazzo @accomazzo
We don’t understand consciousness well enough to know for sure. But consider computational universality: a system either is a universal computer or it isn’t. That threshold is *discrete*, not analog.

I think people too quickly assume that making a machine more humanlike, or giving it more knowledge, somehow turns up consciousness by degrees. But we have no particular reason to think a cellphone is more conscious than a Furby, or that Excel is more conscious than VisiCalc.

Babbage wouldn’t have gotten a universal computer by simply adding more functions to his difference engine. It took Turing discovering the principles and engineers implementing them.

We can now definitively determine which systems are universal computers and which aren’t. And we can establish this through understanding alone, *without empirical testing*. I can determine whether a system is a universal computer *without even using it*. It’s just a matter of understanding whether it was implemented with the correct characteristics.

(Turing knew this. His “test” was nothing of the sort. It was a thought experiment in service of another argument.)

We have every reason to believe AGI will come about the same way, and that once we discover it, its principles will be explicable to us. (I don’t think it will “just emerge”. But even if it did, you could still figure out its properties by discovering which of them are invariable.)

Furthermore, a universal computer — like other complex physical systems — is hard to vary. We only know of one such kind. And thinking of building a system that has similar properties but a different implementation feels nonsensical. For general intelligence too, we only know of one such kind. We should expect that the properties of our general intelligence are also hard to vary. That is, changing one property does not produce a different kind of general intelligence; it strips the system of its general intelligence entirely.
That’s why the idea that we’ll create another system with all the same extraordinary capacity but some key differences strikes me as bizarre. “It’s the same, but it’s fixated on paperclip maximizing!” Universal laws do not behave that way. 95% of the properties of the system is equivalent to none of them. Change the core principles of evolution and you don’t get a “weaker” or “stronger” evolution; you don’t get evolution at all.

We don’t know the key ingredients behind our intelligence. But it seems very likely that properties such as motivation and curiosity are fundamental. And it’s difficult to see how, e.g., motivation arises without qualia. Obviously, LLMs completely lack motivation (hence the prompt). They also show zero signs of curiosity. The way they learn is basically the complete opposite of how we do.
gfodor.id @gfodor

Expect a lot of people telling you they know the machines are conscious or that the machines are not conscious. Nobody knows, or probably ever will. We are all going to have to decide which machines to believe when they tell us they are. It’s that simple.

0
0
1
239
Anthony Accomazzo @accomazzo
This is a silly test. What if the AGI is completely uninterested in physics? And math, for that matter. Perhaps art is more its thing. Or maybe it’s not interested in much at all and has decided to be a hermit.
Rohan Paul @rohanpaul_ai

Demis Hassabis’s “Einstein test” for defining AGI: Train a model on all human knowledge but cut it off at 1911, then see if it can independently discover general relativity (as Einstein did by 1915); if yes, it’s AGI.

0
0
1
350
Anthony Accomazzo retweeted
meatball times @meatballtimes
software engineering is absolutely not being automated right now. I barely ever code any more but I have never been busier. now that coding is 80% automated, the limiting factor is my ability to design, comprehend, and safely change systems. it's insanely exciting
109
186
3.6K
103.8K
Zach Tratar @zachtratar
Every Tuesday for the next 10 weeks, we’ll be announcing new improvements to Notion AI Meeting Notes! Buckle up. Momentum feels great & the team is working hard. 🤠 - New features - Speed & reliability improvements - Behind-the-scenes info Let the ships begin... tomorrow 9am!
26
12
209
30.8K
Anthony Accomazzo @accomazzo
San Francisco built one new market-rate residential building last year (Quincy). In 2026, it will build zero.
0
1
1
525
Anthony Accomazzo @accomazzo
@filipcodes Yes, but a function call is a far simpler interface than an RPC call! Why abstract with a service what you can abstract with a module?
1
0
0
17
filipcodes @filipcodes
@accomazzo unsure. i would have agreed with you maybe 6 months ago. with AI in the picture, tradeoffs change. there's a significant benefit to hiding complexity in boxes behind interfaces, and a lot of the issues with microservices are boilerplate in nature (which AI solves).
1
0
0
14
Anthony Accomazzo @accomazzo
Breaking up a monolith into services is technical debt.

You extract a service so a team can move faster in the short run: fast test suite, fast CI, deploy 30 times a day, iterate quickly on their subsystem. The debt is that what were function calls are now RPC calls over a network. And the benefit of deploying separately is also the drawback of deploying separately. Two sides of the same coin.

The key insight is why teams actually do this. It's not about scaling or separate deployments. The real reason is developer contention: too many people touching the same codebase at the same time. It's not just merge conflicts and merge queues. It's a slower deploy cycle, as everyone's code has to go out at once. It's a longer CI, as the world needs to be tested on every push. These are all coordination costs, not systems costs.

Here's the thing I keep turning over: AI is accelerating this pressure in a way that's new.

DoorDash presents a classic case study: the pandemic hits, they hire like crazy, everyone's crowding into the same monolithic Python app, strangling each other. Development speed slows, existentially. So they extract services as a fast path to re-acceleration.

The new version of this is AI-driven: we're not adding engineers, we're increasing throughput per engineer. The effect on the monolith is almost the same, though. More PRs in flight, more merge conflicts, more CI runs competing, more chances to hit a flaky test. In some ways it's worse than the headcount version, because AI productivity can spike overnight when a new model drops (Opus 4.6 Fast 👀), and the codebase doesn't elastically scale with your output.

I'd expect this to move the break-apart threshold earlier. Where previously a team may not have felt the pressure to decompose until they were past 1,000 engineers, they may feel it at 100.

It's interesting to think that AI might also lower the cost on the debt side. If AI makes it easier to scaffold, maintain, and operate services — all that boilerplate, health checks, client libraries, observability — then maybe the interest rate on that technical debt is dropping. The debt is still real, but it's cheaper to service.

So AI is squeezing from both sides: more reason to decompose, less cost to do it.
2
0
8
802
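The "function calls become RPC calls" debt in the thread above can be made concrete with a small sketch. This is a hypothetical illustration, not anything from the thread: the pricing domain, `quote_price` names, and the `/quote` endpoint are all invented for the example.

```python
import json
import urllib.request

# In the monolith: a plain function call. The only failure mode is a bug.
def quote_price(item_id: str, qty: int) -> int:
    unit_price_cents = {"widget": 250}.get(item_id, 0)
    return unit_price_cents * qty

# After extraction: the same logical interface, now an RPC over the network.
# The caller inherits serialization, timeouts, retries, and partial failure.
def quote_price_rpc(base_url: str, item_id: str, qty: int,
                    timeout: float = 2.0) -> int:
    payload = json.dumps({"item_id": item_id, "qty": qty}).encode()
    req = urllib.request.Request(f"{base_url}/quote", data=payload,
                                 headers={"Content-Type": "application/json"})
    # Can raise URLError, hit the timeout, or get a 5xx: all new failure
    # modes that the in-process version simply did not have.
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["price"]

if __name__ == "__main__":
    print(quote_price("widget", 3))  # 750
```

The two callables have the same signature in spirit, which is the coin's two sides: independent deployment of the pricing logic, paid for with every distributed-systems concern the extra parameters hint at.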
Anthony Accomazzo @accomazzo
Often, services are broken up for the wrong reason. "Optimizing architecture" is always the wrong reason, because a monolith is optimal. The right reason is developer contention/velocity. When solving for anything other than coordination costs, you just get the debt without the benefit.

A distributed monolith is almost inevitable. Services that share a domain will always be coupled. That's not a failure; it's the tradeoff a team accepted. Trying to fully decouple them costs enough to negate the velocity gains you broke them apart for.
1
0
1
33
filipcodes @filipcodes
@accomazzo most monoliths I've seen are distributed monoliths - especially in node.js land. worst of both worlds.
1
0
0
40
Anthony Accomazzo @accomazzo
@geoffreylitt Yes! As of switching on Opus 4.6 fast last week, I *acutely* feel my ability to review/make decisions as the bottleneck.
1
0
0
72
Geoffrey Litt @geoffreylitt
In fact, in the limit we might expect AI to reduce parallelism? If every code change can be made ~instantly and our job as people is to inject taste, then a coding session should be a steady stream of hard decisions. Already feel this a bit when Opus 4.6 fast interviews me for a spec
4
0
19
1.4K