elias tazartes 🥕🧑‍🌾
@ETazou
engineering @zama | formerly cofounder @kakarotzkevm
Paris · Joined May 2016
1.5K Following · 4.1K Followers
8K posts
Tibo @thsottiaux:
With Codex there is quite a gulf in load between peak and off-peak times, and we would like to achieve a smoother traffic pattern, as that would be a more optimal use of our compute. We have ideas, but curious what you all think we should do. Would incentivizing more usage during off-peak hours and a surge multiplier during peak times make sense?
733 replies · 35 reposts · 1.5K likes · 140.1K views
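(A minimal sketch of how the surge-multiplier idea from the tweet might look; the rates, hours, and thresholds below are hypothetical, not anything OpenAI has announced.)

```python
# Hypothetical peak/off-peak pricing sketch for smoothing load, in the spirit
# of the tweet. Rates, hours, and the multiplier are invented for illustration.

PEAK_HOURS = range(9, 19)      # 09:00-18:59 local time, assumed peak window
BASE_RATE = 1.0                # cost units per request off-peak
SURGE_MULTIPLIER = 1.5         # peak premium
OFF_PEAK_DISCOUNT = 0.5        # discount to pull discretionary load off-peak

def request_cost(hour: int, utilization: float) -> float:
    """Price a request by time of day, escalating further when near capacity."""
    if hour in PEAK_HOURS:
        rate = BASE_RATE * SURGE_MULTIPLIER
        if utilization > 0.9:          # extra surge when the fleet is nearly full
            rate *= 1 + (utilization - 0.9) * 5
        return rate
    return BASE_RATE * OFF_PEAK_DISCOUNT

# Example: the same request at 14:00 under 95% load vs. at 03:00
print(request_cost(14, 0.95))   # 1.875 -> peak + near-capacity surge
print(request_cost(3, 0.40))    # 0.5   -> off-peak discount
```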
elias tazartes 🥕🧑‍🌾 reposted
Rand @randhindi:
This new quantum attack on ECDSA is a big deal, and Google now says quantum computers will break Bitcoin and Ethereum by 2029. The nice thing about FHE as we use it in the @zama protocol is that it's already resistant to quantum computers, as it is based on lattice cryptography, which is widely accepted (including by NIST) to be the strongest option against quantum computers. We still need the underlying signature scheme used by blockchains to migrate to post-quantum, but at least the data you are encrypting today with Zama and putting onchain is not at risk.
23 replies · 29 reposts · 136 likes · 5.6K views
elias tazartes 🥕🧑‍🌾 reposted
Justin Drake @drakefjustin:
Today is a momentous day for quantum computing and cryptography. Two breakthrough papers just landed (links in next tweet). Both papers improve Shor's algorithm, infamous for cracking RSA and elliptic curve cryptography. The two results compound, optimising separate layers of the quantum stack. The results are shocking. I expect a narrative shift and a further R&D boost toward post-quantum cryptography.

The first paper is by Google Quantum AI. They tackle the (logical) Shor algorithm, tailoring it to crack Bitcoin and Ethereum signatures. The algorithm runs on ~1K logical qubits for the 256-bit elliptic curve secp256k1. Due to the low circuit depth, a fast superconducting computer would recover private keys in minutes. I'm grateful to have joined as a late paper co-author, in large part for the chance to interact with experts and the alpha gleaned from internal discussions.

The second paper is by a stealthy startup called Oratomic, with ex-Google and prominent Caltech faculty. Their starting point is Google's improvements to the logical quantum circuit. They then apply improvements at the physical layer, with tricks specific to neutral-atom quantum computers. The result estimates that 26,000 atomic qubits are sufficient to break 256-bit elliptic curve signatures. This would be roughly a 40x improvement in physical qubit count over the previous state of the art. On the flip side, a single Shor run would take ~10 days due to the relatively slow speed of neutral atoms.

Below are my key takeaways. As a disclaimer, I am not a quantum expert. Time is needed for the results to be properly vetted. Based on my interactions with the team, I have faith the Google Quantum AI results are conservative. The Oratomic paper is much harder for me to assess, especially because of the use of more exotic qLDPC codes. I will take it with a grain of salt until the dust settles.

→ q-day: My confidence in q-day by 2032 has shot up significantly. IMO there's at least a 10% chance that by 2032 a quantum computer recovers a secp256k1 ECDSA private key from an exposed public key. While a cryptographically-relevant quantum computer (CRQC) before 2030 still feels unlikely, now is undoubtedly the time to start preparing.

→ censorship: The Google paper uses a zero-knowledge (ZK) proof to demonstrate the algorithm's existence without leaking actual optimisations. From now on, assume state-of-the-art algorithms will be censored. There may be self-censorship for moral or commercial reasons, or because of government pressure. A blackout in academic publications would be a tell-tale sign.

→ cracking time: A superconducting quantum computer, the type Google is building, could crack keys in minutes. This is because the optimised quantum circuit is just 100M Toffoli gates, which is surprisingly shallow. (Toffoli gates are hard because they require production of so-called "magic states".) Toffoli gates would consume ~10 microseconds on a superconducting platform, totalling ~1,000 sec of Shor runtime.

→ latency optimisations: Two latency optimisations bring key cracking time to single-digit minutes. The first parallelises computation across quantum devices. The second involves feeding the pubkey to the quantum computer mid-flight, after a generic setup phase.

→ fast- and slow-clock: To a first approximation there are two families of quantum computers. The fast-clock flavour, which includes superconducting and photonic architectures, runs at roughly 100 kHz. The slow-clock flavour, which includes trapped-ion and neutral-atom architectures, runs roughly 1,000x slower (~100 Hz, or ~1 week to crack a single key).

→ qubit count: The size-optimised variant of the algorithm runs on 1,200 logical qubits. On a superconducting computer with surface code error correction that's roughly 500K physical qubits, a 400:1 physical-to-logical ratio. The surface code is conservative, assuming only four-way nearest-neighbour grid connectivity. It was demonstrated last year by Google on a real quantum computer.

→ future gains: Low-hanging fruit is still being picked, with at least one of the Google optimisations resulting from a surprisingly simple observation. Interestingly, AI was not (yet!) tasked to find optimisations. This was also the first time authors such as Craig Gidney attacked elliptic curves (as opposed to RSA). Shor logical qubit count could plausibly go under 1K soonish.

→ error correction: The physical-to-logical ratio for superconducting computers could go under 100:1. For superconducting computers that would mean ~100K physical qubits for a CRQC, two orders of magnitude away from the state of the art. Neutral-atom quantum computers are amenable to error-correcting codes other than the surface code. While much slower to run, they can bring the physical-to-logical qubit ratio down closer to 10:1.

→ Bitcoin PoW: Commercially viable Bitcoin PoW mining via Grover's algorithm is not happening any time soon. We're talking decades, possibly centuries, away. This observation should help focus the discussion on ECDSA and Schnorr. (Side note: as an unofficial Bitcoin security researcher, I still believe Bitcoin PoW is cooked due to the dwindling security budget.)

→ team quality: The folks at Google Quantum AI are the real deal. Craig Gidney (@CraigGidney) is arguably the world's top quantum circuit optimisooor. Just last year he squeezed 10x out of Shor for RSA, bringing the physical qubit count down from 10M to 1M. Special thanks to the Google team for patiently answering all my newb questions with detailed, fact-based answers. I was expecting some hype, but found none.
318 replies · 1.2K reposts · 5.8K likes · 1.5M views
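(A back-of-the-envelope check of the thread's arithmetic; the constants below are the tweet's round numbers, not figures taken from the papers themselves.)

```python
# Back-of-the-envelope arithmetic from the figures quoted in the thread.
# All constants are the tweet's round numbers, not values from the papers.

toffoli_gates = 100e6          # optimised Shor circuit depth (~100M Toffoli gates)
toffoli_time_s = 10e-6         # ~10 microseconds per Toffoli on superconducting hardware

runtime_s = toffoli_gates * toffoli_time_s
print(f"superconducting runtime: ~{runtime_s:.0f} s (~{runtime_s / 60:.0f} min)")
# -> ~1,000 s, i.e. the "crack keys in minutes" claim

logical_qubits = 1200          # size-optimised variant of the algorithm
surface_code_ratio = 400       # physical-to-logical, four-way grid connectivity
print(f"physical qubits: ~{logical_qubits * surface_code_ratio:,}")
# -> 480,000, i.e. the "roughly 500K physical qubits" claim

slowdown = 1000                # slow-clock (trapped ion / neutral atom) vs fast-clock
slow_runtime_days = runtime_s * slowdown / 86400
print(f"slow-clock runtime: ~{slow_runtime_days:.0f} days")
# -> ~12 days, the same order of magnitude as the "~10 days" / "~1 week" figures
```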
elias tazartes 🥕🧑‍🌾 reposted
Calc 🇫🇷 @Calcutat:
🚨 SURVIVOR CARDS ARE LIVE 🚨 Get yours now: SURVIVORCARDS.XYZ TIRED OF WATCHING YOUR COINS GO DOWN? GOOD NEWS THIS IS NOT A TOKEN.
[image]
18 replies · 20 reposts · 67 likes · 7.4K views
elias tazartes 🥕🧑‍🌾 @ETazou:
my biggest issue with AI right now is ending up with a PR that's easy for colleagues to review
1 reply · 0 reposts · 5 likes · 231 views
elias tazartes 🥕🧑‍🌾 reposted
Sisyphus Labs @justsisyphus:
Dear Ultraworkers, GPT 5.4 is a truly amazing model, and Sisyphus can now be powered by GPT 5.4. Finally we have GPT + Sisyphus = GPTPhus: he's got the spirit of Sisyphus, but with the powers of Hephaestus. This is our first release as oh-my-openagent. We used to build oh-my-opencode, but now we are building oh-my-openagent. github.com/code-yeongyu/o…
[image]
9 replies · 8 reposts · 148 likes · 8.8K views
elias tazartes 🥕🧑‍🌾 reposted
banteg @banteg:
so far good vibes from gpt 5.4, it got back its holistic understanding from 5.2, after a dip into being an infinite code monkey in 5.3-codex.
6 replies · 2 reposts · 91 likes · 5.4K views
harshbajpai @bajpaiharsh244:
Harsh Bajpai is now 24 years old! Which is an interesting feeling tbh …..
33 replies · 0 reposts · 62 likes · 2.9K views
elias tazartes 🥕🧑‍🌾 reposted
Peter Steinberger 🦞 @steipete:
it's a good model. the coding-specific jump is more in line with what we had from 5.0 to 5.1; but it's now unified and smarter on everything else, writes better docs, is a better general-purpose agent, and is overall more pleasant to use.
Quoting OpenAI @OpenAI:
GPT-5.4 Thinking and GPT-5.4 Pro are rolling out now in ChatGPT. GPT-5.4 is also now available in the API and Codex. GPT-5.4 brings our advances in reasoning, coding, and agentic workflows into one frontier model.
272 replies · 162 reposts · 3.8K likes · 410.6K views
elias tazartes 🥕🧑‍🌾 reposted
Julien B. @bneiluj:
Anthropic’s analysis of AI’s impact on the labor market is insane.
[image]
14 replies · 6 reposts · 30 likes · 5.5K views
Q @q_yeon_gyu_kim:
wow, gpt 5.4 is soooo good. it feels a bit different from opus obviously, but it could be @justsisyphus on oh-my-opencode. first time ever: gpt sisyphus. experimenting now. very good model @OpenAI
5 replies · 0 reposts · 39 likes · 2.9K views
elias tazartes 🥕🧑‍🌾 reposted
Rhys @RhysSullivan:
easily the best ai coding tip i have is to know good repos and tell your agent to clone them into /tmp/ to learn from them
[image]
75 replies · 19 reposts · 1K likes · 63.8K views
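(A minimal sketch of what that tip looks like in practice; the repo list, paths, and prompt wording are illustrative, not any specific agent's API.)

```python
import subprocess
from pathlib import Path

# Illustrative list of well-written repos to learn patterns from (hypothetical picks).
REFERENCE_REPOS = [
    "https://github.com/pallets/flask",
    "https://github.com/psf/requests",
]

def clone_references(base: Path = Path("/tmp/reference-repos")) -> list[Path]:
    """Shallow-clone reference repos into /tmp so a coding agent can read them."""
    base.mkdir(parents=True, exist_ok=True)
    paths = []
    for url in REFERENCE_REPOS:
        dest = base / url.rstrip("/").split("/")[-1]
        if not dest.exists():
            subprocess.run(["git", "clone", "--depth", "1", url, str(dest)], check=True)
        paths.append(dest)
    return paths

if __name__ == "__main__":
    repos = clone_references()
    # The agent prompt would then point at the checkouts, e.g.:
    # "Before writing code, read the project layout and error-handling style
    #  in /tmp/reference-repos/flask and mirror it."
    print("cloned:", *repos, sep="\n  ")
```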
elias tazartes 🥕🧑‍🌾 reposted
loaf @lordOfAFew:
If you're still unsure about the deflationary impact that's already here, you must read this.

I've got a friend who runs a small architecture drafting company. He's semi-technical. Knows his way around computers. Uses Claude in the browser. Ahead of most people but still drowning in admin: councils, proposals, time tracking, local compliance, drafting, client back-and-forth. A ton of repetitive, deterministic work.

We caught up. I got him set up with an @openclaw-style workflow. Showed him how to hook it into his CRM and automate a pipeline that normally took 30 minutes per job. It took him about 2 hours of tinkering.

Result? That 30-minute task is now fully automated. He's saving ~8 hours a week. From one workflow.

Fast forward to today (1 week later): he's automated almost the entire backend of his business.
- Lead intake
- Website optimisation
- Proposal generation
- Client workflows

Now he's looking at plugging into MCP servers and 3D mapping tools to generate end-to-end client proposals from a few images in minutes. Minutes.

Here's the punchline: service businesses price based on hours billed. If AI reduces the hours required by 4x, margins explode. Or, and this is where it gets interesting, you drop prices. You undercut competitors. You win more work. You compress industry pricing. That's deflation in action.

He probably has a 6-month edge before competitors catch up. And when they do? The marginal cost of this service drops for everyone. What used to take days becomes hours. What used to take hours becomes minutes.

One friend. One small business. Entire backend transformed in weeks. Multiply this across thousands of service industries in the next 12–24 months.

That's the shift. The repricing of everything has begun. Wild.
5 replies · 5 reposts · 53 likes · 3.4K views
Dark Bio @dark_dot_bio:
First production devices, heading to Basel ❤️🇨🇭🤍
[image]
6 replies · 3 reposts · 61 likes · 8.7K views
fucory @FUCORY:
I try to give to everybody in crypto and tech because it's not a zero-sum game and helping others feels good. Many give back. Some won't even return the bare minimum level of support. I'm cutting those people off so I can double down on the people who are truly on my team.
6 replies · 0 reposts · 42 likes · 1K views
Péter Szilágyi @peter_szilagyi:
Me: Hey Claude, your diff fails the build.
Claude: Yes, it's missing an import, let me just hard-code the string.
Me: Dafuq, why not add the import?
Claude: "Fair enough. Let me add the import instead."
Like, WAT
9 replies · 1 repost · 49 likes · 6.6K views
elias tazartes 🥕🧑‍🌾 reposted
⟠Palis⟠🐍 @palis:
Different perspective on Vitalik and his frequent posting recently.

Many of the smartest people in AI fear "AGI" and fear the inability of humanity to adapt within the timelines presented by accelerating AI. This doesn't seem to be something anyone has argued with; the argument is that it's beyond anyone's control, someone will be working to achieve it asap, and deceleration doesn't work.

Everyone is clowning on Vitalik for sharing his thinking process while wrestling with complex competing issues on the topic, when he openly admits all the time to changing his views based on new information, often shares thoughts he holds with low conviction (being "open-minded" to a different perspective in this case), or hasn't formed a mature opinion yet.

I don't see this as a sign of weakness or low intelligence but as a rare vulnerability and type of humility that very few smart people are willing to engage in. Most people conceal this process and reserve themselves to public statements said in confidence of their understanding. Most people speak on topics with absolutism after reaching their conclusion and are often rigid once they've shared it.

I disagree with Vitalik on many posts. I am also aware that, like him, I'm one man with a limited understanding, and I am inevitably wrong about many things that I believe. I arrive at ideas all the time which in hindsight were obviously wrong, and I go through the process of refining them alone using publicly accessible info and the opinions of others.

With a friend sitting around a campfire who's sharing an idea like this, you wouldn't verbally assault him. You'd share your perspective and possibly correct him.

Sharing one's thought process on complex, tough questions daily, publicly, to receive backlash and correction, is a sign of a thoughtful and wise thinker imo, no matter how wrong any one idea turns out to be. It is also refreshing in contrast to the saturation of algorithm-aligned confident preaching that aura-farms engagement and intellectual respect over discovery and refinement.
Quoting vitalik.eth @VitalikButerin:
I'm actually pretty open-minded about the anti-data-center populism. From everything I've seen from people working on this, reducing industrial-scale hardware availability seems to be both the most practical, and most non-dystopian / non-invasive way to lengthen AGI timelines. So if the movement that makes that happen starts out with anti-data-center populism, that seems fine? Of course you have to do things other than going after data centers located in populated areas to really make a dent on AGI timelines (my intuition is that 10-100x compute reduction is feasible in a "static" model of the world, and 100-10000x if you compare to a counterfactual that includes future chip design progress; those numbers *would* make a dent), but there is a first step for everything.
22 replies · 7 reposts · 116 likes · 17.5K views