Godefroy

4.3K posts

Godefroy
@Godefroy

⚗️ https://t.co/tOpWxdFUBN 🔗 https://t.co/w3lgEmU2ul 🎙️ https://t.co/4BqO5w71CI 🎤 https://t.co/SnSKypHSLh (open source)

Joined March 2008
813 Following · 1.2K Followers
Godefroy
Godefroy@Godefroy·
if you still think the "ai bubble" is gonna pop, just go bet on polymarket. you’ll 5x your money if you’re right by end of year. i wouldn’t touch it though. not financial advice obviously
0
0
0
20
Godefroy retweeted
Brian Scanlan
Brian Scanlan@brian_scanlan·
We've been building an internal Claude Code plugin system at Intercom with 13 plugins, 100+ skills, and hooks that turn Claude into a full-stack engineering platform. Lots done, more to do. Here's a thread of some highlights.
84
199
3K
799.6K
Godefroy retweeted
Jeff Weinstein
Jeff Weinstein@jeff_weinstein·
Introducing the Machine Payments Protocol (MPP). mpp.dev: an open protocol for machine-to-machine payments, co-authored by @tempo and @stripe. Watch it in agentic action ⤵️
57
105
855
190.1K
Godefroy
Godefroy@Godefroy·
@denohawari Are you using multiple Reddit accounts? Are they warmed up?
0
0
0
121
Godefroy
Godefroy@Godefroy·
Gradium text-to-speech integration is now available in Micdrop.dev, which lets you build entirely French voice AI 🇫🇷 when used with Gladia and Mistral.
2
2
6
1.5K
Godefroy retweeted
Christian Catalini
Christian Catalini@ccatalini·
1/ Some Simple Economics of AGI—🔥🧵 Right now, there is a low-grade panic running through the economy. Everyone is asking the same anxious question: what exactly is AI going to automate, and what will be left for us?
144
374
2K
601.1K
Godefroy retweeted
Aakash Gupta
Aakash Gupta@aakashgupta·
The math on this project should mass-humble every AI lab on the planet.

1 cubic millimeter. One-millionth of a human brain. Harvard and Google spent 10 years mapping it. The imaging alone took 326 days. They sliced the tissue into 5,000 wafers, each 30 nanometers thick, ran them through a $6 million electron microscope, then needed Google's ML models to stitch the 3D reconstruction because no human team could process the output.

The result: 57,000 cells, 150 million synapses, 230 millimeters of blood vessels, compressed into 1.4 petabytes of raw data. For context, 1.4 petabytes is roughly 1.4 million gigabytes. From a speck smaller than a grain of rice.

Now scale that. The full human brain is one million times larger. Mapping the whole thing at this resolution would produce approximately 1.4 zettabytes of data. That's roughly equal to all the data generated on Earth in a single year. The storage alone would cost an estimated $50 billion and require a 140-acre data center, which would make it the largest on the planet.

And they found things textbooks don't contain. One neuron had over 5,000 connection points. Some axons had coiled themselves into tight whorls for completely unknown reasons. Pairs of cell clusters grew in mirror images of each other. Jeff Lichtman, the Harvard lead, said there's "a chasm between what we already know and what we need to know."

This is why the next step isn't a human brain. It's a mouse hippocampus, 10 cubic millimeters, over the next five years. Because even a mouse brain is 1,000x larger than what they just mapped, and the full mouse connectome is the proof of concept before anyone attempts the human one.

We're building AI systems that loosely mimic neural networks while still unable to fully read the wiring diagram of a single cubic millimeter of the thing we're trying to imitate. The original is 1.4 petabytes per millionth of its volume. Every AI model on Earth fits in a fraction of that.

The brain runs on 20 watts and fits in your skull. The data center required to merely describe one-millionth of it would span 140 acres.
All day Astronomy@forallcurious

🚨: Scientists mapped 1 mm³ of a human brain ─ less than a grain of rice ─ and a microscopic cosmos appeared.

1.2K
12.1K
64.4K
4.6M
Godefroy retweeted
John Rush
John Rush@johnrushx·
Expect this:
> LLMs gonna go from slow to near instant
> Context window will be infinite
> LLMs gonna top every eval & benchmark, beating humans in literally everything that can be measured
> LLMs will learn & fine-tune on the fly, instead of a full new training run as now
Taalas Inc.@taalas_inc

24 dedicated people. $30M spent on development. Extreme specialization, speed, and power efficiency. Today we launch Taalas’ first product. Check it out: Details: taalas.com/the-path-to-ub… Demo chatbot: chatjimmy.ai API: taalas.com/api-request-fo…

26
5
92
16K
Godefroy retweeted
Aaron Levie
Aaron Levie@levie·
This is the question every software company is asking themselves right now. What happens to our roadmap if an engineer can produce 2X or 5X more output?

The general direction will be roadmap expansion. Companies that just use this leverage to cut costs will be outcompeted by those that decide to do more. As a result, we will see more competitive battles between companies, but also the expansion of many more categories, since software can touch more surface area.

The limiter then becomes how rapidly your customers can actually adopt new software, how good you make that software (vs. it becoming slop because it's so much easier to produce), and whether you can get paid for more software or whether customers' expectations just rise over time for what they get from each vendor.

As an aside, building up a brand, ecosystem, and distribution moat ends up being critical. If software development cost per unit goes down, then the new game is how you get customers to adopt and remain sticky. GTM becomes a critical factor in all this.
Gergely Orosz@GergelyOrosz

Interesting thought experiment: Let's run with the assumption that AI makes creating software ridiculously fast + cheap, and quality doesn't suffer (I know, I know, but let's assume) What would this mean for software businesses? Would eg they all expand scope w new products?

96
64
795
156.1K
Godefroy retweeted
hari raghavan
hari raghavan@haridigresses·
THE PARADOX OF LEVERAGE

The CEO of YC, @garrytan, stayed up late this weekend vibecoding. So did I, and so did thousands of other founders, engineers, and builders (and, frankly, insomniacs), because the gap between "idea" and "working product" has collapsed from years / months / weeks to hours.

This should (and does!) feel liberating. But honestly I've also felt some existential dread, because of the following events and reads over the last week.

1. All the foundation models will win. @EthanChoi7's excellent post last week lays out why all the foundation model companies will win: OpenAI, Anthropic, xAI, Gemini. The most consequential section (for me) was this one: Ethan calls out (correctly) that we're still in the first innings. While I (and others) have been celebrating all my superpowers with building, we're not paying attention to the fact that the capabilities are sprinting faster than we can keep up.

2. The value of knowledge workers is evaporating. The Norwegian sovereign wealth fund published a case study where they deployed Anthropic to monitor their ~$1T AUM, with 9,000 companies, saving 213,000 analyst hours / year. That's 100 full-time employees. Gone. Absorbed into the model. From one function, at one organization.

3. Clawdbot taking X by storm. I'm yet to dig in and install it. In the meantime, I read this excellent post by @TukiFromKL, reminding us not to outsource our memory, our presence, and our life experience by over-relying on tooling like that.

4. @DarioAmodei's The Adolescence of Technology. He reminds us that things are happening far faster than we're prepared for: the years in front of us will be impossibly hard, asking more of us than we think we can give. ~10 days ago I told a friend that I think there's a non-trivial (though still <10%) chance of reaching the singularity in 2026. I think the probability is significantly higher in 2027. You can see direct traces of this possibility in Dario's post.

5. Software is eating the world, and the foundation models are eating software. In the last ~week, Anthropic has released Claude Cowork, Claude for Excel, and apps (whereby you can interface and export work to Figma, Box, Clay, etc. from inside Claude). Anthropic is utterly taking over every single enterprise application. In parallel, OpenAI is sprinting ahead on consumer apps (and enterprise, to a lesser extent than Anthropic), eating one startup at a time.

6. Months of work in days. My wife and I did months' worth of work (for a 2020 startup) in a few hours on Sunday: a personal finance / portfolio management app with recursive querying, temporal data storage, external data integrations, the works.

————

The paradox and dilemma as a builder: I have never had more leverage. And yet I've never had less clarity on what will survive the next 5 years. Because if the models keep compounding at this rate, what moat actually exists? What's durable? What won't get absorbed into OpenAI or Anthropic or Gemini's next release?

Paul Atreides, after drinking the Water of Life, describes the feeling of seeing the time-matrix for the first time: standing on shifting sands where even a single grain can cause landslides. He observes "not moving is a choice." I think that's where I am. It's where all founders of companies <$1M in ARR are. It's where most founders <$100M ARR are, even if they won't say it publicly.

So where does that leave us? I think the only edge left is action, momentum, agency. The old startup playbook was: find a problem, build a solution, iterate until PMF. The new playbook might be: build fast, stay close to the frontier, and accept that the ground beneath you is moving faster than your roadmap. I say "might be" because I don't even have conviction in this. But the alternative is watching from the sidelines while the world rewrites itself. I will not give in to the quiet desperation. There is no choice but to build.

Thanks to Opus 4.5 for helping write parts of this post, and Barbara Pascetta for the discussion that sparked it + the Dune reference.
21
27
260
33.3K
Godefroy
Godefroy@Godefroy·
@gonzague Same here! Besides, it's probably better that I don't set foot there anyway; I renounced my US citizenship a year ago.
0
0
0
32
Gonzague 👨🏼‍💻
Gonzague 👨🏼‍💻@gonzague·
Everything happening in the USA only reinforces my conviction that, unfortunately for me, I will not set foot in that country again as long as Trump is in power. I miss it a bit; I love traveling in the USA and seeing friends there, but going right now is out of the question.
24
1
62
12K
Godefroy retweeted
Mistral AI
Mistral AI@MistralAI·
Introducing the Devstral 2 coding model family. Two sizes, both open source. Also, meet Mistral Vibe, a native CLI, enabling end-to-end automation. 🧵
172
466
3.5K
1.8M
Godefroy
Godefroy@Godefroy·
@ponceto91 That's ugly 😆 I know a very good guy who's not too expensive, if you want a useful pentest
0
0
0
334
Olivier Poncet 🦝
Olivier Poncet 🦝@ponceto91·
Today, following a pentest on an application we developed, I received the audit report: one low-severity vulnerability in the Nginx and PHP config, with analysis and recommendations. Problem: the application uses Apache and Python (gunicorn). My reply was delicious.
28
6
161
35.4K
Godefroy
Godefroy@Godefroy·
@HarryStebbings Of course I want to see the code, it would be highly irresponsible otherwise
0
0
0
33
Harry Stebbings
Harry Stebbings@HarryStebbings·
How Base44 Beats Cursor: "I don't think you will want to see the code. Nowadays even the most technical users would prefer building inside Base44 instead of Cursor. They do not want to set up servers and databases and see the code and write tests and navigate through the files etc. This portion of software that you can build in Base44 is going to grow. If you can build it inside Base44, you'd prefer to do that than Cursor." @MS_BASE44 Do you agree, people "will not want to see the code" @antonosika @ScottWu46 @gregisenberg @theo
90
9
154
69.5K