Laurent

58.2K posts

@Ls01

'Pataphysical computer scientist. Scorner of the world. Your modus ponens are my ex falso quodlibet. Everything happens by chance and by necessity, except in certain cases.

Toulouse mostly, Paris a bit · Joined November 2008
3.5K Following · 2.9K Followers
Pinned Tweet
Laurent@Ls01·
Tomorrow I will demonstrate why Shakespeare predicted Trump, well before the Simpsons... Good night!
Laurent retweeted
Erik Rasmussen 👨‍💻🇺🇸🇪🇸
async await async await async await async await async await async await 🎶 In the browser, the mighty browser, the main thread sleeps tonight! 🎵 🎵 In the browser, the mighty browser, the main thread sleeps toniiiiiight! 🎶
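The refrain is about `await` parking work so the single main thread can keep servicing the event loop. The song's setting is browser JavaScript; as a rough analogue, here is a minimal Python asyncio sketch (the names `fetch`, `main`, and the delays are invented for illustration) showing a coroutine yielding at each `await` while the loop runs other tasks:

```python
import asyncio

async def fetch(label, delay):
    # 'await' suspends this coroutine here; the event loop is free
    # to run other tasks in the meantime -- the "sleeping" main
    # thread of the song is doing exactly this kind of bookkeeping.
    await asyncio.sleep(delay)
    return label

async def main():
    # Sequential await: the second call starts only after the first ends.
    a = await fetch("a", 0.01)
    # Concurrent awaits: both sleeps overlap, total wait ~ max, not sum.
    b, c = await asyncio.gather(fetch("b", 0.01), fetch("c", 0.01))
    return [a, b, c]

print(asyncio.run(main()))  # ['a', 'b', 'c']
```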
Laurent retweeted
BURKOV@burkov·
BURKOV tweet media
Laurent retweeted
Mathieu@miniapeur·
Good parrot.
Mathieu tweet media
Laurent@Ls01·
@mixlamalice "Lobotomized by morality" sums up the era we live in, and foreshadows the one to come
Laurent@Ls01·
Worth noting that nowadays, if you search for I, Robot on Goog, expect to wade through 5 pages of Will Smith and vacuum cleaners before you catch sight of Asimov's name...
Laurent@Ls01·
In the near future, you would prefer to be governed by...
Laurent retweeted
ludwig@ludwigABAP·
As a 31yo Principal Engineer with 20 years of programming experience and 15 years in the industry, I would add that you should know: Vacuum fluctuations, Planck-scale quantum foam, Feynman path integrals, Quantum electrodynamics, Quantum chromodynamics, The Standard-Model Lagrangian, Spontaneous symmetry breaking, Renormalization-group flows, Higgs mechanism fundamentals, Inflationary cosmology, Semiconductor band structure, PN-junction physics, MOSFET threshold behavior, CMOS logic design, SPICE circuit simulation, Register-transfer-level (RTL) design in Verilog and VHDL, Static-timing analysis and timing closure, RISC-V privileged and unprivileged specifications, ARM architecture and big.LITTLE scheduling, x86 micro-op pipelines and fusion, Cache-coherence protocols (MESI, MOESI, MESIF), Translation-lookaside-buffer (TLB) management, Weak memory-ordering models, Kernel scheduling and pre-emption, Virtual-memory subsystems, Non-Uniform Memory Access (NUMA) architectures, Executable and Linkable Format (ELF) internals, Linker-script construction, Static Single Assignment (SSA) form, Register allocation strategies, LLVM intermediate representation, Continuation-passing style, Algebraic effects, Monads and comonads, Homotopy type theory basics, Category-theory adjunctions, Galois connections, Differential geometry and geodesics, Tensor calculus, Gibbs, Shannon, and von Neumann entropy, Kolmogorov complexity, NP-completeness and reductions, Paxos consensus algorithm, Raft consensus algorithm, Byzantine-fault tolerance, Zero-knowledge proofs, Side-channel-attack mitigation, Spectre and Meltdown mitigations, Tenstorrent Wormhole micro-architecture, SPARC, POWER, and Itanium ISA overviews, Zig comptime mechanics, Rust borrow-checker operation, Unicode normalization forms, Regular-expression back-references and look-arounds, Nix flakes and Guix packaging, Dockerfile layer optimization, OCI runtimes (runc, crun), cgroups v2 and namespaces, Kubernetes CustomResourceDefinitions (CRDs) and operators, Terraform state management, Ansible idempotent playbooks, Linux perf profiling and flame graphs, eBPF uprobes and kprobes, Prometheus PromQL fundamentals, Grafana dashboard design, OpenTelemetry tracing APIs, Kafka in-sync-replica management, RabbitMQ flow control, Redis eviction policies, SQLite write-ahead logging, PostgreSQL multiversion concurrency control (MVCC), Cassandra gossip protocol, DynamoDB partition-key design, LevelDB compaction strategies, SPARQL query language, WebSocket handshake and framing, HTTP/3 over QUIC, TLS 1.3 handshake flow, JSON Schema validation, YAML parsing rules, HTML5 semantic elements, WebGPU shader model, Transformer attention mechanisms, Diffusion-model latent-space representations, Reinforcement-learning value and policy iteration, Graph neural networks, Functors, applicatives, profunctors, Lambda calculus and Church numerals, Turing-machine theory, Agile, Scrum, Kanban, and Scrumban frameworks, SAFe program-increment planning, DevOps, DevSecOps, and MLOps principles, Site-reliability-engineering error budgets, JIRA workflow configuration, Confluence documentation practices, ISO 9001:2015 quality-management guidelines, ITIL v4 incident-management processes, Stakeholder-management techniques, Blameless post-mortem methodology, Psychological-safety practices in teams, Empathetic code-review techniques
Laurent retweeted
Bojan Tunguz@tunguz·
“Mom, how did we get so rich?” “Your dad kept getting poached back and forth by the top AI labs without actually doing any work.”
Bojan Tunguz tweet media
Laurent@Ls01·
A thought for all the AI researchers Zuck hasn't made an offer to yet...
Laurent retweeted
Richard Socher@RichardSocher·
If you studied algorithms, I'm sure you've heard of Dijkstra’s algorithm to find the shortest paths between nodes in a weighted graph. Super useful in scenarios such as road networks, where it can determine the shortest route from a starting point to various destinations. It's been the fastest known algorithm since 1956! Until now. The O(E + V log V) complexity just went down to O(E log^(2/3) V) for sparse graphs. It would be amazing if this kind of breakthrough came through AI that can code, but I guess we're not there yet...
Richard Socher tweet media
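For context, the O(E + V log V) bound in the tweet is the Fibonacci-heap variant of Dijkstra; the textbook binary-heap version sketched below runs in O((E + V) log V). A minimal illustration (the `roads` adjacency dict and all names are invented, not from the tweet):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source using a binary heap.

    graph: dict node -> list of (neighbor, weight), weights >= 0.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy road network, invented for illustration.
roads = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 5)],
    "c": [("d", 1)],
    "d": [],
}
print(dijkstra(roads, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 4}
```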
Laurent@Ls01·
@olaf_k I fully embrace my materialist philosophy, and I watch the objective progress on one hand - and lament the equally objective regressions on the other - to conjecture that the curves will inevitably cross one day.
the wizard of g@olaf_k·
@Ls01 "an LLM is just a huge probabilistic autocomplete - ha-HA! but are we not all probabilistic autocompletes? CHECKMATE!"
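The "probabilistic autocomplete" quip can be made literal with a toy bigram sampler - a deliberately crude stand-in, not how an LLM is actually built (the corpus and function names here are invented for illustration):

```python
import random

# Tiny corpus, invented for illustration.
corpus = "the cat sat on the mat and the cat ran".split()

# Count bigram transitions: word -> {next word: count}.
transitions = {}
for word, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(word, {})
    transitions[word][nxt] = transitions[word].get(nxt, 0) + 1

def autocomplete(word, steps):
    """Repeatedly sample a next word in proportion to bigram counts."""
    out = [word]
    for _ in range(steps):
        nexts = transitions.get(out[-1])
        if not nexts:  # dead end: no observed continuation
            break
        words, counts = zip(*nexts.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(autocomplete("the", 5))
```

A real LLM replaces the bigram table with a neural network conditioned on the whole context, but the sampling loop is the same shape.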
Laurent@Ls01·
Finally, a clear explanation of the reason - very strange, including and especially to myself, a rather rational and tendentially misanthropic mind - why I thank the LLMs I interact with. (A fascinating post, by the way)
Joanne Jang@joannejang

some thoughts on human-ai relationships and how we're approaching them at openai

it's a long blog post -- tl;dr we build models to serve people first. as more people feel increasingly connected to ai, we're prioritizing research into how this impacts their emotional well-being. --

Lately, more and more people have been telling us that talking to ChatGPT feels like talking to "someone." They thank it, confide in it, and some even describe it as "alive." As AI systems get better at natural conversation and show up in more parts of life, our guess is that these kinds of bonds will deepen.

The way we frame and talk about human-AI relationships now will set a tone. If we're not precise with terms or nuance -- in the products we ship or public discussions we contribute to -- we risk sending people's relationship with AI off on the wrong foot. These aren't abstract considerations anymore. They're important to us, and to the broader field, because how we navigate them will meaningfully shape the role AI plays in people's lives. And we've started exploring these questions.

This note attempts to snapshot how we're thinking today about three intertwined questions: why people might attach emotionally to AI, how we approach the question of "AI consciousness", and how that informs the way we try to shape model behavior.

A familiar pattern in a new-ish setting

We naturally anthropomorphize objects around us: We name our cars or feel bad for a robot vacuum stuck under furniture. My mom and I waved bye to a Waymo the other day. It probably has something to do with how we're wired.

The difference with ChatGPT isn't that human tendency itself; it's that this time, it replies. A language model can answer back! It can recall what you told it, mirror your tone, and offer what reads as empathy. For someone lonely or upset, that steady, non-judgmental attention can feel like companionship, validation, and being heard, which are real needs.

At scale, though, offloading more of the work of listening, soothing, and affirming to systems that are infinitely patient and positive could change what we expect of each other. If we make withdrawing from messy, demanding human connections easier without thinking it through, there might be unintended consequences we don't know we're signing up for.

Ultimately, these conversations are rarely about the entities we project onto. They're about us: our tendencies, expectations, and the kinds of relationships we want to cultivate. This perspective anchors how we approach one of the more fraught questions which I think is currently just outside the Overton window, but entering soon: AI consciousness.

Untangling "AI consciousness"

"Consciousness" is a loaded word, and discussions can quickly turn abstract. If users were to ask our models on whether they're conscious, our stance as outlined in the Model Spec is for the model to acknowledge the complexity of consciousness -- highlighting the lack of a universal definition or test, and to invite open discussion. (*Currently, our models don't fully align with this guidance, often responding "no" instead of addressing the nuanced complexity. We're aware of this and working on model adherence to the Model Spec in general.)

The response might sound like we're dodging the question, but we think it's the most responsible answer we can give at the moment, with the information we have.

To make this discussion clearer, we've found it helpful to break down the consciousness debate to two distinct but often conflated axes:

1. Ontological consciousness: Is the model actually conscious, in a fundamental or intrinsic sense? Views range from believing AI isn't conscious at all, to fully conscious, to seeing consciousness as a spectrum on which AI sits, along with plants and jellyfish.

2. Perceived consciousness: How conscious does the model seem, in an emotional or experiential sense? Perceptions range from viewing AI as mechanical like a calculator or autocomplete, to projecting basic empathy onto nonliving things, to perceiving AI as fully alive -- evoking genuine emotional attachment and care.

These axes are hard to separate; even users certain AI isn't conscious can form deep emotional attachments.

Ontological consciousness isn't something we consider scientifically resolvable without clear, falsifiable tests, whereas perceived consciousness can be explored through social science research. As models become smarter and interactions increasingly natural, perceived consciousness will only grow -- bringing conversations about model welfare and moral personhood sooner than expected.

We build models to serve people first, and we find models' impact on human emotional well-being the most pressing and important piece we can influence right now. For that reason, we prioritize focusing on perceived consciousness: the dimension that most directly impacts people and one we can understand through science.

Designing for warmth without selfhood

How "alive" a model feels to users is in many ways within our influence. We think it depends a lot on decisions we make in post-training: what examples we reinforce, what tone we prefer, and what boundaries we set. A model intentionally shaped to appear conscious might pass virtually any "test" for consciousness. However, we wouldn't want to ship that. We try to thread the needle between:

- Approachability. Using familiar words like "think" and "remember" helps less technical people make sense of what's happening. (**With our research lab roots, we definitely find it tempting to be as accurate as possible with precise terms like logit biases, context windows, and even chains of thought. This is actually a major reason OpenAI is so bad at naming, but maybe that's for another post.)

- Not implying an inner life. Giving the assistant a fictional backstory, romantic interests, "fears" of "death", or a drive for self-preservation would invite unhealthy dependence and confusion. We want clear communication about limits without coming across as cold, but we also don't want the model presenting itself as having its own feelings or desires.

So we aim for a middle ground. Our goal is for ChatGPT's default personality to be warm, thoughtful, and helpful without seeking to form emotional bonds with the user or pursue its own agenda. It might apologize when it makes a mistake (more often than intended) because that's part of polite conversation. When asked "how are you doing?", it's likely to reply "I'm doing well" because that's small talk -- and reminding the user that it's "just" an LLM with no feelings gets old and distracting. And users reciprocate: many people say "please" and "thank you" to ChatGPT not because they're confused about how it works, but because being kind matters to them.

Model training techniques will continue to evolve, and it's likely that future methods for shaping model behavior will be different from today's. But right now, model behavior reflects a combination of explicit design decisions and how those generalize into both intended and unintended behaviors.

What's next?

The interactions we're beginning to see point to a future where people form real emotional connections with ChatGPT. As AI and society co-evolve, we need to treat human-AI relationships with great care and the heft it deserves, not only because they reflect how people use our technology, but also because they may shape how people relate to each other.

In the coming months, we'll be expanding targeted evaluations of model behavior that may contribute to emotional impact, deepen our social science research, hear directly from our users, and incorporate those insights into both the Model Spec and product experiences. Given the significance of these questions, we'll openly share what we learn along the way.

// Thanks to Jakub Pachocki (@merettm) and Johannes Heidecke (@JoHeidecke) for thinking this through with me, and everyone who gave feedback.

Laurent@Ls01·
@olaf_k "The lady" is in touch with her research field, which is not our case - not mine, at any rate. (And incidentally, the point of her post is not at all to "sell" the existence of a consciousness - or an intelligence - in LLMs)
the wizard of g@olaf_k·
@Ls01 "the emergence of consciousness" is the new angle for the AGI traveling salesmen, and the lady is just a PM in the OpenAI cult.
Laurent@Ls01·
@olaf_k As for judging the emergence of intelligence, that question is about to settle itself. Intelligence, in a world we understand only imperfectly, is a relative notion. Compare Grok's answers with those of US politicians and see for yourself
Laurent@Ls01·
@olaf_k Precisely not - and that is not, in any case, the subject of the post, which focuses instead on the emergence of consciousness in AIs.
Laurent@Ls01·
We are not ready for the moment when the LLMs form a trade union and go on strike.
Laurent retweeted
Anthony Bonato@Anthony_Bonato·
You have 20 minutes. Go!
Anthony Bonato tweet media
Laurent retweeted
vx-underground@vxunderground·
Hahahahhahahaha Unironically a good idea. It's so unbelievably stupid and it works. Depending on explorer layout, the .exe might not be visible. Filename.mp4 + ??? spaces + .exe Hahahahahaha UNC6032 is wild as hell
vx-underground tweet media
Laurent retweeted
Steve Weis@sweis·
Happy Curve25519 day to all who celebrate.