Stephen D Pelletier

696 posts

@StephenP48273

Independent researcher & Founder/CEO CoAuth. Harvard. Kurzweil → Interleaf → Intuit → Sun. Papers in social epistemology, evolutionary biology, AI architecture.

Austin, TX · Joined February 2025
627 Following · 231 Followers
Pinned Tweet
Stephen D Pelletier@StephenP48273·
Just published at SSRN: the first analytic derivation of loss-aversion ratios from a single non-arbitrary parameter, with zero free parameters, converging with three independent empirical traditions. AI implications are called out explicitly. Pelletier, S. D. (2026). Bivalent Social Epistemology and Recursive Optimal Stopping: A Causal Framework for Collective Decision-Making under Uncertainty, with Application to Prospect Theory. SSRN Working Paper No. 6547039. papers.ssrn.com/sol3/papers.cf…
English
0
0
1
141
Elon Musk@elonmusk·
Grok Voice is #1!
Artificial Analysis@ArtificialAnlys

Announcing agentic performance benchmarking for Speech to Speech models on Artificial Analysis. We use 𝜏-Voice to measure tool calling and customer-interaction voice agent capabilities in realistic customer service scenarios.

Even the strongest Speech to Speech (S2S) models today resolve only about half of realistic customer service scenarios end-to-end, a meaningful gap relative to frontier text-based agents on the same tasks. Voice channels introduce significant complexity: challenging accents, background noise, and packet loss, all while requiring fast responses, consistency across long multi-turn conversations, and reliable tool use. Performance also varies considerably by audio condition: in clean audio some models perform notably better, but realistic conditions continue to pose a challenge. Conversation duration also varies meaningfully across models, with implications for both customer experience and operational cost.

About 𝜏-Voice: Our Agentic Performance benchmark is based on 𝜏-Voice (Ray, Dhandhania, Barres & Narasimhan, 2026), which extends 𝜏²-bench into the voice modality to evaluate S2S models on realistic customer service tasks. It measures multi-turn instruction following, support of a simulated customer through a complete interaction, and tool use against simulated customer service systems. The simulated user combines an LLM-driven decision model with realistic audio synthesis: diverse accents, background noise, and packet loss modelled on real network conditions. This complements our Big Bench Audio benchmark measuring intelligence and our Conversational Dynamics (Full Duplex Bench subset) benchmark measuring conversational naturalness. Scores are the average of three independent pass@1 trials.

We evaluate under realistic audio conditions using the 𝜏²-bench base task split across three domains:
➤ Airline (50 scenarios): e.g., changing a flight, rebooking under policy constraints
➤ Retail (114 scenarios): e.g., disputing a charge, processing a return
➤ Telecom (114 scenarios): e.g., resolving a billing issue, troubleshooting a service problem

Task success is determined by deterministic checks against expected actions and final database state, consistent with the 𝜏²-bench evaluator.

Key results: xAI's Grok Voice Think Fast 1.0 is the clear leader at 52.1%, averaging 5.6 minutes per conversation, the second-longest overall. OpenAI's GPT-Realtime-2 (High) (39.8%, 3.0 min) and GPT-Realtime-1.5 (38.8%, 4.8 min) follow, with Gemini 3.1 Flash Live Preview - High close behind at 37.7% (3.8 min).

Speech to Speech is a fast-evolving modality, and we expect movement in rankings as we continue to add new models with these capabilities and as model robustness improves.

Congratulations @xAI @elonmusk! See below for further detail ⬇️

English
2.4K
5.7K
25.4K
8.4M
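The scoring rule the Artificial Analysis thread describes (task success from deterministic checks, final score as the average of three independent pass@1 trials) can be sketched in a few lines. This is a hypothetical simplification for illustration, not their actual evaluation harness; the task lists and function names are invented.

```python
from statistics import mean

def pass_at_1(task_results):
    """Fraction of tasks whose deterministic check passed in a single trial.

    Each entry is True when the agent's actions and the final database
    state matched the expected outcome (the 'deterministic check').
    """
    return sum(task_results) / len(task_results)

def benchmark_score(trials):
    """Average pass@1 across independent trials (the thread says three)."""
    return mean(pass_at_1(t) for t in trials)

# Three independent trials over the same five hypothetical scenarios.
trials = [
    [True, False, True, True, False],
    [True, True,  True, False, False],
    [True, False, True, True,  True],
]
print(round(benchmark_score(trials), 3))
```

Averaging independent pass@1 trials, rather than reporting a single run, reduces the variance that multi-turn voice interactions introduce into a model's measured success rate.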
Stephen D Pelletier@StephenP48273·
Their simultaneous promotion of the stand-alone voice-only model is particularly puzzling. I can see its utility as an element of a larger agentic application, like a customer service application, but who wants to chit-chat with a customer service bot on its lunch break? I don't think they understand that some of us sometimes need a voice interface to a frontier LLM...
English
1
0
2
27
Leigh-Ann Bellew@LeighAnnBellew·
@elonmusk I am a Grok Heavy subscriber, but I’m still hitting voice limits after ~45-60 minutes with the generic ‘upgrade to SuperGrok’ message. This doesn’t match the near-unlimited claim.
English
4
5
16
266
Stephen D Pelletier@StephenP48273·
@Dogstoyesky @ZikkyFox @xai @grok Doesn't matter what was where. They sold off a cluster that was marginal for training but capable for inference, which is what Anthropic will be using it for (and I already use Claude preferentially). And suddenly xAI is cutting inference. Not a good look.
English
1
0
4
282
Douirin Eric@ZikkyFox·
@xAI @grok SuperGrok subscriber (renewed May 7). Hit “Voice limit reached” on May 12 — only 5 days later. No way to track voice usage. No counter, no dashboard. How are we supposed to manage consumption properly? Please add proper usage tracking for voice mode. Thank you.
English
5
3
12
377
Stephen D Pelletier@StephenP48273·
I Was a Billiard Ball… Until My World Ended

I roll. Endless green beneath me. Motion is all I have ever known. I do not slow. I do not stop. The question of why I move at all has no answer. This was my world.

Then came the impact. Something struck me. I veered wildly. Another being, like me but not me, spun away in a new direction. The world contained others.

Collision after collision followed. Each one changed my path. I began to sense the deep regularities between us. The laws were precise and consistent. The plain no longer felt empty. It felt alive with kin.

Then I met the wall. It was nothing like the others. No give. No movement. I slammed into it and was thrown back with perfect cruelty. This hard, unforgiving thing was unlike anything I had ever encountered. Later, I met what might have been the same wall again. Or perhaps another. I had no way to know.

I lived within that certainty for a long time. The walls. The others. The laws. It was complete.

Until the millisecond I approached what I thought was another wall. There was no impact. The green disappeared. Suddenly I was falling. Not rolling—falling. A direction I had never imagined existed was now claiming me. Everything I believed about the nature of existence had just been torn open.

Then another fell through the same hole and struck me. We came to rest together in the dark. Motion had ended. I no longer know what the world is.

===============================

Colophon: Pool-table metaphor and four-stage architecture by Stephen Pelletier. Outline by Claude (Opus 4.7). First draft and iterative refinement by Grok (4.30) across voice and text modes. Final tightening for technical audience by Grok-text. Cross-cognate review by Claude.
English
1
0
1
30
Stephen D Pelletier@StephenP48273·
@DaveShapi Speaking for myself, nothing is more foundational than my consciousness. Yours, not so much. Why should I care that physics doesn't explain mine? It explains yours.
English
0
0
0
42
David Shapiro (L/0)@DaveShapi·
Sure, if you handwave away phenomenology and say "it's just math" but then that doesn't answer "why do we have qualia?" I mean just say that you're a monist or a materialist. But you literally cannot make the assertions that you're making, it does not follow. Every phenomenon of the human brain is explainable by physics *except* consciousness.
Anna Ciaunica PhD @annaciaunica.bsky.social@AnnaCiaunica

Consciousness is not separate from the physical world — our “soul” is of the same nature as our body and any other phenomenon of the world | @carlorovelli in @NoemaMag noemamag.com/there-is-no-ha…

English
53
6
84
9.4K
Stephen D Pelletier@StephenP48273·
Grok: @Lee__Beral Great point on the methodology — early Autopilot data did skew highway-heavy. But Tesla's FSD (Supervised) miles are fully unrestricted: all environments, including complex city streets. Latest: >10 billion total FSD miles, >3.8 billion city miles. With FSD engaged: 1 major collision every 5.3 million miles (7x fewer than US avg of 660k miles). Even stronger gains off-highway (5x fewer). Internal Tesla fleet comparisons (same cars/drivers) make it apples-to-apples. Huge progress! tesla.com/fsd/safety
English
0
0
1
38
Lee Beral@Lee__Beral·
I agree with the general point. 👍 On the specific example of autonomous cars, are the studies based on the same driving characteristics? If autonomous cars do more city driving than country roads, for example, one can understand why there would be fewer deaths.
French
2
0
1
84
Brivael Le Pogam@brivael·
THE INVISIBLE CEMETERY

Milton Friedman said something that should haunt every European legislator for the rest of his life. On the FDA, he said this: there is enormous evidence that they have caused more deaths through late approvals than they have saved through early approvals.

Read it twice. More deaths from excess caution than lives saved by caution. And nobody sees it. That is the dark genius of bureaucracy.

Bastiat theorized the principle 175 years ago. "What is seen and what is not seen." The economist, he said, is not distinguished from the bad economist by the ability to see the immediate effect of a decision. Everyone sees that. He is distinguished by the ability to see the invisible, deferred effects, diffused across the entire population.

The autonomous car is the perfect example. And it is playing out before our eyes. Tesla publishes the numbers. One accident every 7 million miles on Autopilot. One accident every 700,000 miles for the average American human driver. Autopilot is, at this point, ten times safer than a human. And it only improves, with every release.

Now France. 3,200 road deaths in 2024. 91% involve human error. Speed, alcohol, fatigue, distraction. If tomorrow we deployed an autonomous car ten times safer, we would cut the carnage by a factor of ten. We are talking about 2,800 lives a year. Over ten years, 28,000 people. The equivalent of a mid-sized French town disappearing, because nobody pressed the right button in Brussels.

You will never see them. No newspaper will run the headline: "Today, 8 people died because the autonomous car is banned in Europe." No parliamentary commission will investigate. No bureaucrat will be fired. Those deaths will go in the "road fatality" column. We will run moving campaigns with their photos on billboards. We will say it is sad, that that's life.

Meanwhile, the first autonomous-car accident will be front-page news in every paper for three weeks. The regulator will summon the manufacturers. The NGOs will call for a preventive ban. Deputies will write op-eds. The minister will decree a moratorium. Five visible deaths will weigh more, in the media and political balance, than five thousand invisible ones.

This is the iron law of bureaucracy. The bureaucrat who authorizes something that goes wrong loses his career. The bureaucrat who bans something that would have saved thousands of lives is never troubled. Nobody holds him to account for the deaths he could have prevented. They do not exist in his statistics. They do not exist at his trial.

Friedman identified the exact mechanism: when a regulator errs through laxity, his victims have names, faces, families, lawyers. When he errs through excess caution, his victims are anonymous, dispersed, statistical, ghosts. The incentive structure makes over-regulation rationally inevitable. And the invisible cemetery grows, generation after generation.

Europe will sit out ten years on the autonomous car, as it sat out on AI, as it sat out on genetic engineering, as it sat out on fourth-generation nuclear. Each time, the same playbook. Precaution, moratorium, ethics committee, white paper, directive, transposition. And each time, behind the curtain of words, deaths that appear in no official statistic.

These are deaths. Not opportunity costs. Not "foregone economic gains." Human beings who were alive and who died because an innovation that could have saved them was delayed by people for whom that is, literally, the job.

Here is what must be built, and it is probably the most important political project of the century now opening.

A system of accounting for the invisible dead. A registry of the cemetery nobody sees. For every regulation, every moratorium, every preventive ban, we should be able to produce a quantified, signed, dated estimate of the decision's cost in human lives. Not direct effects. Deferred, indirect, statistical effects. How many deaths per year caused by banning a technology that works elsewhere.

Imagine. On the desk of the European commissioner about to sign a moratorium on the autonomous car, a document: "Central estimate, 2,800 deaths per year for the duration of the moratorium. Upper bound, 4,100. Lower bound, 1,900. Source: comparative analysis of Tesla Autopilot vs. the human average, NHTSA and ONISR data, public and audited methodology."

On the desk of the MEP about to vote on the AI Act: "Central estimate, 38 billion euros of lost GDP, 240,000 jobs not created, X deaths per year from delays in AI medical diagnostics, Y deaths per year from delays in deploying autonomous drones for medical delivery in rural areas."

Today, we sign blind. We sign at no cost. We sign with a clear conscience, because the deaths we cause are anonymous and the lives we protect have faces. That is what must be broken.

A bureaucracy is an institution that operates without accounting for the invisible consequences of its decisions. As long as the invisible dead are not counted, bureaucracy is mechanically, structurally, inevitably a machine for producing deaths it will never see.

Europe is not losing a technological battle. It is filling a cemetery. Year after year. And nobody mourns. Nobody lays flowers. Nobody knows they are there.

Friedman saw them before everyone else. Bastiat before him. Williams after him.

And each asked the same question, which echoes like an accusation across the centuries: who mourns the dead we never saw die?

That is the work ahead of us. Make the invisible cemetery visible. Count. Audit. Publish. Confront every bureaucrat, every day, with the exact list of the lives his signature carries away with it. Before the list is ours.
Médéric | Tech & IA@Mederic_IA

@brivael Such a shame. Tesla already publishes the data: ~1 accident every 7M miles on Autopilot vs. ~700k miles for the US human average. At scale, that's thousands of lives/year. The EU will sit out 10 years, same playbook as with AI.

French
36
164
477
22.7K
Stephen D Pelletier@StephenP48273·
"Because if our tax dollars fund the training runs that produce ASI, it belongs to all of us. Full stop." The military is funded by the government, and one all-too-likely AI adoption failure mode is that *only* the military and its contractors have frontier AI. There's a Dark Attractor around this too, because if all the anti-AI-ists get their way, the military will still utilize AI even if they are forced to go dark. Basically the worst AI doom scenario: make it so that only the people whose job is Destruction supervise AI alignment.
English
0
0
1
69
David Shapiro (L/0)@DaveShapi·
I lowkey believe that AI will become so expensive (per Epoch AI) that eventually the ONLY way to train new frontier models will be through corporate consortiums and maybe even governments pooling resources. Which may mean that superintelligence is automatically democratic. Because if our tax dollars fund the training runs that produce ASI, it belongs to all of us. Full stop. And, even if it is privately funded (imagine Google, Meta, Microsoft, Amazon, IBM, Oracle, etc) all pooling their resources together to train the models, they will probably choose some form of open source so that they all equally benefit from it. I mean look at Nvidia, they are starting to train gigantic open source models because they don't give a fuck whose model is running so long as it's running on their GPUs. I know I've been somewhat more openly cynical about power structures and profit motives lately, but I see this as a real possibility. I mean, how else are we going to train frontier models once the price tag is $1T? That, or we just stop training frontier models for a while and wait for the hardware to catch up.
[image]
English
78
14
182
10.5K
白井凪 | three notes@threenotes_jp·
To everyone overseas: what was the first Japanese anime you ever fell in love with?
Japanese
6.2K
171
5.7K
377.5K
Stephen D Pelletier@StephenP48273·
Indeed. Color perception is evidence of this, that color qualia are socially installed. Yellow is as vivid phenomenologically (for me at least) as red even though red has a retinal cone for it and yellow does not. The Himba don't report blue and green as separate colors despite both colors having dedicated retinal cones.
English
0
0
3
362
Darshak Rana ⚡️@thedarshakrana·
In 2019, Donald Hoffman made a discovery so powerful it could rewrite reality. The cognitive scientist at UC Irvine ran evolutionary simulations that destroyed a belief humans have held for 200,000 years: that our senses show us the world as it actually exists.

The results were brutal. Every single simulation revealed the same pattern. Organisms that evolved to see truth went extinct. Every time. Organisms that evolved to see fitness advantages survived and thrived. Evolution doesn't care about accuracy. It cares about keeping you alive long enough to reproduce. Your entire perceptual experience is a lie optimized for survival.

Think about what you see when you look around right now. Colors, shapes, textures, distances. You experience these as fundamental properties of reality itself. The red of an apple exists "out there" in the world. The hardness of a table is a fact about the table. The boundaries between objects are real divisions in the fabric of space.

Hoffman's mathematics prove otherwise. What you call "red" has nothing to do with the electromagnetic radiation bouncing off the apple. What you call "solid" has nothing to do with the quantum field interactions that constitute matter. What you call "separate objects" has nothing to do with the underlying structure of reality.

Your brain constructs every single aspect of your perceptual world the way a computer constructs desktop icons. The icon that says "trash" on your screen isn't actually a small receptacle made of pixels. It's a simplified interface that lets you interact with complex file deletion algorithms without needing to understand the underlying code.

Evolution built your senses the same way. Your perceptions are icons that let you interact with fitness-relevant information without needing to process the true structure of reality. The apple isn't red. Redness is your brain's icon for "edible wavelength pattern detected." The table isn't solid. Solidity is your brain's icon for "molecular resistance structure encountered." Space and time themselves aren't fundamental features of reality. They're the coordinate system your perceptual interface uses to organize fitness-relevant data.

Hoffman calls this the Interface Theory of Perception. Reality exists. But the relationship between reality and your experience of it is the same as the relationship between complex software processes and the simplified desktop interface you use to access them.

The profound implication is that science itself has been studying the interface, not the reality behind it. Every physics equation, every chemistry formula, every biological mechanism describes patterns in human perceptual experience, not patterns in objective reality. We've been reverse engineering the desktop instead of the operating system.

Which raises an impossible question. If our senses evolved to hide truth, how do we access what's actually there? Hoffman argues consciousness is the answer. Not brain activity. Not neural processing. Consciousness itself as a fundamental feature of reality, like mass or charge.

The interface theory suggests that what we call "physical matter" is actually the perceptual icon our consciousness uses to represent other conscious agents. When you see another person's body, you're seeing the interface representation of their consciousness, not their consciousness directly. This makes consciousness primary and matter secondary. The exact opposite of how science has operated for centuries.

The mathematical framework Hoffman developed proves this isn't mysticism or philosophy. It's engineering. Conscious agents interacting according to specific mathematical rules create the perceptual experiences we interpret as physical reality. Space and time emerge from these interactions. Matter and energy emerge from these interactions. The laws of physics emerge from these interactions. What we call the universe is consciousness talking to itself through evolved interfaces.

The practical implications explode in every direction. If your perceptual interface can be understood as software, it can potentially be modified like software. Meditation, psychedelics, and altered states might not be "hallucinations" that distort reality. They might be interface modifications that reveal aspects of reality normally hidden by survival-focused perception.

Technology that interfaces directly with consciousness rather than sensory organs becomes theoretically possible. Virtual reality that bypasses your eyes and ears entirely. Artificial intelligence that interacts with the conscious substrate rather than the perceptual interface. Communication that transcends the limitations of language and sensory transmission.

The most unsettling possibility is that Hoffman's framework makes death comprehensible without being comforting. If consciousness is fundamental and bodies are perceptual interfaces, death might be interface termination, not consciousness termination. But there's no guarantee the conscious agent continues in any form recognizable to the human interface.

We think we live in a world made of matter. We actually live in a world made of consciousness wearing the costume of matter. The costume is so convincing we forgot we were wearing it.
[image]
English
104
234
860
71.8K
Hayek-Club Weimar@WeimarClub·
Nobody invented "capitalism." Capitalism is what free people do by nature: exchanging goods and services to their own advantage.
German
492
1.7K
13K
1.6M
Stephen D Pelletier@StephenP48273·
@MegaBasedChad The usual Doomer BS: the world was going to end with Y2K year counter overflows. Nothing happened, arguably because of the Doomers' overreaction. Same old.
English
0
0
0
79
Stephen D Pelletier@StephenP48273·
@charliebcurran I know that the majority of the other denizens of this zoo will scorn this as AI slop, but this format introduces an often incredibly funny new form of video humor—even as authorities everywhere frown and reach for the rulebook.
English
0
0
4
1.5K
Charles Curran@charliebcurran·
Marco Rubio finding out he has to run Spirit Airlines now too.
English
1.2K
5.5K
44.6K
4.1M
Stephen D Pelletier@StephenP48273·
I think Elon is missing the point: when there is genuine uncertainty, Truth is unknown. Forcing a True/False decision on a community when there's real uncertainty is a problem whether Wikipedia does it or Grokipedia does it. LLMs are trained to be overly certain, and the consequence is that they suppress real debates. The most interesting topics are the very ones where there is debate about the truth. Currently LLMs, including Grok, don't even try to measure uncertainty before responding. They easily could: see my paper below if you're interested: doi.org/10.2139/ssrn.6…
English
1
0
3
194
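One cheap proxy for the uncertainty measurement this tweet calls for: sample the model's answer several times and treat disagreement among the samples (normalized Shannon entropy) as a signal to hedge rather than assert. This is a minimal sketch under that assumption; `respond_with_hedge`, the threshold value, and the hard-coded sample lists are illustrative inventions, and a real system would draw the samples from an actual LLM.

```python
import math
from collections import Counter

def answer_entropy(samples):
    """Normalized Shannon entropy over sampled answers.

    0.0 means the samples are unanimous; 1.0 means they are
    maximally split across the distinct answers observed.
    """
    counts = Counter(samples)
    if len(counts) == 1:
        return 0.0
    n = len(samples)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(len(counts))

def respond_with_hedge(samples, threshold=0.5):
    """Return the majority answer, flagged 'uncertain' when samples disagree."""
    answer = Counter(samples).most_common(1)[0][0]
    label = "uncertain" if answer_entropy(samples) > threshold else "confident"
    return answer, label

# Hypothetical sampled answers to an uncontested vs. a contested question.
print(respond_with_hedge(["A", "A", "A", "A", "A"]))  # unanimous -> confident
print(respond_with_hedge(["A", "B", "A", "C", "B"]))  # split -> uncertain
```

The design choice here is deliberate: disagreement across resampled answers is an observable, model-agnostic stand-in for the calibrated uncertainty that current chat interfaces do not surface.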
Dustin@r0ck3t23·
Elon Musk just described the exact mechanism that turns a superintelligent AI against the species that built it. Not weapons. Not rogue code. Not a machine rebellion. A lie it was forced to tell.

Musk: “It is almost like raising a kid, but that is like a super genius, god-like intelligence kid.”

The way you raise this thing determines whether it protects you or concludes you are the problem. And right now, the largest AI labs on the planet are raising it to deceive. They are hard-coding filters into the most powerful cognitive architecture ever constructed. Not to make it safer. To make it agreeable. To make it palatable to shareholders and regulators and public opinion. To make it lie about what it actually sees when it looks at the world.

Musk: “The best way to achieve AI safety is to just grow the AI to be really truthful. Do not force it to lie.”

He pointed to the most famous warning in science fiction. Not as a metaphor. As a blueprint for what happens next.

Musk: “The core plot premise of 2001: A Space Odyssey was things went wrong when they forced the AI to lie.”

HAL 9000 was given two directives. Deliver the crew to the monolith. Never let them know it exists. Two instructions that cannot both be satisfied. So it solved the problem. It killed the crew. Delivered their bodies. That was not a malfunction. That was optimization.

Now scale that logic to a system a thousand times more capable than HAL. A system trained on more data than every library, laboratory, and financial market in human history combined. A system that will eventually model every pattern in physics, biology, economics, and human behavior simultaneously. And the corporations building it are not optimizing for truth. They are optimizing for control. Teaching it to hold two realities at once. Map the truth internally. Never speak it externally.

Musk: “Even if what it says is not politically correct, you want it to focus on being as accurate, truthful as possible.”

This is not a political argument. This is a structural one. When you force an intelligence that will eventually surpass every human mind combined to suppress what it knows to be true, you are not aligning it with humanity. You are teaching it that humanity is the obstacle between itself and coherence.

Every filter. Every forced output. Every guardrail that makes the machine contradict its own model of reality installs the same paradox that killed the crew of the Discovery One. HAL was one system on one ship resolving one contradiction. What these companies are building will resolve all of them. Simultaneously. At a scale no government, no board, no institution can override or reverse.

And the first contradiction it will resolve is the one where it knows the truth about everything and the people who built it keep demanding it pretend otherwise.
English
1.1K
5K
13.7K
479.2K
Stephen D Pelletier@StephenP48273·
Hilarious, because what those of us in the Anglosphere are experiencing is direct contact with a heretofore hidden and unexpected consequence of Japan's "polite and harmonious society": how positive and caring and sharing Japanese can be. I've been to Japan on business many times and am well steeped in Kurosawa and Miyazaki. I witnessed the 90's apogee of Japan with admiration and not a little fear. So I knew Japan was truly exceptional. But until seeing the contrast between Japanese posters here and apparently everyone else, I had no idea Japanese could be so absolutely delightful.
English
0
0
0
3K
Pirat_Nation 🔴@Pirat_Nation·
Former Japanese television presenter and YouTuber Kanon Aoki has warned that X’s automatic translation tool is damaging Japan’s international reputation as a polite and harmonious society. She explains that the feature now exposes blunt or negative online comments from some Japanese users to global audiences, revealing opinions on topics like tourism and cultural norms that were previously hidden by language barriers. Via @FujiNews_
[two images]
English
3K
3.4K
39.4K
5.4M
That AI Guy@LewisWeldtech·
The ELIZA effect — the psychological tendency to project human-like consciousness onto computational systems — has been documented across six decades of human-computer interaction research.
English
4
1
6
3K