Isabelle Plante

46 posts

@Izzyplante

AI enthusiast, musician and educator

Joined February 2011
78 Following · 5 Followers
Isabelle Plante@Izzyplante·
@HellTanner1 @NewYorker No need to be a nitwit... The mother brings up her son in paragraph 2 of the article, if you ACTUALLY read it! She discusses both children's interactions with AI.
English
0
0
2
37
@HellTanner@HellTanner1·
@Izzyplante @NewYorker You just proved the mother’s point as she talks about her DAUGHTER not her son you barely sentient turnip
English
1
0
7
92
The New Yorker@NewYorker·
Jessica Winter has been raising her children to detest A.I. Then her daughter's public middle school began receiving Google Chromebooks, which came pre-installed with an all-ages version of Gemini, a suite of A.I. tools. "When my daughter, who is in sixth grade, begins writing an essay, she gets a prompt: 'Help me write,'" Winter writes. "If she is starting work on a slide-show presentation, the prompt is 'Help me visualize.' She shoos away these interruptions, but they persist: 'Help me edit.' 'Beautify this slide.'" Proponents of generative A.I. in elementary and middle schools argue that such early exposure will foster digital-media literacy and prepare children for a future in which most professions are steeped in A.I. But the technology also poses significant cognitive and social-emotional risks to young people. Read Winter's report about A.I.'s infiltration into schools, and what it could mean for young minds: newyorkermag.visitlink.me/NSWuBG
English
134
1.8K
9.3K
2.5M
Jesse Genet@jessegenet·
AI homemaking 👩‍🍳 Breaking down my latest revelation… my @openclaw already orders my groceries on @Instacart — now ‘she’ does our meal planning ⚡️ With four little kids I’m looking for any help I can get 😅
English
54
20
603
49.6K
Isabelle Plante@Izzyplante·
@patomolina This is one of the biggest reasons to use our own open source tools and avoid lock-in.
English
0
0
0
10
Pato Molina@patomolina·
Anthropic decided to take down our entire organization for an alleged violation of their terms of use. Which specific policy we violated, I have no idea: we simply got an email and that was it, goodbye Claude. If you want to appeal the decision, you have to fill out a Google Form, as ridiculous as that sounds. Suddenly more than 60 people were left without a fundamental tool for their work. Integrations, skills, conversation history: all lost or, at best, on hold indefinitely. A huge lesson for any software company that depends on AI tools in critical processes. Never put all your eggs in one basket.
Pato Molina@patomolina

@claudeai you took down our entire organization with 60+ accounts belonging to a legitimate company for no apparent reason, without any explanations. The only way to appeal the decision is by filling out a Google Form? Very bad UX and customer service.

Spanish
787
1.4K
9.5K
5.2M
Isabelle Plante@Izzyplante·
@RaquelDancho All run on Canadian hardware under Canadian law without an American cloud provider in the middle. $240 million bought a dependency. The alternatives were already free.
English
0
0
0
17
Isabelle Plante@Izzyplante·
@RaquelDancho Cohere's flagship Command A: $10/M output tokens (2nd most expensive on the market). Ranks #37 on LMSYS Chatbot Arena. DeepSeek-R1: free. Llama 4: free. Qwen3: free. All outperform both models. All are commercially licensable.
English
1
0
0
45
Raquel Dancho@RaquelDancho·
Cohere, a Canadian AI company, received $240M in taxpayer funding. Much of it reportedly went to a data centre built and operated by a U.S. firm. Now, Cohere is reportedly in talks to merge with a German company. I asked if they’re committed to remaining Canadian-controlled. Their response was not definitive. Did the Liberals include sovereignty guarantees in the $240M contract?
English
76
498
1.9K
167.7K
Isabelle Plante@Izzyplante·
@jpineau1 I don't understand how Cohere is seen as sovereign or a good choice. It's expensive, not very fast, and not very smart. Its operations are managed in the USA, not in Canada, so it's not actually sovereign Canadian AI. There were other options ... sage.is/resources/cana…
English
0
0
0
28
Joelle Pineau@jpineau1·
Cohere is committed to long-term growth and protecting Canadian data and IP. As we scale, we want to export the values that make Canada so great to the rest of the world and build partnerships that deliver sovereign and secure AI. 🇨🇦🌍
English
3
0
27
1.3K
Joelle Pineau@jpineau1·
Thank you again for the opportunity to testify yesterday, @raqueldancho. I want to be unambiguously clear: Canada is our home and we will always remain headquartered here. As I shared in my testimony, I joined Cohere to help advance Canadian AI leadership, and that won't change.
Raquel Dancho@RaquelDancho

Cohere, a Canadian AI company, received $240M in taxpayer funding. Much of it reportedly went to a data centre built and operated by a U.S. firm. Now, Cohere is reportedly in talks to merge with a German company. I asked if they’re committed to remaining Canadian-controlled. Their response was not definitive. Did the Liberals include sovereignty guarantees in the $240M contract?

English
8
15
109
20.8K
Isabelle Plante@Izzyplante·
Cohere's earlier Aya 101 shipped under Apache-2.0 — actual open source. They've moved in the wrong direction. Meanwhile, Llama 4 and Gemma 3 allow commercial use out of the box. "Open" means something. CC-BY-NC isn't it. 🧵2/2
English
0
0
1
54
Isabelle Plante@Izzyplante·
I've been looking into Cohere. They ship TinyAya "open-weight" under CC-BY-NC-4.0. NC means no commercial use without a separate agreement with Cohere. You can look at the model. You can run it at home. You cannot build a business on it without calling their sales team. 🧵1/2
English
1
0
1
94
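For anyone who wants to verify this kind of license claim themselves, here is a minimal sketch that reads a model repo's license tag through the public huggingface_hub client. The repo id is illustrative (substitute the Cohere model you care about), and the "-nc" substring check is a rough heuristic, not legal advice.

from huggingface_hub import HfApi

api = HfApi()
# Illustrative repo id; swap in the actual model repo you want to check.
info = api.model_info("CohereForAI/aya-101")

# The hosting platform exposes the license as a "license:<id>" repo tag.
license_tags = [t for t in info.tags if t.startswith("license:")]
print(license_tags)

# "-nc" (NonCommercial) clauses mean no commercial use without a
# separate agreement with the rights holder; crude substring heuristic.
restricted = any("-nc" in t for t in license_tags)
print("Commercial use restricted:", restricted)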
Isabelle Plante@Izzyplante·
@rohanpaul_ai Will be interesting to see if these rules against using personal data to create digital humans will apply both inside and outside China.
English
0
0
0
160
Rohan Paul@rohanpaul_ai·
These rules are so much needed. China just proposed legal framework, targeting digital humans with mandatory labels, consent rules, and child-safety limits. A digital human is a software-made person that can look, speak, and interact like a real one, which makes it useful for customer service, entertainment, sales, and education but also easy to mistake for a real person. China’s draft rules try to solve that confusion first by forcing clear labels on all virtual human content so users know when they are dealing with a synthetic identity. The rules also block firms from using someone’s face, voice, or other personal data to build a digital human without permission. The child-protection part is especially direct because it bans virtual intimate relationship services for users under 18 and targets designs that could mislead minors or pull them into compulsive use. --- straitstimes .com/asia/china-moves-to-regulate-digital-humans-bans-addictive-services-for-children
English
29
69
251
22.1K
Isabelle Plante@Izzyplante·
@MarioNawfal Looks amazing, but why is Tesla premiering a large family car in a country with such a low birth rate? Odd mismatch.
English
2
0
3
1.1K
Mario Nawfal@MarioNawfal·
🇯🇵Tesla gave Japan a six-seater that hauls ass with the whole crew. The Model YL rips 0-60 mph in five flat, tops out at 125 mph, and delivers a staggering 490-mile range.
Elon Musk@elonmusk

Tesla

English
95
198
2.8K
293K
LonelySloth@lonelysloth_sec·
Saying LLMs have a hallucination problem is actually a bit misleading. Everything they do is hallucination, except for ipsis litteris text retrieval. They have no ground truth, no rules to extrapolate from known facts, no way to actually verify validity. Sometimes the hallucinations match reality. But you can dream about real events too.
English
4
6
114
9.8K
Abdul Șhakoor@abxxai·
BREAKING: 🚨 Someone just tested 35 AI models across 172 billion tokens of real document questions. The hallucination numbers should end the "just give it the documents" argument forever. Here is what the data actually showed.

The best model in the entire study, under perfect conditions, fabricated answers 1.19% of the time. That sounds small until you realize that is the ceiling. The absolute best case. Under optimal settings that almost no real deployment uses.

Typical top models sit at 5 to 7% fabrication on document Q&A. Not on questions from memory. Not on abstract reasoning. On questions where the answer is sitting right there in the document in front of it. The median across all 35 models tested was around 25%. One in four answers fabricated, even with the source material provided.

Then they tested what happens when you extend the context window. Every company selling 128K and 200K context as the hallucination solution needs to read this part carefully. At 200K context length, every single model in the study exceeded 10% hallucination. The rate nearly tripled compared to optimal shorter contexts. The longer the window, the worse the fabrication gets. The exact feature being sold as the fix is making the problem significantly worse.

There is one more finding that does not get talked about enough. Grounding skill and anti-fabrication skill are completely separate capabilities in these models. A model that is excellent at finding relevant information in a document is not necessarily good at avoiding making things up. They are measuring two different things that do not reliably correlate. You cannot assume a model that retrieves well also fabricates less.

172 billion tokens. 35 models. The conclusion is the same across all of them. Handing an LLM the actual document does not solve hallucination. It just changes the shape of it. --- straitstimes .com style source link omitted in original; see attached media.
English
264
1.3K
4.9K
477.3K
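The percentages in the post above are per-model fabrication rates aggregated across models. A toy sketch of that arithmetic, with made-up counts rather than the study's actual data:

from statistics import median

# (fabricated answers, total answered) per model -- synthetic numbers
results = {
    "model_a": (119, 10_000),
    "model_b": (620, 10_000),
    "model_c": (2_480, 10_000),
}

rates = {m: fab / total for m, (fab, total) in results.items()}
for name, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{name}: {rate:.2%} of answers fabricated")

# "median across all models" is the median of these per-model rates
print(f"median: {median(rates.values()):.2%}")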
Isabelle Plante@Izzyplante·
@Chaos2Cured @gailcweiner @xw33bttv @AndrewCurran_ @Samueljgrim If your goal is to prioritize ease of use and broad adoption without restrictions -> MIT. If you want to enforce openness and community contributions, especially for networked software -> AGPL. The choice depends on whether you value permissiveness (MIT) or reciprocity (AGPL).
English
0
0
0
8
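A practical companion to the MIT-versus-AGPL advice above: before choosing, it helps to know what license obligations your existing dependency tree already carries. A minimal standard-library sketch, with the caveat that the License metadata field is inconsistently populated across packages:

from importlib.metadata import distributions

COPYLEFT_MARKERS = ("GPL", "AGPL", "LGPL")  # crude substring screen

for dist in distributions():
    name = dist.metadata.get("Name") or "unknown"
    lic = dist.metadata.get("License") or "unknown"
    # Flag anything whose declared license looks reciprocal/copyleft.
    if any(marker in lic for marker in COPYLEFT_MARKERS):
        print(f"{name}: {lic}")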
Isabelle Plante@Izzyplante·
@alex_prompter This would be a better and more significant study if they 1) had a larger sample size, and 2) compared scores before and after "learning" (pre- and post-quizzes). Even though they did a "warm-up," that's not a baseline measurement. A one-hour study with only 52 people is underpowered!
English
0
0
1
46
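The sample-size complaint can be made precise with a power calculation. A quick sketch, assuming a two-sided independent-samples t-test at alpha = 0.05 and the d = 0.738 effect reported in the post below (requires statsmodels):

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power actually achieved with ~26 participants per arm (52 total):
achieved = analysis.power(effect_size=0.738, nobs1=26, ratio=1.0, alpha=0.05)
print(f"achieved power at 26 per arm: {achieved:.2f}")

# Participants per arm needed for the conventional 80% power:
needed = analysis.solve_power(effect_size=0.738, power=0.8, alpha=0.05)
print(f"needed per arm for 80% power: {needed:.1f}")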
Alex Prompter@alex_prompter·
Anthropic's own researchers just proved that using AI to learn new skills makes you 17% worse at them. and the part nobody's reading is more important than the headline.

the paper is called "How AI Impacts Skill Formation." randomized experiment. 52 professional developers. real coding tasks with a Python library none of them had used before. half got an AI assistant. half didn't. the AI group scored 17% lower on the skills evaluation. Cohen's d of 0.738, p=0.010. that's a real effect. and here's what makes it sting: the AI group wasn't even faster. no significant speed improvement. they learned less AND didn't save time.

but the viral framing of "AI bad for learning" misses what actually matters in this paper. the researchers watched screen recordings of every single participant. they identified 6 distinct patterns of how people use AI when learning something new. 3 of those patterns preserved learning. 3 destroyed it. the gap between them is enormous. participants who only asked AI conceptual questions scored 86% on the evaluation. participants who delegated everything to AI scored 24%. same tool. same task. same time limit. the difference was cognitive engagement.

the highest-scoring AI users actually outperformed some of the no-AI group. they asked "why does this work" instead of "write this for me." they generated code then asked follow-up questions to understand it. they used AI as a thinking partner, not a replacement for thinking. the lowest-scoring group did what most people do under deadline pressure: pasted the prompt, copied the output, moved on. they finished fastest. they learned almost nothing.

and here's the finding that should concern every engineering manager alive: the biggest score gap was on debugging questions. the skill you need most when supervising AI-generated code is the exact skill that atrophies fastest when you let AI do the work.

the control group made more errors during the task. they hit bugs. they struggled with async concepts. they got frustrated. and that struggle is precisely what built their understanding. errors aren't obstacles to learning. they ARE learning. removing them with AI removes the mechanism that creates competence.

participants in the AI group literally said afterward they wished they'd "paid more attention" and felt "lazy." one wrote "there are still a lot of gaps in my understanding." they could feel the hollowness of having completed something without understanding it. that's not a productivity win. that's debt.

this paper isn't an argument against using AI. it's an argument against using AI unconsciously. Anthropic publishing research showing their own product can inhibit skill formation is the kind of intellectual honesty the industry needs more of. the practical takeaway is simple: if you're learning something new, use AI to ask questions, not to skip the work. the struggle is the product.
English
175
749
3K
195.1K
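For readers unfamiliar with the statistic quoted above, Cohen's d is just the difference between the two group means divided by the pooled standard deviation. A minimal sketch with synthetic scores (the study's per-participant data isn't reproduced in this thread):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(70, 15, 26)   # synthetic no-AI group scores
ai_group = rng.normal(58, 15, 26)  # synthetic AI group scores

def cohens_d(a, b):
    """Mean difference standardized by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

t_stat, p_value = stats.ttest_ind(control, ai_group)
print(f"d = {cohens_d(control, ai_group):.3f}, t = {t_stat:.2f}, p = {p_value:.3f}")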