MLT & AI Communities

9.9K posts

@__MLT__

ML community and former award-winning nonprofit org working on open and accessible Machine Learning. Led by @suzatweet 🤖🧠

Global · Joined August 2018
709 Following · 11.2K Followers
MLT & AI Communities reposted
Claude
Claude@claudeai·
Code with Claude, our developer conference, returns next week. Whether you're just getting started with Claude Code or you've been building for a while, there's a session for you. Register for the livestream: claude.com/code-with-clau…
334 replies · 832 reposts · 8.5K likes · 701.1K views
MLT & AI Communities reposted
Claude
Claude@claudeai·
Claude Security is now in public beta for Claude Enterprise customers. Claude scans your codebase for vulnerabilities, validates each finding to cut false positives, and suggests patches you can review and approve.
811 replies · 1.9K reposts · 21K likes · 4.6M views
MLT & AI Communities reposted
Sam Altman
Sam Altman@sama·
we're starting rollout of GPT-5.5-Cyber, a frontier cybersecurity model, to critical cyber defenders in the next few days. we will work with the entire ecosystem and the government to figure out trusted access for cyber; we want to rapidly help secure companies/infrastructure.
1K replies · 819 reposts · 12.8K likes · 982.6K views
MLT & AI Communities reposted
clem 🤗
clem 🤗@ClementDelangue·
Great to be included in the @TIME 10 Most Influential AI Companies of 2026! Let's go open-source AI!
[image]
31 replies · 40 reposts · 328 likes · 21.2K views
MLT & AI Communities reposted
Tencent Hy
Tencent Hy@TencentHunyuan·
We're open-sourcing Hy-MT1.5-1.8B-1.25bit — a 440MB translation model that runs fully offline on your phone, supports 33 languages, and outperforms Google Translate. At 1.8B parameters, it matches commercial translation APIs and 235B-scale models on standard benchmarks. By quantizing to 1.25-bit, memory drops from 3.3GB (FP16) to 440MB — 25% smaller and ~10% faster than prior 1.67-bit approaches, with no accuracy loss. Covers 33 languages, 5 dialects, and 1,056 translation directions, including minority languages like Tibetan and Mongolian. Our translation model has won 30 first-place rankings in international MT competitions and is already deployed across multiple Tencent products. 🏆
📲 Demo APK (Android): huggingface.co/AngelSlim/Hy-M…
🤗 Hugging Face: huggingface.co/AngelSlim/Hy-M…
🔗 GitHub: github.com/tencent/AngelS…
📄 Paper: arxiv.org/abs/2601.07892
[media]
59 replies · 273 reposts · 2.2K likes · 132.2K views
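The memory arithmetic behind the quantization claim above can be sanity-checked in a few lines. This is a rough sketch using only parameter count and bit width; real checkpoints (like the quoted 440MB) keep some layers at higher precision, so actual sizes differ from these raw numbers.

```python
# Rough weight-memory arithmetic for a 1.8B-parameter model at different
# bit widths. Raw storage only: embeddings and non-quantized layers in a
# real checkpoint push the on-disk size above these figures.

PARAMS = 1.8e9

def weight_mem_mb(bits_per_param: float, params: float = PARAMS) -> float:
    """Raw weight storage in MB (1 MB = 2**20 bytes)."""
    return params * bits_per_param / 8 / 2**20

fp16 = weight_mem_mb(16)      # close to the quoted 3.3GB FP16 footprint
q125 = weight_mem_mb(1.25)    # raw 1.25-bit weights, before any overhead

print(f"FP16:     {fp16:,.0f} MB")
print(f"1.25-bit: {q125:,.0f} MB")
```

At 16 bits this lands near 3.4GB, consistent with the quoted 3.3GB; at 1.25 bits the raw weights alone are well under the 440MB total, the remainder plausibly coming from higher-precision layers and packaging.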
MLT & AI Communities reposted
Omar Sanseviero
Omar Sanseviero@osanseviero·
Gemma 4 was released just a few weeks ago. Since then, it has been downloaded over 50 million times and there are almost 1500 community-built models based on it. Exciting times ahead!
21 replies · 34 reposts · 481 likes · 32.7K views
MLT & AI Communities reposted
Logan Kilpatrick
Logan Kilpatrick@OfficialLoganK·
Every company building on top of AI should be making their own benchmarks. This is the way if you want model progress to disproportionately benefit your company.
136 replies · 99 reposts · 1.9K likes · 144.5K views
MLT & AI Communities reposted
Google for Developers
Google for Developers@googledevs·
Build for impact. Win from a $200,000 prize pool. 🏆✨ Join the Gemma 4 Good Challenge to create solutions for health, education, global resilience, digital equity, and AI safety. With multiple technical tracks, discover how to scale impact using Gemma 4. Submit your project by May 18 → goo.gle/4eOQCfC
[GIF]
23 replies · 192 reposts · 1.2K likes · 66.4K views
MLT & AI Communities reposted
Mistral AI
Mistral AI@MistralAI·
🆕 Today, we're releasing the public preview of Workflows, the orchestration layer for enterprise AI. 🌎 Enterprise teams have capable models. What they don't have is a way to run them reliably in production. That's the gap Workflows fills. It takes AI-powered business processes from prototype to production, with the durability, observability, and fault tolerance that production actually requires. Leading organisations like ASML, ABANCA, CMA-CGM, France Travail, La Banque Postale, Moeve, and many others are already using Workflows to automate critical processes.
76 replies · 246 reposts · 2K likes · 265.3K views
MLT & AI Communities reposted
Qwen
Qwen@Alibaba_Qwen·
🚀 Introducing FlashQLA: high-performance linear attention kernels built on TileLang.
⚡ 2–3× forward speedup. 2× backward speedup.
💻 Purpose-built for agentic AI on your personal devices.
💡 Key insights:
1. Gate-driven automatic intra-card CP.
2. Hardware-friendly algebraic reformulation.
3. TileLang fused warp-specialized kernels.
FlashQLA boosts SM utilization via automatic intra-device CP. The gains are especially pronounced for TP setups, small models, and long-context workloads. Instead of fusing the entire GDN flow into a single kernel, we split it into two kernels optimized for CP and backward efficiency. At large batch sizes this incurs extra memory I/O overhead vs. a fully fused approach, but it delivers better real-world performance on edge devices and long-context workloads. The backward pass was the hardest part: we built a 16-stage warp-specialized pipeline under extremely tight on-chip memory constraints, ultimately achieving 2×+ kernel-level speedups. We hope this is useful to the community! 🫶🫶
Learn more:
📖 Blog: qwen.ai/blog?id=flashq…
💻 Code: github.com/QwenLM/FlashQLA
[image]
32 replies · 152 reposts · 1.3K likes · 135.6K views
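For readers unfamiliar with what such kernels accelerate, here is a minimal NumPy reference for the linear-attention recurrence itself — not the TileLang kernel, and the feature map `phi` is an illustrative choice. Per step, a running state accumulates k·vᵀ, so the cost is O(N) in sequence length instead of the O(N²) of softmax attention.

```python
import numpy as np

def linear_attention(Q, K, V):
    """Causal linear attention via a running-state recurrence (O(N) in N)."""
    N, d = Q.shape
    phi = lambda x: np.maximum(x, 0) + 1e-6      # simple positive feature map
    S = np.zeros((d, V.shape[1]))                # running sum of outer(k, v)
    z = np.zeros(d)                              # running sum of k (normalizer)
    out = np.zeros_like(V)
    for t in range(N):
        q, k = phi(Q[t]), phi(K[t])
        S += np.outer(k, V[t])                   # fold step t into the state
        z += k
        out[t] = (q @ S) / (q @ z)               # normalized attention output
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
print(linear_attention(Q, K, V).shape)  # (8, 4)
```

A production kernel parallelizes and pipelines this recurrence across chunks; the sequential loop here is only to make the O(N) state update visible.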
MLT & AI Communities reposted
hardmaru
hardmaru@hardmaru·
For years, voice AI has been stuck in a rigid loop: think, then speak. But real human conversation is messy, overlapping, and asynchronous. In our new #ICASSP2026 work, we built a tandem architecture that shifts the paradigm to “speak while thinking.” A fast speech model starts replying instantly, while a backend LLM runs in parallel to inject deep knowledge on the fly. It’s a completely different way to approach conversational AI, making it feel remarkably more alive. Blog: pub.sakana.ai/kame/ 🐢
Sakana AI@SakanaAILabs

We're excited to introduce KAME: Tandem Architecture for Enhancing Knowledge in Real-Time Speech-to-Speech Conversational AI, accepted at #ICASSP2026! 🐢
Blog: pub.sakana.ai/kame/
Paper: arxiv.org/abs/2510.02327
Can a speech AI think deeply without pausing to process? In real conversation, we don't wait until we've fully worked out what we want to say—we start talking, and our thoughts catch up as the sentence unfolds. Fast speech-to-speech models achieve this, but their reasoning tends to stay shallow. Cascaded pipelines that route through a knowledgeable LLM are smarter, but the added latency breaks the flow—they fall back to "think, then speak."
In our new paper, we propose a way to break this trade-off. We call it KAME (turtle in Japanese). A speech-to-speech model handles the fast response loop and starts replying immediately. In parallel, a backend LLM runs asynchronously, generating response candidates that are continuously injected as "oracle" signals in real time. This shifts the AI paradigm from "think, then speak" to "speak while thinking."
The backend LLM is completely swappable. You can plug in GPT-4.1, Claude Opus, or Gemini 2.5 Flash depending on the task without changing the frontend. In our experiments, Claude tended to score higher on reasoning, while GPT did better on humanities questions.
Try the model yourself here: huggingface.co/SakanaAI/kame

23 replies · 67 reposts · 579 likes · 83.9K views
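The "speak while thinking" flow described above can be sketched as a small asyncio program: a fast responder starts streaming output immediately while a slower backend task computes in parallel and hands over its answer asynchronously. The function names, timings, and injected text here are hypothetical illustrations, not KAME's actual API.

```python
import asyncio

async def fast_responder(hint: asyncio.Future, out: list):
    # Start "speaking" immediately, without waiting for the deep answer.
    for word in ["Well,", "that's", "a", "good", "question..."]:
        out.append(word)
        await asyncio.sleep(0.01)
    deep = await hint                    # deep knowledge arrives asynchronously
    out.append(deep)                     # ...and is folded into the reply

async def backend_llm(hint: asyncio.Future):
    await asyncio.sleep(0.03)            # stands in for slow, knowledge-heavy reasoning
    hint.set_result("The answer is 42.")

async def converse() -> list:
    out: list = []
    hint = asyncio.get_running_loop().create_future()
    # Both run concurrently: the responder never blocks on the backend to begin.
    await asyncio.gather(fast_responder(hint, out), backend_llm(hint))
    return out

reply = asyncio.run(converse())
print(" ".join(reply))
```

The design point is that latency to first word depends only on the fast path; the backend's result is merged whenever it lands, mirroring the tandem split the paper describes.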
MLT & AI Communities reposted
Suzana Ilić
Suzana Ilić@suzatweet·
Also looking for motivated, hands-on engineers, from Senior SDE to Principal SDE in Redmond, the Bay Area, and Bangalore. Please share with anyone who might be a good fit. linkedin.com/posts/suzanail…
0 replies · 1 repost · 8 likes · 447 views
MLT & AI Communities reposted
Anthropic
Anthropic@AnthropicAI·
New on the Science Blog: We gave Claude 99 problems analyzing real biological data and compared its performance against an expert panel. On 23 problems, the experts were stumped. Our most recent models solved roughly 30% of those 23—and most of the remaining 76.
[images]
195 replies · 235 reposts · 2.4K likes · 356.2K views
MLT & AI Communities reposted
poolside
poolside@poolsideai·
Today we're releasing Laguna XS.2, Poolside's first open-weight model. It's a 33B total / 3B active MoE model built for agentic coding and long-horizon tasks. Trained fully in-house on our own stack. Runs on a single GPU. Released under Apache 2.0.
Links 👇
Weights: huggingface.co/poolside/Lagun…
API: platform.poolside.ai
Blog: poolside.ai/blog/laguna-a-…
[image]
44 replies · 142 reposts · 800 likes · 250.4K views
MLT & AI Communities reposted
AI Highlight
AI Highlight@AIHighlight·
🚨BREAKING: Anthropic just published a study mapping exactly which jobs its own AI is replacing right now.
The workers most at risk are not who anyone expected. They are older. They are more educated. They earn 47% more than average. And they are nearly four times more likely to hold a graduate degree than the workers AI is not touching.
The argument is straightforward. Anthropic built a new metric called "observed exposure." Not what AI could theoretically do. What it is actually doing right now in professional settings, measured against millions of real Claude conversations from enterprise users.
For computer and math workers, AI is theoretically capable of handling 94% of their tasks. It is currently handling 33% of them. For office and administrative roles, theoretical capability is 90%. Current observed usage is 40%. The gap between what AI can do and what it is already doing is enormous. The researchers are explicit about what comes next. As capabilities improve and adoption deepens, the red area grows to fill the blue.
The demographic finding is what makes the paper uncomfortable. The most AI-exposed workers earn 47% more on average than the least exposed group. They are more likely to be female. They are more likely to be college educated. This is not a story about warehouse workers or truck drivers. It is a story about lawyers, financial analysts, market researchers, and software developers. The exact group whose education was supposed to insulate them.
Computer programmers showed the highest observed AI exposure at 74.5%. Customer service representatives at 70.1%. Data entry keyers at 67.1%. Medical record specialists at 66.7%. Market research analysts and marketing specialists at 64.8%. These are not predictions. These are measurements of work that is already happening on AI platforms right now.
Then there is the pipeline finding nobody is talking about loudly enough. Anthropic's researchers found a 14% decline in the job-finding rate for workers aged 22 to 25 in highly exposed occupations since ChatGPT launched. No comparable effect for workers over 25. Entry-level roles were never just jobs. They were the training ground where junior analysts became senior analysts, where junior lawyers learned how arguments hold together. If that layer disappears, nobody has answered the question of where the next generation of senior professionals comes from.
The detail buried in the paper that most coverage missed: 30% of American workers have zero AI exposure at all. Cooks. Mechanics. Bartenders. Dishwashers. The technology reshaping professional careers is completely irrelevant to roughly a third of the workforce. The divide is no longer between high skill and low skill. It is between presence and absence.
The company publishing this study is the same company selling the AI doing the replacing. Anthropic had every commercial incentive to soften these findings. They published them anyway.
If you spent four years and $200,000 on a degree to land a white collar career, the company that builds Claude just confirmed your job is more exposed than the bartender pouring drinks at your graduation party.
Source: Anthropic, "Labor market impacts of AI: A new measure and early evidence"
PDF: anthropic.com/research/labor…
[image]
262 replies · 1.5K reposts · 4.4K likes · 819.4K views
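The capability-versus-usage gap the thread quotes can be reproduced directly from its own figures. This is a back-of-envelope sketch using only the numbers in the post, not the paper's methodology.

```python
# "theoretical" is the share of tasks AI could handle; "observed" is the share
# it is already handling in enterprise Claude usage, per the figures quoted above.

groups = {
    "computer & math":         {"theoretical": 0.94, "observed": 0.33},
    "office & administrative": {"theoretical": 0.90, "observed": 0.40},
}

for name, g in groups.items():
    gap = g["theoretical"] - g["observed"]
    print(f"{name}: {gap:.0%} gap between capability and current usage")
```

The gap (61 and 50 percentage points respectively) is the "red area grows to fill the blue" headroom the researchers describe.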
MLT & AI Communities reposted
Arthur Zucker
Arthur Zucker@art_zucker·
Reading @deepseek_ai's v4 paper.... absolute hats off. Every problem has a mathematical solution, nothing is left to chance. I have so much respect for them, putting out months or years of effort entirely for free, in the open for anyone to benefit. Real goats 🫡
74 replies · 380 reposts · 4.6K likes · 249.8K views
MLT & AI Communities reposted
Riley Brown
Riley Brown@rileybrown·
Learn 95% of Codex in 28 minutes. These are the 7 knowledge work capabilities inside Codex, the super-app.
00:00 Intro
02:19 Capability 1 - Full File Access
07:41 Capability 2 - Persistent Memory
10:46 Capability 3 - Plugins
13:52 Capability 4 - Skills
19:22 Capability 5 - GPT Image Access
21:03 Capability 6 - Browser and Computer Use
23:58 Capability 7 - Automations
25:31 Bonus Feature - Chronicle
27:21 Summary
88 replies · 367 reposts · 3.6K likes · 716.2K views
MLT & AI Communities reposted
Ronak Malde
Ronak Malde@rronak_·
My takeaways from ICLR 2026
1. Recursive self-improvement / continual learning is the next frontier of research. Several great papers on self-distillation, auto agent-harness optimization, learning from non-verifiable reward, and self-play are early signs of success.
2. Multimodal models and world models are attaining emergent reasoning capabilities, opening a new door to spatial understanding that was previously locked.
3. Lots of concerns that the research community is currently too focused on benchmaxxing rather than improving the research process, and a call to action to address this, like Percy Liang's fully open-source training community.
4. Rio is possibly even better than San Diego 🇧🇷🏄
31 replies · 132 reposts · 1.5K likes · 84.2K views