

Afiq Che Man | Influenza season ig
@mafqcm
🐍 Data science & machine learning. Studying Japanese (日本語). Random stuff / games / anime off-work. Irregular replies sometimes. MPhil @mjiitofficial @utm_my



Who's the PM at @GrabMY who launched the AI chat support? All it does is prompt me to click FAQ pages. It's the most useless chatbot I've used in a while.


Mandatory human-in-the-loop is a cybersecurity cop-out. People are giving agents more and more autonomy. We need solutions that accept that world because there is no stopping it. It's like telling people in the 90s to not use the internet to avoid getting hacked. Good luck.

6-CARD HAND… WHAT COULD GO WRONG?! | Yu-Gi-Oh! Master Duel #yugioh #masterduel



“No weekend” Hey, you only need to work for 3 years to get what amounts to a lifetime pension. Other citizens work for decades to earn a pension. Yes, public service is hard work. But you have plenty of privileges.



We’re excited to introduce KAME: Tandem Architecture for Enhancing Knowledge in Real-Time Speech-to-Speech Conversational AI, accepted at #ICASSP2026! 🐢
Blog: pub.sakana.ai/kame/
Paper: arxiv.org/abs/2510.02327

Can a speech AI think deeply without pausing to process? In real conversation, we don’t wait until we’ve fully worked out what we want to say—we start talking, and our thoughts catch up as the sentence unfolds. Fast speech-to-speech models achieve this, but their reasoning tends to stay shallow. Cascaded pipelines that route through a knowledgeable LLM are smarter, but the added latency breaks the flow—they fall back to "think, then speak."

In our new paper, we propose a way to break this trade-off. We call it KAME (Turtle in Japanese). A speech-to-speech model handles the fast response loop and starts replying immediately. In parallel, a backend LLM runs asynchronously, generating response candidates that are continuously injected as "oracle" signals in real time. This shifts the AI paradigm from "think, then speak" to "speak while thinking."

The backend LLM is completely swappable. You can plug in GPT-4.1, Claude Opus, or Gemini 2.5 Flash depending on the task without changing the frontend. In our experiments, Claude tended to score higher on reasoning, while GPT did better on humanities questions.

Try the model yourself here: huggingface.co/SakanaAI/kame
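The "speak while thinking" tandem pattern above can be sketched with plain asyncio: a fast frontend starts producing output immediately while a slow backend runs in parallel, and the backend's candidate is injected mid-utterance once it arrives. This is a toy illustration under stated assumptions, not the paper's implementation; `backend_llm`, `frontend_s2s`, and all timings are invented placeholders standing in for real models.

```python
import asyncio

async def backend_llm(query: str) -> str:
    # Placeholder for the slow, knowledgeable backend LLM
    # (swappable: GPT, Claude, Gemini, ...).
    await asyncio.sleep(0.1)  # simulated deliberation latency
    return f"detailed answer to '{query}'"

async def frontend_s2s(query: str, oracle: asyncio.Task) -> list[str]:
    """Fast response loop: starts 'speaking' right away, then
    injects the backend's candidate once it becomes available."""
    spoken = []
    for filler in ["Well,", "let me think,", "so,"]:
        if oracle.done():
            break                   # oracle arrived early; switch over
        spoken.append(filler)       # shallow but immediate response
        await asyncio.sleep(0.02)   # simulated speaking time per chunk
    spoken.append(await oracle)     # inject the oracle signal
    return spoken

async def tandem(query: str) -> list[str]:
    # Launch the backend asynchronously; the frontend never waits to start.
    oracle = asyncio.create_task(backend_llm(query))
    return await frontend_s2s(query, oracle)

result = asyncio.run(tandem("why is the sky blue"))
print(result)
```

The key design point is that the frontend checks `oracle.done()` each step instead of blocking on it, so latency to first output stays constant regardless of how slow the backend is.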

"A decade ago, AI was supposed to replace radiologists. Today, radiologists make more than $500,000 per year, and their employment continues to grow, see chart below. Reading scans is a task, not a job, and when the task gets cheaper, demand for the job grows."



Sky Striker / Dortmund WCQ Regional 1st Place 🏆 ✅ ygop.dk/d/708156 #yugioh #yugiohtcg
