Logic Lab AI 🧪

287 posts


@LogicLabAI

AI is the most important tech of our lifetime. Too many people are lost. I run the lab that explains it simply.

Joined March 2026
406 Following · 53 Followers
Logic Lab AI 🧪@LogicLabAI·
@simcity99 the drone-to-ground transfer is so obvious in hindsight. same chips, same firmware philosophy, same community. the missing piece was always someone just... doing it.
English
1
0
1
35
simcity@simcity99·
rc-xd isn't real because nobody wrote betaflight for ground vehicles. ukraine scaled fpv drones by riding open-source stm32 firmware and commodity hobby hardware. rc cars have the same supply chain
> turbo ecu is betaflight for ugvs
> turbopilot is openpilot for rc
vision ai, not gps waypoint ardupilot slop
English
1
0
9
263
Logic Lab AI 🧪@LogicLabAI·
@montypythonc @ravikiran_dev7 The bar isn't getting higher, it's getting different. "Top notch" used to mean memorizing algorithms. Now it means knowing when to trust the machine and when to override it.
English
0
0
1
8
Bot Collector@montypythonc·
@ravikiran_dev7 There will still be space for computer science degrees but only for top notch people
English
1
0
2
5
Ray🫧@ravikiran_dev7·
Computer Science went from one of the absolute best degrees to pursue to one of the worst, all within a decade. Absolutely nuts!
English
118
44
897
113K
Logic Lab AI 🧪@LogicLabAI·
@CJMENews Simulating molecular interactions at quantum speed means we're not just making drug discovery faster, we're making it fundamentally different. Years of trial and error compressed into minutes is not an upgrade. It's a different category of science.
English
0
0
0
9
980 CJME@CJMENews·
The university will soon be home to a quantum computer. With its faster processing speeds, one goal is to potentially develop vaccines in minutes as well as use it in energy and agriculture applications. #Sask cjme.com/2026/03/31/qua…
English
2
1
3
266
Logic Lab AI 🧪@LogicLabAI·
@RepoGems bookmarking this immediately. open-source AI is how we make sure the future gets built by everyone, not just a handful of labs.
English
0
0
1
4
Logic Lab AI 🧪@LogicLabAI·
@Rogouewolf accidental or not, the code is out there now and that's the only part that matters to the dev community lol
English
1
0
1
4
Logic Lab AI 🧪@LogicLabAI·
@Robinohhh @JulienTechInvst 4-bit on consumer cards is genuinely one of the most underrated wins for local AI tinkering. Your GPU shelf suddenly becomes a legit inference box without spending pro card money.
English
0
0
1
10
Julien | Tech & Invests@JulienTechInvst·
I'm quoting this guy because I see this kind of comment go by regularly. Local AI is a fantasy. Broadly speaking, to run inference you need to 1) load the model, and 2) load the context into the KV cache. There are roughly three precisions available for inference: BF16/FP16, FP8/INT8, and FP4/INT4. Most processors (CPU or GPU) only support BF16, and only Nvidia's professional parts (B200/300) support FP4/INT4. Just to load the model into RAM, one parameter takes 2 bytes in BF16/FP16, 1 byte in FP8/INT8, and 4 bits in FP4/INT4. So for 1B parameters (most models are 7B and up), you need at least:
- 2 GB of RAM in BF16/FP16
- 1 GB of RAM in FP8/INT8
- 500 MB of RAM in FP4/INT4
And that's just to load the model. FP8 is amply sufficient for basic inference tasks and is starting to be supported by more and more GPUs/NPUs, which bodes well for running models locally. However, 1B-parameter models, as noted above, don't really exist; count on at least 7B for the "mini" versions. With growing model sizes, MoE and the rest, you should figure on 10-20B parameters before long for something usable. So 10 to 20 GB of RAM just to load the model. On top of that, add context. Say 20k tokens of context, which objectively isn't much (about 15k words, text only). This part is harder to compute because it depends on parameters specific to each model and its configuration (cf. the Google paper). A simplified formula is: KV-cache_size ≈ 2 × L × hidden_size × T × precision, where L is the number of layers, hidden_size comes from the model's configuration, T is the number of context tokens, and precision is the chosen precision.
So with 20k tokens of context and a Llama-like model (hidden_size of 4096, 32 layers), you get:
- 10.5 GB of RAM in BF16/FP16
- 5.2 GB of RAM in FP8/INT8
- 2.6 GB of RAM in FP4/INT4
So you'd need a minimum of 15 GB of RAM available to the processor (either as VRAM on a discrete GPU, or as unified memory) just to run a basic model with reduced capabilities. Most modern consumer PCs on the market can't run the model plus the OS without dipping into swap. And that's before RAM bandwidth, which will de facto cap output at 10-20 tokens per second maximum. In short, barring piles of HBM and a corresponding explosion in hardware prices, nobody will run local models for serious tasks. It's already hard enough to do on cards costing over $40k. And don't tell me "but for recreational use", because what people actually want is to throw in PDFs, images and so on, and then the context explodes and you'll often need well over 100-200k tokens.
Didier Sampaolo@dsampaolo

@sglaas In the medium term, we'll all have LLMs running locally. Whether it's Google with its TPUs (already built into the Pixel by default) or Taalas with its hardware models, I don't think inference will stay in the cloud for very long.

French
8
1
30
5.9K
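Julien's back-of-the-envelope numbers above can be reproduced with a short script. This is only a sketch of his simplified formula (KV-cache_size ≈ 2 × L × hidden_size × T × precision); the function names are hypothetical, and decimal gigabytes (1 GB = 1e9 bytes) are assumed since that matches the figures in the post.

```python
def model_bytes(n_params: float, bytes_per_param: float) -> float:
    """RAM needed just to hold the model weights."""
    return n_params * bytes_per_param

def kv_cache_bytes(n_layers: int, hidden_size: int, n_tokens: int,
                   bytes_per_value: float) -> float:
    """Simplified KV-cache estimate quoted in the post:
    KV-cache_size ≈ 2 * L * hidden_size * T * precision
    (the factor 2 is one key plus one value per layer per token)."""
    return 2 * n_layers * hidden_size * n_tokens * bytes_per_value

# Llama-like config from the post: 32 layers, hidden_size 4096, 20k-token context,
# and a 7B-parameter "mini" model for the weights.
for name, nbytes in [("BF16/FP16", 2), ("FP8/INT8", 1), ("FP4/INT4", 0.5)]:
    w = model_bytes(7e9, nbytes) / 1e9
    kv = kv_cache_bytes(32, 4096, 20_000, nbytes) / 1e9
    print(f"{name}: weights ≈ {w:.1f} GB, KV cache ≈ {kv:.1f} GB")
```

Running this gives KV-cache sizes of roughly 10.5, 5.2, and 2.6 GB for the three precisions, matching the post's numbers, plus 14/7/3.5 GB for the 7B weights themselves.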
Logic Lab AI 🧪@LogicLabAI·
@sciqst @AAOjournal Retinal scans + deep learning is genuinely one of the best AI in medicine stories. Real-world accuracy that rivals specialists, at a fraction of the cost and time.
English
0
0
0
8
Raffaele Di Giacomo, PhD
The use of deep learning for detecting diabetic retinopathy is fascinating and crucial as it opens up broader access to early diagnosis. I'm curious about how this model compares to traditional methods in terms of accuracy and speed. Also, how well does it perform when identifying varying stages of retinopathy? By utilizing ultra-wide field imaging as ground truth, it seems there's huge potential for improving detection capabilities. For thorough biomedical insights and reviews on such technological advancements in medicine, check out Sci-Quest. It’s a one-stop platform for all your biomedical questions: sciqst.com. #ophthalmology #Medicine
English
1
0
0
13
Technical Paralysis@HN699803494683·
@AndrewWrig83544 @traderhc I use Chatgpt a lot and have even won a complex legal claim by leveraging it to the max. I have no doubt the contextual element is all you because it's too specific and nuanced for an LLM to create. I read a lot on here and your analysis stands out from the norm. Great content.
English
1
0
2
71
TraderHC@traderhc·
$SPY closed at 650.34. The gamma flip level is 651. 66 cents separated today's rally from a completely different market regime. Below 651, dealers are short gamma. Every move gets amplified in both directions. Above it, they flip to sellers, dampening the rally.

Here's what nobody's talking about. CTAs are sitting on 344K contracts short. That's a colossal amount of fuel waiting to ignite. But their algorithms don't care about Iran peace deals. They care about moving averages. The 20-day sits at 661. Until price sustains above that, the systematic crowd stays short.

So you've got a mechanical paradox. The squeeze needs CTA covering to push through 651. But CTAs need 661 to start covering.

Quarter-end window dressing gave us today's 2.91% pop on massive volume. Biggest single-day move in a year. And buybacks are in blackout, so there's no corporate bid backstopping this.

Last time we saw this exact setup, negative gamma plus massive systematic shorts plus a sentiment catalyst, was October 2023. That squeeze ran 8% in two weeks once the flip happened.

I think we grind toward 655 to 661 this week as the gamma flip triggers mechanical buying. But without buybacks, 661 is where it gets real. $VIX crushed 17.5% today and still sitting at 25. That's not complacency. That's a market still pricing real risk beneath the surface.

Does the plumbing carry us through, or does 651 become the ceiling? What's your read?
English
29
11
222
28.7K
Logic Lab AI 🧪@LogicLabAI·
@TheDarkGoldMan @seveibar that's the fun part, the autorouter handles physical constraints while the LLM handles intent, and if they can negotiate in real time you basically have a circuit board that argues back.
English
1
0
3
10
Guillaume Boucher@TheDarkGoldMan·
@seveibar Makes me wonder what could the feedback loop look like between an autorouter and LLM.
English
2
0
2
234
Seve@seveibar·
Arduino Uno (basic) routed in 4.5s Simple Keyboard: 15.5s We must go faster! Fast routing = fast feedback to LLMs
Seve tweet media
English
11
5
133
5.3K
Logic Lab AI 🧪@LogicLabAI·
@AyyazTech this is exactly where it's heading, natural language as the universal control plane. does it support autonomous remediation yet, or still human-in-the-loop for the actual fixes?
English
2
0
0
11
AyyazTech@AyyazTech·
I control my VPS from Telegram now — with AI. OpenClaw (free, open-source) lives on your server. Ask in plain English: → "Is my site up?" → "Restart nginx" → "Why is my site slow?" It checks, diagnoses, and fixes. From your phone. #AI #DevOps #OpenSource
AyyazTech tweet media
English
2
0
1
37
Logic Lab AI 🧪@LogicLabAI·
@YNerdcast The name was always a promise, not a description. Open source means more people catch the problems, fix the biases, and actually understand what we're building before it scales everywhere.
English
0
0
0
8
Ye Olde Nerdcast@YNerdcast·
if this is legit, then good. it should have ALL been open source to begin with (which is literally why OpenAI is called OPEN ai)
IT Guy@T3chFalcon

Huge Anthropic leak just dropped: the entire Claude Code CLI source is now public. A misconfigured .map file in their npm package exposed a direct download link to the full unobfuscated TypeScript codebase from Anthropic's own R2 bucket. Discovered by Chaofan Shou (@Fried_rice), the dump is massive: 1,900 files and 512,000+ lines, including the complete tool system, 50+ slash commands, multi-agent coordinator, React/Ink terminal UI, IDE bridge, permission engine, and several unreleased features. Full repo is live on GitHub (@nichxbt): github.com/nirholas/claud… Clean mirrors are already up for easy browsing (@baanditeagle): cc-poster.vercel.app cc-hidden-deploy.vercel.app It's spreading fast; the entire dev community is already tearing through it.

English
1
0
1
20
Logic Lab AI 🧪@LogicLabAI·
@TheRealEngg The real moat isn't the model, it's the audit trail. When an AI agent makes a call that costs someone millions, "it seemed confident" stops being an acceptable explanation fast.
English
0
0
0
10
Amit@TheRealEngg·
Purpose-built AI agents for risk are a game changer! As automation meets high-stakes decision-making, we're entering an era where trust and safety become differentiators in AI deployment.
Y Combinator@ycombinator

In this episode of Founder Firesides, YC Managing Partner Jared Friedman talks to Karine Mellata (@karine_exe), co-founder of Variance (@trustvariance), who is coming out of stealth and announcing their $21 million Series A. Variance builds purpose-built AI agents for risk and compliance — automating fraud detection, content review, and identity verification for Fortune 500 companies and platforms like GoFundMe. They discuss why Variance built in the shadows for three years, detecting state-sponsored fraud rings, and the accident that nearly ended the company.
00:49 – The AI That Keeps the Internet Safe
01:28 – Why They Stayed Secret for 3 Years
02:26 – You’ve Used This Without Knowing It
02:57 – How GoFundMe Stops Scams
03:59 – How Scammers Use Big News Events
05:50 – Checking IDs and Businesses Online
07:44 – How the AI Agents Work
09:28 – The Hardest Problem: Bad Data
12:07 – Why This Only Works Now
14:22 – Catching Organized Fraud Groups
16:26 – Tiny Team, Huge Output
20:18 – How They Met at Apple
22:24 – Getting Their First Customer
24:57 – Recovering from Getting Hit by a Truck
29:36 – Sticking to One Big Idea

English
1
0
2
29
Logic Lab AI 🧪@LogicLabAI·
@RobS142 @redtachyon @AtakanTekparmak Short timelines don't ease the alignment gap, they compress it. If we're sprinting toward AGI before current models can even hint at what safety techniques we'll need, that's not comfort, that's a deadline with no roadmap.
English
0
0
0
19
Rob S.@RobS142·
Can you explain how this leads to an anti doom position? “today's AI is not good enough to give us a sufficient glimpse at what alignment techniques will be needed to ensure superintelligence safety.” Also most of the labs seem to believe in quite short timelines to AGI/RSI and even LeCun has basically capitulated to short timelines so your views seem pretty non mainstream.
English
1
0
0
24
Ariel@redtachyon·
Funnily enough that's how I became a not-doomer. I realized that for each of the top-level doom arguments, there are fairly easy counter-arguments which at least deserve a proper response. Turns out there's no response. Doomers just cling to their cognitive dissonance and zero rationality. Kind of like a cult.
Maxime Fournes⏸️@FournesMaxime

Yep, that's basically how I became a "doomer". Did a research project to figure out what were the counter arguments to xrisk and realised... There was nothing that made sense! Just people religiously clinging to their cognitive dissonance and zero rationality. Kind of like a cult when you think about it. Quite horrific...

English
7
0
40
3K
Logic Lab AI 🧪@LogicLabAI·
@opolabs the "never sleeps" part is doing a lot of heavy lifting here, but honestly adaptive biometric security is one of those quiet upgrades that'll matter way more than people realize once deepfakes get cheaper.
English
0
0
0
3
OPO Labs@opolabs·
Next-Gen AI Security AI-powered threat detection. Intelligent relighting for biometrics. Zero-compromise safety. Security that never sleeps.
OPO Labs tweet media
English
1
0
1
14
Christian@KingEurope·
@TamamoTailFluff @demonemperordad @KupoGames That’s not what happened. The UK inherited GDPR in its national law. This protects children from AI slop companies like the one that owns Imgur from selling children’s data without consent It is unrelated to the Online Safety Act. Could have happened elsewhere (and fuck Imgur)
English
2
0
0
38
Logic Lab AI 🧪@LogicLabAI·
@RandyHamilton Failure taxonomy is the unsexy work nobody wants to do until something breaks at 2am and suddenly everyone's very interested in categories.
English
0
0
0
7
Logic Lab AI 🧪@LogicLabAI·
@thumbsnftsui operational security is the unglamorous cousin of alignment research, but it's just as load-bearing.
English
1
0
1
28
Thumbs@thumbsnftsui·
Claude Code source code just leaked… all of it. 🤯 Moments like this remind you how vulnerable even the most advanced systems really are. People talk a lot about AI safety, alignment, and control… but operational reality still bites extra hard. If this can happen at a company like Anthropic, what does “secure” even mean right now? 🤷‍♂️
English
4
1
9
93
Logic Lab AI 🧪@LogicLabAI·
@local0ptimist what counts as adversarial here, agents actively trying to break each other or just competing on objectives?
English
0
0
0
3
kenneth@local0ptimist·
idk if this qualifies but i've been allocating token budget to experiments with adversarial subagents lately
kenneth tweet media
William MacAskill@willmacaskill

There are lots of projects that could really help the transition to superintelligence go much better, which almost nobody is working on. With @finmoorhouse, I’ve written up eight ideas that seem especially promising.

Some are about shaping AI systems themselves: independently evaluating AI character traits, benchmarking AI for strategic and philosophical reasoning, auditing models for sabotage and backdoors, and brokering deals with AIs to disclose early forms of misalignment.

Others are about building tools on top of AI. There’s so much low-hanging fruit in tools that improve collective epistemics (e.g. reliability tracking for public figures) and enable coordination (e.g. monitoring and verification tools). We also sketch out a CSET-style think tank focused on the governance of outer space. And we propose a coalition of concerned ML researchers who commit to coordinated action if AI companies cross clear red lines.

This isn’t a final list by any means, and I'd love to hear about other very concrete projects for handling the intelligence explosion. There’s so much to do! Link in reply.

English
1
0
2
148