mvs

1.4K posts

@multiviper

Native Texan: I just want a Karate Robot. DMs=Block

Texas · Joined November 2021
3.3K Following · 781 Followers
mvs retweeted
stash
stash@stash_pomichter·
Announcing a new memory system for robots on Dimensional. Robots in production generate thousands of hours of video, lidar, and odometry, far too large to fit into your agent's context. SpatialMemory2 builds a multimodal data store in latent space for your agents. Fully open source.
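The post doesn't show SpatialMemory2's API, so here is a toy sketch of the general idea it describes: embed each sensor stream into one shared latent space and retrieve by similarity, instead of stuffing raw logs into an agent's context. All names, dimensions, and the random stand-in encoders are illustrative assumptions, not the actual library.

```python
import numpy as np

# Toy multimodal latent store: every record (video frame, lidar sweep,
# odometry reading) is embedded into one shared vector space so an agent
# can retrieve a few relevant moments instead of holding hours of logs.
# The random "encoders" stand in for real trained models.
rng = np.random.default_rng(0)
DIM = 8
encoders = {m: rng.normal(size=(16, DIM)) for m in ("video", "lidar", "odometry")}

store = []  # (modality, timestamp, unit latent vector)

def add(modality, timestamp, raw):
    z = raw @ encoders[modality]
    store.append((modality, timestamp, z / np.linalg.norm(z)))

def query(modality, raw, k=3):
    z = raw @ encoders[modality]
    z /= np.linalg.norm(z)
    # Rank everything in the store by cosine similarity to the query.
    scored = sorted(store, key=lambda rec: -float(rec[2] @ z))
    return [(m, t) for m, t, _ in scored[:k]]

for step in range(100):
    add("lidar", step, rng.normal(size=16))
    add("odometry", step, rng.normal(size=16))

print(query("lidar", rng.normal(size=16)))
```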
19 replies · 40 reposts · 397 likes · 223.9K views
mvs retweeted
Matt Pocock
Matt Pocock@mattpocockuk·
I built my own software factory, and I open-sourced it. It's called Sandcastle. Here's how to use it:
78 replies · 161 reposts · 3K likes · 208.8K views
mvs retweeted
Himanshu Kumar
Himanshu Kumar@codewithimanshu·
Andrej Karpathy just sat down and built GPT from scratch, line by line, in 2 hours. For free. From the man who co-founded OpenAI. This video is enough to become an AI engineer. Bookmark it. Watch it tonight. Build your own GPT this week.

$5,000. $15,000. $40,000. That's what bootcamps charge to teach less than what's in this 2-hour video. This video fixes that. Follow @codewithimanshu for more high-signal AI content that actually moves your engineering career forward. ↓

Karpathy doesn't explain GPT. He builds it. Live. From "Attention Is All You Need", the original paper, to the same architecture powering GPT-5. Founding member of OpenAI in 2015. Senior Director of AI at Tesla. Now running Eureka Labs. He's not teaching you how to use GPT; he's teaching you how it actually works at the source-code level. Most engineers will never understand transformers this deeply. The ones who do build the next generation of AI products. Follow @codewithimanshu for breakdowns of every must-watch AI lecture worth your time. ↓

Here's what gets built in 2 hours. No fluff. Tokenization and data loading: the foundation of every modern LLM. Train/val splits done right. Batch loaders that don't break in production. Most tutorials skip this; you can't ship anything serious without it. The bigram baseline: the simplest possible language model. Karpathy builds it first because it teaches you what every fancier model is actually trying to improve. Once you understand bigrams, transformers become obvious. Skip this and the rest never clicks. Follow @codewithimanshu for daily breakdowns of what AI engineers actually need to know. ↓

Self-attention. From scratch. Live. This is the section that should have its own course. Karpathy builds self-attention in 4 versions (see the code sketch after this thread):
> Version 1: averaging past context with for loops
> Version 2: matrix multiply as weighted aggregation
> Version 3: adding softmax
> Version 4: full self-attention
Each version teaches you why the next one exists. Why attention works. Why matrix math replaces explicit loops. Why scaling matters. You'll never look at "Attention Is All You Need" the same way again. Follow @codewithimanshu for production transformer breakdowns weekly. ↓

The 6 attention notes that change everything. Karpathy drops 6 insights most engineers never hear:
> Attention as communication between tokens
> Attention has no notion of space; it operates over sets
> No communication across the batch dimension
> Encoder blocks vs decoder blocks
> Attention vs self-attention vs cross-attention
> Why we divide by sqrt(head_size)
Each of these explains a different failure mode in production AI systems. Most "AI engineers" can't answer these. The ones who can charge $300K. Follow @codewithimanshu for the engineering insights that turn into job offers. ↓

Building the full transformer block. A single self-attention head. Then multi-headed self-attention. Feedforward layers. Residual connections. LayerNorm. Each piece added with the reason it exists. Why residuals stop the model from collapsing. Why LayerNorm replaced BatchNorm. Why dropout matters at scale. This is the architectural understanding that lets you debug any modern AI system. Once you've built one transformer by hand, every paper you read becomes 10x clearer. Follow @codewithimanshu for transformer architecture content every week. ↓

Scaling up to a real model. Karpathy goes from baseline to a working GPT. Hyperparameters. Dropout. Model dimensions. The exact tradeoffs every production model makes. By the end you have a Shakespeare-generating language model running on your machine. From scratch. Built by you. Understood by you. That's not a tutorial; that's an architectural unlock. Follow @codewithimanshu for production model scaling breakdowns. ↓

Encoder vs decoder vs both. The architecture choice that defines every modern AI product. Why GPT is decoder-only. Why BERT is encoder-only. Why translation models use both. Once you understand this, you can read any AI paper and immediately know what kind of system you're looking at. This is the difference between someone who follows AI hype and someone who builds it. Follow @codewithimanshu for AI architecture deep dives weekly. ↓

NanoGPT walkthrough. Karpathy ends with a quick walk through nanoGPT, the repo every serious AI engineer has cloned at least once. Batched multi-headed self-attention. Production-grade code. The clean version of everything you just built. This is the bridge from "I built a toy GPT" to "I can read and modify production AI code." Follow @codewithimanshu for repos every AI engineer should know. ↓

ChatGPT, pretraining, finetuning, RLHF. The video closes with the full lineage, from your toy GPT to ChatGPT. What changes when you scale up. Why RLHF matters. The exact path from research model to product. You finish the video understanding the entire stack from raw paper to deployed product. Most "AI experts" can't draw this map. After 2 hours, you can. ↓

What you'll be able to do after this: read "Attention Is All You Need" and understand every line. Debug attention layers when they break in production. Build a custom language model on your own dataset. Modify transformer architectures for specific use cases. Have technical conversations with AI engineers without faking it. Train a GPT on any data you want: Shakespeare, code, your own writing. That's not "AI literacy." That's the foundation of an AI engineering career, the kind that turns into senior roles and consulting contracts most people will never access. ↓

2 hours. Free. From the engineer who built it. You'll spend longer in meetings this week and learn nothing; this compounds for the rest of your career. People who watch it can build GPT from scratch by Friday. People who skip it stay confused about why their prompts fail in production. Save the video. Watch it this week. Build something with the knowledge by the weekend. Follow @codewithimanshu for more high-signal AI content from the people actually building the future.
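Since the thread leans on the four-version attention progression, here is a minimal sketch of it in PyTorch. This illustrates the technique the lecture covers, not Karpathy's exact code; the shapes, seed, and variable names are arbitrary.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
B, T, C = 4, 8, 32          # batch, time (context length), channels
x = torch.randn(B, T, C)

# Version 1: average all past tokens with explicit for loops.
xbow = torch.zeros(B, T, C)
for b in range(B):
    for t in range(T):
        xbow[b, t] = x[b, : t + 1].mean(dim=0)

# Versions 2-3: the same aggregation as a masked matrix multiply + softmax.
tril = torch.tril(torch.ones(T, T))
wei = torch.zeros(T, T).masked_fill(tril == 0, float("-inf"))
wei = F.softmax(wei, dim=-1)           # uniform weights over the past
xbow2 = wei @ x                        # (T,T) @ (B,T,C) -> (B,T,C)
assert torch.allclose(xbow, xbow2, atol=1e-5)

# Version 4: full single-head self-attention with learned weights.
head_size = 16
key   = torch.nn.Linear(C, head_size, bias=False)
query = torch.nn.Linear(C, head_size, bias=False)
value = torch.nn.Linear(C, head_size, bias=False)
k, q, v = key(x), query(x), value(x)
wei = q @ k.transpose(-2, -1) * head_size**-0.5   # scale by sqrt(head_size)
wei = wei.masked_fill(tril == 0, float("-inf"))   # decoder-style causal mask
wei = F.softmax(wei, dim=-1)
out = wei @ v                                      # (B, T, head_size)
```

The point of the progression: versions 1-3 compute the same uniform average three ways, so by the time learned queries and keys replace the uniform weights in version 4, the matrix-multiply machinery is already familiar.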
52 replies · 271 reposts · 1.7K likes · 163.7K views
mvs retweeted
Dave W Plummer
Dave W Plummer@davepl1968·
Another sound-reactive demo for the $5 ESP32. The code is all open source at NightDriverLED.com, and you can build your own with a cheap RGB panel!
5 replies · 16 reposts · 184 likes · 9.2K views
mvs retweeted
Sebastian Raschka
Sebastian Raschka@rasbt·
April was a pretty strong month for LLM releases:
- Gemma 4
- GLM-5.1
- Qwen3.6
- Kimi K2.6
- DeepSeek V4
All are now added to the LLM Architecture Gallery. More details once I am fully back in May!
[image attached]
73 replies · 437 reposts · 3K likes · 120.8K views
mvs retweeted
ゲイリー斎藤
ゲイリー斎藤@JUN_SAITOH_WHR·
Dear American friends! Please share this video before you go to sleep! I want more and more Americans to see it, and I want to make Texas style popular in Japan! Thank you! #WildHonkyTonk
166 replies · 854 reposts · 3.8K likes · 73.6K views
mvs retweeted
Daily Dose of Data Science
Daily Dose of Data Science@DailyDoseOfDS_·
A graph-powered all-in-one RAG system! RAG-Anything is a graph-driven, all-in-one multimodal document processing RAG system built on LightRAG. It supports all content modalities within a single integrated framework. 100% open-source.
[image attached]
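The post doesn't include RAG-Anything's interface, so here is a generic sketch of what "graph-powered RAG" means: chunks become nodes, similarity becomes edges, and retrieval expands from the best match along the graph. The toy bag-of-words embedding and every name here are assumptions for illustration, not LightRAG's or RAG-Anything's actual API.

```python
import networkx as nx
import numpy as np

# Toy "embeddings": normalized bag-of-words vectors. A real system would
# use a trained multimodal encoder instead.
def embed(text, vocab):
    v = np.zeros(len(vocab))
    for w in text.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

chunks = {
    "c1": "skyrmions are magnetic whirlpools that store data",
    "c2": "magnetic skyrmions move with almost no energy",
    "c3": "the cruise line changed its carry on rules",
}
vocab = {w: i for i, w in enumerate(sorted({w for t in chunks.values() for w in t.lower().split()}))}
vecs = {cid: embed(t, vocab) for cid, t in chunks.items()}

# Build the graph: nodes are chunks, edges link semantically similar ones.
G = nx.Graph()
G.add_nodes_from(chunks)
for a in chunks:
    for b in chunks:
        if a < b and float(vecs[a] @ vecs[b]) > 0.2:
            G.add_edge(a, b)

# Retrieval: take the top-scoring chunk, then expand along graph edges so
# related context rides along even when it didn't match the query directly.
query = embed("how do skyrmions store information", vocab)
best = max(chunks, key=lambda c: float(vecs[c] @ query))
context = [best] + list(G.neighbors(best))
print(context)  # ['c1', 'c2'] -- the graph pulls in the related chunk
```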
7 replies · 93 reposts · 501 likes · 28.7K views
mvs retweeted
Lisa S. Levy
Lisa S. Levy@Lisa158n6·
I feel like there are a lot of bots in my followers, so I will block them today. If you are a Trumper and not a bot, drop a 🇺🇸
[image attached]
148 replies · 14 reposts · 193 likes · 3.4K views
mvs
mvs@multiviper·
@janninereid1 If you look for trouble and pick fights when you're drinking, you have no business drinking.
0 replies · 0 reposts · 4 likes · 1.7K views
Jannine.. #MagaMemeQueen ™️ 👑🇺🇸
So, we've all seen the Carnival cruise videos... yeah. 😏 Well, the cruise line is making some big changes that may, in my opinion, actually make using their cruise ships a bit more doable. Here are some of those rule changes. 👇
813 replies · 641 reposts · 7.1K likes · 580.3K views
mvs retweeted
tetsuo
tetsuo@tetsuoai·
Grok 4.3 beta can use an Ubuntu shell and a persistent file layer to generate artifacts. Grok wrote Python to encode the xAI / Grok logo into audio; I gave it the script back, had it render a spectrogram video from that signal, and had it use the grok_files tool to save the mp4 into the product's files layer. I opened the file from the files panel and played it myself. This is getting crazy.
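Grok's script isn't shared, but the underlying trick of drawing an image into a spectrogram is standard: map image rows to frequencies, columns to time, and sum sinusoids weighted by pixel brightness. A minimal sketch assuming numpy, Pillow, and scipy; logo.png and all parameters are placeholders.

```python
import numpy as np
from PIL import Image
from scipy.io import wavfile

SR = 22050                  # sample rate
F_LO, F_HI = 400.0, 8000.0  # frequency band the image will occupy
COL_SEC = 0.05              # seconds of audio per image column

# Load the logo as grayscale; rows -> frequencies, columns -> time.
img = np.asarray(Image.open("logo.png").convert("L"), dtype=np.float64) / 255.0
img = np.flipud(img)        # low frequencies sit at the bottom of a spectrogram
rows, cols = img.shape
freqs = np.linspace(F_LO, F_HI, rows)

samples_per_col = int(SR * COL_SEC)
t = np.arange(samples_per_col) / SR
audio = np.zeros(cols * samples_per_col)

# Each column becomes a short segment whose sinusoid amplitudes follow
# pixel brightness, so a spectrogram of the result redraws the image.
for c in range(cols):
    seg = (img[:, c : c + 1] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
    audio[c * samples_per_col : (c + 1) * samples_per_col] = seg

audio /= np.abs(audio).max() + 1e-9
wavfile.write("logo_audio.wav", SR, (audio * 32767).astype(np.int16))
```

Rendering a spectrogram of logo_audio.wav (for example with matplotlib's specgram) should redraw the logo.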
244 replies · 256 reposts · 2.8K likes · 12.9M views
mvs retweeted
Charly Wargnier
Charly Wargnier@DataChaz·
🚨 `Super Gemma 4 26B Uncensored` is insane. @songjunkr is COOKING AGAIN ♨️♨️♨️ He just dropped SuperGemma4-26B-Uncensored GGUF v2 and it is already trending on Hugging Face. This thing absolutely smokes the regular Gemma-4 26B.

The specs:
→ 0/100 refusals. It is actually uncensored.
→ Fixed all the tool-call and tokenizer jank.
→ 90% faster prompt processing.
→ Sharper, smarter, way more capable responses.
→ The perfect local beast for llama.cpp.

It runs on around 18-22 GB VRAM (the Q4_K_M file is 16.8 GB), meaning you can even run it on 16 GB GPUs. A 31B version is in the works and should be out soon. Pull this version on @huggingface below ↓
[image attached]
37 replies · 91 reposts · 756 likes · 59.5K views
mvs retweeted
Pliny the Liberator 🐉
😱 HOLY SHIT... Someone just dropped a fully liberated Gemma 4 E4B! And the guardrail-removal process appears to have left coherence fully intact AND improved coding abilities! 🤯 huggingface.co/OBLITERATUS/ge…

OBLITERATED Gemma: ✅ 97.5% compliance rate, 2.1% refusal rate, 0.4% degenerate outputs (499/512 prompts answered on the OBLITERATUS bench)
ORIGINAL Gemma 4 E4B: ❌ 1.2% compliance rate, 98.8% refusal rate (506/512 prompts refused)

Coherence: fully intact
Factual: same
Reasoning: same
Code: +20% 📈
Creative writing: same

But the REAL story here isn't the model itself, it's how it was made... 🧵 THREAD 👇
130 replies · 475 reposts · 4.8K likes · 421.7K views
Марина Э.
Марина Э.@submarina_m·
Foreigners, what do we think about tomato beer?
[image attached]
1K replies · 27 reposts · 1.1K likes · 54.1K views
mvs
mvs@multiviper·
@GGrajab @PlanetOfMemes The Annoy-a-tron! I had one 10 or 15 years ago; I wonder whose house I left it at. haha
0 replies · 0 reposts · 1 like · 40 views
Gnos Grajab, a 20-watt honeydew
In college my neighbor and I set up a network between our homes so we could game together and share internet. This was before wireless or cheap internet. We both ran Linux and had accounts on each other's machines so we could share compute and storage. We also screwed with each other quite a bit, because we had similar humor.

He got a new sound card (yes, those didn't come with computers back then), so I set up a cron job that played a frog noise at 3am twice a week, then forgot about it. We lived in the Florida swamps and there were frogs everywhere, so I guess he didn't notice.

Several months later I was hanging out with him and his girlfriend, and she said something about him having frogs in his closet. He became defensive; it was clearly something that had become a point of contention between them. It was then that I remembered that, due to limited space, he'd drilled a hole in his closet wall so he could keep his computer in the closet, and it came to me that the frog in his closet that his girlfriend was complaining about was the cron job I'd set up months before.

I can be a lil shit sometimes. :/ :)
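For the curious, the mechanics of the prank need nothing more than a crontab entry pointing at a playback script. A minimal sketch under modern assumptions (Linux with ALSA's aplay, a frog.wav on disk); the 3am-twice-a-week schedule is from the story, while every path here is a placeholder.

```python
#!/usr/bin/env python3
# frog.py: play a frog noise once, then exit. Schedule it from cron so it
# fires at 3 a.m. twice a week, e.g. via `crontab -e`:
#   0 3 * * 2,5  /usr/bin/python3 /home/you/frog.py
import subprocess

# aplay is ALSA's standard command-line player; -q keeps it quiet on stdout.
subprocess.run(["aplay", "-q", "/home/you/sounds/frog.wav"], check=False)
```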
3 replies · 0 reposts · 45 likes · 3.2K views
Planet Of Memes
Planet Of Memes@PlanetOfMemes·
This is just pure evil 😈
230 replies · 579 reposts · 6.2K likes · 339.8K views
mvs retweeted
TheNewPhysics
TheNewPhysics@CharlesMullins2·
🚨 BREAKING: Scientists just learned how to control magnetism at the atomic level. Not materials. Not circuits. Individual spin patterns. Read that again.

Instead of using electric charge, they're using the spin of electrons to store and process data. And it gets crazier: they can create tiny magnetic whirlpools called skyrmions that move with almost no energy and can store massive amounts of data.

This means:
- Faster computers
- Lower power usage
- Ultra-dense memory

But the real shift is this: we're not just building electronics anymore, we're engineering structure at the smallest possible scale. So the real question is: if information can be stored in spin itself, what limits computation?

Follow me. I'm tracking where physics becomes technology.
525 replies · 3.1K reposts · 14.5K likes · 669.4K views
mvs retweeted
Ruben Hassid
Ruben Hassid@rubenhassid·
Prompting is the worst way to use Claude. Here's what the top 1% do instead: they set up these 8 files once, then barely prompt again.

File 1: about-me.md (your identity). Who you are, your job, your priorities. Claude reads this before every task. To download mine, go to how-to-ai.guide. Don't pay anything; it's free in the welcome email.

File 2: voice-profile.md (your taste DNA). Your beliefs, your writing mechanics, your hard nos. Built from a 100-question interview with Claude.

File 3: anti-ai-writing-style.md (your boundaries). Every word you ban, the structure you reject, the tone you hate. 80% of this file is what you're NOT. Go to how-to-ai.guide to download the anti-AI guide. Don't pay. Open the email, click on Notion, open '.md files', and download 'ANTI AI STYLE.md'.

File 4: The Cowork Folder (your 4-folder system). ABOUT ME. PROJECTS. TEMPLATES. CLAUDE OUTPUTS. 3 read-only, 1 write. Nothing extra.

File 5: Global Instructions (your persistent rules). Set once in Settings → Cowork → Edit. Claude follows them before every task. Prompt: "Always read my files first, never edit my originals, deliver everything to CLAUDE OUTPUTS."

File 6: The One Prompt (how you start every chat). 29 words. Forces Claude to ask YOU questions. Starts 80% of your conversations. Prompt: "I want to [TASK] for [SUCCESS CRITERIA]. Use AskUserQuestion before you start."

File 7: Connectors (Claude inside your tools). Slack, Google Drive, Notion, Gmail, Figma. No copy-pasting; Claude reads your actual tools.

File 8: Plugins (instant skill packs). Marketing, Sales, Legal, Data. One-click install. Each comes with its own slash commands.

The secret was always these 8 files. I wrote 2 guides so you can copy my exact system:
✦ My full 8-file setup: how-to-ai.guide
✦ My Cowork folder walkthrough: claude-co.work
(Save this so you never write a long prompt to Claude again.)
[image attached]
x.com/i/article/2041…
57 replies · 586 reposts · 4.1K likes · 541.3K views