Joe Mayo

5.4K posts

@JoeMayo

Author and Independent Consultant. Recent books: C# Cookbook (O'Reilly), Programming the Microsoft Bot Framework (MSPress) more...

Las Vegas, NV · Joined July 2008
4.7K Following · 15.4K Followers
Pinned Tweet
Joe Mayo@JoeMayo·
New Release: Modem AI is now available on the App Store. Modem AI gives you private AI on your own device. None of your conversations are sent to a server. mayosoftware.com/blog/modem-ai-…
Joe Mayo retweeted
Todd Saunders@toddsaunders·
I know Silicon Valley startups don't want to hear this..... But the combination of someone in the trades with deep domain expertise and Claude Code will run circles around your generic software.

I talked to Cory LaChance this morning, a mechanical engineer in industrial piping construction in Houston. He normally works with chemical plants and refineries, but now he also works with the terminal. He reached out in a DM a few days ago and I was so fired up by his story, I asked him if we could record the conversation and share it.

He built a full application that industrial contractors are using every day. It reads piping isometric drawings and automatically extracts every weld count, every material spec, every commodity code. Work that took 10 minutes per drawing now takes 60 seconds. It can do 100 drawings in five minutes, saving days of time. His co-workers are all mind blown, and when he talks to them, it's like they are speaking different languages.

His fabrication shop uses it daily, and he built the entire thing in 8 weeks. During those 8 weeks he also had to learn everything about Claude Code, the terminal, VS Code, everything.

My favorite quote from him was when he said, "I literally did this with zero outside help other than the AI. My favorite tools are screenshots, step by step instructions and asking Claude to explain things like I'm five."

Every trades worker with deep expertise and a willingness to sit down with Claude Code for a few weekends is now a potential software founder. I can't wait to meet more people like Cory.
Joe Mayo@JoeMayo·
@vyrotek JasonB conquering JSON sounds liberating...if not traumatic
JSONB@vyrotek·
Spent the evening battling JsonSchema serialization and Monaco Editor. Raised my voice at AI several times. I finally won. But at great cost to my sanity.
Warren Whitlock@WarrenWhitlock·
@JoeMayo which is your obsession? Or are you just AI in your head 24/7?
Warren Whitlock@WarrenWhitlock·
What are you obsessed with?
Joe Mayo retweeted
Ayça Baş@aycabs·
I'm excited to share that The IQ Series: Foundry IQ is officially launching tomorrow, weekly episodes dropping every Wednesday 🎉 Every episode includes an executive intro, tech talk, doodle recap and cookbooks on GitHub. Explore the IQ Series today 👉 aka.ms/iq-series
Joe Mayo@JoeMayo·
Amazed at the magical incantations that are the App Store's device, platform, ..., certificates brew?
Joe Mayo retweeted
Nelly;@nrqa__·
10 GitHub repositories that will help you master AI from scratch in 2026:

1. A Free Course on Real-Life Applications of LLMs 👉 github.com/DataTalksClub/…
2. Awesome LLM Apps - a collection of real AI agent apps 👉 github.com/Shubhamsaboo/a…
3. Build a Large Language Model (From Scratch) 👉 github.com/rasbt/LLMs-fro…
4. Generative AI for Beginners 👉 github.com/microsoft/gene…
5. Hands-On Large Language Models 👉 github.com/HandsOnLLM/Han…
6. HuggingFace Transformers 👉 github.com/huggingface/tr…
7. Microsoft AI Agents Course 👉 github.com/microsoft/ai-a…
8. ML & AI Roadmap 2026 👉 github.com/mrdbourke/mach…
9. Prompt Engineering Guide 👉 github.com/dair-ai/Prompt…
10. RAG Techniques 👉 github.com/NirDiamant/RAG…
Nelly;@nrqa__

🚨BREAKING: AI coding agents finally have their own backend. @InsForge_dev 2.0 just turned full-stack infrastructure into something agents can set up with a prompt → databases → auth → realtime → deployment. All handled automatically. Here's how:

Joe Mayo@JoeMayo·
Saturday Night Code: Copilot helping troubleshoot a broken laptop with a failed SSD.
Joe Mayo@JoeMayo·
TFW giving an LLM a thumbs up for a really good answer and it pops up asking for more feedback.
Joe Mayo@JoeMayo·
Been using Codex 5.4 all week and it isn't bad. Used Grok Code Fast 1 too because sometimes it solves problems quickly, especially when I'm confident that it's pretty standard code. Thing with Codex 5.4 is that it churns on hard problems, so I need to pull up Claude Sonnet 4.6 (sometimes 4.5 instead) to bust through.
Joe Mayo retweeted
Om Patel@om_patel5·
stop spending money on Claude Code. Chipotle's support bot is free:
Joe Mayo retweeted
Guri Singh@heygurisingh·
Holy shit... Microsoft open sourced an inference framework that runs a 100B parameter LLM on a single CPU.

It's called BitNet. And it does what was supposed to be impossible. No GPU. No cloud. No $10K hardware setup. Just your laptop running a 100-billion parameter model at human reading speed.

Here's how it works: Every other LLM stores weights in 32-bit or 16-bit floats. BitNet uses 1.58 bits. Weights are ternary, just -1, 0, or +1. That's it. No floats. No expensive matrix math. Pure integer operations your CPU was already built for.

The result:
- 100B model runs on a single CPU at 5-7 tokens/second
- 2.37x to 6.17x faster than llama.cpp on x86
- 82% lower energy consumption on x86 CPUs
- 1.37x to 5.07x speedup on ARM (your MacBook)
- Memory drops by 16-32x vs full-precision models

The wildest part: Accuracy barely moves. BitNet b1.58 2B4T, their flagship model, was trained on 4 trillion tokens and benchmarks competitively against full-precision models of the same size. The quantization isn't destroying quality. It's just removing the bloat.

What this actually means:
- Run AI completely offline. Your data never leaves your machine
- Deploy LLMs on phones, IoT devices, edge hardware
- No more cloud API bills for inference
- AI in regions with no reliable internet

The model supports ARM and x86. Works on your MacBook, your Linux box, your Windows machine. 27.4K GitHub stars. 2.2K forks. Built by Microsoft Research. 100% Open Source. MIT License.
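The ternary idea in the tweet above can be sketched in a few lines. This is an illustrative toy, not Microsoft's BitNet code: the function names and the per-tensor absmean scaling are assumptions, and real BitNet kernels use packed integer arithmetic rather than NumPy floats. It shows why "1.58 bits" appears: each weight takes one of three values, and log2(3) ≈ 1.58.

```python
import numpy as np

def ternary_quantize(w):
    """Map float weights to codes in {-1, 0, +1} plus one scale factor.

    Sketch of absmean-style ternary quantization (an assumption, not the
    exact BitNet recipe): scale by the mean absolute weight, round, clip.
    """
    scale = np.mean(np.abs(w)) + 1e-8           # per-tensor absmean scale
    q = np.clip(np.round(w / scale), -1, 1)     # ternary codes
    return q.astype(np.int8), scale

def ternary_matvec(q, scale, x):
    """y = scale * (Q @ x). With Q in {-1, 0, +1}, the product reduces to
    additions and subtractions of x's entries -- no weight multiplies."""
    return scale * (q.astype(np.float32) @ x)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)  # toy weight matrix
x = rng.normal(size=8).astype(np.float32)       # toy activation vector

q, s = ternary_quantize(w)
y_approx = ternary_matvec(q, s, x)              # ternary approximation
y_exact = w @ x                                 # full-precision reference

# Information per ternary weight: log2(3) ~ 1.58 bits.
bits_per_weight = np.log2(3)
```

On a toy 4x8 matrix the approximation is rough; the tweet's claim is that at LLM scale, with the model trained for ternary weights from the start, the accuracy loss is small while memory and compute drop dramatically.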
Joe Mayo retweeted
Robert Scoble@Scobleizer·
World Models explained. Another one of the @NotebookLM cinematic videos created by the report below. The full notebook is dope. The podcast and slide deck are especially awesome. This explains why @ylecun got a billion bucks to start his company. Full notebook: notebooklm.google.com/notebook/2df5f…
Robert Scoble@Scobleizer

Why did @ylecun get a billion this morning? World Models. So I had my AI write a report on what they are. docs.google.com/document/d/12w…
