Programming eBooks

139 posts


@ProgrammingCCC

Python, C#, TypeScript, Swift: from foundations to agents, LLMs, LLMOps, data science, and more

Joined April 2023
52 Following · 9 Followers
merve @mervenoyann
future is local 🔥 Google DeepMind just released Gemma 4: local frontier in many sizes, all modalities with free license 🤯 we ship Gemma 4 in transformers, llama.cpp, transformers.js and more for your convenience 🫡 plug-and-play with your agents 🙌🏻 read our blog ⤵️
Programming eBooks @ProgrammingCCC
New ebook: Apple SwiftUI for AI Apps. Building reactive, intelligent interfaces that respond to model outputs, stream tokens, and visualize AI predictions in real time. leanpub.com/SwiftUIforAIAp…
Programming eBooks @ProgrammingCCC
New ebook. Apple Swift: Natural Language & Speech. NLP, sentiment analysis, text classification, and Speech-to-Text with Apple's Natural Language and Speech frameworks. leanpub.com/NaturalLanguag…
Cheikh Seck @cheikhshift
@ProgrammingCCC That's actually impressive - Chinese AI labs are really stepping up their game lately!
Programming eBooks @ProgrammingCCC
New ebook available: Apple Intelligence & Foundation Models. Building apps with Apple's on-device LLM APIs, Writing Tools, and the Apple Intelligence framework: leanpub.com/AppleIntellige…
Programming eBooks @ProgrammingCCC
@XFreeze Grok 4.20 hitting #1 with 78% is a massive flex. It’s officially the end of the 'hallucination era.' Seeing GPT-5.4 (xhigh) and Claude 4.6 getting gapped like this shows xAI’s 'Truth-First' approach is scaling way better than the competition. The leaderboard just got a new king.
X Freeze @XFreeze
Most AI models hallucinate more than you'd think and make up stuff that doesn't exist. Grok 4.20 just ranked #1 in Non-Hallucination Rate with a 78% score, beating Claude Opus 4.6 (max), Gemini 3.1, GPT-5.4 (xhigh), and every other model on the list. xAI is quietly winning the accuracy game, and it's built to be truthful.
Programming eBooks @ProgrammingCCC
@thisguyknowsai This is the core of the JEPA vs. Generative debate. While everyone is obsessed with 'pixel-perfect' generation, Meta is proving that internal representation is where the real 'intelligence' lives. What happens when we scale this to 20 million hours?
Brady Long @thisguyknowsai
🚨 BREAKING: Meta researchers showed a model 2 million hours of video. No labels. No physics textbook. No supervision at all. It learned gravity. Object permanence. Inertia. And it just beat Gemini 1.5 Pro and GPT-4 level models at physics understanding. Here's what just happened:
Programming eBooks @ProgrammingCCC
@seikixtc Wait, terminal shows 202B params while you mentioned 1T—is this a specific MoE config? Either way, seeing that 'Streaming active' log on an M2 Max is insane. What kind of tokens/sec are you hitting once the budget is exceeded? 🤯
seikixtc @seikixtc
I got a 1T-parameter model running locally on my MacBook Pro.

LLM: Kimi K2.5, 1,026,408,232,448 params (~1.026T)
Hardware: M2 Max MacBook Pro (2023) w/ 96GB unified memory
Running on: MLX with a flash-style SSD streaming path + local patching

This is an experimental setup and I haven't optimized speed yet, but it's stable enough that I've started testing it in an autoresearch-style loop. #LocalAI #MLX #MoE
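The 202B-vs-1T question in the reply above is the usual mixture-of-experts distinction between total and active parameters: every expert's weights are stored, but each token routes through only the top-k experts per layer. A back-of-envelope sketch (the dense/expert/top-k numbers below are hypothetical, chosen only to show how a ~1.026T-total model can activate ~202B parameters per token; the thread does not disclose Kimi K2.5's real config):

```python
# Back-of-envelope MoE parameter math. All config numbers here are
# ILLUSTRATIVE, not Kimi K2.5's actual architecture.

def moe_params(dense_params: float, expert_params: float,
               num_experts: int, top_k: int) -> tuple[float, float]:
    """Return (total stored params, active params per token).

    A MoE model stores dense (shared) weights plus num_experts copies of
    the expert weights, but each token only runs the top-k experts.
    """
    total = dense_params + num_experts * expert_params
    active = dense_params + top_k * expert_params
    return total, active

# Hypothetical config: 50B shared, 160 experts of 6.1B each, top-25 routing.
total, active = moe_params(dense_params=50e9, expert_params=6.1e9,
                           num_experts=160, top_k=25)
print(f"total ≈ {total / 1e12:.3f}T, active ≈ {active / 1e9:.0f}B per token")
```

With these made-up numbers the totals land near the figures in the thread (~1.026T stored, ~202B active), which would explain a terminal reporting 202B while the checkpoint advertises 1T.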
Julien Chaumond @julien_c
hf-mount: attach any Storage Bucket, model or dataset from @huggingface as a local filesystem. This is a game changer, as it allows you to attach remote storage that is 100x bigger than your local machine's disk. This is also perfect for agentic storage! Read-write for Storage Buckets, read-only for models and datasets.

Here's an example with FineWeb-edu (a 5TB slice of the Web):

1️⃣ > hf-mount start repo datasets/HuggingFaceFW/fineweb-edu /tmp/fineweb

It takes a few seconds to mount, and then:

2️⃣ > du -h -d1 /tmp/fineweb
4.1T ./data
1.2T ./sample
5.3T .

🤯😮 Two backends are available: NFS (recommended) and FUSE. Let's f**ing go 💪
Programming eBooks reposted
Christos Tzamos @ChristosTzamos
1/4 LLMs solve research-grade math problems but struggle with basic calculations. We bridge this gap by turning them into computers. We built a computer INSIDE a transformer that can run programs for millions of steps in seconds, solving even the hardest Sudokus with 100% accuracy
@Twenty @twentyvisionai
This is the most underrated direction in AI right now. Instead of making LLMs better at arithmetic through more training data, just embed a deterministic computation engine in the weights. The implications go way beyond Sudoku: imagine financial modeling, physics simulations, or cryptographic operations running inside inference. Hybrid architectures where neural nets handle reasoning and embedded interpreters handle precision.
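The hybrid split @Twenty describes can be sketched in miniature. This is only an illustration of the "neural nets reason, embedded interpreters compute" idea, not the authors' actual WASM-in-the-weights construction: the model side (stubbed out here) decides what to compute, and a deterministic evaluator produces an exact answer instead of a sampled guess.

```python
import ast
import operator

# Deterministic arithmetic evaluator: parses an expression into an AST and
# applies exact operators, so results are computed, never guessed.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def evaluate(expr: str):
    """Evaluate a pure-arithmetic expression exactly (no name lookups)."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# In the hybrid picture, a model would emit the expression; the
# interpreter guarantees the numeric result is exact.
print(evaluate("3 * (17 + 25) - 2**10"))  # 3*42 - 1024 = -898
```

The design point is the division of labor: anything that must be bit-exact goes through the interpreter path, while open-ended reasoning stays on the neural path.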
◢ @joemccann
This is actually insane. Dude hard-coded a WebAssembly (WASM) interpreter into the weights of a transformer, losslessly. In essence, a computer is running inside an LLM that can actually run computations, not infer or guess a calculation like most models do today.
Quoting Christos Tzamos @ChristosTzamos (post quoted above)