Thomas Koller, PhD

4.2K posts


@ThomasKoller

Polyglot keen on NLP, data analysis/mining, machine learning, deep learning, computer vision, forecasting, R, Python, (No)SQL, Neo4j, graph databases, hackathon

Nuremberg, Germany (exito.de) · Joined June 2009
1.5K Following · 653 Followers
Thomas Koller, PhD reposted
Archive @ArchiveExplorer:
"Claude usage limit reached. Your limit will reset at 7pm" — every. fucking. day.
Was about to pay $200 for Max. Then I read this article: 98.5% of tokens wasted. You're not paying for answers, you're paying for Claude to re-read its own homework 30 times.
Spent months blaming Anthropic for being greedy. Turns out the problem was how I write prompts. 5 minutes of reading, and the basic plan now handles more than my old Max.
kaize@0x_kaize

x.com/i/article/2037…

314 replies · 798 reposts · 11K likes · 4.9M views
Thomas Koller, PhD reposted
Nav Toor @heynavtoor:
🚨 Claude Code costs $200/month. GitHub Copilot costs $19/month. Jack Dorsey's company built a free alternative. 35,000 GitHub stars.
It's called Goose. An open-source AI agent built by Block that goes beyond code suggestions. It installs, executes, edits, and tests. With any LLM you choose.
Not autocomplete. Not suggestions. A full autonomous agent that takes actions on your computer. No vendor lock-in. No monthly subscription. Bring your own model.
Here's what Goose does:
→ Works with ANY LLM. Claude, GPT, Gemini, Llama, DeepSeek, Ollama. Your choice.
→ Reads and understands your entire codebase
→ Writes, edits, and refactors code across multiple files
→ Runs shell commands and installs dependencies
→ Executes and debugs your code automatically
→ Extensible through MCP. Connect it to any external tool.
→ Desktop app, CLI, and web interface. Pick your workflow.
→ Written in Rust. Fast. Lightweight. No bloat.
Here's the wildest part: Block is a $40 billion company. They built Cash App, Square, and TIDAL. They use Goose internally. Then they open-sourced the entire thing.
This isn't a side project from a random developer. This is production-grade tooling from a company that processes billions in payments. Built for their own engineers. Given to everyone.
Claude Code: $200/month. Locked to Claude.
GitHub Copilot: $19/month. Locked to GitHub.
Cursor: $20/month. Locked to their editor.
Goose: Free. Any LLM. Any editor. Any workflow. Forever.
35.3K GitHub stars. 3.3K forks. 4,078 commits. Built by Block. 100% Open Source. Apache 2.0 License.
235 replies · 463 reposts · 3.6K likes · 382.9K views
Thomas Koller, PhD reposted
Paul Couvert @itsPaulAi:
Friendly reminder that Google has an official app to run Gemma 4 on your phone.
- 100% open source
- Fully offline and private
- Multimodal with text/audio/image
- Works with Gemma E4B and E2B
And the app is available on both iOS and Android. Steps and download below.
200 replies · 584 reposts · 5.3K likes · 717.6K views
Thomas Koller, PhD reposted
Min Choi @minchoi:
Less than 48 hours ago, Google dropped Gemma 4. Minds are blown. And people are already coming up with wild use cases. 10 examples:
86 replies · 182 reposts · 2.5K likes · 964K views
Thomas Koller, PhD reposted
ollama @ollama:
ollama run translategemma TranslateGemma is available on Ollama. Now you can use it in apps to translate between 55 languages. Note, it requires a specific prompting format 👇👇👇
Google DeepMind @GoogleDeepMind:

We’re releasing TranslateGemma, a new family of open translation models with support for 55 languages. 🌐 Available in 4B, 12B, and 27B parameter sizes – they’re designed for efficiency without sacrificing quality.

25 replies · 157 reposts · 1.3K likes · 136.8K views
Thomas Koller, PhD reposted
Shraddha Bharuka @BharukaShraddha:
Google isn't trying to win the AI race. They're trying to own the entire AI Agent ecosystem. While everyone argues ChatGPT vs Claude, Google quietly built:
Models → Gemini Pro, Flash, Deep Think, Gemma
Design → Stitch, Whisk, Imagen
Research → NotebookLM, AI Mode
Video → Veo, Flow, Google Vids
Coding → Antigravity IDE, Gemini CLI, Jules
Agents → A2A, ADK, FileSearch API
The scary part? All of these tools talk to each other. That means:
- 10x faster prototypes
- End-to-end AI workflows
- Production-ready agents on GCP
The next AI war won't be model vs model. It'll be ecosystem vs ecosystem. Save. Share. Build.
25 replies · 76 reposts · 284 likes · 15.1K views
Thomas Koller, PhD reposted
Patrick Loeber @patloeber:
We shipped a Gemini Docs MCP🚢 Now you can set up your coding assistant with both MCP and Gemini Skills! When building with the Gemini API, our internal tests yield 10X better results in coding outputs compared to not using them. Give it a try :)
14 replies · 20 reposts · 225 likes · 11K views
Thomas Koller, PhD reposted
Base44 @Base44:
130+ skills are now built into your Superagent. Some are ready to use, and some can be created based on what you need. Add a skill once, and your Superagent can use it as part of your workflows. Stack skills, connect tools, and build flows that run end-to-end.
108 replies · 200 reposts · 2K likes · 1.8M views
Thomas Koller, PhD reposted
Alvaro Cintas @dr_cintas:
This is the most complete Claude Code setup that exists right now. 27 agents. 64 skills. 33 commands. All open source.
The Anthropic hackathon winner open-sourced his entire system, refined over 10 months of building real products.
What's inside:
→ 27 agents (plan, review, fix builds, security audits)
→ 64 skills (TDD, token optimization, memory persistence)
→ 33 commands (/plan, /tdd, /security-scan, /refactor-clean)
→ AgentShield: 1,282 security tests, 98% coverage
60% documented cost reduction. Works on Claude Code, Cursor, OpenCode, Codex CLI. 100% open source.
181 replies · 829 reposts · 6.9K likes · 624.4K views
Thomas Koller, PhD reposted
Hugging Models @HuggingModels:
Ever wanted to clone a voice from just a few seconds of audio? Meet Qwen3-TTS, a text-to-speech model that can mimic voices with incredible accuracy. It's blowing up with nearly 1M downloads because it makes custom voice creation accessible to everyone.
7 replies · 46 reposts · 477 likes · 21.8K views
Thomas Koller, PhD reposted
Mistral AI for Developers @MistralDevs:
🎙️ Did you know you now have all the building blocks for full speech-to-speech?
- Voxtral Realtime: High-quality, real-time speech-to-text.
- Mistral Small 4: Fast, efficient, general-purpose agentic model.
- Voxtral TTS: Realistic, customizable text-to-speech with streaming output.
21 replies · 49 reposts · 482 likes · 27.3K views
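The three building blocks compose into the classic speech-to-speech loop: speech-to-text → text model → text-to-speech. A minimal sketch of that chaining, using hypothetical local stand-ins rather than real Mistral API calls:

```python
# Speech-to-speech as a three-stage pipeline. Each function below is a
# hypothetical stand-in; in a real app each would call Mistral's API
# (Voxtral Realtime, Mistral Small, and Voxtral TTS respectively).

def transcribe(audio_chunk: bytes) -> str:
    # stand-in for Voxtral Realtime (speech-to-text)
    return audio_chunk.decode("utf-8")

def respond(text: str) -> str:
    # stand-in for Mistral Small (the agentic text model)
    return f"echo: {text}"

def synthesize(text: str) -> bytes:
    # stand-in for Voxtral TTS (text-to-speech; streamed in reality)
    return text.encode("utf-8")

def speech_to_speech(audio_chunk: bytes) -> bytes:
    # the whole loop: audio in -> text -> reply text -> audio out
    return synthesize(respond(transcribe(audio_chunk)))

print(speech_to_speech(b"hello"))  # b'echo: hello'
```

In production, the win of the modular design is that each stage can be streamed independently, so audio starts playing before the model has finished its full reply.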
Thomas Koller, PhD reposted
Mistral AI @MistralAI:
🔊 Introducing Voxtral TTS: our new frontier open-weight model for natural, expressive, and ultra-fast text-to-speech.
🎭 Realistic, emotionally expressive speech.
🌍 Supports 9 languages and accurately captures diverse dialects.
⚡ Very low latency for time-to-first-audio.
🔄 Easily adaptable to new voices.
208 replies · 614 reposts · 4.6K likes · 868.8K views
Michel Lieben @MichLieben:
Giving away a full Claude Project setup that builds advanced n8n workflows from a single prompt. This completely changed how we build automations at our $7M ARR agency.
You write a basic prompt describing your workflow. Claude (Sonnet 4.5) reads it, breaks it down, and outputs an n8n JSON file you can import directly. It handles field mappings, data flow between nodes, trigger configurations, conditional branches, sticky notes for documentation, and error handling.
What's included:
→ Full Claude Project setup you can copy-paste and run immediately
→ 70+ n8n documentation pages extracted from the official GitHub repo, so Claude knows the syntax and node references
→ Templates and frameworks for use cases like lead routing and social listening
→ Real prompts vs. the workflows Claude actually built, so you can see what worked and what didn't
→ SOPs to update the documentation every time n8n ships a new feature
"How do I build this n8n workflow?" has always been harder than "What n8n workflow should I build?" This flips it. Spend your time thinking about which automation moves the needle. Let Claude handle the build.
Reply "PROMPT" and I'll DM the full setup guide. Must be following.
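For readers who haven't seen one, this is roughly the shape of the importable n8n JSON such a setup would output: an object with a `nodes` array and a `connections` map wiring node outputs to inputs. The node names, types, and parameters below are illustrative, not taken from the author's actual templates.

```json
{
  "name": "Lead routing (minimal illustrative example)",
  "nodes": [
    {
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 1,
      "position": [250, 300],
      "parameters": { "path": "new-lead" }
    },
    {
      "name": "Slack",
      "type": "n8n-nodes-base.slack",
      "typeVersion": 1,
      "position": [500, 300],
      "parameters": {}
    }
  ],
  "connections": {
    "Webhook": {
      "main": [[{ "node": "Slack", "type": "main", "index": 0 }]]
    }
  }
}
```

A generated file like this can be pasted into the n8n editor via its import function; the "field mappings and conditional branches" the tweet mentions live inside each node's `parameters` block.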
887 replies · 81 reposts · 773 likes · 51.6K views
Thomas Koller, PhD reposted
Jasmin @AI_with_jasmin:
BREAKING: If you're not using Claude at your job, you're already behind. Copy these 7 prompts:
30 replies · 28 reposts · 73 likes · 17.5K views
Thomas Koller, PhD reposted
Alex Finn @AlexFinn:
This is potentially the biggest news of the year.
Google just released TurboQuant, an algorithm that makes LLMs smaller and faster without losing quality. Meaning that 16GB Mac mini can now run INCREDIBLE AI models. Completely locally, free, and secure.
This also means:
• Much larger context windows possible with way less slowdown and degradation
• You'll be able to run high-quality AI on your phone
• Speed and quality up. Prices down.
The people who made fun of you for buying a Mac mini now have major egg on their face. This pushes all of AI forward in such a MASSIVE way.
It can't be stated enough: props to Google for releasing this for all. They could have gatekept it for themselves like I imagine a lot of other big AI labs would have. They didn't. They decided to advance humanity. 2026 is going to be the biggest year in human history.
Google Research @GoogleResearch:

Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI

331 replies · 864 reposts · 9.6K likes · 1.5M views
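The TurboQuant algorithm itself isn't described in the thread, so as a baseline for what KV-cache compression means, here is plain per-tensor 8-bit uniform quantization: storing cache entries as int8 instead of float32 alone cuts memory 4x, and a scheme like TurboQuant layers further tricks on top to reach 6x+ with no accuracy loss. This is a generic sketch, not TurboQuant, and all names are illustrative.

```python
# Per-tensor 8-bit uniform quantization of a key/value cache slice:
# store one float scale plus int8 codes instead of raw float32 values.

def quantize_int8(values):
    """Map floats to int8 codes in [-127, 127] with a single scale factor."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # avoid scale == 0
    codes = [round(v / scale) for v in values]
    return codes, scale

def dequantize_int8(codes, scale):
    """Recover approximate floats from codes and the shared scale."""
    return [c * scale for c in codes]

kv = [0.5, -1.27, 0.0, 1.27]          # a toy cache slice
codes, scale = quantize_int8(kv)       # codes fit in 1 byte each
restored = dequantize_int8(codes, scale)

# float32 is 4 bytes per value, int8 is 1 byte: a 4x reduction before
# any lower-bit-width or rotation tricks a more advanced scheme adds.
print(max(abs(a - b) for a, b in zip(kv, restored)))
```

The quality question is entirely about how much the `restored` values drift from the originals; here the round-trip error is tiny, and the "zero accuracy loss" claim in the quoted post is about keeping that drift below what affects model outputs.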
Thomas Koller, PhD reposted
Vaidehi @Ai_Vaidehi:
This is one of those "this changes everything" moments. Microsoft just broke a core assumption in AI: you needed GPUs to run big models. Not anymore.
They open-sourced BitNet, an inference framework that runs a 100B-parameter LLM on a single CPU 🤯
No GPU. No cloud. No expensive setup. Just your laptop.
Here's the trick: instead of 16-bit or 32-bit weights, BitNet uses 1.58 bits. Yes, seriously. Weights = -1, 0, +1. That's it. No heavy matrix math. Just simple integer ops your CPU already handles easily.
And the results?
• 100B model → 5–7 tokens/sec on CPU
• Up to 6x faster than llama.cpp
• 82% less energy usage
• Runs on x86 + ARM (MacBook)
• Memory reduced by 16–32x
But here's the insane part: 👉 accuracy barely drops. Their model (BitNet b1.58 2B4T) competes with full-precision models trained the "normal" way.
So what does this unlock?
• Fully offline AI (privacy ↑)
• No more API bills
• AI on phones, IoT, edge devices
• Access in low-internet regions
We're watching AI move from "cloud-only" → "runs anywhere". The GPU monopoly just got… shaky. And this is open source. Let that sink in. 🚀
21 replies · 36 reposts · 144 likes · 12.9K views
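The ternary-weights idea can be sketched in a few lines of Python: once weights are constrained to {-1, 0, +1}, a matrix-vector product needs only additions and subtractions, no floating-point multiplies. This is a toy illustration of the principle, not code from the BitNet repo (the real method also rescales weights by their mean magnitude and quantizes activations); the function names are made up.

```python
# Toy 1.58-bit ("ternary") quantization: round each weight to -1, 0, or +1,
# then compute matrix-vector products with integer add/subtract only.

def quantize_ternary(weights, threshold=0.5):
    """Round each weight to -1, 0, or +1. The absmean scaling used by the
    real BitNet is omitted for brevity; this shows the core idea only."""
    out = []
    for row in weights:
        out.append([-1 if w < -threshold else (1 if w > threshold else 0)
                    for w in row])
    return out

def ternary_matvec(tw, x):
    """Multiply a ternary matrix by a vector: each weight either adds its
    input, subtracts it, or skips it. No multiplications needed."""
    result = []
    for row in tw:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi
            elif w == -1:
                acc -= xi
        result.append(acc)
    return result

W = [[0.9, -0.1, -0.8],
     [0.2,  0.7, -0.6]]
x = [1.0, 2.0, 3.0]

tW = quantize_ternary(W)      # [[1, 0, -1], [0, 1, -1]]
print(ternary_matvec(tW, x))  # [-2.0, -1.0]
```

Each ternary weight also needs only ~1.58 bits of storage (log2 of 3 states), which is where the memory-reduction and energy claims in the tweet come from.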
Thomas Koller, PhD reposted
Peter Agboola @baba_Omoloro:
Anthropic has launched free courses to master AI with certificates for $0.00 anthropic.skilljar.com
558 replies · 7.9K reposts · 55.4K likes · 16.7M views
Thomas Koller, PhD reposted
Awa K. Penn @TawohAwa:
🚨 Forget Google. Forget Coursera. Forget paid degrees. Anthropic just launched free courses to master AI, with certificates. Here are 10 of the best courses from Anthropic 👇
31 replies · 217 reposts · 1.1K likes · 226.7K views
Thomas Koller, PhD reposted
Divy @aiwithme0001:
Most AI apps fail not because the model is weak, but because the workflow around it is poorly designed. Scalable LLM systems are built with patterns, not prompts. I wrote about 5 LLM workflow patterns that actually work in production. Link 👇 medium.com/@yadavdivy296/… #Grok #Claude
0 replies · 3 reposts · 3 likes · 88 views