Unsloth AI

588 posts

Unsloth AI

@UnslothAI

Making open-source AI accessible! 🦥 https://t.co/2kXqhhvdCD

San Francisco, CA · Joined November 2023
464 Following · 51.7K Followers
Pinned Tweet
Unsloth AI @UnslothAI
Introducing Unsloth Studio ✨ A new open-source web UI to train and run LLMs.
• Run models locally on Mac, Windows, Linux
• Train 500+ models 2x faster with 70% less VRAM
• Supports GGUF, vision, audio, embedding models
• Auto-create datasets from PDF, CSV, DOCX
• Self-healing tool calling and code execution
• Compare models side by side + export to GGUF
GitHub: github.com/unslothai/unsl…
Blog and Guide: unsloth.ai/docs/new/studio
Available now on Hugging Face, NVIDIA, Docker and Colab.
Unsloth AI reposted
Matthew Berman @MatthewBerman
Running a fine-tune on Qwen3.5-35B-A3B using @UnslothAI It's ALIIIIIVEEE
Unsloth AI @UnslothAI
This workflow was built using 4-bit Qwen3.5-4B GGUF + Unsloth Studio + ddgs + DuckDuckGo API. If you use full precision Qwen3.5-4B, results are even better. You can use this workflow via our GitHub repo: github.com/unslothai/unsl…
Unsloth AI @UnslothAI
Qwen3.5-4B searched 20+ websites, cited its sources, and found the best answer! 🔥 Try this locally with just 4GB RAM via Unsloth Studio. The 4B model did this by executing tool calls + web search directly during its thinking trace.
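The tool-call loop described here can be sketched minimally: the model emits a JSON tool call mid-reasoning, the runtime executes it and feeds the result back. This is a hypothetical illustration, not Unsloth Studio's actual implementation; `web_search` is a stub standing in for the ddgs/DuckDuckGo backend the workflow uses.

```python
import json

def web_search(query: str, max_results: int = 5) -> list[str]:
    # Stub: a real implementation could call the ddgs package here.
    return [f"result for {query!r} #{i}" for i in range(max_results)]

# Registry mapping tool names (as the model emits them) to callables.
TOOLS = {"web_search": web_search}

def run_tool_call(call_json: str) -> str:
    """Parse a model-emitted JSON tool call, execute it, and return
    the result as text to splice back into the model's context."""
    call = json.loads(call_json)
    fn = TOOLS[call["name"]]
    result = fn(**call.get("arguments", {}))
    return json.dumps(result)

# Example: the model requests a search during its thinking trace.
out = run_tool_call(
    '{"name": "web_search", "arguments": {"query": "unsloth studio", "max_results": 2}}'
)
```

In a full agent loop this dispatch runs repeatedly: generate until a tool call appears, execute it, append the result, and continue generating.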
Unsloth AI @UnslothAI
Unsloth Studio now installs via uv. Installation works in any environment. We also updated our Docker image and shipped lots of bug fixes and new features, including ~30% more accurate tool calling. Info: unsloth.ai/docs/new/studi…
Quoted tweet: Unsloth AI @UnslothAI, "Introducing Unsloth Studio ✨ …" (pinned tweet above)
Unsloth AI reposted
Julien Chaumond @julien_c
OK it's training!!! (A100 80GB, there was no H100 available on Colab Pro) Yay @UnslothAI Studio
Unsloth AI reposted
Avi Chawla @_avichawla
The core engineering behind @UnslothAI has always been impressive!

Instead of relying on PyTorch's default autograd for backpropagation, Unsloth built their own backprop kernels from scratch in OpenAI's Triton language (a Python-based language for writing GPU kernels without needing to write raw CUDA C++).

One of the reasons to do this is that the default autograd runs each operation as a separate GPU call, and each call reads and writes data back to global memory before the next one can start. Across dozens of transformer layers, this back-and-forth becomes the real bottleneck.

These hand-written kernels fuse operations like QKV projections and rotary position embeddings into single GPU calls, and recompute activations on the fly instead of storing them in memory. This allows Unsloth to deliver >2x faster training with 70% less VRAM without any accuracy loss. The loss curves match standard training runs down to the third decimal because the math is exact, not an approximation.

All of these kernel optimizations were already available through Unsloth's Python library. But now Unsloth Studio puts a no-code web UI on top of that same engine, and there's a lot of solid engineering packed into it.

> The inference engine has a sandboxed code execution layer where models can run Python and bash, compute results, and verify their answers before responding. This means the model can actually execute and validate code instead of just predicting what the output should look like. The tool-calling implementation also has a self-healing mechanism: failed calls get auto-corrected and retried, which is a practical pattern for agentic workflows.

> Unsloth's Python library already had GRPO support (the RL technique behind DeepSeek-R1), and Studio now makes this accessible through the UI. PPO requires running a separate critic model alongside the policy model during training, and that critic is typically as large as the model being trained, effectively doubling the VRAM requirement. GRPO eliminates the critic model entirely by generating multiple completions per prompt and computing advantages from the relative quality within that group. This cuts VRAM by 40-60% compared to PPO. Combined with Unsloth's Triton kernels and QLoRA, training a reasoning model on an RTX 4090 or even a 3090 becomes realistic on hardware that most of us actually have.

> In most fine-tuning workflows that I have run, the training step is actually the easy part. Getting raw data into a properly formatted dataset is where the real time goes. Unsloth Studio includes Data Recipes (built on NVIDIA's DataDesigner) that take raw PDF/CSV/DOCX files and transform them into structured synthetic datasets through a visual node-based workflow, replacing the custom parsing scripts entirely.

Once training is done, models can be exported directly to GGUF, safetensors, or other formats with automatic LoRA adapter merging into base weights. The whole system runs 100% offline with no telemetry.

$ pip install unsloth
$ unsloth studio setup
$ unsloth studio

More details in the post below by Unsloth 👇

It's still in beta, but the engineering underneath is solid. For anyone working with open-source models locally, this is one of the more complete tools available right now.

____
Find me → @_avichawla
Every day, I share tutorials and insights on DS, ML, LLMs, and RAG.
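The group-relative trick described above can be shown in a few lines: instead of a learned critic producing baselines, each completion's advantage is its reward normalized against the mean and spread of its own group. This is a minimal sketch of one common formulation; the exact normalization used by Unsloth or DeepSeek-R1 may differ in details.

```python
import statistics

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Compute group-relative advantages for one prompt's completions:
    (reward - group mean) / group std. No critic model is involved;
    the group itself provides the baseline.
    Population std is used here for simplicity (an assumption)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:
        # All completions scored the same: no learning signal.
        return [0.0] * len(rewards)
    return [(r - mean) / std for r in rewards]

# Four completions for one prompt, two correct (reward 1) and two wrong:
adv = grpo_advantages([1.0, 0.0, 0.0, 1.0])
```

Correct completions get positive advantage and wrong ones negative, purely from within-group comparison, which is why the critic (and its VRAM) can be dropped.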
Quoted tweet: Unsloth AI @UnslothAI, "Introducing Unsloth Studio ✨ …" (pinned tweet above)
Unsloth AI reposted
Chubby♨️ @kimmonismus
So cool: Unsloth introduces Unsloth Studio, a new open-source web app that lets you run, train, compare, and export hundreds of LLMs locally with much lower VRAM usage, while also turning files like PDFs, CSVs, and DOCX into training datasets.
Quoted tweet: Unsloth AI @UnslothAI, "Introducing Unsloth Studio ✨ …" (pinned tweet above)
Unsloth AI reposted
Daniel Han @danielhanchen
We're excited to introduce Unsloth🦥Studio!
1. Chat UI has auto-healing tool calling, Python & bash code execution, web search, image, docs input + more!
2. Finetune audio, vision, LLMs with an AI Assist data prep
3. Supports GGUFs, Mac, Windows, Linux + audio gen
4. Has SVG rendering, export to GGUF
5. gpt-oss harmony rendering, all inference params pre-set
6. Data designer + synthetic data generation
7. Fast parallel data prep + embedding finetuning
8. And much much more!
To get it, run:
pip install unsloth
unsloth studio setup
unsloth studio -H 0.0.0.0 -p 8888
Quoted tweet: Unsloth AI @UnslothAI, "Introducing Unsloth Studio ✨ …" (pinned tweet above)
Unsloth AI @UnslothAI
Transform PDF, CSV, DOCX, TXT or any file into structured synthetic datasets via Unsloth Data Recipes. Build and edit your datasets visually via a graph-node workflow and use them for fine-tuning. Powered by @NVIDIA DataDesigner.
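Conceptually, a data recipe turns raw file rows into chat-format training examples. The sketch below shows that transformation for a CSV; it is a hypothetical illustration of the idea, not the DataDesigner API, and the `question`/`answer` column names are an assumption.

```python
import csv
import io
import json

# Raw input standing in for an uploaded CSV file.
RAW_CSV = """question,answer
What is 2+2?,4
Capital of France?,Paris
"""

def csv_to_chat_examples(text: str) -> list[dict]:
    """Map each CSV row to a chat-format fine-tuning example."""
    rows = csv.DictReader(io.StringIO(text))
    return [
        {"messages": [
            {"role": "user", "content": row["question"]},
            {"role": "assistant", "content": row["answer"]},
        ]}
        for row in rows
    ]

examples = csv_to_chat_examples(RAW_CSV)
# Serialize as JSONL, one training example per line.
jsonl = "\n".join(json.dumps(e) for e in examples)
```

A visual node workflow adds steps like deduplication, synthetic augmentation, and filtering on top of this basic row-to-example mapping.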
Unsloth AI @UnslothAI
(Original "Introducing Unsloth Studio ✨" announcement; same as the pinned tweet above.)
Unsloth AI @UnslothAI
Unsloth Studio allows LLMs to run code and programs in a sandbox so they can calculate, analyze data, test code, generate files, or verify an answer with actual computation. This makes model answers more reliable and accurate.
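The core of such a layer is running model-generated code in a separate, time-bounded process. This is a minimal sketch under stated assumptions: a real sandbox (including Unsloth's, whose internals aren't described here) would also restrict filesystem, network, and privileges, not just isolate the interpreter and bound runtime.

```python
import subprocess
import sys

def run_python_sandboxed(code: str, timeout: float = 5.0) -> str:
    """Execute a snippet of Python in a fresh isolated-mode interpreter
    (-I ignores user site-packages and environment variables) with a
    hard timeout, returning stdout or an error summary."""
    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if proc.returncode != 0:
        return f"error: {proc.stderr.strip()}"
    return proc.stdout

# The model verifies an arithmetic claim by actually computing it:
out = run_python_sandboxed("print(sum(range(10)))")
```

Feeding `out` back into the model's context is what lets it verify answers with real computation instead of predicting what the output should look like.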
Unsloth AI @UnslothAI
We collaborated with @NVIDIA to teach you about Reinforcement Learning and RL environments. Learn: • Why RL environments matter + how to build them • When RL is better than SFT • GRPO and RL best practices • How verifiable rewards and RLVR work Blog: unsloth.ai/blog/rl-enviro…
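A verifiable reward, the building block of RLVR mentioned above, can be sketched as a function that checks a completion against a known-correct answer. The `<answer>...</answer>` tag convention here is an assumption for illustration, not the format the blog prescribes.

```python
import re

def verifiable_reward(completion: str, gold_answer: str) -> float:
    """RLVR-style reward: 1.0 if the completion's final answer,
    extracted from <answer>...</answer> tags, exactly matches the
    known-correct answer; 0.0 otherwise (including malformed output)."""
    match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    if match is None:
        return 0.0  # No parseable answer: no reward.
    return 1.0 if match.group(1).strip() == gold_answer.strip() else 0.0

# A completion with reasoning followed by a tagged final answer:
r_good = verifiable_reward("Let me think... 6*7=42. <answer>42</answer>", "42")
r_bad = verifiable_reward("Probably <answer>41</answer>", "42")
```

Because the reward is computed, not judged, it cannot be gamed by plausible-sounding text, which is what makes RL with verifiable rewards stable for math and code tasks.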
Unsloth AI @UnslothAI
We created a repo with 250+ notebooks for LLM training. Train locally on your device with 3GB VRAM or free on Colab. Learn the entire fine-tuning and inference workflow. Supports RL, vision, audio, embedding, TTS models GitHub: github.com/unslothai/note…
Unsloth AI @UnslothAI
Note: Claude Code invalidates the KV cache for local models by prepending some IDs, making inference 90% slower. See how to fix it here: unsloth.ai/docs/basics/cl…
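Why prepending breaks the cache: a KV cache can only be reused for the longest common *prefix* of token IDs between the cached sequence and the new request, so new IDs at the front invalidate everything after position zero. A small sketch of that prefix-matching rule (an illustration of the general mechanism, not any specific server's code):

```python
def reusable_prefix_len(cached: list[int], new: list[int]) -> int:
    """Return how many cached KV entries can be reused: the length of
    the longest common prefix of token IDs. Everything past the first
    mismatch must be recomputed."""
    n = 0
    for a, b in zip(cached, new):
        if a != b:
            break
        n += 1
    return n

# Appending tokens keeps the whole cache reusable:
appended = reusable_prefix_len([1, 2, 3, 4], [1, 2, 3, 4, 5])
# Prepending even one token shifts everything and reuses nothing:
prepended = reusable_prefix_len([1, 2, 3, 4], [9, 1, 2, 3, 4])
```

With `prepended == 0`, the server recomputes the full prompt on every turn, which is where the ~90% slowdown comes from.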
Unsloth AI @UnslothAI
Learn how to run Qwen3.5 locally using Claude Code. Our guide shows you how to run Qwen3.5 on your server for local agentic coding. We then build a Qwen3.5 agent that autonomously fine-tunes models using Unsloth. Works on 24GB RAM or less. Guide: unsloth.ai/docs/basics/cl…