Poetica

4.9K posts

@NeuralNovel

See You Space Cowboy https://t.co/NXehTQkMBg

Mars · Joined April 2023
1.2K Following · 611 Followers
Poetica retweeted
Unsloth AI @UnslothAI
This model has been #1 trending for 3 weeks now. It's Qwen3.5-27B fine-tuned on distilled data from Claude-4.6-Opus (reasoning). Trained via Unsloth. Runs locally on 16GB in 4-bit or 32GB in 8-bit. Model: huggingface.co/Jackrong/Qwen3…
[image]
77 replies · 195 reposts · 2.4K likes · 159.3K views
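The quoted memory budgets check out as simple arithmetic: weights alone for a 27B model at 4-bit come to ~13.5GB, leaving headroom for KV cache inside 16GB. A quick sketch (decimal GB; my own helper, not Unsloth code):

```python
def quantized_weight_gb(n_params_billion: float, bits: int) -> float:
    """Memory for model weights alone at a given quantization width,
    in decimal GB. KV cache and runtime overhead come on top."""
    return n_params_billion * 1e9 * bits / 8 / 1e9

print(quantized_weight_gb(27, 4))  # 13.5 — fits the quoted 16GB budget
print(quantized_weight_gb(27, 8))  # 27.0 — fits the quoted 32GB budget
```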
Poetica retweeted
clem 🤗 @ClementDelangue
Local AI is free, fast & secure! So today we're introducing hf-mount: attach any storage bucket, model or dataset from @huggingface as a local filesystem. This is a game changer, as it allows you to attach remote storage that is 100x bigger than your local machine's disk. This is also perfect for Agentic storage!! Let's go!
[image]
67 replies · 227 reposts · 1.3K likes · 247K views
Poetica retweeted
Zach Mueller @TheZachMueller
PinchBench results for Qwen3.5 27B using @UnslothAI K_XL quants, best of 3, thinking enabled. TL;DR: Q3_K_XL (14.5GB) or Q4_K_XL (18GB). While the "best" results showed little overall degradation, digging into mean/std, Q4_K_XL was the best overall at ~84% on average. Q3 seems viable, while Q2 is, of course, the lowest performing.
[image]
20 replies · 25 reposts · 207 likes · 94.3K views
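Going the other way, the quoted GGUF file sizes imply the average bits per weight for each quant. A back-of-envelope helper (assumes 27B parameters and decimal GB, and ignores metadata overhead):

```python
def bits_per_weight(file_size_gb: float, n_params_billion: float) -> float:
    """Approximate average bits per weight implied by a quantized file size.
    Ignores tokenizer/metadata overhead, so it slightly overestimates."""
    return file_size_gb * 1e9 * 8 / (n_params_billion * 1e9)

print(round(bits_per_weight(14.5, 27), 2))  # Q3_K_XL: ~4.3 bits/weight
print(round(bits_per_weight(18.0, 27), 2))  # Q4_K_XL: ~5.33 bits/weight
```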
Poetica retweeted
goo.vision @goo_vision
Archangel 🪽
[image]
29 replies · 684 reposts · 4.6K likes · 67.2K views
Tejes Srivalsan @tejessrivalsan
excited to announce that we’re open sourcing EGO-SNAKE, the largest dataset of egocentric snake POV footage, to train the next generation of autonomous vipers. comment for a data sample
232 replies · 184 reposts · 4.5K likes · 640.3K views
goo.vision @goo_vision
Testing Midjourney v8
[4 images]
18 replies · 57 reposts · 673 likes · 19.7K views
Julien Chaumond @julien_c
OK it's training!!! (A100 80GB, there was no H100 available on Colab Pro) Yay @UnslothAI Studio
[image]
5 replies · 7 reposts · 128 likes · 35.8K views
Poetica retweeted
Jack @Jackkk
Pewdiepie reveals how to break free from the algorithm

“A lot of this is going to sound crazy but you’ve gotta hear me out, it’s a step by step process. I’m not saying you should do all of it but you should try some of it”

“Step 1 is creating friction. I put all social media and attention hungry apps in a second profile and I can’t understate how much this changed my life. Those 5-6 seconds it takes to switch profiles stops me every time and makes me think, is this what I want to be doing?”

“The second thing I did was self hosting. The effect that had on me is I’m not the product anymore. The things I use are mine and because they’re not free, I’m not paying with my privacy. I think the main difference is ads and news don’t reach me”

“Next thing I did was disable Shorts. I like YouTube but I hate how Shorts is everywhere, I can’t escape it”

“Then I unfollowed everyone. You don’t have to do this, this is definitely a me thing, I just got really fed up”

“Next, get a DNS blocker. You can remove ads completely, most of it won’t even reach your device”

“I think you owe it to yourself to take some time today and start building your tech fence”

“These tech companies don’t care about you, so you’ve got to care about yourself. The cheat code is building some friction and filtering out the noise, that’s your defence and your cure”
204 replies · 2.6K reposts · 31.2K likes · 2.1M views
Poetica retweeted
Unsloth AI @UnslothAI
Transform PDFs, CSV, DOCX, TXT or any file into structured synthetic datasets via Unsloth Data Recipes. Build and edit your datasets visually via a graph-node workflow and use them for fine-tuning. Powered by @NVIDIA DataDesigner.
[GIF]
3 replies · 10 reposts · 101 likes · 9.6K views
Unsloth AI @UnslothAI
Introducing Unsloth Studio ✨ A new open-source web UI to train and run LLMs.
• Run models locally on Mac, Windows, Linux
• Train 500+ models 2x faster with 70% less VRAM
• Supports GGUF, vision, audio, embedding models
• Auto-create datasets from PDF, CSV, DOCX
• Self-healing tool calling and code execution
• Compare models side by side + export to GGUF
GitHub: github.com/unslothai/unsl…
Blog and Guide: unsloth.ai/docs/new/studio
Available now on Hugging Face, NVIDIA, Docker and Colab.
218 replies · 843 reposts · 5.1K likes · 1.6M views
Poetica @NeuralNovel
@UnslothAI woah!🔥data designer looks super fun
0 replies · 0 reposts · 2 likes · 118 views
Poetica retweeted
Unsloth AI @UnslothAI
We collaborated with @NVIDIA to teach you about Reinforcement Learning and RL environments. Learn:
• Why RL environments matter + how to build them
• When RL is better than SFT
• GRPO and RL best practices
• How verifiable rewards and RLVR work
Blog: unsloth.ai/blog/rl-enviro…
[image]
26 replies · 248 reposts · 1.7K likes · 87.4K views
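For context on the GRPO mentioned above: it scores each sampled completion against the mean and spread of its own group, replacing a learned value model. A minimal illustrative sketch (my own code, not the linked blog's):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: normalize each completion's reward by the
    mean and std of its sampling group for the same prompt."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four completions for one prompt, scored by a verifiable reward (e.g. pass/fail):
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
```

Completions that beat their group's mean get positive advantage, the rest negative, so the policy gradient pushes toward the better samples without any critic network.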
Poetica retweeted
kache @yacineMTB
5.4 xhigh
[image]
4 replies · 3 reposts · 89 likes · 3K views
Poetica retweeted
Joël Niklaus @joelniklaus
Introducing the Synthetic Data Playbook: we generated over 1T tokens in 90 experiments with 100k+ GPU hours to figure out what makes good synthetic data and how to generate it at scale. huggingface.co/spaces/Hugging…
[image]
28 replies · 215 reposts · 1.4K likes · 120K views
Poetica retweeted
llm_enjoyer @LLMenjoyer
when people at my company ask how I'm hitting all these 16 node jobs
3 replies · 19 reposts · 372 likes · 12.3K views
Poetica retweeted
vLLM @vllm_project
🚀 vLLM v0.17.0 is here! 699 commits from 272 contributors (48 new!) This is a big one. Highlights:
⚡ FlashAttention 4 integration
🧠 Qwen3.5 model family with GDN (Gated Delta Networks)
🏗️ Model Runner V2 maturation: Pipeline Parallel, Decode Context Parallel, Eagle3 + CUDA graphs
🎛️ New --performance-mode flag: balanced / interactivity / throughput
💾 Weight Offloading V2 with prefetching
🔀 Elastic Expert Parallelism Milestone 2
🔧 Quantized LoRA adapters (QLoRA) now loadable directly
[image]
22 replies · 86 reposts · 949 likes · 61.4K views
Poetica retweeted
Ben Burtenshaw @ben_burtenshaw
agentic RL hackathon this weekend! mentors from @PyTorch, @huggingface, and @UnslothAI will guide you to build agentic environments to win from a $100K prize pool 🏆 + free compute and token credits just for attending! lock in Mar 7-8 in SF.
[image]
19 replies · 22 reposts · 186 likes · 22.7K views
Poetica retweeted
Daniel Han @danielhanchen
Qwen3.5 can now be fine-tuned locally with Unsloth via LoRA using only 10GB VRAM. You can then export to GGUF for llama.cpp, Ollama, LM Studio inference! Unsloth also supports Qwen3.5-35B-A3B LoRA, using ~74GB VRAM (1x H100)
Unsloth AI @UnslothAI

You can now fine-tune Qwen3.5 with our free notebook! 🔥 You just need 5GB VRAM to train Qwen3.5-2B LoRA locally! Unsloth trains Qwen3.5 1.5x faster with 50% less VRAM. GitHub: github.com/unslothai/unsl… Guide: unsloth.ai/docs/models/qw… Qwen3.5-4B Colab: colab.research.google.com/github/unsloth…

5 replies · 24 reposts · 263 likes · 22.4K views