___raf stahelin___

653 posts


@___rafrafraf___

photographer | ai | fashion | art

Paris, France · Joined March 2017
622 Following · 103 Followers
Pinned Tweet
___raf stahelin___@___rafrafraf___·
Photography by me for Nothing Ear, creative "the beach assembly", model nyawargak_gatluak
0 replies · 2 reposts · 5 likes · 648 views
Kiri@Kyrannio·
I've been doing a few Grok Imagine character-reference tests in NoSpoon lately, heh. Image-to-vid is really, really great with Imagine. Can't wait until they have multi-reference available over API, too!

(For those of you who don't know, NoSpoon is a fully agentic video platform that can run all sorts of video models, and I recently incorporated Imagine. It runs on absolutely zero input to create its own stories, up to 25 minutes long, or you can input custom parameters like characters or screenplays, which you can optionally upload.)

Massive new upgrades are incoming to NoSpoon by end of week! Including design overhauls and more.
11 replies · 7 reposts · 44 likes · 8K views
___raf stahelin___@___rafrafraf___·
@mitte_ai isn't Seedance blocked by a cease and desist 🚫? I was thinking of signing up, but I wonder if this is the real model. There's no mention of an official partner.
0 replies · 0 reposts · 0 likes · 29 views
mitte.ai@mitte_ai·
here's how, step by step: > open mitte.ai/flow/seedance-2 > write your scene > click the Camera icon > choose one or multiple camera techniques. that's it. watch 👇🏼
3 replies · 0 reposts · 18 likes · 4.2K views
mitte.ai@mitte_ai·
introducing camera controls for Seedance 2. you're the director now. write your prompt. choose from 32 camera techniques. mitte handles the rest for you. examples + workflow 👇🏼
55 replies · 42 reposts · 356 likes · 36K views
mitte.ai@mitte_ai·
new preset dropped: 9 Expressions. drop any photo, get nine frames back — each one a different version of the same person. example & workflow 👇🏼
5 replies · 2 reposts · 18 likes · 1.6K views
rama@bedok77·
@vikhyatk @grok how does the latest moondream3 segmentation compare to SAM3?
2 replies · 0 reposts · 4 likes · 431 views
___raf stahelin___@___rafrafraf___·
@kylebrussell hook code? Even just the structure:
∙ Where does the hook live? (~/.claude/hooks/ or similar?)
∙ What's the hook trigger? (post-response? streaming?)
∙ How does it capture the full turn text?
0 replies · 0 reposts · 0 likes · 13 views
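The three questions above map onto Claude Code's hooks system: hooks are registered in `.claude/settings.json` (project) or `~/.claude/settings.json` (user), a Stop-style event fires after the response is complete rather than mid-stream, and the hook command receives the event as JSON on stdin. A minimal sketch under those assumptions — the `transcript_path` field and the per-line message shape are guesses from observed payloads, not a documented contract:

```python
import json
import pathlib

def last_turn_text(payload: dict) -> str:
    """Pull the assistant text of the final turn out of a session transcript.

    `payload` is the JSON object a hook receives on stdin. The
    `transcript_path` key and the per-line {"role", "text"} shape are
    assumptions here, not a documented contract."""
    path = pathlib.Path(payload["transcript_path"])
    lines = [ln for ln in path.read_text().splitlines() if ln.strip()]
    turns = [json.loads(ln) for ln in lines]  # transcript as JSONL, one turn per line
    for turn in reversed(turns):              # walk back to the newest assistant turn
        if turn.get("role") == "assistant":
            return turn.get("text", "")
    return ""

# In the hook script itself, something like:
#   payload = json.load(sys.stdin)   # event JSON piped in by Claude Code
#   print(last_turn_text(payload))
```

The lazy stdin read is deliberate: keeping the parsing in a pure function makes the hook testable without a live session.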
Kyle Russell@kylebrussell·
This morning I turned Claude Code into Jarvis (pardon my dusty screen)
18 replies · 5 reposts · 56 likes · 20.4K views
Kyle Russell@kylebrussell·
Really excited to see how fast I can get this with a fine-tuned local model
3 replies · 0 reposts · 11 likes · 795 views
Austin@nfteague·
@shrimp_economys @jarrodwatts @WisprFlow It's a matter of what you're willing to pay to reduce the inconveniences of a free product (accuracy, features). IIRC, Wispr can connect to your codebase to know file names, classes, functions, etc., so a prompt can have greater context and therefore (hopefully) a better outcome.
2 replies · 0 reposts · 2 likes · 1.4K views
Jarrod Watts@jarrodwatts·
Introducing Claude Speech-to-text! A Claude Code plugin that lets you prompt using your voice. It uses Whisper to transcribe your voice locally on your machine - all within Claude Code. Easy installation guide below ↓
134 replies · 149 reposts · 2.2K likes · 185.2K views
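The plugin's core idea — transcribe locally with Whisper, then feed the text back in as a prompt — can be sketched roughly. This assumes `openai-whisper` is installed (`pip install openai-whisper`) with ffmpeg on PATH; `transcribe_local` and `join_segments` are illustrative names, not the plugin's actual code:

```python
def join_segments(segments: list) -> str:
    """Stitch Whisper's per-segment output into one prompt string."""
    return " ".join(seg["text"].strip() for seg in segments)

def transcribe_local(audio_path: str, model_size: str = "base") -> str:
    """Transcribe an audio file entirely on-device with openai-whisper.

    Model weights are downloaded once and cached; the audio itself never
    leaves the machine."""
    import whisper  # lazy import: pip install openai-whisper (needs ffmpeg on PATH)
    model = whisper.load_model(model_size)
    result = model.transcribe(audio_path)
    return join_segments(result["segments"])
```

Whisper's `transcribe` already returns a joined `result["text"]`; the segment-level pass is shown because a voice plugin may want timestamps or per-segment cleanup before building the prompt.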
Cristóbal Valenzuela@c_valenzuelab·
The Vending Machine Paradox: When all constraints are removed, decisions might become harder, not easier. A machine like Gen-4.5 that can give you anything forces you to confront what you actually want. Something most people have never clearly defined.
20 replies · 26 reposts · 275 likes · 85.3K views
Numman Ali@nummanali·
Issues with Opus 4.5 in Claude Code have been officially acknowledged. It's easy to be annoyed and throw blame, but this is true ownership. If you value the impact Opus 4.5 has had on your work, be helpful and provide feedback. Don't be a complainer, be a supporter.
Thariq@trq212

We've received some feedback about a potential degradation of Opus 4.5 specifically in Claude Code. We're taking this seriously: we're going through every line of code changed and monitoring closely. In the meantime please submit any transcripts with issues through /feedback

20 replies · 10 reposts · 398 likes · 41.6K views
Photogenic Weekend@PhotogenicWeekE·
Caught a glimpse of her in person, just briefly (lol)
2 replies · 0 reposts · 3 likes · 771 views
Photogenic Weekend@PhotogenicWeekE·
Stopped by on the way home from Shibuya. Crowded, as expected (lol). Yeah, AI-generated images still have a long way to go, huh w #涼森れむ
1 reply · 0 reposts · 22 likes · 1.6K views
___raf stahelin___@___rafrafraf___·
@mxvdxn @jerotter Dan, do you use ComfyUI at all, or just the Grok UI on X? Why isn't Grok available via API?
1 reply · 0 reposts · 0 likes · 85 views
DAN@mxvdxn·
I actually upgraded my X account to Premium+ so I can play more with Grok Imagine. Here's what I believe: with AI, cultural nuance, racial nuance, and freedom of expression are crucial, which means the training data is important. In terms of capital, Elon Musk has the capacity: he has the funding, the GPUs, and all these scientists and programmers. So in terms of datasets, it's about as good as what Google has. That's why I put so much on Grok and the X company itself.
3 replies · 0 reposts · 4 likes · 6.8K views
DAN@mxvdxn·
Alibaba dropped a new image model called Z-Image. Is it good? Well, let me make it easy for you (especially if you're looking for specific things like me 😆). Top: Grok Imagine. Bottom: Z-Image. My personal take: Z-Image is okay, but stuck in the middle, committing to nothing, vibing in permanent mid. This is why I double down on Grok Imagine: when it decides to go feral, it actually goes feral. Z-Image feels like it wants applause without taking risks. Anyway, that's just my personal opinion. What matters is that you test it for yourself.
59 replies · 105 reposts · 2.5K likes · 158.8K views
apolinario 🌐@multimodalart·
@___rafrafraf___ On the diffusers release blog you can check how to train with QLoRA using the diffusers training script. The @ostrisai trainer also has a low-VRAM regime. I'm sure Nunchaku & Kohya & others will release something soon too.
1 reply · 0 reposts · 0 likes · 35 views
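For intuition on why QLoRA fits in low VRAM: the base weights sit frozen in 4-bit while only small rank-r adapters train in full precision. A minimal sketch assuming `peft` is installed; the `target_modules` names below are illustrative attention projections, not necessarily what the diffusers training script actually targets:

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """A rank-r LoRA adapter on a (d_in x d_out) linear layer trains two
    low-rank factors, A (d_in x r) and B (r x d_out)."""
    return rank * (d_in + d_out)

def qlora_adapter_config(rank: int = 16):
    """Sketch of the adapter half of a QLoRA setup. Module names are
    illustrative, not the script's exact targets."""
    from peft import LoraConfig  # pip install peft
    return LoraConfig(
        r=rank,
        lora_alpha=rank * 2,  # common heuristic: alpha = 2 * r
        target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    )
```

For scale: a rank-16 adapter on one 4096x4096 projection trains 16 * (4096 + 4096) = 131,072 parameters, against roughly 16.8M frozen base weights in that layer.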
___raf stahelin___@___rafrafraf___·
@multimodalart Nunchaku to the rescue, please. I wonder if they're working on it yet. Hoping LoRA training makes its way into production soon.
1 reply · 0 reposts · 0 likes · 25 views
apolinario 🌐@multimodalart·
Exactly, that is NF4! And yes, it doesn't reduce the quality much. I noticed that the text encoder in 4-bit reduces quality more than the transformer does. This particular configuration is available in diffusers, but I think Comfy has added 8-bit quantization, and there are some GGUFs too.
1 reply · 0 reposts · 0 likes · 38 views
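The trade-off described above (transformer in NF4, text encoder kept at higher precision) can be sketched with diffusers' own `BitsAndBytesConfig`. `FluxTransformer2DModel` and `repo_id` are stand-ins for whichever model is being discussed, and the 0.5-byte-per-parameter estimate ignores quantization constants and activations:

```python
def nf4_weight_gb(n_params: int) -> float:
    """Rough weight footprint under 4-bit NF4: 4 bits (0.5 byte) per
    parameter, ignoring quantization constants and activations."""
    return n_params * 0.5 / 1e9

def load_transformer_nf4(repo_id: str):
    """Sketch: quantize only the transformer to NF4 and leave the text
    encoder at higher precision, matching the observation that a 4-bit
    text encoder costs more quality. `repo_id` is a placeholder."""
    import torch
    from diffusers import BitsAndBytesConfig, FluxTransformer2DModel
    quant = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls still run in bf16
    )
    return FluxTransformer2DModel.from_pretrained(
        repo_id, subfolder="transformer", quantization_config=quant
    )
```

For a 12B-parameter transformer that puts the weights near 6 GB, versus roughly 24 GB in bf16, which is why only the transformer half of the pipeline is worth quantizing this aggressively.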
___raf stahelin___@___rafrafraf___·
@multimodalart Btw, 4-bit doesn't look bad at all. Is that running on the NF4 quant? Is it available only in diffusers for the moment?
1 reply · 0 reposts · 0 likes · 13 views
Unsloth AI@UnslothAI·
You can now run Unsloth GGUFs locally via Docker! Run LLMs on Mac or Windows with one line of code or no code at all! We collabed with Docker to make Dynamic GGUFs available for everyone! Just run: docker model run ai/gpt-oss:20B Guide: docs.unsloth.ai/models/how-to-…
10 replies · 104 reposts · 564 likes · 93.4K views
naundob@naundob·
@Ali_TongyiLab Cool! Next make the actual faces look like real people 🙏
1 reply · 0 reposts · 1 like · 829 views
Tongyi Lab@Ali_TongyiLab·
Want your portraits to have that chef's-kiss natural skin texture? 👩‍🎨 A huge shoutout to tlennon-ie for building and sharing this fantastic tool: huggingface.co/tlennon-ie/qwe… It's fine-tuned on Qwen-Image-Edit-2509 to add an extra layer of stunning realism and detail to human skin. Go try it now!
6 replies · 29 reposts · 241 likes · 18.5K views