
@gabriberton I usually try LoRA first, but it depends on the task and how optimized the code already is. For 3D models (VGGT/DUSt3R), I had a smoother experience just fine-tuning everything.
David Shatwell

Well, after all, AdaptFormer was the first to bring the adapter concept to vision. In my experience across 6+ vision tasks, AdaptFormer consistently beats LoRA, which beats full fine-tuning. Someone should try AdaptFormer with LLMs; there are no architectural constraints preventing that. It is more memory- and compute-intensive, though.
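For anyone curious why there's no architectural constraint: AdaptFormer just adds a small bottleneck branch in parallel to the frozen MLP and sums the outputs, which works for any transformer block. A minimal NumPy sketch of that idea (function and variable names are mine, not from the paper's code; the frozen MLP is stubbed out):

```python
import numpy as np

def parallel_adapter(x, W_down, W_up, scale=0.1):
    """AdaptFormer-style bottleneck: down-project, ReLU, up-project, scale.
    Runs in parallel to the frozen MLP; its output is added to the MLP's."""
    h = np.maximum(x @ W_down, 0.0)  # ReLU in the low-rank bottleneck
    return (h @ W_up) * scale

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden dim, bottleneck rank (r << d, like LoRA's rank)
x = rng.standard_normal((2, d))
W_down = rng.standard_normal((d, r)) * 0.02
W_up = np.zeros((r, d))  # zero-init up-projection: adapter starts as a no-op

frozen_mlp_out = x  # stand-in for the frozen transformer MLP output
out = frozen_mlp_out + parallel_adapter(x, W_down, W_up)
print(np.allclose(out, frozen_mlp_out))  # prints True: zero-init adds nothing yet
```

Unlike LoRA, the nonlinearity means the branch can't be merged back into the frozen weights after training, which is where the extra memory/compute at inference comes from.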