Igor Zubrycki

494 posts

@IgorZub

Makes robots and programs for them. Assistant professor at Lodz University of Technology. Interested in Human-Robot interfaces, AI and Soft Robotics

Lodz · Joined May 2011
359 Following · 123 Followers
ModelScope @ModelScope2022
Big leap in Object Detection with Qwen3.6-35B-A3B! 🚀 We are excited to showcase the new "Instruction-Oriented Object Detection" capability on ModelScope. Demo 👉 modelscope.ai/studios/Qwen/O…
📈 Performance: ODinW score jumped from 42.6 (Qwen3.5) to 50.8!
🧠 Beyond standard detection, Qwen3.6 leverages LLM reasoning to:
1️⃣ Identify fine-grained objects, such as PCB components and reference designators.
2️⃣ Detect small and occluded cars in aerial-view parking lots.
3️⃣ Handle dense scenes with multi-scale objects.
🤖 Download model: modelscope.ai/models/Qwen/Qw…
#Qwen36 #ObjectDetection #ComputerVision #OpenSource
[3 images attached]
8 replies · 46 reposts · 350 likes · 60.1K views
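The demo and model URLs above are truncated in the original, so they are left as-is. As a rough illustration of how output from an instruction-oriented detector might be consumed, here is a minimal sketch. It assumes the model returns bounding boxes as a JSON array embedded in its free-text reply, a common convention for VLM-based detectors but an assumption here, not Qwen3.6's documented format; the reply string is invented:

```python
import json
import re

def parse_detections(reply: str):
    """Extract bounding boxes from a model's free-text reply.

    Assumes the reply contains a JSON array of objects with
    'label' and 'bbox' ([x1, y1, x2, y2]) fields -- an assumed
    output convention, not a documented Qwen3.6 schema.
    """
    match = re.search(r"\[.*\]", reply, re.DOTALL)
    if match is None:
        return []
    return json.loads(match.group(0))

# Invented example reply for an instruction like
# "Find every capacitor on this PCB":
reply = """Here are the detections:
[{"label": "capacitor C12", "bbox": [34, 50, 58, 71]},
 {"label": "capacitor C13", "bbox": [90, 48, 112, 70]}]"""

for det in parse_detections(reply):
    print(det["label"], det["bbox"])
```

The regex grabs the span from the first `[` to the last `]`, which is forgiving of any prose the model wraps around the JSON.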
Ivan @ivanbokii
It’s quite unfortunate that GEPA Optimize Anything didn’t get enough traction, while very similar ideas promoted by Karpathy’s autoresearch and Lütke’s pi-autoresearch got so much traction, despite being less general.
11 replies · 12 reposts · 124 likes · 12.6K views
Igor Zubrycki retweeted
Bilawal Sidhu @bilawalsidhu
Probably the most current look at Palantir’s Maven Smart System software. Here’s the DoW’s Chief AI Officer showing how it works:
379 replies · 1.2K reposts · 9.5K likes · 2.5M views
Igor Zubrycki retweeted
Tymofiy Mylovanov @Mylovanov
Ukraine has started daily combat use of AI attack drones. Once launched, they find targets, track them, and strike on their own, even after jamming cuts the pilot’s signal. This is how autonomous killing entered the war — NYT. 1/
84 replies · 848 reposts · 5.2K likes · 276.2K views
Igor Zubrycki retweeted
Logan Kilpatrick @OfficialLoganK
Reply here or DM me :) Will add folks in as much as we can.
2.2K replies · 13 reposts · 941 likes · 78.6K views
Logan Kilpatrick @OfficialLoganK
Big upgrade to vibe coding in @GoogleAIStudio lands in Jan, but if you want to test early… 👇🏻
3.8K replies · 188 reposts · 5.5K likes · 553.8K views
Igor Zubrycki retweeted
Tuo Liu @Robo_Tuo
These 75+ humanoid companies around the world really show just how massive this humanoid robotics wave is. Again, I’m just excited and grateful to be alive to witness what might be the biggest tech revolution in human history. It’s just the beginning, buckle up please.
[image attached]
45 replies · 138 reposts · 653 likes · 38.9K views
Igor Zubrycki retweeted
Carlos E. Perez @IntuitMachine
You know how some people seem to have a magic touch with LLMs? They get incredible, nuanced results while everyone else gets generic junk. The common wisdom is that this is a technical skill: a list of secret hacks, keywords, and formulas you have to learn. But a new paper suggests this isn't the main thing. The skill that makes you great at working with AI isn't technical. It's social.

Researchers (Riedl & Weidmann) analyzed how 600+ people solved problems alone vs. with an AI. They used a statistical method to isolate two different things for each person: their 'solo problem-solving ability' and their 'AI collaboration ability'.

Here's the reveal: the two skills are NOT the same. Being a genius who can solve problems in your own head is a totally different, measurable skill from being great at solving problems with an AI partner. Plot twist: the two abilities are barely correlated.

So what IS this 'collaboration ability'? It's strongly predicted by a person's Theory of Mind (ToM): your capacity to intuitively model another agent's beliefs, goals, and perspective. To anticipate what they know, what they don't, and what they need.

In practice, this looks like:
- Anticipating the AI's potential confusion
- Providing helpful context it's missing
- Clarifying your own goals ("Explain this like I'm 15")
- Treating the AI like a (somewhat weird, alien) partner, not a vending machine.

This is where it gets strange. A user's ToM score predicted their success when working WITH the AI... but had ZERO correlation with their success when working ALONE. It's a pure collaborative skill.

It goes deeper. This isn't just a static trait. The researchers found that even moment-to-moment fluctuations in a user's ToM, like when they put more effort into perspective-taking on one specific prompt, led to higher-quality AI responses for that turn.

This changes everything about how we should approach getting better at using AI. Stop memorizing prompt "hacks." Start practicing cognitive empathy for a non-human mind.

Try this experiment. Next time you get a bad AI response, don't just rephrase the command. Stop and ask: "What false assumption is the AI making right now?" "What critical context am I taking for granted that it doesn't have?" Your job is to be the bridge.

This also means we're probably benchmarking AI all wrong. The race for the highest score on a static test (MMLU, etc.) is optimizing for the wrong thing. It's like judging a point guard only on their free-throw percentage. The real test of an AI's value isn't its solo intelligence. It's its collaborative uplift: how much smarter does it make the human-AI team? That's the number that matters. This paper gives us a way to finally measure it.

I'm still processing the implications. The whole thing is a masterclass in thinking clearly about what we're actually doing when we talk to these models.

Paper: "Quantifying Human-AI Synergy" by Christoph Riedl & Ben Weidmann, 2025.
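The statistical pattern the thread describes (two abilities that are nearly uncorrelated with each other, with ToM tracking only one of them) can be illustrated on synthetic data. Everything below is invented for illustration: the sample size matches the "600+ people" mentioned above, but the numbers reproduce nothing from the actual Riedl & Weidmann dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600  # roughly the sample size cited in the thread

# Two independent latent abilities (the paper's central claim):
solo = rng.normal(size=n)     # solo problem-solving ability
collab = rng.normal(size=n)   # AI collaboration ability

# Theory of Mind is modeled as tracking collaboration ability
# (the 0.7 / 0.3 mix is arbitrary, chosen only for illustration):
tom = 0.7 * collab + 0.3 * rng.normal(size=n)

r_solo_collab = np.corrcoef(solo, collab)[0, 1]  # near zero
r_tom_collab = np.corrcoef(tom, collab)[0, 1]    # strongly positive
r_tom_solo = np.corrcoef(tom, solo)[0, 1]        # near zero

print(f"solo vs collab: {r_solo_collab:+.2f}")
print(f"ToM  vs collab: {r_tom_collab:+.2f}")
print(f"ToM  vs solo:   {r_tom_solo:+.2f}")
```

With independent draws, the solo/collab and ToM/solo correlations sit near zero (sampling noise on the order of 1/sqrt(600) ≈ 0.04), while ToM/collab comes out strongly positive: exactly the "pure collaborative skill" signature the thread points at.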
[image attached]
225 replies · 390 reposts · 2.5K likes · 346.3K views