Bruk
10.1K posts

Bruk
@bruk_phi
Robotics Engineer | Artificial Intelligence | @NEURARobotics |
Germany · Joined February 2012
584 Following · 521 Followers
Bruk reposted

Neuralink enables those who have lost the ability to speak to speak again
Katie Pavlich@KatiePavlich
Last night I spoke with Brad Smith @ALScyborg, the first person with ALS to have @neuralink implanted. He has his voice back through AI and can even make dad jokes again. Absolutely incredible technology changing lives and bettering humanity. Thank you @elonmusk!

@martinplaut The Convention says "in whole or in part"; the legal test is intent to destroy, not whether destruction was completed. Survivors existing doesn't disprove genocidal intent, and you know this very well

Why I do not use the term “genocide” for the crimes committed in Gaza or Tigray – terrible as they are
martinplaut.com/2026/04/06/why…
Bruk reposted

Textile Robot Skin Turns Fabric into a Nervous System
Robot skin is getting touchier.
The Chinese company JQ Industries just introduced a fabric-based electronic skin (e-skin) to give robots a humanlike sense of touch. Made from plant-derived materials, it behaves like fabric and can bend and conform like clothing without restricting movement. So basically, Lululemon with embedded sensors.
The skin is woven with conductive fibers that detect pressure, contact, and force distribution. The system uses sensor gloves for the hands and pressure-sensing soles for the feet. The e-skin acts as a distributed nervous system: it transmits data from thousands of sensing points to the robot’s artificial intelligence for instant decision-making. It integrates with simulation environments like NVIDIA Isaac Sim and MuJoCo for training AI models using tactile feedback.
Shanghai-based JQ Industries demonstrated the capabilities using a Unitree G1 humanoid robot, but the technology is intended for use across platforms. The company says the enhanced sensitivity unlocks more precise grip control and better handling of delicate objects. They say it also makes robots safer around humans. JQ’s ultimate goal is to create a closed loop so next-gen robots can learn from touch the same way humans do.
JQ says its technology differs from traditional electronic skin that relies on delicate thin film that’s hard to produce and tends to degrade with bending. It’s produced with textile manufacturing techniques like modified industrial weaving and roll-to-roll production.
Launched in early 2024, JQ Industries operates as Weihai Juqiao Industrial Technology. The early-stage startup is scaling production of the tactile-sensing technology with at least $2.8 million raised to date.
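To make the "distributed nervous system" idea concrete, here is a minimal sketch of a tactile feedback loop driven by a grid of pressure readings. The sensor layout, thresholds, and proportional controller are illustrative assumptions, not JQ Industries' actual API.

```python
import numpy as np

def grip_adjustment(pressure_grid: np.ndarray,
                    target_force: float = 2.0,
                    gain: float = 0.1) -> float:
    """Return a grip-force correction from a (rows, cols) pressure map."""
    total_force = pressure_grid.sum()            # aggregate contact force
    contact_area = (pressure_grid > 0.05).sum()  # taxels currently in contact
    if contact_area == 0:
        return gain * target_force               # no contact yet: keep closing
    # Proportional correction toward the target contact force
    return gain * (target_force - total_force)

# Example: a 4x4 taxel patch with light contact in one corner
grid = np.zeros((4, 4))
grid[0, 0] = 0.5
print(f"grip correction: {grip_adjustment(grid):+.3f}")
```

In a real system the correction would feed the gripper's torque command each control tick, and thousands of such taxels would stream into the policy rather than a single 4x4 patch.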

It’s happened.
Mac Studio is here. Gemma 4 31b @GoogleDeepMind installed, chatting with my main @openclaw for $0 in token expenses now...
I've burned $5-6k on tokens on my crazy ideas over the past few months, so this Mac Studio should pencil out for me within 3 months or so 🤓
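The break-even claim checks out as a back-of-envelope calculation. The Mac Studio price below is an assumption (the post doesn't give one), and the token spend is taken as the midpoint of the stated $5-6k over roughly three months.

```python
# Assumed figures: ~$5.5k token spend over ~3 months, Mac Studio at ~$4k.
token_spend_per_month = 5500 / 3   # ≈ $1,833/month on API tokens
mac_studio_cost = 4000             # hypothetical hardware price

months_to_break_even = mac_studio_cost / token_spend_per_month
print(f"break-even in ~{months_to_break_even:.1f} months")
```

At that burn rate the hardware pays for itself in a little over two months, consistent with the "3 months or so" estimate.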

We started the Asimov project 93 days ago to build an open-source humanoid. Since then, we've designed the motorized legs, worked on the control policies, sourced the parts, and assembled them. The full body is planned to be finalized by March 2026.

Zvonimir Fras@ZvonimirFras
@asimovinc 92 days starting from nothing?!


Today I was tagged in a deeply disturbing video below of a Kenyan who was lured to Russia under false job promises and forced into the war. His name is Francis Ndungu from Thika.
In the video, Russian forces tied an explosive device to him, verbally abused him, and told him "today is his day." He was then forcefully led forward and pushed toward the frontline, where Ukrainian forces are believed to be positioned.
In effect, he was being used as a kamikaze / s*"* bomber, or coerced into an extremely dangerous situation with little chance of survival. Even if the Ukrainians wanted to rescue him, they couldn't.
This is not an isolated case. It is part of a wider pattern of Africans being recruited through deception.
In December, Francis Ndungu released a video from Russia warning Africans not to go there for jobs, explaining that many were promised driving work only to be coerced into military service.
I have warned Kenyans about this before. Many dismissed it. Today, I have information on over 20 Africans currently on the front line, seeking help.
What's sad is these men have families, children and parents, yet poverty continues to be exploited.
If you know anyone planning to go to Russia or Ukraine for "jobs," stop them. Silence is what allows this to continue.
On the left is the video of him tied with an explosive this week

The launch, flip and landing all seem perfectly executed
Until the waist joints suddenly buckled under the landing load
Likely an over-torque situation, or ambiguous torque readings from the parallel waist mechanism
Remarkable that the control system was still able to recover
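The failure mode described above can be sketched as a torque command exceeding a joint's rated limit under impact load. The limits and readings here are made-up numbers for illustration, not specs of any real robot.

```python
def clamp_torque(commanded: float, rated_limit: float) -> tuple[float, bool]:
    """Clamp a torque command to the joint's rated limit; flag saturation."""
    saturated = abs(commanded) > rated_limit
    clamped = max(-rated_limit, min(rated_limit, commanded))
    return clamped, saturated

# Landing impact demands 180 N·m on a waist joint rated for 120 N·m
torque, hit_limit = clamp_torque(180.0, 120.0)
print(torque, hit_limit)  # 120.0 True
```

When several such joints saturate at once in a parallel mechanism, the controller can no longer attribute the measured load to individual actuators, which is the "ambiguous torque readings" problem.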
Mario Bollini@mario_bollini
That’s a wrap! Thanks for a great week.

Stop spending hours on slides and start focusing on your ideas! 📚
Let’s be honest: we love to present, but the hours spent formatting PowerPoint slides can be exhausting. That’s why I’m excited to share Abugida.
Abugida is an AI-powered, multi-lingual presentation maker that does the heavy lifting for you:
From Title to Slides: Just give it a topic, and it generates a structured, professional presentation in seconds.
From Document to Presentation: Have a long research paper or lesson plan? Paste it in, and Abugida converts it into a visual deck.
Break Language Barriers: It’s fully multi-lingual! Create your content in your preferred language effortlessly.
Whether you are prepping for a lecture or a workshop, this is the ultimate tool to reclaim your time.
Behind the scenes, our AI team is working day and night to make your life easier! 💡 We are constantly refining our tools so you can focus on what matters most—sharing your message.
✨ Try it for free here: lnkd.in/eQMJMvxN
#EdTech #TeachingTips #AI #Abugida #EducationInnovation #PresentationSkills


Asimov isn't walking great here, but this is a win for us. Its gait matches the simulation, including the rightward drift! We're closing the sim2real gap.
Asimov@asimovinc
Day 107 of building Asimov, an open-source humanoid.

30 years of iteration to make this 😂😂😂
Humanoid Scott@GoingBallistic5
If this is someone's idea of a cruel joke, it isn't funny

Building upon SimpleVLA-RL, we have implemented real-world RL on long-horizon dexterous tasks and witnessed a non-trivial (~300% relative) performance improvement over the SFT model, along with surprising auto-recovery capabilities. Blog coming soon. The entire process uses very little data and training compute—basically costing no more than a single robotic arm—hinting that real-world generality for machines is actually within sight.
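For clarity, "~300% relative improvement" is measured against the baseline, not in absolute points. The success rates below are made-up numbers (the post gives none), just to show the arithmetic.

```python
# Hypothetical success rates to illustrate the relative-improvement formula
sft_success = 0.15   # assumed SFT baseline
rl_success = 0.60    # assumed after real-world RL

relative_gain = (rl_success - sft_success) / sft_success
print(f"{relative_gain:.0%} relative improvement")  # 300%
```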

@HaoranGeng2 what would be the advantage over using human datasets instead?

This might be my "aha moment" of 2025:
With our new robotics foundation model, Large Video Planner, we train a robot planner from large-scale video data. It works so well that we can use it directly for robot planning.
Two moments really blew my mind:
First: right after our model training, I fed in an image of my hand and my MacBook and asked it to close the laptop—when the Apple logo appeared exactly as the lid came down, I couldn’t help but feel impressed (and excited).
Second demo: picking up the brush — check the 3D consistency. Even the brush shadow is remarkably accurate, and it can even infer what the Franka arm (at the corner) should look like.

@GoingBallistic5 the change on the elbow is just illogical, I can't justify it 😃
Bruk reposted

Everyone's freaking out about vibe coding. In the holiday spirit, allow me to share my anxiety on the wild west of robotics. 3 lessons I learned in 2025.
1. Hardware is ahead of software, but hardware reliability severely limits software iteration speed.
We've seen exquisite feats of engineering like Optimus, e-Atlas, Figure, Neo, G1, etc. Our best AI has not squeezed all the juice out of this frontier hardware. The body is more capable than what the brain can command. Yet babysitting these robots demands an entire operations team. Unlike humans, robots don't heal from bruises. Overheating, broken motors, and bizarre firmware issues haunt us daily. Mistakes are irreversible and unforgiving.
My patience was the only thing that scaled.
2. Benchmarking is still an epic disaster in robotics.
LLM normies think MMLU & SWE-Bench are common sense. Hold your 🍺 for robotics. No one agrees on anything: hardware platform, task definition, scoring rubrics, simulator, or real-world setups. Everyone is SOTA, by definition, on the benchmark they define on the fly for each news announcement. Everyone cherry-picks the nicest-looking demo out of 100 retries.
We gotta do better as a field in 2026 and stop treating reproducibility and scientific discipline as second-class citizens.
3. VLM-based VLA feels wrong.
VLA stands for "vision-language-action" model and has been the dominant approach for robot brains. Recipe is simple: take a pretrained VLM checkpoint and graft an action module on top. But if you think about it, VLMs are hyper-optimized to hill-climb benchmarks like visual question answering. This implies two problems: (1) most parameters in VLMs are for language & knowledge, not for physics; (2) visual encoders are actively tuned to *discard* low-level details, because Q&A only requires high-level understanding. But minute details matter a lot for dexterity.
There's no reason for VLA performance to scale as VLM parameters scale. The pretraining is misaligned. A video world model seems to be a much better pretraining objective for a robot policy. I'm betting big on it.
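The "graft an action module onto a VLM" recipe criticized above can be sketched schematically. Everything here is a placeholder: the "VLM" is a random embedding stand-in, not a real checkpoint, and the shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_vlm_embed(image: np.ndarray, text_tokens: np.ndarray) -> np.ndarray:
    """Stand-in for a pretrained VLM: returns a pooled 512-d embedding."""
    return rng.standard_normal(512)

class ActionHead:
    """Tiny MLP mapping the pooled VLM embedding to a 7-DoF arm action."""
    def __init__(self, dim_in=512, dim_hidden=128, dim_out=7):
        self.w1 = rng.standard_normal((dim_in, dim_hidden)) * 0.01
        self.w2 = rng.standard_normal((dim_hidden, dim_out)) * 0.01

    def __call__(self, z: np.ndarray) -> np.ndarray:
        h = np.maximum(z @ self.w1, 0.0)  # ReLU
        return h @ self.w2                # continuous action vector

head = ActionHead()
z = fake_vlm_embed(np.zeros((224, 224, 3)), np.array([1, 2, 3]))
action = head(z)
print(action.shape)  # (7,)
```

The structural complaint is visible even in this toy: the action head only ever sees the pooled embedding, so any low-level visual detail the VLM encoder discarded is unrecoverable downstream.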


Crazy we can learn to track the entire AMASS dataset in 24h.
For MaskedMimic, 1.5 years ago, it took us 2 weeks to train the tracker and another MONTH(!!!) to train the generative model.
Now it takes 24h for the tracker and another 24h for the generative model 🤯
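The numbers in the post imply roughly a 22x end-to-end speedup, taking "2 weeks" and "a month" literally as 14 and 30 days (an approximation, since the post gives round figures).

```python
from datetime import timedelta

# MaskedMimic, ~1.5 years ago: ~2 weeks (tracker) + ~1 month (generative model)
old = timedelta(weeks=2) + timedelta(days=30)
# Now: ~24h per stage, two stages
new = timedelta(hours=24) * 2

speedup = old / new
print(f"~{speedup:.0f}x faster end to end")  # ~22x faster end to end
```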
Chen Tessler@ChenTessler
20 hours -- 99.93% PPO is all you need. 4 GPUs with 8k envs each. (Slightly better parameters than the current default in ProtoMotions, will update after verifying results are stable)