Bruk
@bruk_phi
10.1K posts

Robotics Engineer | Artificial Intelligence | @NEURARobotics |

Germany · Joined February 2012
584 Following · 521 Followers
Bruk retweeted
Elon Musk@elonmusk·
Starship is the most powerful moving object ever made
7.6K replies · 16.2K reposts · 151.7K likes · 42.7M views
Bruk@bruk_phi·
@elonmusk any potential to achieve this with a non-invasive interface?
0 replies · 0 reposts · 1 like · 15 views
Bruk@bruk_phi·
@martinplaut The Convention says 'in whole or in part'; the legal test is intent to destroy, not whether destruction was completed. Survivors existing doesn't disprove genocidal intent, and you know this very well.
0 replies · 0 reposts · 2 likes · 249 views
Bruk retweeted
Mike Kalil@mikekalilmfg·
Textile Robot Skin Turns Fabric into a Nervous System

Robot skin is getting touchier. The Chinese company JQ Industries just introduced a fabric-based electronic skin (e-skin) to give robots a humanlike sense of touch. Made from plant-derived materials, it behaves like fabric and can bend and conform like clothing without restricting movement. So basically, Lululemon with embedded sensors.

The skin is woven with conductive fibers that detect pressure, contact, and force distribution. The system uses sensor gloves for the hands and pressure-sensing soles for the feet. The e-skin acts as a distributed nervous system: it transmits data from thousands of sensing points to the robot's artificial intelligence for instant decision-making. It integrates with simulation environments like NVIDIA Isaac Sim and MuJoCo for training AI models using tactile feedback.

Shanghai-based JQ Industries demonstrated the capabilities using a Unitree G1 humanoid robot, but the technology is intended for use across platforms. The company says the enhanced sensitivity unlocks more precise grip control and better handling of delicate objects. They say it also makes robots safer around humans. JQ's ultimate goal is to create a closed loop so next-gen robots can learn from touch the same way humans do.

JQ says its technology differs from traditional electronic skin, which relies on delicate thin film that's hard to produce and tends to degrade with bending. Its skin is produced with textile manufacturing techniques like modified industrial weaving and roll-to-roll production.

Launched in early 2024, JQ Industries operates as Weihai Juqiao Industrial Technology. The early-stage startup is scaling production of the tactile-sensing technology with at least $2.8 million raised to date.
5 replies · 9 reposts · 25 likes · 1.7K views
Jesse Genet@jessegenet·
It’s happened. Mac Studio is here. Gemma 4 31b @GoogleDeepMind installed, chatting with my main @openclaw for $0 in token expenses now... I've burned $5-6k on tokens on my crazy ideas over past few months, so this mac studio should pencil out for me within 3 months or so 🤓
357 replies · 415 reposts · 6.3K likes · 845.5K views
UNIQUE✨💫@U53291Unique·
His brain is a supercomputer But he's not using it for good 😒🤨✨
102 replies · 149 reposts · 924 likes · 57.6K views
Bruk@bruk_phi·
@asimovinc is it already open sourced? couldn't find a link
0 replies · 0 reposts · 0 likes · 31 views
Asimov@asimovinc·
We started the Asimov project 93 days ago to build an open-source humanoid. Since then, we've designed the motorized legs, worked on the control policies, sourced the parts, and assembled them. The full body is planned to be finalized by March 2026.
[four images]
Quoted tweet from Zvonimir Fras @ZvonimirFras: "@asimovinc 92 days starting from nothing?!"
25 replies · 67 reposts · 689 likes · 62.4K views
Bruk@bruk_phi·
@elonmusk Love how race-based exclusion is 'not ok' when it blocks your billions from SA, but mass deportations of brown/black migrants in the US are suddenly 'saving democracy.' Consistent king
0 replies · 0 reposts · 0 likes · 21 views
Sholla Ard 🇰🇪@sholard_mancity·
Today I was tagged in a deeply disturbing video below of a Kenyan who was lured to Russia under false job promises and forced into the war. His name is Francis Ndungu from Thika.

In the video, Russian forces tied an explosive device to him, verbally abuse him, and tell him "today is his day." He is then forcefully led forward and pushed toward the frontline, where Ukrainian forces are believed to be positioned. In effect, he was being used as a kamikaze / s**cide bomber, or coerced into an extremely dangerous situation with little chance of survival. Even if Ukrainians wanted to rescue him, they wouldn't.

This is not an isolated case. It is part of a wider pattern of Africans being recruited through deception. In December, Francis Ndungu released a video from Russia warning Africans not to go there for jobs, explaining that many were promised driving work only to be coerced into military service.

I have warned Kenyans about this before. Many dismissed it. Today, I have information on over 20 Africans currently on the front line, seeking help. What's sad is these men have families, children, and parents, yet poverty continues to be exploited.

If you know anyone planning to go to Russia or Ukraine for "jobs," stop them. Silence is what allows this to continue. On the left is the video of him tied with an explosive this week.
66 replies · 219 reposts · 373 likes · 35.9K views
Humanoid Scott@GoingBallistic5·
Could have also been the right knee bottoming out while attempting to absorb the landing energy: under-torqued for the required deceleration, coupled with the knee linkage mechanism losing mechanical advantage near its limits.
4 replies · 1 repost · 15 likes · 1.8K views
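Scott's point about a linkage losing mechanical advantage near its limits can be made concrete with a toy model. This is purely illustrative: the single-crank geometry and every number below are assumptions, not details of the actual robot.

```python
import math

def available_knee_torque(actuator_force_n: float,
                          crank_radius_m: float,
                          crank_angle_rad: float) -> float:
    """Joint torque a linear actuator can deliver through a simple crank:
    tau = F * r * sin(theta). Near a travel limit (theta -> 0) the
    effective moment arm r*sin(theta) collapses, so the same actuator
    force yields far less joint torque, exactly when a hard landing
    demands the most deceleration torque."""
    return actuator_force_n * crank_radius_m * math.sin(crank_angle_rad)

# Same 2 kN actuator force through a 5 cm crank:
mid_range = available_knee_torque(2000.0, 0.05, math.radians(90))  # 100.0 N*m
near_limit = available_knee_torque(2000.0, 0.05, math.radians(5))  # ~8.7 N*m
```

Under this toy geometry the deliverable torque drops by over 90% in the last few degrees of travel, consistent with a knee bottoming out under landing loads.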
Hope✍@hopegeb·
Stop spending hours on slides and start focusing on your ideas! 📚

Let's be honest: we love to present, but the hours spent formatting PowerPoint slides can be exhausting. That's why I'm excited to share Abugida, an AI-powered, multi-lingual presentation maker that does the heavy lifting for you:

From Title to Slides: Just give it a topic, and it generates a structured, professional presentation in seconds.
From Document to Presentation: Have a long research paper or lesson plan? Paste it in, and Abugida converts it into a visual deck.
Break Language Barriers: It's fully multi-lingual! Create your content in your preferred language effortlessly.

Whether you are prepping for a lecture or a workshop, this is the ultimate tool to reclaim your time. Behind the scenes, our AI team is working day and night to make your life easier! 💡 We are constantly refining our tools so you can focus on what matters most: sharing your message. ✨

Try it for free here: lnkd.in/eQMJMvxN

#EdTech #TeachingTips #AI #Abugida #EducationInnovation #PresentationSkills
[image]
2 replies · 2 reposts · 6 likes · 298 views
Bruk@bruk_phi·
@asimovinc maybe you can tweak the feet airtime reward a bit more
1 reply · 0 reposts · 0 likes · 19 views
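The "feet airtime reward" Bruk mentions is a common reward-shaping term in legged-robot RL codebases: each foot is credited, at touchdown, for how close its swing duration came to a target. A minimal sketch of a generic formulation follows; the target value and this exact form are illustrative assumptions, not Asimov's actual reward.

```python
import numpy as np

def feet_air_time_reward(air_time_s, first_contact, target_s=0.5):
    """Credit each foot at the moment of touchdown with
    (time spent airborne - target). Short, shuffling steps earn a
    negative reward; swings approaching the target earn more.
    `first_contact` masks the feet that just landed this step, so
    credit is granted exactly once per step cycle."""
    air_time_s = np.asarray(air_time_s, dtype=float)
    first_contact = np.asarray(first_contact, dtype=bool)
    return float(np.sum((air_time_s - target_s) * first_contact))

# Two feet: left just landed after a 0.3 s swing, right is still mid-air.
r = feet_air_time_reward([0.3, 0.2], [True, False], target_s=0.5)  # -0.2
```

Tweaking `target_s` (or the weight on this term) changes how long the policy keeps its feet off the ground, which is one common way to fix a shuffling gait.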
Asimov@asimovinc·
It's time to make Asimov walk better 🧡
1 reply · 0 reposts · 11 likes · 720 views
Ning Ding@stingning·
Building upon SimpleVLA-RL, we have implemented real-world RL on long-horizon dexterous tasks and witnessed a non-trivial performance improvement (roughly 300% relative) over the SFT model, along with surprising capabilities in auto-recovery. Blog coming soon. The entire process uses very little data and training compute, basically costing no more than a single robotic arm, hinting that real-world generality for machines is actually within sight.
16 replies · 86 reposts · 603 likes · 89.7K views
Bruk@bruk_phi·
@HaoranGeng2 what would be the advantage over using human datasets instead
1 reply · 0 reposts · 0 likes · 450 views
Haoran Geng@HaoranGeng2·
This might be my "aha moment" of 2025: With our new robotics foundation model, Large Video Planner, we train a robot planner from large-scale video data. It works so well that we can use it directly for robot planning. Two moments really blew my mind: First: right after our model training, I fed in an image of my hand and my MacBook and asked it to close the laptop—when the Apple logo appeared exactly as the lid came down, I couldn’t help but feel impressed (and excited). Second demo: picking up the brush — check the 3D consistency. Even the brush shadow is remarkably accurate, and it can even infer what the Franka arm (at the corner) should look like.
18 replies · 30 reposts · 290 likes · 27.4K views
Bruk@bruk_phi·
@GoingBallistic5 the change on the elbow is just illogical; I can't justify it 😃
0 replies · 0 reposts · 0 likes · 394 views
Humanoid Scott@GoingBallistic5·
Quit laughing, there's nothing humerus to see here
[image]
7 replies · 4 reposts · 99 likes · 22.7K views
Bruk retweeted
Jim Fan@DrJimFan·
Everyone's freaking out about vibe coding. In the holiday spirit, allow me to share my anxiety on the wild west of robotics. 3 lessons I learned in 2025.

1. Hardware is ahead of software, but hardware reliability severely limits software iteration speed. We've seen exquisite engineering arts like Optimus, e-Atlas, Figure, Neo, G1, etc. Our best AI has not squeezed all the juice out of these frontier hardware. The body is more capable than what the brain can command. Yet babysitting these robots demands an entire operation team. Unlike humans, robots don't heal from bruises. Overheating, broken motors, and bizarre firmware issues haunt us daily. Mistakes are irreversible and unforgiving. My patience was the only thing that scaled.

2. Benchmarking is still an epic disaster in robotics. LLM normies thought MMLU & SWE-Bench are common sense. Hold your 🍺 for robotics. No one agrees on anything: hardware platform, task definition, scoring rubrics, simulator, or real-world setups. Everyone is SOTA, by definition, on the benchmark they define on the fly for each news announcement. Everyone cherry-picks the nicest-looking demo out of 100 retries. We gotta do better as a field in 2026 and stop treating reproducibility and scientific discipline as second-class citizens.

3. VLM-based VLA feels wrong. VLA stands for "vision-language-action" model and has been the dominant approach for robot brains. Recipe is simple: take a pretrained VLM checkpoint and graft an action module on top. But if you think about it, VLMs are hyper-optimized to hill-climb benchmarks like visual question answering. This implies two problems: (1) most parameters in VLMs are for language & knowledge, not for physics; (2) visual encoders are actively tuned to *discard* low-level details, because Q&A only requires high-level understanding. But minute details matter a lot for dexterity. There's no reason for VLA's performance to scale as VLM parameters scale. Pretraining is misaligned.

Video world model seems to be a much better pretraining objective for robot policy. I'm betting big on it.
[image]
139 replies · 257 reposts · 1.6K likes · 296.6K views
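The "recipe" Jim Fan critiques in point 3 (a pretrained VLM checkpoint with an action module grafted on top) can be sketched in a few lines. The stand-in frozen projection, the dimensions, and the zero-initialized head below are illustrative assumptions, not any particular lab's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained VLM encoder. In the real recipe this
# is a checkpoint whose features were tuned for Q&A-style understanding,
# i.e. it has already learned to discard low-level visual detail.
W_backbone = rng.standard_normal((512, 256)) * 0.02  # frozen weights

# The grafted action module: the only part trained on robot data.
W_action = np.zeros((256, 7))  # maps features to a 7-DoF action

def vla_policy(vlm_input: np.ndarray) -> np.ndarray:
    features = np.tanh(vlm_input @ W_backbone)  # frozen VLM features
    return features @ W_action                  # grafted action head

action = vla_policy(rng.standard_normal((1, 512)))  # one 7-DoF action vector
```

The objection is that everything upstream of `W_action` was optimized for a different objective, so dexterity-relevant detail is already gone before the action head ever sees it.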
Bruk@bruk_phi·
@ChenTessler seems to be better than DeepMimic
1 reply · 0 reposts · 0 likes · 66 views
Chen Tessler@ChenTessler·
Crazy we can learn to track the entire AMASS dataset in 24h. For MaskedMimic, 1.5 years ago, it took us 2 weeks to train the tracker and another MONTH(!!!) to train the generative model. Now it takes 24h for the tracker and another 24h for the generative model 🤯
Quoted tweet from Chen Tessler @ChenTessler: "20 hours -- 99.93%. PPO is all you need. 4 GPUs with 8k envs each. (Slightly better parameters than the current default in ProtoMotions, will update after verifying results are stable)"
8 replies · 23 reposts · 293 likes · 29.2K views