Takeshmode

5K posts

@dreadedexplorer

🇯🇵🇰🇪 AI, VR/AR, robotics, and marine biology enthusiast. Techno-optimist and realist.

Japan · Joined August 2015
368 Following · 91 Followers
Takeshmode@dreadedexplorer·
@Tamaramonkey @cfryant People shouldn't have been believing everything they see or hear on the internet in the first place, even before 2020.
Tamara Kane@Tamaramonkey·
Isn't it sad how people now have to question everything they see since we are all being scammed by AI chop shops that steal art and sell it for parts? The only future with AI is people not knowing where their art came from. It's the erasure of artists. Humans not knowing aren't the ones to mock. It's the ones who know and are choosing to erase artists anyway. F*ck AGI.
Takeshmode@dreadedexplorer·
@AlexFinn I don't have the money to pay for compute, or else I would have gotten OpenClaw.
Alex Finn@AlexFinn·
Look at this image. THIS is why AI is the greatest opportunity in human history right now. It's the most powerful technology ever created, yet about .01% of people are using it properly. In a decade, 75% of this chart will be red. Until then you have the greatest arbitrage opportunity to use this tech to build INCREDIBLE businesses. Everyone using OpenClaw, Claude Code, and Codex to build and automate their lives has an OBSCENE advantage right now. If you're not, I'd take these steps immediately:
1. Install OpenClaw/Hermes
2. Brain dump everything about your life and career
3. Ask it what tasks it can automate
4. Ask it what businesses you can build
5. Download Claude Code or Codex depending on what subs you have
6. Take the plan from OpenClaw and put it into your builder of choice
7. Build the business
8. Talk about it on social media
Do these steps and you're literally in the top .01%
Alex Finn tweet media
Takeshmode reposted
Space and Technology@spaceandtech_·
MIT researchers have developed new artificial muscles called Electrofluidic Fiber Muscles for robots and wearable devices. These flexible muscles can be woven into fabric and work silently without bulky equipment. The system is lightweight, portable, and uses tiny fiber pumps smaller than 2 millimeters to generate powerful movement directly from electricity.
Takeshmode@dreadedexplorer·
@PurzBeats If you generated an AI image then copied it by hand is it still AI art?
Purz.ai@PurzBeats·
If you took a picture and changed one pixel with AI is that AI art or a photo? If you took a piece entirely generated by AI and change one pixel by hand is it still considered AI art? Where do you draw the line?
Takeshmode@dreadedexplorer·
@elder_plinius That it won't make life better and free humanity from the rat race before I die lol
Takeshmode reposted
𒐪@SHL0MS·
i just generated an image in the style of a Monet painting using AI please describe, in as much detail as possible, what makes this inferior to a real Monet painting
𒐪 tweet media
Takeshmode reposted
CyberRobo@CyberRobooo·
Interestingly, Xynova's technological approach shares the same origins as the dexterous hand technology used in Optimus v3 (though Elon has noted that this design still needs further refinement). The Flex2 is an upgraded version built on the Flex1: v1 featured 25 DOF and used a cable-driven system; v2 introduces direct drive, which reduces the DOF to 23 but also sheds 400 g in weight. It seems a hybrid drive mechanism may be the more practical solution.

In March this year, following the successful completion of its Series A funding round (with investors including Xiaomi and others), this robotics company, founded in 2024, began construction of a large-scale production facility. Spanning over 5,000 square meters, the base is designed to achieve an annual output of 200,000 miniature electric cylinders and 10,000 dexterous hands.

However, hardware alone is far from enough. A truly capable dexterous hand must be the result of the co-evolution of data, models, and the physical hardware. In other words, in addition to mass production, Xynova is simultaneously developing a complete integrated system that combines perception capabilities, robotic manipulation intelligence, and hand-specific coordination. This is essentially a foundational robotic module.

Yet its applications go far beyond that. It can be directly adapted to industrial robotic arms on production lines, as well as integrated into the bodies of humanoid robots. That said, what I'm most eager to see is its use in advanced bionic prosthetics for humans. If it can successfully demonstrate this expanded capability, its impact will reach well beyond the realm of humanoid robots. (Cyborg)
CyberRobo@CyberRobooo

Pretty hand 👋 just like a real human hand. Hangzhou Xynova unveils the Flex 2 Hybrid-Drive Dexterous Hand (cable-driven + direct-driven). It features tactile perception, and the camera is positioned at the wrist joint rather than in the center of the palm. They will showcase the physical hand at ICRA 2026 in Vienna. (Humans have continuously replicated the hand that evolved two million years ago, while attempting to coordinate hand and brain.)

Rand@rand_longevity·
my top Longevity foods:
- milk
- coffee
- eggs
- blueberries
- greek yogurt
what would you add?
cinesthetic.@TheCinesthetic·
Technology keeps evolving, but Fantasia 2000 still looks ahead of its time.
Bandy@BandyElm13·
@kimmonismus Humanoid robots will be ready when I can tell one to go to the store, buy x, y, and z, come back, make food, fully clean up, and tidy up the rest of the house. All in a reasonable amount of time and without it falling on its face and breaking one of its cameras.
David Scott Patterson@davidpattersonx·
AI and robots will replace all jobs by 2030. I have been saying this for a couple of years. Most people thought I was crazy. Have recent developments changed your view?
Pedro Domingos@pmddomingos·
Kiss your freedom goodbye if China wins the AI race.
Haiku@H8KUcom·
@FearAndMadnUS @pmddomingos Well, you would simply die if China gets to the singularity first; they would ruthlessly take over the world. You can say the US is bad, but you do not understand Chinese morality and their hunger for global power.
Takeshmode reposted
Figure@Figure_robot·
Watch a team of humanoid robots running a full 8-hr shift at human performance levels. This is fully autonomous, running Helix-02. x.com/i/broadcasts/1…
Takeshmode reposted
Chubby♨️@kimmonismus·
Let's go, automated researcher incoming: Japan's Institute of Science Tokyo has opened a human-free robotics lab where 10 machines, including the humanoid Maholo LabDroid, run medical experiments such as reagent handling and cell cultivation. The bigger bet is far more ambitious: scaling to 2,000 research robots by 2040, with AI helping automate everything from hypothesis generation to experimental verification.
Chubby♨️ tweet media
Takeshmode reposted
Nathie@NathieVR·
This guy created a VR experience that literally lets you tear down reality.
Takeshmode reposted
Wes Roth@WesRoth·
AheadForm, a Shanghai-based robotics startup founded by Columbia University Ph.D. graduate Yuhang Hu, has developed the world’s most hyper-realistic robotic face. Backed by $28.5 million in funding to "give AI a head," the company is tackling the human-machine trust barrier—a problem they believe is fundamentally decided at the face. While the rest of the robotics industry focuses heavily on standard physical bodies and locomotion, AheadForm aims to directly conquer the "uncanny valley." Rather than intentionally designing cartoonish or synthetic faces to avoid visual creepiness, the team treated the uncanny valley as an engineering bug, solving it through advanced biomimicry and precise timing.
Ole Lehmann@itsolelehmann

Ex Machina is no longer sci-fi. China has finally built it.

The company is AheadForm, founded in Shanghai. The product is the world's most hyper-realistic robotic face. Silicone skin you can't tell from human, 25 micro motors hidden underneath pulling the face into real expressions. And RGB cameras embedded inside the pupils, so when it looks at you, it actually sees you from where its eyes are.

They raised $28.5M to "give AI a head," which is also where the name comes from. AheadForm = a head form.

This is the opposite of where everyone else in robotics is focused. Unitree, Figure, Tesla, Boston Dynamics: all about the body. AheadForm chose the face because they think trust is the harder problem to solve, and trust gets decided at the face.

The reason nobody else has tried this is the "uncanny valley." It's the creepy zone where a robot looks almost human but not quite, and looking at it just feels wrong even when you can't say why. Most roboticists believed no amount of engineering could make a face realistic enough to escape it. So they gave up and kept robots cartoonish on purpose: big anime eyes, exaggerated features, clearly synthetic. But AheadForm decided to treat it as an engineering bug instead. Add enough motors, tune the silicone, fix the timing, and the valley closes. And they're pulling it off.

A few crazy details about how this actually works:

1. The robot learns its own face in a mirror. You put it in front of a camera, let it fire every motor randomly, and it watches what its face does and builds an internal map of "if I send command X to motor Y, my eyebrow does this." Same exact process a human baby uses staring into a mirror. The robot teaches itself who it is by experimenting.

2. It predicts your smile 839 milliseconds before you smile. By watching the micro-tells in your face that precede a smile, the robot starts smiling about 0.8 seconds ahead, so its smile lands at the same moment yours does. Most robot mimicry happens half a second late, which is exactly why it always feels artificial.

3. The pupils are the cameras. When the robot makes eye contact, the gaze and the sensor are the same physical thing. Most humanoid robots stick the camera on the forehead or chest, so they aren't actually looking at you when their eyes are pointed at you.

4. The founder, Yuhang Hu, did his PhD at Columbia under Hod Lipson. Lipson is the guy who in 2006 built a four-legged robot that figured out it had four legs by experimenting with its own movement; nobody told it the body shape, it discovered it. He has spent 25 years trying to build machines that know what they are. AheadForm is that 25-year research arc productized.

5. NetEase Games already paid them to physically embody a fantasy video game character. That opens up a brand-new category: robotics as the physical embodiment of fictional IP. Every character-rich studio, Disney, Riot, Hoyoverse, Pokemon, Netflix, now has a question to answer about when their characters get bodies.

AheadForm believes whoever ships the first robot you'd actually want around your family wins. That's the bet behind the most realistic robot face on earth.
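The self-modeling loop described in point 1, motor babbling in front of a camera, can be sketched in a few lines. Everything below is illustrative: the simulated linear face, the motor and landmark counts, and all function names are assumptions for the sketch, not AheadForm's actual system.

```python
import random

# Toy stand-in for the physical face: each motor command linearly
# displaces a set of facial landmarks (hidden ground truth the robot
# does not know and must discover by experimenting).
NUM_MOTORS = 5
NUM_LANDMARKS = 4
TRUE_EFFECT = [[random.uniform(-1, 1) for _ in range(NUM_LANDMARKS)]
               for _ in range(NUM_MOTORS)]

def observe_face(commands):
    """Simulated 'mirror': the camera reports landmark positions
    produced by a vector of motor commands."""
    return [sum(commands[m] * TRUE_EFFECT[m][k] for m in range(NUM_MOTORS))
            for k in range(NUM_LANDMARKS)]

def babble_and_learn(trials=500, step=0.1):
    """Motor babbling: perturb one random motor at a time, watch what
    the face does, and accumulate a command -> displacement map."""
    learned = [[0.0] * NUM_LANDMARKS for _ in range(NUM_MOTORS)]
    counts = [0] * NUM_MOTORS
    for _ in range(trials):
        m = random.randrange(NUM_MOTORS)
        commands = [0.0] * NUM_MOTORS
        before = observe_face(commands)   # resting face
        commands[m] = step
        after = observe_face(commands)    # face with motor m fired
        # Per-unit displacement caused by motor m on each landmark.
        for k in range(NUM_LANDMARKS):
            learned[m][k] += (after[k] - before[k]) / step
        counts[m] += 1
    # Average the observations into the robot's internal self-model.
    return [[learned[m][k] / max(counts[m], 1) for k in range(NUM_LANDMARKS)]
            for m in range(NUM_MOTORS)]

model = babble_and_learn()
```

In a real system the "mirror" would be a camera plus a facial-landmark tracker and the command-to-expression map would be nonlinear, but the loop is the same: issue a command, observe the result, update the self-model.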
