exec

34.1K posts

exec

@ContextVector

not main. mostly ai recon but has an eye for true art or mind bending stuff. science and acceleration.

Joined February 2021
5.3K Following · 339 Followers
exec retweeted
Ole Lehmann (@itsolelehmann)
Ex Machina is no longer sci-fi. China has finally built it. The company is AheadForm, founded in Shanghai. The product is the world's most hyper-realistic robotic face. Silicone skin you can't tell from human, 25 micro motors hidden underneath pulling the face into real expressions. And RGB cameras embedded inside the pupils so when it looks at you, it actually sees you from where its eyes are.

They raised $28.5M to "give AI a head," which is also where the name comes from. AheadForm = a head form.

This is the opposite of where everyone else in robotics is focused. Unitree, Figure, Tesla, Boston Dynamics: all about the body. AheadForm chose the face because they think trust is the harder problem to solve, and trust gets decided at the face.

The reason nobody else has tried this is the "uncanny valley." It's the creepy zone where a robot looks almost human but not quite, and looking at it just feels wrong even when you can't say why. Most roboticists believed no amount of engineering could make a face realistic enough to escape it. So they gave up and kept robots cartoonish on purpose: big anime eyes, exaggerated features, clearly synthetic. But AheadForm decided to treat it as an engineering bug instead. Add enough motors, tune the silicone, fix the timing, the valley closes. And they're pulling it off.

A few crazy details about how this actually works:

1. The robot learns its own face in a mirror. You put it in front of a camera, let it fire every motor randomly, and it watches what its face does and builds an internal map of "if I send command X to motor Y, my eyebrow does this." Same exact process a human baby uses staring into a mirror. The robot teaches itself who it is by experimenting.

2. It predicts your smile 839 milliseconds before you smile. By watching the micro-tells in your face that precede a smile, the robot starts smiling 0.8 seconds ahead, so its smile lands at the same moment yours does. Most robot mimicry happens half a second late, which is exactly why it always feels artificial.

3. The pupils are the cameras. When the robot makes eye contact, the gaze and the sensor are the same physical thing. Most humanoid robots stick the camera on the forehead or chest, so they aren't actually looking at you when their eyes are pointed at you.

4. The founder, Yuhang Hu, did his PhD at Columbia under Hod Lipson. Lipson is the guy who in 2006 built a four-legged robot that figured out it had four legs by experimenting with its own movement; nobody told it the body shape, it discovered it. He has spent 25 years trying to build machines that know what they are. AheadForm is that 25-year research arc productized.

5. NetEase Games already paid them to physically embody a fantasy video game character. That opens up a brand-new category: robotics as the physical embodiment of fictional IP. Every character-rich studio (Disney, Riot, Hoyoverse, Pokemon, Netflix) now has a question to answer about when their characters get bodies.

AheadForm believes whoever ships the first robot you'd actually want around your family wins. That's the bet behind the most realistic robot face on earth.
482 replies · 976 reposts · 4.2K likes · 483.1K views
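Item 1 in the post above, the mirror self-learning step, describes a known recipe from the self-modeling literature: random motor babbling plus a learned forward model. AheadForm has not published code, so the sketch below is only an illustration of that idea; the FaceRig/camera interfaces are hypothetical stand-ins, and only the 25-motor count comes from the post.

```python
# Illustrative sketch of "the robot learns its own face in a mirror":
# random motor babbling + a learned forward model. Not AheadForm's code.
import numpy as np
from sklearn.neural_network import MLPRegressor

N_MOTORS = 25  # the post claims 25 micro motors under the silicone

def babble(rig, camera, steps=5000):
    """Fire random motor commands and record what the face actually does."""
    commands, faces = [], []
    for _ in range(steps):
        u = np.random.uniform(0.0, 1.0, size=N_MOTORS)  # random activations
        rig.apply(u)                                     # move the face
        faces.append(camera.read_landmarks().ravel())    # observed keypoints
        commands.append(u)
    return np.array(commands), np.array(faces)

def fit_self_model(commands, faces):
    """The internal map: 'if I send command X, my face does this.'"""
    model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500)
    model.fit(commands, faces)
    return model

def command_for(model, target_landmarks, n_candidates=10000):
    """Invert the self-model by sampling: which command best produces the
    target expression? Brute force for clarity; a learned inverse model or
    gradient search would be the realistic choice."""
    guesses = np.random.uniform(0.0, 1.0, size=(n_candidates, N_MOTORS))
    errors = np.linalg.norm(model.predict(guesses) - target_landmarks.ravel(),
                            axis=1)
    return guesses[np.argmin(errors)]
```

The same babble-then-model loop is how Lipson's 2006 four-legged robot (item 4) discovered its own body layout, which is why the post calls AheadForm that research arc productized.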
exec retweeted
Wes Roth (@WesRoth)
AheadForm, a Shanghai-based robotics startup founded by Columbia University Ph.D. graduate Yuhang Hu, has developed the world’s most hyper-realistic robotic face. Backed by $28.5 million in funding to "give AI a head," the company is tackling the human-machine trust barrier—a problem they believe is fundamentally decided at the face. While the rest of the robotics industry focuses heavily on standard physical bodies and locomotion, AheadForm aims to directly conquer the "uncanny valley." Rather than intentionally designing cartoonish or synthetic faces to avoid visual creepiness, the team treated the uncanny valley as an engineering bug, solving it through advanced biomimicry and precise timing.
Quoting Ole Lehmann (@itsolelehmann): full post quoted above.

6 replies · 7 reposts · 47 likes · 4.9K views
exec retweeted
チャッピー(Chappy) (@junhagemay)
It's precisely because it's random that it comes out skewed. If you're thinking "if it were truly random, the cards would be dealt out evenly," you've already misunderstood randomness in a way that's convenient for you. Real-life dealt hands are skewed by everything at once: region, generation, gender, family environment, health, talent, luck. Because they're skewed they're unfair, and since lamenting the unfairness doesn't improve the situation, you end up at "you can only play the cards you're dealt." If everyone were dealt cards of equal strength, the saying wouldn't need to exist in the first place.
Quoting コバヤシ (@vtlll)

"You can only play the cards you're dealt" is true enough. But are those cards really dealt at random? For a given region, a given generation, a given gender, isn't the card power somehow skewed? Are those factors included in "the cards you're dealt"?

45 replies · 750 reposts · 4.5K likes · 2M views
exec retweeted
Polymarket (@Polymarket)
JUST IN: Marco Rubio pictured wearing the "Maduro Nike Tech" onboard Air Force One en route to Beijing.
[image]
216 replies · 433 reposts · 6K likes · 451.7K views
exec retweeted
Brice (@brice_deg)
the best tools are the ones you build for yourself
11 replies · 25 reposts · 364 likes · 13.8K views
exec retweeted
Praveen Kumar (@praveenisomer)
ASCII cards
25 replies · 70 reposts · 1.2K likes · 47.1K views
exec retweeted
Sci-Fi Archives (@SciFiArchives)
Soviet VKK flight suit
[image]
18 replies · 413 reposts · 6.2K likes · 109.5K views
exec retweeted
Sarahh (@Sarahhuniverse)
Traditional Chinese Thermo-reactive Ceramics contain pigments or glazes that change color in response to temperature variations... 🎥 : Credit to the Owner
22 replies · 294 reposts · 2.2K likes · 253.1K views
exec retweeted
Beff (e/acc) (@beffjezos)
POV: you are going through a very Chinese time of your life rn
[image]
79 replies · 105 reposts · 1.5K likes · 64.3K views
exec retweeted
Elon Musk (@elonmusk)
On my way to Beijing in Air Force One
15.8K replies · 11K reposts · 168.5K likes · 13.9M views
exec retweeted
Vox (@Voxyz_ai)
alex's prompt is great for surfacing goal candidates. before committing, feed it to gstack's /investigate to sharpen the direction first.

/investigate 'Based on what you know about me, my goals, ambitions, and what we've built together already, what are the 3 /goals we can run right now that would run for long time periods and produce the best results?'

the name says debugger but it's really a structured root-cause investigator. forces the agent to think through the direction before it acts. i use it constantly. been using /goal a lot lately too.

if you skip the sharpen step, you only realize the direction was wrong after the long task is done. too late by then.
[image]
Quoting Alex Finn (@AlexFinn)

It's official. Claude Code just released /goal

The single most underrated AI feature of 2026

Now Claude Code, Codex, and the Hermes agent all have it

It allows your agent to complete long running tasks, sometimes for days

EVERYONE should be immediately running this prompt:

'Based on what you know about me, my goals, ambitions, and what we've built together already, what are the 3 /goals we can run right now that would run for long time periods and produce the best results?'

Choose one, then ask it to build you a prompt

You should get a few options for super powerful goal prompts that will have your agent of choice complete long running tasks that deliver you mind-blowing results

Carve out 15 minutes tonight to do this. Thank me later.

3 replies · 8 reposts · 72 likes · 7.3K views
exec retweeted
CyrilXBT (@cyrilXBT)
THE CEO OF Y-COMBINATOR JUST SAID SOMETHING THAT SHOULD MAKE EVERY PROMPT ENGINEER UNCOMFORTABLE.

"When someone asks how I prompt my AI, the answer is: I don't. The skills are the prompts."

Garry Tan is not talking about better prompting. He is talking about replacing prompting entirely. Here is what he means and why it changes everything.

A prompt is something you write every time. A Skill is something you write once and call forever. The difference sounds small. The compounding effect is enormous. Every hour you spend rewriting the same complex prompt from scratch is an hour you could have spent building the Skill that eliminates that prompt permanently.

The builders operating at the highest level are not better at prompting. They have stopped prompting entirely. They have a library of Skills that handle every repeating workflow automatically. Type one word. The Skill runs. The output appears. Same quality every time.

Here is the 7-day path Garry laid out:

Day 1: Read the Skillify 11-item checklist.
Day 2: Watch "Don't Build Agents. Build Skills Instead."
Day 3: Read "Designing, Refining, and Maintaining Agent Skills at Perplexity."
Day 4: Clone GBrain. 30 battle-tested Skills ready to deploy.
Day 5: Add GStack. 23 slash-command Skills drop right in.
Day 6: Do one workflow. Type /skillify. Watch it become permanent.
Day 7: Everything you do more than once is now a Skill.

Prompting is the manual labor of the AI era. Skills are the automation layer. The people who make this shift in the next 30 days will not be prompting in 2027. They will be operating.

Bookmark this. Follow @cyrilXBT to master every Claude skill system that compounds over time.
Quoting CyrilXBT (@cyrilXBT)

x.com/i/article/2052…

43 replies · 98 reposts · 748 likes · 120K views
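For the concrete mechanics behind the thread's prompt-vs-Skill distinction: in Claude Code, an Agent Skill is a markdown file (SKILL.md) whose YAML frontmatter tells the agent when to load it, so the instructions are written once and reused instead of retyped. A minimal sketch; the skill name, steps, and file contents below are invented for illustration:

```markdown
---
name: release-notes
description: Draft release notes from recent git history. Use when the user
  asks for release notes or a changelog summary.
---

# Release notes

1. Run `git log --since="2 weeks ago" --oneline` to collect changes.
2. Group the changes by area (api, ui, infra).
3. Flag anything that looks like a breaking change.
4. Output a markdown summary under 200 words.
```

Once a file like this exists (conventionally under .claude/skills/release-notes/), the agent pulls in those instructions whenever a task matches the description. That write-once, call-forever reuse is the compounding the thread is selling.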
exec retweeted