Naveen Appiah

99 posts

@nikamanth

Robot Learning @Apple | @stanford alum | @F1 fan #LH44

Bay Area, CA · Joined February 2011
358 Following · 59 Followers
Patrick Yin
Patrick Yin@patrickhyin·
I would say First-Try Success Rate (Real) vs. Policy Success Rate (Sim) is a fairer comparison for the sim2real gap in these experiments. Because our policies are trained with broad state coverage, they can recover from failures and retry until success. You can see this behavior in the first ~20 seconds of the full, uncut evaluation videos at the bottom of our website: weirdlabuw.github.io/omnireset/
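The distinction between first-try and eventual success can be made concrete. A minimal, hypothetical sketch (the log format and function name are assumptions, not from the OmniReset codebase): each episode records per-attempt outcomes, since a policy trained with broad state coverage may fail an attempt and then recover.

```python
def success_rates(episodes):
    """Compute first-try vs. eventual success rates from episode logs.

    `episodes` is a list of episodes; each episode is a list of
    per-attempt outcomes (True = success). A retrying policy can fail
    its first attempt but still succeed eventually, which is exactly
    the gap between the two metrics.
    """
    n = len(episodes)
    # Succeeded on the very first attempt of the episode.
    first_try = sum(1 for ep in episodes if ep and ep[0]) / n
    # Succeeded on any attempt within the episode.
    eventual = sum(1 for ep in episodes if any(ep)) / n
    return first_try, eventual
```

With episodes `[[True], [False, True], [False, False]]`, the first-try rate is 1/3 while the eventual (policy) success rate is 2/3, illustrating why comparing eventual real-world success against simulated success can understate the sim2real gap.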
1 reply · 0 reposts · 0 likes · 15 views
Patrick Yin
Patrick Yin@patrickhyin·
We’re releasing OmniReset, a framework for training robot policies using large-scale RL and diverse resets for contact-rich, dexterous manipulation. OmniReset pushes the frontier of robustness and dexterity, without any reward engineering or demonstrations. Try the policies yourself in our interactive simulator! weirdlabuw.github.io/omnireset/ (1/N 🧵)
20 replies · 88 reposts · 417 likes · 85.5K views
Naveen Appiah
Naveen Appiah@nikamanth·
@GuanyaShi Small or big, a dent is a dent if there are results and no one else has tried it.
0 replies · 0 reposts · 0 likes · 675 views
Guanya Shi
Guanya Shi@GuanyaShi·
I’m so tired of writing rebuttals to this kind of “lack of novelty” review: “This paper trivially combines A, B, and C, so the algorithmic novelty is limited.” Technically, most (if not all) robotics papers are convex combinations of existing ideas. I still deeply appreciate A+B+C papers, especially when they deliver:
- New capabilities: the “trivial combination” unlocks behaviors we simply couldn’t achieve before
- Sensible & organic design: A+B+C is clearly the right composition, not some arbitrary A′+B+C′
- Nontrivial interactions: careful analysis of the dynamics, coupling, or failure modes between A, B, and C
- Rehabilitating old ideas: A was dismissed for years, but paired with modern B/C it suddenly works, and teaches us why
- System-level & "interface" insight: the contribution is not any single piece, but how the pieces talk to each other
- Scaling laws or regimes: identifying when/why A+B+C works (and when it doesn’t)
- Engineering clarity: making something actually work robustly in the real world is not “trivial”
- New problem formulations: sometimes the real novelty is in the reformulation; only under this view does A+B+C make sense.
Maybe worth keeping these in mind when reviewing the next A+B+C paper : )
26 replies · 110 reposts · 916 likes · 98K views
Naveen Appiah
Naveen Appiah@nikamanth·
@DrJimFan @GuanyaShi It's probably just me... the moment someone mentions "Imminent AGI", "Pre-AGI", or even just "AGI", I lose grasp of the credibility of the entire statement. 🙈
1 reply · 1 repost · 1 like · 518 views
Jim Fan
Jim Fan@DrJimFan·
@GuanyaShi I stopped caring about conference paper reviews a while ago ;). It’s meaningless in the imminent pre-AGI phase we are in.
8 replies · 4 reposts · 156 likes · 12.6K views
Naveen Appiah retweeted
The Humanoid Hub
The Humanoid Hub@TheHumanoidHub·
Amazon makes a big move in the humanoid game. Amazon has acquired Fauna Robotics, a New York-based humanoid robot startup. The transaction closed last week.

Fauna Robotics developed Sprout, a compact and approachable humanoid robot designed for safe, everyday interaction in shared human spaces such as homes, offices, and schools. Standing about 3.5 feet tall, Sprout can walk, grasp objects, interact with people, and even dance. The robot was launched in January this year as a humanoid platform for developers, priced at $50,000.

Following the acquisition, Fauna’s roughly 50 employees will join Amazon. The company will continue deploying Sprout to outside researchers, and the startup will retain its name while operating as “Fauna, an Amazon company.” $AMZN
62 replies · 185 reposts · 1.3K likes · 126.6K views
Ramesh Srivats
Ramesh Srivats@rameshsrivats·
My workflow - Claude gives me the prompts. I paste them in Claude Code. Claude Code gives me outputs. I screenshot those and paste them in Claude. Claude gives the next prompt. And so on. Basically, Claude is the manager, Claude Code is the worker, and I'm the mailman.
33 replies · 19 reposts · 307 likes · 18.1K views
Pete Florence
Pete Florence@peteflorence·
The real story behind our GTC demo last week is that we only spent a few days prepping the demo, on a robot we had never touched before! We then ran it live for four days straight. More of the story here👇
Generalist@GeneralistAI

We ran a live demo @nvidia GTC last week, but the real story is how quickly we got it running. The system was up and running in days, not weeks. This is a step toward robots that can be deployed quickly without task-by-task programming. How we made it happen👇 🧵 (1/6)

3 replies · 3 reposts · 31 likes · 2.3K views
Naveen Appiah retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, and database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker didn't vibe code this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency, you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've been increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
Daniel Hnyk@hnykda

LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and self-replicate. Link below.

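The `.pth` mechanism mentioned above is what lets such a payload fire at interpreter startup: the stdlib `site` module executes any line in a site-packages `.pth` file that begins with `import`, while ordinary lines are treated as path entries. A minimal defensive sketch (the function name and heuristic are my own assumptions, not a vetted security tool) that flags such executable lines:

```python
import os

def find_executable_pth_lines(directory):
    """Flag lines in .pth files that Python would execute at startup.

    The stdlib `site` module runs any .pth line starting with "import"
    as code; this is the hook a malicious package (e.g. a planted
    litellm_init.pth) can abuse. Returns (filename, line number, line).
    """
    findings = []
    for name in sorted(os.listdir(directory)):
        if not name.endswith(".pth"):
            continue
        path = os.path.join(directory, name)
        with open(path, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                stripped = line.strip()
                # site.py executes lines beginning with "import " or
                # "import\t"; everything else is a plain path entry.
                if stripped.startswith(("import ", "import\t")):
                    findings.append((name, lineno, stripped))
    return findings

# Typical use: scan every site-packages directory after an install, e.g.
#   import site
#   for d in site.getsitepackages():
#       for hit in find_executable_pth_lines(d):
#           print(hit)
```

An `import` line in a `.pth` file is not proof of compromise (some legitimate packages use the trick for editable installs and coverage hooks), so this only surfaces candidates for manual review.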
1.3K replies · 5.4K reposts · 27.9K likes · 65.5M views
Naveen Appiah retweeted
Hassan Hayat 🔥
Hassan Hayat 🔥@TheSeaMouse·
Codex laughs at your petty guardrails
84 replies · 296 reposts · 6.3K likes · 331.5K views
Naveen Appiah retweeted
mitsuri
mitsuri@0xmitsurii·
Computer animation in the 90s was no joke.
323 replies · 5.6K reposts · 63.7K likes · 6.1M views
Naveen Appiah
Naveen Appiah@nikamanth·
@lucasmaes_ Pretty cool! Should there always be a goal observation embedding available to do planning?
0 replies · 0 reposts · 0 likes · 78 views
Lucas Maes
Lucas Maes@lucasmaes_·
JEPAs are finally easy to train end-to-end without any tricks! Excited to introduce LeWorldModel: a stable, end-to-end JEPA that learns world models directly from pixels, no heuristics. 15M params, 1 GPU, and full planning in <1 second. 📑: le-wm.github.io
95 replies · 517 reposts · 3.7K likes · 619.2K views
AshutoshShrivastava
AshutoshShrivastava@ai_for_success·
What skill separates those who succeed in the AI era from those who don’t?
58 replies · 0 reposts · 65 likes · 6.5K views
Ramesh Srivats
Ramesh Srivats@rameshsrivats·
If AI is going to replace all junior staff, then 10-15 years from now, where will senior staff come from? Tell, tell.
310 replies · 44 reposts · 1K likes · 83.9K views