keerti manney

24 posts

@keertimanney

Towards spatial intelligence

San Francisco, CA · Joined December 2011
221 Following · 25 Followers
keerti manney@keertimanney·
Took private ski lessons this season. Instructor is 7, ruthless... and my nephew. End of 2026 season: American blues now 💙⛷️
[image attached]
Replies 0 · Reposts 0 · Likes 3 · Views 13
keerti manney retweeted
Chelsea Finn@chelseabfinn·
LLM post-training used to mean fine-tuning to a downstream task. Robotics has been stuck in this setting, needing task-specific fine-tuning for best performance. π0.7 changes this: it works out of the box and outperforms fine-tuned specialists. Details: pi.website/pi07
Replies 16 · Reposts 59 · Likes 548 · Views 49.8K
keerti manney retweeted
Physical Intelligence@physical_int·
Our newest model, π0.7, has some interesting emergent capabilities: it can control a new robot to fold shirts for which we had no shirt folding data, figure out how to use an appliance with language-based coaching, and perform a wide range of dexterous tasks all in one model!
Replies 52 · Reposts 302 · Likes 2.4K · Views 418K
keerti manney retweeted
Sergey Levine@svlevine·
We finished evaluating π0.7, our new model at Physical Intelligence. What I'm most excited about with π0.7 is that it's starting to show some surprising emergent compositional generalization, being able to both perform complex tasks and learn new tasks just from instructions.
Replies 11 · Reposts 68 · Likes 744 · Views 58.5K
keerti manney@keertimanney·
It's 2026. Your openclaw agents don't just have memory and autonomy, they have ✨️ stage presence ✨️
Sunnyvale, CA 🇺🇸
Replies 0 · Reposts 1 · Likes 3 · Views 563
keerti manney retweeted
skcd@skcd42·
Grok-code-fast-1 is now out and available for everyone to use 🚀🏎️💨

When I joined the coding team, the team was just 3 people, and we very quickly built a model that was SOTA on SWEBench. But as things go, in the real world benchmarks matter less.

Over the last few months we approached the modelling + data + infra perspective from a different lens, putting developers and users first over everything else. This required us to tune the data recipe, get the infra in place to do a lot of rollouts, and create a set of grounded evals powered by both human judgement and an in-house auto-evaluation framework that captured real-world usability.

This is the first model of many in the grok coding family; we are going to make quick iterations and improve model performance over time. We thrive on your feedback, so please share your unfiltered and honest thoughts so we can keep pushing new boundaries for agentic coding.
xAI@xai

Introducing Grok Code Fast 1, a speedy and economical reasoning model that excels at agentic coding. Now available for free on GitHub Copilot, Cursor, Cline, Kilo Code, Roo Code, opencode, and Windsurf. x.ai/news/grok-code…

Replies 224 · Reposts 128 · Likes 1.6K · Views 6.6M
keerti manney retweeted
Peyman Milanfar@docmilanfar·
Congratulations to the authors of this lovely work that just won a best paper award at @siggraph #siggraph2023. They achieve state-of-the-art visual quality with real-time (≥ 100 fps) novel-view synthesis at 1080p resolution, far exceeding NeRF approaches on both quality and speed.

How? They use anisotropic Gaussian kernel interpolation/splatting. Such kernels also have a history of being very useful in computational imaging (e.g. super-resolution, noise reduction, etc.).

The literature on Radiance Field Rendering has finally come full circle: in the original NeRF formulation, the “neural” part referred to a small multi-layer perceptron (MLP) which, in turn, corresponds to an underlying interpolation kernel. This aspect hasn't been discussed much in the NeRF literature - namely, that NeRFs aren't so much a neural network as they are kernel (interpolation) machines.

So how come NeRFs haven't been this fast and high quality so far? Because to get here, NeRFs would have had to eventually stumble upon an MLP design yielding exactly the same anisotropic Gaussians that this paper has nailed down. And these Gaussians are ideal because they are both efficient to compute and can provide interpolation quality guarantees.

This paper has done a lot of things well. But perhaps most importantly, it has leapfrogged many generations of trial and error on NeRF architectures and landed on what's arguably the most effective underlying interpolator. That's what progress in science and engineering is all about. repo-sam.inria.fr/fungraph/3d-ga…
[image attached]
Replies 11 · Reposts 148 · Likes 904 · Views 125.4K
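The "NeRFs as kernel interpolation machines" point in the tweet above can be made concrete with a toy sketch. This is my own illustration under simple assumptions, not the 3D Gaussian Splatting implementation: each splat carries a mean, an anisotropic covariance, and a color, and a query point is colored by a kernel-weighted average of the splat colors.

```python
import numpy as np

def gaussian_weight(x, mean, cov):
    """Unnormalized anisotropic Gaussian kernel value at point x."""
    d = x - mean
    return np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

def interpolate(x, means, covs, colors):
    """Nadaraya-Watson style kernel interpolation of splat colors."""
    w = np.array([gaussian_weight(x, m, c) for m, c in zip(means, covs)])
    w = w / w.sum()            # normalize weights to sum to 1
    return w @ colors          # weighted blend of colors

# Two elongated (anisotropic) splats with different colors.
means = [np.array([0.0, 0.0]), np.array([2.0, 0.0])]
covs = [np.diag([1.0, 0.1]), np.diag([0.1, 1.0])]      # stretched along x, then y
colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # red, blue

# At the midpoint, the x-stretched red splat dominates because its
# kernel decays slowly along x, while the blue splat decays quickly.
mid = interpolate(np.array([1.0, 0.0]), means, covs, colors)
print(mid)
```

The anisotropy is the whole point: the same two splats with isotropic covariances would weight the midpoint symmetrically, whereas stretched kernels let a small number of primitives cover elongated structures efficiently.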
keerti manney retweeted
Guy Parsons@GuyP·
OK so @OpenAI's new #ChatGPT can basically just generate #AIart prompts. I asked a one-line question, and typed the answers verbatim straight into MidJourney and boom. Times are getting weird...🤯
[4 images attached]
Replies 417 · Reposts 3.5K · Likes 20.7K
keerti manney retweeted
Nancy Wang Yuen@nancywyuen·
#Parasite demonstrates how great cinema can effect social change. “The Seoul City government will financially support 1,500 households living in semi-basement apartments...to improve their living conditions.” m.koreaherald.com/view.php?ud=20…
Replies 36 · Reposts 5.5K · Likes 12.7K
keerti manney retweeted
The French Dispatch@french_dispatch·
The French Dispatch of the Liberty, Kansas Evening Sun. A film by Wes Anderson. In Theaters July. #TheFrenchDispatch
Replies 800 · Reposts 25.7K · Likes 63.2K
keerti manney retweeted
The New Yorker@NewYorker·
“Parasite” makes history at the 2020 #Oscars.
[image attached]
Replies 15 · Reposts 1.6K · Likes 5.7K