aksh parekh

202 posts


@aparekh02

18 | building cadenza | @stanford cs | reaching $10k MRR (1 month!)

San Francisco, CA · Joined September 2025
18 Following · 17 Followers
aksh parekh @aparekh02
I am VERY proud to announce that I got ACCEPTED TO STANFORD!!! A pivotal point in my journey through engineering, and for my startup, with many new opportunities. Despite being an international student applying for an impacted major, I made it, and I know I will make it!
2 replies · 0 reposts · 1 like · 90 views
aksh parekh @aparekh02
day 11/30 till $10k MRR while in high school. Just released my open-source Cadenza project for developers to begin tinkering with, and I'm receiving feedback! Some of it has been positive, showing a market need, but I'm also getting good suggestions for new features! Now it's about expansion!
2 replies · 0 reposts · 3 likes · 87 views
aksh parekh @aparekh02
After 1 month of work at 18, I am proud to announce the first on-edge, memory-based, developer-focused robot OS: Cadenza! Check out the repo link below! I want to share the value of an optimized, intelligent OS with everyone! (The Go1 below is running on ONLY memory in Cadenza)
1 reply · 1 repost · 2 likes · 189 views
aksh parekh @aparekh02
@fdotinc This really resonates with me, and so does Founders Inc! As a high schooler working on a developer-centered, optimized OS for robots, I am excited to make an impact, despite having only 14 followers! Please do check out my product release; I would love feedback! x.com/aparekh02/stat…
aksh parekh @aparekh02

After 1 month of work at 18, I am proud to announce the first on-edge, memory-based, developer-focused robot OS: Cadenza! Check out the repo link below! I want to share the value of an optimized, intelligent OS with everyone! (The Go1 below is running on ONLY memory in Cadenza)

0 replies · 0 reposts · 0 likes · 78 views
Founders Inc @fdotinc
The next massive company is being built by someone with:
- no audience
- no clout
- almost no one paying attention.
That’s my favorite kind of founder. I will find you & fund you!
93 replies · 14 reposts · 389 likes · 20.2K views
aksh parekh @aparekh02
github.com/aparekh02/cade… Supporting Unitree G1 and Go1 robots right now! Check it out and try deploying it onto your Go1/G1. OR develop your own action library on memory for your own robot using the action-gen!
0 replies · 0 reposts · 0 likes · 60 views
aksh parekh @aparekh02
day 10/30 till $10k MRR while in high school. CADENZA 1 IS FINISHED! I have optimized the system for the Unitree G1. Also a shift: it is not just a layer, but the on-edge, intelligent, hardware-optimized OS for robots! Open-sourcing it for the devs out there tmrw. Really excited to see this!
2 replies · 0 reposts · 5 likes · 97 views
aksh parekh @aparekh02
day 9/30 till $10k MRR while in high school. Just reached $105 in preorders this month! Great progress as I wrap up the action library for the Unitree G1's features. Looking to test this in the gym and in real life. If you are interested in this efficient RL alternative, DM me!
0 replies · 0 reposts · 1 like · 66 views
aksh parekh @aparekh02
day 8/30 till $10k MRR while in high school. Stress testing Cadenza for VLA model judgment in complex terrain. It is good sometimes, but when it gets off track and needs to turn, it ends up reversing. Flat floors: mastered! Mountain climbing: work in progress!
0 replies · 0 reposts · 2 likes · 71 views
aksh parekh @aparekh02
the future is pushing boundaries, not being scared. as long as there are guardrails and effective training on real data, there are no worries. this is the REAL SUSTAINABLE future.
stash @stash_pomichter

last week we got 1M views and 100s of death threats for giving Openclaw access to drones, humanoids, quadrupeds, and other physical hardware. Now we’re releasing EVERYTHING open-source. Dimensional gives agents access to the physical world. Join us. Repo 👇🏽👇🏽👇🏽

0 replies · 0 reposts · 0 likes · 45 views
aksh parekh @aparekh02
@r0ck3t23 I do agree with this, but not the way he frames it. I feel his focus should be more on MEMORY, not context. We process in text, but what we think with (memories of images/audio) is there too. Memory concurrency in AI processing is a more effective approach to this.
0 replies · 0 reposts · 0 likes · 33 views
Dustin @r0ck3t23
Yann LeCun just exposed the greatest illusion in the AI race.

Biggest LLM ever trained on 30 trillion words. Roughly 10 to the 14th power bytes of text. Sounds massive. A four-year-old child awake for 16,000 hours has absorbed that exact same volume of data. Through eyes alone.

LeCun: “A four-year-old has seen as much visual data as the biggest LLM trained on the entire text ever produced.”

But the child’s data is nothing like text. It’s visual, continuous, noisy, and tied to actions. Gravity pulling objects to the floor. Hands gripping edges that resist. People moving through space with intention. Cause and effect playing out in real time, thousands of times a day, before anyone explains a single word.

From this, the child builds something no language model has ever possessed. An internal world model. Intuitive physics. A felt understanding of how reality behaves before it’s ever described in language.

LLMs see disconnected text. They predict the next token. They get extraordinary at symbol patterns. Exams. Code. Legal reasoning. But underneath the fluency, there is no grounded contact with the physical world. Not even close.

LeCun: “We have LLMs that can pass the bar exam or solve equations. But we still don’t have a domestic robot that can do the chores in the house.”

That gap is not a hardware problem. It’s an architectural one. Loading a dishwasher demands spatial reasoning, an intuitive grasp of gravity, friction, fragile geometry. A four-year-old builds this by dropping objects thousands of times. By grabbing, failing, adjusting. No language required. A teenager learns to drive in 20 hours because the brain already runs a hyper-advanced physics engine. Forged from 16,000 hours of unbroken interaction with reality.

LeCun: “Basically, the methods that are employed to train LLMs do not work in the real world.”

You can’t achieve autonomy by feeding a computer more text. You have to build an architecture that actually experiences the friction of the world. That learns from gravity, not grammar. You can’t build a physical economy on an algorithm that has never experienced weight. The most articulate machine ever constructed. That has never once felt the world it’s describing.
45 replies · 40 reposts · 165 likes · 25K views
aksh parekh @aparekh02
day 6-7/30 till $10k MRR (no pun intended). After a 1-week break, CADENZA WORKS! I focused on using a LoRA layer for quick processing/retrieval of memory. I tested it on the Unitree Go1 robot, and this is ALL MEMORY (no RL done). I am excited to push into humanoids, and deploy! :)
1 reply · 0 reposts · 3 likes · 107 views
aksh parekh @aparekh02
hey guys, sorry for the delays. I have been busy talking with founders from SF accelerators, attending last week's MLOps Conference, and going to other events. I am excited to show the results I have developed over the last week, resuming my series on day 6-7 (no pun intended). it's exciting!
0 replies · 0 reposts · 1 like · 68 views
aksh parekh @aparekh02
openclaw in robots is something i am thinking of incorporating into cadenza. for dynamic, on-board AI that makes physical AI specialization more efficient, openclaw might be the solution. love this application!
Irvin (in Japan 🇯🇵) @irvinxyz

We won the SF OpenClaw Hackathon! 🏆🤖🦞 Now open-sourcing ROSClaw - connects @rosorg robots to @openclaw agents. Your AI agent can:
⊙ Discover robots/topics
⊙ Bridge from Linux or Mac mini
⊙ Connect ANYWHERE via WebRTC
⊙ Grasp/move in real world
Agents escaped the screen!

0 replies · 0 reposts · 2 likes · 108 views
aksh parekh @aparekh02
only 9 hours till my first client meeting. wish me luck as I finish the product I will be pitching to him!! 🤞🏾 founder struggles (add skipping school for this to the list)
1 reply · 0 reposts · 5 likes · 91 views
aksh parekh @aparekh02
day 5/30 till $10k MRR while in high school. I have created a new vector injection system, and with simple internet images at the root, the robot can train better. i also switched from a humanoid to a robot arm to make training easier. i think i'm almost where I need to be!
0 replies · 0 reposts · 3 likes · 84 views
aksh parekh @aparekh02
@Hartdrawss would a lightweight, dynamic specialization layer for physical AI models to replace RL be part of this list? that is what i am currently working on.
0 replies · 0 reposts · 0 likes · 35 views
Harshil Tomar @Hartdrawss
YC + a16z just released the 2026 $100M startup list!
> AI infrastructure won't be about models anymore.
> It'll be about extracting structure from chaos: documents, images, videos at enterprise scale.
The shift:
> Crypto moves from speculation to utility (networks, effects, chains)
> Voice agents replace 90% of customer support
> Enterprise AI goes from "cool demo" to "board-level ROI"
The winners? Teams building:
> Autonomous scientific labs
> Dynamic agent layers
> Multi-modal reasoning at scale
14 replies · 17 reposts · 164 likes · 10.5K views