Pavneet singh
@CoPavneet
57 posts
GitHub: https://t.co/BnPsudzTmj
Joined February 2025
20 Following · 8 Followers
Pavneet singh retweeted
Unitree (@UnitreeRobotics)
Unitree Humanoid Robot Daily Training 🥳 Have you exercised today? How about training together with a robot? Please use robots in a friendly and safe manner, and keep a safe distance.
944 replies · 1.9K reposts · 12.2K likes · 13M views
Pavneet singh retweeted
Sumanth (@Sumanth_077)
Agentic memory framework for LLMs and AI agents!

MemU is an open-source agent memory framework that lets LLMs store, organize, and reason over long-term memory using a file-system-based design. Instead of stuffing context or relying only on vector search, MemU lets agents read and reason over memory files directly. Memory is not an index; it's something the model can understand.

MemU ingests multimodal inputs, extracts structured textual memory items, and autonomously organizes them into thematic Markdown files.

How memory is structured: raw resources → memory items → memory category files. Documents, conversations, images, and audio are preserved in their original form, without deletion or modification. Facts are then extracted and organized into human-readable memory category files.

Key features:
• Dual-mode retrieval, including LLM-based (non-embedding) search for higher accuracy
• File-system-based memory where each category is a Markdown file
• Hierarchical memory layers that preserve traceability
• Native multimodal memory for text, images, audio, and video
• Lightweight and developer-friendly, with no heavy graph constraints
• Fully configurable prompts for high extensibility

Why this architecture matters: most memory systems force developers to decide what matters. MemU lets the agent decide. It learns what to remember, promotes frequently used knowledge, and reorganizes memory as usage evolves. Retrieval works top-down and falls back gracefully when needed. The result is better temporal reasoning, fewer hallucinations, and memory that actually scales across sessions.

The best part? It's 100% open source. Link to the GitHub repo in the comments!
Sumanth tweet media
59 replies · 254 reposts · 1.4K likes · 73.9K views
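The raw resources → memory items → category files flow described above can be sketched in a few lines. This is a toy illustration only; the `MemoryStore` class and its methods are my own invention, not the actual MemU API:

```python
import tempfile
from pathlib import Path
from datetime import date

class MemoryStore:
    """Toy sketch of file-system-based agent memory: extracted facts are
    appended to human-readable Markdown category files that an LLM could
    read and reason over directly (no embedding index involved)."""

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def remember(self, category, fact):
        # Each category is a plain Markdown file, e.g. preferences.md.
        path = self.root / f"{category}.md"
        if not path.exists():
            path.write_text(f"# {category.title()}\n\n")
        with path.open("a") as f:
            f.write(f"- ({date.today().isoformat()}) {fact}\n")
        return path

    def recall(self, category):
        # Retrieval here is just reading the file; a real system would
        # hand this text to the model for non-embedding search.
        path = self.root / f"{category}.md"
        return path.read_text() if path.exists() else ""

store = MemoryStore(tempfile.mkdtemp())
store.remember("preferences", "User prefers concise answers.")
print("concise" in store.recall("preferences"))  # True
```

The point of the design, as the tweet argues, is that the memory stays human- and model-readable Markdown rather than an opaque vector index.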
Pavneet singh retweeted
David Scott Patterson (@davidpattersonx)
Unlocking the AI money machine. AI will reach AGI and ASI by the end of 2026. This will unlock two massive sources of income: AGI will allow all labor to be replaced, tapping into a sixty trillion dollar per year income stream. ASI will quickly help solve all remaining technological and scientific problems, including cures for all diseases. The companies that own the patents will reap the rewards. This is what investors see. This is why they are willing to invest trillions of dollars in AI.
81 replies · 22 reposts · 240 likes · 10.7K views
Pavneet singh retweeted
Akshay 🚀 (@akshay_pachaar)
Google just dropped "Attention Is All You Need (V2)".

This paper could solve AI's biggest problem: catastrophic forgetting. When AI models learn something new, they tend to forget what they previously learned. Humans don't work this way, and now Google Research has a solution: Nested Learning.

This is a new machine-learning paradigm that treats models as a system of interconnected optimization problems running at different speeds, just like how our brain processes information.

Here's why this matters: LLMs don't learn from experience; they remain limited to what they learned during training. They can't learn or improve over time without losing previous knowledge. Nested Learning changes this by viewing the model's architecture and its training algorithm as the same thing, just different "levels" of optimization.

The paper introduces Hope, a proof-of-concept architecture that demonstrates this approach:
↳ Hope outperforms modern recurrent models on language-modeling tasks
↳ It handles long-context memory better than state-of-the-art models
↳ It achieves this through "continuum memory systems" that update at different frequencies

This is similar to how our brain manages short-term and long-term memory simultaneously. We might finally be closing the gap between AI and the human brain's ability to continually learn.

I've shared the link to the paper in the next tweet!
Akshay 🚀 tweet media
257 replies · 1K reposts · 6K likes · 511.8K views
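The "memory systems that update at different frequencies" idea above can be illustrated in miniature. This is a loose sketch of multi-timescale updates under my own simplifications, not the Hope architecture from the paper:

```python
class ContinuumMemory:
    """Toy sketch of multi-frequency memory levels: fast levels track
    recent signals, slow levels integrate rarely, so old knowledge
    changes gradually instead of being overwritten."""

    def __init__(self, periods=(1, 4, 16)):
        # One scalar "memory" per level; level i updates every periods[i] steps.
        self.periods = periods
        self.levels = [0.0 for _ in periods]
        self.step = 0

    def update(self, signal):
        self.step += 1
        for i, period in enumerate(self.periods):
            if self.step % period == 0:
                # Exponential moving average: a crude stand-in for a
                # level-specific optimization step.
                self.levels[i] = 0.9 * self.levels[i] + 0.1 * signal
        return self.levels

mem = ContinuumMemory()
for _ in range(16):
    mem.update(1.0)
# After 16 steps the slowest level (period 16) has updated exactly once.
```

The analogy to the tweet: the fast level plays the role of short-term memory, while the slow levels protect long-term knowledge from being clobbered by every new example.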
Pavneet singh retweeted
Humanoids daily (@humanoidsdaily)
Meta's Chief AI Scientist Yann LeCun offers a critical take on the humanoid robot boom. Speaking at MIT, LeCun claimed the "big secret" of the industry is that current companies "have no idea" how to make their robots "smart enough to be generally useful." He argues that while humanoids can be trained for narrow manufacturing tasks, a truly autonomous domestic robot is impossible without fundamental AI breakthroughs. For LeCun, this means moving beyond current generative models and toward "world model planning-type architectures"—systems that can learn to understand and predict the physical world. The future of these billion-dollar startups, he says, depends entirely on this next wave of AI research.
218 replies · 256 reposts · 1.8K likes · 1.4M views
Pavneet singh (@CoPavneet)
✅ Completed the Model Context Protocol (MCP) course by @_alejandroao ! Built a few MCP servers in VS Code using free APIs like news & docs. Loving how powerful this is for AI integrations. ⚡ #MCP #AI #VSCode
Pavneet singh tweet media
0 replies · 0 reposts · 2 likes · 86 views
Pavneet singh (@CoPavneet)
Going offline for mid-sems. The funny part? After a one-week gap, I know I’ll forget everything I studied. Classic exam cycle.
0 replies · 0 reposts · 1 like · 18 views
Pavneet singh (@CoPavneet)
Built a password generator site today 🔐 (code’s up on my GitHub). On Linux, dived into processes & signals ⚡ In DSA, cracked sliding window 🪟 A day well spent—building, breaking, and learning 🚀
Pavneet singh tweet media
0 replies · 0 reposts · 1 like · 34 views
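The sliding-window technique mentioned above, shown on its classic warm-up problem (maximum sum of any k consecutive elements); the function name and example are my own:

```python
def max_window_sum(nums, k):
    """Maximum sum of any k consecutive elements in O(n) time.
    Instead of re-summing each window, slide it: add the entering
    element and subtract the leaving one."""
    if k <= 0 or k > len(nums):
        raise ValueError("k must be between 1 and len(nums)")
    window = sum(nums[:k])   # sum of the first window
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]  # slide right by one position
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9 (window 5 + 1 + 3)
```

The key insight of the pattern is that each element enters and leaves the window exactly once, turning an O(n·k) brute force into a single O(n) pass.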
Pavneet singh (@CoPavneet)
Wrapped up SSoC Season 4 🎉 Grateful for the learning, the late-night debugging, and the community that made it worth it. On to building bigger things 🚀💻 #SSoC #OpenSource #KeepBuilding
Pavneet singh tweet media
0 replies · 0 reposts · 2 likes · 62 views
Pavneet singh (@CoPavneet)
Just got accepted into the McKinsey Forward learning program. Excited to dive in, learn, and grow step by step. 🚀 Grateful for opportunities that push me to think bigger. #TakeAStepForward
Pavneet singh tweet media
0 replies · 0 reposts · 3 likes · 181 views
Pavneet singh retweeted
Jackson Atkins (@JacksonAtkinsX)
Meta just made training AI agents 25x faster. This is a breakthrough for robotics and complex planning.

Meta's FAIR open-sourced a new method called Scalable Option Learning (SOL). It trains a specialized agent at a scale previously seen only with LLMs.

Here's how it works: the reason this type of AI (agents trained with Hierarchical Reinforcement Learning) has been slow to train is a parallelization bottleneck. Imagine an AI team with a planner and many specialist workers (the sub-tasks). Older methods struggled because they had to process each planner decision one by one before training the workers.

SOL solves this with a new system design:
• A single, unified brain: instead of separate models, it uses a single actor-critic network to house the planner (controller policy) and all the workers (option policies).
• A digital "switch": it tells this unified brain which role to play at any given moment using a one-hot vector, a flag that says, "for this input, act as the 'navigation' worker." This allows thousands of decisions for different policies to be batched and sent to the GPU at once.
• A smart "filter" for learning: after the actions are taken, it uses a technique called tensorized masking. Think of this as a smart filter that ensures the right performance feedback (the rewards and advantages) goes to the correct worker policy. This is what breaks the one-at-a-time update problem.

This architecture allows the entire hierarchical system to learn in parallel batches and removes the bottlenecks that held the field back.

Why this matters: this new training method changes the viability of building agents that can reason and execute long-horizon tasks.
- Business leaders: this architecture is a key to developing sophisticated autonomous systems. A 25x faster training cycle accelerates R&D in robotics, logistics, and multi-stage process automation, making complex, strategic AI commercially achievable.
- Practitioners: the authors plan to open-source SOL. You can implement agents that learn long-horizon skills without the performance penalty of older HRL methods, creating a path to more structured and potentially more robust models.
- Researchers: this paper presents a validated solution to the HRL scaling problem (Section 3.2). The system for enabling high-throughput, asynchronous updates for a hierarchical agent is a major contribution that opens the door for large-scale experiments in temporal abstraction and credit assignment.
Jackson Atkins tweet media
18 replies · 78 reposts · 447 likes · 58.4K views
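The one-hot "switch" and masked-feedback ideas from the tweet can be illustrated in miniature. This is a pure-Python sketch under my own simplifications (a fake forward pass, scalar observations), not Meta's SOL implementation:

```python
def one_hot(index, n):
    """Role flag telling the shared network which policy to act as."""
    return [1.0 if i == index else 0.0 for i in range(n)]

def batched_policy(observations, roles, n_roles):
    # A single "unified brain": one function conditioned on a one-hot role
    # vector, so controller and option decisions can batch together.
    batch = [obs + one_hot(role, n_roles) for obs, role in zip(observations, roles)]
    # Stand-in for a network forward pass: sum of inputs as a fake action score.
    return [sum(x) for x in batch]

def masked_update(advantages, roles, target_role):
    # "Tensorized masking": route each advantage only to the policy that
    # actually produced the action, zeroing out everything else, so all
    # policies can be updated from one batch instead of one at a time.
    return [adv if role == target_role else 0.0
            for adv, role in zip(advantages, roles)]

scores = batched_policy([[0.5], [0.2]], roles=[0, 1], n_roles=2)
print(masked_update([1.0, -2.0], roles=[0, 1], target_role=0))  # [1.0, 0.0]
```

In a real system the mask would be a tensor multiplied into the per-policy loss, which is what lets the whole hierarchy update in parallel on the GPU.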
Pavneet singh (@CoPavneet)
Everybody will tell you Python is the easiest and Java is the most popular and in-demand. Most YouTubers will say you should learn Java as your first language. But here's the truth: just create a HashSet or a Map in Java, then Google how to do the same thing in Python and build it there. And next, if you do AI stuff.
0 replies · 0 reposts · 2 likes · 28 views
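For the Java-vs-Python comparison above: Java's `HashSet` and `HashMap` correspond directly to Python's built-in `set` and `dict` (Python shown here; the Java counterparts would be `new HashSet<>()` and `new HashMap<>()`):

```python
# Python's built-in set ≈ Java's HashSet
seen = set()
seen.add("apple")
seen.add("apple")          # duplicates are silently ignored
print(len(seen))           # 1

# Python's built-in dict ≈ Java's HashMap
counts = {}
for word in ["a", "b", "a"]:
    counts[word] = counts.get(word, 0) + 1
print(counts)              # {'a': 2, 'b': 1}
```

Both give average O(1) insert and lookup via hashing; the difference is mostly ceremony, which is the tweet's point.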
Pavneet singh retweeted
elvis (@omarsar0)
LiveMCP-101 This paper introduces LiveMCP-101, a novel real-time evaluation framework with a benchmark designed to stress-test agents on complex, real-world tasks. It moves beyond the mock data and synthetic environments of previous works. More notes ↓
elvis tweet media
11 replies · 37 reposts · 195 likes · 26.2K views
Pavneet singh (@CoPavneet)
Spent the whole day finishing the Linux course by @Firebase. Honestly, it’s more of an intro — you get a taste of every topic, but the real mastery has to come from your own deep dives on the internet.
Pavneet singh tweet media
1 reply · 0 reposts · 2 likes · 240 views
Pavneet singh (@CoPavneet)
@karpathy When you know it's the right time, like when I've been struggling with something for hours, then I use GPT. Sometimes I'd just use it in the first place 😅
0 replies · 0 reposts · 4 likes · 144 views
Andrej Karpathy (@karpathy)
I think congrats again to OpenAI for cooking with GPT-5 Pro. This is the third time I've struggled on something complex/gnarly for an hour on and off with CC, then 5 Pro goes off for 10 minutes and comes back with code that works out of the box. I had CC read the 5 Pro version and it wrote up 2 paragraphs admiring it (very wholesome). If you're not giving it your hardest problems you're probably missing out.
428 replies · 774 reposts · 12.7K likes · 2.6M views