Prabhas

84 posts

Prabhas

@prabhas

Cool

New Jersey, USA · Joined April 2008
794 Following · 10.3K Followers
Common Sense Investor (CSI)
Common Sense Investor (CSI)@commonsenseplay·
Remember: your favourite X account is probably BEING PAID TO PROMOTE THE STOCKS THEY HYPE. $IONQ $RGTI $IREN $BTQ $LAES $OKLO $QUBT $QBTS $JOBY $CCCX $ONDS These market intelligence companies approached me directly on X too (back when I had only 6,000 followers - imagine the payouts for the big accounts!). My mission has always been simple: cut through the hype, expose false narratives, and bring real truth to retail investors. I'll always stay fully independent. I will never accept paid stock-pumping deals - the kind that are everywhere on X - not now, not ever! Always do your own due diligence.
Common Sense Investor (CSI)@commonsenseplay

I will get a lot of hate for this. But just so people are aware, paid stock pumpers on X are more common than you think - you'd be surprised by how common it actually is. My account is relatively new and small (less than 10K), but I've already been approached a number of times on X by middlemen/companies to do research on specific companies and post about them, in exchange for fees. The middlemen deal with the companies directly, so you never come into contact with them. I refused all of them. So it is 100% happening, and some of the offers were fairly lucrative - much more so for bigger accounts. They don't tell you what to write, but they share specific research for you to post about (which is nearly always bullish). This is why I push for transparency when I see coordinated X accounts trying to push a false narrative on specific stocks. Please always do your own research; sometimes it's all a FUGAZI! $oklo $ionq $rgti $qubt $qbts $iren

English
13
5
55
37.8K
Prabhas retweeted
Carlos E. Perez
Carlos E. Perez@IntuitMachine·
MIT researchers have been digging into the "brains" of 60 different scientific AI models, and have stumbled upon something wild. It turns out, whether an AI is reading text or looking at 3D atoms, they are all starting to agree on the same hidden truth about our universe. Here is the pattern you can't unsee. 🧵

1/ First, the premise. We have AI models for everything now.
• Some read protein sequences (like text).
• Some look at 3D crystal structures (like vision).
• Some predict forces in materials.
They are built differently. They are trained differently. They should think differently.

2/ But a new paper from MIT just asked a massive question: "Are these models actually learning the same physics?" The answer is yes. And it's kind of spooky.

3/ The researchers took nearly 60 models - from LLMs reading SMILES strings to complex 3D potentials - and peered inside their latent spaces (their internal "thoughts"). They found that as models get smarter, their internal representations of matter start to look identical.

4/ Think of it like this: if you ask a poet and a physicist to describe a sunset, they use different languages. But if they are both experts, they are describing the exact same reality. The AI models are converging on a "Universal Representation of Matter."

5/ This chart in the paper is the smoking gun. It shows that an LLM (trained on text) and a 3D atomistic model (trained on geometry) align almost perfectly when looking at molecules. The text model "hallucinated" the 3D structure implicitly. It learned the physics just by reading the chemistry.

6/ But that's not even the most interesting part. This convergence gives us a new way to spot "fake" intelligence. The researchers found that high-performing models all cluster together in this "truth" space. But the weak models? They scatter.

7/ It's the Anna Karenina principle of AI: "All happy (smart) models resemble one another; every unhappy (dumb) model is unhappy in its own way." If a model diverges from the pack on standard data, it hasn't learned a new trick. It's just lost in a local sub-optimum.

8/ However, there is a catch. When the researchers threw "out-of-distribution" data at the weak models (stuff they hadn't seen before), the behavior flipped. Instead of scattering, the weak models collapsed. They all started making the same low-information mistakes.

9/ This reveals a massive problem in materials science AI specifically. The study shows these models are currently "data-governed." They are memorizing their specific training sets rather than learning universal laws. They aren't "foundational" yet. They are just really good parrots.

10/ So, what does this mean for the future of science?
Efficiency: we don't need massive, expensive, symmetry-enforcing architectures. We can "distill" the knowledge from big models into simple, fast ones.
Truth: we can use "alignment" to fact-check AI. If a model disagrees with the consensus of other top models, it's likely wrong.

11/ The most profound takeaway? The fact that different AIs are independently deriving the same laws suggests that these models aren't just pattern-matching. They are uncovering reality.

12/ If this research holds up, in 5 years we won't distinguish between "protein models" and "materials models." We will just have "Matter Models." One foundation to simulate it all.

13/ This paper is a dense but rewarding read. It fundamentally changes how I think about "generality" in AI.
If you want to dive deeper, grab the PDF here: [Link to 2512.03750v1.pdf] And SUBSCRIBE to me for more breakdowns of the science that is quietly changing the world.
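As a concrete illustration of what "peering inside latent spaces" can mean, here is a minimal sketch of linear CKA (centered kernel alignment), one standard metric for checking whether two models represent the same inputs similarly; the arrays below are random stand-ins, not the paper's data or code:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between representations
    X (n, d1) and Y (n, d2) of the same n inputs."""
    X = X - X.mean(axis=0, keepdims=True)  # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = (np.linalg.norm(X.T @ X, ord="fro")
           * np.linalg.norm(Y.T @ Y, ord="fro"))
    return num / den

# Random stand-ins for "LLM reading SMILES" vs. "3D atomistic model";
# in a real comparison these would be the two models' embeddings of
# the same set of molecules.
rng = np.random.default_rng(0)
text_latents = rng.normal(size=(500, 768))
atom_latents = rng.normal(size=(500, 256))
print(f"CKA: {linear_cka(text_latents, atom_latents):.3f}")
```

A score near 1 would mean the two latent spaces match up to a linear transform, which is the kind of cross-modal agreement the thread describes.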
English
118
273
1.5K
134.9K
Prabhas
Prabhas@prabhas·
Hey, I'm thinking of getting the same Mac, but a Mac Studio with 128 GB RAM. I have a few questions on your test: 1. Which model are you using for coding? 2. Are you using any IDE to do coding with the local LLM you're running? 3. How good is the code output? Has it completed the tasks you wanted?
English
0
0
0
170
Logan Thorneloe
Logan Thorneloe@loganthorneloe·
My hypothesis was right. Two weeks ago I dropped $4000 on a maxed-out MacBook to test if local coding models could replace $100+/mo cloud subscriptions. After weeks of real development work, here's what you need to know:

- Small models are shockingly capable. I'm talking 90%+ of development work can be handled by local models. Even 7B parameter models punch way above their weight. You don't need to spend $4000 on a 128 GB MacBook Pro like I did—even 32 and 64 GB can run great models.

- The real constraint is tooling. While tooling makes it easy to serve local models, connecting those models to coding tools reliably was difficult. I spent a lot of time tinkering to get them to work.

- Local models provide benefits other than just cost. They apply to many more applications (think security- and privacy-focused applications), provide greater flexibility, and are more reliable. There's no downtime for local models and their performance will never randomly degrade.

So is better hardware worth it over a subscription? Yes, but here's the catch: if you're spending $100+/mo on Cursor or Claude subscriptions, the investment is worth it. Local models will only get better and smaller from here on out. However, Google offers a lot of free quota across its AI coding products. The hardware purchase becomes much more difficult to justify if the alternative is free coding tools instead of pricey subscriptions.

My approach going forward will be this: use local models as my workhorse, and use the free cloud offerings for the 10% of cases where you need better performance.

I documented my entire local AI coding setup. I decided to use the Qwen3 models, serve them with MLX, and use Qwen Code CLI as my coding tool. Link in bio for the complete guide.
Logan Thorneloe@loganthorneloe

I've got a MacBook w/128 GB of RAM coming today. My hypothesis: My money is better spent paying for greater hardware and running local coding models than paying a $100+/mo subscription. Follow for details of my setup and to see the results!
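A minimal sketch of the serving step described above, using the mlx-lm package; the checkpoint name below is a hypothetical placeholder, and the actual setup behind Qwen Code CLI may differ:

```python
# pip install mlx-lm   (Apple Silicon only)
from mlx_lm import load, generate

# Hypothetical MLX-quantized Qwen3 checkpoint; substitute whichever
# mlx-community build fits your unified memory.
model, tokenizer = load("mlx-community/Qwen3-14B-4bit")

prompt = "Write a Python function that reverses the words in a sentence."
print(generate(model, tokenizer, prompt=prompt, max_tokens=512))
```

To wire a coding tool in, one would typically expose the model over a local OpenAI-compatible endpoint (mlx-lm ships a small server for this) and point the CLI at localhost.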

English
128
124
1.7K
286.7K
Beginnersblog
Beginnersblog@beginnersblog1·
Google just quietly dropped an AI that runs on your mobile and doesn't need the internet.
- 270 million parameters.
- 100% private.
- No servers.
- No cloud.
- No data leaving your device.

It's called FunctionGemma. Released December 18, 2025. And it does something wild: it turns your voice commands into REAL actions on your phone. No internet required. No data leaving your device. No waiting for servers. Just you and your phone. That's it.

Let me break down why this matters. Current AI assistants work like this: you speak → words go to the cloud → server processes → answer returns. The problem?
→ Slow (internet round-trip)
→ Privacy nightmare (your data travels everywhere)
→ Useless offline (no signal = no help)

FunctionGemma flips this completely. Everything happens ON your device. Response time? 0.3 seconds. Battery drain? 0.75% for 25 conversations. File size? 288 MB. That's smaller than most mobile games.

Here's how it actually works:
Step 1: You say "Add John to contacts, number 555-1234"
Step 2: FunctionGemma understands your intent
Step 3: Translates it to code your phone understands
Step 4: Your phone executes it instantly
Step 5: Done. Contact saved. No cloud involved.

The numbers that blew my mind:
• 270M parameters (6,600x smaller than GPT-4)
• 126 tokens per second
• 85% accuracy after fine-tuning
• 550 MB RAM usage
• Works 100% offline

But here's the real genius: Google calls it the "Traffic Controller" approach. Simple tasks? Handled locally (instant + private). Complex tasks? Routed to cloud AI (when needed). Best of both worlds.

What can it actually do?
→ "Set alarm for 7 AM" ✓
→ "Turn off living room lights" ✓
→ "Create meeting with Sarah tomorrow" ✓
→ "Navigate to nearest gas station" ✓
→ "Log that I drank 2 glasses of water" ✓
All processed locally. All private. All instant.

The honest limitations:
→ Can't chain multiple steps together (yet)
→ Struggles with indirect requests
→ 85% accuracy means 15% errors
→ Needs fine-tuning for best results
But that 58% → 85% accuracy jump after training? That's the unlock.

Why should you care? This isn't about one model. It's about a fundamental shift. OLD thinking: bigger AI = better AI. NEW thinking: right-sized AI for the right job. A tiny 270M model trained for YOUR app can outperform a general 7B model. While using 25x less memory. While running completely offline. While keeping all data private.

The future of AI isn't just in data centers. It's in your pocket. And it just got a lot more real.

Want to try it?
→ Download: ollama pull functiongemma
→ Docs: ai.google.dev/gemma/docs/fun…
→ Model: huggingface.co/google/functio…

PS:) Like, Repost and Bookmark! If this was useful - Follow for more AI breakdowns
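A minimal sketch of the intent-to-function-call loop described above, using the Ollama Python client; the functiongemma tag comes from the post's own download command, and the add_contact tool schema is a made-up example:

```python
# pip install ollama   (and first: ollama pull functiongemma)
import ollama

# Hypothetical tool schema: a contacts API the phone app would expose.
tools = [{
    "type": "function",
    "function": {
        "name": "add_contact",
        "description": "Save a contact on the device",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "phone": {"type": "string"},
            },
            "required": ["name", "phone"],
        },
    },
}]

response = ollama.chat(
    model="functiongemma",  # tag from the post; verify it pulls for you
    messages=[{"role": "user",
               "content": "Add John to contacts, number 555-1234"}],
    tools=tools,
)

# Instead of prose, the model should emit a structured call that the
# app then dispatches to the real device API.
for call in (response.message.tool_calls or []):
    print(call.function.name, call.function.arguments)
```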
English
208
991
5.2K
616.9K
Prabhas retweeted
Bindu Reddy
Bindu Reddy@bindureddy·
The folks who get maximal benefit from AI and vibe coding are the ones who understand technical concepts and know some basic programming. They can tell when the agent is hallucinating or making mistakes and can nudge it in the right direction. They also understand how to design correctly and prompt it to build what they want.
English
44
18
181
15.2K
Prabhas retweeted
Akhilesh Mishra
Akhilesh Mishra@livingdevops·
They say:
- DevOps is dead, SRE is dead.
- AI agents will be managing/troubleshooting your Kubernetes clusters.
- Infrastructure-as-Code will be fully automated!
- CI/CD pipelines will be built and managed by AI!

I've been hearing these dramatic predictions since ChatGPT launched. After 3 years of actually using AI (their top models) in my daily work, I can tell you this: the reality is very different from the headlines.

Yes, AI can write Terraform code (not production-ready though), Kubernetes manifests (incomplete), and the basic pipelines and scripts people have already posted on GitHub and Stack Overflow. But try getting it to:
- Debug a complex Kubernetes networking issue
- Handle multi-region failover scenarios
- Design scalable microservices architecture
- Manage security compliance across cloud providers
- Use newly released cloud services and security implementations
- Work across teams, negotiating, managing conflicts, and keeping things running

Even autonomous AI agents fall short:
- They can't maintain context across your entire infrastructure
- They struggle with real-world edge cases
- They can't understand company-specific requirements
- They are limited by their training data when facing novel problems

If your job is just copying Terraform templates and copy-pasting code from Stack Overflow, you should be concerned. But if you understand distributed systems, security implications, and complex infrastructure patterns, AI will amplify your capabilities, not replace them.

The winners will be engineers who can:
- Think deeply about systems architecture
- Solve novel infrastructure challenges
- Use AI to automate routine work
- Focus on high-impact engineering decisions

Stop believing the hype. Start focusing on becoming a better engineer who knows how to use AI as another tool in their arsenal. The future isn't AI replacing DevOps engineers. It's humans who understand technology in depth, remember the issues they faced last year, and can make better decisions when production is down instead of hallucinating.
English
11
25
206
19.2K
Prabhas
Prabhas@prabhas·
@bradkowalk @naval 💯 UI gets even better. Visual learning/grasp is faster than any other type. Apple is proof. Steve Jobs is still right about visual feel.
English
0
0
1
108
brad
brad@bradkowalk·
This is a common belief these days that seems right at first… but on second take it feels obvious the future will have plenty of UI.
UI inspires
UI educates
UI guides
UI conveys information
UI confirms
UI delights
The future will be a mix of UI and natural language. Things like AI autocomplete by @hero_assistant are a clear example of this
English
5
0
24
8.7K
Naval
Naval@naval·
UI is pre-AI.
English
647
545
7.9K
2.5M
Prabhas
Prabhas@prabhas·
Vision (the literal sense of sight) and format are still big drivers of any type of cognition (human, animal, or AI). Can we imagine civilization's growth or innovation so far without vision?
Naval@naval

UI is pre-AI.

English
0
0
0
3.3K
Prabhas
Prabhas@prabhas·
@NVIDIAGeForce GeForce Season. I'd like a 2nd GPU to run Qwen models for embedding creation. With a single 5090, I can hardly run both an embedding model and the Qwen3 14B model; tokens per second running both quantized models is under 10 tk/s. A 2nd 5090 would push my RAG progress.
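For context, a minimal sketch of the two-model RAG loop being described here: an embedding model for retrieval plus a Qwen3 chat model for generation, both assumed to sit behind a local OpenAI-compatible server; the endpoint and model names are hypothetical placeholders:

```python
import numpy as np
from openai import OpenAI

# Any local OpenAI-compatible server works (llama.cpp, vLLM, Ollama, ...).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="local")

docs = [
    "The embedding model turns each chunk into a vector.",
    "Qwen3 14B generates the final answer from retrieved chunks.",
    "Quantization shrinks both models to fit on one 5090.",
]

def embed(texts):
    # Hypothetical embedding model name; substitute your own.
    out = client.embeddings.create(model="qwen3-embedding", input=texts)
    return np.array([d.embedding for d in out.data])

doc_vecs = embed(docs)
query = "Which model writes the final answer?"
q_vec = embed([query])[0]

# Cosine-similarity retrieval: pick the closest chunk.
scores = doc_vecs @ q_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec)
)
context = docs[int(scores.argmax())]

# Generation step on the second model (name also hypothetical).
chat = client.chat.completions.create(
    model="qwen3-14b",
    messages=[{"role": "user",
               "content": f"Context: {context}\n\nQuestion: {query}"}],
)
print(chat.choices[0].message.content)
```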
English
1
0
0
1.2K
NVIDIA GeForce
NVIDIA GeForce@NVIDIAGeForce·
❄️ #GeForceSeason of Giving ❄️ Want your chance at a custom wrapped Battlefield 6 GeForce RTX 5090?! Comment “GeForce Season” below and tell us why you deserve this Holiday upgrade 🎄
English
9.6K
908
6.6K
972K
Prabhas
Prabhas@prabhas·
@elonmusk If everything is cheap and everything around us is superintelligent, then what's left for humans to do? Exploring? Earth is done for humans; what's left is space or under the ocean. Space is enormous and fun. You are a true visionary 🫡
English
0
0
1
78
Elon Musk
Elon Musk@elonmusk·
The most likely outcome is that AI and robots make everyone wealthy. In fact, far wealthier than the richest person on Earth 👀 By this, I mean that people will have access to everything from medical care that is superhuman to games that are far more fun than what exists today. We do need to make sure that AI cares deeply about truth and beauty for this to be the probable future.
Watcher.Guru@WatcherGuru

JUST IN: Elon Musk says AI and humanoid robots will "eliminate poverty" and "make everyone wealthy."

English
22.4K
20.6K
152K
30.2M
Prabhas retweeted
Deedy
Deedy@deedydas·
If you feel like giving up, you must read this never-before-shared story of the creator of PyTorch and ex-VP at Meta, Soumith Chintala.
> from Hyderabad Public School, but bad at math
> goes to a "tier 2" college in India, VIT in Vellore
> rejected from all 12 universities for US masters despite 1420 on the GRE
> fuckit.jpg
> goes to the US anyway on a J-1 visa to CMU with no plan
> applies for masters (again) to 15 universities
> rejected from all except USC and, with late admission, NYU in 2010
> finds this guy called Yann LeCun (before he was famous)
> starts getting into open source
> rejected from all jobs including DeepMind
> only job offer is Amazon as a test engineer
> his PhD mentor helps him get a job at a small startup (MuseAmi)
> rejected from DeepMind
> couldn't get an H-1B because of the J-1 home-return requirement; gets a waiver through months of approvals with USCIS and the US State Dept
> very low on confidence
> in 2011/12 builds one of the fastest AI inference engines on phones
> rejected from DeepMind
> emails Yann again and joins FAIR because of his Torch7 open-source work
> scrapes through bootcamp at Facebook, struggling on an HBase task
> L8/L9 engineers at Facebook struggle to get ImageNet working
> figures out a numerics/hyperparameter issue as an L4
> first big win!
> FAIR goes well, runs the 3-person Torch7 team and co-creates PyTorch
> because of politics, management wants to shut down PyTorch
> cries-at-bar.jpg, literally
> eventually some people save PyTorch and it launches in 2017
> gets an EB-1 green card!
> the rest is history...

Think about that. He went to a tier 2 college. Was rejected from all masters programs 2x. Rejected from every single job except Amazon test engineering. Rejected from DeepMind 3x. Nearly had his baby project shut down. Struggled with visa issues. After 12 years of failures (2005-17), he eventually rose to become a VP at Meta and one of the most influential people in AI!

Soumith's story is one of resilience, and he's living proof that no matter how down in the dumps you are, there's always hope.
English
284
1.3K
11.2K
2M
Ahmad
Ahmad@TheAhmadOsman·
everyone:
- “just use the API”
PewDiePie:
- built a 10x GPU AI server (8x modded 48GB 4090s, 2x RTX 4000 Ada)
- runs open-source models with vLLM for TP (tensor parallelism)
- vibe-coded his own Chat UI, including RAG, DeepResearch, and TTS
- is fine-tuning his own model
be like PewDiePie
Buy a GPU
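For reference, a minimal sketch of what "vLLM for TP" looks like, sharding one open-weight model across several GPUs; the model choice and GPU count below are illustrative, not the actual configuration:

```python
# pip install vllm -- shards one model across GPUs via tensor parallelism.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-72B-Instruct",  # illustrative open-weight model
    tensor_parallel_size=8,             # split every layer across 8 GPUs
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Why buy GPUs instead of using an API?"], params)
print(outputs[0].outputs[0].text)
```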
English
504
1.2K
23K
1.5M
Prabhas
Prabhas@prabhas·
@slow_developer @grok If that's the case, or if that happens, when would it be on the timeline, and what would all the software engineers on Earth do?
English
1
0
1
102
Haider.
Haider.@slow_developer·
Elon Musk: In 5-6 years, the phone becomes an AI edge node — basically a screen and audio.
"no apps, no operating systems"
A cloud AI talks to your on-device AI, generating real-time video. You'll get everything through AI that anticipates what you want.
English
1.2K
491
4.2K
1M
Prabhas retweeted
That Marine Guy🇮🇳
That Marine Guy🇮🇳@thatmarineguy21·
An Indian who had been living in Japan for more than a year noticed something strange: his Japanese friends were polite & helpful, yet none of them ever invited him to their home, not even for a cup of tea. Confused & hurt, he finally asked one Japanese friend why this was so.

After a long silence, the friend replied, "We are taught Indian history… not for inspiration, but as a warning."

The Indian, astonished, asked, "A warning? Indian history is taught as a warning? Please explain why."

The Japanese friend asked, "How many English ruled India?" The Indian replied, "Maybe… about 10,000?" The Japanese person nodded seriously & asked, "At that time, weren't there over 300 million Indians? So who committed the atrocities on your people? Who followed the orders to whip, torture, & shoot them?"

He asked emphatically, "When General Dyer ordered the firing at Jallianwala Bagh, who pulled the trigger? Was it the English soldiers? No, it was Indians. Why didn't anyone point their rifle at General Dyer, not a single person?"

He continued, "The slavery you talk about - this was your real slavery. Not of the body, but of the soul."

The Indian stood motionless, silent, & ashamed. The Japanese friend continued, "How many Mughals came from Central Asia? Maybe a few thousand? And yet they ruled you for centuries. The Mughals did not rule India through their numbers; it was your own people who bowed to them, obeyed them, betrayed their own, and showed loyalty to the Mughals, either to survive or for silver coins. Your own people converted to their religions. Your own people betrayed your heroes to the English. Who betrayed Chandrashekhar Azad? Who informed the English about his hiding place in Alfred Park? Bhagat Singh could not have been executed without the acquiescence of those people (the Gandhis and Nehrus) who called themselves patriots."

"You Indians do not need foreign enemies. Your own people repeatedly betray you for power, position, and personal gain. That is why we keep our distance from Indians. When the English came to Hong Kong and Singapore, not a single local joined their army. But in India, you did not just join the enemy's army - you served them. You worshipped them. You killed your own people to please them."

"Even today, you have not changed. You have learned no lessons from history. Even for a little free electricity, a bottle of alcohol, or a blanket, you sell your vote, your conscience, and your voice without thinking. You chant slogans and protest, but when the country needs your sacrifice, where are you? Your first loyalty is still to your home, family, wife, children, and wealth. The rest, the country, can go to hell."

After saying this, the Japanese friend left, and the Indian stood there, head bowed, frozen in shame.
English
1.8K
4.7K
16.9K
1.4M
Prabhas retweeted
Arattai
Arattai@Arattai·
We’re officially #1 in Social Networking on the App Store! Big thanks to every single Arattai user for making this possible. 💛 #StayConnected #Arattai
English
785
2.2K
12.8K
1.1M
Prabhas
Prabhas@prabhas·
@elonmusk The best trade-in value would quickly get me to buy a Tesla. I haven't owned a Tesla so far, and I'd be happy to switch to a newer version every 2 years, trading in the old one, if the trade-in has the best value.
English
0
0
1
433
Elon Musk
Elon Musk@elonmusk·
Please reply to this post with any difficulties you may have had in trying to buy a Tesla. Our goal is for the purchase and delivery experience to be fast and simple, with accurate answers to your questions. The key test is that you would recommend it to a friend.
English
61.5K
10.9K
128.6K
40.1M
Prabhas
Prabhas@prabhas·
China 🇨🇳
📌 Sources: WGC, SAFE, industry estimates
Govt reserves: ~2,300 tonnes
People-owned: ~31,000 tonnes
Total: ~33,300 tonnes (some analysts say closer to 36,000 t)
China quietly built a massive household gold reserve, much of it in bars and jewelry, making gold a form of saving.
English
0
0
1
511