Naresh MaTTa

1.4K posts


@inareshmatta

Agentic Engineering - Product Manager | Gamer | Technology Enthusiast & Data Science Aspirant @skillwisor

Kodigehall, Bengaluru · Joined August 2019
766 Following · 63 Followers
Pinned Tweet
Naresh MaTTa (@inareshmatta):
🚨 I built the WORLD’S FIRST Living AI Tutor that actually watches you study — for the Gemini Live Agent Challenge 🤯 #GeminiLiveAgentChallenge

Shivy AI:
• Sees your textbook + diagrams in real time 📖
• Talks naturally with <500ms latency (interrupt anytime) 🎙️
• Detects when you lose focus or fall asleep 😴
• Reads & corrects handwritten homework ✍️
• Auto-creates quizzes, flashcards, summaries & visuals

This might be the end of expensive private tuition. The future of school is here — embedded directly inside your textbook. 🔥
👇 Watch the full demo: youtu.be/Ww7WcPQiJ6c
#GeminiLiveAgentChallenge #EdTech #AI #FutureOfLearning
Replies 0 · Reposts 0 · Likes 0 · Views 408
OZIOMA ❁ (@OfficialOzioma):
@haha_girrrl 9 - 5 actually means 9 hours a day and 5 days a week.
Replies 1 · Reposts 0 · Likes 0 · Views 273
diyu (@haha_girrrl):
Your 9-5 is actually a 6-7:
>> 6:00 alarm
>> 6:45 out the door
>> 7:15 traffic simulator
>> 8:00 log in, act employed
>> 12:30 sad desk lunch + doomscroll
>> 1:00 back to corporate acting
>> 5:00 log off (theoretically)
>> 5:30 “just 5 mins” meeting trap
>> 6:15 reach home, soul missing
>> 6:45 sit down like a war survivor
>> 7:00 too tired to do anything new
Replies 41 · Reposts 10 · Likes 165 · Views 426.6K
Deep :) (@deep_poharkar):
this is what a hiring process should look like
[image attached]
Replies 88 · Reposts 227 · Likes 6K · Views 510.4K
Alex Nguyen (@alexcooldev):
I just received this month’s operational bill for my apps, along with the latest metrics:
💰 $30K/month revenue
👥 390K total users
🫂 10K–14K daily active users

Breakdown:
- Supabase: $187
- Railway server: $289
- OpenAI API: $542
- Speech-to-text API: $190
- RevenueCat: $150
- Claude API & Llama: $321
Total: $1,679 (not including the 15% fees from the App Store and Google Play)

I used to think I needed to cut costs. Instead, I focused on marketing and it paid off. Marketing + build first, optimize costs later.
Replies 112 · Reposts 19 · Likes 779 · Views 44.6K
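The line items in the post above can be sanity-checked: the breakdown really does sum to the stated $1,679, which works out to roughly 5.6% of the quoted revenue. A quick sketch, with all figures copied from the post:

```python
# Monthly operational costs from the post (USD)
costs = {
    "Supabase": 187,
    "Railway server": 289,
    "OpenAI API": 542,
    "Speech-to-text API": 190,
    "RevenueCat": 150,
    "Claude API & Llama": 321,
}

total = sum(costs.values())
revenue = 30_000  # $30K/month revenue, per the post

print(total)                            # 1679, matching the stated total
print(round(total / revenue * 100, 1))  # 5.6 — infra is ~5.6% of revenue
```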
Naresh MaTTa (@inareshmatta):
@alexcooldev I've never seen anyone share their operational bill this transparently, this is a first for me. Thanks for sharing, really appreciate it.
Replies 0 · Reposts 0 · Likes 1 · Views 80
vast.ai (@vast_ai):
H200 on Vast: $2.58/hr Same GPU on major clouds: $6.31+/hr 59% cheaper. Same hardware. No contracts.
Replies 17 · Reposts 1 · Likes 24 · Views 5.4K
kapilansh (@kapilansh_twt):
do AI companies quietly nerf old models when they launch new ones, or is it just me?
Replies 62 · Reposts 1 · Likes 67 · Views 2.5K
Pankaj Kumar (@pankajkumar_dev):
Antigravity finally feels usable again.

I tried Antigravity again after a while, and it finally feels usable. Gemini isn't throttling every few prompts anymore; it looks like they have relaxed the rate limits a bit. You can actually get through longer sessions without being cut off mid-work, which was super frustrating before.

Also, those random "unexpected error" popups have mostly disappeared. Last month it felt unstable; now it's at least consistent enough to stay in a flow. Not perfect though, I did hit a "High Traffic" error once today.

Claude is still the annoying part. The limits are so tight right now it's barely practical to use. Even Claude Code is hitting the limits, so it's clearly a Claude issue, not just Antigravity.

Those who use Antigravity regularly: what's your experience?

Tip: after Antigravity gets exhausted, you can switch to AI credits, then the Gemini CLI. It also feels like we need to use AI more efficiently now: better prompting, and starting a new chat after each task so the context stays small and uses fewer tokens.
[image attached]
Replies 73 · Reposts 6 · Likes 301 · Views 56.4K
Naresh MaTTa (@inareshmatta):
@Star_Knight12 As long as it goes to the office and does my work thoroughly without messing it up, and they keep paying me, I'm 👍
Replies 0 · Reposts 0 · Likes 0 · Views 128
Prasenjit (@Star_Knight12):
what if AI actually replaces you?
Replies 126 · Reposts 2 · Likes 95 · Views 8.3K
Pankaj Kumar (@pankajkumar_dev):
@AakashBuild I tried this question around 1–2 months back; it was answered correctly by Opus 4.6. I don't know why it's failing now.
Replies 1 · Reposts 0 · Likes 1 · Views 330
Naresh MaTTa (@inareshmatta):
🚀 Google just dropped Gemma 4, and if you're running it locally, you need to know about Q4_K_M! 🚀

If you've ever tried to run an LLM on your own machine, you know they can be massive. 🐘 But if you see a Gemma 4 file tagged with Q4_K_M, you've found the holy grail for consumer hardware. 💻✨

Here is why Q4_K_M is the community’s "sweet spot" for local AI:

🔹 Q4 (4-bit integers): It squishes the bulk of the model’s "brain" (the weights) from heavy 16-bit floats down to 4 bits. This delivers a massive 70–75% reduction in file size! 🤯
🔹 K (K-means quantization): A smart compression trick. Instead of scaling the whole model at once, it chops the weights into small blocks and scales them individually. This keeps the model sharp and prevents it from losing its smarts. 🧠
🔹 M (Medium mixed precision): It protects the VIPs. The most critical layers of the model are kept at higher precision, while less important parts are compressed more aggressively. 🛡️

💡 The result? A model that uses vastly less RAM but produces answers nearly indistinguishable from the massive, uncompressed version for everyday tasks.

🛠️ Rule of thumb: if your machine has under 8GB of VRAM, Q4_K_M is your go-to standard.

Are you planning to run Gemma 4 locally? What hardware are you using? Let’s chat in the comments! 👇

#Gemma4 #GoogleAI #LocalAI #LLMs #MachineLearning #Quantization #GGUF #TechTips #OpenSource
skillwisor.com/2026/04/14/q4_…
Replies 0 · Reposts 0 · Likes 0 · Views 14
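The 70–75% size reduction quoted above is just bit-width arithmetic. A rough sketch, assuming a hypothetical 4B-parameter model and an effective ~4.5 bits/weight for Q4_K_M (approximate, since the "M" mix keeps some layers at higher precision):

```python
params = 4e9     # hypothetical 4B-parameter model
fp16_bits = 16   # bits per weight, uncompressed FP16
q4km_bits = 4.5  # effective bits/weight for Q4_K_M (approximation)

fp16_gb = params * fp16_bits / 8 / 1e9  # bits -> bytes -> GB
q4km_gb = params * q4km_bits / 8 / 1e9
reduction = (1 - q4km_bits / fp16_bits) * 100

print(round(fp16_gb, 1))    # 8.0  GB at FP16
print(round(q4km_gb, 2))    # 2.25 GB at Q4_K_M
print(round(reduction, 1))  # 71.9 — inside the quoted 70–75% range
```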
TestingCatalog News 🗞 (@testingcatalog):
Google I/O leaks 👀

Google is likely already testing its own "Cowork" competitor, simply named "Agent", for Gemini and Gemini Enterprise.

A new "Tasks" UI highlights:
- Goal
- Agent
- Connected apps
- Files
- A "Require a human review" toggle
- And more

The "Require a human review" component specifically means that Gemini's capabilities will likely expand, potentially allowing users to automate their desktop tasks as well.

Skills and Projects are also cooking 👀
Replies 77 · Reposts 150 · Likes 1.7K · Views 303.2K
Naresh MaTTa (@inareshmatta):
@OfficialLoganK @GoogleAIStudio Working with AI Studio really feels better and nicer than when it started. Definitely brilliant. I wish we could access Gemma via AI Studio too.
Replies 0 · Reposts 0 · Likes 1 · Views 211
Logan Kilpatrick (@OfficialLoganK):
Introducing Tab Tab Tab, our new prompt auto complete engine in @GoogleAIStudio's vibe coding experience. Now when you show up with your fuzzy ideas, you can rely on Gemini to fill in the blanks : )
[image attached]
Replies 98 · Reposts 97 · Likes 1.3K · Views 60.4K
Naresh MaTTa (@inareshmatta):
@realrealcat @HarshithLucky3 So how it worked: once you exhaust all of the credits, it gave a one-day buffer and restored 40% of the bw again; then, when you use that up, you get locked out for 6 days.
Replies 0 · Reposts 0 · Likes 0 · Views 10
Harshith (@HarshithLucky3):
am I dreaming? Today I used Gemini 3.1 Pro (high) for 2 hours straight in Antigravity. Why didn't it hit the limit? Forget about hitting it, it didn't even consume ~40% of the usage. Is it not showing the usage correctly, or did they really increase the rate limits? Can anyone confirm?
[image attached]
Replies 127 · Reposts 9 · Likes 554 · Views 152.2K
Naresh MaTTa (@inareshmatta):
@gajesh @FactoryAI Feedback noted for MiniMax. Something interesting is on its way soon, right in everyone's pocket.
Replies 0 · Reposts 0 · Likes 0 · Views 64
Gajesh (@gajesh):
how has no one built a hybrid-model coding agent?
- GPT 5.4 instructs & checks the work
- MiniMax / Sonnet (some fast model) executes it

.@FactoryAI mission mode is probably the nearest I've seen to this structure
Replies 54 · Reposts 6 · Likes 135 · Views 15.4K
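The planner/executor split described above can be sketched as a simple loop: a strong model plans and reviews, a cheap model executes. All names here (`call_model`, `hybrid_agent`, the model labels) are hypothetical stand-ins, and the model call is stubbed out rather than hitting any real API:

```python
def call_model(model: str, prompt: str) -> str:
    """Stub for a real API call; swap in your provider's client here."""
    return f"[{model}] response to: {prompt[:40]}"

def hybrid_agent(task: str, max_rounds: int = 3) -> str:
    result = ""
    for _ in range(max_rounds):
        # Strong model writes the plan for the next step.
        plan = call_model("strong-planner", f"Plan the next step for: {task}\nSo far: {result}")
        # Fast/cheap model does the actual execution.
        result = call_model("fast-executor", f"Execute this step: {plan}")
        # Strong model reviews the work; stop once it approves.
        verdict = call_model("strong-planner", f"Reply 'done' if this completes the task: {result}")
        if "done" in verdict.lower():
            break
    return result

print(hybrid_agent("add input validation to the signup form"))
```

With the stub in place the loop just runs to `max_rounds`; the point is the control flow, not the stubbed responses.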
0xMarioNawfal (@RoundtableSpace):
GITHUB IS APPARENTLY FULL OF EXPOSED OPENAI API KEYS THANKS TO VIBE CODERS. A lot of people are shipping fast and forgetting the one part that actually matters: basic security.
[image attached]
Replies 24 · Reposts 7 · Likes 156 · Views 50.2K
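The fix for the exposed-keys problem above is old advice: read keys from the environment, never from a string literal in source (and keep `.env` files out of git). A minimal sketch, plus the kind of regex that secret scanners use to flag OpenAI-style keys; the key format and helper names here are illustrative, not an exact spec:

```python
import os
import re

def load_api_key():
    # Read the key from the environment instead of hardcoding it in source.
    return os.environ.get("OPENAI_API_KEY")

# Illustrative pattern: real keys come in a few formats,
# but this catches the classic "sk-..." shape in committed code.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9]{20,}")

def find_exposed_keys(source: str) -> list:
    return KEY_PATTERN.findall(source)

leaky_snippet = 'client = OpenAI(api_key="sk-abc123abc123abc123abc123")'
print(find_exposed_keys(leaky_snippet))  # ['sk-abc123abc123abc123abc123']
```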
Naresh MaTTa (@inareshmatta):
Hey everyone — I'm building an all-in-one AI chat app that puts every frontier model + open-source ones in a single window. Switch instantly between GPT, Claude, Gemini, Grok, DeepSeek, Llama, whatever you want, with full multi-chat support.

I know the space is crowded — Poe, TypingMind, MultipleChat, Aymo, AiZolo, and a dozen others already do this. So I want real talk from you:
- What's actually missing that would make you switch?
- What would be the killer differentiator for you?
- Privacy-first BYOK + local models mixed with cloud frontier ones?
- Something else entirely?

Drop your biggest pain points or dream features below. I'm reading every reply — this one's for power users who've tried them all. #buildinpublic
Replies 0 · Reposts 0 · Likes 0 · Views 53