techie007

2.2K posts

@techie0072

27, Tech Lead @Google. Connect with me everywhere👇 Instagram: https://t.co/7WusC2TdQ3 LinkedIn: https://t.co/1B3TS4wPq5

Join my Newsletter 👉🏻
Joined June 2021
78 Following · 1.5K Followers

Pinned Tweet
techie007
techie007@techie0072·
🦞OpenClaw lets me talk to AI agents on WhatsApp. But I only used it for one thing - coding. So I ripped out the noise and built a direct pipeline: WhatsApp -> Coding Agents -> Shipped Code. Claude Code, Codex, Gemini CLI, OpenCode - all accessible from one chat. Happy to hear your reviews.
5 replies · 0 reposts · 11 likes · 1.3K views
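The pipeline in the pinned tweet boils down to a message router in front of several coding-agent CLIs. A minimal Python sketch: the agent names come from the tweet, but the CLI invocations and the `route` helper are assumptions for illustration, not OpenClaw's actual code.

```python
# Hypothetical router: map an "agent: task" chat message to a CLI command.
# The flag choices below are assumptions, not verified agent interfaces.
AGENTS = {
    "claude": ["claude", "-p"],       # Claude Code
    "codex": ["codex", "exec"],       # Codex
    "gemini": ["gemini", "-p"],       # Gemini CLI
    "opencode": ["opencode", "run"],  # OpenCode
}

def route(message, default="claude"):
    """Pick a coding agent from an 'agent: task' message; fall back to a default."""
    name, sep, task = message.partition(":")
    name = name.strip().lower()
    if sep and name in AGENTS:
        return AGENTS[name] + [task.strip()]
    return AGENTS[default] + [message.strip()]
```

From here, the chat backend would hand the returned argv to something like `subprocess.run` and stream the output back to the chat.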
techie007
techie007@techie0072·
I mean, what a crazy day:
- Axios compromised
- Claude Code leaked
1 reply · 0 reposts · 7 likes · 132 views
Daniel Dhawan
Daniel Dhawan@daniel_dhawan·
My first 6 years as a startup founder:
- Started by building AI mobile apps at 20yo
- Failed with 4+ startups
- Ran out of money multiple times
- Was rejected by Y Combinator 8 times
- Had $15k of credit card debt
- Got 200+ rejections from investors

My last year as a startup founder:
- Moved to SF
- Launched Rork, an AI mobile app builder, to make my year-1 self happy
- Got into a16z speedrun and raised $3M+
- Scaled Rork to millions in ARR in under a year
- Became the #1 AI mobile app builder in the world

The average journey to a $1B company takes 10 years. I’m on year 7. Keep building.
Daniel Dhawan tweet media
238 replies · 42 reposts · 1.2K likes · 55K views
techie007 reposted
Ankush Dharkar
Ankush Dharkar@RealAnkush·
First, you have to realize that life is unfair. We don’t have to help everyone; in fact, we don’t have to help at all. The main mentors of @RealDevSquad end up giving 30-35 hours WEEKLY for others to grow, and you'd be surprised how nonchalant and frivolous the people who want to join are.

We have our paths laid out. If it doesn't work for you, just too bad. By now, if you can't get even one referral, then it's time to introspect and get your gears moving anyway. Things are going to get hard for you. And is it really that bad that you pick up another prototyping language on the side, as you prove your consistency to get closer to your goal(s)? A good engineer picks up stuff fast. If you're reluctant, good luck with how the tech wave is shaping up into a tsunami.

You want our coaching? Well, this is the price. Take it or leave it. We've done mass intake in the past. It brings more trash than treasure. So we've got to be mindful about who we decide to invest 1-2 years of our time into. The monetary price is free, but our time is still pretty valuable.
1 reply · 10 reposts · 54 likes · 5.8K views
Arpit Bhayani
Arpit Bhayani@arpit_bhayani·
Joined Razorpay as Principal Engineer II :) From being a long-time customer to now building parts of the system - it's a full circle. Fintech is a new territory for me - time to get under the hood of how money actually moves. New domain, same guarantees - availability, correctness, performance - just with real money on the line.
Arpit Bhayani tweet media
548 replies · 63 reposts · 7.2K likes · 585.4K views
techie007 reposted
Garry Tan
Garry Tan@garrytan·
GStack is helping people make better software all around the world now
Garry Tan tweet media
28 replies · 6 reposts · 242 likes · 23.9K views
techie007
techie007@techie0072·
Vector databases don't do "exact" similarity search. They use approximate nearest neighbors (ANN). HNSW builds a navigable small-world graph. Each query hops through layers, greedy-walking toward the closest node. It's fast because it skips 99% of vectors. It's wrong sometimes because it skips 99% of vectors. Know the tradeoff before you ship.
0 replies · 0 reposts · 1 like · 152 views
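The greedy-walk tradeoff above is easy to see on a toy graph. A minimal Python sketch, not HNSW itself (no layers, no candidate list, a single entry point): a greedy walk over a sparse neighbor graph can stop at a local minimum while exact search finds the true nearest neighbor.

```python
def greedy_search(points, graph, entry, query):
    """Greedy walk: move to the neighbor closest to the query;
    stop when no neighbor improves the current distance."""
    cur = entry
    while True:
        best = min(graph[cur], key=lambda n: abs(points[n] - query))
        if abs(points[best] - query) >= abs(points[cur] - query):
            return cur
        cur = best

# Toy 1-D "vectors" and a deliberately sparse neighbor graph.
points = [0.0, 10.0, 11.0, 2.0]
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

approx = greedy_search(points, graph, entry=0, query=3.0)   # gets stuck at index 0
exact = min(range(len(points)), key=lambda i: abs(points[i] - 3.0))  # finds index 3
```

Real HNSW mitigates this with multiple layers, more edges, and a beam of candidates (`ef`), which is exactly why recall is tunable rather than guaranteed.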
techie007
techie007@techie0072·
RAG doesn't fail because of bad retrieval. It fails because you chunked a 40-page doc into 512-token blocks and destroyed every cross-reference, table header, and paragraph dependency. Your embedding model isn't dumb. Your chunking strategy is. Fix: overlap chunks, preserve section hierarchy, embed parent-child pairs.
1 reply · 0 reposts · 3 likes · 135 views
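The fix in the tweet can be sketched as a chunker that overlaps its windows and carries the section header into every chunk. A minimal Python sketch under assumed parameters (a "token" is just a list element here; the name `chunk_with_overlap` is made up for illustration):

```python
def chunk_with_overlap(tokens, size=512, overlap=64, header=None):
    """Split a token list into overlapping windows, prefixing each window
    with its section header so no chunk loses its surrounding context."""
    step = size - overlap
    chunks = []
    for start in range(0, max(len(tokens) - overlap, 1), step):
        body = tokens[start:start + size]
        chunks.append(([header] if header else []) + body)
    return chunks
```

The same idea extends to parent-child pairs: embed the small chunk for retrieval precision, but store a pointer to its parent section so generation sees the full context.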
techie007 reposted
Gergely Orosz
Gergely Orosz@GergelyOrosz·
MCPs are the opposite of dead. They are the lifeblood of how AI agents use services inside mid-sized and larger companies. Case in point: Uber runs on MCPs internally, for good reason. Details: newsletter.pragmaticengineer.com/p/how-uber-use…
@levelsio@levelsio

Thank god MCP is dead. Just as useless an idea as LLMs.txt was. It's all dumb abstractions that AI doesn't need, because AIs are as smart as humans, so they can just use what was already there, which is APIs

92 replies · 101 reposts · 1.3K likes · 265.1K views
techie007 reposted
Jon Yongfook
Jon Yongfook@yongfook·
We are entering an era where human-maxxing will be rewarded. Given infinite AI appslop marketed by AI adslop, the way to win is:
- be human. your marketing is you and a camera.
- app pain point is something personal to you
- landing page is not fucking purple
67 replies · 9 reposts · 252 likes · 9.3K views
techie007
techie007@techie0072·
The scariest part of AI agents isn't intelligence. It's execution. Once the model can:
• run commands
• access your files
• move money
It becomes infrastructure.
0 replies · 0 reposts · 4 likes · 92 views
techie007
techie007@techie0072·
The biggest misconception about AI agents: People think the magic is the model. But the real leverage is: Tools + Skills + Execution. Models think. Agents act.
0 replies · 0 reposts · 2 likes · 73 views
techie007 reposted
Dhravya Shah
Dhravya Shah@DhravyaShah·
been building in this space for years now, and have followed nishkarsh for years as well - congrats on the launch! since this is in the same space we're building in, i dived deep into it and have thoughts. the launch itself is very hype-y, and is meant to trigger rage bait

1. it's positioned as a database, but is almost a @supermemory-like system
2. their example of "vector dbs" not being able to do this is really a question of "embedding models". and embedding models have superpositions, they are cheap and are easily able to infer differences between them. it's not hard to ask claude to do a mini experiment to prove this (attached below). what does matter is: is it able to track how knowledge evolves? time passes? this made me curious so i read their paper
3. their research paper is hardcoding and gaming the benchmark with a different prompt for every category!!! (see image below). if their benchmarking is fixed, supermemory will remain the SOTA.
4. they reinvented the contextual retrieval paper by Anthropic from 2024 and called it "the orphaned pronoun paradox"
5. they mention they use a custom "in-memory vector store" = at about 500GB, you will have to pay more than $10k for just the RAM.
6. inference is run too many times in the pipeline - which means for every LLM token you ingest, you will end up paying 5x more than token cost for the graph + contextualization + storage.
7. latency and cost numbers were never reported. my hunch is that because of the architecture, latency will struggle at scale. but i can't tell - their product is behind a demo gate.
8. the benchmarking code is not OSS (from what i can tell). not replicable + who knows how much context they are injecting into the model? what's the K?
9. inorganic, undisclosed ads (just read the quote tweets). influencer accounts with 400k+ followers all saying the same thing.

people keep getting away with this @nikitabier lol. i'm all in for healthy competition and progress in this field, and enjoy seeing good work being done by others. but it's easy to just say things. "no one will check." playing the game the right way is hard, and everyone's just saying whatever they can to impress people.

TLDR is: you should use this if you want to spend 2-5x more for no real marginal improvement and enjoy unhealthy research and business practices.

attached:
1. experiment to disprove the hypothesis of vector dbs not understanding grey vs grey
2. one of their prompts, which just says "say i don't know". they scored 100% :)
Dhravya Shah tweet media
Nishkarsh@contextkingceo

We've raised $6.5M to kill vector databases. Every system today retrieves context the same way: vector search that stores everything as flat embeddings and returns whatever "feels" closest. Similar, sure. Relevant? Almost never. Embeddings can’t tell a Q3 renewal clause from a Q1 termination notice if the language is close enough. A friend of mine asked his AI about a contract last week, and it returned a detailed, perfectly crafted answer pulled from a completely different client’s file. Once you’re dealing with 10M+ documents, these mix-ups happen all the time. VectorDB accuracy goes to shit. We built @hydra_db for exactly this. HydraDB builds an ontology-first context graph over your data, maps relationships between entities, understands the 'why' behind documents, and tracks how information evolves over time. So when you ask about 'Apple,' it knows you mean the company you're serving as a customer. Not the fruit. Even when a vector DB's similarity score says 0.94. More below ⬇️

52 replies · 12 reposts · 440 likes · 82.4K views
techie007 reposted
Jon Yongfook
Jon Yongfook@yongfook·
“Shipped 10,000+ lines of code today”
“Cool what product? What’s the link”
“…163 PRs in one day!”
“Yes but what’s the link”
“…1,827,963 tokens and counting!”
“Dude what are you working on”
“…AI is crazy man”
216 replies · 221 reposts · 5.8K likes · 146.8K views
techie007
techie007@techie0072·
Go is fast. But for Discord, it wasn't fast enough. They migrated their critical "Read States" service from Go to Rust, solving a massive latency consistency issue that plagued their 99th percentile response times. The villain? Garbage Collection (GC). Here is the deep dive:

1. The Problem: Read States
This service tracks which channels/messages you've read. It is on the hot path for every single user action. It relies heavily on an LRU (Least Recently Used) cache to keep active users in memory.

2. The Go Memory Model
In Go, the GC runs periodically to clean up memory. With millions of users in the LRU cache, the heap was massive. Even though Go’s GC is highly optimized, it still has to scan the heap. Every 2 minutes, Discord saw massive CPU spikes and latency jumps as the GC halted execution to mark-and-sweep the cache.

3. Why Tuning Failed
They tried everything:
• Forcing GC to run more often (to keep pauses small).
• Optimizing data structures.
But you can't cheat the fundamental trade-off of a garbage-collected language: you don't control when memory is freed.

4. The Rust Solution
Rust has no garbage collector. It uses ownership and borrowing to manage memory at compile time. When a user drops out of the LRU cache in Rust, the memory is freed immediately. No background process. No "stop the world" pauses.

The Impact:
• The spikes on their latency graphs completely disappeared.
• Response time became a flat, predictable line.
• They reduced the number of servers needed because they weren't over-provisioning for GC spikes anymore.

Takeaway: For 99% of apps, Go is perfect. But when you are caching millions of objects and need predictable microsecond latency, you need manual memory management. Choose the tool that fits the constraints, not the hype.
1 reply · 1 repost · 8 likes · 269 views
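The LRU cache at the center of this story is a simple structure. A toy Python sketch for illustration only (not Discord's code; note that CPython frees evicted entries promptly via refcounting, which is closer to Rust's deterministic free than to a tracing GC):

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: evicts the oldest entry when over capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # touch "a": "b" is now least recently used
cache.put("c", 3)  # over capacity: "b" is evicted
```

In a GC'd runtime, millions of such entries keep the heap large and every mark phase has to walk them; with ownership semantics, the evicted entry's memory is reclaimed at the `popitem` call and never seen again.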
techie007 reposted
Sudheesh
Sudheesh@sudheenair·
creativity of the use cases on this @Tiny_Fish accelerator has been insane. Here’s another one.
Sagnik Ghosh@Sagnik_26

The market is moving fast with AI. Shipping code alone isn’t enough — you need to understand the market, competitors, positioning and all the other stuff. MarketLens automates that using @Tiny_Fish →
• Product matchups: Compare your product to competitors by goal; scans show gaps and wins.
• Compliance alerts: Regulatory circulars as alerts, not spreadsheets.
• Workflow on completion: Scan done → Slack, webhook, or n8n. No manual handoff.
Built for Engineers, PMs and CPOs who need to act, not just watch. #TinyFishAccelerator #BuildInPublic

0 replies · 1 repost · 3 likes · 565 views