Srinivas Devaki
@eightnoteight

2K posts

Co-Founder @ZenactAI | Founded Opti Owl Cloud | ex SRE Lead @Zomato | IIT Dhanbad

San Francisco, CA · Joined March 2014
7K Following · 807 Followers
Srinivas Devaki retweeted
Abhishek Agarwal
Abhishek Agarwal@abhi_agarwal4·
Introducing OpenRound - the new way to assess engineers in an AI-native world. Real codebase. In-built AI coding agent. Candidates ship a new feature using AI. LeetCode was built for a world that no longer exists. @openroundai is how you assess talent when AI writes all the code.
18
8
34
2K
Srinivas Devaki
Srinivas Devaki@eightnoteight·
based on llm perf, ig the internet has more bad code than bad poems, and news publications do more bad writing than normal blogs
0
0
2
97
Srinivas Devaki
Srinivas Devaki@eightnoteight·
slack may never go away, its not just an interface anymore, its a way of work altogether

this means more and more work is going to be superimposed on that working world model

thats why working with @linear on slack feels more natural than on the linear app

not so different from using mail as a way of work with agents - @agentmail

there should be an open standard on this by now, the way of work can't be with one company
0
0
6
152
Ryan
Ryan@ohryansbelt·
Delve, the YC-backed compliance startup that allegedly faked hundreds of SOC 2 and ISO 27001 audits, is now accused of stealing a fellow YC company's IP. According to Part 2 of DeepDelver's Substack series, Delve took SimStudio's code, removed attribution, rebranded it "Pathways," and started closing $50k-$200k+ enterprise deals with it while telling Sim's founders the ROI wasn't there for a partnership. Here's the breakdown:

> Sim (YC X25) signed on as a Delve compliance client for $15k covering SOC 2 Type 1, Type 2, and HIPAA. CEO Karun Kaushik personally promised to handle onboarding
> During that same April 2025 sales call, Karun posted a SimStudio link internally with the note "ui inspo for pathways"
> Linear tickets referencing "sim studio" under the Pathways project started appearing that same month. An internal Notion doc titled "Sim Studio Port Plan" lists specific folders to copy, including blocks, components, the executor, tools, handlers, and database schema
> Delve's production code still contains SimStudio references and docs[.]simstudio[.]ai URLs
> When Sim's CEO @Emkara tried to sell Delve a licensing deal, Karun said it didn't have "high enough ROI rn" and stopped responding
> Sim had no idea Delve was selling their product as Pathways until DeepDelver's Part 1 article. Emir confirmed over email that no white-label or attribution agreement existed
> Leaked pitch decks show Delve selling Pathways to Brex, Anthropic, Gusto, and Notion. The Notion deal was $50k+
> The Brex deck promises Pathways will make their GRC team "AI native" and includes a 50%+ partnership discount
> The Anthropic deck, dated January 9, 2025, proposes a 1-2 week PoC with named Delve staff building custom Pathways workflows
> Delve outsourced Pathways maintenance to a dev shop in Bangladesh
> Sim's open source license required attribution. Delve removed it, told clients they "built it from the ground up," and did not disclose Sim's code during Series A due diligence
Bryan Onel@BryanOnel86

Delve knows no shame. They allegedly sold another YC company’s (@simdotai) open source tool as a standalone product to companies like Notion and Brex without attribution, violating the Apache license, and then lied about it to the founders of Sim. The founders of Sim (@emkara) are left with nothing while Delve walks away with the money.

64
77
1.5K
590.2K
Srinivas Devaki
Srinivas Devaki@eightnoteight·
openai aiming for token efficiency is the biggest mistake they made IMO

the model got completely reward hacked into producing the least amount of code change without any regard to code readability and code complexity
0
0
4
116
Srinivas Devaki
Srinivas Devaki@eightnoteight·
@rohanvarma no brainer for all my automations, both active automations and nightly
0
0
0
29
Rohan Varma
Rohan Varma@rohanvarma·
If we made /slow mode in Codex, would you use it? What for? (Slower inference at a cheaper cost)
952
32
2.2K
185.7K
Srinivas Devaki
Srinivas Devaki@eightnoteight·
AI is the worst at asking questions to humans, bots can't distinguish between puny humans and other bots
0
0
1
89
Srinivas Devaki
Srinivas Devaki@eightnoteight·
DRY is a coding agent killer
0
0
1
98
Srinivas Devaki
Srinivas Devaki@eightnoteight·
@nixxin groq usage is a pretty good indicator of token usage on discounts, but hemanth had a valid point too: the rate at which token prices are going down is much, much faster than the rate of any sensible discounts
0
0
1
428
Nikhil Pahwa
Nikhil Pahwa@nixxin·
Yeah...no. A few things:

1. Token consumption is a weak metric. AI tools have a tourism problem: people sign up to try, use it, and go to the next big thing, or the next person that is offering a discount. There's no loyalty. I bet some people (like me) using Claude Code are now using it alongside Codex because, hey, more usage limits till April 2nd. Token consumption measures activity, not retention, and it can drop like blazing hot charcoal once the discounting stops. "Token consumption is all that matters in AI" doesn't really hold imo, but then I'm just a journalist and you're an INVESTOR!!! Better if you argue retention than usage here, btw. And after six months.

2. You can bump up ARR (which is a projection) based on a discounted signup cost and still calculate ARR on what the full cost is, hoping that the customer stays on, but that won't necessarily happen. ARR, in the age of AI, is what GMV was to e-commerce: hot air.
Hemant Mohapatra@MohapatraHemant

Emergent made more in accrued revenue in a WEEK than most startups make in a YEAR. This is actual, banked, cold, hard revenue.

We are happy to coach these publications how to stay relevant in the age of AI and calculate run rate the right way, which is totally different from the way the old world of saas did, which is where most reporters still seem to be stuck.

Token consumption is all that matters in AI. If you don't understand why, pls stop reporting on AI.

7
4
155
19.7K
Srinivas Devaki
Srinivas Devaki@eightnoteight·
@GergelyOrosz this is a bit unfair, almost all SOC2 and ISO compliance vendors do the same thing, hell even security vendors do the same thing

reality is these vendors are securing companies against a predefined threat model, but outside that threat model they are useless

the core problem comes from the compliance frameworks themselves, its essentially theatre. buying software that says "secured by palo alto" is much more trusted than software that has SOC2, regardless of the certification vendor

SOC2 and ISO are the bare minimum, but most software customers think they are proof of security. reality is both customers and vendors don't have time to define an extensive threat model, so in the end it becomes a skit rather than the real thing
0
0
5
515
Gergely Orosz
Gergely Orosz@GergelyOrosz·
Delve: "We are not an auditor, just as tax preparation software is not an accountant. We have never signed an audit report."

Also Delve: customer websites display certifications that say "Secured by Delve."

You simply cannot have it both ways, and now this bites back.
Karun Kaushik@karunkaushik_

Over the past week, you may have seen an anonymous post about Delve. While we responded to it in a day, we want to provide more details about what’s true, what's not, and some changes we’ve made.

There’s one question behind everything: did Delve fabricate compliance evidence or issue fraudulent audit reports? No. We did not.

→ Delve is an AI compliance platform that connects customers with independent auditors. We are not an auditor, just as tax preparation software is not an accountant. We have never signed an audit report.
→ Using default templates for our customers, just like any other compliance platform, is not “faking evidence.” These are meant to serve as a starting point for customers.
→ Delve does have automation in the platform, with 600+ automated integration tests, an AI Copilot to guide customers through compliance, AI code scanning, and more.

We built Delve to accelerate innovation by bringing AI to compliance. In doing that, we pushed hard on automation. However, we now realize we didn’t provide enough clarity about what is automated, what is customer-provided, and what is independently audited. We have been working relentlessly to make improvements over the last week.

On our auditor network: Delve connects customers with independent auditors. Some customers choose their own auditors, but many use firms in our network. Questions have been raised about some of those firms, including ones used by other platforms. Going forward we will set a higher bar in how our auditor relationships are structured and how the process is experienced by customers. Delve is rebuilding our auditor network, removing firms that don’t meet our standards, and offering complimentary re-audits and penetration tests to every customer.

On platform templates for our customers: Delve provides default templates, just like many other platforms, for policies, board meetings, risk assessments, and more. These are designed to be starting points only. We should have been more explicit about how they are meant to be reviewed and customized by customers. We are making that indisputably clearer within the platform.

On draft audit reports: Third-party auditors are responsible for independently reviewing all evidence and issuing final reports. We built automation that interacts closely with independent audit workflows to help expedite the process on behalf of our customers. However, this contributed to confusion about where automation ends and independent judgment begins. From now on, Delve will no longer automate these parts of the process. Furthermore, customers have a direct line of communication with their auditor to enhance transparency in any audit communications.

We started Delve because we went through compliance ourselves and saw how slow, expensive, and manual it was. To anyone that wants to sit down and discuss our product philosophy and improvements, please reach out and let’s chat about it.

53
62
1.5K
129.9K
Srinivas Devaki
Srinivas Devaki@eightnoteight·
above everything, agents love doing things in the shortest way possible
0
0
1
46
Srinivas Devaki
Srinivas Devaki@eightnoteight·
interview engineering and resume engineering became so bad now

too many people like and invest too much into the "problem of cracking an interview" rather than liking the actual engineering and building systems

much like UPSC
0
0
1
63
Manthan Gupta
Manthan Gupta@manthanguptaa·
LLMs are really good at writing code, so why are we giving them 100 different tools instead of just giving them code execution?

This idea came up in a conversation, and it just made sense and felt like it was right in front. It feels like a much cleaner way to structure things. Instead of turning the context window into a dumping ground of raw outputs, you let the model write code, process the data, and return only what actually matters.

You are not just making things cleaner, you are likely saving a lot of tokens as well. The model only sees the results it needs instead of parsing through noise. This becomes even more obvious with things like web search or scraping. HTML is mostly garbage, and pushing all of it into the context is just inefficient. Filtering it through code first makes far more sense.

I haven’t tested this deeply yet, but it’s interesting to see Anthropic leaning into a similar direction. Feels like a strong validation of the idea. Intuitively, this should improve latency, cost, and accuracy by turning the LLM into more of a controller than a processor.
21
1
67
5.9K
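The "code execution instead of many tools" idea above can be sketched in a few lines. This is a hypothetical illustration, not any real agent framework's API: `run_code`, `RAW_HTML`, and `model_program` are all invented names. The point is that the model emits a small program against a large raw payload, and only the program's printed output re-enters the context.

```python
# Minimal sketch: instead of dumping a raw tool result (e.g. scraped HTML)
# into the model's context, execute model-written code against it and feed
# back only what the code prints.

RAW_HTML = ("<html><body>" + "<div class=ad>buy now</div>" * 500 +
            "<h1>Quarterly revenue: $4.2M</h1></body></html>")

def run_code(source: str, env: dict) -> str:
    """Execute model-written code in its own namespace and capture stdout."""
    import io, contextlib
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(source, env)  # a production agent would use a real sandbox here
    return buf.getvalue()

# What the model might write: pull out only the fact it needs.
model_program = """
import re
m = re.search(r"<h1>(.*?)</h1>", page)
print(m.group(1) if m else "not found")
"""

result = run_code(model_program, {"page": RAW_HTML})
print(result.strip())  # the only text that goes back into the context
print(f"context saved: {len(RAW_HTML)} chars -> {len(result)} chars")
```

Under this sketch, the thousands of characters of ad markup never touch the context window; the model sees one line instead of the whole page.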
Siddharth Bhatia
Siddharth Bhatia@siddharthb_·
Clarifying a few things, starting with the MoU.

Are you taking taxpayer money?
No. The MoU signed with the UP Government is structured as a public-private partnership. It does not involve any cost to the taxpayers of Uttar Pradesh. On the contrary, it brings investment into the state. The project will be executed in phases, with support from external investment partners. We have not taken any money, any GPUs, or any other form of support from the Government. The only part of this MoU that involves citizens is that they'll receive free access to AI, in their own language and through familiar interfaces, via AI Commons.

Is your revenue ₹42.9 lakh?
No. This is a bug in Google’s AI, which has confused Puch AI with another company called Pucho AI. The ₹42.9 lakh revenue figure belongs to Pucho AI, not Puch AI. Puch AI’s revenue is not public.

What’s your valuation/annual revenue? Are you bootstrapped?
We’ll share those details when the time is right. For now, as a private company, we’re not required to make our valuation or revenue public. But, if you’re curious, we're not bootstrapped. We’re a well-funded startup.

Do you have a foundational model of your own?
No, and we do not believe it is necessary for our mission to build one at this stage.

What is your mission?
We want to bring AI to everyone in India. Today, many people already use tools like ChatGPT and Gemini, but millions still cannot. Your parents, shopkeepers, drivers, and many others are being left out because AI, in its current form, is not built for them.

What exactly is Puch AI doing?
Puch AI is building the domestic consumer AI offering for the masses. We're bringing AI to people through simple, familiar interfaces like WhatsApp and voice calls, so that it is easy to use even for those who cannot type and prefer to interact through voice.

Why don’t you have your own app?
We understand the frustration people feel about this, and at times it is frustrating for us as well (currently fighting Meta on the WhatsApp policy). But so far, this has been a conscious choice. A new app becomes one more thing to learn and one more barrier for the millions of people who are currently not using AI. Try teaching your parents a new app!

So are you asking me to not use ChatGPT or other AIs in the name of nationalism?
No, not at all. People should use whatever AI works best for them. Our effort is simply to expand access to AI by bringing it to millions who, today, are unable to use the existing tools.

Is Puch AI just a wrapper?
Wrappers are generally defined as making API calls to AI providers like OpenAI and Anthropic. So no, Puch AI is not a wrapper. We do not use any APIs. We have built our entire infrastructure in-house on top of open-source models, and engineered it for the Indian context. However, if you define a wrapper as any company that does not have a foundational model pretrained from scratch, then yes, Puch AI is a wrapper.

What is the point of Puch AI if you haven’t even built a model?
The problem is data. Today, ChatGPT and Gemini are the default go-to AI apps for almost everyone in India who uses AI. As more people use them, these companies keep getting more and more data to improve their models. That data will never help Sarvam, BharatGen, or any other domestic AI model builder. More importantly, the bulk of India will end up accessing information through OpenAI or Google: their systems and their narratives that may ultimately be shaped by foreign actors. With Puch AI, we are trying to build sovereign distribution, so that India has at least one alternative, no matter how hopeless it may seem to compete with OpenAI or Google’s marketing.

Puch AI seems relatively simple. I can build it in a weekend. Are you even doing anything?
The choice of a simple interface is deliberate, but the underlying infrastructure is far from simple. Happy to give ₹50 lakh to anyone who can build it in a weekend, match our performance in Indian languages, and offer it for free at our consumer scale.

How was your experience building for India?
It is a very interesting situation, where the Government understands the importance and wants to do good, but it becomes difficult to do any good when so many bad actors are rooting for you to fail. Thousands of paid accounts are ready to try to shape narratives to fit their own political agenda. Everything from Community Notes to actual news articles and journalism gets weaponized against you, because your failure gets more views.

Will you keep fighting?
Not sure. Quite exhausted. Maybe this is the time to sign off.
615
172
1.1K
548.5K
Srinivas Devaki retweeted
Renushri Rawat
Renushri Rawat@RenushriRawat·
Loved how every question kept trying to break what we were thinking, not just agree with it. That actually helped way more than validation ever would. Each push made things clearer, and now we feel much more sure about what we’re building and why. GStack 🙌 @garrytan
0
1
3
245