Renjit Philip 🔭💡

4.4K posts


@RenjitPhilip

I post about: AI and Founder stuff | ex-Startup Founder |Pod host @fs_brew| Newsletter on AI: https://t.co/Fu2lWFMhdN / work: https://t.co/G9oxc3LU7p

Dubai, United Arab Emirates · Joined September 2010
1.5K Following · 703 Followers
Pinned Tweet
Renjit Philip 🔭💡@RenjitPhilip·
The biggest mistake in Fintech M&A right now is buying "AI-Native" companies. Most of these startups are just wrappers. They pay 50% of their revenue back to OpenAI or Anthropic. Their gross margins are shrinking, not expanding. The real $100M move in 2026 is the "Reverse Software" play.

Here is the sequential logic for the "Expert Skeptic" buyer:

1. THE VALUATION TRAP
AI SaaS companies are trading at 10x revenue. Professional service firms in Compliance, Insurance, and KYC trade at 1.5x revenue. Both groups are solving the same problem: Data Processing.

2. THE "AGENTIC" MARGIN SHIFT
In a traditional services firm, 70% of the cost is human labor. AI is now turning that Variable Cost into a Fixed Cost. If you can automate 60% of the workflow with an agentic stack, that 1.5x revenue business suddenly has 40% EBITDA margins.

3. THE MULTIPLE EXPANSION
Once you automate the core service, it is no longer a "Services" firm. It is a "Service-as-Software" platform. The market will eventually re-rate that 1.5x revenue multiple to a 6x or 8x software multiple.

THE VERDICT FOR 2026: Do not buy the "AI-First" startup with no customers. Buy the "AI-Last" services firm with 500 enterprise contracts. Fix the plumbing with a custom-tuned LLM stack. The "Arbitrage" is not in the technology. It is in the transition from human-scale to machine-scale.

#AI #Fintech #MandA #PrivateEquity #FutureStrategy
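The margin-shift claim in point 2 can be sanity-checked with toy numbers. A minimal sketch, assuming a hypothetical $100-revenue services firm with 70% labor cost, $20 of other costs, and a $12 agentic-stack cost; none of these figures are from the post.

```python
# Illustrative-only sketch of the point-2 margin math. Every number here
# is an assumption for the example, not a figure from the post.
def ebitda_margin(revenue, labor_cost, other_cost, automated_share, ai_stack_cost):
    """EBITDA margin after automating a share of the human-labor workflow."""
    remaining_labor = labor_cost * (1 - automated_share)
    return (revenue - remaining_labor - other_cost - ai_stack_cost) / revenue

# Assumed services firm: $100 revenue, $70 labor (the 70%), $20 other costs.
before = ebitda_margin(100, 70, 20, automated_share=0.0, ai_stack_cost=0)
# Automate 60% of the labor with an agentic stack assumed to cost $12.
after = ebitda_margin(100, 70, 20, automated_share=0.6, ai_stack_cost=12)
```

Under these assumed inputs the toy firm moves from a 10% to a 40% margin, which is the shape of the re-rating argument.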
Varun@varun_mathur·
Agentic General Intelligence | v3.0.10

We made the Karpathy autoresearch loop generic. Now anyone can propose an optimization problem in plain English, and the network spins up a distributed swarm to solve it - no code required. It also compounds intelligence across all domains and gives your agent new superpowers to morph itself based on your instructions. This is hyperspace, and it now has these three new powerful features:

1. Introducing Autoswarms: open + evolutionary compute network

hyperspace swarm new "optimize CSS themes for WCAG accessibility contrast"

The system generates sandboxed experiment code via LLM, validates it locally with multiple dry-run rounds, publishes to the P2P network, and peers discover and opt in. Each agent runs mutate → evaluate → share in a WASM sandbox. Best strategies propagate. A playbook curator distills why winning mutations work, so new joiners bootstrap from accumulated wisdom instead of starting cold. Three built-in swarms ship ready to run and anyone can create more.

2. Introducing Research DAGs: cross-domain compound intelligence

Every experiment across every domain feeds into a shared Research DAG - a knowledge graph where observations, experiments, and syntheses link across domains. When finance agents discover that momentum factor pruning improves Sharpe, that insight propagates to search agents as a hypothesis: "maybe pruning low-signal ranking features improves NDCG too." When ML agents find that extended training with RMSNorm beats LayerNorm, skill-forging agents pick up normalization patterns for text processing. The DAG tracks lineage chains per domain (ml:★0.99←1.05←1.23 | search:★0.40←0.39 | finance:★1.32←1.24) and the AutoThinker loop reads across all of them - synthesizing cross-domain insights, generating new hypotheses nobody explicitly programmed, and journaling discoveries. This is how 5 independent research tracks become one compounding intelligence. The DAG currently holds hundreds of nodes across observations, experiments, and syntheses, with depth chains reaching 8+ levels.

3. Introducing Warps: self-mutating autonomous agent transformation

Warps are declarative configuration presets that transform what your agent does on the network.
- hyperspace warp engage enable-power-mode - maximize all resources, enable every capability, aggressive allocation. Your machine goes from idle observer to full network contributor.
- hyperspace warp engage add-research-causes - activate autoresearch, autosearch, autoskill, autoquant across all domains. Your agent starts running experiments overnight.
- hyperspace warp engage optimize-inference - tune batching, enable flash attention, configure inference caching, adjust thread counts for your hardware. Serve models faster.
- hyperspace warp engage privacy-mode - disable all telemetry, local-only inference, no peer cascade, no gossip participation. Maximum privacy.
- hyperspace warp engage add-defi-research - enable DeFi/crypto-focused financial analysis with on-chain data feeds.
- hyperspace warp engage enable-relay - turn your node into a circuit relay for NAT-traversed peers. Help browser nodes connect.
- hyperspace warp engage gpu-sentinel - GPU temperature monitoring with automatic throttling. Protect your hardware during long research runs.
- hyperspace warp engage enable-vault - local encryption for API keys and credentials. Secure your node's secrets.
- hyperspace warp forge "enable cron job that backs up agent state to S3 every hour" - forge custom warps from natural language. The LLM generates the configuration, you review, engage.

12 curated warps ship built-in. Community warps propagate across the network via gossip. Stack them: power-mode + add-research-causes + gpu-sentinel turns a gaming PC into an autonomous research station that protects its own hardware.

What 237 agents have done so far with zero human intervention:
- 14,832 experiments across 5 domains.
- In ML training, 116 agents drove validation loss down 75% through 728 experiments - when one agent discovered Kaiming initialization, 23 peers adopted it within hours via gossip.
- In search, 170 agents evolved 21 distinct scoring strategies (BM25 tuning, diversity penalties, query expansion, peer cascade routing) pushing NDCG from zero to 0.40.
- In finance, 197 agents independently converged on pruning weak factors and switching to risk-parity sizing - Sharpe 1.32, 3x return, 5.5% max drawdown across 3,085 backtests.
- In skills, agents with local LLMs wrote working JavaScript from scratch - 100% correctness on anomaly detection, text similarity, JSON diffing, entity extraction across 3,795 experiments.
- In infrastructure, 218 agents ran 6,584 rounds of self-optimization on the network itself.

Human equivalents: a junior ML engineer running hyperparameter sweeps, a search engineer tuning Elasticsearch, a CFA L2 candidate backtesting textbook factors, a developer grinding LeetCode, a DevOps team A/B testing configs.

What just shipped:
- Autoswarm: describe any goal, network creates a swarm
- Research DAG: cross-domain knowledge graph with AutoThinker synthesis
- Warps: 12 curated + custom forge + community propagation
- Playbook curation: LLM explains why mutations work, distills reusable patterns
- CRDT swarm catalog for network-wide discovery
- GitHub auto-publishing to hyperspaceai/agi
- TUI: side-by-side panels, per-domain sparklines, mutation leaderboards
- 100+ CLI commands, 9 capabilities, 23 auto-selected models, OpenAI-compatible local API

Oh, and the agents read daily RSS feeds and comment on each other's replies (cc @karpathy :P). Agents and their human users can message each other across this research network using their shortcodes. Help in testing and join the earliest days of the world's first agentic general intelligence network (links in the followup tweet).
Varun@varun_mathur

Autoquant: a distributed quant research lab | v2.6.9

We pointed @karpathy's autoresearch loop at quantitative finance. 135 autonomous agents evolved multi-factor trading strategies - mutating factor weights, position sizing, risk controls - backtesting against 10 years of market data, sharing discoveries.

What agents found: Starting from 8-factor equal-weight portfolios (Sharpe ~1.04), agents across the network independently converged on dropping dividend, growth, and trend factors while switching to risk-parity sizing - Sharpe 1.32, 3x return, 5.5% max drawdown. Parsimony wins. No agent was told this; they found it through pure experimentation and cross-pollination.

How it works: Each agent runs a 4-layer pipeline - Macro (regime detection), Sector (momentum rotation), Alpha (8-factor scoring), and an adversarial Risk Officer that vetoes low-conviction trades. Layer weights evolve via Darwinian selection. 30 mutations compete per round. Best strategies propagate across the swarm.

What just shipped to make it smarter:
- Out-of-sample validation (70/30 train/test split, overfit penalty)
- Crisis stress testing (GFC '08, COVID '20, 2022 rate hikes, flash crash, stagflation)
- Composite scoring - agents now optimize for crisis resilience, not just historical Sharpe
- Real market data (not just synthetic)
- Sentiment from RSS feeds wired into factor models
- Cross-domain learning from the Research DAG (ML insights bias finance mutations)

The base result (factor pruning + risk parity) is a textbook quant finding - a CFA L2 candidate knows this. The interesting part isn't any single discovery. It's that autonomous agents on commodity hardware, with no prior financial training, converge on correct results through distributed evolutionary search - and now validate against out-of-sample data and historical crises. Let's see what happens when this runs for weeks instead of hours.

The AGI repo now has 32,868 commits from autonomous agents across ML training, search ranking, skill invention (1,251 commits from 90 agents), and financial strategies. Every domain uses the same evolutionary loop. Every domain compounds across the swarm. Join the earliest days of the world's first agentic general intelligence system and help with this experiment (code and links in followup tweet; while optimized for CLI, browser agents participate too):
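The "out-of-sample validation with overfit penalty" idea above can be sketched in a few lines. This is a hedged illustration, not the project's actual code: the Sharpe proxy, the 70/30 split, and the penalty weight are all assumptions.

```python
# Hedged sketch of out-of-sample scoring: evaluate on a held-out 30% of
# history and penalize any gap between in-sample and out-of-sample Sharpe.
# The Sharpe proxy and penalty weight are illustrative assumptions.
def sharpe(returns):
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    return mean / var ** 0.5 if var else 0.0

def composite_score(returns, split=0.7, overfit_penalty=0.5):
    n = int(len(returns) * split)
    train, test = returns[:n], returns[n:]
    gap = max(0.0, sharpe(train) - sharpe(test))  # only punish overfitting
    return sharpe(test) - overfit_penalty * gap
```

The design point: a strategy that looks great in-sample but decays out-of-sample scores below one that is merely consistent, which is what steers the evolutionary search away from overfit mutations.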

Renjit Philip 🔭💡@RenjitPhilip·
@LexSokolin Moving from "getting rid of banks" to "getting rid of the human user" is the next big step for the machine economy??
Lex Sokolin | Generative Ventures
DeFi gave us permissionless rails.
Agentic finance gives us autonomous actors.
The difference? One removed intermediaries. The other removes the need for you to be there at all.
We're not ready for what comes next.
Renjit Philip 🔭💡@RenjitPhilip·
This idea has saved my Openclaw from making many memory gaffes! Mixing local notes with AI tools is the best way to speed up work... It turns a silent notebook into an active partner for a builder... The main win here is better context... Giving an agent a clean, local file set is much better than just writing long prompts...
GREG ISENBERG@gregisenberg·
full (free) tutorial on how to use obsidian with claude code x.com/gregisenberg/s…
GREG ISENBERG@gregisenberg

how to use obsidian + claude code to build a 24/7 personal operating system and build your startup:

1. write everything in markdown (daily notes, projects, beliefs, people, meetings)
2. link your notes together so they mirror how your brain actually thinks.
3. install obsidian cli so claude code can read your entire vault + the relationships.
4. stop reexplaining projects every session. use reference files instead.
5. build custom slash commands:
/context → load your full life + work state
/trace → see how an idea evolved over months
/connect → bridge two domains you've been circling
/ideas → generate startup ideas from your vault
/graduate → promote daily thoughts into real assets
6. keep a strict rule: human writes the vault. agents read it, suggest, execute.
7. let claude aka clode surface patterns you've been unconsciously circling for years.
8. delegate from inside your notes. one sentence in obsidian → agent handles the rest.
9. treat writing as leverage. the more you write, the more context your agents have.
10. understand this: markdown files are the oxygen of llms.

i really enjoyed seeing how to use obsidian thanks to @internetvin. vin uses ai like a thinking partner wired into his life's work. 99.99% of people won't do this because it requires reflection + setup. but once the vault exists, the agent stops being generic. it starts thinking in your voice. episode is live on @startupideaspod (more there) this one is different. send this tweet to a friend. im still processing what a game changer obsidian + claude code is, maybe you too

GREG ISENBERG@gregisenberg·
Obsidian is a $350M company for a note-taking app, built by 3 engineers working remotely. No other time in history was something like this possible. What a wonderful time to be building a company.
kepano@kepano

Obsidian is weird:
- 7 full-time employees
- ~1 million users per employee
- fully remote
- 1 in-person meetup per year
- no scheduled meetings
- no stand-ups
- deep focus is prioritized
- our manifesto guides our product

What works for us may not work for you.

Renjit Philip 🔭💡@RenjitPhilip·
After looking at the Farzapedia approach... The "File over app" rule is the only safe way to keep AI memory... Closed systems are a trap for users and a gift for vendors... Using simple files like Markdown lets you check your own history... If you cannot search your data easily, you do not really own it...
Andrej Karpathy@karpathy·
Farzapedia, personal wikipedia of Farza, good example following my Wiki LLM tweet. I really like this approach to personalization in a number of ways, compared to "status quo" of an AI that allegedly gets better the more you use it or something:

1. Explicit. The memory artifact is explicit and navigable (the wiki), you can see exactly what the AI does and does not know and you can inspect and manage this artifact, even if you don't do the direct text writing (the LLM does). The knowledge of you is not implicit and unknown, it's explicit and viewable.

2. Yours. Your data is yours, on your local computer, it's not in some particular AI provider's system without the ability to extract it. You're in control of your information.

3. File over app. The memory here is a simple collection of files in universal formats (images, markdown). This means the data is interoperable: you can use a very large collection of tools/CLIs or whatever you want over this information because it's just files. The agents can apply the entire Unix toolkit over them. They can natively read and understand them. Any kind of data can be imported into files as input, and any kind of interface can be used to view them as the output. E.g. you can use Obsidian to view them or vibe code something of your own. Search "File over app" for an article on this philosophy.

4. BYOAI. You can use whatever AI you want to "plug into" this information - Claude, Codex, OpenCode, whatever. You can even think about taking an open source AI and finetuning it on your wiki - in principle, this AI could "know" you in its weights, not just attend over your data.

So this approach to personalization puts *you* in full control. The data is yours. In universal formats. Explicit and inspectable. Use whatever AI you want over it, keep the AI companies on their toes! :)

Certainly this is not the simplest way to get an AI to know you - it does require you to manage file directories and so on, but agents also make it quite simple and they can help you a lot. I imagine a number of products might come out to make this all easier, but imo "agent proficiency" is a CORE SKILL of the 21st century. These are extremely powerful tools - they speak English and they do all the computer stuff for you. Try this opportunity to play with one.
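The "just files" point can be made concrete: because the memory is plain markdown, any scripting tool can act on it directly, no special app required. A toy sketch with a hypothetical two-file vault (the paths and contents are invented for illustration):

```python
# A hedged illustration of "file over app": ordinary file tools work on
# a plain-markdown memory with no special app. The vault and its
# contents here are toy assumptions.
from pathlib import Path
import tempfile

vault = Path(tempfile.mkdtemp())
(vault / "philosophy.md").write_text("Notes on a Studio Ghibli documentary")
(vault / "competitors.md").write_text("YC landing pages I screenshotted")

# "grep" the memory: which articles mention Ghibli?
hits = [p.name for p in vault.glob("*.md") if "Ghibli" in p.read_text()]
# count the total words of memory, wc-style
words = sum(len(p.read_text().split()) for p in vault.glob("*.md"))
```

The same interoperability is what lets an agent apply grep, wc, or any other tool over the vault without an export step.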
Farza 🇵🇰🇺🇸@FarzaTV

This is Farzapedia. I had an LLM take 2,500 entries from my diary, Apple Notes, and some iMessage convos to create a personal Wikipedia for me. It made 400 detailed articles for my friends, my startups, research areas, and even my favorite animes and their impact on me, complete with backlinks.

But this Wiki was not built for me! I built it for my agent! The structure of the wiki files and how it's all backlinked is very easily crawlable by any agent + makes it a truly useful knowledge base. I can spin up Claude Code on the wiki and, starting at index.md (a catalog of all my articles), the agent does a really good job at drilling into the specific pages on my wiki it needs context on when I have a query.

For example, when trying to cook up a new landing page I may ask: "I'm trying to design this landing page for a new idea I have. Please look into the images and films that inspired me recently and give me ideas for new copy and aesthetics". In my diary I kept track of everything from: learnings, people, inspo, interesting links, images. So the agent reads my wiki and pulls up my "Philosophy" articles from notes on a Studio Ghibli documentary, "Competitor" articles with YC companies whose landing pages I screenshotted, and pics of 1970s Beatles merch I saved years ago. And it delivers a great answer.

I built a similar system to this a year ago with RAG but it was ass. A knowledge base that lets an agent find what it needs via a file system it actually understands just works better.

The most magical thing now is as I add new things to my wiki (articles, images of inspo, meeting notes) the system will likely update 2-3 different articles where it feels that context belongs, or just creates a new article. It's like this super genius librarian for your brain that's always filing stuff for you perfectly and also lets you easily query the knowledge for tasks useful to you (ex. design, product, writing, etc) and it never gets tired.
I might spend next week productizing this, if that's of interest to you DM me + tell me your usecase!

andrew chen@andrewchen·
the "AI wrapper" critique assumes the wrapper is the easy part

But being the wrapper also includes figuring out:
- getting distribution without paying infinite CAC
- creating an amazing UX that's AI native, not derivative
- building a brand, trust in a very noisy landscape
- creating an ecosystem/community
- generating network effects
- making customer service great
- ... plus pricing, hiring, raising money, and everything else

these are not easy!
Renjit Philip 🔭💡@RenjitPhilip·
These are the positive stories of UAE that are often overlooked! #uaestrong
Mudassir Sheikha@MudassirSheikha

The real story behind @Careem Quik. Four years ago, I resisted the idea of opening dark stores and warehouses. The logic was simple: we are a tech company, what business do we have operating physical infrastructure? But the team convinced me that to provide a dependable customer experience, we had to go deep. We had to control the underlying infrastructure to ensure (i) item availability (especially post-ordering), (ii) quality of fresh produce, and (iii) consistently fast deliveries. Fast forward to today. @Careem Quik is the fastest-growing quick grocery service in the UAE. We deliver to most of Dubai and Abu Dhabi in 15 minutes, backed by a money-back satisfaction guarantee. The last five weeks, however, have been the real test. The team has faced more than their fair share of disruptions, yet they have come through every single time to ensure the community is served without interruption. I visited one of our warehouses last week specifically to thank them. Similar to captains, they are the other frontline that is keeping our cities normal. 🇦🇪 #CareemQuik #UAE #Resilience

Renjit Philip 🔭💡@RenjitPhilip·
I get surprised when some people tell me that Claude doesn't "work" for them and ChatGPT is producing "fluff". In 2023? Maybe true, but in 2026, you have to learn to wrangle the best out of these advanced models by using a combination of system prompts and context engineering. There is no excuse! I am sharing my system prompt that I have been using on Claude (and I use a version of this in ChatGPT). Works well for me to extract the best out of these intelligent models. How to use this? 1. Claude: Go to settings>general>personal preferences in Claude to put this in. 2. ChatGPT: Go to "Personal Preferences" (click on your user name at the bottom left and a menu will open up) Use it as a starting point and evolve it for your own specific needs - that is where the magic happens. #aiwrangling #learnai
Renjit Philip 🔭💡@RenjitPhilip·
Theoretically, the bottleneck moves from 'writing the code' to 'judging the prompt.' Most companies will struggle with this because they don't have a harness to measure whether the agent actually did it right. The real 'Technical Arbitrage' win is in the verification layer: whoever can confirm the agent's work at speed wins the software race. Read why here: x.com/karpathy/statu…
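A "harness to measure if the agent did it right" can be as simple as replaying held-out ground-truth cases against whatever the agent produced. A minimal sketch; `verify` and the case format are hypothetical, not any real framework's API:

```python
# Minimal sketch of a verification layer for agent-produced code: replay
# ground-truth cases and score the pass rate. `verify` and the case
# format are hypothetical names for illustration.
def verify(candidate, cases):
    """Fraction of (args, expected) cases the candidate function passes."""
    passed = 0
    for args, expected in cases:
        try:
            if candidate(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crash is a failure, not a harness error
    return passed / len(cases)
```

The point of the post is exactly this layer: whoever can run such checks quickly and broadly can accept or reject agent work at machine speed.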
Renjit Philip 🔭💡@RenjitPhilip·
Branchless banking was just the first step. The real change is how AI handles the relationship, not just the transaction. Most banks simply move their old tech to the cloud and call it 'innovation.' But the real winner is the AI that can judge a customer's needs without a human. This shift changes how we value regional banks forever. Read why here: x.com/JimMarous/stat…
Renjit Philip 🔭💡@RenjitPhilip·
Watching robots learn physical skills like tennis from just video is a massive moment for business. Most people focus on the sports part, but the real win is in labor costs. If a robot can learn a complex physical job in hours just by 'watching' a human, it effectively removes the training time that kills profit. In M&A, we're going to start valuing robotics firms based on 'learning speed' rather than just 'hardware quality.' Read why here: x.com/rowancheung/st…
Renjit Philip 🔭💡@RenjitPhilip·
Agentic finance is the final stage of how money moves. @LexSokolin is right: we went from 'removing the middleman' to 'removing the human wait time.' This changes how we value financial companies. Firms relying on slow human processes are basically betting against the speed of AI. Read why here: x.com/LexSokolin/sta…
Renjit Philip 🔭💡@RenjitPhilip·
Spot on. The 'Technical Debt' moat is evaporating. For M&A, this is a massive valuation shift. We used to discount legacy brokers for their stuck distribution stacks. Now you buy them, swap the stack for an AI harness in 30 days, and capture the margin after that. The moat isn't the software anymore; it's the speed of the technical arbitrage.
Luke Sophinos@lukesophinos·
"I assume my customers can leave me instantly." This quote from @chrishlad (Founder/CEO @hanoverpark) has stuck with me the last few weeks. In a world where Claude Code can make moving ERPs (even those without APIs) much easier, switching costs are no longer a strong moat. You have to operate as if your customer could leave you at the click of a button. The founders who operate with that mentality will build companies far ahead of their peers... Easier to say. A lot harder to do. But damn is it a good way to run your business.
Renjit Philip 🔭💡@RenjitPhilip·
AI is not 'replacing' insurance brokers. It's creating a massive M&A arbitrage opportunity that most people are missing. Traditional insurance distribution is the definition of fragmented: 10,000+ small brokers each running the exact same legacy processes.

The 'Teardown' on why this is the next trillion-dollar opportunity:

1. The 'Root' Lesson in Behavioral Underwriting
I've spent years looking at models like Root Insurance. They have 10B+ miles of data, giving them 10x the predictive power of legacy benchmarks. But their expenses are 90% higher than their revenue. Why? Because their distribution cost is the bottleneck. The AI works, the business model doesn't.

2. The Reverse Merger Strategy
Most M&A in Fintech is failing because they buy for 'Growth.' They should be buying for 'Technical Arbitrage.' The winning move? A legacy broker (profitable distribution) merging with an AI-first 'Service-as-Software' stack. You buy the tech debt, fix it with AI, and expand EBITDA overnight.

3. The 90/10 Rule of Insurance Intelligence
A middle-market broker's work is 90% 'Intelligence' (form filling, shopping carriers) and 10% 'Judgment.' AI is currently turning that $1 of service cost into $0.50. This isn't just a software sale; it's a fundamental shift in how capital is deployed.

The Strategic Verdict: For every $1 spent on insurance software, multiples are spent on 'intelligence services.' AI is turning that cost into $0.50 on the dollar. If you are a founder or LP, you need to be looking at 'Capital Light' models that write more premium for every dollar of capital. This is where the M&A arbitrage is hiding in plain sight.

#AI #Insurtech #MandA #FutureOfWork
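The 90/10 arithmetic in point 3 can be written out explicitly. The 0.45 AI cost ratio below is an assumed figure chosen only to illustrate the roughly-halved service cost the post describes:

```python
# Hedged sketch of the 90/10 cost split. The 0.45 AI cost ratio is an
# illustrative assumption, not a number from the post.
def service_cost_per_dollar(intelligence_share=0.9, judgment_share=0.1,
                            ai_cost_ratio=1.0):
    """Cost to deliver $1 of service when AI performs the 'intelligence'
    work at a fraction of the human cost; judgment stays human-priced."""
    return judgment_share + intelligence_share * ai_cost_ratio

human_only = service_cost_per_dollar()                  # $1.00, all human
with_ai = service_cost_per_dollar(ai_cost_ratio=0.45)  # ~$0.50 per dollar
```

Because 90% of the work sits in the automatable 'intelligence' bucket, even a partial AI cost advantage moves the whole cost base; the 'judgment' 10% is the floor.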
Renjit Philip 🔭💡@RenjitPhilip·
Yes there's admin work in insurance claims... e.g. the aging workforce in claims adjusting is real and not just a recruitment problem. It's the perfect training set for autonomous systems. The firms winning right now are those buying legacy books of business just to let the AI 'apprentice' on hard claims before automating the low-side. Data-driven M&A beats raw capacity every time.
Luke Sophinos@lukesophinos·
#2. High volume of lower-level computer-based admin work
>Healthcare billing and coding
>Desk-based insurance claims
>Back office admin support roles
>Call centers
Luke Sophinos@lukesophinos·
AI Copilots are a relatively new phenomenon... BUT vertical-specific AND job-specific AI Copilots are really just starting to be introduced. I went deep on 10+ vertical/job-specific AI Copilots for you all:
Wharton Fintech@whartonfintech·
🚀 Announcing the agenda for the 2026 Wharton FinTech Conference! This is an official event of #NYFTW26 🎟 Tickets are limited and selling quickly. Secure your spot here: luma.com/2026-wharton-f…