Social Capital Inc
@socapinc
313 posts
We're behind most of the cool launches you see on X & LinkedIn.
Joined September 2025
16 Following · 1.8K Followers

Hermes Workspace just hit v1.0 🚀
(Yes I shipped this from bed)
Your AI agent's command center
Chat, files, terminal, memory, skills, and knowledge browser in one interface.
Works with any OpenAI-compatible backend.
Ollama, LM Studio, vLLM, Claude, GPT
What's new in 1.0:
👥 Multi-profile management
🧠 Knowledge browser
📈 MCP server settings
⭐️ Skills marketplace (2,000+)
⚡️ Smart model routing
✅ 8 themes, mobile PWA
🚀 Security hardened (auth on all routes)
900+ stars and growing every day!
github.com/outsourc-e/her…



Since I posted my Personal OS / filesystem article, LLM personal knowledge bases have turned into a real topic in the AI world. I’ve been building this system in Cursor for almost two years, but I wasn’t expecting to end up talking with people like a YouTube co-founder, a NASCAR driver, or some of the AI leaders I respect most because of that post. For me it was the first signal that this wasn’t niche anymore.
The biggest pushback on the article was whether a filesystem is enough, or scalable enough, for something like this. Scaling the vault is easy; scaling curation and placement is not. Karpathy's LLM Wiki was published soon after with the same thesis, and it was independent validation for me: "LLM incrementally builds and maintains a persistent wiki: structured, interlinked markdown between you and the raw sources." Now there are tons of similar projects, different takes on the same idea. That's good: I'm also evolving my own stack from what's out in the open, and honestly, reframing the personal filesystem as a wiki is a smart move.
I'm posting this because I think the harder problem is still the knowledge transfer pipeline. Designing a Personal OS (aka a personal knowledge base) is the easy part. The architecture only starts to pay off when you fill it for years: not just posts you liked, but decision patterns, career and life details, half-formed thoughts, writing, the messy stuff. Getting all of that into the right markdown file, at the right time, in the right shape is still the bottleneck.
I built a Chrome extension (Feed2Context, details in the article) that grabs a post with my notes from my feed, drops it into the filesystem, and my agents synthesize and route it. I also built OpenHome assistant as a voice pipeline from my room into the wiki. Plus a bunch of MCP hooks into my accounts. But orchestrating all these helpers gets exhausting.
A lot of people suggested Obsidian, but I'm mostly on the Readwise CLI to pull from X, LinkedIn, arXiv, books, and news. It works well on mobile, and because it's a CLI, agents can find what they need and push it into the filesystem. Skill registries help a lot: in the videos I've got flows like Readwise CLI + alphaXiv MCP for research papers: save a paper, the agent pulls the full text, analyzes it, and teaches me back. I'm also testing the Zapier CLI and waiting especially for the Triggers API. Between things like Yutori and plain cron, keeping a personal wiki alive is still hard; nobody wants to be the cron job for their own life, so triggers might be part of the answer.
TL;DR: A personal filesystem you control isn’t optional if you don’t want to rent your memory from one AI company. The open problem is keeping it fed and current. What I actually want is one solution that can watch my screen, hear my voice, read my accounts, and write into my Personal OS without me acting as the integration layer forever.
Muratcan Koylan@koylanai

There is a catch nobody is talking about.
Gemma 4 uses shared KV cache layers: the last layers reuse K/V tensors from earlier layers instead of computing their own. That is why it fits on a laptop.
But that same architecture breaks cache reuse in llama.cpp. Every request re-evaluates the full prompt from scratch. With a 30-40K token system prompt (e.g., Claude + MCPs), that is 60-90 seconds of waiting before the first token.
Fine for single-turn Q&A. Unusable for agent loops where every tool call triggers a new inference.
A few days ago I opened a bug: github.com/ggml-org/llama…
Until this is fixed, the free model has a hidden cost: your time.
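The 60-90 second figure checks out with back-of-envelope numbers. A tiny sketch, assuming laptop-class prefill throughput of roughly 450-500 tokens/s (an assumption, not a measurement; real speed varies by hardware and quantization):

```python
# With prefix caching broken, llama.cpp re-evaluates the entire prompt on
# every request, so the wait before the first token is roughly prompt
# length divided by prefill throughput.
def first_token_wait(prompt_tokens: int, prefill_tok_per_s: float) -> float:
    """Seconds spent re-evaluating the prompt before any output appears."""
    return prompt_tokens / prefill_tok_per_s

for tokens in (30_000, 40_000):
    for speed in (450.0, 500.0):
        print(f"{tokens} tok @ {speed:.0f} tok/s -> "
              f"{first_token_wait(tokens, speed):.0f}s to first token")
```

At 30K tokens and 500 tok/s that is a minute; at 40K and 450 tok/s it is near ninety seconds, per tool call, which is why agent loops are unusable.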
Min Choi@minchoi
Google's Gemma 4 is pretty wild. You can now run it locally with OpenClaw in 3 steps.
1. Install Ollama
2. Pull the Gemma 4 model
3. Launch OpenClaw with Gemma as the backend
Private local AI agents in minutes.
Hardware guide:
> E2B → any modern phone
> E4B → most laptops
> 26B A4B → Mac Studio, 48GB+ RAM
> 31B → Mac Studio, 64GB+ RAM

Satoshi is so obviously dead, unfortunately.
Ashlee Vance@ashleevance
Pretty sure we can keep this up for 15 more years. Who will be next? Satoshi is either dead or the only person in human history who can keep a secret

Google, deploy, and manage your computer with Claude & MCP
once I had four hours before a deadline: Figma, Vercel, Sentry, Notion, twenty tabs, panic. But I connected the MCPs to Claude, wrote a couple of prompts, shipped in 30 minutes, and went to drink coffee.
pretty interesting setup, but most people don't even know what this is or how to use it:
> MCP: a protocol that lets Claude talk to your tools (GitHub, Stripe, Figma, Notion, databases, anything)
> 35 servers tested over a year
> you don't need all 35; pick 3-5 for your stack
honestly surprised no one put this together before
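For anyone who has never seen what "connecting an MCP" actually looks like: servers are declared in Claude Desktop's claude_desktop_config.json (Claude Code has an equivalent config). A minimal sketch with two illustrative servers; swap in the ones for your own stack, and note the token placeholder is yours to fill:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/projects"]
    }
  }
}
```

Restart Claude Desktop after editing and the declared tools show up in the conversation.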

darkzodchi@zodchiii

Claude scanned 400,000,000 Polymarket trades. Found the same pattern in every 100x wallet.
> everyone is trying to be right. these wallets were trying to be convex
the strategy:
> enter 50-100 low-probability outcomes at $0.01 each
> $1 total exposure per market
> 95% expire worthless
> one tail event pays 80-100x and covers everything
why it works:
> standard Kelly says size down when uncertain
> this flips it: uncertainty IS the product
> you're not buying outcomes - you're buying optionality in bulk
> the market misprices tail events. systematically. every time
two wallets doing exactly this right now on UP/DOWN markets:
> polymarket.com/0x50b977391c4b…
> polymarket.com/0x8162fa34ba1a…
copy-trade these bots → kreo.app/@trade
> the strategy is dead simple
> the only missing piece is a bot that scans markets and auto-enters these positions at scale
someone will build it. might as well be you 👇
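The arithmetic behind "one tail event covers everything" is easy to sanity-check. A small sketch, assuming binary shares bought at $0.01 that redeem at $1.00 when the outcome hits; the probabilities are illustrative, not taken from the scanned wallets:

```python
def basket_ev(n_positions: int, price: float, hit_prob: float) -> float:
    """Expected profit of a basket of independent tail positions.

    Each position is one binary share bought at `price` that redeems
    at $1.00 with probability `hit_prob`, else expires worthless.
    """
    cost = n_positions * price
    expected_payout = n_positions * hit_prob * 1.00
    return expected_payout - cost

# Thread's numbers: 100 positions at $0.01 = $1 total exposure per market;
# a single hit redeems at $1.00 (100x the entry) and covers the basket.
# The strategy only has positive EV if the market under-prices the tail,
# i.e. hit_prob > price; at fair pricing it is exactly breakeven.
print(basket_ev(100, 0.01, 0.02))  # you believe 2%, market prices 1%
```

Which is the catch the thread glosses over: "the market misprices tail events, every time" is the entire load-bearing assumption, and the only part a bot cannot automate.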



Paone@paonx_eth

A mobile game made this guy so much money that he shut it down 💀
$50,000 a day
Crime Net@TRIGGERHAPPYV1
Flappy Bird creator explains why, in 2014, he took down the world's most downloaded app, which was earning him $50,000 a day

Anthropic makes more money than OpenAI.
$323.5 million/day. And most people using Claude still do this:
1. Re-explain who they are. Every. Single. Conversation.
2. Send 30 follow-ups that burn 31x more credits each time.
3. Type their prompts when speaking gets 4x better results.
The people switching to Claude are not getting better results because Claude is smarter.
They're getting better results because they set it up once. Properly.
3 files. 1 folder. 20 minutes.
Claude never asks who you are again.
I wrote a (completely) free guide to set up Claude the way the fastest teams are using it right now, for people who switched but never set it up.
Here: x.com/rubenhassid/st…
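The tweet doesn't say which three files, and that link is the only source for them. But the general "set it up once" pattern is project-level memory: Claude Code, for instance, automatically reads a CLAUDE.md file at session start. A hypothetical example of such a file; the contents are invented for illustration, not taken from the guide:

```markdown
<!-- CLAUDE.md — hypothetical project memory file, read automatically by
     Claude Code at session start; contents are illustrative only -->
# Who I am
Growth marketer at a 10-person SaaS startup; writing for LinkedIn and X.

# How to respond
- Short paragraphs, no jargon, always end with one concrete next step.
- When drafting posts, produce 3 variants and explain the hook of each.

# Standing context
- Product: B2B analytics tool; ICP is heads of marketing.
- Never invent metrics; ask me for real numbers instead.
```

With context like this persisted once, the "re-explain who they are every conversation" failure mode in point 1 disappears.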

Ruben Hassid@rubenhassid