Social Capital Inc

267 posts


@socapinc

We're behind most of the cool launches you see on X & LinkedIn.

Joined September 2025
16 Following · 1.6K Followers
Oliur@UltraLinx·
TBPN is a great example of how you don't need millions of views to be a success. A targeted, hyper-focused audience can be even more powerful.
8 replies · 2 reposts · 116 likes · 7.8K views
Social Capital Inc@socapinc·
sit-down launch videos are boring now
Animesh Koratana@akoratana

Introducing: PlayerZero, the world's first Engineering World Model that puts debugging, fixing, and testing your code on autopilot.

We've raised $20M from Foundation Capital, @matei_zaharia (Databricks), @pbailis (Workday), @rauchg (Vercel), @zoink (Figma), @drewhouston (Dropbox), and more.

PlayerZero frees up 30% of your engineering bandwidth by:
1. Finding the root cause of bugs and incidents in minutes, where engineering teams take days.
2. Predicting in minutes the edge-case issues a 300-person QA team would take weeks to find.

Here's why this matters: no one in your org has a complete picture of how your production software actually behaves. Support sees tickets. SRE sees infra. Dev sees code. Each team builds its own fragmented view, and none of these systems talk to each other. When something breaks, everyone scrambles to stitch the picture together by hand.

PlayerZero connects all of it into a single context graph:
→ The Slack thread where your lead said "we went with X because Y fell apart in prod last time"
→ The PR review where an engineer explained the tradeoff
→ The lifetime history of your CI/CD pipeline, observability stack, incidents, and support tickets

So you can trace any problem to its root cause across every silo. And it compounds: every incident diagnosed teaches the model something new. The longer it runs, the deeper it understands which code paths are high-risk, which configurations are fragile, and which changes tend to break which customer flows. So when you sit down to debug a live issue, you have your entire org's collective reasoning and production memory behind you, instantly.

Zuora, Georgia-Pacific, and Nylas have reduced resolution time by 90%, caught 95% of breaking changes, and freed an average of $30M in engineering bandwidth.

Our guarantee: if we can't increase your engineering bandwidth by at least 20% within one week, we'll donate $10,000 to an open-source project of your choice.
Book a demo - bit.ly/3NlLMeN

0 replies · 0 reposts · 2 likes · 6.1K views
bao to ᝰ.ᐟ@baothiento·
examine my digital ant colony 🐜
27 replies · 28 reposts · 723 likes · 17.4K views
Lior Alexander@LiorOnAI·
Every foundation model you've ever used has the same bug. It just got fixed.

Since 2015, every deep network has been built the same way: each layer does some computation, adds its result to a running total, and passes it forward. Simple.

But there's a problem: by layer 100, the signal from any single layer is buried under the sum of everything else. Each new layer matters less and less. Nobody fixed this because it worked well enough.

Moonshot AI just changed that. Their new method, Attention Residuals, lets each layer look back at all previous layers and choose which ones actually matter right now. Instead of a blind running total, you get selective retrieval.

The analogy: imagine writing an essay where every draft gets merged into one document automatically. By draft 50, your latest edits are invisible. AttnRes lets you keep every draft separate and pull from whichever ones you need.

What this fixes:
1. Deeper layers no longer get drowned out
2. Training becomes more stable across the whole network
3. The model uses its own depth more efficiently

To make it practical at scale, they group layers into blocks and attend over block summaries instead of every single layer. Overhead at inference: less than 2%. The result: 25% less compute to reach the same performance. Tested on a 48B-parameter model. Holds across sizes.

Residual connections have been invisible plumbing for a decade. Now they're becoming dynamic. The next generation of models won't just pass through their own layers, they'll search them.
Kimi.ai@Kimi_Moonshot

Introducing 𝑨𝒕𝒕𝒆𝒏𝒕𝒊𝒐𝒏 𝑹𝒆𝒔𝒊𝒅𝒖𝒂𝒍𝒔: Rethinking depth-wise aggregation.

Residual connections have long relied on fixed, uniform accumulation. Inspired by the duality of time and depth, we introduce Attention Residuals, replacing standard depth-wise recurrence with learned, input-dependent attention over preceding layers.

🔹 Enables networks to selectively retrieve past representations, naturally mitigating dilution and hidden-state growth.
🔹 Introduces Block AttnRes, partitioning layers into compressed blocks to make cross-layer attention practical at scale.
🔹 Serves as an efficient drop-in replacement, demonstrating a 1.25x compute advantage with negligible (<2%) inference latency overhead.
🔹 Validated on the Kimi Linear architecture (48B total, 3B activated parameters), delivering consistent downstream performance gains.

🔗 Full report: github.com/MoonshotAI/Att…

9 replies · 17 reposts · 90 likes · 19.8K views
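The running-total vs. selective-retrieval contrast in the thread above can be sketched in a few lines of numpy. This is a toy illustration of the idea only, not Moonshot's implementation: the arrays stand in for layer activations, and all names here are mine.

```python
import numpy as np

rng = np.random.default_rng(0)
d, L = 8, 5                   # toy hidden size and number of preceding layers

past = [rng.normal(size=d) for _ in range(L)]   # stand-ins for layer outputs
query = rng.normal(size=d)                      # current layer's state

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Classic residual stream: a blind running total. Every layer's output is
# summed uniformly, so any single layer's signal is diluted as depth grows.
running_total = np.sum(past, axis=0)

# Attention-style residual: score each past layer against the current state
# and take a weighted mix, so the network retrieves the layers that matter
# right now instead of averaging everything.
keys = np.stack(past)                            # (L, d)
weights = softmax(keys @ query / np.sqrt(d))     # (L,), sums to 1
selective = weights @ keys                       # (d,) selective retrieval
```

The block-level variant the tweet mentions would attend over summaries of layer groups rather than every layer, which is how the <2% inference overhead is kept plausible at scale.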
antirez@antirez·
Take note about the right way to react.
Kimi.ai@Kimi_Moonshot

Congrats to the @cursor_ai team on the launch of Composer 2! We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining and high-compute RL training is the open-model ecosystem we love to support.

Note: Cursor accesses Kimi-k2.5 via @FireworksAI_HQ's hosted RL and inference platform as part of an authorized commercial partnership.

6 replies · 1 repost · 146 likes · 17.1K views
Josh tried coding@joshtriedcoding·
writing an article about AI SDK v6 running AI agents in isolated containers with their own filesystem and shell access 👀
13 replies · 5 reposts · 157 likes · 8.8K views
Muratcan Koylan@koylanai·
Calling today's LLMs AGI, while the entire field is still built on careful context assembly and repeated behavioural correction, is just a drastic lowering of the bar.

Current models are very good at interpolating across seen structures. But I still have not seen strong evidence of sustained, cross-domain, top-tier originality without substantial human framing, constraint, and correction.

I'm a Context Engineer. My entire job exists because LLMs can't figure out what information they need. I spend 16 hours a day managing token budgets, debugging context degradation, and adjusting prompts word by word so models behave as expected. If AGI were here, my role wouldn't exist.
7 replies · 4 reposts · 76 likes · 12.7K views
Shweta@shweta_ai·
You know you live in San Francisco when the hottest party on a Friday night is a book launch for Inference Engineering 🤖🎷
22 replies · 6 reposts · 291 likes · 42.4K views
Can Vardar@icanvardar·
CLIs are better than MCPs
23 replies · 0 reposts · 54 likes · 3.3K views
BuBBliK@k1rallik·
> most people still prompt like it's 2024
> vague asks, no structure, no context
> then wonder why Claude gives mid output
> found one article
> 30 prompting techniques
> XML tags, context files
> self-critique loops
> suddenly the model stopped feeling random
> now it writes cleaner
> thinks longer, ships faster
> and makes everyone else look slow

bookmark before you waste another week on bad prompts
darkzodchi@zodchiii

x.com/i/article/2036…

10 replies · 1 repost · 24 likes · 2.9K views
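The techniques that tweet name-drops (XML tags to separate instructions from data, a self-critique pass) amount to plain prompt assembly. A minimal sketch of the pattern; the tag names, helper functions, and critique wording are my own illustration, not from the linked article:

```python
# Hypothetical helpers: wrap task, context, and rules in XML-style tags so
# the model can tell instructions apart from pasted data. Tag names are
# illustrative, not a fixed standard.
def build_prompt(task: str, context: str, rules: list[str]) -> str:
    rule_lines = "\n".join(f"- {r}" for r in rules)
    return (
        f"<task>\n{task}\n</task>\n"
        f"<context>\n{context}\n</context>\n"
        f"<rules>\n{rule_lines}\n</rules>"
    )

# A one-round self-critique loop: feed the model's draft back with the same
# rules and ask it to review before producing the final answer.
def critique_prompt(draft: str, rules: list[str]) -> str:
    rule_lines = "\n".join(f"- {r}" for r in rules)
    return (
        f"<draft>\n{draft}\n</draft>\n"
        f"<rules>\n{rule_lines}\n</rules>\n"
        "Critique the draft against the rules, then output a revised version."
    )

prompt = build_prompt(
    task="Summarize the incident report in 3 bullets.",
    context="(paste report here)",
    rules=["cite line numbers", "no speculation"],
)
```

The structure matters more than the specific tags: the model sees an unambiguous boundary between what it is told to do and the material it is told to do it on.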
F.O.L.A@folaoftech·
Apple stepping into fashion... What do you think about these trousers😂😂😂😂
1 reply · 0 reposts · 3 likes · 433 views
Aurelien@Aurelien_Gz·
one-shotted this cinematic Peaky Blinders site with Claude Opus.. prompt ↓
2 replies · 2 reposts · 9 likes · 2.8K views
Lisan al Gaib@scaling01·
Sonnet 5 is real
It's called Sonnet 4.6
11 replies · 12 reposts · 746 likes · 83.7K views