May W

159 posts

@Mayitbe524

Social nerd who loves reading & learning | Travel addict who's a border collie mom | VC who enjoys building & coding (opinions are my own)

Menlo Park, CA · Joined January 2020
80 Following · 82 Followers
May W @Mayitbe524
Analytics-ready data = aggregated, stable, explainable for human dashboards (“What happened?”) vs. AI-ready data = context-rich, complete, semantically dense for machines (“What should happen next?”)
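A toy sketch of the contrast described above (record shapes and field names are hypothetical, purely for illustration):

```python
# Analytics-ready: pre-aggregated and stable, built to answer "What happened?"
analytics_row = {
    "week": "2026-W03",
    "region": "NA",
    "total_orders": 1842,
    "revenue_usd": 97_510.00,
}

# AI-ready: context-rich and semantically dense, built so a model can
# reason about "What should happen next?" for a single entity.
ai_record = {
    "customer_id": "c-1029",
    "recent_orders": [
        {"sku": "A-11", "qty": 2, "returned": False},
        {"sku": "B-07", "qty": 1, "returned": True, "return_reason": "too small"},
    ],
    "support_notes": "Asked about sizing twice; prefers email follow-ups.",
    "lifetime_value_usd": 412.50,
}

# The aggregate answers a dashboard query; the dense record gives a model
# enough context to propose a next action (e.g. a sizing-guide email).
```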
May W @Mayitbe524
Fascinating read on AI models becoming the primary consumers of data, not humans. This flips the entire definition of “good” data and failure. It’s not about elegant tables anymore. It’s about making reality legible to a machine in its own language. medium.com/@community_md101/ai-ready-data-vs-analytics-ready-data-f67ef0804341
May W @Mayitbe524
Vercel just exposed customer env variables via a compromised third-party tool (Context AI). The implication is bigger: as agents integrate across more tools, the attack surface expands fast, and supply chain risk is becoming so real… ox.security/blog/vercel-co…
May W @Mayitbe524
Bottom line: the LiteLLM attack is the canary. We gave AI agents keys to everything, then forgot to lock the door. If you run agents or agent frameworks, audit your deps today. @LiteLLM #AIAgents #SupplyChainSecurity #AISecurity
May W @Mayitbe524
@LiteLLM’s supply-chain attack just exposed the biggest weakness in today’s agent stack - centralized credential risk (LiteLLM holds all the keys in one place). Now add agent autonomy: agents auto-update dependencies and read .env files / full context… and it gets scary.
Andrej Karpathy @karpathy

Software horror: litellm PyPI supply chain attack. Simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords. LiteLLM itself has 97 million downloads per month which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery - Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker didn't vibe code this attack it could have been undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that do get stolen in each attack can then be used to take over more accounts and compromise more packages. Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've been growing so averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
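The contagion mechanic described in the quote can be sketched with a toy dependency graph (package names and edges below are illustrative, not real PyPI metadata):

```python
def transitive_dependents(graph: dict[str, set[str]], poisoned: str) -> set[str]:
    """Return every package that directly or transitively depends on `poisoned`."""
    # Invert the edges: for each package, who depends on it.
    dependents: dict[str, set[str]] = {}
    for pkg, deps in graph.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(pkg)
    # Walk outward from the poisoned package; everything reached is exposed.
    exposed: set[str] = set()
    frontier = [poisoned]
    while frontier:
        current = frontier.pop()
        for parent in dependents.get(current, ()):
            if parent not in exposed:
                exposed.add(parent)
                frontier.append(parent)
    return exposed

# pkg -> direct dependencies (hypothetical graph mirroring the litellm example)
graph = {
    "dspy": {"litellm"},
    "my-agent": {"dspy", "requests"},
    "litellm": set(),
    "requests": set(),
}

print(sorted(transitive_dependents(graph, "litellm")))  # ['dspy', 'my-agent']
```

One poisoned leaf compromises every package above it in the tree, which is why auditing only your direct dependencies is not enough.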

May W @Mayitbe524
My current mental model based on the runs:
Claude → reasoning-oriented orchestration
Codex → deterministic engineering pipelines
Cursor → developer workflow automation
May W @Mayitbe524
So I ran a series of controlled multi-agent experiments using Claude Code, Codex, and Cursor, and tried to figure out what each system’s observable behavior reveals about its underlying philosophy (blog link below). High-level takes: medium.com/@Mayitbe524/i-ran-the-same-multi-agent-prompts-on-claude-code-codex-and-cursor-heres-what-actually-happened-8ad9584b8ccc
May W @Mayitbe524
Fun @ClickHouseDB event hearing Alexey Milovidov (original creator of ClickHouse) share coding-agent best practices. Key takes: save rules in CLAUDE.md / AGENTS.md and convert recurring tasks into reusable “skills”. Also got my fangirl moment :)
May W @Mayitbe524
Last year I wrote about memory as the state layer for agents (link below). Recently an interesting trend is emerging further:
From conversation memory (agents remember chat history) → tool memory (agents remember which tools worked best) → skill memory (agents distill learnings into reusable skills and share them across agents)
Fascinating to watch how quickly agent infrastructure is evolving. medium.com/towards-artifi…
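A minimal sketch of those three memory layers as one data structure (class and method names are hypothetical, not from any framework):

```python
class AgentMemory:
    """Toy illustration of conversation, tool, and skill memory."""

    def __init__(self):
        self.conversation = []  # chat history: list of (role, text) turns
        self.tool_stats = {}    # tool name -> (successes, attempts)
        self.skills = {}        # skill name -> list of steps (shareable recipe)

    def remember_turn(self, role: str, text: str) -> None:
        self.conversation.append((role, text))

    def record_tool_result(self, tool: str, ok: bool) -> None:
        wins, tries = self.tool_stats.get(tool, (0, 0))
        self.tool_stats[tool] = (wins + int(ok), tries + 1)

    def best_tool(self) -> str:
        # Tool memory in action: pick the highest observed success rate.
        return max(self.tool_stats,
                   key=lambda t: self.tool_stats[t][0] / self.tool_stats[t][1])

    def distill_skill(self, name: str, steps: list[str]) -> None:
        # Skill memory: a named procedure another agent could reuse.
        self.skills[name] = steps


m = AgentMemory()
m.record_tool_result("web_search", ok=True)
m.record_tool_result("web_search", ok=False)
m.record_tool_result("sql_query", ok=True)
m.distill_skill("weekly_report", ["sql_query", "summarize", "email"])
print(m.best_tool())  # sql_query (1/1 beats web_search's 1/2)
```

The progression in the tweet is the move from the first field (raw turns) to the last (distilled, transferable procedures).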
May W @Mayitbe524
@jai__toor 2. Meanwhile, technical teams are just hacking it with coding agents stitching custom pipelines. @sequoia just said the next $1T co will be a software firm masquerading as a services one - selling work (vs. tools). We’ll see :) but there could be real truth in it
May W @Mayitbe524
@jai__toor Spot on - tribal knowledge is the moat right now. Dinner takeaways: 1. Could be a massive opportunity if any startup cracks true end-to-end GTM (ICP discovery, sales ops, etc.). Super hard tho. And might just end up an “agent wrapper” anyway
May W @Mayitbe524
From last night’s founder dinner: GTM tooling is still wildly fragmented - even darling Clay only owns a slice. On the flip side: coding agents like Cursor/Claude are slashing the cost and effort of building custom workflows for ICP analysis and customer discovery
May W @Mayitbe524
Are we shifting to an era of personalized, in-house agent-driven solutions over standardized point tools? #AI #GTM #Startups #Agents
May W @Mayitbe524
Also Kimi K2.5 is priced at ~$0.60 input / $3 output per 1M tokens, about 80% cheaper than Opus 4.5 or GPT-5.2, thanks to aggressive context optimization and low token costs. Can’t wait to test #KimiCode and the swarm!
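Back-of-envelope at those quoted rates (only the $0.60/$3 per-1M prices come from the post; the workload numbers below are made up):

```python
# Quoted Kimi K2.5 prices, USD per 1M tokens.
PRICE_IN, PRICE_OUT = 0.60, 3.00

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Cost of a run at the quoted per-million-token rates."""
    return input_tokens / 1e6 * PRICE_IN + output_tokens / 1e6 * PRICE_OUT

# Hypothetical agent workload: 2M input tokens, 500k output tokens.
print(round(cost_usd(2_000_000, 500_000), 2))  # 2.7
```

So a fairly heavy multi-agent run stays in the single-digit-dollar range at these prices, which is what makes spinning up many parallel sub-agents economically plausible.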
May W @Mayitbe524
Cool demo by @moonshot AI (creators of Kimi K2.5) yesterday! Interestingly they showcased Agent Swarm - spinning up to 100 parallel sub-agents for complex tasks - and claimed it shipped first, in Jan 2026, ahead of similar multi-agent features like Claude’s agent teams
May W @Mayitbe524
Just had my first @Tesla #FSD “drive”. This tech is getting scary good. Tesla handled freeway on-ramps, tight merges, lane changes, and even backed perfectly into a crowded garage spot. Feels like riding the future…
May W @Mayitbe524
2. In 2026, the real gap between an engineer and a “vibe coder” will be security. The differentiator is the ability to architect security into the product from day one - not bolt it on later.
May W @Mayitbe524
Takeaways from Anthropic’s 2026 Agentic Coding Report: 1. Engineers delegate to AI only the tasks where they already know what the output should look like. The human role isn’t just giving guidance - it’s reviewing and validating. Taste and accountability remain key. resources.anthropic.com/hubfs/2026%20A…