Ves Stoyanov

157 posts


@vesko_st

Head of AI at @magicaltome. Ex-Language Researcher at @FacebookAI. Large LMs and multilingual NLP. @JHUCLSP and @Cornell alum. https://t.co/WTSCasqDI6

Menlo Park, CA · Joined May 2009
546 Following · 2.1K Followers
Ves Stoyanov @vesko_st
How we fixed it at @Lightfield:
→ Meta-tool returns schemas on demand (like --help)
→ Large results write to files on disk; the agent gets a summary and processes them with code
→ Lazy connections + a two-layer schema cache kill the startup overhead
1 reply · 0 reposts · 0 likes · 60 views
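The pattern in the thread above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Lightfield's actual code: `TOOL_SCHEMAS`, `describe_tool`, and `deliver_result` are hypothetical names, and the 2,000-character spill threshold is invented for the example.

```python
import json
import os
import tempfile
from pathlib import Path

# Hypothetical tool registry. Full JSON schemas stay out of the system
# prompt; the agent fetches one only when it is about to call that tool.
TOOL_SCHEMAS = {
    "search_crm": {"type": "object", "properties": {"query": {"type": "string"}}},
    "export_report": {"type": "object", "properties": {"format": {"type": "string"}}},
}

SPILL_THRESHOLD = 2_000  # chars; larger results go to disk instead of the prompt

def describe_tool(name: str) -> str:
    """Meta-tool: return a single tool's schema on demand, like `--help`."""
    schema = TOOL_SCHEMAS.get(name)
    return json.dumps(schema) if schema is not None else f"unknown tool: {name}"

def deliver_result(raw: str) -> dict:
    """Small results are returned inline; large ones are written to a file
    so the agent can process them with code instead of reading prompt text."""
    if len(raw) <= SPILL_THRESHOLD:
        return {"inline": raw}
    fd, path = tempfile.mkstemp(suffix=".json")
    os.close(fd)
    Path(path).write_text(raw)
    return {"file": path, "summary": f"result of {len(raw):,} chars written to disk"}
```

With this shape, a query returning thousands of records costs the prompt only a one-line summary and a file path, and the agent's code-execution tool does the heavy lifting.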
Ves Stoyanov @vesko_st
MCP is becoming the standard for AI tool use. It also breaks at scale. Tool schemas and results eat tens of thousands of tokens. The agent can't process any of it with code — it's all just text stuck in a prompt.
2 replies · 0 reposts · 2 likes · 177 views
Ves Stoyanov @vesko_st
The core problem: everything is loaded eagerly. Every MCP server dumps full JSON schemas into the prompt. Results inject as raw strings. Your agent has code execution and file I/O, but can't use either on MCP data.
1 reply · 0 reposts · 0 likes · 38 views
Ves Stoyanov @vesko_st
How do we know superhuman intelligence is here? Claude Code untangled my tangled commit history. This task is not humanly possible! Trust me, I've tried and failed many times before :)
0 replies · 0 reposts · 2 likes · 258 views
Ves Stoyanov @vesko_st
Shipping our biggest update to @lightfld since launch. We combined your data, LLMs, and knowledge of how your business works into tools that don't feel like SaaS — they feel like a helpful colleague. More features coming soon.
Lightfield @lightfld

Now live: Agent code generation and execution. Your agent can now write and execute code inside a secure sandbox with direct access to your CRM data. This gives it the ability to rapidly navigate thousands of records, produce reports and visualizations, and deliver structured, reliable analysis.

0 replies · 1 repost · 12 likes · 512 views
Ves Stoyanov @vesko_st
“Claude Code is having a ChatGPT moment” [Hard Fork podcast]. It really resonates. I built two complex features over a weekend with Cursor. Our CPO and designers are prototyping, fixing, and pushing code. We're at an inflection point. AI coding is coming everywhere. Legacy SaaS is done!
3 replies · 0 reposts · 10 likes · 402 views
Ves Stoyanov @vesko_st
Really excited about the newest open model from MBZUAI! It is a great resource for researchers who need access to truly open models.
MBZUAI @mbzuai

Today, we are releasing a new version of K2 (K2-V2), a 360-open LLM built from scratch as a superior base for reasoning adaptation, while still excelling at core LLM capabilities like conversation, knowledge retrieval, and long-context understanding.

K2 fills a major gap: highly capable models with no transparency. Instead of releasing only weights, we’re sharing the full training story — dataset recipes, mid-training checkpoints, logs, code, and evaluation tools. That’s 360-open.

What’s inside:
• 70B dense transformer engineered as a reasoning-enhanced base model
• Native 512K context (extendable via RoPE scaling)
• Mid-training reasoning phase
• Strong tool-use scaffolding

What we’re open-sourcing:
• 250M+ reasoning traces (math, planning, multi-step logic)
• Full pre- & mid-training data compositions
• All mid-training checkpoints
• Training logs, code, Eval360

Performance:
• GPQA-Diamond: 55.1% mid-training → 69.3% after SFT (strongest fully open 70B model)
• KK-8 Logic Puzzles: 83% — competitive with DeepSeek-R1 & OpenAI o3-mini-high
• ArenaHard V2: 62.1% — close to Qwen3 235B
• Outperforms Qwen2.5-72B and approaches Qwen3-235B despite being smaller and fully transparent

🔗 The Model: bit.ly/3KIYwuo
🔗 Technical Report: bit.ly/49V8h2U
🔗 Blog: bit.ly/49V7gb6

0 replies · 0 reposts · 4 likes · 524 views
Ves Stoyanov @vesko_st
We all thought AGI was around the corner; GPT-5 showed otherwise. But ASI is here: coding, diagnosis, the CFA. Narrow, yet very useful. At @lightfld, we’re building Relationship Superintelligence: AI that never forgets a conversation, surfaces what matters, and handles the follow-ups.
0 replies · 2 reposts · 7 likes · 1.3K views
Ves Stoyanov retweeted
Keith Peiris @keithpeiris
Getting your first 10 customers is often the hardest part of building a startup, especially given how competitive every space is lately. Cold emails don’t land, referrals dry up, and every pitch feels like starting from zero. In 2025, AI is changing how founders navigate that 0→1 journey.

That’s why I’m hosting Founder Sales in the AI Era at #SFTechWeek — a candid conversation on how to win your first 10 customers in the AI era, what’s changed, and what hasn’t.

I’ll be joined by an incredible group of founders who’ve lived it:
• Kayvon Beykpour (@kayvz) (Periscope → Twitter → @macroscope_ai)
• Andrei Serban (@andrei_serban) (Fuzzbuzz → Rippling → Console)
• Will Lawrence (@will_lawrenceTO) (Meta → Paxos → @greenliteai)

📅 Oct 7 · 12–1:30pm
🔗 RSVP: partiful.com/e/9F35BgMJXTKv…
@Techweek_ @lightfld
2 replies · 4 reposts · 10 likes · 3.3K views
Ves Stoyanov retweeted
Henri Liriani @hliriani
We're rebooting Tome to be a different company. @magicaltome is now an AI assistant for breaking into new enterprise accounts. Here's a bit on the journey we've been on…
29 replies · 20 reposts · 436 likes · 249.2K views
Ves Stoyanov @vesko_st
@xiamengzhou @yumeng0818 @danqi_chen I love how simple and effective SimPO is. LHF models have now come full circle back to essentially a version of maximum likelihood training. Great job Mengzhou and team, can't wait to use the method in my everyday work! 6/6
0 replies · 0 reposts · 1 like · 204 views
Ves Stoyanov @vesko_st
@xiamengzhou @yumeng0818 @danqi_chen Simple Preference Optimization (SimPO) [Meng, Xia, and Chen, 2024] removes the need for a reference model and simplifies the optimization objective. It’s elegant: increase the per-token average log probability of the preferred response and reduce it for the losing response. 5/6
1 reply · 0 reposts · 1 like · 225 views
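For readers who want the objective concretely: a minimal scalar sketch of the SimPO loss as the tweet describes it — each response's reward is its length-normalized (per-token average) log probability, so no reference model is needed. The β and γ defaults below are illustrative, and `simpo_loss` is my naming for this sketch, not the authors' code.

```python
import math

def simpo_loss(logp_win: float, len_win: int,
               logp_lose: float, len_lose: int,
               beta: float = 2.0, gamma: float = 1.0) -> float:
    """SimPO objective for one preference pair.

    Rewards are beta-scaled per-token average log probabilities of the
    preferred (win) and dispreferred (lose) responses; gamma is the
    target reward margin between them.
    """
    reward_win = beta * logp_win / len_win
    reward_lose = beta * logp_lose / len_lose
    margin = reward_win - reward_lose - gamma
    # -log(sigmoid(margin)): small when the winner beats the loser by > gamma
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Minimizing this pushes up the winner's average per-token log probability and pushes down the loser's — which is why the thread frames it as coming full circle to a margin-regularized form of maximum likelihood training.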
Ves Stoyanov @vesko_st
We have come full circle in Learning From Human Feedback (LHF)! I enjoyed reading the SimPO paper by my brilliant former intern @xiamengzhou (along with @yumeng0818 and brilliant long-time collaborator @danqi_chen). The paper is fascinating. 🧵 1/6
1 reply · 0 reposts · 4 likes · 952 views