Dean Sacoransky

126 posts

@deansacoransky

Head of FDE @tavilyai

NYC · Joined June 2025

223 Following · 84 Followers
Dean Sacoransky @deansacoransky
There is an enormous amount of human and technological bloat at companies which will make it nearly impossible to execute on this...everything needs to be rethought, slashed, and built from scratch
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 72
Dean Sacoransky @deansacoransky
This understanding can be transformed into an organized filesystem with folders, files (all types), code, skills, scripts, MCPs, scheduled tasks, CLIs, .md files, etc...all version controlled by git
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 146
Dean Sacoransky @deansacoransky
imo, the most important and interesting problem to work on today is organizing a 'company environment' for your agents to operate in. Concretely - this means creating a filesystem/repo that models your entire company...cc/codex/cursor will live and operate in this repo.
Replies: 1 · Reposts: 5 · Likes: 13 · Views: 33K
Atai Barkai @ataiiam
🌟 Introducing @CopilotKit's Enterprise Intelligence Platform

Our self-hosted solution for teams taking agentic applications to production on any platform, with confidence.

Everyone starts from our open-source. That's not changing. Enterprise Intelligence goes on top of it.

The #1 thing we hear from companies deploying fullstack, interactive agents is that they need optionality and self-hosting.

With our platform, you get persistent threads, analytics, and self-learning that captures the full interaction surface between agents and users, which is framework agnostic via AG-UI. This is the DNA of CopilotKit.

We don't just save chat history, we record the interaction:
- what your users did (asked, edited, answered, ran a tool...)
- how your agent responded
and more...

Every agent-user interaction becomes a feedback signal. Your agents improve just by being used.

Sign up here: go.copilotkit.ai/enterprise-int…
Replies: 5 · Reposts: 6 · Likes: 34 · Views: 2.6K
Dean Sacoransky @deansacoransky
The modern “stack” we use at work (Notion, Linear, Slack, etc) is marketed as being ai native, but it’s built for humans, not ai.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 33
Dean Sacoransky @deansacoransky
Code and knowledge must be stored together. Streams of context from slack, email, zoom etc can be published to the github file system in real time for deeper enrichment. LLMs are used to compact/filter the inputs over time, creating a rolling window of context. The toughest input to handle is in-person discussion…perhaps soon we will be wearing recorders.
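The compaction loop this post describes could be sketched roughly like the following. This is a minimal illustration, not anything the post specifies: the `summarize` stub stands in for a real LLM call, and the character budget is arbitrary.

```python
# Hypothetical sketch of a "rolling window of context": new events from
# Slack, email, or Zoom are appended as entries, and the oldest entries
# are repeatedly folded into an LLM-written summary until the whole log
# fits a fixed budget. summarize() is a stub for a real model call.

MAX_CHARS = 2000  # arbitrary budget for the rolling window


def summarize(text: str) -> str:
    """Stub for an LLM call that compacts old context into a summary."""
    # A real implementation would prompt a model; this stub just
    # truncates so the example runs end to end.
    return text[:500]


def compact(entries: list[str], max_chars: int = MAX_CHARS) -> list[str]:
    """Fold the oldest entries together until the log fits the budget."""
    while sum(len(e) for e in entries) > max_chars and len(entries) > 1:
        # Merge the two oldest entries into one compacted summary;
        # recent entries stay verbatim at the end of the list.
        merged = summarize(entries[0] + "\n" + entries[1])
        entries = [merged] + entries[2:]
    return entries
```

Each stream (Slack, email, Zoom transcripts) would append entries to this log before it is committed to git, so the newest context stays verbatim while older history shrinks into summaries.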
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 75
Assaf Elovic @assaf_elovic
it’s crazy to think that github is best positioned to become the memory/context layer for AI
Replies: 3 · Reposts: 1 · Likes: 11 · Views: 895
Dean Sacoransky reposted
Tavily @tavilyai
We’ve been hosting and co-hosting events with incredible partners, community builders, and universities around the world. Does that sound like something you want to get involved with? Check out where we’re heading next! 👇

Coming up…
🗽 NYU Hackathon: New York (April 23)
🗽 Elastic Developer Meetup: New York (April 23)
🐻 HackEurope: Berlin (April 25–26)
🗽 HackNation (Worldwide): New York (April 25–26)
🌉 Builders & Brews: San Francisco (April 27)
🐻 Nebius AI Builder Day: Berlin (April 28)
🌉 Elastic Meetup: San Francisco (April 28)
🌉 OpenAI Hackathon w/ AGI House: San Francisco (May 9)
🇬🇧 Nebius AI Builder Day: London (May 21)
🇬🇧 Builders & Brews: London (May 22)
🐻 Builders & Brews: Berlin (May 27)

See all our events at luma.com/eventstavily
Tavily tweet media
Replies: 0 · Reposts: 4 · Likes: 9 · Views: 558
Dean Sacoransky @deansacoransky
@lakshyaag @RLanceMartin @a1zhang @tavilyai Very insightful work. We have had dynamic tool calling behavior on the input side for a while now (popularized through MCP), but dynamic filtering on the output side is equally powerful and should become table stakes soon for all types of tools (not only web)
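The output-side idea could be sketched like this. Everything here is illustrative (the post names no implementation); the keyword-overlap `score` stands in for what would realistically be an LLM- or embedding-based relevance filter.

```python
# Hypothetical sketch of dynamic filtering on a tool's output: rather
# than injecting the tool's full raw result into the model context,
# rank result chunks against the query and keep only the top matches.
# score() is a naive keyword overlap standing in for an LLM/embedding
# relevance model.

def score(query: str, chunk: str) -> int:
    """Naive relevance: how many query terms appear in the chunk."""
    terms = set(query.lower().split())
    return sum(1 for t in terms if t in chunk.lower())


def filter_output(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Keep the top_k chunks most relevant to the query."""
    ranked = sorted(chunks, key=lambda c: score(query, c), reverse=True)
    return ranked[:top_k]
```

The same wrapper works for any tool, not only web search: database rows, file listings, or API responses can all be filtered before they consume context.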
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 53
Dean Sacoransky reposted
Richmond Alake @richmondalake
The @OracleDatabase team were #newyorkmaxxing this week. And man, I love the energy of New York folks. From the lady working so intensely at 7am and making me realize that I'm not going hard enough in my workouts, to all the cracked developers asking me questions that really made me think extensively before replying. We will be back soon.
Oracle Developers @OracleDevs

Thanks to everyone who joined our team for a recent meet-up in the Big AI-pple! 🍎 🗽 It was great to connect with engineers and builders and hear @richmondalake speak about AI agents.

Replies: 1 · Reposts: 0 · Likes: 3 · Views: 203
Dean Sacoransky reposted
Rand @rand_longevity
500 days from now, nobody will be waking up for school or work
Replies: 2.5K · Reposts: 1.8K · Likes: 28.8K · Views: 8.3M
Dean Sacoransky reposted
lakshya @lakshyaag
farmers wrap top 10 Canadian items
Replies: 0 · Reposts: 1 · Likes: 2 · Views: 80
Dean Sacoransky @deansacoransky
@karpathy Massive potential for implementing this strategy for company knowledge bases
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 19
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
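The ingest-and-compile step described above could be sketched roughly as follows. The `raw/` and `wiki/` directory names come from the post; `llm` is a stub for a real model call, and the rest is illustrative, not the author's actual scripts.

```python
# Minimal sketch of "compiling" a raw/ directory into a wiki of .md
# files. llm() is a stub for a real model call; a fuller version would
# also maintain index files, backlinks, and concept articles.
from pathlib import Path


def llm(prompt: str) -> str:
    """Stub for an LLM call that writes a wiki article from a source."""
    return prompt[:200]  # a real implementation would prompt a model


def compile_wiki(raw_dir: str = "raw", wiki_dir: str = "wiki") -> list[str]:
    """Incrementally compile each new/changed raw document into a .md article."""
    Path(wiki_dir).mkdir(exist_ok=True)
    written = []
    for src in sorted(Path(raw_dir).glob("*")):
        dst = Path(wiki_dir) / (src.stem + ".md")
        # Skip documents already compiled since their last change.
        if dst.exists() and dst.stat().st_mtime >= src.stat().st_mtime:
            continue
        article = llm("Write a wiki article for:\n" + src.read_text())
        dst.write_text(f"# {src.stem}\n\n{article}\n")
        written.append(dst.name)
    return written
```

Run after each ingest: only new or modified sources are recompiled, which keeps the loop cheap as the wiki grows.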
Replies: 2.9K · Reposts: 7.1K · Likes: 58.7K · Views: 21M
Dean Sacoransky reposted
Tavily @tavilyai
NVIDIA’s AI-Q reached #1 on DeepResearch Bench, and Tavily powers the web research layer behind the system. The model was fine-tuned on ~80K research trajectories that included Tavily web search results, enabling it to learn multi-step search and synthesis workflows. This approach is possible through Tavily’s direct access to raw web data. That's why Tavily exists: search infrastructure designed specifically for AI agents doing deep research.
Tavily tweet media
Replies: 1 · Reposts: 14 · Likes: 78 · Views: 25.6K