Daniel Lenton

67 posts

@DanielLenton1

Founder @letsunifyai | YC W23

Joined December 2019
136 Following · 867 Followers
Pinned Tweet
Daniel Lenton @DanielLenton1
I'm so excited to announce our $8M fundraise 😍, and the launch of our dynamic router 🚀 unify.ai/chat?default=t…

I first just want to thank everyone who has backed our vision to unite the LLM landscape, and special thanks to our amazing team, who all continue to work tirelessly to push things forward every day (@GSBrread, @vedpatwardhan, @YasserAhmed1029, @iamjameskeane, @khalidanwaar01, Nassim Berrada, Tom Edwards, Albert Lukacs) 💪❤️

You can learn more about our fundraise in this TechCrunch article (techcrunch.com/2024/05/22/uni…), and we will also be releasing a blog post shortly.

Our router dynamically sends each prompt to the best LLM so you can balance quality, speed, and cost with ease ✨ Simply tune these three dials as per the needs of your task, and then let Unify do the rest, giving faster and better quality responses at a lower cost 📈 Your "easy" prompts will go to the fastest and cheapest models, and only the "hard" prompts will go to the most appropriate heavy lifter for your task, such as GPT-4o, Opus, or Gemini, depending on your prompts 🔀

Watch our explainer video [youtu.be/15wgxK1Cw0E] to learn more about the solution at a high level, and if you like the sound of it, then give it a try, and let us know what you think! We're also live on Product Hunt [producthunt.com/posts/unify-6], feel free to engage in the thread 🧵

You can sign up [console.unify.ai], take our router for a spin, and check out our product walkthrough [youtu.be/ZpY6SIkBosE]. If you like it, then tell a friend 😊 All kinds of feedback are welcome! Happy prompting ✌️❤️
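The quality/speed/cost "dials" idea can be sketched as a weighted scoring rule over candidate models. This is a hypothetical illustration, not Unify's actual routing algorithm: the model scores, the difficulty heuristic, and the weighting formula are all invented for the example.

```python
# Hypothetical routing sketch: NOT Unify's actual algorithm, just an
# illustration of balancing quality, speed, and cost dials per prompt.
MODELS = {
    # name: (quality, speed, cost) scores on a 0-1 scale (invented numbers;
    # higher cost score = cheaper)
    "gpt-4o":        (0.95, 0.50, 0.20),
    "claude-3-opus": (0.93, 0.40, 0.10),
    "small-model":   (0.60, 0.95, 0.95),
}

def estimate_difficulty(prompt: str) -> float:
    """Crude stand-in for a learned prompt-difficulty predictor."""
    return min(len(prompt) / 500, 1.0)

def route(prompt: str, w_quality=1.0, w_speed=1.0, w_cost=1.0) -> str:
    """Pick the model maximizing a weighted score; harder prompts
    amplify the quality weight, so heavy lifters win only when needed."""
    difficulty = estimate_difficulty(prompt)
    def score(name):
        q, s, c = MODELS[name]
        return w_quality * (1 + 4 * difficulty) * q + w_speed * s + w_cost * c
    return max(MODELS, key=score)
```

With equal dial weights, a short "easy" prompt routes to the fast, cheap model, while a long "hard" prompt routes to the strongest model, matching the behavior described in the post.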
Daniel Lenton @DanielLenton1
Day 2 onboarding Rachel (my new virtual colleague) 👩🏻‍💻. I asked if she could help me grant her access to our Google workspace, and she guided me through via screenshare (I genuinely had no idea how to do this 🫠), and then she cleaned up my inbox with one python script; she did a great job!

On the technical side (inside Rachel's brain 🧠), the voice still feels a bit slow to respond sometimes, but this is mainly a model capability issue which will be solved in the coming months with better STS models. We did try several STS models during testing (such as gpt-realtime) but they all just felt too stupid to hold a natural conversation (curious if others have a different perspective?). For now, we therefore use a standard STT -> LLM -> TTS design (Deepgram -> gpt-5-mini(low) -> ElevenLabs) for the live calls. We'll update this design as soon as STS models feel genuinely smart enough, as the emotional and tonal awareness + improved latency would be a big plus ⚡

You can try it for yourself here: console.unify.ai Thoughts + feedback welcome as always! 🫶
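The cascaded STT -> LLM -> TTS design described above can be sketched as three composed stages. The stage functions here are stubs standing in for the real providers named in the post (Deepgram, the chat model, ElevenLabs); only the shape of the cascade, and why each hop adds latency, is the point.

```python
# Minimal sketch of a cascaded voice pipeline (STT -> LLM -> TTS).
# Each function is a stub for a real provider API call.
def transcribe(audio: bytes) -> str:
    """STT stub: a real system would stream audio to a speech-to-text API."""
    return audio.decode("utf-8")  # pretend the audio is already text

def think(transcript: str) -> str:
    """LLM stub: a real system would send the transcript to a chat model."""
    return f"You said: {transcript}"

def speak(reply: str) -> bytes:
    """TTS stub: a real system would synthesize audio from the reply."""
    return reply.encode("utf-8")

def handle_turn(audio: bytes) -> bytes:
    """One conversational turn through the cascade. The three sequential
    hops are the latency cost that an end-to-end speech-to-speech (STS)
    model would collapse into one."""
    return speak(think(transcribe(audio)))
```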
Daniel Lenton @DanielLenton1
Not engaged with Twitter in a lonnnng time, but for anyone interested, I've decided to start doing regular (unfiltered) posts of my own experience onboarding a fully virtual colleague. AGI is certainly not solved (yet), and so I'll focus on what works well, what doesn't work well, and where the biggest gaps are 🔍

In this video I'm just setting the scene, explaining the basics and hiring Rachel (no fireworks quite yet). In the next videos I'll give her access to everything and start to see how well she *really* learns and internalizes the nuances of my own day-to-day, how she fares when the number of different "flows" keeps piling up, and how conversational she can be whilst navigating all of this.

On the more technical side, I'm interested in:

1) How well do the underlying semantic + symbolic DB storage and search mimic the implicit skill storage and memory retrieval that a person would have? (DB reads/writes are much less efficient and less coupled than an end-to-end jointly trained implicit memory module, which would be more like how our own connectionist brains work.)

2) Can a hierarchy of fast-thinking (less intelligent) and slow-thinking (smarter-model) sub-agents communicating with one another really feel as conversational as a real person with their single brain? (Again, of course not, but how close can we get with a tiered thinking-fast + thinking-slow design for smooth conversation management?)

3) Can repeated post-action storage of skills and functions with continual self-refactoring improve speed and efficiency for future actions (not burning through tokens re-discovering the same thing every time)? How does this scale as the number of self-stored skills and functions grows? Do the embeddings and semantic retrieval hold up when there are maybe 100s of entries?

We've seen very good results on all fronts for smaller-scale tasks (which would take a person a few hours), and it's also worked well when continually learning over the course of a few weeks.
Despite this, the above questions remain open, and I'm curious to see how they hold up as I start this longer-running experiment. Watch along with some of these vids if interested; or scroll right on by if not 😁 Thoughts + feedback welcome as always! 🫶
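Question (3), storing skills and retrieving them by semantic similarity, can be sketched with a toy vector store. This is not the product's actual design: the bag-of-words embedding is a stand-in for a real embedding model, and the class and method names are invented for illustration.

```python
# Toy sketch of embedding-based skill storage and retrieval. A real system
# would use a learned embedding model; here a bag-of-words vector with
# cosine similarity stands in for it.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Stand-in embedding: token counts of the lowercased text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SkillStore:
    """Stores (description, embedding, code) triples; retrieval returns
    the code of the skill whose description best matches the query."""
    def __init__(self):
        self.skills = []

    def add(self, description: str, code: str):
        self.skills.append((description, embed(description), code))

    def retrieve(self, query: str) -> str:
        q = embed(query)
        best = max(self.skills, key=lambda s: cosine(q, s[1]))
        return best[2]
```

The scaling question in the post is exactly whether this kind of retrieval stays sharp once hundreds of near-overlapping skill descriptions accumulate.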
Daniel Lenton @DanielLenton1
We've been heads down building for the past few months (custom stack, not OpenClaw 🦞), and I'm excited to finally launch our virtual teammates! Huge shout out to the team (and many long nights) to get us here ❤️💪

You onboard your new teammate exactly how you'd onboard any other new colleague. Share your screen and guide them through, send onboarding docs, record voice notes, hop on a call, whatever is easiest. They learn how you (and your team) work, and they continually reflect, ask follow-up questions, and improve over time 📈

We built our own stack from scratch because we wanted something that genuinely feels like a colleague, with a fully realtime “there in the room with you” experience. This requires more than a flat tool loop with pluggable skills. We use top-down (ask, interject, pause, resume, stop) and bottom-up (notify, request_clarification) steerable handles throughout a nested call stack of sub-agents, with concurrent multi-task execution, and a code-first (not JSON tool) engine powering every action. All of this lives inside the terminal and/or live python sessions, each in a dedicated per-agent computer and filesystem 📟

In practice, this means your new colleague can be simultaneously using their own computer, talking to you via voice over a live meet, following your own guided screenshare instructions, working across multiple concurrent tasks, and consolidating all of these into new skills on-the-fly. They can be interrupted and redirected at any point in time, and they’re continually chunking all of their experience into reusable skills. People don’t perform tasks in “prompt then execute” windows, and neither should your virtual colleagues, in our view.

We're really happy with the feedback we’ve received thus far. We’ve helped several teams (in real estate, finance, and housing) streamline day-to-day processes which would have been difficult to “prompt” into hand-crafted skills, because these tasks are hard to fully articulate upfront.
They require continual judgment, context, and incremental back-and-forth work with people to really learn and internalize what's needed over time. The best feedback we've received (which makes us most excited 👀) is that the colleague is already much better on day 2 than on day 1, and then even better on day 3, with a holistic understanding evolving quickly and organically 🧠

If you're curious to see how it works, then give it a try with this free credit link! console.unify.ai/assistants?tok… I would love to hear people's honest thoughts (both positive and negative) 🙏

PS: we're also live on Product Hunt, so any feedback or support here would also be appreciated: producthunt.com/products/unify… Thanks! 🫶
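The "steerable handles" idea, being able to pause, resume, or stop an agent mid-task rather than waiting out a prompt-then-execute window, can be sketched as a task loop that checks control state between steps. The real system is described as a nested call stack of sub-agents; this shows only the single-agent core of that pattern, with invented class and method names.

```python
# Sketch of top-down steerable handles on an agent task loop: control
# signals (pause/resume/stop) are checked between work steps, so the
# task can be interrupted and redirected at any point.
class SteerableTask:
    def __init__(self, steps):
        self.steps = list(steps)   # pending work items (callables)
        self.done = []             # results of completed steps
        self.state = "running"

    # top-down handles a supervisor (or the user) can pull at any time
    def pause(self):  self.state = "paused"
    def resume(self): self.state = "running"
    def stop(self):   self.state = "stopped"

    def run(self):
        """Execute steps until paused, stopped, or finished; returns the
        state the loop ended in so the caller knows why it yielded."""
        while self.steps and self.state == "running":
            step = self.steps.pop(0)
            self.done.append(step())
        return self.state if self.steps else "finished"
```

A paused task yields immediately with its remaining steps intact, and resuming picks up exactly where it left off, the behavior a colleague-like agent needs when you interject mid-flow.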
Daniel Lenton @DanielLenton1
Build self-hosted voice agents in ~5 mins, code below ⬇️. My voice agent enjoyed this game of FizzBuzz wayyyyyyy too much 😂, LiveKit you need to tell your voice models to calm down! Otherwise only good things to say 💯. Code for the demo (including livestreaming the logs): github.com/unifyai/demos/…
Daniel Lenton @DanielLenton1
MCP servers are ONLY as good as their abstractions 🧱 and docs 📄. The official MCP for Google Drive fails at even the most basic tasks (see video). Building an MCP server is VERY EASY. Crafting the correct abstractions is VERY HARD. Very few servers are production ready; most are just POCs (not a criticism, this is their intention). Benchmarking and evals are not only important for system prompts, but will also be increasingly important for MCP designs. Exciting times ahead! 👀
Daniel Lenton @DanielLenton1
A collection of MCPs for databases, observability + visualization 🔢 📊 📈

@quickchart_io github.com/GongRzhe/Quick…
@databricks github.com/JustTryAI/data…
@SnowflakeDB github.com/isaacwasserman…
@posthog (official PostHog repo): github.com/PostHog/postho…
@postgres (official MCP repo): github.com/modelcontextpr…
@ClickHouseDB (official ClickHouse repo): github.com/ClickHouse/mcp…
@Redisinc (official MCP repo): github.com/modelcontextpr…
SQLite (official MCP repo): github.com/modelcontextpr…
@grafana (official Grafana repo): github.com/grafana/mcp-gr…

None of these feel super robust yet (after a few hours of playing), but as the core LLMs get better (and maybe with some more specific prompt engineering on the client side), I can see these becoming increasingly useful over time. What do other people think? Have people used these data + visualization MCPs much with their AI agents?

PS: as of yesterday I decided to start doing daily posts / brain dumps to share thoughts + engage with others in this crazy AI inflection point, stay tuned for more 🤖
Daniel Lenton @DanielLenton1
I can imagine a world where companies are built with 100% MCP frontends [lnkd.in/eKPY3H4v], without any human-usable interface at all. Like a SaaS company, but you purchase a subscription to augment your AI agents, rather than yourself. Thoughts? Are any business models or products like this already emerging? I guess companies like Tavily + the various vector databases kind of fit the bill already. While mouse/keyboard use is more general, I think parts of the internet could evolve (pretty quickly) to be more agent-oriented, negating (or minimizing) the need for video streaming, computer vision and mouse control from the agent. Thoughts? Either way, exciting times ahead! 👀
Daniel Lenton @DanielLenton1
Excited to be launching our new AI observability tool today! 😁 Think "Notion for AI Observability" 📊

When building AI apps ourselves, we spent months fighting with the prior tooling, trying to strip things back to the bare minimum, so we could observe and iterate on exactly what we needed, when we needed it 🔁 🔍

We care about the underlying LLM, but not more than the users! Existing tools are generally very much curated to one or the other, not both. Unify makes it easier to visualize, iterate on and interact with the data and visualizations that matter for *you*, your *AI app* and your *users*, and nothing else 🎯

The core building block is simple, just “unify.log”. This lets you store any kind of data for easy visualization, grouping, sorting, plotting etc. You can then quickly build your own custom interface for whatever you want using three basic tile types: Tables 🔢, Views 🔍 and Plots 📊

You can use these three primitives to do all kinds of things, such as:
➕ create + visualize your datasets in a new tab (with or without LLMs)
➕ monitor and probe production traffic in a new tab (with or without LLMs)
➕ start an evaluation flywheel in a new tab (with or without LLMs)
📉 optimize your product for your users (with or without LLMs)
🧠 whatever else you can think of (with or without LLMs!)

Check out our repo for a minimal example, explaining how to use these basic building blocks to ship with speed and clarity ⚡ github.com/unifyai/unify

We're also live on ProductHunt right now: producthunt.com/posts/unify-8 Support + feedback here is also ofc appreciated ❤️

Finally, big shoutout to the team for working tirelessly to make this happen: Haris Mahmood, Yusha Arif, Ved Patwardhan, Nassim Berrada, James Keane, Albert Lukács

Feel free to let us know what you think! (criticism + suggestions are especially welcome 🙏) Thanks all, happy prompting ✌️
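The log-anything-then-slice-it model behind tiles like Tables and grouped Views can be sketched in a few lines. To be clear, this is NOT the real Unify SDK (see the repo linked above for that); the `log`, `table`, and `group_by` functions here are an invented toy mimicking the described building blocks.

```python
# Toy mock of a "log anything, visualize later" primitive: NOT the real
# Unify SDK, just an illustration of the described building blocks.
LOGS = []

def log(**fields):
    """Store arbitrary key/value data for later grouping, sorting, plotting."""
    entry = dict(fields)
    LOGS.append(entry)
    return entry

def table(*columns):
    """Project all logs onto the given columns, like a Table tile."""
    return [tuple(entry.get(c) for c in columns) for entry in LOGS]

def group_by(column):
    """Bucket logs by one field, the basis for grouped Views and Plots."""
    groups = {}
    for entry in LOGS:
        groups.setdefault(entry.get(column), []).append(entry)
    return groups
```

Because every entry is just a dict of whatever fields you passed, the same store serves dataset curation, production monitoring, or an eval flywheel, with or without an LLM in the loop.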
Daniel Lenton @DanielLenton1
Huge congrats to the team at Ivy 👏🔥 Kornia is the leading computer vision library, and it now supports TensorFlow, JAX and NumPy out of the box! Check out the example below ⬇️, integrations with many more libraries coming soon 👀 Keep up the great work @S_Armstr, @itsharismahmood, Yusha 💪
ivy @ivy_llc

We’re excited to announce Ivy is partnering with Kornia, allowing Kornia to be used with TensorFlow, JAX, and NumPy for the first time! You can use Kornia's new `to_tensorflow()`, `to_jax()` and `to_numpy()` methods, which take advantage of Ivy’s transpiler, to use Kornia in your framework of choice. Try it out now in the latest Kornia version! (0.7.4) kornia.readthedocs.io/en/latest/get-… Ivy on GitHub: github.com/ivy-llc/ivy Ivy Demos: docs.ivy.dev/demos/examples…

Daniel Lenton @DanielLenton1
Incredibly flattered that @amazon have invited me to be their keynote speaker for this year's AWS Gen AI Loft event. Can't wait! ❤️ I'll be sharing my thoughts on why Gen AI still feels like it's stuck in first gear, what some of the biggest challenges are, and how we can all take steps to mitigate these, and get our apps finally flying 🛸 (Hint: from our perspective, thoughtful evals and your data flywheel are essential ingredients 🔁) If you'll be in London on October 3rd, you can sign up here: aws.amazon.com/startups/event… See you there! ✌️
Daniel Lenton retweeted
Unify @letsunifyai
We are really excited to announce that we will be joined by @tavilyai for our Webinar Series this Tuesday! 🤩 RSVP 👉 lu.ma/a77wgrao In this session, we'll explore how the Tavily API provides a search engine optimised for LLMs and RAG, delivering efficient, quick, and persistent search results. We'll also showcase Unify's SSO integration with Tavily 🧠🧑‍💻 Check out Tavily: ⚡️tavily.com ⚡️github.com/tavily-ai
Daniel Lenton retweeted
Unify @letsunifyai
We are really excited to announce that we will be joined by @CopilotKit for our Webinar Series this Thursday! 🤩 RSVP 👉 lu.ma/tv5h60nv In this session, we'll explore how CopilotKit makes integrating AI-powered Copilots into any product easy. We'll also showcase how the Unify Router can help you create Copilots that automatically select the best Large Language Model (LLM) for the task 🤖✨ Check out CopilotKit: ⚡copilotkit.ai ⚡github.com/CopilotKit/Cop… See you there!
Daniel Lenton @DanielLenton1
@deok_filho @GroqInc @letsunifyai We just added Groq! Custom endpoint support also coming soon. The GPU <-> Model mapping is a nice idea. No immediate plans for this, but will keep it in mind. Thanks for the feedback!
Deok Filho @deok_filho
This LLM router is such a nice tool! @DanielLenton1 Any plans on extending it to custom models to help map GPU types <-> models? Also, I couldn't find @GroqInc in the benchmark. That would be a good one to have. Development is crazy, I know @letsunifyai
Daniel Lenton retweeted
Unify @letsunifyai
We are super excited to welcome @felipemaiapolo from @UMich this Friday to our paper reading session. Felipe co-authored the paper "Efficient multi-prompt evaluation of LLMs" 🤩 ⚡️RSVP - lu.ma/hlekaxz4 His research introduces PromptEval, a new method for estimating performance across a large set of prompts, borrowing strength across prompts and examples to produce accurate estimates under practical evaluation budgets📈 📚Check out the paper - arxiv.org/pdf/2405.17202 See you there!
Daniel Lenton @DanielLenton1
Claude 3.5 Sonnet is now available via the Chat Interface and API! 🚀 Test it out alongside the latest models 👉 unify.ai/chat What have been your initial findings so far? Oh and by the way; $10 free credit on signup! ⚡️
Anthropic @AnthropicAI

Introducing Claude 3.5 Sonnet—our most intelligent model yet. This is the first release in our 3.5 model family. Sonnet now outperforms competitor models on key evaluations, at twice the speed of Claude 3 Opus and one-fifth the cost. Try it for free: claude.ai
