Daniel Lopes

21.6K posts


@danielvlopes

Co-Founder/CTO @ https://t.co/y1CgNp1F3e. Prev. Co-founder @Canopy_is (@37signals spin-off). EIR @Techstars SF. Ex-product & web lead @ifttt. @indievc alum.

San Francisco, CA · Joined September 2008
623 Following · 2.8K Followers
Daniel Lopes retweeted
Nikunj Kothari @nikunj
Dig deeper..
[tweet media]
Daniel Lopes retweeted
Matt Pocock @mattpocockuk
This is actually a really solid context engineering template. Kudos, @AnthropicAI
[tweet media]
Daniel Lopes retweeted
Chip Huyen @chipro
Very useful tips on tool use and memory from Manus's context engineering blog post. Key takeaways:
1. Reversible compact summary. Most models allow 128K of context, which can easily fill up after a few turns when working with data like PDFs or web pages. When the context gets full, it has to be compacted. It's important to compact the context in a way that's reversible, e.g., removing the content of a file/web page as long as its path/URL is kept.
[tweet media]
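A minimal sketch of that reversible-compaction idea, assuming messages are simple dicts with a "content" field and an optional "source" recorded when the tool ran; the `compact_messages` helper is illustrative, not Manus's actual code:

```python
# Reversible compaction sketch: drop bulky file/web-page content from old
# messages but keep the path/URL, so the agent can re-fetch details later.
# Message shape and helper names are assumptions, not from the blog post.
def compact_messages(messages, keep_last=4, max_chars=500):
    """Replace long content in older messages with a re-fetchable stub."""
    compacted = []
    for i, msg in enumerate(messages):
        is_recent = i >= len(messages) - keep_last
        content = msg.get("content", "")
        if is_recent or len(content) <= max_chars:
            compacted.append(msg)
            continue
        source = msg.get("source")  # file path or URL, if one was recorded
        if source:
            # Reversible: the agent can re-read the file/URL if needed.
            msg = {**msg, "content": f"[content elided; re-fetch from {source}]"}
        compacted.append(msg)
    return compacted
```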
Jose Farias @readjosefarias
The more I build with AI, the more I’d like to explore constructing prompts as views like html, turbo_streams, json, etc. I’ve tried YAML and plain Ruby so far. Feels lacking.
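One way to read that "prompts as views" idea is to treat a prompt like a template that renders structured data, rather than ad-hoc string concatenation. A minimal sketch (illustrative names only; this is not from the tweet):

```python
# Prompt-as-view sketch: the template is the "view", the function call is
# the "render". All names here are illustrative assumptions.
from string import Template

SUMMARY_PROMPT = Template("""\
You are an expert editor.

Title: $title
Draft:
$draft

Return a 3-sentence summary and a list of weak sections.""")

def render_summary_prompt(title: str, draft: str) -> str:
    return SUMMARY_PROMPT.substitute(title=title, draft=draft)
```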
Daniel Lopes retweeted
Luke Wroblewski @LukeW
Today a top-tier human creates 10x better quality results than an AI agent. But a top-tier AI agent is 10x faster. Combining the two is where the magic happens.
GeorgeCRO @DivertCRO
Most brands are sitting on a goldmine of review data, and wasting it. I built a GPT-4 system that turns 1,000s of reviews into:
→ Pain points
→ Objections
→ Triggers
→ Copy-ready angles
100% automated. Like + comment "gold" and I'll send it. (must be following)
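A rough sketch of what such a review-mining pipeline could look like. Only the four output categories come from the tweet; the prompt wording and the `llm` callable are assumptions:

```python
# Review-mining sketch: batch reviews into one prompt and ask the model
# for labeled extractions. Not the author's actual system.
CATEGORIES = ["Pain points", "Objections", "Triggers", "Copy-ready angles"]

def mine_reviews(reviews: list[str], llm) -> str:
    batch = "\n".join(f"- {r}" for r in reviews)
    prompt = (
        "Analyze the customer reviews below and extract, as labeled lists:\n"
        + "\n".join(f"- {c}" for c in CATEGORIES)
        + f"\n\nReviews:\n{batch}"
    )
    return llm(prompt)
```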
Daniel Lopes @danielvlopes
We opened a role for a technical recruiter and received over 1k applications in 2 days. It's super hard for folks to get noticed these days and also very hard for team members to triage.
Alex MacCaw @maccaw
Feature request for @cursor_ai - please let me add a directory of files as context for the chat (rather than having to add one file at a time).
ian @shaoruu
what's one thing you really want added to @cursor_ai composer, or just to cursor in general? open to all kinds of ideas :)
Daniel Lopes @danielvlopes
LLM-based chunking is the approach we use at @GrowthXAI. Night-and-day difference from fixed-size chunking.
Daniel Svonava @svonava

Split Smarter, Not Random: The Semantic Chunking Guide. 📚💡

Most RAG systems fail before they begin. They use outdated chunking methods that:
✂️ Slice text by character count
🚸 Break paragraphs without regard for meaning

Imagine reading a book where someone randomly tore pages in half. That's what traditional chunking does to your data. Semantic chunking is a smarter approach that follows meaning. In this VectorHub deep-dive, Ashish Abraham breaks down three approaches:

1️⃣ Embedding-Similarity Based Chunking
▪️ The system determines where to break text by comparing the similarity between consecutive sentences.
▪️ Using a sliding window approach, it calculates the cosine similarity of sentence embeddings.
▪️ If the similarity drops below a set threshold, the system identifies a semantic shift and marks the point to split the chunk.
Like listening to a playlist: you can tell when one song ends and another begins. Embedding chunking spots those natural transitions between ideas.

2️⃣ Hierarchical-Clustering Based Chunking
▪️ The system analyzes relationships between all sentences at once, not just neighbors. It starts by measuring how similar each sentence is to every other sentence in the text.
▪️ These similarities create a hierarchy, like a family tree of ideas. When sentences show strong similarity, they cluster together into small groups.
▪️ These small groups then merge into larger ones based on how closely they relate.
Like organizing a library: books get grouped by topic, then broader categories, until you have a natural organization that makes sense.

3️⃣ LLM-Based Chunking
This newest approach uses LLMs to chunk text based on semantic understanding.
▪️ The first step is to feed the text to an LLM with specific chunking instructions.
▪️ The LLM then identifies key ideas and how they connect, rather than just measuring similarity.
▪️ When it spots a complete thought or concept, it groups these propositions into coherent chunks.
Imagine having a skilled editor who knows exactly where to break your text for maximum clarity.

⚙️ Which method will produce optimal outcomes depends on your use case:
▪️ Want precision? Go with LLM-Chunking
▪️ Want speed? Go with Embedding-Similarity
▪️ Need to preserve relationships? Go with Hierarchical-Clustering

Ready to implement? Get the full technical breakdown👇

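A minimal sketch of the embedding-similarity approach described in the thread above, assuming some sentence-embedding function `embed`; the threshold value and helper names are illustrative:

```python
# Embedding-similarity chunking sketch: compare consecutive sentence
# embeddings and start a new chunk wherever similarity drops below a
# threshold. `embed` stands in for any sentence-embedding model.
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def semantic_chunks(sentences, embed, threshold=0.7):
    """Split sentences into chunks at semantic shifts."""
    if not sentences:
        return []
    vectors = [embed(s) for s in sentences]
    chunks, current = [], [sentences[0]]
    for prev, cur, sentence in zip(vectors, vectors[1:], sentences[1:]):
        if cosine(prev, cur) < threshold:  # semantic shift detected
            chunks.append(current)
            current = []
        current.append(sentence)
    chunks.append(current)
    return chunks
```

Comparing only consecutive sentences is what keeps this variant fast; per the thread, the LLM-based variant trades that speed for precision.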
Daniel Lopes @danielvlopes
6k loc in 4 days 😬🫠
felipecsl @felipecsl
Every day I spend writing lines of code at work, I keep asking myself: what's the point? LLMs can already do it much better and much cheaper than me. Can't help but feel like it's a waste of time and money. The days are numbered for all of us.
Daniel Lopes @danielvlopes
@brupm We’ll keep iterating on it and I think I might launch it open source in a couple of months. Happy to show you in private though. Just need one more week to finish some of the generators
Daniel Lopes @danielvlopes
Took 4mo of long hours coding, and 1y of studying before that, to figure out the perfect infra for GrowthX. Shipped this week! Now what would take me a week of work in the tools we were using before takes 1h or less, with everything AI eng needs ✅: observability, evals, cost tracking, prompt mgmt, scalability & RAG.
Henry Shi @henrythe9ths
There's a shocking fact about AI that nobody tells you: you can catch up to the public AI research frontier in just 2 weeks. Yes, really.

I've built a $150M annual revenue startup over the last 8 years, and if I were to start a company today, I'd drop everything and go all-in on AI. But like many busy software builders, I felt lost, overwhelmed by the noisy, crowded, and fast-moving modern AI landscape. And I wasn't alone. So I spent my entire holiday diving deep into AI research: reading 30+ papers, watching hours of lectures, analyzing trends, and catching up to the research frontier.

✨ Here's what I learned:
- You don't need months (or years) to catch up.
- You don't need a PhD or decades of ML experience.
- You need fewer than 20 papers and 2 weeks to understand the major breakthroughs shaping AI today.

It's because the technology is extremely nascent and most techniques that came before are no longer relevant:
- ChatGPT is barely 2 years old and Transformers are only 7 years old.
- Most game-changing discoveries happened within the last 4 years, driven by a few breakthrough ideas, scaling laws, and efficient matrix multiplication.

The biggest secret? Many groundbreaking AI papers with thousands of citations are surprisingly simple and applied, like adding "let's think step by step" to the prompt, or simply asking the LLM over and over again to improve its answer (Self-Refine).

I realized there are tons of founders and builders in the same boat, wanting to dive deeper into AI but unsure where to start. I've created an essential AI Guide that helped me catch up, in just 2 weeks, to the frontier of public AI research and figure out where the next opportunities and gaps were:
- Curated list of only the most important papers
- Simple explanations of key concepts
- Clear pathway to understanding the frontier of modern AI

It's perfect for:
- Founders expanding into AI
- Builders wanting to innovate at the frontier of AI
- Investors looking to separate the signal from the noise

👇 Want the full guide?
- Like and share this post
- Comment "AI Guide"
- I'll send you the complete guide

(P.S. I'm also teaming up with @VishalVasishth, co-founder of @obviousvc with @ev (focused on large-scale societal-impact companies like Twitter, Medium, Beyond Meat), to host a small meetup in SF to discuss what's working and what needs to be solved in the AI stack. Message me if you're interested.)
[tweet media]
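Self-Refine, as characterized in the post above, really is a short loop: get an answer, then repeatedly ask the model to critique and improve it. A minimal sketch with an assumed `llm` callable and illustrative prompt wording:

```python
# Self-Refine sketch: answer, critique, improve, repeat. Prompts and the
# `llm` callable are illustrative assumptions, not from the original paper.
def self_refine(task: str, llm, rounds: int = 2) -> str:
    answer = llm(task)
    for _ in range(rounds):
        feedback = llm(f"Critique this answer to '{task}':\n{answer}")
        answer = llm(f"Task: {task}\nFeedback: {feedback}\nWrite an improved answer.")
    return answer
```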
Daniel Lopes @danielvlopes
@natfriedman At GrowthX we have a fact-checker system for our article production that rewrites articles based on a giant list of recommendations in one go (these are 2k-word articles)
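A hedged sketch of what a single-pass "apply all recommendations" rewrite could look like; the prompt shape and the `llm` callable are assumptions, not GrowthX's actual system:

```python
# Single-pass rewrite sketch: apply every fact-check recommendation in one
# model call instead of N sequential edits. Illustrative names throughout.
def rewrite_with_recommendations(article: str, recommendations: list[str], llm) -> str:
    """Ask the model to apply all fixes in one rewrite."""
    fixes = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(recommendations))
    prompt = (
        "Rewrite the article below, applying ALL of the following "
        "fact-check recommendations. Preserve tone, structure, and length.\n\n"
        f"Recommendations:\n{fixes}\n\nArticle:\n{article}"
    )
    return llm(prompt)
```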
Nat Friedman @natfriedman
What's the most impressive o1 output you've seen? Please share the output and prompt!
Arvid Kahl @arvidkahl
Anyone know what @OpenAI's rollout strategy for o1 API access is? I just recently hit tier 5, which supposedly comes with o1 access, yet it's not in my org. I have a couple really interesting use cases with Podscan, and I really want to try it out.
Daniel Lopes @danielvlopes
Things are getting fun... Teams concerned about LLM cost or speed are missing the boat. We've been gifted a reasoning machine, and most are treating it as regular software or a text-tweaking tool. The real value is taking a task that takes 2 days, is boring AF, and is full of human errors and inconsistencies, and turning it into a 10-minute task cranked up to 1000, so humans can start from something they couldn't do by hand and spend those same two days doing the amazing things AI can't do. Here's a sneak peek at some of our AI flows for brainstorming:
[tweet media]