Charlie Dolan
@cdolan92
I automate waste & recycling stuff. Now: Scaling @dsqtech. Prev: Built & sold @sequoiawaste to a public company ($QRHC)
Pittsburgh, PA, USA · Joined May 2009
865 Following · 1.8K Followers
Nightingale Associates @FCNightingale
The debt on the U.S. Steel Tower in Pittsburgh ($160M of securitized debt, $40M in unsecuritized subordinate debt, and $45M in mezzanine debt) is set to mature in June. Servicer commentary points to growing concern about the borrower's ability to refinance in the current lending environment. The leases for the tower's two main tenants, the University of Pittsburgh Medical Center and namesake U.S. Steel, are set to expire within the next two years. In 2018 Jamestown Properties acquired a 40% stake in the property; the deal valued the building at about $340M. 601W Cos. paid $250M for the structure in 2011. 2.3M SF, built 1971. -TheRealDeal #commercialrealestate
Nico Ritschel @nicoritschel
@cdolan92 @JamesTimmins DuckDB lets you query frontmatter with the yaml extension. Re: backlink querying, that sounds like a fun DuckDB extension. Thanks for the idea!
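Nico's point about queryable frontmatter can be shown without DuckDB at all. Below is a dependency-free sketch, not the DuckDB yaml extension itself: the naive parser handles only flat `key: value` pairs, and the note content is invented for illustration.

```python
import re

def parse_frontmatter(md_text):
    """Extract flat key: value pairs from a YAML frontmatter block.
    Naive on purpose: no nesting, no lists; illustration only."""
    m = re.match(r"\A---\n(.*?)\n---\n", md_text, re.DOTALL)
    if not m:
        return {}
    fields = {}
    for line in m.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

# Hypothetical note; the metadata becomes queryable structured data.
note = """---
title: Compology
type: Vendor
acquired_by: RoadRunner
---
Body text with a [[RoadRunner]] backlink.
"""

meta = parse_frontmatter(note)
print(meta["type"])
```

Once every note's frontmatter is a dict like this, filtering by entity type or relationship is an ordinary query, which is exactly what a DuckDB extension would let you do in SQL instead.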
James Timmins @JamesTimmins
There are two approaches to semantic knowledge graphs: markdown graphs vs. a graph DB + MCP. I've found you get the best results by combining them. Actual mappings go into the graph DB: fast queries, generic relationships. Markdown, though, is the most effective interface for giving Claude the data in a context-efficient manner.
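A minimal sketch of the hybrid James describes: relationships live in a structured store (here a plain Python set standing in for a real graph DB) and a markdown render is what actually gets handed to the model. All entity and relationship names are illustrative, not from any real dataset.

```python
# Stand-in "graph DB": typed (source, relationship, target) triples.
graph = {
    ("Compology", "ACQUIRED_BY", "RoadRunner"),
    ("RoadRunner", "COMPETES_WITH", "Waste Harmonics"),
    ("RoadRunner", "SUBCONTRACTS_TO", "Waste Management"),
}

def to_markdown(entity, edges):
    """Render one entity's outgoing edges as a compact markdown
    context block: structured storage, markdown interface."""
    lines = [f"## {entity}"]
    for src, rel, dst in sorted(edges):
        if src == entity:
            lines.append(f"- {rel.replace('_', ' ').title()}: [[{dst}]]")
    return "\n".join(lines)

context = to_markdown("RoadRunner", graph)
print(context)
```

The mappings stay queryable in the store, while the model only ever sees a few short, dense markdown lines per entity, which is the context-efficiency James is pointing at.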
Nico Ritschel @nicoritschel
@JamesTimmins You can also just use markdown backlinks & frontmatter for many of the same benefits.
Charlie Dolan @cdolan92

This is a great foundation and Barrett is correct. Most people are re-onboarding Claude/ChatGPT every morning. We started here too. I saw a post similar to this on Friday and started working with Obsidian, BUT we hit a wall when the Obsidian approach couldn't keep up with the complexity of our domain.

The biggest limitations: ontologies and graphs/semantic search.

We're a waste management technology company: 5+ products, 50+ entity types, thousands of relationships between vendors, locations, contracts, incidents, equipment, and team members. Wikilinks don't address the underlying problem when you need to know that Compology (a sensor company) was acquired by RoadRunner (a broker), which competes with Waste Harmonics (a larger brokerage), both of which subcontract to Waste Management (the largest hauler), and everyone on this list + DSQ (us) has a compactor monitor solution.

You need automatic linking + ontologies to make that work. Otherwise, your system will confuse who is a customer vs. who is a client or referral partner. In our market, "Hauler" and "Vendor" may be confused by an AI depending upon the context. It's critical to help the system be accurate on these key business entities.

Those connections I mentioned earlier (Compology, WM) weren't manually linked by anyone on our team. Instead they were extracted automatically from our internal documentation.

We built a company knowledge graph. Same philosophy (give agents better context), different architecture. Barrett's approach: markdown files + wikilinks + Claude Code reading them. Ours: markdown files + LLM extraction + FalkorDB graph database + MCP server. Claude doesn't read our markdown files. Instead it queries a graph (basically a relationship database) of about 5,000+ entities and 15,000+ relationships. I started working on this this past Saturday, so I expect it to grow from here.

What that looks like in practice

I type "tell me about Joe Smith's role" and Claude automatically queries the graph. It comes back knowing Joe is a contact at a client who handles edge cases for our partnership program, that Sam escalates process issues to Joe, and that communications flow through a certain mailbox. I never wrote a file about Joe. The LLM extracted him from our SOPs during ingestion.

When I ask "what sensors monitor waste containers?", semantic search (vector embeddings, not keyword matching) returns facts about Pioneer Compactor Monitors, tilt sensors, fill-level measurement via electrical draw on the ram motor, and current pricing. It found these across docs from 5 different repositories we maintain. And it can iterate through relationships until it has a complete answer.

The Stack

1. Source docs: markdown, PDFs, and CSVs from our documentation repo + 5 product codebases + Notion exports (bye bye Notion!). ~500 episodes of content.
2. Ingestion pipeline: an LLM reads each doc, extracts entities and relationships, generates embeddings, and writes to a graph database. This is the expensive step (one-time, ~$5 in API costs for 500 docs).
3. FalkorDB: open source graph database running locally in Docker. Stores the knowledge graph. Free. Eventually needs to be hosted.
4. MCP server (50 lines of code): exposes the graph as tools Claude can call locally. Keyword search, entity context traversal, raw Cypher queries, and semantic search via embeddings. Will have to host this later.
5. Claude Code, Claude Desktop, ChatGPT, or Gemini: connects to the MCP server. Every session has instant access to the full company graph.

Keyword searches against the graph are free and instant (<10ms). Semantic searches cost ~$0.0001 per query. No LLM inference is needed for retrieval; the LLM costs are one-time, during content ingestion.

Where Barrett's approach is best

- Setup time: His system takes an afternoon. Ours took ~2 days of engineering and is trickier.
- Maintenance: He edits a markdown file and it's updated. We re-run `brain ingest` on changed docs, which can take a few minutes per document.
- Simplicity: If you have a finite number of offices/properties and a few straightforward products/services, Obsidian + wikilinks is the right call. You probably don't need a graph database, and trying to build one may be too much friction.

Where our approach has benefits that outweigh the cost of complexity:

- Scale: 5,000+ entities with 15,000+ typed relationships. No human is maintaining that many wikilinks.
- Discovery: The graph finds connections nobody manually linked. We didn't tell it Compology was acquired by RoadRunner. It extracted that from an industry wiki page we had from a few years ago, and connected it to the competitor database it imported from Notion (which we no longer need now that this data is in the graph).
- Typed relationships: Not just "these files are related" but "this Vendor *Has a Relationship* with this Landfill, and these are related to a franchise agreement signed in 2023."
- Semantic search: "How does DSQ Discovery handle contamination disputes?" returns relevant facts from across the entire corpus, not just files with "contamination" in the title.
- Multi-source ingestion: We pulled from git repos, Notion databases, PDFs, CSVs, transcripts, notes, and more. The graph unifies it all, and I really think it's going to be a resilient pipeline for us.

Closing Thoughts

Don't rely upon the Memory function of these applications (Claude, ChatGPT). It's vendor lock-in. You don't need that! It's a black box you don't have control over. Use Barrett's approach, mine, or anything else you find that you control.

For a property management company with 50 properties? Barrett's approach is perfect. You'd be saving so much time, and AI will start to feel smart & magical. For a company with multiple, completely different products, a complex domain ontology, hundreds of SOPs, competitor intelligence, and entity relationships that span codebases? The knowledge graph is worth the investment.

We're now asking AI to do more than just our own product docs... we want it to understand the *structure* of our company AND our industry. Eventually we'll have this run analysis on trends and plans. I think this will provide a lot more utility to our team members.

The tools:

- FalkorDB (free, open source; cool name too)
- Graphiti (free, open source; this + embeddings handles your queries)
- Claude Code (or ChatGPT/Gemini/etc.)
- 1-2 days building an MCP server and hooking it all together

Thanks to Barrett for the inspiration on the post, and to the algorithm for all these wild crash courses in AI.
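As a rough illustration of the "entity context traversal" tool the post mentions: the sketch below uses an in-memory list of triples as a stand-in for the graph (this is not FalkorDB's or Graphiti's actual API; the entities mirror the examples in the thread) and a breadth-first multi-hop lookup of the kind an MCP server could expose to Claude.

```python
from collections import deque

# Hypothetical triples standing in for the real knowledge graph.
EDGES = [
    ("Compology", "ACQUIRED_BY", "RoadRunner"),
    ("RoadRunner", "COMPETES_WITH", "Waste Harmonics"),
    ("RoadRunner", "SUBCONTRACTS_TO", "Waste Management"),
    ("Waste Harmonics", "SUBCONTRACTS_TO", "Waste Management"),
]

def entity_context(start, depth=2):
    """Collect every fact within `depth` hops of an entity via
    breadth-first traversal: one tool call, full neighborhood."""
    facts, seen, frontier = [], {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for src, rel, dst in EDGES:
            if src == node:
                facts.append((src, rel, dst))
                if dst not in seen:
                    seen.add(dst)
                    frontier.append((dst, d + 1))
    return facts

for fact in entity_context("Compology"):
    print(" -> ".join(fact))
```

This is why one question about Compology can surface RoadRunner's competitors and subcontractors without anyone having hand-linked those files: the traversal, not a human, follows the typed edges.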
Barrett Linburg @DallasAptGP

We built a system where Claude knows our entire company before I type a word. Three operating companies. 50+ properties. Full context on every session. Three tools. Any small business can build this.

Most business owners use AI the same way every time. Open Claude. Re-explain the business. Re-explain the team. Re-explain the numbers. Then ask the question. You're onboarding the same employee every morning. We fixed this. Claude now knows the full operation before I type a word.

Start with your most important company knowledge. Turn each topic into its own markdown file. Markdown is simple text that AI reads clean. Think about what you re-explain over and over. How your business makes money. Your org chart and who owns what. Your pricing. Key metrics for each team member. Your sales process. Your brand voice. One topic per file. Keep them short.

Put everything in Obsidian. It's free. Files stay on your computer. Nothing goes to the cloud. Think of it as a filing cabinet on your own hard drive that AI can search in milliseconds. Here's what makes it work. Every file connects to related files through tagged links called wikilinks. When you ask Claude about a specific client, it doesn't just find the client file. It pulls every project, contract, invoice, and note tied to that client. One question. Full picture.

Then connect Claude Code. It works like the regular Claude desktop app with one difference. It has the keys to your filing cabinet. Claude Code reads files right off your computer. No uploads. No cloud. No file size limits. Your financials, client data, and internal strategy never leave your machine. For business owners who won't put sensitive data on someone else's server, this solves the problem.

Most people I know spend $100 to $200 a month on Claude. If you're already paying that, you should be getting more out of it than a chatbot that forgets who you are every session.

Some of you already use Claude Projects. Good. That puts you ahead of most people. Projects let you upload files and give Claude a custom instruction set. For small tasks, it works. If you have a handful of documents and a clear use case, Projects is the right starting point. But it has a ceiling. Upload limits cap how much context you can load. Your files live on Anthropic's servers. And every project is its own silo. Your sales project doesn't talk to your ops project. Your finance files don't connect to your team files. The Obsidian setup removes all three limits. No upload cap. Files stay on your machine. And every file links to every related file across your whole company.

The last piece is one instruction file. It tells Claude how your company works, what role it plays, and how to navigate the knowledge base. Think of it as the onboarding doc you'd hand a senior executive on day one. Except this executive never forgets it. Once it's built, every session starts with full context. Claude knows your team. Your numbers. Your processes. You skip the setup. You go straight to the work.

Three tools. Obsidian (free). Claude Code (you're already paying for it). One instruction file. If you run a business and you're still re-explaining yourself to AI every session, you're leaving speed on the table.
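The wikilink mechanic Barrett describes can be sketched in a few lines: scan every note for `[[links]]` and invert them into a backlink index, so one question about an entity can pull in every note that mentions it. The note names and contents below are invented for illustration.

```python
import re

# Hypothetical mini-vault: filename -> note body.
notes = {
    "Acme Corp.md": "Client since 2022. Managed by [[Jane Doe]].",
    "Jane Doe.md": "Account manager. Reports to [[Ops Team]].",
    "Q3 Invoice.md": "Billed to [[Acme Corp]], approved by [[Jane Doe]].",
}

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

def backlinks(notes):
    """Map each linked entity to the notes that mention it."""
    index = {}
    for filename, body in notes.items():
        for target in WIKILINK.findall(body):
            index.setdefault(target, []).append(filename)
    return index

index = backlinks(notes)
print(index["Jane Doe"])  # every note that links to Jane Doe
```

This is the "one question, full picture" behavior: ask about Jane Doe, and the index immediately points to the client file and the invoice that reference her, no manual cross-referencing required.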

Charlie Dolan @cdolan92
All of modern tech runs on *very small rocks*. Monty Python would be proud.
James Timmins @JamesTimmins
Anyone else think they made something new only to realize they had re-invented a semantic knowledge graph?
Metabase @metabase
New in v59: AI SQL generation on self-hosted 🤖 Bring your own Anthropic key, describe what you need, get SQL back. Less time writing queries, more time getting answers.
Charlie Dolan @cdolan92
@toddsaunders If he is interested in automating decision-making in the waste & recycling industry, 👋
Todd Saunders @toddsaunders
I know an incredible AI-pilled CTO looking to transition into an AI transformation role at an interesting, growing company that's not a typical Silicon Valley startup. He's following this thread but asked that I post without using his name. Also, he's based in Boston, and remote is preferred. If you are looking for something like this, tag your company below and I'm sure he will reach out.
Charlie Dolan @cdolan92
Never used Granola, but this sounds like a very honest take. It's becoming easier to reverse engineer things, which means the most unlikely method of getting data for agents (reverse engineering a local cache) may become the standard way for your power users!
Chris Pedregal @cjpedregal

There are some tweets out there saying that Granola is trying to lock down access to your data. TL;DR: we are actually trying to become more open, not closed. We're launching a public API next week to complement our MCP. Read on for context.

A couple months ago, we noticed that some folks had reverse engineered our local cache so they could access their meeting data. Our cache was not built for this (it can change at any point), so we launched our MCP to serve this need. The MCP gives full access to your notes and transcripts (all time for paid users, time-restricted for free users). MCP usage has exploded since launch, so we felt good about it.

A week ago, we updated how we store data in our cache and broke the workarounds. This is on us. Stupidly, we thought we had solved these use cases well enough with our MCP. We've now learned that while MCPs are great for connecting to tools like Claude or ChatGPT, they don't meet your needs for agents running locally or for data export / pipeline work.

So we're going to fix this for you ASAP. First, we'll launch a public API next week to make it easier for you to pull your data. Second, we'll figure out how to make Granola work better for agents running locally, whether that's expanding our MCP, launching a CLI, a local API, etc. The industry is moving quickly here, so we'd appreciate your suggestions. We want Granola data to be accessible and useful wherever you need it. Stay tuned.

Charlie Dolan @cdolan92
Proof you can be the fastest-growing company in the world without securing every domain/account across primary services
Charlie Dolan @cdolan92
1Password is a great app. I've been using it for maybe 10-15 years now. They have an incredible opportunity right now. But man... these nonsense marketing videos are SO frustrating to see. I have no idea what this is, and I'm a power user of both 1P and AI systems.
1Password @1Password

Today we’re introducing 1Password® Unified Access. As AI agents start operating inside real production environments, organizations need visibility into how credentials and access are actually used. Unified Access helps security teams discover, secure, and audit access across humans, machines, and AI agents. 🔗 More here: bit.ly/4dq2pjO

Charlie Dolan @cdolan92
Started listening to Spotify again today because it has some good playlists. Getting some strange AI tickety-tock ads about searching for free items that will be shipped to me for free. What is going on? Do they make that much money off ads?
Charlie Dolan @cdolan92
@BPD1776 "Claude get infinite relevant memory Make no mistakes"
Charlie Dolan @cdolan92
@jspeiser Agreed. Moved 50% of the company to Claude in Jan/Feb and cancelled their ChatGPT subs. Remaining 50% this month.
Joe Speiser ⚡️ @jspeiser
If I could short OpenAI, I would. We just cancelled GPT for the entire Hampton team. Claude replaced it and added crazy functionality that GPT can't match. Coincidence that OpenAI is now running ads and chasing enterprise contracts? They know.