maxppap


working on this exact problem from the surface angle — AI that lives inside the user's existing CRM as a browser extension instead of in its own chat window. when AI shares your DOM, output format flattens (it edits what you're looking at) and gesture comes for free. the progression i'd extend yours with: text → markdown → HTML → same surface. format matters less than where the output lives.

This works really well btw, at the end of your query ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc.
More generally, imo audio is the human-preferred input to AIs but vision (images/animations/video) is the preferred output from them. Around a ~third of our brain is a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage:
1) raw text (hard/effortful to read)
2) markdown (bold, italic, headings, tables, a bit easier on the eyes) <-- current default
3) HTML (still procedural with underlying code, but a lot more flexibility on the graphics, layout, even interactivity) <-- early but forming new good default
...4,5,6,...
n) interactive neural videos/simulations
Imo the extrapolation (though the technology doesn't exist just yet) ends in some kind of interactive videos generated directly by a diffusion neural net. Many open questions as to how exact/procedural "Software 1.0" artifacts (e.g. interactive simulations) may be woven together with neural artifacts (diffusion grids), but generally something in the direction of the recently viral x.com/zan2434/status…
There are also improvements necessary and pending at the input. Neither audio nor text nor video alone is enough, e.g. I feel a need to point/gesture at things on the screen, similar to all the things you would do with a person physically next to you and your computer screen.
TLDR The input/output mind meld between humans and AIs is ongoing and there is a lot of work to do and significant progress to be made, way before jumping all the way into neuralink-esque BCIs and all that. As for what's worth exploring at the current stage, hot tip: try asking for HTML.
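The tip above can be sketched in a few lines, assuming a generic chat workflow (the helper names here are hypothetical, and you'd plug in whatever client actually produces the model's reply):

```python
import webbrowser
from pathlib import Path

HTML_HINT = "Structure your response as HTML."

def with_html_hint(query: str) -> str:
    # Append the formatting instruction to the end of the query,
    # as the post suggests.
    return f"{query.rstrip()}\n\n{HTML_HINT}"

def view_as_html(llm_response: str, path: str = "response.html",
                 open_browser: bool = True) -> Path:
    # Write the model's (presumed HTML) reply to disk and open it
    # in the default browser.
    out = Path(path)
    out.write_text(llm_response, encoding="utf-8")
    if open_browser:
        webbrowser.open(out.resolve().as_uri())
    return out
```

The hint string is the only part the post prescribes; `llm_response` would come from whichever tool you use (a CLI like claude/codex, an SDK, or ollama).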
Thariq@trq212

@techbromemes before, you just googled it. the whole paradigm of search and working with data has changed

@remain_urus good — better setup imo: bundle the whole harness + agents into one MCP server for ollama models. claude code becomes the orchestrator, ollama-via-MCP is the worker pool

If you're on a tight budget and don't want to spend $100-200 on codex/claude, try ollama.com/pricing
You can use the $20 sub in most tools + openclaw/hermes, and the limits you get for that money are insane: you get the newest kimi, glm, deepseek models and more

@1Umairshaikh marketing — but more specifically, naming a pain people already feel. AI products that vaguely "help" with stuff go nowhere.

@0xPrajwal_ btw u can already use Claude for everything from this list

@OpenAIDevs voice control is one half of CRM-as-AI-surface. building the other half — analytics + insights, browser extension, no API keys. these two together would be unreal eventually
wakelead.com

CRMs store everything. They tell you nothing.
I'm building the analytics layer they're missing — a browser extension that lives in your CRM sidebar, sees your data directly (no API keys, no setup), and turns it into actual insights.
Now:
→ creates deals, contacts, tasks from one sentence
→ instant analytics on existing CRM data
→ insight discovery beyond the funnel view
→ fully client-side, data never leaves your browser
Next: deep research mode — bottlenecks, growth points, churn risk, payment audits, stale deals, 10-step quarterly audits with action items. HubSpot, Pipedrive, Salesforce, Google Sheets, custom skills.
Pipelines show stages. I want to show you why deals stall, who's about to churn, and where your next 20% of revenue is hiding.
Short demo below ↓
This is the idea I'm most excited about in years, and I'd rather build it with you than guess.
Tell me:
— what's the one CRM workflow you'd hand off to an AI tomorrow?
— what process do you wish you'd never have to touch again?
— what insight do you keep wishing your data could surface on its own?
Reply, DM, whatever's easier — I read every message. Looking for early users + design partners who want a real say in what ships.
wakelead.com

@trikcode broke the loop. shipped this week
hunting for feedback
wakelead.com

@LoopandPixels AI CRM assistant, browser extension
no API keys, no setup, no MCP
wakelead.com

@nxhaaa19 @X just shipped Wakelead — AI CRM assistant as a browser extension. no API keys, no setup, surfaces insights beyond the funnel view. would love any feedback 🤝
wakelead.com

great breakdown for the agency play. went a different route with my own product — browser extension inside the CRM directly, so no Zapier/Make/n8n layer in between. data stays client-side, install in 30 sec. tradeoff is i'm locked to one shape (CRM insights) but setup friction drops to zero. just shipped, any feedback welcome 🙏
wakelead.com

The 3 ways to build a speed-to-lead agent in under a day:
1. Zapier. 1-2 hours to build. Best for simple SMS-to-CRM flows for non-technical clients. Gets pricey at scale ($50-150/mo).
2. Make. 2-4 hours. Native AI modules, visual builder, $20-30/mo covers most use cases. The sweet spot.
3. n8n. 3-5 hours. Self-hosted option, no per-execution fees, full conversational agent capability.
Pick Make if you're building for paying clients. Best balance of power and price.
Full breakdown in the article below.

Corey Ganim@coreyganim

@DavidOndrej1 thx man! funny timing — just shipped my first public build today. AI CRM assistant that helps find insights, runs as a browser extension. open to feedback or advice from literally anyone 👀
wakelead.com
