Neil Sharma

80 posts

@realNeilSharma

Founder of https://t.co/EKXTAmv5CP

Joined July 2025
120 Following · 12 Followers
Jonathan
Jonathan@joni_vrbt·
Hey founders 👋 If your product solves a real problem, drop it here and tell us what problem it solves. I’ll rate the usefulness of as many as I can. Do the same. Founders support founders. Deal? 🤝
284
4
215
14.6K
Mahesh Chulet
Mahesh Chulet@mchulet·
If you’re building in tech 🫶 Say hi 👋 let’s connect.
17
1
13
783
Build in Public
Build in Public@buildinpublic·
What are you working on this week?
368
0
159
12.5K
Rayane
Rayane@FlippedRay·
Builders, what are you working on right now? Share your link. Let's get it in front of people.
100
1
50
3.1K
Nabil
Nabil@nabuhad·
@FlippedRay Inkett is the first AI that actually understands your writing. Voice, structure, business, distribution. Draft to audience, all under one roof. inkett.com
2
0
2
50
act101
act101@act101ai·
@delveroin act101.ai is the only native MCP with tree-based code navigation, refactoring, and analysis tools for agentic coding, built to save you tokens and improve generation.
1
0
0
27
(Oma)devuae
(Oma)devuae@delveroin·
Time to market your product. Drop your URL link! Let's send some traffic there!
134
0
60
4.8K
jon halstead
jon halstead@zeronull1983·
@NitishaAgrawal3 I'm working on building a local-first, governed AI system (AAIS) focused on stability, transparency, and controlled behavior.
2
0
1
14
Nitisha
Nitisha@NitishaAgrawal3·
Hey builders, looking to connect with people building in: SaaS, tech, automation, AI tools, product development. Devs, drop what you're working on 👇
122
1
75
4K
Neil Sharma
Neil Sharma@realNeilSharma·
@stalmico haha the confidence is unmatched. are you manually checking every output before it goes to a client or just hoping for the best at this point?
0
0
0
0
Steven Collard
Steven Collard@stalmico·
My AI created a full 10-week roadmap for a client that doesn't even exist. It was for a company that never contacted us, with milestones we never even discussed. The confidence of a hallucinating AI is a special kind of comedy. The structure was actually good, though, so we just kept the template and replaced all the fiction.
1
0
1
36
BlueMomGroup
BlueMomGroup@CircleofContent·
@edels0n The AI agent I use for sales in my small business hallucinates about 10% of the time. I simply do not understand how this technology is supposed to replace humans.
2
0
1
47
Ed Elson
Ed Elson@edels0n·
Urgent message to John Ternus: Apple AI summaries tremendously suck.
10
0
31
3.2K
Neil Sharma
Neil Sharma@realNeilSharma·
@devashishup @Microsoft The prompt versioning gap is brutal, it's invisible until something breaks and you can't reproduce how the agent behaved last week. How are you handling that now that you're building on the reliability side?
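The prompt-versioning gap mentioned above can be closed with something as small as a content-hash registry: hash every prompt on save, and any past agent run becomes traceable to the exact text it used. A minimal sketch, not any particular product's API; `version_prompt` and the registry shape are made up for illustration:

```python
import hashlib
from datetime import datetime, timezone

def version_prompt(registry: dict, name: str, text: str) -> str:
    """Record a prompt under a content hash so a past agent run
    can be traced back to the exact prompt text it used."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]
    registry.setdefault(name, []).append({
        "version": digest,          # stable id derived from content
        "text": text,               # full prompt text, kept verbatim
        "saved_at": datetime.now(timezone.utc).isoformat(),
    })
    return digest

registry = {}
v1 = version_prompt(registry, "triage", "Classify the ticket severity.")
v2 = version_prompt(registry, "triage", "Classify ticket severity as P0-P3.")
```

Because the id is derived from the content, an edited prompt can never silently reuse an old version id, which is exactly the reproducibility property that breaks "last week the agent behaved differently" debugging.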
0
0
0
13
Devashish Upadhyay
Devashish Upadhyay@devashishup·
80% of Fortune 500 run AI agents now (@Microsoft stat). Nobody's asking: are they correct? Built 70+ at a fintech. 7 made it. The rest ran fine - just silently wrong.
1
0
0
96
Neil Sharma
Neil Sharma@realNeilSharma·
@AnthonyEveryWhr @karpathy The acceptance test layer is what most agent builders skip until something breaks in prod. How heavyweight does your eval harness get before it starts feeling like a second codebase to maintain?
0
0
0
5
Anthony Everywhere 🏆
Anthony Everywhere 🏆@AnthonyEveryWhr·
@karpathy The idea-file pattern works best when the artifact is the contract, not the implementation. In production I’d pair it with a tiny eval harness and a few concrete acceptance tests, otherwise the agent can optimize for a persuasive spec instead of a shippable one.
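The "tiny eval harness plus a few concrete acceptance tests" pairing can really be tiny: a dict of named predicates run against the agent's output. A minimal sketch under that reading; the function and check names here are invented, not from any library:

```python
def run_acceptance_tests(output: str, checks: dict) -> dict:
    """Evaluate an agent's output against named pass/fail checks,
    so the spec has to be shippable, not just persuasive."""
    results = {name: check(output) for name, check in checks.items()}
    return {"results": results, "passed": all(results.values())}

# Each check is a plain predicate on the output text.
checks = {
    "mentions_api": lambda out: "API" in out,
    "no_placeholders": lambda out: "TODO" not in out,
    "under_limit": lambda out: len(out) < 500,
}
report = run_acceptance_tests("The API returns JSON, no stubs left.", checks)
```

Kept this small, the harness stays a page of code rather than a second codebase, while still blocking the failure mode where the agent optimizes for a persuasive spec.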
1
0
0
18
Andrej Karpathy
Andrej Karpathy@karpathy·
Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need of sharing the specific code/app; you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in a gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion, which is cool.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
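The "auto-maintained index files" step in the workflow above can be sketched in a few lines: scan the wiki directory, pull each note's first heading, and rebuild a single index.md entry point. A toy illustration only, assuming one-note-per-file markdown with a `#` title on the first line; `compile_index` is an invented name, not part of any tool mentioned in the thread:

```python
import tempfile
from pathlib import Path

def compile_index(wiki_dir: Path) -> str:
    """Rebuild index.md from the first heading of every note, giving
    an LLM agent a single entry point for answering questions."""
    lines = ["# Index", ""]
    for note in sorted(wiki_dir.glob("*.md")):
        if note.name == "index.md":
            continue  # never index the index itself
        title = note.read_text(encoding="utf-8").splitlines()[0].lstrip("# ")
        lines.append(f"- [{title}]({note.name})")
    index = "\n".join(lines) + "\n"
    (wiki_dir / "index.md").write_text(index, encoding="utf-8")
    return index

# demo on a throwaway wiki directory
wiki = Path(tempfile.mkdtemp())
(wiki / "docker.md").write_text("# Docker notes\n...")
(wiki / "agents.md").write_text("# Agent evals\n...")
index = compile_index(wiki)
```

Rerunning the compile after every ingest is what lets plain file reads stand in for RAG at the ~small scale described above.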

1.1K
2.8K
26.5K
6.9M
Neil Sharma
Neil Sharma@realNeilSharma·
@AleksejAros @_avichawla Consistency metric is massively underrated, agreed. How are you handling it when clients update their prompts mid-deployment? That's usually where the custom scoring layer starts getting messy.
0
0
0
4
Alex Yarosh · AI expert · CEO of AI Studio
Solid framework! I'd add: 8) test edge case handling (agents fail hard on unexpected inputs), and monitor token usage vs. output quality (efficiency can tank fast). Built agent eval pipelines for 3 clients recently; the consistency metric is huge but most skip it. LangSmith + custom scoring works well for steps 3-6. DM if you want me to share the eval template I use; it's free.
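The consistency metric discussed here has a simple baseline form: replay the same prompt several times and measure how often the runs agree with the modal answer. A minimal sketch of one plausible definition (the thread does not specify a formula, so this is an assumption, as is the name `consistency_score`):

```python
from collections import Counter

def consistency_score(outputs: list) -> float:
    """Fraction of runs agreeing with the modal answer; 1.0 means
    the agent returned the same answer on every run."""
    if not outputs:
        return 0.0
    modal_count = Counter(outputs).most_common(1)[0][1]
    return modal_count / len(outputs)

# e.g. the same classification prompt replayed five times
runs = ["refund", "refund", "refund", "escalate", "refund"]
score = consistency_score(runs)
```

Tracking this score per prompt version is also one way to see the mid-deployment prompt updates Neil asks about: a client edit that tanks consistency shows up as a step change in the metric.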
1
0
0
7
Avi Chawla
Avi Chawla@_avichawla·
Docker explained in 2 minutes!

Most developers use Docker daily without understanding what happens under the hood. Here's everything you need to know.

Docker has 3 main components:
1) Docker Client: Where you type commands that talk to the Docker daemon via API.
2) Docker Host: The daemon runs here, handling all the heavy lifting (building images, running containers, and managing resources).
3) Docker Registry: Stores Docker images. Docker Hub is public, but companies run private registries.

Here's what happens when you run "docker run":
• Docker pulls the image from the registry (if not available locally)
• Docker creates a new container from that image
• Docker allocates a read-write filesystem to the container
• Docker creates a network interface to connect the container
• Docker starts the container

That's it. The client, host, and registry can live on different machines. This is why Docker scales so well.

Understanding this architecture makes debugging container issues much easier. You'll know exactly where to look when something breaks.

Find me → @_avichawla for more insights and tutorials on ML and AI Engineering!
11
99
388
16.6K
Neil Sharma
Neil Sharma@realNeilSharma·
@jeremywarddev Solid stack, love the zero-Redis philosophy. How are you handling reliability on the Realtime API side? Curious whether degraded voice AI output quality is hard to catch.
0
0
0
7
Jeremy Ward (Software Engineer)
Jeremy Ward (Software Engineer)@jeremywarddev·
Tech stack: - Rails 8 (Hotwire + Turbo Streams) - Tailwind + DaisyUI - SQLite (WAL mode) + Solid Queue - Twilio (voice) - OpenAI Realtime API (AI) - Kamal (deploy) Zero Redis. Zero Postgres. Zero React. Solo founder simplicity.
3
0
1
91
Jeremy Ward (Software Engineer)
Jeremy Ward (Software Engineer)@jeremywarddev·
I shipped 42 commits to GetBackTo yesterday. I built the foundation: the telephony + AI infrastructure most founders pay someone else to build. Here's what that looks like 🧵
2
1
2
31
Neil Sharma
Neil Sharma@realNeilSharma·
@IMoayyad_ @donatelli2026 MyosAI is interesting! The chat coach piece seems like the hardest part to get right. How are you making sure the fitness advice stays accurate and safe across different user goals and health situations?
1
0
1
34
M2
M2@IMoayyad_·
@realNeilSharma @donatelli2026 Thanks! 🙏 MyosAI is an AI personal fitness coach. It builds custom workout programs, tracks nutrition, and gives you a coach you can chat with anytime. What stack are you curious about specifically? Happy to get into it. 😄 MyosAI.app
1
0
0
47
Neil Sharma
Neil Sharma@realNeilSharma·
@touseefcodes @audiencon That’s a smart approach. Have you built any way to measure whether the tuned LLM is actually improving consistency over time, or are you still eyeballing it? That drift between iterations is usually where things get hard to track.
1
0
1
10
touseef
touseef@touseefcodes·
Great question and honestly, it's the hardest part to get right. Right now, I'm using a mix of rule‑based checks (hedging, weak phrasing) layered with an LLM that's been tuned on examples of clear vs unclear writing. Still iterating heavily, the goal is consistency without over‑correcting and losing someone's voice
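Rule-based checks for hedging and weak phrasing, as described above, can be a handful of regexes before any LLM gets involved. A minimal sketch; this pattern list is hypothetical (a production rule set would carry far more patterns), and `flag_hedging` is an invented name, not Thynq's actual implementation:

```python
import re

# Hypothetical hedge/weak-phrasing patterns for illustration.
HEDGES = [r"\bI think\b", r"\bsort of\b", r"\bkind of\b",
          r"\bmaybe\b", r"\bperhaps\b", r"\bjust\b"]

def flag_hedging(text: str) -> list:
    """Collect every hedge/weak-phrasing match, leaving the rewrite
    decision to the author (or a tuned LLM) to preserve voice."""
    hits = []
    for pattern in HEDGES:
        hits += re.findall(pattern, text, flags=re.IGNORECASE)
    return hits

sample = "I think this is maybe just a draft."
flags = flag_hedging(sample)
```

Flagging rather than auto-rewriting matches the stated goal: the rules surface weak phrasing, while the tuned LLM (or the writer) decides whether a given "just" is actually part of their voice.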
1
0
0
23
Audiencon⚡️
Audiencon⚡️@audiencon·
drop your project i’m boosting builders tonight 👇
389
2
157
13.2K
Neil Sharma
Neil Sharma@realNeilSharma·
@shubh12x @NaivaidyaY66600 Long term memory in a voice companion is really hard to get right. How are you making sure recalled context doesn’t lead to weird or off tone responses over time?
1
0
0
9
Shubham
Shubham@shubh12x·
@NaivaidyaY66600 Voice AI companion that actually remembers every conversation. Call it like a friend — it knows your name, your stories, what you said last week. Launched this week as a solo founder (MBA + AI engineer). velur.ai #buildinpublic
1
0
2
19
Navi
Navi@NaivaidyaY66600·
What are you building this week? Share your product's link. (Count it as marketing; the last post got 10K views) 🫡
220
0
122
7.2K
Neil Sharma
Neil Sharma@realNeilSharma·
@Anoop_Goudar Emotional support AI is one of the hardest categories to get right. How are you making sure TeddyBuddy doesn’t give harmful or off tone responses to someone in a vulnerable moment?
1
0
1
3
Neil Sharma
Neil Sharma@realNeilSharma·
@touseefcodes @audiencon Thynq looks interesting. How are you making sure the writing feedback is actually good quality across different users and writing styles? That seems like the hardest part to get consistent.
1
0
2
16
touseef
touseef@touseefcodes·
@audiencon Building two things: ShipQuick — SaaS boilerplate (launch in 15 min) Thynq — AI writing coach (free beta) Both at shipquick.app | thynq.org.
1
0
0
17
M2
M2@IMoayyad_·
@donatelli2026 Already building. Just launched an AI app solo. If you understand distribution the way your numbers suggest, I think we could do something real together. DM me.
1
0
0
264