Julien Pro

1K posts


@JulienProDotCom

French indie hacker trying to escape useless personal development tweets. Project failure is my most successful habit. Follow-back.

France · Joined May 2021
241 Following · 133 Followers
Julien Pro
Julien Pro@JulienProDotCom·
@aye_aye_kaplan @jonathan_wilke Your status page shows no issues, but my Cursor Pro subscription has been very slow for about 6 hours, ever since the last update. What's going on?
1 reply · 0 reposts · 1 like · 312 views
Jon Kaplan
Jon Kaplan@aye_aye_kaplan·
@jonathan_wilke We haven’t deleted the old UI. It’s still there, and we’re still investing heavily in it. We’ve grown the team a lot for both surfaces!
8 replies · 0 reposts · 57 likes · 1.9K views
Jonathan Wilke
Jonathan Wilke@jonathan_wilke·
I don't really like the new UI of Cursor. If I wanted that kind of UI I would be using Codex or Claude Desktop, but I actually want a code editor with AI chat. Maybe it's time to look for an alternative.
57 replies · 1 repost · 165 likes · 23K views
Julien Pro
Julien Pro@JulienProDotCom·
@jonathan_wilke Since this last update, my "auto mode" is so slow it's useless. I'm on the Pro plan, which was perfectly fine until now. Posted on Reddit to ask for information ➡️ censored. What's going on with Cursor? Seems shady as fuck.
0 replies · 0 reposts · 2 likes · 1K views
Cursor
Cursor@cursor_ai·
Cursor can now attach demos and screenshots of its work to PRs it opens. Your team can review artifacts created by cloud agents directly in GitHub.
115 replies · 136 reposts · 2.1K likes · 322K views
Julien Pro
Julien Pro@JulienProDotCom·
Cursor subreddit not allowing negative comments anymore, it seems. Massive censorship.
Julien Pro tweet media
0 replies · 0 reposts · 0 likes · 2.1K views
Julien Pro
Julien Pro@JulienProDotCom·
And on top of that, with time-tracking software/extensions (for example an extension that measures active reading time on web pages to rate how interesting the user found them, or time spent on GitHub repos), all of it feeding the Obsidian "datalake", the possibilities are endless, especially for proactivity (the LLM telling the user that a major new version of a repo they were interested in has been released).
- Obsidian has real potential to become the OS of LLMs.
- OS-level and browsing-level tracking is still under-exploited, starting with Firefox's places.sqlite, for example.
Obsidian just needs to ship a real CLI with Dataview output 🤨
1 reply · 0 reposts · 0 likes · 85 views
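A minimal sketch of the places.sqlite idea from the post above, assuming Firefox's standard moz_places table; the profile path, vault note path, and filters are purely illustrative:

```python
import datetime
import pathlib
import shutil
import sqlite3
import tempfile

# Adjust to your Firefox profile; the path below is an assumption.
PLACES = pathlib.Path.home() / ".mozilla/firefox/xxxxxxxx.default/places.sqlite"
# Hypothetical note inside an Obsidian vault.
VAULT_NOTE = pathlib.Path.home() / "Vault/browsing/places-of-interest.md"

# Firefox keeps the live database locked, so query a copy.
copy = pathlib.Path(tempfile.gettempdir()) / "places-copy.sqlite"
shutil.copy(PLACES, copy)

con = sqlite3.connect(copy)
rows = con.execute(
    """SELECT url, title, visit_count, last_visit_date
       FROM moz_places
       WHERE visit_count >= 5 AND url LIKE 'https://github.com/%'
       ORDER BY visit_count DESC LIMIT 50"""
).fetchall()
con.close()

lines = ["# Places of interest (auto-generated)", ""]
for url, title, count, last_us in rows:
    # last_visit_date is stored in microseconds since the Unix epoch.
    when = datetime.datetime.fromtimestamp((last_us or 0) / 1_000_000).date()
    lines.append(f"- [{title or url}]({url}): {count} visits, last {when}")

VAULT_NOTE.parent.mkdir(parents=True, exist_ok=True)
VAULT_NOTE.write_text("\n".join(lines))
```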
Supersocks
Supersocks@iamsupersocks·
Karpathy just mass-dropped a complete workflow for building personal LLM-driven wikis: raw data ingestion → compilation into a .md wiki → Q&A over ~400K words without fancy RAG → visual outputs (slides, graphs) fed back into the base. All of it inside Obsidian. The underrated part is the "linting": having the LLM audit the base to find inconsistencies, missed connections, and data to fill in. Your queries are no longer throwaway; they enrich the base. But the real step is at the end of the post, and nobody is talking about it: once your wiki is rich enough, you generate synthetic data and fine-tune a model on it. Your LLM no longer consults your notes, it knows them. No more limiting context window, no more approximate RAG. A model that has internalized your domain in its weights. That's the direction. He says himself there is an incredible product to build on top of this. Someone is going to ship it within the next 3 months.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

16 replies · 7 reposts · 76 likes · 14.2K views
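Karpathy's "extra tools" step is the easiest part to picture. A rough, self-contained sketch of the kind of naive search engine he mentions, run over a directory of .md wiki files; the scoring and CLI shape are illustrative, not his actual tool:

```python
#!/usr/bin/env python3
"""Naive keyword search over a directory of markdown wiki files."""
import pathlib
import re
import sys


def score(text: str, terms: list[str]) -> int:
    # Crude term-frequency score: total occurrences of all query terms.
    low = text.lower()
    return sum(len(re.findall(re.escape(t), low)) for t in terms)


def search(wiki_dir: str, query: str, top_k: int = 10):
    terms = query.lower().split()
    hits = []
    for path in pathlib.Path(wiki_dir).rglob("*.md"):
        s = score(path.read_text(errors="ignore"), terms)
        if s:
            hits.append((s, path))
    return sorted(hits, key=lambda h: h[0], reverse=True)[:top_k]


if __name__ == "__main__":
    wiki, query = sys.argv[1], " ".join(sys.argv[2:])
    for s, path in search(wiki, query):
        print(f"{s:5d}  {path}")
```

The same script doubles as the CLI tool handed to the agent, which matches the "hand it off to an LLM via CLI" usage he describes.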
Julien Pro
Julien Pro@JulienProDotCom·
That's the way to go. Without forgetting one thing: browsing history. There is so much data behind that, either via places.sqlite (for Firefox), or directly through a proxy that builds a dated "places of interest" note in the Obsidian KM. The other way would be a browser extension that also records the active time spent on each URL to rate how attentively the page was actually read. Full use case: let's say you regularly browse GitHub projects. Based on the time spent on each one, your LLM could automatically inform you that a major update has been released, without any instruction, configuration, or "heartbeat/cron". Obsidian is definitely a nice datalake for LLMs.
Andrej Karpathy@karpathy

(Quoted post "LLM Knowledge Bases" by @karpathy, identical to the one quoted above.)

0 replies · 0 reposts · 0 likes · 36 views
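The GitHub use case above needs nothing beyond public endpoints. A rough sketch, where the repo list would in practice be derived from the browsing-history note and the cache file name is made up:

```python
import json
import pathlib
import urllib.request

STATE = pathlib.Path("seen_releases.json")  # hypothetical local cache
REPOS = ["laravel/framework", "obsidianmd/obsidian-releases"]  # would come from browsing history

seen = json.loads(STATE.read_text()) if STATE.exists() else {}
for repo in REPOS:
    url = f"https://api.github.com/repos/{repo}/releases/latest"
    try:
        with urllib.request.urlopen(url) as resp:
            tag = json.load(resp).get("tag_name")
    except Exception:
        continue  # repo has no releases, or we hit the rate limit
    if tag and seen.get(repo) != tag:
        # In the full idea this line would append to an Obsidian note instead.
        print(f"New release for {repo}: {tag}")
        seen[repo] = tag
STATE.write_text(json.dumps(seen, indent=2))
```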
Julien Pro
Julien Pro@JulienProDotCom·
/me developing an Openclaw/Hermes agent in PHP with Laravel and NeuronAI. Honestly, not that hard so far. But you need an external stack:
▶️ CliProxyAPI, so you don't have to code the OAuth flows of the main LLM services
▶️ A Python Textual wrapper for the console UI (yes, nothing like that exists in PHP).
The big difference with other agentic systems: it will be tailored to multi-team and multi-agent cooperation. Hermes did the opposite. Openclaw did everything, but it's a mess. Multiple teams, multiple agents. That's the goal. But I know myself, I'll trash this project in a week even though there is absolutely no difficulty 😂
GIF
0 replies · 0 reposts · 0 likes · 113 views
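On the Textual side, a minimal console UI of the kind mentioned above is only a few lines; how it talks to the PHP/Laravel backend is left as a placeholder, so this is a sketch rather than a working bridge:

```python
from textual.app import App, ComposeResult
from textual.widgets import Footer, Header, Input, RichLog


class AgentConsole(App):
    """Tiny chat-style console: a scrolling log plus an input line."""

    def compose(self) -> ComposeResult:
        yield Header()
        yield RichLog(id="log")
        yield Input(placeholder="Message the agent...")
        yield Footer()

    def on_input_submitted(self, event: Input.Submitted) -> None:
        log = self.query_one(RichLog)
        log.write(f"you> {event.value}")
        # Placeholder: here the message would be sent to the Laravel agent
        # over HTTP and the reply written back into the log.
        self.query_one(Input).value = ""


if __name__ == "__main__":
    AgentConsole().run()
```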
Julien Pro
Julien Pro@JulienProDotCom·
Unpopular opinion: try-catch has always been nonsense. I don't know why it exists, and I don't know why ALL LLMs pollute my code with it. There is absolutely no good reason.
0 replies · 0 reposts · 0 likes · 26 views
Julien Pro
Julien Pro@JulienProDotCom·
There are a lot of debates about MCPs: are they useful or not? But there's one thing I never see in the comments. There will be a single, unique MCP, and it won't be called that. It will be called an AI gateway. Guys, we already saw this move in the security industry. This will be the same.
0 replies · 0 reposts · 0 likes · 16 views
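A minimal sketch of what such an AI gateway could look like: one HTTP service exposing a discoverable tool registry with per-client permissions and a single place to log every call. The route, header, and permission model are invented for illustration, not an existing protocol:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical registry: tools live behind the gateway, not inside each agent.
TOOLS = {
    "web_search": {"description": "Search the web", "allowed_clients": ["research-team"]},
    "run_sql": {"description": "Read-only SQL queries", "allowed_clients": ["dev-team"]},
}


class Gateway(BaseHTTPRequestHandler):
    def do_GET(self):
        client = self.headers.get("X-Client-Id", "")
        if self.path == "/tools":
            # Auto-discovery: a client only sees the tools it is allowed to use.
            visible = {
                name: tool["description"]
                for name, tool in TOOLS.items()
                if client in tool["allowed_clients"]
            }
            body = json.dumps(visible).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()
        # A real gateway would also log every request here (central audit trail).


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Gateway).serve_forever()
```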
Julien Pro
Julien Pro@JulienProDotCom·
@levelsio Next step: skills and tools baked into agents are dead. They should live on a manageable remote gateway, with one unique protocol and auto-discovery. This would allow many things: permissions, remote instances, logging, dynamic loading, etc.
0 replies · 0 reposts · 1 like · 184 views
@levelsio
@levelsio@levelsio·
Thank god MCP is dead. Just as useless an idea as LLMs.txt was. It's all dumb abstractions that AI doesn't need, because AIs are as smart as humans, so they can just use what was already there, which is APIs.
Morgan@morganlinton

The cofounder and CTO of Perplexity, @denisyarats just said internally at Perplexity they’re moving away from MCPs and instead using APIs and CLIs 👀

697 replies · 343 reposts · 6.2K likes · 2.1M views
Julien Pro
Julien Pro@JulienProDotCom·
Currently building a full AI agent that:
1) Supports infinite teams of agents
2) Is fully managed from Obsidian (config, memory, tasks, Knowledge Management, etc.)

I use Pi Coding Agent, which sits right between an agent and an SDK. 👉 It's so easy to develop the exact features you want with this project. Pi + Obsidian = ♥️

The goals?
➡️ Manage everything from my Obsidian
➡️ Define teams, agents, prompts, skills, and even Pi extensions from there
➡️ Have multiple teams (one for dev, one to "build a company", one for my home automation and tasks, etc.), fully customizable. Infinite teams, infinite agents, infinite workflows.
➡️ Each team manageable from Telegram or from Obsidian

Current stack
- Obsidian + TaskNotes + the Local REST API plugin
- Pi Agent without extensions, because I copy-paste code from various extensions to build one unique extension for my own needs. It's a lot easier to have a single repo to maintain.
- tmux as terminal multiplexer, because I want a screen where I see ALL exchanges within a team of agents

What is working now
1. ./launch.sh retrieves the team configuration from my remote Obsidian, launches tmux with split panes, and launches each agent of the team.
2. Each agent retrieves its configuration remotely (system prompt, model to use, skills, etc.)
3. Each agent runs an RPC server to communicate with the others.
4. Long tasks are entered in TaskNotes by the leading agent and assigned to other agents as needed.

Why Pi?
👉 Because it's an extremely simple SDK with a lot of extensions, so a lot of code samples.
👉 It's the first time, with AI, that I've said to myself: "Ok, now I can really do everything I want". SKY IS THE LIMIT 🚀 pi.dev
0 replies · 0 reposts · 0 likes · 60 views
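A rough sketch of the launch step described above: pull a team definition from an Obsidian vault through the Local REST API plugin, then open one tmux pane per agent. The port, the note format (one agent name per list item), and the per-agent command are assumptions to adapt to a real setup:

```python
import subprocess

import requests

OBSIDIAN = "http://127.0.0.1:27123"  # Local REST API plugin; port is an assumption
TOKEN = "YOUR_API_KEY"
TEAM_NOTE = "teams/dev-team.md"  # hypothetical note listing the team's agents

# Fetch the team note from the vault.
resp = requests.get(
    f"{OBSIDIAN}/vault/{TEAM_NOTE}",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
agents = [line.lstrip("- ").strip() for line in resp.text.splitlines() if line.strip()]

# One tmux session, one pane per agent, all visible on a single screen.
session = "agent-team"
subprocess.run(["tmux", "new-session", "-d", "-s", session], check=True)
for i, agent in enumerate(agents):
    if i > 0:
        subprocess.run(["tmux", "split-window", "-t", session], check=True)
        subprocess.run(["tmux", "select-layout", "-t", session, "tiled"], check=True)
    # The agent command is illustrative; each pane runs one agent process.
    subprocess.run(["tmux", "send-keys", "-t", session, f"pi --agent {agent}", "C-m"], check=True)
subprocess.run(["tmux", "attach", "-t", session], check=True)
```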
Julien Pro
Julien Pro@JulienProDotCom·
Hi Matthew. You should definitely have a look at Pi Coding Agent and make a video. It would deserve one. Pi Agent is a nice spot between an "agent with extensions" and an SDK. It will explode, because if you want to implement a concept, this is the only open-source product that allows it easily.
0 replies · 0 reposts · 0 likes · 1 view
Julien Pro
Julien Pro@JulienProDotCom·
Created an X community around Pi Coding Agent. For those who are late, this is a minimal coding agent which can be extended easily. It already has many available packages to set up your own agent based on your exact needs and workflows. Pi Agent is the future.
Sherwin Techico@shrwnsan

@IndyDevDan Get more info about Pi agent at pi.dev
You can install via AgentBox or npm like so: npm install -g @mariozechner/pi-coding-agent
PS. Pi packages are like Claude /plugin. Check em out at pi.dev/packages for inspo

0 replies · 0 reposts · 0 likes · 55 views
Julien Pro
Julien Pro@JulienProDotCom·
@taylorotwell Frustrated with Openclaw, this is a nightmare. Code it Taylor, seriously.
0 replies · 0 reposts · 0 likes · 281 views
Taylor Otwell
Taylor Otwell@taylorotwell·
Sorry for so much shipping today 😅 but, we also just launched a new starter kit. Laravel + Svelte + Inertia. ❤️ Just update your Laravel installer for access.
Taylor Otwell tweet media
53 replies · 60 reposts · 778 likes · 67.4K views
Julien Pro
Julien Pro@JulienProDotCom·
Unpopular opinion - One week on OpenClaw. This is not software, it's a puzzle. I've rarely seen such a mess, starting with the docs. Building with AI is nice, but it has to be managed properly. You can't do 100 commits a day and keep global coherence at the same time.
0 replies · 0 reposts · 0 likes · 38 views