
GitFish
@gitfish
Launch a coin for any GitHub project. Support open source projects at the earliest stages 👇

Interested in how to build with x402? Join us tomorrow to learn about x402 and build on Solana for the x402 hackathon x.com/i/broadcasts/1…



Meteora has the Best Tech. The Best Launchpads. And today, we’re taking it to the next level.

We’re dropping 3 massive product upgrades that will change how tokens launch forever:
- Presale Vaults
- Meteora Invent
- Dynamic Fee Sharing

You’re gonna want to see this.




#3 | Why FEHelper Has Survived 15 Years

When I started FEHelper back in 2011, I never expected it to last this long. Fifteen years later, it’s still alive, not because of marketing hype, but because of a community, a purpose, and countless small acts of trust.

Around 2014, FEHelper’s user base on Chrome began to grow noticeably. Users kept sending me requests, ideas, and feedback, and maintaining everything alone became harder. So I decided to open-source it on GitHub: github.com/zxlie/FeHelper. That decision changed everything.

After going open source:
- The repo began receiving stars and forks (now ~5.4k stars and ~1.3k forks)
- Developers submitted issues and pull requests; the project entered a real “community maintenance” phase
- FEHelper started to surface frequently in developer blogs, technology portals like Juejin and CSDN, plugin recommendation lists, and “must-have frontend tools” articles
- Users wrote tutorials, shared usage scenarios, and recommended the tool to colleagues
- Some enthusiastic users even created WeChat groups for FEHelper, where they discuss bugs, features, and usage tips
- Across platforms like WeChat, Douyin (TikTok China), Toutiao, and blogs, users also spread the word, writing short posts or sharing screenshots

Because of this organic energy, I kept carving out time from my day job to fix bugs, optimize features, and respond to feedback. Every issue, every pull request, every message, and yes, every small donation, was a reminder that someone was using and believing in what I built.

One touching detail: even though FEHelper remains fully free and open source, many users have sent me small red envelopes (donations) simply to say “thank you.” I never asked for them, but receiving those gestures was deeply motivating.

Recently, I was invited by @gitfish to officially list FEHelper on their platform, bringing it to more eyes and reinforcing its place in the open source ecosystem. For me, that feels like a new chapter unfolding. FH$ (gitfish.dev/repo/zxlie/FeH…)

So why has FEHelper survived 15 years? Not by chasing trends or marketing pushes, but through real connections with users, trust built over time, and small contributions accumulating into momentum.

Forever grateful to everyone who has used, recommended, contributed to, or supported FEHelper. Next time, I’ll talk about where I’d like to take FEHelper next: AI-powered assistants, agent integration, lightweight IDE embedding, and more.


Hi everyone, I’m 阿烈叔 (烈神), the author of FeHelper ($FH), an open-source browser extension built for developers that bundles 30+ practical tools. It has been 10 years since the first release, and more than 200,000 developers worldwide use it today. Recently I truly felt the power of crypto for the first time: going from zero exposure to watching an on-chain community support creators has been a stunning experience!

Excited to release new repo: nanochat! (It’s among the most unhinged I’ve written.) Unlike my earlier similar repo nanoGPT, which only covered pretraining, nanochat is a minimal, from-scratch, full-stack training/inference pipeline of a simple ChatGPT clone in a single, dependency-minimal codebase. You boot up a cloud GPU box, run a single script, and as little as 4 hours later you can talk to your own LLM in a ChatGPT-like web UI.

It weighs ~8,000 lines of imo quite clean code to:
- Train the tokenizer using a new Rust implementation
- Pretrain a Transformer LLM on FineWeb, evaluate CORE score across a number of metrics
- Midtrain on user-assistant conversations from SmolTalk, multiple-choice questions, and tool use
- SFT, then evaluate the chat model on world-knowledge multiple choice (ARC-E/C, MMLU), math (GSM8K), and code (HumanEval)
- Optionally RL the model on GSM8K with “GRPO”
- Run efficient inference on the model in an Engine with KV cache, simple prefill/decode, and tool use (Python interpreter in a lightweight sandbox); talk to it over CLI or a ChatGPT-like WebUI
- Write a single markdown report card, summarizing and gamifying the whole thing

Even for as low as ~$100 in cost (~4 hours on an 8XH100 node), you can train a little ChatGPT clone that you can kind of talk to, and which can write stories/poems and answer simple questions. About ~12 hours surpasses the GPT-2 CORE metric. As you further scale up towards ~$1000 (~41.6 hours of training), it quickly becomes a lot more coherent and can solve simple math/code problems and take multiple-choice tests. E.g., a depth-30 model trained for 24 hours (about equal to the FLOPs of GPT-3 Small 125M, i.e. 1/1000th of GPT-3) gets into the 40s on MMLU, the 70s on ARC-Easy, the 20s on GSM8K, etc.

My goal is to get the full “strong baseline” stack into one cohesive, minimal, readable, hackable, maximally forkable repo. nanochat will be the capstone project of LLM101n (which is still being developed).

I think it also has potential to grow into a research harness, or a benchmark, similar to nanoGPT before it. It is by no means finished, tuned, or optimized (actually I think there’s likely quite a bit of low-hanging fruit), but I think the overall skeleton is ok enough that it can go up on GitHub, where all of its parts can be improved. Link to the repo and a detailed walkthrough of the nanochat speedrun is in the reply.
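The prefill/decode split with a KV cache that the post mentions can be sketched in a few lines. This is a toy illustration of the general technique, not nanochat’s actual Engine; every name here (`ToyKVCache`, `prefill`, `decode_step`) is hypothetical, and the “model” is just identity keys/values to keep the attention arithmetic visible:

```python
# Toy sketch of prefill/decode with a KV cache (illustrative only;
# not nanochat's actual Engine -- all names here are hypothetical).
import math

class ToyKVCache:
    """Stores per-position keys/values so past tokens aren't re-encoded."""
    def __init__(self):
        self.keys = []    # one vector (list of floats) per cached position
        self.values = []

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

    def attend(self, q):
        # Scaled dot-product attention of query q over all cached keys.
        d = len(q)
        scores = [sum(ki * qi for ki, qi in zip(k, q)) / math.sqrt(d)
                  for k in self.keys]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [x / z for x in w]
        # Weighted sum of the cached values.
        return [sum(wi * v[j] for wi, v in zip(w, self.values))
                for j in range(d)]

def prefill(cache, prompt_vecs):
    # Prefill phase: ingest the whole prompt once, filling the cache.
    for x in prompt_vecs:
        cache.append(x, x)  # toy model: key = value = input vector

def decode_step(cache, q):
    # Decode phase: one token at a time, attending over the cache,
    # then appending this step's key/value for the next step.
    out = cache.attend(q)
    cache.append(q, q)
    return out

cache = ToyKVCache()
prefill(cache, [[1.0] * 4, [0.0] * 4])
y = decode_step(cache, [1.0] * 4)
```

The point of the cache is that each decode step only computes attention for the newest token; everything earlier is looked up, which is what makes autoregressive generation cheap after the one-time prefill pass.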


1/ Last week, we shipped GitFish V2, letting anyone raise capital and attention for open source projects by launching a coin. Here’s how it works:





