skyfishb | 하늘치B
3.3K posts

@skyfishb
💼 Healthcare AI CEO | 🎨 AI Artist | 🤖 BitAngel Builder 💎 Old NFT Holic | ✨ Crypto Enthusiast 🚀 Making Magic Happen


Developers who use AI to write technical docs, take note: you'll want to save this tool. Before, whenever I had AI generate flowcharts or architecture diagrams, I had to manually switch over to a separate diagramming app, which was a huge pain. Then I stumbled on a skill pack called Markdown Viewer that lets the AI generate all kinds of professional diagrams right inside Markdown, no more bouncing between tools.
① Ships with 14 professional drawing skills: architecture diagrams, flowcharts, state diagrams, deployment diagrams, class diagrams... over a hundred chart types to choose from
② Supports five major rendering engines including PlantUML and Vega, and can also output beautifully typeset HTML info cards
③ One-click install, supported by all the mainstream AI coding tools: Claude Code, Cursor, Codex
If you write technical docs regularly and want your AI to illustrate them automatically, bookmark it first and sort out the rest later.
🔗 github.com/markdown-viewe…
----------------------------------
Searching the whole web for AI super-individuals · OPC: registration for the first OPC competition, hosted by the World Artificial Intelligence Conference (WAIC), closes in 7 days!
2026 WAIC OPC event page: worldaic.com.cn/fabu?uuid=5915…
#蓝鸟会 dedicated registration channel (for details, add the WeChat group: bluebirdlabs): doc.weixin.qq.com/forms/AHIAwgf5…
Registration deadline: 2026.4.30
Tip: you can enter as a team, and submitting multiple projects improves your odds of making the shortlist!
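To illustrate the workflow the post describes (a hypothetical snippet, not taken from Markdown Viewer's docs), the idea is that the AI emits a fenced diagram block directly in the Markdown file, and the renderer turns it into an image in place:

```plantuml
@startuml
' A minimal sequence diagram rendered inline, no separate drawing tool needed
actor Developer
participant "AI Assistant" as AI
participant Renderer
Developer -> AI : "add an architecture diagram"
AI -> Renderer : emit PlantUML block in Markdown
Renderer --> Developer : rendered diagram inline
@enduml
```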


Excited to share Visa CLI, the first experimental product from Visa Crypto Labs. Check it out and request access here visacli.sh

The identity behind 'gabagool22', an account posting legendary returns on the prediction market Polymarket, has leaked, and it's causing an uproar. It all started with a single photo a Chinese trader posted to WeChat captioned "my new farm". The photo showed 40 servers running simultaneously, and it turned out this was not a crypto mining rig but a bot farm for automated Polymarket trading.

The user's main source of profit is the 15-minute Bitcoin price prediction market. Rather than gambling on outcomes, the bots scan around the clock for fleeting windows where the prices of 'YES' and 'NO' sum to less than $1, locking in risk-free arbitrage.

The publicly known track record is even more staggering: roughly 1.27 billion won in cumulative profit, about 44 million won per week, a steady 200+ million won per month. Over 680,000 people are now tracking this account's wallet, scrambling to copy the bot's strategy.

With it now clear that this is a victory of carefully engineered capital and technology rather than human intuition, retail traders are reacting with a mix of resigned "how do you even beat this" and calling it "the final boss of prediction markets".
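The core of the strategy above (buy both sides whenever YES + NO costs less than the guaranteed $1 payout) can be sketched in a few lines. This is a toy illustration, not Polymarket's actual API; the function and fee parameter are hypothetical:

```python
def arbitrage_profit(yes_price, no_price, fee=0.0):
    """Risk-free profit per YES/NO pair, in dollars.

    Exactly one side pays out $1 at resolution, so if the combined
    cost of buying both sides (plus fees) is under $1, the difference
    is locked in regardless of the outcome. Returns 0.0 if no edge.
    """
    cost = yes_price + no_price + fee
    return max(0.0, 1.0 - cost)

# A fleeting mispricing: YES at $0.55, NO at $0.42 leaves ~$0.03 per pair.
edge = arbitrage_profit(0.55, 0.42)
```

A real bot would do this scan continuously across markets and race to fill both legs before the spread closes, which is why it takes 40 servers rather than a spreadsheet.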


Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was an already fairly well hand-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually: you come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of what I do daily for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of experimental results and used them to plan the next experiments. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I hadn't found them manually before, and they stack up and actually improved nanochat. Among the bigger findings:

- It noticed an oversight: my parameterless QKnorm didn't have a scalar multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization, and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.
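The QKnorm point can be illustrated with a small NumPy sketch (this is not nanochat's code, just the underlying idea): with queries and keys normalized to unit length, attention logits are bounded in [-1, 1], so the softmax stays diffuse; a scalar multiplier widens that range and lets attention sharpen.

```python
import numpy as np

def qk_norm(x, scale=1.0, eps=1e-6):
    # Normalize each query/key vector to unit length, then apply a scalar.
    return scale * (x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps))

def attn_entropy(q, k, scale):
    # Mean softmax entropy of each query's attention over all keys.
    logits = qk_norm(q, scale) @ qk_norm(k, scale).T
    p = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return float(-(p * np.log(p)).sum(axis=-1).mean())

rng = np.random.default_rng(0)
q, k = rng.normal(size=(4, 64)), rng.normal(size=(16, 64))
# On the same inputs, a larger multiplier yields sharper (lower-entropy)
# attention; without one, the distribution stays close to uniform.
diffuse, sharp = attn_entropy(q, k, scale=1.0), attn_entropy(q, k, scale=8.0)
```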
This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has a more efficient proxy metric, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
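The tune-small-then-promote loop described above could be sketched like this (an entirely hypothetical structure, not an actual autoresearch implementation): score candidate changes at the cheapest scale, keep the survivors, and re-evaluate them at progressively larger and costlier scales.

```python
def promote(candidates, evaluate, scales=(12, 24, 48), keep=4):
    """Filter candidate changes through increasingly expensive scales.

    candidates: proposed changes (any hashable descriptors).
    evaluate(change, scale) -> validation loss (lower is better);
    in practice this would launch a training run at that model depth.
    """
    pool = list(candidates)
    for scale in scales:
        # Only survivors of the cheap scale earn a run at the next one.
        pool = sorted(pool, key=lambda c: evaluate(c, scale))[:keep]
    return pool  # changes that held up at every scale
```

A toy check with a synthetic loss surface: `promote(range(10), lambda c, s: abs(c - 3) / s, keep=2)` keeps the candidates nearest the optimum 3 at every scale.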


Installing Openclaw on Windows is going to make me go bald, ffs
