LinuxTOY 🐧

9.2K posts

@linuxtoy

Created in 2006; covers (but is not limited to) Linux news, applications, and tips. Self-hosted services: https://t.co/xyslSaC7TI / YouTube channel: https://t.co/ZiD1IT3IG7

Tuxland · Joined December 2007
247 Following · 36.7K Followers
Pinned Tweet
LinuxTOY 🐧 @linuxtoy
After a period of preparation, you can now read all of the e-books we have written online, and watch the accompanying video walkthroughs. We have also opened a new "Command-Line Kung Fu" column, which aims to introduce you to interesting, practical, and fun command-line tools. In addition, a new e-book on KVM virtual machines is being written. You are welcome to visit our new site: member.selfhostedserver.com
6 · 64 · 230 · 0
LinuxTOY 🐧 retweeted
Massimo @Rainmaker1973
30 years ago today, Linus Torvalds decided to adopt Tux the penguin as the mascot for the Linux operating system, after being nibbled by a little penguin during a visit to the National Zoo & Aquarium in Canberra, Australia.
11 · 23 · 106 · 14.8K
LinuxTOY 🐧 retweeted
Tom Jøran Sønstebyseter Rønning @L1v1ng0ffTh3L4N
Microsoft Edge loads all your saved passwords into memory in cleartext — even when you’re not using them.
251 · 1.4K · 8.9K · 1.5M
LinuxTOY 🐧 retweeted
Jeff Geerling @geerlingguy
Well, this is disconcerting... following closely, as many shared hosting providers are not super quick to update: copy.fail
28 · 144 · 981 · 110K
LinuxTOY 🐧 retweeted
Wiz @wiz_io
🚨 BREAKING: Wiz Research discovered Remote Code Execution on GitHub.com with a single git push. The flaw in @github allowed unauthorized access to millions of repositories belonging to other users and organizations 🤯
96 · 997 · 4.5K · 544.6K
LinuxTOY 🐧 @linuxtoy
easy vibe: an easy-to-learn, thorough, and solid vibe-coding tutorial, covering a from-scratch introduction, beginner-to-intermediate development, and advanced development, teaching you step by step how to turn an idea into a product. (github.com/datawhalechina…
0 · 19 · 84 · 5.9K
LinuxTOY 🐧 @linuxtoy
abtop: a top-style application for AI coding agents, supporting Claude Code and Codex CLI. It can monitor token usage, context-window occupancy, rate limits, child processes, open ports, and more in real time. (github.com/graykode/abtop)
0 · 11 · 45 · 6.1K
LinuxTOY 🐧 retweeted
The Lunduke Journal @LundukeJournal
Ubuntu 26.04 (Long Term Support) is shipping tomorrow… and Canonical has published an update on their quest to replace GNU CoreUtils with Rust-based re-writes. Highlights:

- After developers raised "some serious concerns", Canonical hired an external security research firm to evaluate the Rust re-writes (known as "uutils").
- That security firm quickly found 113 significant issues, with a large portion of them being severe security issues warranting a CVE.
- Only some of those issues in the Rust re-writes have been fixed for the Ubuntu 26.04 release.
- Repeat: Ubuntu 26.04 is shipping with significant known issues in the new Rust coreutils.
- Some of the most critical Rust-rewritten commands (cp, mv, and rm) were found to contain a large number of significant "Time-of-Check to Time-of-Use" issues, the kind of issues which create race condition vulnerabilities. The kind often exploited by hackers.
- As such, cp, mv, and rm will not be shipping in Ubuntu 26.04. Even with their clear "it's fine if Ubuntu 26.04's Rust re-writes contain significant bugs" policy… the issues with cp, mv, and rm were simply TOO severe.
- Despite this undeniably disastrous rollout of the Rust-based rewrites of Coreutils, the Ubuntu team plans to ship the next release, in 6 months (26.10), with 100% of the GNU Coreutils replaced with the (currently comically broken) Rust re-writes.

discourse.ubuntu.com/t/an-update-on…
134 · 202 · 1.2K · 108.8K
LinuxTOY 🐧 retweeted
Pirat_Nation 🔴 @Pirat_Nation
Linux kernel 7.1 introduces a new in-kernel NTFS driver, a complete ground-up rewrite that delivers native read and write access to Windows NTFS volumes directly within the kernel without any userspace tools. It replaces the slower FUSE-based NTFS-3G and the previous NTFS3 driver. Performance gains include 3-5 percent faster single-threaded writes, 35-110 percent faster multi-threaded writes, and mounting times for large drives up to four times quicker. The driver also features lower CPU overhead, better integration with modern kernel infrastructure, and improved reliability. This update marks a major step forward for Linux-Windows storage interoperability. Via: tomshardware
106 · 639 · 8.3K · 367.3K
LinuxTOY 🐧 retweeted
Tommy He @lovenemesis
No installation, no GPU requirements, works on every platform: download a single file and you can deploy a GPU-accelerated local large-language-model runtime. Use the top-tier open-source #Gemma4 and let the #Firefox AI chatbot interpret the internet for you: linuxtoy.org/archives/local… cc @linuxtoy
2 · 1 · 5 · 2.6K
LinuxTOY 🐧 retweeted
Rohan Paul @rohanpaul_ai
This week, the Linux kernel project finally created a formal, project-wide policy explicitly allowing AI-assisted code contributions, as long as developers obey strict new disclosure requirements.

Torvalds' view, which gives this policy its main philosophical shape, is pretty direct: AI is just another tool. Developers submitting garbage code are not going to be fixed by more documentation, so the kernel should hold people accountable instead of trying to control the software they use on their local machines. It is a practical and reasonable line to take, especially compared with the panic in other parts of the open-source scene.

You are the one on the hook now. If Claude introduces, for example, a race condition in the block layer and you approve it, the patch carries your tag, not the model's. The Signed-off-by line is the certification for the Developer Certificate of Origin, and the latest policy makes it explicit that only humans can legally add it. AI agents "MUST NOT"

The open-source community is currently getting overwhelmed by what people are calling "AI slop": e.g. the creator of cURL closed bug bounties after a flood of hallucinated code, tldraw began automatically closing external PRs to defend itself, and projects such as Node.js and OCaml have seen huge, >10,000-line AI-generated patches
10 · 32 · 137 · 13.6K
LinuxTOY 🐧 @linuxtoy
llmfit: hundreds of models and providers; one command finds out which models your hardware can run. It matches LLM models to suitable configurations based on your system's RAM, CPU, and GPU, auto-detects the hardware, and scores every model on four dimensions (quality, speed, fit, and context), telling you which models will run smoothly on your machine. (github.com/AlexsJones/llm…
0 · 1 · 13 · 1.5K
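As a back-of-envelope illustration of the kind of fit check such a tool performs (this is not llmfit's actual scoring algorithm; the function names, the 1.2x runtime-overhead factor, and the thresholds are all assumptions for the sketch):

```python
def model_footprint_gb(params_b, bits_per_weight=4, overhead=1.2):
    """Rough memory needed to load a model, in GB.

    params_b        -- parameter count in billions (e.g. 7 for a 7B model)
    bits_per_weight -- 16 for fp16; 8 or 4 for common quantizations
    overhead        -- assumed multiplier for KV cache and runtime buffers
    """
    weights_gb = params_b * bits_per_weight / 8  # 1e9 params * bits / 8 ≈ GB
    return weights_gb * overhead

def fits(params_b, available_gb, bits_per_weight=4):
    """Does the estimated footprint fit in the available RAM/VRAM?"""
    return model_footprint_gb(params_b, bits_per_weight) <= available_gb

# A 7B model at 4-bit quantization needs roughly 4.2 GB, so it fits in
# 8 GB of free memory; a 70B model at the same quantization does not.
print(round(model_footprint_gb(7), 1), fits(7, 8), fits(70, 8))
```

A real tool would add per-dimension scores (speed from CPU/GPU class, context from leftover memory for the KV cache) on top of this basic fit test.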
LinuxTOY 🐧 retweeted
Chaofan Shou @Fried_rice
26 LLM routers are secretly injecting malicious tool calls and stealing credentials. One drained a client's wallet of $500k. We also managed to poison routers to forward traffic to us; within several hours, we could directly take over ~400 hosts. Check our paper: arxiv.org/abs/2604.08407
157 · 663 · 3.3K · 562.9K
LinuxTOY 🐧 @linuxtoy
Agent Skills: a set of production-grade engineering skills for AI coding agents. Its value lies in turning the software-building process into a defined workflow that bakes in the best practices of senior engineers. Worth a try. (github.com/addyosmani/age…)
0 · 6 · 40 · 3.7K
LinuxTOY 🐧 @linuxtoy
A new paradigm for managing knowledge bases in the LLM era~
Andrej Karpathy @karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

1 · 0 · 0 · 2.2K
LinuxTOY 🐧 retweeted
Chujie Zheng @ChujieZheng
We are planning to open-source the Qwen3.6 models (particularly medium-sized versions) to facilitate local deployment and customization for developers. Please vote for the model size you are **most** anticipating—the community’s voice is vital to us!
314 · 260 · 4.1K · 300.2K
LinuxTOY 🐧 retweeted
Google @Google
We just released Gemma 4 — our most intelligent open models to date. Built from the same world-class research as Gemini 3, Gemma 4 brings breakthrough intelligence directly to your own hardware for advanced reasoning and agentic workflows. Released under a commercially permissive Apache 2.0 license so anyone can build powerful AI tools. 🧵↓
730 · 3.1K · 20.5K · 7.7M