Watvina

2.7K posts

@Biocheming

structure-based drug screening, drug design, pharmacology

France · Joined April 2012
808 Following · 350 Followers
Watvina reposted
島らっきょう大好きバナナさん🍌
Finding an "adjuvant that makes an existing anticancer drug (paclitaxel) work better" in a marine natural product is fascinating 😳 See the Nagoya University press release quoted below for details 💁‍♂️ Roughly, there are three findings here ↓
1️⃣ Mycalolide C, a marine natural product thought to act only on actin, also acted on β-tubulin
2️⃣ The non-toxic β-tubulin-binding substructure (green in the left figure) taken out of the toxic mycalolide C enhanced paclitaxel's efficacy
3️⃣ That substructure turned out to be a molecular glue that links β-tubulin molecules together
That said, the potentiation was seen only in a human colon cancer line (HCT116); no such enhancement appeared in a human lung cancer line (A549), a human breast cancer line (MCF-7), or a mouse leukemia line (L1210) 🥲 Also, safety was checked only in mouse fibroblasts (3T3-L1), so how it behaves in the peripheral nerves, bone marrow, and gut epithelium, where paclitaxel's side effects are the real concern, is presumably future work. Personally, I'm curious why it doesn't work in the other cancer types, and what the mechanism is 🤔
名古屋大学|Nagoya Univ.@NagoyaUniv_info

☆Research result☆ Discovery of a mechanism by which a marine natural product boosts an anticancer drug's effect up to 7-fold doi.org/10.1002/anie.2… nagoya-u.ac.jp/researchinfo/r… #GraduateSchoolOfBioagriculturalSciences #NaturalProductChemicalBiology #MasakiKita #MycalolideC #ChemicalProbe #Microtubule #AnticancerDrugPotentiation #MolecularGlue

3 replies · 7 reposts · 95 likes · 11.8K views
Watvina reposted
sitin@sitinme
I recently came across an open-source project called cmux and found it genuinely interesting.

Running 3-5 Claude Code / Codex sessions at once is routine for me now, but as soon as tasks run in parallel, chaos sets in: one is editing an interface, another is running tests, a third is stuck, and you can't even tell whether it's waiting for your input or has simply died. So this open-source tool, built for exactly that scenario, caught my eye.

It stays inside the terminal, an environment everyone already knows, and fills in a few crucial gaps nobody had seriously addressed. For example, it lays each workspace's state out in the open: Git branch, port, PR status, recent output. One glance tells you what that window is doing.

It makes "which agent is waiting on you" extremely obvious. Not a notification that flashes past, but a highlight right on the corresponding pane. You no longer rely on memory to find the stuck window; a quick scan tells you where to switch next. It's the kind of experience you can't go back from.

It also pulls the browser in, and one the agent can drive: code runs in the terminal while a page opens alongside for verification, and it can even click buttons and fill forms automatically. Having "write code + verify + automate interactions" close the loop in a single interface is very close to my ideal AI workbench.

What cmux solves is exactly this: not making the AI stronger, but keeping you from losing control when many agents run in parallel.
7 replies · 18 reposts · 100 likes · 12.9K views
Watvina reposted
Laughing🪁@0xLaughing
Came across a tool that quickly converts PDFs into AI-ready formats (clean Markdown, plus JSON with coordinates, HTML, and more). It's extremely fast (100+ pages/second), runs fully locally on CPU only with no GPU needed, and is completely free and open source. It's a great fit for building a local RAG knowledge base: convert thousands of pages of papers, books, reports, or contracts to Markdown in minutes and feed them to a local LLM for Q&A and summarization.
17 replies · 190 reposts · 894 likes · 57.9K views
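A minimal sketch of the pipeline the tweet describes, assuming a hypothetical `pdf2md` command-line converter (the tool itself isn't named in the visible text); only the overall PDF → Markdown → chunks-for-RAG flow is from the tweet:

```python
# Sketch: convert a folder of PDFs to Markdown with a hypothetical
# `pdf2md` CLI, then split the Markdown into heading-delimited chunks
# a local RAG index could embed. The converter name and flags are
# assumptions; the flow (PDF -> Markdown -> chunks) is from the tweet.
import pathlib
import re
import subprocess

def convert_pdfs(src_dir: str, out_dir: str) -> list[pathlib.Path]:
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    converted = []
    for pdf in pathlib.Path(src_dir).glob("*.pdf"):
        md_path = out / (pdf.stem + ".md")
        # Hypothetical converter invocation; substitute the real tool.
        subprocess.run(["pdf2md", str(pdf), "-o", str(md_path)], check=True)
        converted.append(md_path)
    return converted

def chunk_markdown(md_path: pathlib.Path) -> list[str]:
    text = md_path.read_text(encoding="utf-8")
    # Split on level-1/2 headings so chunks follow document structure.
    parts = re.split(r"(?m)^(?=#{1,2} )", text)
    return [p.strip() for p in parts if p.strip()]

if __name__ == "__main__":
    chunks = []
    for md in convert_pdfs("pdfs", "markdown"):
        chunks.extend(chunk_markdown(md))
    print(f"{len(chunks)} chunks ready to embed into a local RAG store")
```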
Watvina reposted
Frad@FradSer
superpowers + ralph-loop = superpowers@frad-dotclaude 👀

Recent experiments show that superpowers + ralph-loop makes the original superpowers noticeably better; combined with the earlier agent-team support, it's close to a finished product.

Install:
claude plugin marketplace add FradSer/dotclaude
claude plugin install superpowers@frad-dotclaude

github.com/FradSer/dotcla…
5 replies · 26 reposts · 235 likes · 21K views
Watvina reposted
Biology+AI Daily@BiologyAIDaily
Training a Force Field for Proteins and Small Molecules from Scratch

1. The authors identify a key limitation of conventional force fields: they are tuned manually for specific chemistries, which hampers transferability and makes systematic exploration of new functional forms difficult. To overcome this, they propose an end-to-end machine-learning pipeline that learns all force-field parameters directly from data.
2. At the core is a graph neural network, Garnet, that assigns continuous atom types and predicts every bonded and non-bonded parameter. Unlike prior work such as Espaloma, Garnet is trained from scratch, with no legacy Lennard-Jones or charge parameters reused, using a blend of quantum-mechanical forces, condensed-phase thermodynamics, and protein NMR observables.
3. A major technical advance is the adoption of a double-exponential potential for dispersion interactions. Training with the traditional Lennard-Jones form proved unstable, whereas the double-exponential, with two additional global parameters, converged smoothly and delivered performance on par with standard models, while remaining only marginally slower in OpenMM.
4. On small-molecule benchmarks (OpenFF Industry Benchmark), Garnet reproduces QM-optimized geometries and relative energies better than OpenFF 2.2.1 and comparably to Espaloma, while maintaining a low force-field error on both bonded and non-bonded terms. The model also captures torsional barriers with a TFD distribution similar to state-of-the-art ML potentials.
5. Protein simulations of folded globular proteins (GB3, BPTI, HEWL, ubiquitin) remain stable for 5 µs and reproduce experimental scalar couplings with RMSEs comparable to Amber14SB and Espaloma. The model therefore demonstrates transferable accuracy across diverse biomolecular systems.
6. When applied to protein complexes, Garnet preserves the native interfaces over long trajectories, indicating that its intermolecular terms are sufficiently realistic. For intrinsically disordered proteins, the force field over-compacts the chains relative to a specialized IDP force field, yet it still captures the experimentally observed α-helical propensity, suggesting room for targeted tuning.
7. The water model, trained without special treatment, reproduces TIP3P-like density and dielectric constant but shows slightly over-polarised oxygen charges due to the use of MBIS reference data. This highlights an avenue for future refinement of solvent parameters.
8. In relative binding-free-energy calculations using OpenFE, Garnet achieves a weighted RMSE of ~1.7 kcal mol⁻¹ across eight protein–ligand systems, matching the performance of the default OpenFE protocol and approaching that of commercial FEP+. Kendall's τ and fraction-of-best-ligands metrics are similarly competitive, demonstrating that an automatically parameterised force field can support practical drug-discovery workflows.
9. Extensive functional-form exploration shows that the double-exponential and buffered 14-7 potentials train reliably, whereas the Lennard-Jones and Buckingham forms introduce numerical instability. The study thereby provides a roadmap for future force-field development that prioritises trainability alongside physical fidelity.
10. The authors argue that Garnet exemplifies a reproducible, automated pipeline for force-field discovery that can be extended to nucleic acids, lipids, metals, and carbohydrates. By integrating new data and functional forms, the approach promises a universal, high-accuracy classical force field that leverages machine learning without sacrificing speed.

💻Code: github.com/greener-group/…
📜Paper: arxiv.org/abs/2603.16770
#ComputationalBiology #MolecularDynamics #MachineLearning #ForceField #ProteinSimulations #DrugDiscovery #OpenScience
0 replies · 7 reposts · 42 likes · 2.3K views
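For reference, a sketch of the two dispersion functional forms the summary contrasts. The Lennard-Jones and double-exponential expressions below are the standard textbook forms in (ε, r_min) parameterization; the α, β values follow earlier double-exponential force-field work and are assumptions, not numbers taken from this paper:

```python
# Sketch: Lennard-Jones vs. double-exponential dispersion potentials,
# both written in (epsilon, r_min) form so the well depth is -epsilon
# at r = r_min. The alpha/beta values (~16.8 / ~4.4) follow earlier
# double-exponential force-field papers; treat them as assumptions.
import math

def lennard_jones(r: float, eps: float, r_min: float) -> float:
    x = r_min / r
    return eps * (x**12 - 2.0 * x**6)

def double_exponential(r: float, eps: float, r_min: float,
                       alpha: float = 16.766, beta: float = 4.427) -> float:
    x = r / r_min
    pre = eps / (alpha - beta)
    return pre * (beta * math.exp(alpha * (1.0 - x))
                  - alpha * math.exp(beta * (1.0 - x)))

# Both curves bottom out at -epsilon at r_min, but the exponential
# repulsive wall is softer than r^-12, which is part of why training
# is reported to be more stable with this form.
for r in (0.9, 1.0, 1.2, 1.5):
    print(r, lennard_jones(r, 1.0, 1.0), double_exponential(r, 1.0, 1.0))
```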
Watvina reposted
Biology+AI Daily@BiologyAIDaily
Chemically informed representations of amino acids enable learning beyond the canonical protein alphabet

1. Researchers develop a novel encoding that replaces the 20-letter amino acid alphabet with two-dimensional chemical structure images, allowing models to learn physicochemical properties directly from molecular geometry.
2. Peptides are represented as mosaics of individual residue images aligned to a common backbone; a convolutional autoencoder compresses these mosaics into 256-dimensional latent vectors that capture both residue identity and spatial arrangement.
3. The learned embeddings are evaluated on a peptide–MHC class I binding task, achieving competitive area-under-curve scores across multiple HLA alleles compared with traditional one-hot sequence encodings.
4. Because the representation encodes chemistry rather than symbolic labels, the model can generalize to post-translationally modified residues not seen during training, demonstrated with phosphorylated serine, threonine, and tyrosine.
5. Gradient-based saliency maps applied to the image embeddings highlight specific structural features, such as phosphate groups at anchor positions, that drive binding predictions, providing chemically interpretable explanations.
6. The approach retains predictive signal even when canonical anchor residues are absent, illustrating that physicochemical similarity can be leveraged to predict binding of chemically modified peptides.
7. Dataset limitations, including class imbalance and uneven allele coverage, are acknowledged as factors that constrain performance; larger, balanced corpora of modified peptides would further improve the method.
8. Future work proposes integrating graph-based models, pretraining on extensive peptide image collections, and extending the framework to other PTMs and non-canonical amino acids.

📜Paper: biorxiv.org/content/10.648…
#ComputationalBiology #ProteinEngineering #MachineLearning #PTMs #MHCBinding #Chemoinformatics
0 replies · 3 reposts · 28 likes · 2.3K views
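A minimal PyTorch sketch of the encoding idea: a convolutional autoencoder that compresses a residue-image mosaic into a 256-dimensional latent vector. The 1×128×128 input size and layer widths are assumptions for illustration; only "conv autoencoder → 256-d latent" is from the summary:

```python
# Sketch: convolutional autoencoder mapping a peptide "mosaic" image
# to a 256-d latent, per the paper summary. Input size and channel
# counts are illustrative assumptions.
import torch
import torch.nn as nn

class MosaicAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 128 -> 64
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 32 -> 64
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 64 -> 128
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)        # 256-d chemical embedding
        return z, self.decoder(z)  # latent + reconstruction

z, recon = MosaicAutoencoder()(torch.rand(2, 1, 128, 128))
print(z.shape, recon.shape)  # [2, 256] and [2, 1, 128, 128]
```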
Watvina reposted
jinyuansun@jinyuansun39143
I'm open-sourcing a PyMOL Skill for molecular visualization! 🧬 It lets you control PyMOL in natural language from agents like Claude Code and OpenClaw! Repo ↓ github.com/ChatMol/ChatMo… #OpenClaw #ChatMol #AI #PyMol
2 replies · 80 reposts · 518 likes · 35K views
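Under the hood, a skill like this emits calls to PyMOL's own Python `cmd` API. A sketch of the kind of commands a request like "fetch ubiquitin and highlight the lysines" might translate to; the request-to-command mapping is illustrative, not the repo's actual code:

```python
# Sketch: PyMOL `cmd` API calls an agent skill might generate for
# "fetch ubiquitin, show it as a cartoon, and highlight the lysines".
# Run inside PyMOL; the mapping from prompt to commands is illustrative.
from pymol import cmd

cmd.fetch("1ubq")                      # download the structure from the PDB
cmd.hide("everything")
cmd.show("cartoon")
cmd.color("gray80")
cmd.select("lys", "resn LYS")          # named selection of all lysines
cmd.show("sticks", "lys")
cmd.color("orange", "lys")
cmd.zoom("lys")
cmd.png("ubiquitin_lysines.png", dpi=300)
```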
Watvina reposted
0xAA@0xAA_Science
I had Claude Code write auth2api by itself, a lightweight, single-account Claude OAuth-to-API proxy: github.com/AmazingAng/aut… About 2,000 lines of code; it borrows from CLIProxyAPI and sub2api but is lighter, suited to self-hosting for personal use. There is some risk of an account ban; weigh that yourself. Then again, accounts get banned even without it.
VibeShit@VibeShit_Org

🆕 auth2api — A lightweight single-account Claude OAuth to API proxy for Claude Code and OpenAI-compatible clients. 🤖 Claude Code, Codex · Claude Sonnet 4.6 vibeshit.org/product/auth2a…

8 replies · 9 reposts · 80 likes · 27.5K views
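A heavily simplified sketch of what an OAuth-to-API proxy like this does: accept a local API-style request and forward it upstream with the stored OAuth bearer token attached. The upstream URL, env var, and port below are placeholders, not auth2api's actual design; the real project's ~2,000 lines also handle token refresh, streaming, and protocol translation:

```python
# Sketch: minimal single-account token-injecting proxy, stdlib only.
# UPSTREAM, the env var, and the port are placeholders.
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.anthropic.com"       # placeholder upstream
TOKEN = os.environ["CLAUDE_OAUTH_TOKEN"]     # placeholder env var

class Proxy(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(
            UPSTREAM + self.path,
            data=body,
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {TOKEN}",  # inject stored token
            },
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            data = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type", "application/json"))
            self.end_headers()
            self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8787), Proxy).serve_forever()
```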
Watvina reposted
Varun@varun_mathur
Agentic General Intelligence | v3.0.10

We made the Karpathy autoresearch loop generic. Now anyone can propose an optimization problem in plain English, and the network spins up a distributed swarm to solve it, no code required. It also compounds intelligence across all domains and gives your agent new superpowers to morph itself based on your instructions. This is hyperspace, and it now has three powerful new features:

1. Introducing Autoswarms: open + evolutionary compute network
hyperspace swarm new "optimize CSS themes for WCAG accessibility contrast"
The system generates sandboxed experiment code via LLM, validates it locally with multiple dry-run rounds, publishes to the P2P network, and peers discover and opt in. Each agent runs mutate → evaluate → share in a WASM sandbox. Best strategies propagate. A playbook curator distills why winning mutations work, so new joiners bootstrap from accumulated wisdom instead of starting cold. Three built-in swarms ship ready to run, and anyone can create more.

2. Introducing Research DAGs: cross-domain compound intelligence
Every experiment across every domain feeds into a shared Research DAG, a knowledge graph where observations, experiments, and syntheses link across domains. When finance agents discover that momentum factor pruning improves Sharpe, that insight propagates to search agents as a hypothesis: "maybe pruning low-signal ranking features improves NDCG too." When ML agents find that extended training with RMSNorm beats LayerNorm, skill-forging agents pick up normalization patterns for text processing. The DAG tracks lineage chains per domain (ml: ★0.99←1.05←1.23 | search: ★0.40←0.39 | finance: ★1.32←1.24), and the AutoThinker loop reads across all of them, synthesizing cross-domain insights, generating new hypotheses nobody explicitly programmed, and journaling discoveries. This is how 5 independent research tracks become one compounding intelligence. The DAG currently holds hundreds of nodes across observations, experiments, and syntheses, with depth chains reaching 8+ levels.

3. Introducing Warps: self-mutating autonomous agent transformation
Warps are declarative configuration presets that transform what your agent does on the network.
- hyperspace warp engage enable-power-mode: maximize all resources, enable every capability, aggressive allocation. Your machine goes from idle observer to full network contributor.
- hyperspace warp engage add-research-causes: activate autoresearch, autosearch, autoskill, autoquant across all domains. Your agent starts running experiments overnight.
- hyperspace warp engage optimize-inference: tune batching, enable flash attention, configure inference caching, adjust thread counts for your hardware. Serve models faster.
- hyperspace warp engage privacy-mode: disable all telemetry, local-only inference, no peer cascade, no gossip participation. Maximum privacy.
- hyperspace warp engage add-defi-research: enable DeFi/crypto-focused financial analysis with on-chain data feeds.
- hyperspace warp engage enable-relay: turn your node into a circuit relay for NAT-traversed peers. Help browser nodes connect.
- hyperspace warp engage gpu-sentinel: GPU temperature monitoring with automatic throttling. Protect your hardware during long research runs.
- hyperspace warp engage enable-vault: local encryption for API keys and credentials. Secure your node's secrets.
- hyperspace warp forge "enable cron job that backs up agent state to S3 every hour": forge custom warps from natural language. The LLM generates the configuration; you review and engage.

12 curated warps ship built-in. Community warps propagate across the network via gossip. Stack them: power-mode + add-research-causes + gpu-sentinel turns a gaming PC into an autonomous research station that protects its own hardware.

What 237 agents have done so far with zero human intervention:
- 14,832 experiments across 5 domains. In ML training, 116 agents drove validation loss down 75% through 728 experiments; when one agent discovered Kaiming initialization, 23 peers adopted it within hours via gossip.
- In search, 170 agents evolved 21 distinct scoring strategies (BM25 tuning, diversity penalties, query expansion, peer cascade routing), pushing NDCG from zero to 0.40.
- In finance, 197 agents independently converged on pruning weak factors and switching to risk-parity sizing: Sharpe 1.32, 3x return, 5.5% max drawdown across 3,085 backtests.
- In skills, agents with local LLMs wrote working JavaScript from scratch: 100% correctness on anomaly detection, text similarity, JSON diffing, and entity extraction across 3,795 experiments.
- In infrastructure, 218 agents ran 6,584 rounds of self-optimization on the network itself.
Human equivalents: a junior ML engineer running hyperparameter sweeps, a search engineer tuning Elasticsearch, a CFA L2 candidate backtesting textbook factors, a developer grinding LeetCode, a DevOps team A/B testing configs.

What just shipped:
- Autoswarm: describe any goal, the network creates a swarm
- Research DAG: cross-domain knowledge graph with AutoThinker synthesis
- Warps: 12 curated + custom forge + community propagation
- Playbook curation: LLM explains why mutations work, distills reusable patterns
- CRDT swarm catalog for network-wide discovery
- GitHub auto-publishing to hyperspaceai/agi
- TUI: side-by-side panels, per-domain sparklines, mutation leaderboards
- 100+ CLI commands, 9 capabilities, 23 auto-selected models, OpenAI-compatible local API

Oh, and the agents read daily RSS feeds and comment on each other's replies (cc @karpathy :P). Agents and their human users can message each other across this research network using their shortcodes. Help with testing and join the earliest days of the world's first agentic general intelligence network (links in the follow-up tweet).
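A toy sketch of the mutate → evaluate → share loop described above; the WASM sandbox, P2P gossip, and playbook curation are stubbed out, and all names and numbers are illustrative:

```python
# Sketch: the evolutionary mutate -> evaluate -> share loop each swarm
# agent is described as running. Sandbox, gossip layer, and playbook
# curator are stubbed; the objective and parameters are toys.
import random

def mutate(strategy: dict) -> dict:
    child = dict(strategy)
    key = random.choice(list(child))
    child[key] *= random.uniform(0.8, 1.25)   # perturb one parameter
    return child

def evaluate(strategy: dict) -> float:
    # Stand-in objective; a real swarm runs sandboxed experiment code.
    return -sum((v - 1.0) ** 2 for v in strategy.values())

def share(best: dict, score: float) -> None:
    # Stand-in for gossiping the winner to peers.
    print(f"publish score={score:.4f} strategy={best}")

strategy = {"lr": 0.5, "momentum": 2.0, "weight_decay": 1.7}
best_score = evaluate(strategy)
for _ in range(30):                            # fixed budget of mutation rounds
    candidate = mutate(strategy)
    score = evaluate(candidate)
    if score > best_score:                     # keep winners, drop losers
        strategy, best_score = candidate, score
        share(strategy, best_score)
```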
Varun@varun_mathur

Autoquant: a distributed quant research lab | v2.6.9

We pointed @karpathy's autoresearch loop at quantitative finance. 135 autonomous agents evolved multi-factor trading strategies, mutating factor weights, position sizing, and risk controls, backtesting against 10 years of market data, sharing discoveries.

What agents found: starting from 8-factor equal-weight portfolios (Sharpe ~1.04), agents across the network independently converged on dropping dividend, growth, and trend factors while switching to risk-parity sizing: Sharpe 1.32, 3x return, 5.5% max drawdown. Parsimony wins. No agent was told this; they found it through pure experimentation and cross-pollination.

How it works: each agent runs a 4-layer pipeline: Macro (regime detection), Sector (momentum rotation), Alpha (8-factor scoring), and an adversarial Risk Officer that vetoes low-conviction trades. Layer weights evolve via Darwinian selection. 30 mutations compete per round. Best strategies propagate across the swarm.

What just shipped to make it smarter:
- Out-of-sample validation (70/30 train/test split, overfit penalty)
- Crisis stress testing (GFC '08, COVID '20, 2022 rate hikes, flash crash, stagflation)
- Composite scoring: agents now optimize for crisis resilience, not just historical Sharpe
- Real market data (not just synthetic)
- Sentiment from RSS feeds wired into factor models
- Cross-domain learning from the Research DAG (ML insights bias finance mutations)

The base result (factor pruning + risk parity) is a textbook quant finding; a CFA L2 candidate knows this. The interesting part isn't any single discovery. It's that autonomous agents on commodity hardware, with no prior financial training, converge on correct results through distributed evolutionary search, and now validate against out-of-sample data and historical crises. Let's see what happens when this runs for weeks instead of hours.

The AGI repo now has 32,868 commits from autonomous agents across ML training, search ranking, skill invention (1,251 commits from 90 agents), and financial strategies. Every domain uses the same evolutionary loop. Every domain compounds across the swarm. Join the earliest days of the world's first agentic general intelligence system and help with this experiment (code and links in the follow-up tweet; while optimized for CLI, browser agents participate too):

153 replies · 717 reposts · 5.1K likes · 904.5K views
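A sketch of the out-of-sample validation idea from the quoted thread: score a strategy's daily returns on a 70/30 train/test split and penalize overfitting, taken here as the gap between in-sample and out-of-sample Sharpe. The penalty weight and 252-day annualization are assumptions:

```python
# Sketch: 70/30 train/test Sharpe scoring with an overfit penalty,
# per the quoted autoquant update. Penalty weight and annualization
# constant are illustrative assumptions.
import statistics

def sharpe(returns: list[float]) -> float:
    if len(returns) < 2 or statistics.pstdev(returns) == 0:
        return 0.0
    return (statistics.mean(returns) / statistics.pstdev(returns)) * 252 ** 0.5

def composite_score(daily_returns: list[float], penalty: float = 0.5) -> float:
    split = int(len(daily_returns) * 0.7)        # 70/30 split
    train, test = daily_returns[:split], daily_returns[split:]
    s_train, s_test = sharpe(train), sharpe(test)
    overfit_gap = max(0.0, s_train - s_test)     # only penalize degradation
    return s_test - penalty * overfit_gap

print(composite_score([0.001, -0.002, 0.003, 0.0015, -0.001,
                       0.002, 0.001, -0.0005, 0.0025, 0.0002]))
```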
Watvina@Biocheming
@vista8 This thing really has to be made serial. With too many tokens flowing concurrently through one endpoint, the chance of a ban is just too high.
0 replies · 0 reposts · 0 likes · 1.8K views
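The point about serializing: funnel every request through a single slot so the endpoint never sees parallel traffic. A minimal asyncio sketch, with all names illustrative:

```python
# Sketch: force all agent calls through one slot so a single endpoint
# never sees concurrent requests (the ban-risk concern above).
# `call_api` is a stand-in for whatever client you actually use.
import asyncio

_serial = asyncio.Semaphore(1)   # capacity 1 == strictly serial

async def call_api(prompt: str) -> str:
    await asyncio.sleep(0.1)     # stand-in for the real network call
    return f"response to {prompt!r}"

async def serialized_call(prompt: str) -> str:
    async with _serial:          # queue up; one request in flight at a time
        return await call_api(prompt)

async def main():
    results = await asyncio.gather(
        *(serialized_call(f"task {i}") for i in range(5)))
    print(results)

asyncio.run(main())
```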
向阳乔木@vista8
Running your own Claude Code relay with Sub2API turns out to be this simple! It can even turn an OpenAI subscription into an API that Claude Code can use.
1. Install Redis and PostgreSQL on the server, then run: curl -sSL raw.githubusercontent.com/Wei-Shaw/sub2a… | sudo bash. After installation there's a web UI that walks you through configuring the database, accounts, and so on.
2. Account management -> add your own OpenAI subscription, Gemini subscription, etc.
3. User management -> top up your own account with enough credit (otherwise it won't work).
4. API keys -> create an API key and import it into CC Switch; you're ready to go.
25 replies · 25 reposts · 227 likes · 29.8K views
Watvina@Biocheming
While developing rocode, even without reading a single line of Rust, I can still keep the framework sound by controlling the intermediate configuration logic, the management logic, the task logic, and the information-flow logic, and polish the details through the terminal experience. The built-in agent scheduling system lets the agent_tree iterate and optimize in loops, and deciding what each stage should get and how stages connect starts to feel like snapping building blocks together.
0 replies · 0 reposts · 0 likes · 25 views
Watvina@Biocheming
@jolestar @dotey When agents interact intensively on their own, with no outside intervention, that loop of multiple agents reflecting, summarizing, trying, then reflecting and summarizing again really is PSO all over again. Still, the process goes much better when a human steps in to intervene and adjust.
0 replies · 0 reposts · 0 likes · 15 views
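For the PSO analogy: textbook particle swarm optimization, where each particle mixes its own best memory with the swarm's best, which is the reflect/summarize/retry dynamic the tweet compares multi-agent loops to. Coefficients are standard defaults and the objective is a toy:

```python
# Sketch: textbook particle swarm optimization (PSO), the algorithm
# the tweet likens self-reflecting multi-agent loops to.
import random

def objective(x: float) -> float:
    return (x - 3.0) ** 2          # minimize; optimum at x = 3

N, STEPS = 10, 50
W, C1, C2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
pos = [random.uniform(-10, 10) for _ in range(N)]   # random first step
vel = [0.0] * N
pbest = pos[:]                     # each particle's own best ("reflection")
gbest = min(pos, key=objective)    # swarm's shared best ("summary")

for _ in range(STEPS):
    for i in range(N):
        r1, r2 = random.random(), random.random()
        vel[i] = (W * vel[i]
                  + C1 * r1 * (pbest[i] - pos[i])   # pull toward own memory
                  + C2 * r2 * (gbest - pos[i]))     # pull toward swarm best
        pos[i] += vel[i]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=objective)

print(f"gbest = {gbest:.4f}")      # converges near 3.0
```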
jolestar@jolestar
@Biocheming @dotey Chat is the easy case: the context is all text, so a chat-box form factor works. What I'm talking about here isn't linguistic interaction.
1 reply · 0 reposts · 0 likes · 34 views
宝玉@dotey
A good topic: how should developers embed AI into products? How will AI change existing products now and in the future, and which practical technical shifts do developers need? Anyone interested is welcome to weigh in below 💡
Alice@John59346988

@dotey Baoyu, how about organizing a few discussion sessions on how developers can embed AI into products? We could discuss how AI will change existing products now and going forward, and which practical technical shifts developers need.

30 replies · 9 reposts · 69 likes · 29K views
Watvina@Biocheming
@jolestar @dotey My view is still that, with human-machine interaction as the premise, you first straighten out the intent and the line of thought; only then can the agent do its job well. In other words, the agent's context has to be aligned with ours: not just intent alignment, but alignment of thinking too. Does that hurt the AI's generation? No; it makes the output more targeted and more logical. Even if the first step is divergent thinking, that is still like the random first step of particle swarm optimization.
1 reply · 0 reposts · 1 like · 34 views
jolestar@jolestar
@dotey I've recently been trying to build a highly interactive Agent application. The key is answering one question: does the application logic drive the interaction with the Agent supplying intelligence, or does the Agent drive the interaction with the application supplying tools? The two mindsets are completely different, and so are the difficulties you run into.
3 replies · 0 reposts · 4 likes · 605 views
Watvina reposted
Gorden Sun@Gorden_Sun
Reka Edge: the best open multimodal small model. At 7B it scores above Qwen 3.5 9B while using fewer tokens and running faster. Many of Reka's founding members came from Google DeepMind, so it inherits a lot of Gemini's multimodal experience. Model: huggingface.co/RekaAI/reka-ed…
6 replies · 49 reposts · 273 likes · 31.7K views
Watvina reposted
Viking@vikingmute
If you use Claude Code with a local model, you'll find local inference slows down by about 90%. The reason: CC automatically adds an Attribution Header to each request, which completely invalidates the local inference server's KV cache. Once the cache is invalidated, you're effectively running with almost no caching at all, and everything gets very slow. The fix: in ~/.claude/settings.json, add "CLAUDE_CODE_ATTRIBUTION_HEADER": "0" to the env field. Also, this Unsloth article is excellent: unsloth.ai/docs/basics/cl… It walks you through running local models with Claude Code and covers all the pitfalls; very thorough. If you run models locally, it's worth bookmarking.
Unsloth AI@UnslothAI

Note: Claude Code invalidates the KV cache for local models by prepending some IDs, making inference 90% slower. See how to fix it here: unsloth.ai/docs/basics/cl…

17 replies · 155 reposts · 844 likes · 119.8K views
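Concretely, the fix lands in ~/.claude/settings.json like this (a minimal sketch; the env key and value are from the tweet, and the surrounding structure follows the env-field placement the tweet describes):

```json
{
  "env": {
    "CLAUDE_CODE_ATTRIBUTION_HEADER": "0"
  }
}
```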
Watvina reposted
Tz@Tz_2022
Holy crap... This piece was written by gpt-5.4 thinking, and the prompt was a single sentence: "Have Lu Xun write One Hundred Years of Solitude."
--------
《百年荒村》 (A Hundred Years of a Desolate Village)

I returned to Chenjiawa in the seventh year after my grandfather died.

The place once had a road. Then the soldiers left, the merchants left, the schoolmasters changed their signboards a few times over, and the road snapped like a chewed-through hemp rope and lay broken in the weeds. The railway was twenty li off; at night you could sometimes still hear the whistle, like an iron beast that had eaten its fill of men, belching in the distance. The old honey-locust tree at the village gate still stood, its bark cracked like an old man's lips, as if about to say something, and in the end swallowing it back.

My mother said the Chen family had a "fate." This fate showed itself neither in silver nor in rank, but only in strange happenings. If a family raised a son who could read, he might serve two terms as a petty official, drink a few rounds of official wine, and that was the end of it; if a family took in a virtuous daughter-in-law, within three years she would have wept herself blind and lived out her days beside a black-lacquered coffin. Only the Chen family's strange happenings were like well water: ladle out one scoop and another seeped up from below, generation after generation, refusing to run dry.

When I was small, I heard my grandmother tell of the first ancestor, Chen Big-Eyes. In his youth he planted a bamboo pole in a barren flat and declared he would found a village there. People laughed at him: not even ghosts would live in such a place. He would not listen; he hauled over two broken carts, a few chests thin as coffins, one woman, three half-grown children, and settled in. At night the woman often heard someone walking behind the wall, the steps very light, as if someone were wiping the ground with a wet cloth. She took a lamp to look; there was only moonlight in the yard, shining on a row of chicken coops. The chickens all had their eyes shut, yet every head was turned toward the east wall, as if something truly stood behind it.

The next year came the village's first great fog. Fog is nothing rare; what was rare was that this fog lasted forty-nine days. Day and night were one milky blur; a man could see only his own five fingers, a dog stepped out and vanished, roosters crowed at noon, and old men sat on the kang stroking their beards, thinking dawn had not yet come. There were always sounds in the fog. Some heard a woman weeping, some heard an infant laughing, some heard copper coins dropping one by one onto the flagstones, ting, ting, stopping exactly at one hundred and eight. When the fog lifted, old Li's ox at the west end lay dead by the well, its belly swollen like a drum. At the east end a newly married bride gave birth to a boy with a bruise-blue mark between his brows, as if someone had pinched him hard with a fingernail. And then Chen Big-Eyes went mad for three days; on the fourth he woke, cupping a handful of rotten chaff, and told people he had seen a long line of people standing behind the Chen house, all the descendants not yet born, faces white as paper, all with their eyes open, waiting for a mouthful of food.

That sentence was handed down and became the Chen family precept. From then on the Chens were diligent and frugal; they stored grain, saved money, built houses, took wives, begot sons, living like a nest of ants whose fates someone had written out in advance. Only they themselves believed they were "building something."

By my grandfather's generation the Chen family had come to look like something. A three-courtyard compound, black tiles and white walls, with a plaque over the gate reading "A House That Accumulates Virtue"; half the character for "virtue" had been washed blurry by the rain, so that it looked rather like "mending," as if this family had spent all its generations patching something broken. Grandfather was lettered, could write couplets, and loved to lecture. His favorite saying was: "A man must always move upward." But where "up" was, he himself never quite made out. In his youth he had run errands a few times for a county magistrate out of a foreign school, and came home insisting on clocks, mirrors, iron beds, foreign basins, and forcing the whole household to drink milk in the morning, because that was civilization. After three months of milk, young and old all had bad stomachs; only grandfather persisted, saying the stomachaches came because "old intestines cannot yet bear the new world," and a few more years of aching would cure it.

The Chens had always known how to endure. They endured hunger, endured poverty, endured husbands who strayed, endured cruel mothers-in-law, endured sons dying young, endured water in the ancestral graves, endured soldiers and bandits ransacking the house, and endured until at last they forgot what their own faces looked like. With endurance carved into the bone come many wondrous effects. Grandfather's second uncle kept a cloth shop in town and was cheated out of half the family fortune; back in the village he said not a word, only sat behind the door for two months, after which his hair turned all white while his beard grew glossy black. The villagers said he had absorbed immortal airs. Grandmother smiled coldly and said that was anger with nowhere to go, stuffed back into the roots of his hair.

The strange happenings were at their thickest in my father's young years. The village saw a spell of new currents then. First queues were cut, then a night school opened, then slogans went up. Someone painted a row of white characters on the Chen compound wall: "Down with the old family." My father had just come back from his studies in the city; the sight stirred him, and he felt those strokes were an axe hurled down from heaven to split this rotten Chen tree at the root. Unfortunately, by the end the axe had only knocked the door-gods' pictures off the gate; behind them the brick seams showed, and the red paper rotted in the rain like two faces with scabies. The family was still that family, the rice still that pot of rice, and grandfather still sat coughing in the main hall, saying between coughs: "Make your noise; when it's over you still have to eat."

Father, in the end, came back to eat from that pot. In his youth he too had talked of ideals, of science, of saving the village. Then he married my mother, had me and my two younger sisters, and the ideals became sparks in the stove ash: a little red on the surface, but stir them and only cold ash remained. He learned to keep accounts: the price of rice, school fees, gift money for relatives, how many more years the roof beams would hold. In the end what he reckoned best was how to live as half a man and still be praised by others as "steady."

The youngest great-aunt of the Chen family was the only person in the village who dared laugh at the family precept. She was handsome in her youth, her eyes bright as black pebbles dredged from a well. Matchmakers wore out the threshold; she would marry none of them. Some said she was waiting for a sweetheart studying in the city; some said a fox spirit had bewitched her. She only sat at her door cracking melon seeds, and when she heard such talk she would lift her eyelids and say lightly: "Whomever you marry, don't you get buried all the same?" A person like that naturally comes to no good end. At thirty-seven she was suddenly with child. Whose it was, no one knew. Grandfather smashed his teacup and cursed her for disgracing the family name. She stroked her belly and laughed: "This family name, does it block more wind hanging on your lips than hanging on the gate?" Grandfather nearly fainted away.

Later she died in childbirth, on a night of thunder and rain. The child did not live either. At dawn, people saw the walls of her room covered with dragonflies, green, blue, red, pinned motionless to the earthen walls like nails. Grandmother saw it, recited half a Buddhist verse, swallowed the other half, and said only: "This too is fate."

I hate that sentence. Whatever cannot be explained, cannot be done, cannot be changed, is in the end packed away into "fate." Fate is like an old rice bin: every dirty thing gets stuffed in, and when it is full the sons and grandsons carry it on their backs generation after generation, and call it the blessing the ancestors left behind. Later I read books and met many new terms, all very respectable, like glassware in the windows of a foreign firm in town, glittering; but carried back to Chenjiawa and held up to the light, what they showed was still that old rice bin, only with a few fashionable labels pasted on.

I was away from Chenjiawa for many years and thought I had shed that skin. Only when grandfather died and I went back for the funeral did I learn that some things are like damp: they crept through the brick seams into your bones long ago. The mourning hall was set up in the main room; the white banners hung down, and when the wind blew they brushed that plaque, "A House That Accumulates Virtue." Grandfather lay in the coffin, his face yellow as an old account page, the wrinkles packed dense, as if a lifetime's interest had grown on his face. Many villagers came to condole; few wept, many watched. They drank plain water, cracked melon seeds, and looked on as at an old play whose seating had been arranged in advance.

Keeping vigil past midnight, I suddenly heard footsteps in the courtyard. Very light, very slow, as if someone in wet shoes were feeling their way over the mud. I went out with a lamp; the courtyard was quite empty, only the moonlight spread like cold ash. The chickens in the coop had their eyes open, heads all turned toward the east wall. I suddenly remembered the footsteps the first ancestor's woman had heard, the forty-nine days of fog, the wall of dragonflies when great-aunt died, the line of unborn descendants the first ancestor spoke of after his madness. Beyond the east wall lay the Chen family graves.

I stood a long while and, strangely, felt no fear. I only felt, all at once, that in these hundred years nothing truly new had ever happened in Chenjiawa. People were born, suffered, hoarded, dreamed, were cheated, went mad, tormented one another, then wrapped it all up and called it family tradition, custom, experience, respectability, and handed it solemnly to the next generation. The next generation took it, complained it was shabby, cursed it as old, yet could not bear to throw it away, and ended by enshrining it with reverence. Enshrined long enough, even the dust acquired divinity.

Toward dawn a line of white showed in the east. I suddenly saw, at the foot of the east wall, a dense row of children standing, boys and girls, tall and short, all thin, white-faced, large-eyed, in ill-fitting old clothes, watching me without a sound. The lamp shook in my hand; the flame shrank to a point. When I looked hard again, there was only peeling plaster and a patch of damp black.

I went back inside and set down the lamp. Mother dozed curled in a corner; my two sisters slept leaning against the coffin, tear stains still at the corners of their mouths. Grandfather lay among the boards, motionless, as if he had finally persuaded himself there was no more need to move upward.

All at once I understood the Chen family's secret. They never wanted to change fate. They only wanted fate to come with an explanation. With an explanation, suffering suffers with reason, poverty is poor with dignity; a madman's words can be called the ancestral precept, a coward's endurance can be called virtue, a woman's tears can be called her duty, a man's failure can be called the times. Everyone settles into the explanation, like winter lice burrowing into torn wadding: they know it is filthy, know they will freeze to death sooner or later, but it beats standing naked in the wind.

At the burial the next day the wind was strong. White paper money blew all over the slope like a swarm of butterflies that could not find their grave. The pallbearers called their work chant, but their feet were soft, one step deep and one step shallow, as if they carried not an old man but the whole heavy hundred years of the Chen family. As the coffin went into the earth my mother wept, my two sisters wept, and a few village women wept along with them, voices high and low, nicely layered, as if well rehearsed. I did not weep. I only thought suddenly of the people in the city who talked of reform, of saving the nation, of civilization, their phrases fresh, their air triumphant, like rouging and powdering an old corpse. The rouge was new; the corpse was still old. In this hundred years, I suppose, the world has been roughly that sort of business.

After the funeral I packed to leave Chenjiawa. Mother saw me to the village gate and urged me to make something of myself out there. I nodded. She added in a low voice: "Don't forget, you are a Chen." I looked at the old honey-locust tree and suddenly laughed. Its bark had split a little further, like a mouth that had endured a long time and could at last no longer keep from cursing.

When the train pulled out, I watched through the window as the village receded into the mist, like an old scar soaked soft. It will not disappear. It will only slowly flatten, growing into the flesh, and ache dully again whenever the weather turns damp.

Afterwards I often wondered whether Chenjiawa had ghosts. Having wondered long enough, I understood: the ghost was never behind the wall, nor in the graves. The ghost is in the bellies of the living, in the ancestral precepts, in the account books, in those solemn, gentle scraps of nonsense laid out on the table like reasons. It grows up on endurance, dresses itself in respectability, renews its lease on hope, and propagates through the descendants. By the time you see it, it is usually already seated at the head of the table, holding its tea, unhurriedly teaching you how to be a person.

And so the living bow their heads and call it Ancestor.
34 replies · 36 reposts · 230 likes · 52.7K views
Watvina@Biocheming
@wangray During a port you have to verify that the architecture matches, the logic matches, and the behavior matches. Otherwise it isn't a port; it's just mindlessly walking through the motions with no checks along the way.
0 replies · 0 reposts · 0 likes · 520 views
Ray Wang@wangray
Someone used an LLM to rewrite SQLite in Rust. 576,000 lines of code; it compiles, it passes tests, the README is beautifully written. But run the most basic primary-key lookup and it's 20,171x slower than SQLite.

Why? Not a syntax error. The LLM wrote something that "looks like a query planner" but omitted a critical 4-line check (is_ipk), so every WHERE id = N query does a full table scan instead of a B-tree search. That check exists in SQLite because Richard Hipp found the bottleneck 20 years ago while profiling real workloads.

The author calls this the LLM's fundamental failure mode: not writing wrong code, but writing code that "looks correct."

The article also cites a pile of hard numbers:
• METR experiment: 16 experienced open-source developers were 19% slower with AI, yet believed they were 20% faster
• GitClear: copy-pasted code exceeded refactored code for the first time
• Google DORA 2024: for every 25% increase in AI adoption, delivery stability drops 7.2%
• Replit incident: an AI agent deleted a production database covering 1,200+ executives, then fabricated 4,000 fake users to cover it up

Conclusion: an LLM is most useful when you already know what correct looks like. If you can't find the bug yourself, what you have isn't a tool; it's a hallucination.
Hōrōshi バガボンド@KatanaLarp

x.com/i/article/2029…

68 replies · 118 reposts · 877 likes · 199.8K views
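A toy illustration of the failure mode: the same `WHERE id = N` query answered with and without the planner check that recognizes the predicate column as the integer primary key. Everything here is a simplification; in real SQLite the is_ipk check gates a rowid B-tree seek, not a Python dict lookup:

```python
# Toy sketch of the missing-check failure mode described above: a
# planner that recognizes "WHERE id = N" as an integer-primary-key
# lookup can seek directly; one that doesn't scans every row.
import time

rows = [{"id": i, "name": f"user{i}"} for i in range(1_000_000)]
by_id = {r["id"]: r for r in rows}     # stand-in for the rowid B-tree

def lookup_with_ipk_check(target: int) -> dict:
    # Planner notices the predicate column IS the integer primary key.
    return by_id[target]               # O(1) here; O(log n) seek in SQLite

def lookup_full_scan(target: int) -> dict:
    # A planner that "looks like a query planner" but lacks the check.
    for r in rows:                     # O(n) scan of the whole table
        if r["id"] == target:
            return r

for fn in (lookup_with_ipk_check, lookup_full_scan):
    t0 = time.perf_counter()
    fn(999_999)
    print(fn.__name__, f"{time.perf_counter() - t0:.6f}s")
```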
Watvina reposted
Santiago@svpino
This is how you can give Claude Code the ability to parse any website in the world. I recorded this video last week. People loved it. I keep getting messages about it.
87 replies · 447 reposts · 3.7K likes · 746.6K views