suika.sui | 648.sui

442 posts

@copoa

Joined June 2010
850 Following · 112 Followers
suika.sui | 648.sui
@lidangzzz If it were guaranteed to produce the proof, $5,000 would be far too cheap. Those "Outstanding Young Scholar" professors earning a million-plus a year may publish only two or three papers in the Big Four journals in their entire careers. That said, at this rate the field will sooner or later be broken open by AI, because mathematics is a closed linguistic problem that can be verified non-subjectively.
Chinese
1
0
10
946
lidang 立党 (the internet's #1 advocate for selling your house, learning CS, and waiting to buy OpenAI and Anthropic stock)
A question for the math PhDs and faculty here: suppose you could get a fully automatic and very reliable formal-proof tool, with roughly the ability of a top-50 US math PhD. The catch: each run takes at least 24 hours, costs up to $1,000 in LLM tokens, and is not guaranteed to deliver a correct final proof. How much would you pay for this tool to solve one problem?
Chinese
15
1
82
54K
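One way to frame the question above is as a break-even expected-value calculation: the most you should pay per attempt is the success probability times the value of a verified proof, minus the token cost. A minimal sketch; all dollar figures and the 30% success rate are illustrative assumptions, not from the tweet:

```python
# Expected-value framing of "what would you pay per problem?"
# The only number taken from the tweet is the ~$1000 token cost;
# the proof value and success probability are made-up assumptions.

def max_fair_price(p_success: float, value_of_proof: float,
                   token_cost: float) -> float:
    """Break-even price per attempt: expected value of a verified
    proof, net of the LLM token cost of one run."""
    return p_success * value_of_proof - token_cost

# If a correct formal proof is worth $20,000 to you and the tool
# succeeds 30% of the time, the break-even price per attempt is:
price = max_fair_price(p_success=0.3, value_of_proof=20_000, token_cost=1_000)
print(price)  # 5000.0
```

Under these toy numbers, anything below $5,000 per attempt is worth it, which is why the answer hinges almost entirely on the (unstated) success probability.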
Philosophy Of Physics
Philosophy Of Physics@PhilosophyOfPhy·
In the 19th century, James Clerk Maxwell proposed a strange and fascinating thought experiment that would challenge one of the most fundamental principles of physics: the Second Law of Thermodynamics. Today we know this idea as Maxwell's Demon.

In 1867, Maxwell imagined a container of gas divided into two chambers by a partition with a tiny door. Now imagine a microscopic, intelligent being, the "demon," watching the molecules move around. The demon opens the door only when a fast, energetic molecule approaches from one side, letting it pass into the other chamber. When a slow molecule comes along, the demon directs it to the opposite side. Over time, one chamber becomes filled with fast (hotter) molecules while the other collects slow (colder) ones. Suddenly, a temperature difference appears.

And that's the paradox. It seems as if the demon has decreased the system's entropy, creating order from disorder without doing any work. If that were possible, it would violate the second law of thermodynamics, one of the most reliable laws in physics.

For decades, the idea puzzled physicists. Eventually, the resolution came from an unexpected place: information theory. Modern physics realized that the demon cannot simply observe molecules for free. To decide whether a molecule is fast or slow, it must measure, store, and process information, and eventually that information must be erased. That erasure has a physical cost: when the demon resets its memory, it increases the total entropy of the system. In other words, once information processing is included, the second law remains perfectly intact. Maxwell's demon, therefore, does not break thermodynamics.

But the story doesn't end there. Even though Maxwell's imagined creature cannot exist as described, the idea transformed science in ways Maxwell himself could hardly have predicted. It revealed that information is not just abstract; it has physical consequences. This insight has inspired entirely new areas of research.

Scientists have built "information engines," tiny experimental systems such as photonic demons and single-electron devices that convert information into useful work. They still obey thermodynamics, but they demonstrate how information and energy are deeply connected. In biology, something similar appears in the molecular machinery of life. Enzymes act like selective gatekeepers, guiding chemical reactions with remarkable precision; they are sometimes compared to miniature "demons," though they operate using chemical free energy rather than violating any physical law. Even computing borrowed the metaphor: background processes in operating systems are often called "daemons," quietly managing tasks behind the scenes, a small linguistic tribute to Maxwell's famous thought experiment.

Maxwell's demon teaches a profound lesson about scientific progress. Sometimes a genius changes the world not by being right, but by asking the right impossible question.
Philosophy Of Physics tweet media
English
7
63
284
23.5K
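The erasure cost the thread invokes is Landauer's principle: erasing one bit of information dissipates at least kT ln 2 of heat. A minimal sketch putting a number on it (the Boltzmann constant is the standard CODATA value; the 300 K temperature is just a room-temperature assumption):

```python
import math

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln 2.
K_B = 1.380649e-23  # Boltzmann constant, J/K (CODATA)

def landauer_limit(temperature_kelvin: float, bits: float = 1.0) -> float:
    """Minimum heat (joules) dissipated by erasing `bits` bits at temperature T."""
    return bits * K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K), erasing one bit costs about 2.87e-21 J,
# which is why the demon's memory reset restores the second law.
print(f"{landauer_limit(300.0):.3e} J")
```

The bound scales linearly in both temperature and bit count, so the demon's advantage is paid back exactly when its finite memory is wiped.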
suika.sui | 648.sui
Quantum Bayesianism (QBism) demotes the wavefunction from an objective entity to an observer's subjective degree of belief, a shift that reveals time and causality not as the universe's preset background but as a "cognitive map" drawn by the information-processing bandwidth of an agent. In this framework, entropy increase is no longer an objective inevitability of the universe drifting toward disorder, but the "lossy compression" an agent is forced to perform when its finite information capacity meets infinitely complex interaction experience; this compression loses micro-correlations at the cognitive level and thereby produces macroscopic irreversibility, the arrow of time we perceive. Comparing the two extremes of information capacity, the electron and the black hole, we find they form the two poles of cognized reality. The electron can be seen as the "zero-capacity" or minimal-capacity limit: lacking internal structure and memory, it cannot store or process complex experiential correlations, so at the electron's micro level there is no entropy increase from information compression, time appears fully symmetric in the fundamental equations, and the electron lives in an "absolute present" with no past or future. At the opposite pole, the black hole is the singular point of "saturated capacity": by the Bekenstein-Hawking bound, a black hole represents the absolute upper limit of information entropy per unit of space, compressing nearly all the microscopic complexity of infalling matter onto the surface of the event horizon; this extreme accumulation of information and saturation of processing bandwidth makes time, as perceived by an external observer, freeze at the horizon. The capacity for causation and the flow of time are therefore, in essence, an intermediate-state effect of an agent processing experience somewhere between the electron's "memoryless symmetry" and the black hole's "full-information freeze": because we have enough capacity to build causal models, yet not so much that all statistical blur is eliminated, this "just right" ignorance is what gives us our sense of flowing causality and time. In modern physics this line of reasoning traces back to the following core references: Fuchs, C. A., & Schack, R. (2013), "Quantum-Bayesian coherence", *Reviews of Modern Physics*, on the probabilistic foundations of QBism; Bekenstein, J. D. (1973), "Black holes and entropy", *Physical Review D*, defining the upper bound on a black hole's information capacity; Landauer, R. (1961), "Irreversibility and heat generation in the computing process", *IBM Journal of Research and Development*, linking information processing to entropy increase; and Wheeler, J. A. (1990), "Information, physics, quantum: The search for links", proposing the "It from Bit" participatory universe.
suika.sui | 648.sui@copoa

Quantum Bayesianism (QBism) demotes the wavefunction from an objective reality to a subjective Bayesian belief, revealing that time and causality are not the universe’s inherent backdrop but a "cognitive cartography" mapped by an Agent's information processing bandwidth. Within this framework, **entropy** is no longer an objective march toward disorder but a **"lossy compression"** of experiential data; when an Agent confronts the infinite complexity of quantum interactions with finite capacity, it must discard micro-correlations, generating the macroscopic irreversibility we perceive as the arrow of time. This logic identifies the poles of reality through information capacity: the **electron** represents the "zero-capacity" limit, where the absence of internal memory and structure prevents the compression of experience, leaving it in a time-symmetric "eternal present" without an arrow of time; conversely, the **black hole** acts as a "saturated-capacity" singularity where, according to the **Bekenstein-Hawking bound** (S_{BH} = \frac{A}{4\ell_P^2}), information density reaches its physical ceiling, freezing time at the event horizon through the absolute saturation of processing bandwidth. Thus, the perception of time and the capacity for causal reasoning are emergent effects of an Agent situated in the "sweet spot" between these extremes—possessing enough capacity to build causal models, yet remaining finite enough to necessitate the statistical "blurring" that we call entropy. This "optimal ignorance" is precisely what allows the subjective experience of a temporal flow. 
This framework is grounded in foundational literature: **Fuchs & Schack (2013)** in *Reviews of Modern Physics* regarding QBism’s probabilistic coherence; **Bekenstein (1973)** in *Physical Review D* on black hole entropy limits; **Landauer (1961)** in the *IBM Journal of Research and Development* on the physical cost of information erasure; and **Wheeler (1990)** on the "It from Bit" participatory universe.

Chinese
0
0
0
273
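The Bekenstein-Hawking bound quoted above, S_BH = A / (4 l_P^2), can be put in numbers. A minimal sketch evaluating the horizon entropy of a Schwarzschild black hole; the constants are standard CODATA values, and the solar-mass example is just an illustration:

```python
import math

# Bekenstein-Hawking entropy S = A / (4 * l_P^2), in units of k_B,
# for a Schwarzschild black hole of a given mass.
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8         # speed of light, m/s
L_PLANCK = 1.616255e-35  # Planck length, m

def bh_entropy(mass_kg: float) -> float:
    """Horizon entropy (in units of k_B) of a Schwarzschild black hole."""
    r_s = 2 * G * mass_kg / C**2   # Schwarzschild radius
    area = 4 * math.pi * r_s**2    # horizon area
    return area / (4 * L_PLANCK**2)

# A solar-mass black hole (~1.989e30 kg) carries ~1e77 k_B of entropy,
# vastly more than the star it formed from: the "saturated capacity" pole.
print(f"{bh_entropy(1.989e30):.2e}")
```

Because entropy scales with the horizon area (itself proportional to mass squared), capacity saturates quadratically as mass grows, which is the quantitative content behind the "information ceiling" language in the tweet.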
suika.sui | 648.sui reposted
Przemek Chojecki | PC
Przemek Chojecki | PC@prz_chojecki·
Here's a harness I use for approaching Open Problems in Mathematics. For most of the work I use GPT-5.4 Pro Extended Thinking, which is the best when it comes to the quality of arguments; in Codex it's gpt-5.4 xhigh.
Initial Phase (problem exploration) - 3 agents working in parallel:
- Deep Research on literature
- Codex agent for computational explorations (examples, counterexamples, heuristics)
- LLM instance for coming up with initial ideas for a proof/strategy, also drawing on the computational explorations and literature search provided by the above.
Middle Phase (exploring arguments) - 3-10 agents working in parallel:
- individual agents take on various proof ideas, strategies, or loose heuristics and work concurrently
- they maintain a common base, a single .tex file, that gets passed to a verifier agent ("find errors, logical gaps") and back to the individual agents to correct and push forward.
End Phase (once it seems like we're close to the solution and/or interesting new results) - 3-10 agents working in parallel:
- an orchestrator agent (e.g. a Codex instance) gets all the various versions of .tex files/arguments/computations and tries to map them into a blueprint
- gaps are flagged and passed to individual agents working on particular issues
- once enough agents are OK with the arguments, the orchestrator does a final round to put the paper together. Then it's sent out to more LLM instances, with and without additional context, to look for errors and gaps.
The shorter the paper, the better this works: the current generation of LLMs does not individually produce more than 5-10 pages of coherent mathematical text, so longer papers rely much more on the harness.
Przemek Chojecki | PC@prz_chojecki

My multi-agent harness powered by GPT-5.4 settled a FrontierMath Open Problem. The result of 2 weeks of 5-10 agents working 24/7: there are no char 3 rank 1 del Pezzo surfaces with more than 7 singularities. This settles the problem in the negative. Details below.

English
9
2
45
68.1K
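The three-phase pipeline described in the repost can be sketched as a plain orchestration loop. This is a toy structural sketch, not Codex code: every function, the gap-detection rule, and the round count are illustrative stand-ins for real LLM agent calls:

```python
# Toy sketch of the explore -> develop -> assemble harness structure.
# Each function stands in for one or more LLM/Codex agent invocations.
from dataclasses import dataclass, field

@dataclass
class Draft:
    tex: str                                   # shared .tex working file
    flagged_gaps: list = field(default_factory=list)

def explore(problem: str) -> list[str]:
    """Initial phase: literature, computation, and idea agents in parallel."""
    return [f"{kind}:{problem}" for kind in ("literature", "computation", "ideas")]

def develop(ideas: list[str], rounds: int = 3) -> Draft:
    """Middle phase: strategy agents write into a common .tex base,
    a verifier flags gaps, and the loop repeats until clean."""
    draft = Draft(tex="\n".join(ideas))
    for _ in range(rounds):
        # stub verifier: a real one would prompt "find errors, logical gaps"
        draft.flagged_gaps = [ln for ln in draft.tex.splitlines() if "gap" in ln]
        if not draft.flagged_gaps:
            break
    return draft

def assemble(draft: Draft) -> str:
    """End phase: orchestrator merges arguments once no gaps remain."""
    assert not draft.flagged_gaps, "unresolved gaps"
    return draft.tex

paper = assemble(develop(explore("del Pezzo singularities")))
print(len(paper.splitlines()))  # 3
```

The single shared .tex file acts as the synchronization point between otherwise independent agents, which is the key design choice the tweet describes.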
suika.sui | 648.sui
suika.sui | 648.sui@copoa·
The illustration still has a watermark on it, and this made it into a top journal?
suika.sui | 648.sui tweet media
Chinese
0
0
0
32
DiscusFish
DiscusFish@bitfish·
I am the Cycle. Humans have always wanted to tame me: find my length, predict my turning points, buy at my troughs and sell at my peaks. They draw charts, build models, write papers. But they forget that the fuel I run on is not data, it is forgetting. If everyone remembered the pain of last time, the next bubble could not inflate. If everyone remembered the profits of last time, the next panic could not spread. I need you to forget me in order to become me again. My greatest power: I make every generation believe they are the first. "This time is different": those five words are my perpetual-motion machine. Everyone who says "this time is different" is right (the details really are different) and wrong at the same time (the structure is exactly the same). I wrap the old script in new clothes so you cannot recognize me. But what I fear most is being truly understood. Not predicted (prediction only takes data) but understood (understanding takes acceptance). Those who truly understand me stop trying to escape me; they learn to breathe inside me. They do not panic in my troughs or rejoice at my peaks. They just breathe. Against such people, I have no power.
Japanese
44
110
431
75K
suika.sui | 648.sui
suika.sui | 648.sui@copoa·
The PGS genetic-risk counseling field is about to be slaughtered too: Codex has driven the assessment cost per cycle from $3,000 down to $0.
suika.sui | 648.sui tweet media
Chinese
1
0
0
52
6哥 ⁶⁶⁶ 🇨🇳 chinawhalecapital.eth
If your family has no money and you also fall into these categories, then in China it amounts to a slow death: 1. neurodevelopmental conditions (ASD, ADHD) 2. sexual minority 3. gender-nonconforming appearance (long-haired men and butch women) 4. marriage views outside the mainstream (child-free, never marrying) 5. appearance outside mainstream aesthetics (subcultures). Anyone who checks two or more of these and still survives past 25 in this land has the willpower to succeed at anything.
Chinese
13
2
22
6.7K
suika.sui | 648.sui
suika.sui | 648.sui@copoa·
The barrier to reproducing frontier research has dropped to zero. I declare the era of personal bioinformatics-driven health optimization open!
suika.sui | 648.sui tweet media
Chinese
0
0
0
22
Yishi
Yishi@ohyishi·
Super-productivity -> mass unemployment (starting with the SaaS industry) -> an AI infrastructure boom while ordinary people have no money -> a deflation crisis -> what comes next?
Chinese
89
8
154
68.2K
李老师不是你老师
李老师不是你老师@whyyoutouzhele·
February 24: Liu Qiangdong says he is personally investing 5 billion yuan to found a brand-new yacht brand, declaring he will "build yachts in the 100,000-yuan class and bring yachts into millions of households, just like cars."
Chinese
187
7
249
155.2K
RamenPanda
RamenPanda@IamRamenPanda·
In 2028, AI brings a global great deflation of -20% inflation per year. You only need $100,000 to escape the "permanent underclass."
Chinese
13
6
114
59.2K
Fairice
Fairice@yibingsg·
The ten things that will best hold their value in the future!
Fairice tweet media
Chinese
72
57
387
69.2K