CSI300.eth🦞
659 posts

@imajuana
Trader and Portfolio Manager | TradFi-CeFi-DeFi
Joined April 2017
3.1K Following · 263 Followers
CSI300.eth🦞 @imajuana ·
In the abstract, what product managers will do in the future is no longer designing requirements for individual products; they will mostly be designing a spec.

A traditional PRD's reader is a human. An engineer reads it, understands the intent, and implements it with their own judgment. So a PRD can say "the user experience should be smooth," because the engineer knows what that means.

A spec's reader is an agent. It has no "judgment" and only executes literally. "Smooth experience" means nothing to it; "list loads in < 1s, scroll frame rate ≥ 60fps" is something it can execute and verify.

Traditional product managers defined intent; the new product managers define constraints.

This means:
- PMs must be more precise than before: you can no longer lean on the buffer of "the engineer will understand."
- PMs gain more control: you are no longer just "filing requirements and waiting for delivery"; the spec you define directly determines output quality.
- The PM's bottleneck has shifted: from "can I explain the requirement clearly enough for a person to understand" to "can I define constraints complete enough for a machine to execute."

0 replies · 0 reposts · 0 likes · 15 views
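The "constraints an agent can verify" idea reads naturally as executable checks. A minimal sketch in Python, where all names are hypothetical and the thresholds simply echo the tweet's example:

```python
# Hypothetical sketch: a spec an agent can execute and verify,
# in contrast to a PRD phrase like "the experience should feel smooth".
SPEC = {
    "list_load_seconds_max": 1.0,  # "list loads in < 1s"
    "scroll_fps_min": 60,          # "scroll frame rate >= 60fps"
}

def verify(measurements: dict, spec: dict = SPEC) -> bool:
    """Return True only if every measured value satisfies the spec."""
    return (
        measurements["list_load_seconds"] < spec["list_load_seconds_max"]
        and measurements["scroll_fps"] >= spec["scroll_fps_min"]
    )

print(verify({"list_load_seconds": 0.8, "scroll_fps": 61}))  # True
print(verify({"list_load_seconds": 1.4, "scroll_fps": 61}))  # False
```

The point of the sketch is that every constraint is a comparison a machine can run, with no interpretation left to the reader.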
CSI300.eth🦞 @imajuana ·
The ancient saying "among any three people walking together, one can be my teacher" is really a form of distillation: we can't directly read other people's model parameters or soul.md, but from their language and behavioral output we can reverse-engineer their thinking and personality.

0 replies · 0 reposts · 0 likes · 24 views
Dexter @DexterOnchain ·
I built a tweet-monitoring skill. Follow me + like + comment + repost this tweet for lifetime free access. Under the hood it monitors the real-time data streams of several hundred Twitter accounts whose posts can be news-traded, so it can turn your lobster 🦞 into a news trader 📰 in an instant. What price do you think I should sell it at 😂 Average latency (normal tweets): 1.8-1.9 seconds.

83 replies · 66 reposts · 173 likes · 41.2K views
CSI300.eth🦞 @imajuana ·
A few supplementary numbers:
1. ICE's previous investment was Polymarket: 200 employees at a $10B valuation. OKX has 5,000 employees at a $25B valuation.
2. Coinbase's secondary-market capitalization is $55B, more than double OKX's.
3. Not disclosing the investment amount or stake usually means the amount is small and the stake is not significant.
4. Taking US investment while doing business with Chinese customers carries very high requirements.

OKX @okx
Today we announced a strategic relationship with Intercontinental Exchange (ICE).
• ICE has made a direct investment in OKX and is joining our Board of Directors
• ICE will license OKX spot crypto prices to launch U.S.-regulated futures
• OKX plans to provide access to ICE U.S. futures and NYSE tokenized equities markets to our 120M users
Together, we're advancing the infrastructure connecting digital assets and global capital markets. Details from our Founder & CEO @Star_okx: okx.com/en-us/learn/ok…

0 replies · 0 reposts · 1 like · 89 views
CSI300.eth🦞 @imajuana ·
When a model's context gets too long, it becomes less efficient and gives off-target answers. The human brain is the same: if you keep dwelling on the past, you can't be happy in the present.

0 replies · 0 reposts · 0 likes · 21 views
CSI300.eth🦞 reposted
edgeX🦭 @edgeX_exchange ·
Polymarket prediction markets are now live on the edgeX web app. Prediction markets on mobile will be available in the next app upgrade. Access the world's largest prediction markets directly within edgeX. More markets. Liquidity for all.

39 replies · 39 reposts · 237 likes · 63.1K views
CSI300.eth🦞 @imajuana ·
Every day I'm busy learning AI, trading AI, and reposting AI; no time even to tweet. Looking back at the strategy after the crypto crash last October: short shitcoins, long AI equities. Absolutely correct.

0 replies · 0 reposts · 1 like · 20 views
Jukan @jukan05 ·
Samsung Accelerates Next-Gen Semiconductor Fab P5 Cleanroom Construction… 'Shell First' Strategy

Samsung Electronics is continuing its "Shell First" strategy of preemptively securing cleanroom capacity. The company has reportedly moved up the cleanroom construction timeline for its next-generation semiconductor production base, P5, from early next year to around the middle of this year.

According to industry sources on the 17th, Samsung Electronics has advanced the cleanroom construction schedule for its Pyeongtaek Campus Fab 5 (P5) by approximately six months. Samsung had originally planned to begin full-scale cleanroom construction early next year, with preparatory work including inserts (the process of embedding steel supports prior to structural installation) scheduled for early Q4. However, Samsung recently asked its construction partners to accelerate the timeline and begin the work in Q2. As a result, cleanroom construction is now expected to commence in early Q3.

A cleanroom is an infrastructure facility that controls contamination levels, temperature, humidity, air pressure, and other environmental factors essential for semiconductor manufacturing. It must be installed before any fabrication equipment can be brought in. The piping installation that follows cleanroom completion has also been moved up from next year to late this year. An industry source explained, "The P5 construction site is currently very busy with all cranes already deployed," adding that "cleanroom and piping subcontractors are also preparing to respond to Samsung's sudden request."

P5 is Samsung Electronics' next-generation semiconductor production base, targeting operation by 2028. It is known to feature six cleanrooms across three floors, making it larger in scale than other fabs on the Pyeongtaek Campus (which have four cleanrooms across two floors). The primary product line for P5 is expected to be High Bandwidth Memory (HBM), a critical component for the AI industry.

Samsung recently stated in a press release that "P5 will serve as a key hub for HBM production," adding that "we plan to continuously secure stable supply response capabilities amid the medium- to long-term demand expansion phase centered on AI and data centers."

Samsung's latest decision is interpreted as part of its ongoing Shell First strategy: preemptively constructing cleanrooms, then flexibly executing capital expenditure for actual capacity expansion in alignment with market demand. During its earnings call on the 30th of last month, Samsung explained: "We plan to maintain our preemptive investment strategy going forward. We will lead with investments in new fab space to secure cleanrooms, then rapidly execute equipment capex at the point when capacity expansion is needed based on demand trends."

8 replies · 12 reposts · 117 likes · 17.3K views
Serenity @aleabitoreddit ·
TL;DR of Phison CEO interview on memory and an investment framework:

"Toll Collectors":
- Micron ($MU)
- SK Hynix (000660.KS)
- Samsung Electronics
- Western Digital ($WDC)
- $SNDK

T2:
- $MRVL
- $SIMO
- Phison Electronics
Companies that design the logic/software controllers connecting memory to compute will capture massive value as AI moves to the edge.

T3:
- Pure Storage ($PSTG)
- NetApp ($NTAP)
- Seagate ($STX)
As Vera Rubin inference servers roll out, the explosion in KV cache and data generation will trigger a massive hardware upgrade cycle focused specifically on data-center storage density and high-capacity enterprise SSDs.

Hilariously, $EBAY (refurbished electronics) might be a beneficiary.

Short / avoid:
- Low-margin consumer hardware
- Unhedged auto/IoT makers

Main alpha points:
- The "3-year prepayment" cash flow: memory foundries are demanding 3 years of cash prepayments to guarantee supply.
- The inference bottleneck is storage, not GPUs: a single 10-million-unit run of the $NVDA Vera Rubin platform requires 20+ TB of SSD per unit, which alone would consume 20% of last year's global NAND capacity.
- The "Chinese supply glut" bear thesis is dead: Pan entirely dismisses this point around YMTC and CXMT. China's internal AI demand is so massive that it will instantly swallow 100% of its domestic production. No cheap Chinese memory will leak into the global market to rescue Western hardware OEMs.

TL;DR from the interview: memory demand is structural, with no supply end in sight. $INTC's CEO confirmed this last month.

21 replies · 57 reposts · 568 likes · 127.9K views
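The storage claim above can be sanity-checked with quick arithmetic. All inputs below come from the tweet itself; the implied global NAND figure is derived from those inputs, not independently sourced:

```python
# Back-of-the-envelope check of the tweet's storage claim.
units = 10_000_000       # "10-million-unit run" of Vera Rubin servers
ssd_tb_per_unit = 20     # "20+ TB of SSD per unit" (lower bound)

total_tb = units * ssd_tb_per_unit  # 200,000,000 TB
total_eb = total_tb / 1_000_000     # 200 EB of SSD for the whole run

# If 200 EB really is "20% of last year's global NAND capacity",
# the implied global figure is about 1,000 EB (~1 ZB).
implied_global_nand_eb = total_eb / 0.20

print(total_eb, implied_global_nand_eb)  # 200.0 1000.0
```

So the claim hangs together internally at roughly a zettabyte of annual NAND output; whether that baseline matches actual industry shipments is not verifiable from the tweet.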
CSI300.eth🦞 @imajuana ·
Many things in the world can be understood with AI's help, but understanding AI itself still takes your own thinking.

0 replies · 0 reposts · 0 likes · 26 views