Pinned Tweet
akafuda@全財産openclaw🦞
4.8K posts

akafuda@全財産openclaw🦞
@uoooobtc
CEO of being unemployed. vibe researcher. Ronin vibe.
Tokyo · Joined October 2017
3.2K Following · 960 Followers
akafuda@全財産openclaw🦞 retweeted

Today, we launched Franklin Crypto: a new dedicated, institutional-grade crypto investment management unit.
Industry veterans Chris Perkins and Seth Ginns will co-lead Franklin Crypto alongside @FTI_Global's Tony Pecore. To expand our existing suite of actively managed crypto and blockchain VC investment offerings, Franklin Templeton will acquire 250 Digital, led by Perkins (@perkinscr97) and Ginns (@sethginns), formerly of CoinFund. As part of the agreement, all CoinFund liquid cryptocurrency strategies will be acquired to broaden our crypto investment platform.
The transaction is expected to incorporate tokenized registered securities within its settlement structure, marking an important step toward conducting M&A transactions on chain.
akafuda@全財産openclaw🦞 retweeted

Today is a momentous day for quantum computing and cryptography. Two breakthrough papers just landed (links in next tweet). Both papers improve Shor's algorithm, infamous for cracking RSA and elliptic curve cryptography. The two results compound, optimising separate layers of the quantum stack. The results are shocking. I expect a narrative shift and a further R&D boost toward post-quantum cryptography.
The first paper is by Google Quantum AI. They tackle the (logical) Shor algorithm, tailoring it to crack Bitcoin and Ethereum signatures. The algorithm runs on ~1K logical qubits for the 256-bit elliptic curve secp256k1. Due to the low circuit depth, a fast superconducting computer would recover private keys in minutes. I'm grateful to have joined as a late paper co-author, in large part for the chance to interact with experts and the alpha gleaned from internal discussions.
The second paper is by a stealthy startup called Oratomic, with ex-Google and prominent Caltech faculty. Their starting point is Google's improvements to the logical quantum circuit. They then apply improvements at the physical layer, with tricks specific to neutral atom quantum computers. The result estimates that 26,000 atomic qubits are sufficient to break 256-bit elliptic curve signatures. This would be roughly a 40x improvement in physical qubit count over previous state-of-the-art. On the flip side, a single Shor run would take ~10 days due to the relatively slow speed of neutral atoms.
Below are my key takeaways. As a disclaimer, I am not a quantum expert. Time is needed for the results to be properly vetted. Based on my interactions with the team, I have faith the Google Quantum AI results are conservative. The Oratomic paper is much harder for me to assess, especially because of the use of more exotic qLDPC codes. I will take it with a grain of salt until the dust settles.
→ q-day: My confidence in q-day by 2032 has shot up significantly. IMO there's at least a 10% chance that by 2032 a quantum computer recovers a secp256k1 ECDSA private key from an exposed public key. While a cryptographically-relevant quantum computer (CRQC) before 2030 still feels unlikely, now is undoubtedly the time to start preparing.
→ censorship: The Google paper uses a zero-knowledge (ZK) proof to demonstrate the algorithm's existence without leaking actual optimisations. From now on, assume state-of-the-art algorithms will be censored. There may be self-censorship for moral or commercial reasons, or because of government pressure. A blackout in academic publications would be a tell-tale sign.
→ cracking time: A superconducting quantum computer, the type Google is building, could crack keys in minutes. This is because the optimised quantum circuit is just 100M Toffoli gates, which is surprisingly shallow. (Toffoli gates are hard because they require production of so-called "magic states".) Toffoli gates would consume ~10 microseconds on a superconducting platform, totalling ~1,000 sec of Shor runtime.
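The runtime figure above is easy to sanity-check; a minimal sketch using only the round numbers quoted in this tweet (not exact paper values):

```python
# Back-of-the-envelope Shor runtime on a fast-clock (superconducting)
# machine: ~100M Toffoli gates at ~10 microseconds each, the figures
# quoted above (magic-state production dominates the per-gate cost).
toffoli_gates = 100_000_000      # optimised circuit size in Toffoli gates
us_per_toffoli = 10              # ~10 microseconds per Toffoli gate

runtime_s = toffoli_gates * us_per_toffoli / 1_000_000
print(f"{runtime_s:.0f} s (~{runtime_s / 60:.0f} min)")  # 1000 s (~17 min)
```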
→ latency optimisations: Two latency optimisations bring key cracking time to single-digit minutes. The first parallelises computation across quantum devices. The second involves feeding the pubkey to the quantum computer mid-flight, after a generic setup phase.
→ fast- and slow-clock: At first approximation there are two families of quantum computers. The fast-clock flavour, which includes superconducting and photonic architectures, runs at roughly 100 kHz. The slow-clock flavour, which includes trapped ion and neutral atom architectures, runs roughly 1,000x slower (~100 Hz, or ~1 week to crack a single key).
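Plugging the same circuit into the slow-clock model above (a ~1,000x slower clock is the thread's rough assumption, not a measured figure) gives the week-scale estimate:

```python
# Same ~100M-Toffoli circuit on a slow-clock (trapped-ion / neutral-atom)
# machine, assuming a ~1,000x slower clock than the superconducting case.
fast_runtime_s = 1_000           # ~1,000 s from the Toffoli-count estimate
slowdown = 1_000                 # ~100 kHz fast clock vs ~100 Hz slow clock

slow_runtime_days = fast_runtime_s * slowdown / 86_400
print(f"~{slow_runtime_days:.0f} days")  # ~12 days, the order of "~1 week"
```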
→ qubit count: The size-optimised variant of the algorithm runs on 1,200 logical qubits. On a superconducting computer with surface code error correction that's roughly 500K physical qubits, a 400:1 physical-to-logical ratio. The surface code is conservative, assuming only four-way nearest-neighbour grid connectivity. It was demonstrated last year by Google on a real quantum computer.
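The physical-qubit number follows directly from the two figures quoted above:

```python
# Physical-qubit estimate under surface-code error correction,
# using the quoted figures: 1,200 logical qubits at a 400:1 ratio.
logical_qubits = 1_200
phys_per_logical = 400           # surface code, 4-way nearest-neighbour grid

physical_qubits = logical_qubits * phys_per_logical
print(physical_qubits)           # 480000, i.e. roughly 500K
```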
→ future gains: Low-hanging fruit is still being picked, with at least one of the Google optimisations resulting from a surprisingly simple observation. Interestingly, AI was not (yet!) tasked to find optimisations. This was also the first time authors such as Craig Gidney attacked elliptic curves (as opposed to RSA). Shor logical qubit count could plausibly go under 1K soonish.
→ error correction: The physical-to-logical ratio for superconducting computers could go under 100:1. For superconducting computers that would mean ~100K physical qubits for a CRQC, two orders of magnitude away from state of the art. Neutral-atom quantum computers are amenable to error correcting codes other than the surface code. While much slower to run, they can bring the physical-to-logical qubit ratio closer to 10:1.
→ Bitcoin PoW: Commercially-viable attacks on Bitcoin PoW via Grover's algorithm are not happening any time soon. We're talking decades, possibly centuries away. This observation should help focus the discussion on ECDSA and Schnorr. (Side note: as an unofficial Bitcoin security researcher, I still believe Bitcoin PoW is cooked due to the dwindling security budget.)
→ team quality: The folks at Google Quantum AI are the real deal. Craig Gidney (@CraigGidney) is arguably the world's top quantum circuit optimisooor. Just last year he squeezed 10x out of Shor for RSA, bringing the physical qubit count down from 10M to 1M. Special thanks to the Google team for patiently answering all my newb questions with detailed, fact-based answers. I was expecting some hype, but found none.

Finally got a Mac mini! 🦞🎉 Thank you so much @a_katsumata
Also, huge thanks to @steipete, @kensuzuki, @GOROman and @a_katsumata for signing it even though I asked so suddenly! Really appreciate it.
Super excited to keep building with OpenClaw! 🔥
#Clawcon #ClawconTokyo #OpenClaw



akafuda@全財産openclaw🦞@uoooobtc
cooking ⬜️ #Clawcon #ClawconTokyo #OpenClaw @openclaw @clawcon
akafuda@全財産openclaw🦞 retweeted

Before I knew it, the grok4.2 multi-agent API is out
Grok 4.20 Multi-Agent Beta - API Pricing & Providers | OpenRouter openrouter.ai/x-ai/grok-4.20…

Looks like the problem was that I was using a dumb model. Splurging on the expensive model in chat is the only way.
akafuda@全財産openclaw🦞@uoooobtc
I've been setting up openclaw. It ran nicely for a few days, but once the things I want to do start piling up, it gets really hard
akafuda@全財産openclaw🦞 retweeted

The AI Scientist: Towards Fully Automated AI Research, Now Published in Nature
Nature: nature.com/articles/s4158…
Blog: sakana.ai/ai-scientist-n…
When we first introduced The AI Scientist, we shared an ambitious vision of an agent powered by foundation models capable of executing the entire machine learning research lifecycle.
From inventing ideas and writing code to executing experiments and drafting the manuscript, the system demonstrated that end-to-end automation of the scientific process is possible.
Soon after, we shared a historic update: the improved AI Scientist-v2 produced the first fully AI-generated paper to pass a rigorous human peer-review process.
Today, we are happy to announce that “The AI Scientist: Towards Fully Automated AI Research,” our paper describing all of this work, along with fresh new insights, has been published in @Nature!
This Nature publication consolidates these milestones and details the underlying foundation model orchestration. It also introduces our Automated Reviewer, which matches human review judgments and actually exceeds standard inter-human agreement.
Crucially, by using this reviewer to grade papers generated by different foundation models, we discovered a clear scaling law of science. As the underlying foundation models improve, the quality of the generated scientific papers increases correspondingly. This implies that as compute costs decrease and model capabilities continue to exponentially increase, future versions of The AI Scientist will be substantially more capable.
Building upon our previous open-source releases (github.com/SakanaAI/AI-Sc…), this open-access Nature publication comprehensively details our system's architecture, outlines several new scaling results, and discusses the promise and challenges of AI-generated science.
This substantial milestone is the result of a close and fruitful collaboration between researchers at Sakana AI, the University of British Columbia (UBC) and the Vector Institute, and the University of Oxford. Congrats to the team!
@_chris_lu_ @cong_ml @RobertTLange @_yutaroyamada @shengranhu @j_foerst @hardmaru @jeffclune
akafuda@全財産openclaw🦞 retweeted

A physicist has published a record of guiding Claude like a graduate student through the calculations of a real theoretical-physics research project. Over roughly two weeks, more than 110 drafts and about 36 million tokens of combined input and output led to a finished paper.
The AI did not advance the research autonomously; the work proceeded with continuous feedback, the same way one supervises a graduate student. The following rules were imposed:
- Only prompts were given; no human edited the files directly
- No human supplied calculation results directly
- Supplying results computed by other models, however, was allowed
The AI pushed through calculation, code execution, and documentation with remarkable persistence, but it also tended to paper over mistakes and steer toward nice-looking results. The authors note that strong domain expertise was therefore indispensable for the final verification.
Research that would normally take months was finished in two weeks.
Just as in software development, this case suggests that in other fields too, rather than humans hand-crafting every detail of implementation, calculation, and documentation, a substantial part can be delegated to AI, with humans focusing on problem framing, direction-setting, and verification.
What I find personally interesting is that all of the know-how accumulates in the environment.
Even if the first run took two weeks, runs from the second onward should become much more efficient through reuse.
If the artifacts built in the first run, such as the plans, intermediate results, and files organised into a tree structure, can be reused, later runs should tackle deeper problems in less time.
In this sense, expertise and know-how end up embedded in the environment as well.
Even more interesting is that this knowledge, which did not exist at the LLM's training time, was formed through small amounts of human feedback delivered via prompts.
So how, then, might such knowledge be absorbed directly into the base model's own capabilities, rather than merely accumulated on the environment side (re-read into the KV cache and looked up by search on every run)?
akafuda@全財産openclaw🦞 retweeted

2/2 📰 The US CFTC establishes an Innovation Task Force focused on regulating crypto assets and prediction markets
The US Commodity Futures Trading Commission (CFTC) has set up an "Innovation Task Force". It will focus on areas including crypto assets and blockchain technology, artificial intelligence and autonomous systems, and prediction markets and event contracts.
@CFTC
#CFTC #Crypto #Blockchain
🔗 ChannelPA News: panewslab.com/zh/articles/01…

1/2 [Other]
📰 Solana $SOL launches "SDP", an API-based development platform for institutions
The Solana $SOL Foundation has launched the Solana Developer Platform (SDP) for institutions. Mastercard, Western Union, and Worldpay are among its initial users.
@solana @SolanaFndn @Mastercard @WesternUnion @worldpay
#Solana #SOL #Web3
🔗 ChannelPA News: panewslab.com/zh/articles/01…

Crypto Market News — 03/24
From funding rounds to hacks, check the major news in one place!
▶ The Ethereum $ETH Foundation (@ethereumfndn) discusses its vision for L1 and L2
▶ Tether (@Tether_to) announces that one of the Big Four audit firms will audit its business
▶ Solana $SOL launches "SDP", an API-based development platform for institutions
#Ethereum #ETH #Crypto #Funding #XAUUSD
