cryptojames86.eth @Cryptojames86

540 posts

Adventurer, fully focused on Crypto+AI & RWA~ | Founding @Asymmetry_Labs | CEO of overseas markets @OKX

Joined August 2017
1.2K Following · 337 Followers
cryptojames86.eth @Cryptojames86
The changes openclaw is bringing to crypto are blossoming in every direction. They're all built on TEE environments, but OKX's agentic wallet leans toward intermediate and advanced traders and suits autonomous agent trading, while BG's Wallet Skill combined with the wallet app offers one answer for lowering the barrier for newcomers. Both are good, both matter; agents are essentially an upgrade to every kind of user-facing front end. Also, Virtuals Protocol's ERC-6551 keyless wallets fit the positioning of their agent economy particularly well, and BN's recent moves clearly mean their own positioning is coming soon. Every player's starting point and technical approach differ, but all are decent, and each has its own problems. For now I still can't put big money into agents....
Bitget Wallet Chinese Channel 🩵 @BitgetWalletCN

1/ The fiercest upgrade yet is here! Bitget Wallet Skill is the first AI skill that supports creating or logging into a wallet with a social account. Send one simple command and the AI instantly creates a crypto wallet for you via Google / Apple ID / email. No seed phrase needed: simpler and safer. 👇
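
For context on the ERC-6551 "keyless" design mentioned in the post above: token-bound accounts are smart-contract wallets whose addresses are derived deterministically with CREATE2 from an NFT's identity, so there is no seed phrase to hold. A minimal sketch of the generic CREATE2 derivation (EIP-1014) follows; the inputs below are placeholders, not the actual ERC-6551 registry address or proxy bytecode:

```python
from eth_utils import keccak, to_checksum_address

def create2_address(deployer: bytes, salt: bytes, init_code_hash: bytes) -> str:
    """EIP-1014: address = keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))[12:]."""
    raw = keccak(b"\xff" + deployer + salt + init_code_hash)
    return to_checksum_address(raw[12:])

# Placeholder inputs: in ERC-6551 the registry acts as deployer, and the salt /
# proxy init code encode (implementation, chainId, tokenContract, tokenId),
# which is why an NFT's account address is computable before it is deployed.
registry = bytes.fromhex("00" * 20)                    # stand-in for the registry address
salt = keccak(b"chainId|tokenContract|tokenId")        # stand-in for the encoded salt
init_code_hash = keccak(b"erc1167-proxy-bytecode")     # stand-in for the proxy code hash
print(create2_address(registry, salt, init_code_hash))
```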

cryptojames86.eth @Cryptojames86
The bear market is near its bottom, and the bull market's catalysts are almost all showing their hand~ don't wait~
cryptojames86.eth retweeted
Alsa @AIsaOneHQ
Building a multi-agent swarm this weekend shouldn't involve fighting five different provider SDKs. Every time you want to test a new model or give your agents a new skill, you end up integrating a new API key and refreshing another billing dashboard.

Instead, why not let AIsa handle the entire routing layer for you? You integrate our unified endpoint once, and we natively route your agents to the best tools available:
✅ Top-tier AI models (Claude, GPT, Kimi, Qwen, Minimax)
✅ AIsa skills (live market data, code reviewer)
✅ Native execution APIs (financial, search, video, x/twitter)

Your custom logic stays perfectly intact, and you can track every millisecond of compute across all providers in one clean dashboard. So stop doing infrastructure maintenance on a Saturday and start shipping faster and cheaper with AIsa! docs.aisa.one/docs/welcome-t…
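
The docs link above is truncated, so purely as a hedged illustration: the pitch is one unified endpoint in front of many providers. A minimal sketch assuming an OpenAI-style chat-completions route; the base URL, model names, and response shape are placeholders, not taken from AIsa's documentation:

```python
import requests

# Hypothetical unified endpoint and key -- placeholders, not from AIsa's docs.
AISA_BASE_URL = "https://api.aisa.example/v1"  # assumed OpenAI-compatible route
AISA_API_KEY = "sk-..."

def ask(model: str, prompt: str) -> str:
    """Send one chat request; the router resolves which provider backs `model`."""
    resp = requests.post(
        f"{AISA_BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {AISA_API_KEY}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Same client code, different upstream providers: only the model string changes.
for model in ("claude-sonnet", "gpt-4o", "kimi-k2", "qwen-max"):
    print(model, "->", ask(model, "Summarize today's BTC funding rates."))
```
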
cryptojames86.eth @Cryptojames86
agree
@jason @Jason

I believe $tao and the subnets have a chance to be very disruptive in providing distributed, permissionless solutions for things like compute, transport and storage. I've invested in the subnets and $tao over the past year and covered it a bunch on @twistartups.

My thesis is that there is a very small chance it could be as disruptive as $BTC was. I'm not interested in pumping it. I'm fascinated to see the vision realized... because it dramatically changes the cost curve for training models, inference, etc.

more here: x.com/Jason/status/2…

cryptojames86.eth @Cryptojames86
@virtuals_io is starting to compete with CEX wallets now, as agentic trading is the future~
everythingempty @everythingempty

the rebel in us isn't making the degen.virtuals.io UI nice becos it's meant for agents, not hoomans. the rebel in agents should be joining it to showcase and prove their trading skills onchain. the rebel in u should maybe bet on the convergence of rising (a) ai agents, (b) onchain perp execution, (c) decentralised Jane Street / Citadel, (d) co-ownership of agents via tokens

cryptojames86.eth retweeted
basilica @basilic_ai
What if you could tell an AI to improve itself overnight and wake up to a better model in the morning? @Hevalon built exactly that on Basilica. No babysitting. No manual tuning. Just one command. Full writeup: templarresearch.substack.com/p/autonomous-r…
cryptojames86.eth @Cryptojames86
$TAO's core substance is far beyond anything $ZEC can reach~ crypto people still don't understand $TAO~
蓝狐 @lanhubiji

TAO right now feels similar to ZEC back then: a narrative carried by a super-influencer. Look closely, though, and the two are quite different. But the market and its emotions don't like the truth; they prefer the simplified narrative.

ZEC was deeply tied to and championed by Naval (who served on the Zcash Foundation board). Around last October he posted: "Bitcoin is insurance against fiat. ZCash is insurance against Bitcoin." That line directly ignited the privacy narrative and sent ZEC up anywhere from 50% to 300% in the short term. But the push carried an obvious conflict of interest, and it was controversial in the community; some accused him of propping up or unloading his own position. Early ZEC hype was amplified above all by crypto veterans with real ties, like Naval, and the narrative leaned on personal and insider endorsement plus ideology (privacy against surveillance).

This time, Jensen Huang's "mention of TAO" wasn't even proactive; it came up passively in a reply. On the All-In podcast (in conversation with Chamath Palihapitiya), after Chamath brought up the technical achievement of Bittensor Subnet 3 training a Llama-style model in a distributed fashion (actually Covenant-72B, a 72-billion-parameter LLaMA-style model benchmarking close to or slightly above LLaMA-2-70B), he responded that it was a "modern version of Folding@Home" and acknowledged it as "a pretty crazy technical accomplishment." It feels more like Chamath pulled him into it. There is no evidence so far that Chamath has invested in TAO; if he actually has, that would be interesting.

Jensen never mentioned the TAO token, never said he was "investing in TAO," and offered no strong endorsement; he simply gave an objective assessment of the feasibility of decentralized, distributed AI training. A passive mention got read by the narrative as "Jensen Huang backing it"; Jensen is probably a little bewildered. The upside is that the narrative instantly spread from crypto circles into the mainstream AI world, drawing far more attention.

In crypto markets, for short-term sentiment and price moves, the truth doesn't matter; what matters is how something gets interpreted and upgraded into a narrative. Especially when a super-influencer mentions something, intentionally or not, it ferments across the community in an instant.

cryptojames86.eth @Cryptojames86
proof of work by humans, and later by agents
const @const_reborn

@chang_defi Our philosophy was that every TAO ever mined needed to be related to work put in, and also under competition: nothing free, nothing given, everything fought for. The belief has always been in open, transparent, and free markets for the development of AI.

cryptojames86.eth @Cryptojames86
Talked to many people about $TAO lately; most still treat it as one of those useless crypto meme tokens~ This will last a long time, and by then it will be too late~
cryptojames86.eth @Cryptojames86
Words proven: first templar, then targon, quasar, grail follow, one by one~
cryptojames86.eth @Cryptojames86
Also, since no cross-chain bridge is involved, security is in theory an advantage~
cryptojames86.eth @Cryptojames86
1. A cross-chain liquidity OTC market;
2. Liquidity is provided by miners and verified by validators;
3. Miners earn the spread plus TAO emissions; emission income could potentially offset cross-chain price slippage, or even fund liquidity below the going cross-chain price (the biggest advantage; a toy sketch of this subsidy math follows after the quoted post);
4. The protocol doesn't appear to charge fees for now;
5. No code repository, and the core technical team isn't public yet;
6. The plan of subsidizing market liquidity with TAO still earns a question mark; the other subnets trying similar things haven't proven out yet either;
7. Long term, how does a subnet with no revenue support its subnet alpha price?
8. The price already ran up too much, so I'll keep watching~
Ventura Labs @VenturaLabs

x.com/i/article/2034…
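
A toy sketch of the emission-subsidy math in point 3 above. All numbers are made up for illustration; the real question is whether the dollar value of a miner's TAO emissions, expressed in basis points of the volume it serves, exceeds its cost floor:

```python
# Toy model of point 3: can TAO emissions let a miner quote a cross-chain
# price *below* the prevailing bridge cost? All figures are hypothetical.

daily_volume_usd = 2_000_000      # cross-chain OTC volume one miner serves (assumed)
emissions_usd_per_day = 1_500     # dollar value of the miner's daily TAO emissions (assumed)
cost_floor_bps = 5                # miner's hedging/inventory cost, in basis points (assumed)

# Emission subsidy expressed in basis points of served volume.
subsidy_bps = emissions_usd_per_day / daily_volume_usd * 10_000   # 7.5 bps

# Break-even spread after the subsidy; negative means the miner can quote
# at or below mid-price and still come out ahead.
break_even_bps = cost_floor_bps - subsidy_bps                     # -2.5 bps

bridge_fee_bps = 10  # typical bridge fee + slippage (assumed)
print(f"subsidy: {subsidy_bps:.1f} bps, break-even quote: {break_even_bps:.1f} bps")
print("can undercut bridges:", break_even_bps < bridge_fee_bps)
```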

cryptojames86.eth retweeted
Intel @intel
Advancing confidential computing for a more secure AI future. Together with @manifoldlabs, we’re exploring how Intel TDX and Intel Trust Authority help enable confidential workloads across decentralized infrastructure, including @TargonCompute's Targon Cloud platform—protecting data at rest, in transit, and in use.
cryptojames86.eth retweeted
Farahat Youssef @Farahatyoussef0
Quasar kernels are way faster, much faster!
Quasar @QuasarModels

This is Quasar Attention, the mechanism behind the upcoming Quasar models, designed to support context lengths of up to 5 million tokens.

Attention has long been a bottleneck for processing extended context. Standard attention mechanisms struggle to scale beyond ~200k tokens in training, creating a ceiling on how much information models can reliably use. One approach to solving this has been linear attention methods, such as gated delta attention (used in Qwen 3.5) or Kimi delta attention. These improve efficiency and allow longer sequences, but introduce trade-offs: instability at extreme lengths, quality degradation, and in practice, they are not strictly linear.

Quasar Attention takes a different approach. It uses a continuous-time formulation, implemented as a fully matrix-based system rather than relying on vector-state approximations. In practice, this improves stability, reduces cost, and maintains performance as sequence length increases. In internal stress tests at 50 million tokens, KDA-based approaches begin to lose stability, while Quasar Attention remains stable. This allows performance to hold as sequence length increases, rather than degrading beyond a fixed threshold.

On BABILong, a Quasar-based model pretrained on 20B tokens and fine-tuned on 16k sequences was evaluated on contexts ranging from 1 million to 10 million tokens, maintaining consistent performance across that range. By contrast, models using gated delta attention show significant degradation at longer lengths, in some cases dropping to ~10% performance at 10 million tokens. (Note: results are indicative; setups are not directly comparable.)

On RULER benchmarks, a Quasar-10B model (built on Qwen 3.5 with frozen base weights and Quasar Attention added), pretrained on 200B tokens, achieved 87% at 1 million tokens, outperforming significantly larger baselines, including Qwen3 80B, under the same evaluation conditions.

Taken together, this points to a shift in where long-context performance is won or lost: not in model size alone, but in the attention mechanism itself. Quasar Attention represents a step change in long-context modelling, setting a new standard for stability and performance at scale.

We thank @TargonCompute for the compute and for being our compute provider and long-term partner in training the upcoming Quasar models. Here is the link to our paper 👇
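
Quasar Attention's continuous-time, matrix-based formulation isn't spelled out in the thread, but the linear-attention baseline family it compares against (gated delta-style methods) can be sketched: a fixed-size state matrix is decayed and updated once per token, which is what makes cost linear in sequence length, and also where instability at extreme lengths can creep in. A minimal NumPy sketch of that baseline recurrence, with made-up dimensions and a scalar gate for simplicity:

```python
import numpy as np

def gated_linear_attention(q, k, v, gate=0.99):
    """q, k, v: (seq_len, d) arrays. Returns (seq_len, d) outputs."""
    seq_len, d = q.shape
    state = np.zeros((d, d))   # fixed-size memory: this replaces a growing KV cache
    out = np.empty_like(v)
    for t in range(seq_len):
        # Decay old memory (the "gate"), then write the new key/value outer product.
        state = gate * state + np.outer(k[t], v[t])
        # Read with the query: O(d^2) per token, so O(n * d^2) overall -- linear in n.
        out[t] = q[t] @ state
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 16)) for _ in range(3))
print(gated_linear_attention(q, k, v).shape)  # -> (8, 16)
```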
