chali

108 posts

@UnoLeeb

Joined October 2015
39 Following · 32 Followers
Gita Gopinath@GitaGopinath·
A painting of the end of meritocracy: A meeting of the two largest economies and not one woman at the table.
[image]
English · 14.4K replies · 10.4K reposts · 45.1K likes · 11.5M views
chali@UnoLeeb·
@LuBtc888 What is this dumbass barking about? Coining nonexistent terms all damn day.
Chinese · 0 replies · 0 reposts · 0 likes · 2 views
0x鸣人@LuBtc888·
Luo Zhenyu: Ability isn't that important! Because there's another notion, the thing that hedges against ability: karma (yeli). You can be very capable and still be defeated by your environment, by your childhood memories, by your own bad habits. Ability ends up almost entirely constrained and hedged away. What truly puts a person in a good state is aspiration (yuanli). Aspiration outweighs karma, and karma outweighs ability.
Chinese · 4 replies · 26 reposts · 85 likes · 7.3K views
chali@UnoLeeb·
rust for uno q github.com/RickyCong/rust… @arduino I'll first complete the preliminary groundwork, and I'm confident that the Ventuno Q device will shine brightly in the future.
English · 0 replies · 0 reposts · 0 likes · 11 views
Anthropic@AnthropicAI·
We've published a paper that explains our views on AI competition between the US and China. The US and democratic allies hold the lead in frontier AI today. Read more on what it’ll take to keep that lead: anthropic.com/research/2028-…
English · 1.2K replies · 994 reposts · 5.7K likes · 4.6M views
Corey - Mojee3D@Mojee3d·
I have said this before, and I'll say it again. Especially after yet another @BambulabGlobal middle finger to the community. If @Prusa3D could find a way to sell the Core One+ for $899, they'd put other companies out of business.
[image]
English · 64 replies · 21 reposts · 388 likes · 39.5K views
Arduino@arduino·
One SBC, two brains, unlimited possibilities! By integrating an NPU-accelerated processor and a real-time microcontroller, Arduino VENTUNO Q makes it possible to build machines that perceive, decide, and act – all on a single board. Sign up now: arduino.cc/product-ventun…
[image]
English · 3 replies · 17 reposts · 68 likes · 4.7K views
AYi@AYi_AInotes·
Damn, this Karpathy post just overturned my entire AI workflow from the past six months 🤯

Everyone is waiting for stronger models and bigger context windows, but Karpathy says you've got the direction wrong: the biggest bottleneck in AI right now isn't that the models aren't smart enough, it's that we're still talking to them through text, the lowest-bandwidth channel there is.

He recommends a trick anyone can use today: at the end of any query, add "structure your response as HTML", then have Claude open the result for you. The output is absurdly good. It isn't just some extra color and layout; it's more like you've finally opened up the 10-lane visual superhighway in your brain for the AI to use. For the same content, he argues, HTML is 10x faster to read and understand than Markdown.

This really is the next paradigm of human-computer interaction, because human input and output preferences are inherently asymmetric. For input, audio is most natural: speaking is 4x faster than typing, and thinking flows better. For output, vision is our strength: a third of our cortex is devoted to processing visual information. Yet today we run all traffic in both directions over the single-lane dirt road of text.

Karpathy sketches a clear progression: raw text → Markdown → HTML → interactive neural video. We're standing at the Markdown-to-HTML inflection point right now.

The most exciting part: plenty of people complain that HTML burns tokens and generates slowly, but do the math. Spend 2x the tokens, get 10x the reading speed and depth of understanding; that has to be the best trade in the world, haha. Sadly we've been so conditioned to save tokens that we forgot human time is the truly scarce resource.

One more pointed realization: Markdown is a format for AI to read; HTML is a format for humans to use. Between AI agents, Markdown or even JSON is fine, but everything ultimately consumed by humans should be rendered as HTML. That's the optimal division of labor.

I've now appended that line to the end of every prompt: side-by-side tables for comparisons, color-coded annotations for analysis, interactive sliders for prototypes. The AI no longer dumps a wall of dry text at me to chew through; it builds me an interactive visual thinking space.

Karpathy says the mind meld between humans and machines is only beginning. We don't need to wait for a Neuralink-style brain-computer interface; start using HTML now, and that's the biggest, sweetest low-hanging fruit within reach today 🍒 #AI #Karpathy
Andrej Karpathy@karpathy

This works really well btw: at the end of your query, ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc.

More generally, imo audio is the human-preferred input to AIs, but vision (images/animations/video) is the preferred output from them. Around a third of our brains are a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage:

1) raw text (hard/effortful to read)
2) markdown (bold, italic, headings, tables, a bit easier on the eyes) <-- current default
3) HTML (still procedural with underlying code, but a lot more flexibility on the graphics, layout, even interactivity) <-- early but forming new good default
...4, 5, 6, ... n) interactive neural videos/simulations

Imo the extrapolation (though the technology doesn't exist just yet) ends in some kind of interactive videos generated directly by a diffusion neural net. Many open questions as to how exact/procedural "Software 1.0" artifacts (e.g. interactive simulations) may be woven together with neural artifacts (diffusion grids), but generally something in the direction of the recently viral x.com/zan2434/status…

There are also improvements necessary and pending at the input. Neither audio nor text nor video alone is enough; e.g. I feel a need to point/gesture to things on the screen, similar to all the things you would do with a person physically next to you at your computer screen.

TLDR: The input/output mind meld between humans and AIs is ongoing, and there is a lot of work to do and significant progress to be made, way before jumping all the way into Neuralink-esque BCIs and all that. For what it's worth, at the current stage, hot tip: try asking for HTML.

Chinese · 74 replies · 229 reposts · 1.6K likes · 388.9K views
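Karpathy's tip is easy to try from a script. Here is a minimal sketch in Python, assuming the Anthropic Python SDK (pip install anthropic) and an ANTHROPIC_API_KEY in the environment; the query text and model id below are illustrative assumptions, not taken from the posts above:

# Minimal sketch of the tip: append "structure your response as HTML"
# to a query, save the reply to a file, and open it in the default browser.
import webbrowser
from pathlib import Path

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

query = "Compare Markdown and HTML as output formats for LLM responses."
suffix = "\n\nStructure your response as a single self-contained HTML document."

# Append the suffix to the query and request a completion.
message = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model id (assumption)
    max_tokens=4096,
    messages=[{"role": "user", "content": query + suffix}],
)

# Save the reply and view the generated file in the browser, as the tweet suggests.
html = message.content[0].text
out = Path("response.html")
out.write_text(html, encoding="utf-8")
webbrowser.open(out.resolve().as_uri())

One practical caveat: models often wrap such output in a Markdown code fence, so you may need to strip leading and trailing ``` markers before saving the file.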
hug@yu_jin91444·
No one knows what a sweet, well-behaved face is hiding under the sticker once the glasses go on.
[image]
Chinese · 81 replies · 0 reposts · 158 likes · 7.2K views
goodbyekisser@getawayfromme07·
You must be nodding along hard for me too, like a chick pecking rice.
[image]
Chinese · 4 replies · 0 reposts · 75 likes · 3.7K views
chali@UnoLeeb·
@jinchenma_ai Would you quit yapping? Blocked. It's sickening to look at.
Chinese · 0 replies · 0 reposts · 0 likes · 5 views
金尘马@jinchenma_ai·
US vs. China AI app comparison:

US:
ChatGPT: ~900M weekly actives
Claude: ~19M monthly actives
Gemini: 750M monthly actives
Grok: ~60M monthly actives

China:
Doubao: 345M monthly actives
Qwen: 166M monthly actives
DeepSeek: 127M monthly actives
Kimi: ~9M monthly actives

The US products serve billions of users worldwide, while the Chinese ones serve 1.4 billion domestic users. Think carefully about the different paths behind those scales.
[image]
Chinese · 146 replies · 40 reposts · 282 likes · 127.3K views
chali@UnoLeeb·
@ststwuqi Even the muscles in the sow's rump have wasted away.
Japanese · 0 replies · 0 reposts · 0 likes · 85 views
chali@UnoLeeb·
@Lzirq777 Here you fucking come again pushing porn spam. Fuck you.
Chinese · 0 replies · 0 reposts · 0 likes · 17 views
chali@UnoLeeb·
@cloudwu Alright, got it. Blocked you.
Chinese · 0 replies · 0 reposts · 0 likes · 7 views
云风@cloudwu·
In the 1980s my dad once lived at his work unit for two months; my mom and I brought him meals every day, which is why it left such a deep impression on me. His unit had bought a computer-controlled cement mixing station, and to keep it proprietary the vendor not only withheld the source code, even the part numbers on the chips had been scraped off. When new requirements came up, my dad reverse-engineered the protocol of the entire system, redeveloped it from scratch, and even added very nice animated visualizations.
云风@cloudwu

@plantegg It's normal for each generation to fall short of the last. My dad was far more capable than I am; inheriting a sliver of it has felt like enough. Right now my kid seems far behind where I was at the same age; I hope he picks up something good as he grows up.

Chinese · 92 replies · 22 reposts · 481 likes · 175.7K views
Antonio Li@AntonioSitongLi·
My robot can now feel how hard it's gripping something. I didn't add any sensors. Comment "tactile" and I'll DM how it works.
English · 170 replies · 35 reposts · 752 likes · 325.7K views
Geek@geekbb·
Saw a blog post: those who depend on AI will eventually be abandoned by the times…
[image]
Chinese · 33 replies · 2 reposts · 69 likes · 30.1K views
我不是V1per@iloveV1111·
OK, let me give the world a little flat-chest shock (and this is with a push-up bra?!)
[image]
Chinese · 526 replies · 5 reposts · 904 likes · 133.3K views
mooooon_@mooooonn9o·
Gained 15 jin (7.5 kg)
[image]
Chinese · 49 replies · 0 reposts · 483 likes · 18.8K views
无糖全麦冰面包@ir_duq·
Holy shit… I only just learned that hotel delivery robots have cameras. And I pick up my takeout naked every single time.
Chinese · 1.3K replies · 10 reposts · 2.4K likes · 1.3M views
Abhinav Kukreja@kukreja_abhinav·
Insane month of R&D. On to the next ☺️
English · 7 replies · 10 reposts · 180 likes · 14.7K views