morluto

51 posts

@morluto

of world graphs and agents

Joined March 2026
11 Following · 17 Followers
morluto @morluto ·
thank god bigquery has an mcp server these docs are terrible
0
0
0
7
morluto @morluto ·
@novasarc01 at the end of the day you have to make money, alibaba isn't nearly the cash cow amazon is if u look at their books
0
0
0
200
λux @novasarc01 ·
first qwen now Ai2. what is the reason behind collapse/breakdown of open-source orgs? is long-term sustainability fundamentally harder for them or something else?
18
7
139
22.5K
morluto @morluto ·
@parcadei how is it compared to kimi k2.5 as an executor / worker agent
1
0
1
29
dei @parcadei ·
MiniMax M2.7-HighSpeed is a pure killer
dei tweet media
2
0
6
426
morluto @morluto ·
@var_epsilon tradeoffs of push vs pull architecture in the short term
0
0
0
54
varepsilon @var_epsilon ·
why is claude code so bad at invoking skills autonomously
19
2
107
21.3K
morluto @morluto ·
i have met many people who use frieren as their pfp. all of them have been men.
0
0
1
55
morluto @morluto ·
factory's pro plan comes with basically no usage
morluto tweet media
0
0
1
52
morluto @morluto ·
@MTabarrok worked around this by giving it an OCR tool like GLM-vision: one line + a ZAI key
1
0
4
482
Maxwell Tabarrok @MTabarrok ·
still crazy how many tokens Claude Code uses just to figure out how to read pdfs every single time
11
1
59
7.7K
morluto @morluto ·
> what
morluto tweet media
0
0
0
30
Lucas Maes @lucasmaes_ ·
JEPAs are finally easy to train end-to-end without any tricks! Excited to introduce LeWorldModel: a stable, end-to-end JEPA that learns world models directly from pixels, no heuristics. 15M params, 1 GPU, and full planning <1 second. 📑: le-wm.github.io
95
515
3.7K
614.7K
morluto @morluto ·
the codex app is completely soulless but damn it is good. like that one employee who checked out mentally 5 yrs ago but is just ruthlessly efficient
morluto tweet media
0
0
1
21
morluto @morluto ·
kimi k2.5 decided the best way to solve the linting issues was to eslint-disable everything. what in the training data
morluto tweet media
0
0
0
22
DataLust @DataLust_xyz ·
imo hong kong is one of the best places to build a startup right now.
> have access to most of the advanced chinese + western ai models
> very safe: low crime rate + no war
> can travel to shenzhen on weekends
> less than 4 hour flight to Japan, Korea, Singapore & Thailand
46
10
272
33.1K
Cailyn Y. @cailynyongyong ·
when an accelerator application is asking u for a pitch deck u know its cooked
27
6
242
23.8K
morluto @morluto ·
@jia_seed one of the most naturally gifted people I knew dropped out in yr one and spends his days juggling. he's pushing a world record now I think
0
0
0
73
jia @jia_seed ·
there's a whole archetype of ppl who are weird / don't fit into corporate: they're at community college building some kind of crazy advanced mechatronic fursuit, and it's 50/50 they get one of the fancy math prizes later or become a professional comedian. i feel one of the things about silicon valley is that it's really good at making space for these ppl
7
3
44
4.1K
morluto @morluto ·
@_TobiasLee when do you think was the moment x became sufficiently good?
1
0
0
255
Lei Li @_TobiasLee ·
continual learning for agents can be explained in one line: Y = Wx
- change x: memory.md
- change W: weight updates, e.g., LoRA
change x is sufficiently good now
3
1
81
8.6K
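The Y = Wx framing in the tweet above can be sketched with toy arrays. Everything here (the shapes, the rank-2 adapter, the 0.1 scales) is an illustrative assumption, not Lei Li's actual setup:

```python
import numpy as np

# Toy illustration of the two "continual learning" knobs in Y = Wx.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # frozen weights of the base model
x = rng.standard_normal(8)        # the input / context ("x")

y_base = W @ x

# Knob 1 -- change x: keep W frozen and edit the context instead
# (the memory.md approach: new information enters via the input).
memory = rng.standard_normal(8) * 0.1
y_memory = W @ (x + memory)

# Knob 2 -- change W: a LoRA-style low-rank update, W' = W + B @ A,
# where only the small matrices A and B would be trained.
A = rng.standard_normal((2, 8)) * 0.1   # rank-2 adapter
B = rng.standard_normal((4, 2)) * 0.1
y_lora = (W + B @ A) @ x

print(y_base, y_memory, y_lora, sep="\n")
```

In practice "change x" is just context engineering and needs no training loop, while "change W" does; the tweet's claim is that the first knob is now good enough for agents.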
Matt Miesnieks @mattmiesnieks ·
Vision isn’t even remotely close to being solved by AI. Solving it will be as big for Physical work as ChatGPT was for knowledge work.
Kyle Chan @kyleichan
I've been working my way through this epic 7-hour interview with Xie Saining at AMI Labs. I also asked Gemini to give me top 10 takeaways. Biggest ones are that he turned down Ilya twice and believes world models, not LLMs, are the key to AGI.
1. Non-Linear Path to AI & Academic Freedom: Xie emphasizes that his journey wasn't a standard, hyper-competitive path of a "genius." During his time in Shanghai Jiao Tong University's ACM class, his "highlight" was playing video games in his dorm, teaching him the value of unstructured exploration over rigid academic competition [11:46]. He believes the best research is never linear; if a project ends exactly how you initially planned it, it's likely a "boring idea" [02:09:58].
2. Rejecting OpenAI & Ilya Sutskever (Twice!): In 2018, Xie turned down a job offer from OpenAI in favor of Facebook AI Research (FAIR), which led to an angry phone call from Ilya Sutskever [01:21:04]. More recently, he declined an invitation to join Ilya's new startup, SSI, because of a fundamental philosophical disagreement: Ilya believes vision is a "solved" problem and focuses on language, while Xie believes vision and physical world modeling are the true frontiers of AI [01:25:57].
3. Silicon Valley is "LLM-Pilled": Xie argues that the tech industry is currently hypnotized by Large Language Models (LLMs) [05:46:51]. While he acknowledges LLMs are revolutionary communication tools, he insists they are not true "world models" because they operate purely in a digital, text-based space and lack the ability to process the high-dimensional, noisy, continuous signals of the physical world [04:29:36].
4. The Definition of a True World Model: According to Xie, a true world model must go beyond text and video generation. It must be a "predictive brain" that understands the physical world, possesses associative memory, can reason and plan, and can predict the consequences of actions in the real world [04:31:32].
5. Founding AMI Labs with Yann LeCun: Disillusioned by the current Silicon Valley narrative that treats AI research as a "finite game" of benchmark-chasing and product cycles, Xie co-founded AMI Labs with Turing Award winner Yann LeCun [05:00:58]. The startup acts as an "underdog" aiming to build true predictive world models based on LeCun's JEPA (Joint Embedding Predictive Architecture) vision, separate from the dominant LLM narrative [06:04:42].
2
6
34
5.6K
morluto @morluto ·
queuing is kind of broken though
2
0
1
55
morluto @morluto ·
droid's mission control is such an impressive harness, you can tell a lot of heart was put into designing it. seems to work terribly with GLM-5.0 though
9
0
12
197