buzz

8.5K posts


buzz

@buzzaiguy

AI Exploring • Building Communities • Web3

nfa · Joined April 2019
803 Following · 4.4K Followers

Pinned Tweet
buzz
buzz@buzzaiguy·
@Banks wait why
9 replies · 0 reposts · 52 likes · 1.2M views
buzz retweeted
Triple T
Triple T@TripleTsol·
[image]
47 replies · 984 reposts · 6.7K likes · 67.3K views
buzz
buzz@buzzaiguy·
use that printrrrrr
0 replies · 0 reposts · 0 likes · 65 views
buzz
buzz@buzzaiguy·
prob one of the best memes on printr so far, let's all grab some, stake it and print
1 reply · 0 reposts · 1 like · 89 views
buzz
buzz@buzzaiguy·
@LowHash I'll support you
0 replies · 0 reposts · 1 like · 363 views
lowhash.hl
lowhash.hl@LowHash·
@veH0rny My account isn't hacked as far as I know. This was a quick, and obviously not great, announcement after losing everything. Literally have no idea what to do. But all is explained in my recent tweet.
3 replies · 0 reposts · 4 likes · 1.3K views
Rohan Paul
Rohan Paul@rohanpaul_ai·
GenRobot is packaging multimodal robot data collection into wearable hardware. They just launched a 6-camera bionic wearable to capture embodied AI data, addressing common blind spots in traditional monocular setups, such as occlusion and precise hand-object timing.

That matters because embodied models do not learn from pixels alone. They learn from synchronized structure: head pose, hand motion, scene layout, and action timing living on the same clock. DAS Ego uses six 2MP cameras to achieve a zero-distortion 270° horizontal and 150° vertical FOV, enabling mm-level trajectory reconstruction and ultra-low-latency (<1ms) head-hand coordination.

What GenRobot is building is a cleaner way to record natural human interactions. To prove it, they open-sourced "Gen Ego Data," a first-person, human-centric dataset covering 20+ environments and 200+ skills. By capturing authentic interactions, it helps models learn physical laws and "perception-action-outcome" causality, providing core data support for real-world embodied AI deployment.
Genrobot.AI@GenrobotAI

Perception is a system problem. One camera misses depth, occlusion, and hand interactions. Gen DAS Ego uses 6 synced cameras (270° FOV).
What you get: cm-level joint tracking, ms-level head–hand sync, full coverage.
Built for real use: plug-and-play fit, 24/7 swappable battery.

11 replies · 13 reposts · 81 likes · 14.7K views
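Rohan Paul's "same clock" point is the technical crux: a multi-camera rig is only useful if every stream can be bundled frame by frame. Below is a minimal sketch of that alignment step; the stream names, 1 ms tolerance, and data layout are illustrative assumptions, not GenRobot's actual format.

```python
# Illustrative sketch only: aligning frames from multiple cameras onto one
# clock, as the post describes for multimodal capture rigs. Stream names,
# tolerance, and layout are assumptions, not GenRobot's real format.
from bisect import bisect_left

SYNC_TOLERANCE_S = 0.001  # hypothetical 1 ms budget, per the "<1ms" claim


def nearest(timestamps, t):
    """Return the timestamp in a sorted list closest to t."""
    i = bisect_left(timestamps, t)
    candidates = timestamps[max(0, i - 1):i + 1]
    return min(candidates, key=lambda c: abs(c - t))


def align_streams(reference, others, tol=SYNC_TOLERANCE_S):
    """Pair each reference frame time with the nearest frame in every other
    stream; drop reference frames that cannot be matched within tol."""
    bundles = []
    for t in reference:
        matches = {name: nearest(ts, t) for name, ts in others.items()}
        if all(abs(m - t) <= tol for m in matches.values()):
            bundles.append({"t": t, **matches})
    return bundles


# Toy usage: a 30 Hz head camera as reference, two hand cameras with jitter.
head = [i / 30 for i in range(10)]
hands = {
    "left": [t + 0.0003 for t in head],
    "right": [t - 0.0004 for t in head],
}
print(len(align_streams(head, hands)), "synchronized bundles")
```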
buzz
buzz@buzzaiguy·
The post points out phones' core historical purpose, getting a human to tap them for tasks like game testing and app navigation, and notes that William makes that human involvement optional. This captures the excitement around no-code mobile automation, where users upload apps and give plain-English instructions for autonomous execution with full recordings and reports.
shirish@shiri_shh

@jjschnyder phones have had one job for 50 years get a human to tap them william just made that human optional 😭

1 reply · 0 reposts · 3 likes · 341 views
buzz
buzz@buzzaiguy·
The X post by @jjschnyder announces the public beta launch of William, an AI agent from nunu.ai that autonomously operates real physical phones like a human by viewing the screen, tapping, scrolling, and typing. William requires no SDKs, code, or integrations; users upload a mobile app or build, describe tasks in plain English, and receive detailed recordings plus reports showing the agent's reasoning, actions, bugs, and data collected. The product targets automation of repetitive mobile workflows such as game testing and app navigation, claiming to eliminate 5 billion annual human hours spent on manual phone interactions.
Jan Schnyder@jjschnyder

Say Hello to William 👋 The first AI agent with a phone. 5 billion hours this year will be spent tapping the same buttons on a phone. Testing games. Navigating apps. Doing repetitive workflows. We think that should be 0. RT and comment “William” to get access.

0 replies · 1 repost · 4 likes · 576 views
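To make the "no SDKs, code, or integrations" claim concrete, here is a hypothetical sketch of the workflow the summary describes: upload a build, state a task in plain English, poll for a recording and report. Every endpoint, host, and field name below is invented for illustration; nunu.ai's real interface is not documented in these posts.

```python
# Hypothetical client sketch of the described workflow. None of these
# endpoints or field names come from nunu.ai; all are invented.
import time
import requests

BASE = "https://api.example-phone-agent.dev"  # placeholder, not a real host


def run_task(api_key: str, apk_path: str, instruction: str) -> dict:
    headers = {"Authorization": f"Bearer {api_key}"}

    # 1. Upload the app build the agent should operate.
    with open(apk_path, "rb") as f:
        build = requests.post(f"{BASE}/builds", headers=headers,
                              files={"file": f}).json()

    # 2. Describe the task in plain English; no SDK or code hooks needed.
    run = requests.post(f"{BASE}/runs", headers=headers, json={
        "build_id": build["id"],
        "instruction": instruction,  # e.g. "complete the tutorial level"
    }).json()

    # 3. Poll until the agent finishes, then return recording + report.
    while True:
        status = requests.get(f"{BASE}/runs/{run['id']}",
                              headers=headers).json()
        if status["state"] in ("done", "failed"):
            return status  # actions taken, bugs found, video URL
        time.sleep(5)
```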
buzz
buzz@buzzaiguy·
@songjunkr voices hope that Yann LeCun’s JEPA approach finally succeeds with LeWorldModel (LeWM), spotlighting its tiny 15-million-parameter count as a potential game changer versus massive generative models.
• LeWM introduces a stable end-to-end JEPA trained directly from raw pixels using just two losses: next-embedding prediction and a Gaussian regularizer (SIGReg) that prevents representation collapse without heuristics or pre-trained encoders.
• With single-GPU training in hours and up to 48x faster planning than foundation models, LeWM encodes physical structure, detects impossible events, and performs competitively on 2D/3D control tasks.
송준 Jun Song@songjunkr

I really hope @ylecun JEPA works out well this time. 15M parameters only, it’s million not billion. This could be a game changer.

0 replies · 0 reposts · 1 like · 64 views
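A minimal sketch of the two-loss recipe described above: an MSE next-embedding prediction term plus a Gaussian regularizer on embeddings. The paper's SIGReg is more involved than this; the simple moment-matching penalty here is a stand-in meant only to show why such a term blocks representation collapse.

```python
# Sketch of a two-term JEPA objective: next-embedding prediction (MSE) plus
# a Gaussian regularizer. This moment-matching penalty is a simplified
# stand-in for SIGReg, not the paper's exact formulation.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))  # toy encoder
pred = nn.Linear(128, 128)                                  # toy predictor


def gaussian_reg(z):
    """Push embeddings toward zero mean / unit variance per dimension,
    which blocks the collapse where every input maps to one point."""
    mean, var = z.mean(dim=0), z.var(dim=0)
    return (mean ** 2).mean() + ((var - 1) ** 2).mean()


def jepa_loss(frame_t, frame_t1, reg_weight=1.0):
    z_t, z_t1 = enc(frame_t), enc(frame_t1)
    # Term 1: predict the next frame's embedding. Term 2: anti-collapse
    # regularizer on both embeddings, with no stop-grad or EMA tricks.
    return nn.functional.mse_loss(pred(z_t), z_t1) + reg_weight * (
        gaussian_reg(z_t) + gaussian_reg(z_t1))


loss = jepa_loss(torch.randn(32, 64, 64), torch.randn(32, 64, 64))
loss.backward()
```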
buzz
buzz@buzzaiguy·
Lucas Maes announces LeWorldModel, a 15M-parameter JEPA world model trained end-to-end directly from pixels on one GPU, achieving full planning in under 1 second without heuristics or tricks. It simplifies training to a basic two-term loss, MSE for prediction and SIGReg for anti-collapse, eliminating the instabilities of prior methods like EMA, stop-grad, or masking. LeWorldModel matches or exceeds DINO-WM on 2D/3D tasks, plans 48x faster by using far fewer tokens, and shows emergent spatial structure in its latent space purely from the objective.
Lucas Maes@lucasmaes_

JEPA are finally easy to train end-to-end without any tricks! Excited to introduce LeWorldModel: a stable, end-to-end JEPA that learns world models directly from pixels, no heuristics. 15M params, 1 GPU, and full planning <1 second. 📑: le-wm.github.io

0 replies · 0 reposts · 1 like · 294 views
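The "plans 48x faster by using far fewer tokens" claim has a simple mechanical reading: planning happens by rolling candidate action sequences through a small latent dynamics model rather than generating pixels. A toy random-shooting planner along those lines, with every size and the planner choice assumed for illustration rather than taken from the paper:

```python
# Sketch of why latent planning is cheap: roll candidate action sequences
# through a small latent dynamics model and score them against a goal
# embedding. Sizes and the random-shooting planner are assumptions.
import torch
import torch.nn as nn

LATENT, ACTIONS, HORIZON, CANDIDATES = 128, 4, 10, 256
dynamics = nn.Linear(LATENT + ACTIONS, LATENT)  # toy learned predictor


def plan(z_start, z_goal):
    # Sample candidate action sequences: (CANDIDATES, HORIZON, ACTIONS).
    actions = torch.randn(CANDIDATES, HORIZON, ACTIONS)
    z = z_start.expand(CANDIDATES, LATENT)
    with torch.no_grad():
        for t in range(HORIZON):
            z = dynamics(torch.cat([z, actions[:, t]], dim=-1))
    # Keep the sequence whose final latent lands closest to the goal.
    dist = ((z - z_goal) ** 2).sum(dim=-1)
    return actions[dist.argmin()]


best = plan(torch.randn(1, LATENT), torch.randn(1, LATENT))
print(best.shape)  # (HORIZON, ACTIONS): one action per step
```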
buzz
buzz@buzzaiguy·
The post covers LeWorldModel (LeWM), co-authored by Yann LeCun, which introduces a JEPA-based world model that trains end-to-end from raw pixels using only a prediction loss plus a Gaussian regularization term (SIGReg) to prevent representation collapse. With ~15 million parameters, LeWM trains in hours on a single GPU yet plans up to 48x faster than large foundation world models while remaining competitive on 2D/3D control tasks and encoding meaningful physical structure in its latent space. The work validates LeCun's long-standing critique of scaling generative models, showing that compact predictive architectures can learn physics and detect implausible events more efficiently than trillion-parameter LLMs focused on pattern memorization.
How To AI@HowToAI_

Yann LeCun was right the entire time. And generative AI might be a dead end.

For the last three years, the entire industry has been obsessed with building bigger LLMs. Trillions of parameters. Billions in compute. The theory was simple: if you make the model big enough, it will eventually understand how the world works.

Yann LeCun said that was stupid. He argued that generative AI is fundamentally inefficient. When an AI predicts the next word, or generates the next pixel, it wastes massive amounts of compute on surface-level details. It memorizes patterns instead of learning the actual physics of reality.

He proposed a different path: JEPA (Joint-Embedding Predictive Architecture). Instead of forcing the AI to paint the world pixel by pixel, JEPA forces it to predict abstract concepts. It predicts what happens next in a compressed "thought space."

But for years, JEPA had a fatal flaw. It suffered from "representation collapse." Because the AI was allowed to simplify reality, it would cheat. It would simplify everything so much that a dog, a car, and a human all looked identical. It learned nothing. To fix it, engineers had to use insanely complex hacks, frozen encoders, and massive compute overheads.

Until today. Researchers just dropped a paper called "LeWorldModel" (LeWM). They completely solved the collapse problem. They replaced the complex engineering hacks with a single, elegant mathematical regularizer. It forces the AI's internal "thoughts" into a perfect Gaussian distribution. The AI can no longer cheat. It is forced to understand the physical structure of reality to make its predictions.

The results completely rewrite the economics of AI. LeWM didn't need a massive, centralized supercomputer. It has just 15 million parameters. It trains on a single, standard GPU in a few hours. Yet it plans 48x faster than massive foundation world models. It intrinsically understands physics. It instantly detects impossible events.

We spent billions trying to force massive server farms to memorize the internet. Now, a tiny model running locally on a single graphics card is actually learning how the real world works.

1 reply · 2 reposts · 5 likes · 1.2K views
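The "instantly detects impossible events" claim also has a straightforward operational reading: flag any transition whose observed next embedding lands far from the model's prediction. A sketch, with the threshold and toy models purely illustrative:

```python
# Sketch of impossible-event detection with a world model: a transition is
# implausible when the observed next embedding is far from the predicted
# one. Threshold and models here are illustrative only.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128))
pred = nn.Linear(128, 128)


def surprise(frame_t, frame_t1):
    """Prediction error in latent space; high values mean the model's
    learned physics did not expect this transition."""
    with torch.no_grad():
        return nn.functional.mse_loss(pred(enc(frame_t)),
                                      enc(frame_t1)).item()


THRESHOLD = 0.5  # would be calibrated on held-out normal transitions
a, b = torch.randn(1, 64, 64), torch.randn(1, 64, 64)
print("implausible" if surprise(a, b) > THRESHOLD else "plausible")
```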
buzz
buzz@buzzaiguy·
@senamakel how are you getting funded for this project? Do you accept donations?
0 replies · 0 reposts · 0 likes · 193 views
Steven Enamakel
Steven Enamakel@senamakel·
We are killing OpenClaw. Introducing OpenHuman, a self-learning AI with memory that gets better the more you use it. Comment "Openhuman" and I'll send the download link 👇
66 replies · 30 reposts · 125 likes · 24.9K views
buzz
buzz@buzzaiguy·
The post showcases "html-in-canvas," an experimental Chrome feature that allows standard HTML UIs to render directly as interactive 3D surfaces with cloth-like physics, demonstrated in a video where a deformable profile card responds to clicks and typing while bending in 3D space. Shared by AR app developer Sadao Tokuyama (@tokufxug), it enables placing complex browser interfaces into Web3D environments without custom texture work, with links to a LinkedIn post and a GitHub plugin for further details. The demo highlights the potential for efficient XR and immersive UI creation, drawing reactions about its impact on metaverse interfaces, performance in deformed states, and the shift away from traditional DOM handling.
Sadao Tokuyama@tokufxug

"html-in-canvas" is a new feature that can render HTML directly as 3D surface textures. Complex UI screens can be placed into 3D space as-is, using only the web browser's built-in capabilities. It is currently an experimental Chrome feature, but it is a promising technology with the potential to dramatically streamline Web3D UI production.

3 replies · 0 reposts · 5 likes · 327 views
buzz
buzz@buzzaiguy·
@PixelBilly's post features a 41-second video demo of a floating, cloth-like webpage "curtain" that deforms realistically when poked or dragged by the mouse in a 3D browser scene, complete with a 3D avatar interacting with it. The effect uses the experimental WICG HTML-in-Canvas API, allowing styled, interactive HTML elements, including video and mouse events, to render directly into a canvas with physics simulation. The playful "in a mood today" caption highlights the curtain's dynamic behavior; a later reply notes the tech was reposted and sparked viral interest plus informal crowdfunding on X.
Thomas Richter-Trummer@PixelBilly

my curtain is in a mood today

0 replies · 0 reposts · 4 likes · 175 views
buzz
buzz@buzzaiguy·
We are gonna help out enoch so he can keep building Lykeion!
[image]
2 replies · 0 reposts · 5 likes · 383 views
buzz
buzz@buzzaiguy·
@_ppotat0 explains his project Lykeion as "Udemy for memecoins," created to teach beginners Solana memecoin trading and reduce the risk of rug pulls through structured education. The platform at lykeion.fun features courses on wallets/bots/terminals, an AI assistant for wallet and token analysis, plus a demo trading terminal using live market data for risk-free practice. Shared in a Solana community welcome thread, the post positions Lykeion as a practical response to the common pitfalls new traders face in volatile memecoin spaces.
enoch@_ppotat0

@revival_sol @buzzaiguy Lykeion is basically Udemy for memecoins. There was no place for newbies to learn how to trade memecoins without getting rugged, so I decided to make it.

0 replies · 0 reposts · 2 likes · 44 views
buzz retweeted
Momo
Momo@momo_mattomo·
Buying $Lykeion on Lykeion. Good tek.
[image]
0 replies · 2 reposts · 15 likes · 927 views
buzz
buzz@buzzaiguy·
dex has been paid
2 replies · 1 repost · 10 likes · 706 views
buzz
buzz@buzzaiguy·
Welcome @_ppotat0, we are so glad to have you here!
1 reply · 0 reposts · 12 likes · 779 views