William Carbone

239 posts

@W_Carbone

CEO & Co-founder, The Adjacent Possible

Joined September 2012
237 Following · 116 Followers
William Carbone reposted
Tamas David-Barrett (@tamasdb)
Gendered Species: A Natural History of Patriarchy
(6 replies · 7 reposts · 47 likes · 5K views)
William Carbone (@W_Carbone)
“Evalify introduces a paradigm shift by using patents as a benchmark to assess the potential and originality of early-stage ideas.” — Nick Sgobba link.medium.com/QkiGApvm5Ib
(0 replies · 0 reposts · 0 likes · 9 views)
William Carbone reposted
Yann LeCun (@ylecun)
Let me clear a *huge* misunderstanding here. The generation of mostly realistic-looking videos from prompts *does not* indicate that a system understands the physical world. Generation is very different from causal prediction from a world model.

The space of plausible videos is very large, and a video generation system merely needs to produce *one* sample to succeed. The space of plausible continuations of a real video is *much* smaller, and generating a representative chunk of those is a much harder task, particularly when conditioned on an action.

Furthermore, generating those continuations would be not only expensive but totally pointless. It's much more desirable to generate *abstract representations* of those continuations that eliminate details in the scene that are irrelevant to any action we might want to take. That is the whole point behind JEPA (Joint Embedding Predictive Architecture), which is *not generative* and makes predictions in representation space.

Our work on VICReg, I-JEPA, and V-JEPA, and the works of others, shows that Joint Embedding architectures produce much better representations of visual inputs than generative architectures that reconstruct pixels (such as Variational AE, Masked AE, Denoising AE, etc.). When using the learned representations as inputs to a supervised head trained on downstream tasks (without fine-tuning the backbone), Joint Embedding beats generative. See the results table from the V-JEPA blog post or paper: ai.meta.com/blog/v-jepa-ya…
(184 replies · 728 reposts · 4.8K likes · 2M views)
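The distinction LeCun draws, predicting the target's *representation* rather than reconstructing its pixels, can be illustrated with a toy sketch. Everything below is an illustrative assumption, not Meta's actual V-JEPA code: the tiny linear "encoder", "predictor", and "decoder" matrices stand in for deep networks, and the 4-pixel "frames" stand in for video.

```python
# Toy contrast: JEPA-style latent prediction vs. pixel reconstruction.
# Linear maps below are stand-ins for deep nets; purely illustrative.

def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# A 2x4 "encoder": maps a 4-pixel frame to a 2-d representation,
# averaging pixel pairs (so fine per-pixel detail is abstracted away).
W_enc = [[0.5, 0.0, 0.5, 0.0],
         [0.0, 0.5, 0.0, 0.5]]
W_pred = [[1.0, 0.0],          # identity "predictor" in latent space
          [0.0, 1.0]]
W_dec = [[1.0, 0.0],           # "decoder" back to pixels, used only by
         [0.0, 1.0],           # the generative baseline
         [1.0, 0.0],
         [0.0, 1.0]]

def jepa_loss(context, target):
    """Predict the target frame's *representation*, never its pixels."""
    s_ctx = matvec(W_enc, context)   # encode context frame
    s_tgt = matvec(W_enc, target)    # encode target (an EMA copy in practice)
    s_hat = matvec(W_pred, s_ctx)    # predict in representation space
    return mse(s_hat, s_tgt)

def generative_loss(context, target):
    """Generative baseline: reconstruct every pixel of the target."""
    recon = matvec(W_dec, matvec(W_enc, context))
    return mse(recon, target)

frame_t  = [1.0, 2.0, 1.0, 2.0]
frame_t1 = [1.0, 2.0, 1.0, 2.1]    # one plausible continuation

print(jepa_loss(frame_t, frame_t1))        # error measured in latent space
print(generative_loss(frame_t, frame_t1))  # error measured pixel by pixel
```

Because the encoder abstracts away per-pixel detail, the latent-space loss only penalizes mispredicting what the representation retains, which is the point of the argument: the pixel-level baseline must account for every detail of the continuation, relevant or not.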
William Carbone reposted
Nobody Studios (@NobodyCrowd)
Within our portfolio, something extraordinary is brewing! #AdjacentPossible is crafting Evalify - a groundbreaking tool that enhances investors' instincts with practical IP insights, spearheading a transformation in early-stage tech investments. 🚀 Stay tuned! #NobodyStudios
(0 replies · 5 reposts · 5 likes · 109 views)
William Carbone reposted
Nobody Studios (@NobodyCrowd)
Adjacent Possible, a valued member of the #NobodyStudios portfolio, harnesses the potential of patents to navigate the complex landscape of VC investments. Make sure to stay tuned for the latest developments in this dynamic space with Adjacent Possible. #AdjacentPossible
(0 replies · 2 reposts · 3 likes · 93 views)
William Carbone reposted
Nobody Studios (@NobodyCrowd)
Evalifiers, born from the brilliance of Adjacent Possible, is software that revolutionizes VC investments with data and expertise, saving time and unlocking insights from the get-go. Get ready for a VC game-changer! Stay tuned with Adjacent Possible. #AdjacentPossible #VC
(0 replies · 2 reposts · 3 likes · 86 views)