Check out the latest article in my newsletter: The Edge Effect: where music, entrepreneurship, and integral coaching converge linkedin.com/pulse/edge-eff… via @LinkedIn
Check out the latest article in my newsletter: From Vision to Reality: The Personal and Professional Growth of a Founder linkedin.com/pulse/from-vis… via @LinkedIn
Check out the latest article in my newsletter: Wiser Investor: The Future of Funding - Will Private Direct VC Revolutionize Venture Capital? linkedin.com/pulse/wiser-in… via @LinkedIn
Check out the latest article in my newsletter: Building a Winning Strategy - Tips for Success in Private Direct VC linkedin.com/pulse/building… via @LinkedIn
Check out the latest article in my newsletter: Beyond the Hype - Exploring the Challenges and Risks of Private Direct VC linkedin.com/pulse/beyond-h… via @LinkedIn
Check out the latest article in my newsletter: Demystifying the Deal: A Deep Dive into Private Direct VC Transactions linkedin.com/pulse/demystif… via @LinkedIn
Check out the latest article in my newsletter: Part 1: The Renegades of VC: Unveiling the Rise of Private Direct Investors linkedin.com/pulse/part-1-r… via @LinkedIn
“Evalify introduces a paradigm shift by using patents as a benchmark to assess the potential and originality of early-stage ideas.” — Nick Sgobba link.medium.com/QkiGApvm5Ib
Let me clear up a *huge* misunderstanding here.
The generation of mostly realistic-looking videos from prompts *does not* indicate that a system understands the physical world.
Generation is very different from causal prediction from a world model.
The space of plausible videos is very large, and a video generation system merely needs to produce *one* sample to succeed.
The space of plausible continuations of a real video is *much* smaller, and generating a representative chunk of those is a much harder task, particularly when conditioned on an action.
Furthermore, generating those continuations would be not only expensive but totally pointless.
It's much more desirable to generate *abstract representations* of those continuations that eliminate details in the scene that are irrelevant to any action we might want to take.
That is the whole point behind the JEPA (Joint Embedding Predictive Architecture), which is *not generative* and makes predictions in representation space.
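For intuition, here is a minimal sketch of a joint-embedding predictive training step. The encoder, target encoder, and predictor names are hypothetical placeholders, not Meta's actual I-JEPA/V-JEPA code; the point is only that the loss is computed between predicted and target *representations*, never between pixels.

```python
# Minimal JEPA-style sketch (hypothetical modules, assumed PyTorch API).
# The target branch is typically a stop-gradient / EMA copy of the context encoder.
import torch
import torch.nn.functional as F

def jepa_step(context_encoder, target_encoder, predictor, x_context, x_target):
    z_context = context_encoder(x_context)      # representation of the visible context
    with torch.no_grad():
        z_target = target_encoder(x_target)     # target representation, no gradient
    z_pred = predictor(z_context)               # predict the target in representation space
    return F.smooth_l1_loss(z_pred, z_target)   # regression loss on embeddings, not pixels
```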
Our work on VICReg, I-JEPA, and V-JEPA, along with the work of others, shows that Joint Embedding architectures produce much better representations of visual inputs than generative architectures that reconstruct pixels (such as Variational AE, Masked AE, Denoising AE, etc.).
When the learned representations are used as inputs to a supervised head trained on downstream tasks (without fine-tuning the backbone), Joint Embedding beats generative.
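As a rough illustration of that evaluation protocol (generic frozen-backbone probing in PyTorch, an assumption on my part rather than the exact V-JEPA evaluation code): the pretrained encoder is frozen and only a small supervised head is trained on the downstream task.

```python
# Frozen-backbone probe sketch (assumed generic setup, not the official evaluation code).
import torch
import torch.nn as nn

def build_probe(backbone: nn.Module, feat_dim: int, num_classes: int):
    for p in backbone.parameters():
        p.requires_grad = False                       # backbone stays frozen: no fine-tuning
    head = nn.Linear(feat_dim, num_classes)           # only this head is trained
    optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
    return head, optimizer

def probe_step(backbone, head, optimizer, images, labels):
    with torch.no_grad():
        feats = backbone(images)                      # features from the frozen encoder
    loss = nn.functional.cross_entropy(head(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```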
See the results table from the V-JEPA blog post or paper:
ai.meta.com/blog/v-jepa-ya…
Within our portfolio, something extraordinary is brewing! #AdjacentPossible is crafting Evalify - a groundbreaking tool that enhances investors' instincts with practical IP insights, spearheading a transformation in early-stage tech investments. 🚀 Stay tuned! #NobodyStudios
Check out my latest article: Beyond Invention: Unearthing the Strategic Secrets of Intellectual Property: An 8-Part Series Recap linkedin.com/pulse/beyond-i… via @LinkedIn
Adjacent Possible, a valued member of the #NobodyStudios portfolio, harnesses the potential of patents to navigate the complex landscape of VC investments. Make sure to stay tuned for the latest developments in this dynamic space with Adjacent Possible. #AdjacentPossible
Evalifiers, born from the brilliance of Adjacent Possible, is software that revolutionizes VC investments with data and expertise, saving time and unlocking insights from the get-go. Get ready for a VC game-changer! Stay tuned with Adjacent Possible. #AdjacentPossible #VC