Edward Saatchi
@SaatchiEdward
2.9K posts
2x Primetime Emmy winner, AI + Movies + Games! @fablesimulation
Joined November 2020
2.2K Following · 2.3K Followers
Edward Saatchi reposted
Claire Silver 🌸 @ClaireSilver
This is a playable video game. It took about 5 minutes to make with AI via Google’s Genie 3. I’m controlling her movement with WASD & the camera with arrow keys. When she nears an object, she subtly interacts with it. Hey storytellers, you were born in the right time after all.
207 replies · 102 reposts · 1.2K likes · 94.7K views
Edward Saatchi reposted
The Simulation @fablesimulation
3 years ago it did seem that perhaps the opportunity with AI and story was to make movies more efficiently and cheaper. But it wasn't the real future: photography wasn't about making cheaper paintings; cinema wasn't about making cheaper plays. Really, it was obvious 2 years ago that this was a new medium - an aware medium that is intelligent by itself, an innately interactive medium, a playable medium that is personalized and remixable. And there's more to discover.

Knowing that this was a new medium, we focused on simulation, bringing characters to life, giving them homes, letting AIs tell their own stories - not graphics. Competing on graphics to make cheaper VFX shots and cheaper movies has, looking back, been a distraction. Indeed, more money has been spent disrupting the VFX industry than the entire VFX industry is worth - the market cannot sustain anywhere near the number of AI VFX startups that have risen up.

If you're running an AI video startup and spending more and more for better and better graphics for VFX and ad professionals, step back and look at this medium - it's not just a part of a filmmaking pipeline. You can't just smoosh this thing into the shape of the old medium. Let's not focus on using the most powerful technology in 100 years to make cheaper Pixar movies, cheaper explosions and cheaper ads. It's so… boring!

The goal needs to shift - away from 'cheap' to making native works of art and masterpieces in this new medium. Join the race! We can only truly explore the medium by making work that is native to it. Runway gets this, Midjourney gets this, a few others. There have only been a couple of new mediums each century; you're all at the center of it. Game on.
Quoting Cristóbal Valenzuela @c_valenzuelab:

The more I meet people who've gone deep into generating AI media, the more I realize we're all reaching the same conclusion: this is a new medium. Not an evolution of something else. Something entirely new, the way photography and film were new.

To understand any medium, you need to look beyond its surface to its core. Some technologies merely augment existing mediums. Collapsible paint tubes changed painting but didn't invent a new medium. Others create completely new forms of expression. Optical lenses, light-sensitive chemicals, and mechanical shutters didn't improve painting. They weren't better brushes or richer pigments. They birthed photography. A medium that captures light itself rather than representing it through human interpretation.

Every new medium brings its own affordances, primitives, and possibilities. Its own audience. Its own generation of creators. When moving pictures first appeared, people saw them as recorded theater. They pointed cameras at stages and filmed plays. It took years of experimentation to discover what the medium actually enabled. Eisenstein discovered montage. That juxtaposing unrelated shots could create new meaning. Porter discovered continuity. That audiences could follow action across cuts. Someone finally moved the camera and changed everything.

Surface similarities deceive us. A painting and a photograph both arrange color and composition across a plane. But mastering paint means understanding pigments, brushes, mixing, color theory. Mastering photography means understanding lenses, shutter speed, aperture, light itself. Yes, composition knowledge transfers. Most knowledge doesn't.

When photography emerged, we made a critical error: we let painters judge it. Because on the surface it looked similar. They dissected this new form through the lens of their own medium, anchoring on what they knew. Predictably, they concluded photography would never match oil's texture, never capture color the way mixed pigments could. They were right and also completely missed the point. Photography wasn't trying to be painting. They were thinking by analogy, judging the new by the standards of the old.

I see this same mistake happening with AI media. Some filmmakers and photographers declare it will never achieve what their mediums achieve. They're right. That's not what this medium is about. Judging AI purely through the lens of film is like painters judging photography purely through the lens of painting. The surface might look similar. Moving images, composed frames. The core is fundamentally different.

AI has its own affordances. Creation is asynchronous. At scale. It benefits from quantity. You navigate through latent space, sampling rather than capturing. You provide references that drive generation. You work in real time, watching possibilities emerge. Some knowledge from painting, film, and games transfers here. Most doesn't.

Mediums always influence each other. Photography didn't kill painting. It freed painting from documentation, letting it explore abstraction, impressionism, and the surreal. Each new medium changes what the others can become.

AI is the birth of a new medium of perception and expression. We're in the early days, still discovering AI's equivalent of montage, of the moving camera, of all those breakthrough moments that reveal what a medium actually is. The filmmakers judging it by film standards will miss what's actually happening. The painters missed photography. The theater critics missed cinema. The only way we'll uncover what this medium can do is to stop judging it by what came before. Stop looking at the surface. Start experimenting with the core. We're not watching films evolve. We're watching something being born. This is a new medium.

3 replies · 6 reposts · 53 likes · 8.6K views
Edward Saatchi reposted
Adam Donabauer @AdDonabauer
@fablesimulation Awesome. I'd be interested in checking it out. I've been working on my own sci-fi series, which I'll be releasing at the end of August - so your venture is particularly interesting to me!
0 replies · 2 reposts · 7 likes · 1.6K views
Edward Saatchi reposted
The Simulation @fablesimulation
Introducing Showrunner: the Netflix of AI.

From our South Park AI experiment to today, we've believed AI movies/shows are a playable medium. We just raised a round from Amazon & more, and the Alpha is live today. Comment for an access code to make with all our shows.
2.4K replies · 899 reposts · 14.5K likes · 76.9M views
Edward Saatchi reposted
The Simulation @fablesimulation
The famous playwright Tom Stoppard did this kind of AI remix first with 'Rosencrantz and Guildenstern Are Dead': the story of Hamlet from the POV of minor characters R & G. With AI movies & AI TV shows, when you find a character intriguing - tell that story!
6 replies · 3 reposts · 14 likes · 3.6K views
Edward Saatchi reposted
Day One Ventures @DayOneVC
From the 2x Emmy-winning team behind the viral South Park AI eps that hit 80M+ views: the first AI-native platform where anyone becomes the showrunner.

Plot twist: the debut release is Exit Valley, a lovable satire of your fav Silicon Valley icons.

Proud of @SaatchiEdward and the @fablesimulation team
Quoting The Simulation @fablesimulation: "Introducing Showrunner: the Netflix of AI…"
4 replies · 3 reposts · 16 likes · 2.6K views
Edward Saatchi reposted
The Simulation @fablesimulation
'Simulations' are a super-flexible AI framework to power products as varied as: Policy Planning, SimGames, Chatbots, AI TV, AI Coworker, the Next Facebook.

Is the Next Facebook a Simulation? In March we'll upload you & friends to Sim Francisco, in a process we call EXCESSION. DM if in SF.
0 replies · 2 reposts · 16 likes · 4.4K views
Edward Saatchi reposted
Cristóbal Valenzuela @c_valenzuelab
The most important button of the next decade
12 replies · 19 reposts · 171 likes · 27.2K views
Edward Saatchi reposted
The Simulation @fablesimulation
In 2019, we created the Baudrillard Society in San Francisco to honor the godfather of simulations: Jean Baudrillard. This year, as we roll out Sim Francisco, we're hosting quarterly dinners with AI founders, investors and researchers. DM if interested!
1 reply · 2 reposts · 12 likes · 2K views
Edward Saatchi reposted
The Simulation @fablesimulation
San Francisco has a population of 800k and a GDP of around $670bn. We're building Sim Francisco to overtake SF. If you're curious and live in SF today: upload parties using the EXCESSION device 🔜 Upload yourself & compete with your double! DM to join the waitlist (SF people only).
2 replies · 6 reposts · 26 likes · 7.4K views
Edward Saatchi reposted
Jim Fan @DrJimFan
If there's a higher being who writes the simulation code for our reality, we can estimate the file size of the compiled binary. Meta AI's Emu Video is 6B parameters. Let's say Sora is 10x larger with bfloat16; then the Creator's binary might be no larger than 111 GB.

Caveats:
- The actual code might be far simpler, as Sora is still far from the Kolmogorov complexity;
- Sora is not just compressing our world, but all possible worlds. Our reality is only one of the simulations that Sora is able to compute;
- It's possible that some parts of the physical world don't exist until you look at them. Much like you don't need to render every atom in UE5 to make a realistic scene.
112 replies · 314 reposts · 2K likes · 505.8K views
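The 111 GB figure in the tweet above is simple arithmetic. A minimal sketch reproducing it, assuming (as the tweet does) a hypothetical Sora at 10x Emu Video's 6B parameters stored in bfloat16:

```python
# Back-of-the-envelope reproduction of the tweet's size estimate.
# Assumptions from the tweet: Sora ~ 10x Emu Video's 6B parameters,
# weights stored in bfloat16 (16 bits = 2 bytes per parameter).
emu_video_params = 6e9                   # Meta AI's Emu Video: 6B parameters
sora_params = 10 * emu_video_params      # hypothetical Sora size
bytes_per_param = 2                      # bfloat16

size_bytes = sora_params * bytes_per_param
size_gib = size_bytes / 2**30            # binary gigabytes (GiB)
print(f"~{size_gib:.1f} GiB")            # ~111.8 GiB
```

120 GB in decimal units is about 111.8 GiB in binary units, which is where the tweet's "no larger than 111" comes from.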
Edward Saatchi reposted
Jim Fan @DrJimFan
If you think OpenAI Sora is a creative toy like DALL-E... think again. Sora is a data-driven physics engine. It is a simulation of many worlds, real or fantastical. The simulator learns intricate rendering, "intuitive" physics, long-horizon reasoning, and semantic grounding, all by some denoising and gradient maths. I won't be surprised if Sora is trained on lots of synthetic data using Unreal Engine 5. It has to be!

Let's break down the following video. Prompt: "Photorealistic closeup video of two pirate ships battling each other as they sail inside a cup of coffee."
- The simulator instantiates two exquisite 3D assets: pirate ships with different decorations. Sora has to solve text-to-3D implicitly in its latent space.
- The 3D objects are consistently animated as they sail and avoid each other's paths.
- Fluid dynamics of the coffee, even the foams that form around the ships. Fluid simulation is an entire sub-field of computer graphics, which traditionally requires very complex algorithms and equations.
- Photorealism, almost like rendering with raytracing.
- The simulator takes into account the small size of the cup compared to oceans, and applies tilt-shift photography to give a "minuscule" vibe.
- The semantics of the scene do not exist in the real world, but the engine still implements the correct physical rules that we expect.

Next up: add more modalities and conditioning, then we have a full data-driven UE that will replace all the hand-engineered graphics pipelines. openai.com/sora
548 replies · 2.6K reposts · 12.9K likes · 6.2M views