Anastasis Germanidis
@agermanidis
523 posts

Simple ideas, pursued maximally. Co-Founder & Co-CEO @runwayml.

New York, NY · Joined May 2011
425 Following · 4.7K Followers
Jack Parker-Holder (@jparkerholder):
@taiuti I think a realtime avatar model is more closely aligned with the original world model definition than a Gaussian splat
Replies 3 · Reposts 0 · Likes 21 · Views 1.4K

Anastasis Germanidis retweeted
Runway (@runwayml):
Learn how to create your own Runway Character via the Runway API. And start bringing real-time video agents directly into your apps, products, websites and experiences. Get started at the link below.
Replies 17 · Reposts 59 · Likes 485 · Views 104.9K

Anastasis Germanidis retweeted
Alejandro Matamala Ortiz (@matamalaortiz):
Building new products and applications has completely changed. Today, more than ever, it is crucial to ask ourselves what the next generation of applications for generative video will be, how we are building them, and how this will impact every single industry.

I'm very excited to start Runway Labs, an internal incubator dedicated to exploring new products at the frontier of generative video and General World Models. We are at a very special moment in time to define how this future will look.

I'm hiring an initial team to build with: design engineers, AI engineers, and builders who want to create new kinds of experiences and products that haven't been possible before. If this sounds interesting to you, reach out.
Quoting Runway (@runwayml):

Today we are introducing Runway Labs, a generative AI incubator led by our co-founder and Chief Innovation Officer, Alejandro Matamala Ortiz. Runway Labs will focus on broadly exploring the transformative power of AI video and General World Models across all industries, from film and television to healthcare, education, gaming, advertising, real estate and more. We will be partnering with creators, enterprises, institutions and foundations to find new applications and opportunities for these technologies across industries. Learn more at the link below.

Replies 8 · Reposts 13 · Likes 126 · Views 15.1K

Anastasis Germanidis retweeted
Yohei (@yoheinakajima):
btw it took ~5 min to set up this demo, most of which was waiting for Gemini to generate a few Ghibli images of me, while having ChatGPT write a prompt based on what it knows about me (I've fed a lot of my granola transcriptions into it)
Quoting Yohei (@yoheinakajima):

haha, the @runwayml real-time characters are great. I just spent 2 minutes pitching a Ghibli version of myself a bladeless blender startup. It's only a 2 min demo, but the questions and tone are pretty solid. One question even threw me off a bit (the cleaning); asking which round was unnecessary. It got a little weird at the end right before it cut off, which is due to a 2 min limit when testing on the site (which seems unnecessary since they're charging me for the API; why not let me talk longer?).

At current pricing, this would be roughly ~$6 for a 30 min call, or $12 an hour. This doesn't include the cost of running an extraction prompt in the background to store this data, but LLM tokens are so cheap. Very expensive compared to mean vc (free, since it's in the GPT marketplace) or even a self-hosted chat agent, but it's way cheaper than an associate.

While it has downsides, like not being a real person, there are also benefits: consistent and unbiased information gathering, being available 24/7, being able to talk to founders in parallel, and consistent note taking. And it will likely get cheaper.

(Sorry my audio volume is low; I'm testing while everyone else in the house is asleep.)
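A quick sanity check on the pricing math in the tweet above. The per-minute rate here is inferred from the "$6 for a 30 min call" figure; it is not Runway's published pricing, which may differ.

```python
# Back-of-envelope check of the pricing figures in the tweet.
# The per-minute rate is inferred from "$6 for a 30 min call";
# actual Runway API pricing is an assumption here, not a fact.
per_minute = 6.00 / 30  # inferred rate, roughly $0.20/min

def call_cost(minutes: float) -> float:
    """Estimated cost of a real-time character call at the inferred rate."""
    return round(per_minute * minutes, 2)

print(call_cost(30))  # 6.0
print(call_cost(60))  # 12.0, matching the "$12 an hour" figure
```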

Replies 8 · Reposts 6 · Likes 37 · Views 6.4K

Anastasis Germanidis retweeted
Nicolas Neubert (@iamneubert):
Using Runway Character's API, I built a character with the entire map knowledge base of Bungie's latest title @MarathonTheGame. It can read the screen, guide you to objectives, and help you decide which valuables to extract. Just a preview of how gaming will change with AI.
Replies 72 · Reposts 51 · Likes 547 · Views 65.5K

Anastasis Germanidis retweeted
Runway (@runwayml):
Introducing Runway Characters: real-time intelligent avatars that turn the internet into a conversation. Deployable anywhere via the Runway API, Runway Characters can be customized across every style, with the ability to embed bespoke knowledge banks, custom voices and instructions. Start integrating Runway Characters directly into your apps, websites, products and services today. Available now at the link below.
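To make the announcement concrete, here is a minimal sketch of what configuring such a character could look like. The field names, endpoint flow, and helper below are illustrative assumptions, not the documented Runway API; consult the official API reference for the real schema.

```python
# Hypothetical sketch only: assembling a character definition with the
# pieces named in the announcement (knowledge bank, custom voice,
# instructions). Field names are assumptions, not Runway's actual schema.
import json

def build_character_config(name, voice, instructions, knowledge):
    """Bundle a character's persona, voice, behavior instructions,
    and bespoke knowledge bank into one config dict."""
    return {
        "name": name,
        "voice": voice,               # custom voice id (assumed field)
        "instructions": instructions, # system-style behavior prompt
        "knowledge_bank": knowledge,  # documents the character can draw on
    }

config = build_character_config(
    name="support-guide",
    voice="warm-en-1",
    instructions="Answer questions about our product politely.",
    knowledge=["faq.md", "pricing.md"],
)
payload = json.dumps(config)  # body you would POST to a session endpoint
```

The point of the sketch is the shape of the data, not the transport: however the real API names these fields, a character is essentially a persona plus a voice, instructions, and a retrievable knowledge bank.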
Replies 20 · Reposts 78 · Likes 408 · Views 52.5K

Anastasis Germanidis retweeted
Cristóbal Valenzuela (@c_valenzuelab):
Last week we debuted our new real-time video agents with one of the hardest demos possible: live television. The BBC is now using Runway Characters to augment segments of their programming. Wild to see this live. So excited for all the new possible applications to come.
Replies 37 · Reposts 32 · Likes 210 · Views 33.3K

Anastasis Germanidis retweeted
Runway (@runwayml):
Testing robot policies on hardware is slow, expensive and hard to scale. World models offer a promising path to accelerating robot policy development. We're sharing new research from the Runway Robotics team, in which we simulated 8 robot policies inside our General World Model and found a 0.95 correlation with real-world results. These early results point to world model simulation as a practical substitute for hardware evaluation, comparing favorably to existing real-to-sim approaches. Learn more at the link below.
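For readers unfamiliar with the metric above, the 0.95 figure is a Pearson correlation between per-policy scores measured in simulation and on real hardware. The eight scores below are invented for illustration; only the metric itself is what the research reports.

```python
# Illustration of the correlation metric: Pearson r between per-policy
# success rates in simulation vs. on real hardware. The eight values
# below are made up for the example, not Runway's data.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

sim  = [0.91, 0.45, 0.78, 0.60, 0.33, 0.85, 0.70, 0.52]  # hypothetical
real = [0.88, 0.50, 0.74, 0.63, 0.30, 0.90, 0.66, 0.55]  # hypothetical
r = pearson(sim, real)  # high r means sim ranks policies like hardware does
```

A high r is what makes simulation useful as a stand-in: if the simulator ranks policies the same way hardware does, you can select policies in sim and only verify the winners physically.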
Replies 6 · Reposts 15 · Likes 89 · Views 12.9K

Anastasis Germanidis retweeted
Runway (@runwayml):
World models are the most transformative technology of our time. Our mission at Runway is to accelerate their development and ensure they have a positive impact on the world. Today we're announcing $315 million in Series E funding to help advance this work. Learn more at the link below.
[image]

Replies 21 · Reposts 56 · Likes 239 · Views 77.2K

Anastasis Germanidis retweeted
Runway (@runwayml):
Introducing Image to Video for Gen-4.5, the world's best video model. Built for longer stories. Precise camera control. Coherent narratives. And characters that stay consistent. Gen-4.5 Image to Video is available now for all paid plans.
Replies 224 · Reposts 470 · Likes 4.1K · Views 743.4K

@alexgnewmedia:
Hi Anastasis, thanks for asking. I really love Runway. I've been using the Workflows feature since it launched, and it's great to see how it has evolved over time. The updates since the beginning have been very noticeable and positive.

I'm not a node specialist, although I've played a bit with similar tools like ComfyUI at a beginner level and Unreal Engine 5, which has "Blueprints", though obviously not AI-focused. Even so, I already think Runway Workflows is in a very good place. When I mentioned "missing nodes," what I really meant was missing handy features to make the experience smoother, although there are a few nodes I would personally love to have, and I'm sure some of them are already on your radar. Others might not even be technically possible. It's also very possible that some things I feel are missing already exist, and I'm simply not using them in the right way. 🙂

That said, here are a few things that come to mind based on real needs I've already run into:

1. Let's say I create a text instruction for an LLM node that outputs several different prompts. Right now, to extract each prompt individually, I need to create a new LLM node and a new text node for each one. It would be very helpful to have a way to extract multiple prompts from a single LLM output without needing separate text nodes for each. I know Weavy has some kind of array and list nodes for those cases; something like that in Runway would be cool.

2. As workflows grow, they can become quite large, and connecting distant nodes can be a real pain. A node that simply references or reroutes a connection would help a lot. ComfyUI has something called a Reroute Node, and having something similar in Runway would be amazing.

3. Some way to group nodes feels essential. I know we can add labels, which is useful, and we can also select multiple nodes to move or copy them together. Still, proper grouping would really help organize workflows, move sections more easily, and reuse parts via copy and paste. A small note on labels: I think labels should be created at the position where the user intends them to be. Right now, they usually appear where I was last working. For example, if I'm at the bottom of a large workflow and remember I want to label something at the top, the label still appears at the bottom. Since labels are small, I sometimes have to hunt for them. 🙂

4. I might be wrong here, but at the moment I believe the sharing feature only works within the same organization. It would be fantastic to be able to share workflows with people outside a workspace. Sharing them directly would be much easier than trying to explain them step by step.

I would add a few more ideas, but this is already getting long, so I'll stop here. 🙂 Sorry about that, and thanks again. Runway really rocks, and I'm really looking forward to the next releases. Cheers. ;)
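The "array output" idea in point 1 of the reply above can be sketched in a few lines: splitting one LLM node's text output into individual prompts, instead of wiring a separate LLM and text node per prompt. The numbered-list format is an assumption about how the LLM was asked to emit its prompts.

```python
# Sketch of extracting multiple prompts from a single LLM output.
# Assumes the LLM was instructed to return a numbered list; any other
# delimiter (blank lines, bullets) would just change the regex.
import re

def split_prompts(llm_output: str) -> list[str]:
    """Split items like '1. ...' / '2) ...' into a list of prompts."""
    items = re.split(r"^\s*\d+[.)]\s*", llm_output, flags=re.MULTILINE)
    return [item.strip() for item in items if item.strip()]

out = "1. A foggy harbor at dawn\n2. Neon alley in the rain\n3. Desert dunes at dusk"
prompts = split_prompts(out)
# ['A foggy harbor at dawn', 'Neon alley in the rain', 'Desert dunes at dusk']
```

This is essentially what an array/list node would do inside the graph: one upstream LLM node, one splitter, and downstream nodes indexing into the resulting list.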
Replies 1 · Reposts 0 · Likes 2 · Views 119

@alexgnewmedia:
Over the past month, I’ve been taking my time building my own workflows and quietly experimenting with different tools. I’ll be sharing some of these explorations soon. One of them is @runwayml workflows. I built a custom workflow to speed up my creative process and, just for fun, tested it with a quick concept ad. I’m really enjoying the node-based approach. It’s fast, flexible, and genuinely fun to work with. Within the workflow, I write the concept and upload a mood image, a product image, a character sheet, and a logo. In this case, I invented all of them. From there, I can quickly test consistent shots and explore different ideas, ready to be edited later in Premiere.
[image]

Replies 2 · Reposts 0 · Likes 9 · Views 406

@alexgnewmedia:
@TheBLUDSIMPLE @runwayml Thanks, my friend. Yeah, I like Runway Workflows, although it is still missing some important nodes. I will also try something with Freepik Spaces, which is at the moment more robust. I like the way nodes help in building a coherent process. ;)
Replies 1 · Reposts 0 · Likes 0 · Views 85

Anastasis Germanidis (@agermanidis):
A short statement on our mission ahead: to train generalist models directly on observations from the universe, in what we believe will be the most important (and fun) technological quest of our time.
Quoting Runway (@runwayml):

Universal World Simulator

Soon, everyone will have access to their own world simulator. This will be the most important technological development of our time.

Video models trained at sufficient scale become world models. To predict the next frame, a video model must learn how the world works: how objects move, how forces propagate, how actions cause effects. General world models are learned approximators of physics.

The hardest problems facing humanity are rooted in physical reality. Robotics, medicine, climate, materials, energy. Language models will not get us there. Text distills existing human knowledge. In order to move beyond that, we need to learn directly from raw observations of the world.

Universal simulation is fundamentally about access. Experiences limited by geography and cost will become available to anyone. Running experiments today requires equipment, funding, and institutional access. With world simulators, the tools of discovery stop being scarce. Small teams will be able to develop autonomous systems and test policies across millions of scenarios without physical infrastructure. A student anywhere in the world will have her own biology lab and her own particle accelerator.

Progress will not happen overnight, but it will happen faster than people expect. We anticipate half a decade before we achieve human-scale world simulation: interactive simulations indistinguishable from the real world. Within a decade, we expect to simulate physics and biology accurately enough to solve a significant percentage of today's scientific challenges.

At Runway, we choose to invest in this long-term research vision rather than short-term optimization. At the same time, we will be deploying world models incrementally: this is how the world comes to understand what they are capable of, and how we learn to build them responsibly.

Science is about understanding reality. Art is about transcending it. These are two expressions of the same capability, which is why we need to advance them together. Runway will always operate at their intersection.

- Anastasis, Alejandro, Cris

Replies 0 · Reposts 5 · Likes 58 · Views 5.3K

Anastasis Germanidis retweeted
Runway (@runwayml):
Today we shared five exciting announcements across both product and research that outline our vision for how AI will change how stories are told, how scientific progress is made, and how the next frontiers of humanity are reached. Learn more below about how we're building AI to simulate the world.
[image]

Replies 11 · Reposts 26 · Likes 164 · Views 30K

Anastasis Germanidis retweeted
Runway (@runwayml):
Runway@runwayml·
5 things. Tomorrow 12pm ET.
Replies 57 · Reposts 43 · Likes 361 · Views 97.1K

Anastasis Germanidis (@agermanidis):
Introducing Whisper Thunder, or Gen-4.5. It's the best video model in the world, able to handle very complex sequences of actions and events with amazing fidelity, and it's just so fun to use. Can't wait to make it available to everyone in the coming days. It's a validation of our research vision and infrastructure, but most of all of this incredibly talented and generative team. Our base model sets an upper bound for all our downstream research efforts in world modeling. And, right now, that upper bound is very high.
[image]

Quoting Runway (@runwayml):

Introducing our new frontier video model, Runway Gen-4.5. Previously known as Whisper Thunder (aka David). Gen-4.5 is state-of-the-art and sets a new standard for video generation in motion quality, prompt adherence and visual fidelity. Learn more below.

Replies 11 · Reposts 22 · Likes 202 · Views 26.3K

MBZ (@babaeizadeh):
@agermanidis Congrats! Looks like a great model :)
Replies 1 · Reposts 0 · Likes 2 · Views 440