Shawn Wilkinson

1.9K posts

Shawn Wilkinson
@super3

Building Prodia, AI Inference API. Founder of Storj.

Atlanta, GA · Joined January 2009
691 Following · 6.4K Followers
Shawn Wilkinson @super3 ·
Is self-learning AI just much larger context windows, plus an efficient subprocess to move information in and out of an external store?
0 replies · 0 reposts · 2 likes · 873 views
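The tweet above sketches an architecture: a bounded context window plus a background process that pages memories to and from an external store. Below is a minimal sketch of that loop, assuming a hypothetical retrieval-based store; `ExternalStore`, `embed`, and the word-overlap scoring are illustrative stand-ins, not any real API.

```python
# Minimal sketch of the idea above: a fixed-size context window paired with
# an external store, and a routine that pages information in and out.
# All names here (ExternalStore, embed) are hypothetical illustrations.

from collections import deque

def embed(text: str) -> set[str]:
    # Stand-in for a real embedding: a bag of lowercase words.
    return set(text.lower().split())

class ExternalStore:
    """Unbounded memory living outside the model's context window."""
    def __init__(self):
        self.items: list[str] = []

    def write(self, text: str) -> None:
        self.items.append(text)

    def read(self, query: str, k: int = 2) -> list[str]:
        # Retrieve the k stored items that best overlap the query.
        q = embed(query)
        return sorted(self.items, key=lambda t: -len(embed(t) & q))[:k]

class ContextWindow:
    """Bounded working memory; evicted items spill to the external store."""
    def __init__(self, store: ExternalStore, capacity: int = 4):
        self.store = store
        self.buffer = deque(maxlen=capacity)

    def add(self, text: str) -> None:
        if len(self.buffer) == self.buffer.maxlen:
            self.store.write(self.buffer[0])  # page out the oldest entry
        self.buffer.append(text)

    def recall(self, query: str) -> list[str]:
        # Page relevant memories back in alongside the live context.
        return list(self.buffer) + self.store.read(query)

store = ExternalStore()
ctx = ContextWindow(store)
for note in ["jest config lives in package.json", "mochi runs on a 4090",
             "storj uses erasure coding", "prodia serves image models",
             "r2 stores the model weights"]:
    ctx.add(note)
print(ctx.recall("which GPU runs mochi?"))
```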
Zitao @ZitaoTech ·
16+512 setup cyberdeck monster - HackberryPi 5
5 replies · 8 reposts · 56 likes · 3.5K views
Shawn Wilkinson retweeted
Cloudflare Developers @CloudflareDev ·
@prodialabs, one of the world's fastest image generators, uses Cloudflare R2 for AI model storage & transfer, calling it the "fastest and most economical solution". See how they built with Cloudflare 👇
1 reply · 4 reposts · 16 likes · 3.7K views
cocktail peanut @cocktailpeanut ·
Starting Pinokio 4 Alpha Test Today. To apply, reply here or let me know on Discord.
257 replies · 13 reposts · 255 likes · 76K views
Shawn Wilkinson @super3 ·
Not sure how to feel about OpenAI Codex. It was able to increase test coverage on one of my projects, but it wasn't able to get Jest installed properly to run the tests...
2 replies · 0 reposts · 2 likes · 835 views
Shawn Wilkinson @super3 ·
@cocktailpeanut How is the Pinokio API for WAN? I could build this right now just by doing multi-path video generation.
0 replies · 0 reposts · 3 likes · 71 views
cocktail peanut @cocktailpeanut ·
If there's a way to do this very efficiently (let's assume it takes 1 second to generate a future frame), we can: 1. Generate 100 different futures in 100 seconds. 2. Pick the future I want to see happen. 3. Do a start/end-frame-to-video interpolation.
3 replies · 0 reposts · 16 likes · 3.2K views
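Taken together with the image-to-futureImage idea in the next tweet, this thread describes a concrete workflow: fan out many cheap single-frame futures, pick one, then interpolate. Here is a rough sketch under the assumption that such models exist as callables; `generate_future_frame` and `interpolate` are hypothetical placeholders, not any real library's API.

```python
# Sketch of the branching-futures workflow from the thread above.
# generate_future_frame() and interpolate() are hypothetical stand-ins for
# an image-to-futureImage model and a start/end-frame interpolation model.

import random

def generate_future_frame(start_frame: str, seed: int) -> str:
    # Placeholder: a real model would return the frame N seconds ahead.
    random.seed(seed)
    return f"{start_frame}->future(seed={seed}, drift={random.random():.3f})"

def interpolate(start_frame: str, end_frame: str, n_frames: int = 48) -> list[str]:
    # Placeholder: a real model would synthesize the in-between frames.
    return [f"blend({start_frame}, {end_frame}, t={i/n_frames:.2f})"
            for i in range(n_frames + 1)]

def explore_futures(start_frame: str, n_futures: int = 100) -> list[str]:
    # Step 1: generate many candidate futures (one frame each, ~1 s apiece).
    candidates = [generate_future_frame(start_frame, seed) for seed in range(n_futures)]
    # Step 2: pick the future you want (user choice, stubbed here as the first).
    chosen = candidates[0]
    # Step 3: interpolate a full video between the start and chosen end frame.
    return interpolate(start_frame, chosen)

video = explore_futures("frame0.png")
print(len(video), "frames, ending at", video[-1])
```

The point of the design is that step 1 costs one frame per future instead of one full video per future, so exploring 100 futures stays affordable.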
cocktail peanut @cocktailpeanut ·
How about image-to-futureImage instead of image-to-video? Rather than generating the full video, only generate a SINGLE frame N seconds into the future: the frame you would get if you had generated the video with img-to-video.
6 replies · 1 repost · 33 likes · 5.2K views
cocktail peanut @cocktailpeanut ·
@super3 I think it was 68 minutes on a 4090, but I'm uncertain if this is the correct number; trying again 😅
2 replies · 0 reposts · 1 like · 106 views
cocktail peanut @cocktailpeanut ·
What's the longest generated video you've seen with Mochi? How many seconds?
1 reply · 0 reposts · 7 likes · 3.6K views
Shawn Wilkinson retweeted
Prodia @prodialabs ·
🚨 New Model Drop 🚨 Mochi by @genmoai, the SOTA in open-source video generation, is now available to everyone on Prodia with a Pro+ or Enterprise subscription. Here are some samples of what you can generate; the first one is the classic prompt "astronaut riding a horse on mars, 4k".
2 replies · 2 reposts · 17 likes · 2.5K views
Shawn Wilkinson retweeted
Monty Anderson @monty10x ·
ZXX
0 replies · 2 reposts · 3 likes · 1.1K views
Shawn Wilkinson retweeted
Monty Anderson @monty10x ·
the easiest way to add generative video to your app
4 replies · 4 reposts · 14 likes · 1.8K views
Ajay Jain @ajayj_ ·
Run Mochi 1 with one 4090 GPU. No H100 needed. The official ComfyUI nodes for Mochi 1 are out, with fast and efficient community kernels.

ComfyUI @ComfyUI
You can now run Mochi from @genmoai natively in ComfyUI on a consumer GPU! ComfyUI now has optimized support for Genmo's latest model and can run it fast on a GPU like the 4090.

1 reply · 1 repost · 27 likes · 2.1K views
Shawn Wilkinson retweeted
Storj @storj ·
Today @storj announced Colby Winegar's promotion to CEO 👏 He previously served as CRO, driving partnerships, tech alliances, acquisitions & new customers that have transformed the organization. Full press release here: hubs.li/Q02WhgGm0 🌟🚀 #CEO #cloudleadership
4 replies · 10 reposts · 40 likes · 6.7K views
Shawn Wilkinson @super3 ·
@Kijaidesign Super helpful data point. Have you tried running it across multiple 4090s? Let me know if you need credits or access to test.
0 replies · 1 repost · 1 like · 169 views
Jukka Seppänen @Kijaidesign ·
@super3 Shorter clips are decently fast, around ~5 mins for 49 frames; 163 frames took 20 mins, which is quite slow, but not all speed optimizations are in use yet. Interestingly, the motion itself can be seen at as few as 4 steps, so one could potentially find a good prompt/seed a lot faster.
5 replies · 2 reposts · 22 likes · 6.5K views
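The 4-step observation suggests a cheap search loop: preview many seeds at very few diffusion steps, then rerun only the best candidate at full quality. A minimal sketch follows, assuming a hypothetical `sample_video(prompt, seed, steps)` sampler and a toy motion metric; neither is Mochi's actual API.

```python
# Sketch of the prompt/seed search the observation above enables: preview
# many seeds at 4 diffusion steps, then rerun only the winner at full quality.
# sample_video() and motion_score() are hypothetical stand-ins.

def sample_video(prompt: str, seed: int, steps: int) -> list[float]:
    # Placeholder: a real sampler would run `steps` denoising steps on the GPU.
    return [((seed * 2654435761 + i * steps) % 1000) / 1000 for i in range(8)]

def motion_score(frames: list[float]) -> float:
    # Placeholder metric: total frame-to-frame change as a proxy for motion.
    return sum(abs(b - a) for a, b in zip(frames, frames[1:]))

def find_best_seed(prompt: str, n_seeds: int = 32) -> int:
    # Cheap pass: 4 steps is enough to see whether the motion is promising.
    previews = {seed: motion_score(sample_video(prompt, seed, steps=4))
                for seed in range(n_seeds)}
    return max(previews, key=previews.get)

best = find_best_seed("astronaut riding a horse on mars, 4k")
final = sample_video("astronaut riding a horse on mars, 4k", best, steps=50)
print("best seed:", best, "final frames:", len(final))
```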
Jukka Seppänen @Kijaidesign ·
As a bit of a personal challenge, I decided to get the new Mochi (github.com/genmoai/models) text2video model running locally on a 4090; this was generated using under 20GB of VRAM in ComfyUI. It's not perfect, as I had to use tiling to be able to decode the whole thing. github.com/kijai/ComfyUI-…
42 replies · 87 reposts · 562 likes · 118.3K views
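The tiling mentioned here is a standard VRAM-saving trick: decode the latent video in overlapping chunks rather than all at once, then stitch the results. Below is a minimal sketch of the temporal variant, with `decode_tile` as a hypothetical stand-in for a real VAE decoder call.

```python
# Sketch of the VRAM-saving tiling trick mentioned above: decode the latent
# video in small overlapping chunks instead of all at once, then stitch them.
# decode_tile() is a hypothetical stand-in for a real VAE decoder.

def decode_tile(latents: list[int]) -> list[str]:
    # Placeholder: a real VAE would turn latent frames into pixel frames,
    # using memory proportional to len(latents) rather than the whole clip.
    return [f"pixels({z})" for z in latents]

def tiled_decode(latents: list[int], tile: int = 16, overlap: int = 4) -> list[str]:
    frames: list[str] = []
    step = tile - overlap
    for start in range(0, len(latents), step):
        chunk = decode_tile(latents[start:start + tile])
        # Drop the overlapping frames already emitted by the previous tile
        # (a real implementation would blend the overlap instead of discarding).
        frames.extend(chunk if start == 0 else chunk[overlap:])
        if start + tile >= len(latents):
            break
    return frames

latents = list(range(163))            # e.g. the 163-frame clip from the thread
print(len(tiled_decode(latents)))     # 163: full clip, smaller peak memory
```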
Shawn Wilkinson @super3 ·
@TopperDEL I just ran the base model code via command line. No great tools yet. Working on it.
0 replies · 0 reposts · 0 likes · 31 views
TopperDEL @TopperDEL ·
@super3 What tool helped you create that one specifically?
1 reply · 0 reposts · 0 likes · 44 views
Shawn Wilkinson @super3 ·
Open source video models are getting way better. Here is my first attempt at Will Smith eating spaghetti, made with Mochi.
1 reply · 0 reposts · 11 likes · 1.6K views