metahades

2.9K posts

@metafilmarchive

https://t.co/jC3FrBo9SN | Creating Definitive Collections. Preservation, Archivism, Restoration, Multi-Dubbing.

Joined July 2022
561 Following · 2.8K Followers
Pinned Tweet
metahades (@metafilmarchive)
WIP Stanley Kubrick · 2001: A Space Odyssey (1968) [RAW UHDTV 8K 4320p · Remastered 70mm Scan from the Original Camera Negative · Broadcast on NHK BS8K, Japan, 2025] Please read the entire post. RT!
metahades retweeted
vishnu (@maheshscofieldd)
#TheOdyssey 15/70 IMAX print
metahades (@metafilmarchive)
I haven't seen them, but in Russia they make a lot of DCPs from Blu-rays. If they manage to release legitimate DCPs, sooner or later they'll get mixed up with the fake ones (I know several Russians who do this).
Manuel Santander (@ElManuSantander):

I don't know if you guys knew, but the Russians are already pirating films as DCP (Digital Cinema Package) rips. Films nowadays are A SOFTWARE PACKAGE that runs in cinemas with a license key containing the audio language and the subtitles. For example:

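The quoted tweet's point, that a theatrical film is now a software package with metadata and a license key, can be made concrete with a rough sketch. A DCP is a folder of MXF media assets plus XML documents: the CPL (Composition Playlist) lists the reels, and a separate KDM carries the decryption keys for a given screening license. The XML below is a simplified, hypothetical CPL fragment (real CPLs use SMPTE namespaces and many more fields), so treat this as an illustration, not a spec-compliant parser.

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical CPL fragment; real CPLs follow SMPTE ST 429
# and use XML namespaces. Kept minimal for illustration only.
CPL_XML = """<CompositionPlaylist>
  <ContentTitleText>EXAMPLE-FILM_FTR_S_EN-ES</ContentTitleText>
  <ReelList>
    <Reel>
      <MainPicture><Id>urn:uuid:1111</Id></MainPicture>
      <MainSound><Id>urn:uuid:2222</Id></MainSound>
    </Reel>
  </ReelList>
</CompositionPlaylist>"""

def summarize_cpl(xml_text: str) -> dict:
    """Return the composition title and the asset IDs referenced by each reel."""
    root = ET.fromstring(xml_text)
    title = root.findtext("ContentTitleText")
    # Collect the Id of every picture/sound asset, in document order.
    assets = [elem.findtext("Id") for elem in root.iter()
              if elem.tag in ("MainPicture", "MainSound")]
    return {"title": title, "assets": assets}

print(summarize_cpl(CPL_XML))
```

Without the matching KDM, those asset IDs point at encrypted MXF files the projector cannot play, which is exactly why leaked DCP rips circulate only once someone obtains or bypasses the keys.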
metahades retweeted
Dexerto (@Dexerto)
Famed "Chicago Tape Guy," who recorded over 10,000 live concerts, is uploading them online for free. The collection spans four decades, with volunteers now digitizing and uploading the tapes to the internet.
metahades retweeted
El videoclub argento (@videoclubarg)
"The archive isn't just the value of the collection, it's also the sovereignty of being able to watch whatever films you please instead of depending on what the platforms offer you"
metahades retweeted
Arko | অর্ক (@films_and_life)
Tarr Bela (1955-2026) with a Peruvian man selling pirated DVDs of his films.
metahades retweeted
Inspirenaire (@Inspirenaire)
This clip shows how the original versions of TV shows have been shortened for syndication and streaming. But the bigger takeaway is that most people don't realize it, in an era where books are being banned and original materials are becoming harder to find. If we don't hold on to original copies of books, newspapers, magazines, vinyl, commercial VHS tapes, off-air VHS recordings, cassette tapes, DVDs, original news broadcasts, etc., we will watch history be edited, with holes where things are missing, yet presented as though it were the complete picture. This is just a small, light example of that concept.
metahades retweeted
吉野@連邦(renpou.com) (@yoshinokentarou)
News is circulating that one of only 50 rare "Tsukihime" floppy disks in the world was destroyed by customs. Similarly, in the past a "Pocket Monsters Pikachu" worth about ¥600,000 was destroyed by US customs and rendered worthless, so people are now discussing how to avoid this.
metahades retweeted
Kineko Video (@kinekovideo)
13 episodes. 16mm film scans. Native HD. How such a milestone anniversary SHOULD be celebrated. nyaa.si/view/2080128
Try ArtCraft (@get_artcraft)
Seedance 2.0 is now in the **OPEN SOURCE** ArtCraft. Yours to use today and forever. You can make videos just like this. Stop RENTING from Higgizfud who treats everybody like a little b and OWN ArtCraft today. Please help us REpoST so people see. (We love youuu!)
Soumya (@soumyattention)
What was fake? Is the platform fake? Are the social engagement or the payouts fake? Nothing is fake here. Either they have too much money they can't spend, or it's classic money laundering going on behind the scenes... that doesn't concern us. We are getting great tools, great platforms, at great prices.
Varunram Ganesh (@varunram)
The most successful thing they managed to do is convince a bunch of clueless investors to invest $80M two weeks before everyone figured out it was all fake. Truly a skill; maybe one of the best to ever do it.
Jim HuiHui (@Jinghui_Dong)
Seedance 2.0 Testing 02 “Static camera. Three women stand evenly spaced in the frame. Starting with the woman on the far left, she strikes a cool and slightly provocative pose, then returns to a neutral standing position. Next, the second woman performs a sweet and charming pose, then returns to the original stance. Finally, the third woman strikes a stylish and trendy pose, then returns to the original stance. The timing between each character’s pose is the same, with each action lasting approximately 5 seconds.” @dreamina_ai
metahades retweeted
OpenClaw🦞 (@openclaw)
🦞 Clawdbot 2026.1.23
🎤 /TTS goes core
❤️‍🔥 Controllable heartbeats
🔧 /tools/invoke API
🌐 Urbit channel
🐛 Mountain of fixes
A changelog so thicc I needed a snack break reading it. github.com/clawdbot/clawd… clawd.bot
metahades retweeted
ilker (@ailker)
As I promised yesterday, I'll briefly explain LoRA training and share a workflow I made so you can do it quickly.

First, let me answer a very common question: 'Why train LoRAs when we have such advanced models?' Even though we have incredibly advanced models now (like NBP), we still can't always get them to do specific things we want. Simplest example: the spritesheet LoRA I made the other day. I generated 1000 images with Nano Banana and only 100 were what I wanted. The LoRA I trained on those 100 images gives me nearly 100% consistent results. The second point is cost and speed: with a LoRA, we can cut costs by 4-5x while also generating 4-5x faster.

How many images do you need for a good LoRA? It depends on your LoRA's complexity. For example, when I was training the spritesheet LoRA, even though I used 100 images, I didn't include buildings in the training data, so that LoRA doesn't work for buildings. So think about your LoRA's use cases and add examples for as many of them as possible to improve quality.

What are paired images, and how do you train LoRAs for image editing? When training LoRAs for image editing on fal, each edit example is a pair of images: one with a '_start' suffix, one with an '_end' suffix. For example, if you're training a background-removal LoRA, the unedited original photo is your '_start' image and the image with the background removed is the '_end' image. Simply put: images we want to edit or use as reference get '_start'; target images we want to achieve get '_end'. Important: save both images with the same base name, like image332_start.jpg and image332_end.jpg, so the system knows which images pair together.

What about training LoRAs for models with multiple image inputs? Same logic. We still use _start and _end suffixes, with one difference: since there are multiple input images, we number them _start, _start1, _start2.

Example: start images, 1st image = woman portrait (image35_start.jpg), 2nd image = glasses photo (image35_start1.jpg), 3rd image = hat photo (image35_start2.jpg); output image = portrait of the woman wearing the glasses and hat (image35_end.jpg).

Can we do more detailed captioning? Yes. You can improve training quality by creating a txt file for each set with the caption inside. Example: create image35.txt and write: 'Recreate the image by putting the glasses from the second image and the hat from the third image on the woman in the first image.'

What are steps? How many should I use? What's the learning rate? Steps determine how many times the model sees and processes your training data (your images). Each step, the model learns a bit more, but as steps increase, so does the risk of overfitting. So there's no real default; for a simpler LoRA with 20 paired images, though, 1000 steps is ideal.

Here's a metaphor for the relationship between steps and learning rate. Imagine you have a balloon, and the goal is to inflate it to the optimal size. Steps = how many times we blow into the balloon; learning rate = how hard we blow each time. If we blow too softly, we need many more breaths. If we blow too hard, we risk popping it before reaching the optimal size. Of course training won't literally explode, but it won't work as intended because it wasn't trained optimally.

Training's done, now what? Once training completes, you'll have a safetensors file. Every model you train on fal has a LoRA inference endpoint; add your safetensors file link to the LoRA URL input of that endpoint, and you can use your LoRA.

Thanks for the read! The workflow in the video: fal.ai/workflows/ilke… If I forgot anything, let me know in the replies.
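The _start/_end naming convention from the thread lends itself to a quick sanity check before launching a training run, since one mis-named file silently breaks a pair. This is a minimal sketch of such a checker; the naming scheme (_start, _start1, _start2, _end, shared base name) comes from the thread above, while the regex and reporting are illustrative and not part of any fal tooling.

```python
import re
from collections import defaultdict

# Matches the pairing convention described in the thread:
# <base>_start.jpg, <base>_start1.jpg, ..., <base>_end.jpg
PAT = re.compile(r"^(?P<base>.+?)_(?P<role>start\d*|end)\.(jpg|jpeg|png)$")

def check_pairs(filenames):
    """Group files by base name; report bases missing a start or end image."""
    groups = defaultdict(set)
    for name in filenames:
        m = PAT.match(name)
        if m:
            groups[m.group("base")].add(m.group("role"))
    problems = []
    for base, roles in sorted(groups.items()):
        if "end" not in roles:
            problems.append(f"{base}: no _end (target) image")
        if not any(r.startswith("start") for r in roles):
            problems.append(f"{base}: no _start (input) image")
    return problems

# image35 has three inputs and a target; image332 lost its _end file.
files = ["image35_start.jpg", "image35_start1.jpg",
         "image35_start2.jpg", "image35_end.jpg", "image332_start.jpg"]
print(check_pairs(files))
```

In practice you would feed this the listing of your dataset folder (e.g. `os.listdir`) and refuse to upload until it returns an empty list.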
metahades retweeted
Nao Casanova (@NaoCasanova)
WIKIFLIX is an online platform that gathers films, documentaries, and videos in the public domain or under open licenses, often with a cultural, educational, or historical focus. 4,410 films. wikiflix.toolforge.org/#/
Miko (@Mho_23)
i found a way to make AI videos that don't sound like AI... example below...

sora 2, veo 3.1, kling 2.6 are decent, but they all have that robotic AI accent that everyone can detect immediately. and when people try replacing the voices with elevenlabs, it always sounds way too professional, like it's recorded on a professional mic in a studio. but if i'm making a UGC video of someone in their room, i don't want studio quality; it should sound like it's actually in that room. the workflow i use completely removes that fake AI voice sound.

why this matters: when your AI voices sound too polished or robotic, people notice. their guard goes up. they disengage. but when your voices sound like they're coming from a real person in a real room, that's when they work best.

i created a guide breaking down exactly how:
> to create voiceovers that sound completely real
> to replace fake-sounding voices in AI videos (not elevenlabs)
> to make voices sound like they're in the actual environment, not a studio

RT + reply "VOICE" and i'll send you the FULL guide (must follow so i can DM)
metahades (@metafilmarchive)
@mrjoyenxr0 the comments are full of bots, dead internet, lmao