Sam Gelish

80.3K posts


@Rubab59f

⚡ Exploring AI tools & future tech 🚀 Sharing tips, trends & growth hacks 🧠 Learn • Build • Grow 📩 DM for collab & inquiries

London, England · Joined January 2024
9.9K Following · 11.1K Followers
Ryan Blake
Ryan Blake@Ryan_blake_ai·
SenseNova U1 Lite Series isn't just a better image generator. It understands spatial intelligence and layout logic in a way that makes it function more like a designer than a painter. Check the repo: github.com/OpenSenseNova/…
SenseTime@SenseTime_AI

SenseNova U1 Lite Series is now open source! Built on the NEO-unify architecture, it natively unifies multimodal understanding and generation, delivering:
• SOTA Efficiency Among Open-Source Models: Compact models (8B & A3B) delivering commercial-grade performance and exceptional cost efficiency. Leading performance among open-source models across a wide range of understanding, reasoning, and generation benchmarks.
• Native Image–Text Interleaved Generation: Generate coherent interleaved text and images in a single flow using one model; ideal for practical applications like guides, where visuals turn complex information into intuitive insights.
• High-Density Information Rendering: Strong capabilities in dense visual communication, generating richly structured layouts for knowledge illustrations, posters, PPTs, comics and other information-rich formats.
Hugging Face: huggingface.co/collections/se…
GitHub: github.com/OpenSenseNova/…
Discord: discord.gg/cxkwXWjp
@huggingface @github

Sam Gelish
Sam Gelish@Rubab59f·
"GPT-Image 2 feels like a painter SenseNova U1 feels closer to a designer working with structured information That’s the gap here Worth a look and a star github.com/OpenSenseNova/…
SenseTime@SenseTime_AI


Tanjina Islam
Tanjina Islam@Tanju_mim·
This Hachiko clip simply hits harder than most AI content here. It's the small, human details that carry it. A 7-minute film created in 3 days using Agent One, and the unedited 41-minute tutorial is the proof.
Invideo@invideoOfficial

The "look what AI did" reels skip the part that matters: how it was directed. Vishal Balsara, our Creative Director, built a 7-min Hachiko short in 3 days on Agent One and recorded the full 41-minute tutorial. Context, treatment, shot-by-shot. Film below. Full tutorial in the next tweet👇

Alexander James
Alexander James@siralexanderj·
This Hachiko sequence hits differently. Created entirely with invideo Agent One from start to finish, just 3 days from idea to final cut. If it sounds unreal, the 41-minute raw tutorial proves every step.
Invideo@invideoOfficial

The "look what AI did" reels skip the part that matters: how it was directed. Vishal Balsara, our Creative Director, built a 7-min Hachiko short in 3 days on Agent One and recorded the full 41-minute tutorial. Context, treatment, shot-by-shot. Film below. Full tutorial in the next tweet👇

christopher_Ai
christopher_Ai@SphereSuc32514·
This Hachiko story is genuinely powerful. Directed fully with invideo Agent One, only 3 days to go from concept to completion. The 41-minute tutorial breaks down every single decision.
Invideo@invideoOfficial

The "look what AI did" reels skip the part that matters: how it was directed. Vishal Balsara, our Creative Director, built a 7-min Hachiko short in 3 days on Agent One and recorded the full 41-minute tutorial. Context, treatment, shot-by-shot. Film below. Full tutorial in the next tweet👇

Alexander Jackson
Alexander Jackson@AI_Workflow_Hub·
One of the most moving Hachiko sequences you’ll see. Made entirely with invideo Agent One. 3 days. Start to finish. The full unedited tutorial reveals everything behind the scenes.
Invideo@invideoOfficial

The "look what AI did" reels skip the part that matters: how it was directed. Vishal Balsara, our Creative Director, built a 7-min Hachiko short in 3 days on Agent One and recorded the full 41-minute tutorial. Context, treatment, shot-by-shot. Film below. Full tutorial in the next tweet👇

Ryan Blake
Ryan Blake@Ryan_blake_ai·
This Hachiko sequence feels real and emotional. Created using invideo Agent One in just 3 days. Every shot and fix is captured in the full tutorial. Nothing hidden. Just pure process.
Invideo@invideoOfficial

The "look what AI did" reels skip the part that matters: how it was directed. Vishal Balsara, our Creative Director, built a 7-min Hachiko short in 3 days on Agent One and recorded the full 41-minute tutorial. Context, treatment, shot-by-shot. Film below. Full tutorial in the next tweet👇

Sam Gelish
Sam Gelish@Rubab59f·
A deeply emotional Hachiko sequence. Built using invideo Agent One in just 3 days. From setup to final edit, everything is documented. The full uncut tutorial shows the real process behind it.
Invideo@invideoOfficial

The "look what AI did" reels skip the part that matters: how it was directed. Vishal Balsara, our Creative Director, built a 7-min Hachiko short in 3 days on Agent One and recorded the full 41-minute tutorial. Context, treatment, shot-by-shot. Film below. Full tutorial in the next tweet👇

Sam Gelish reposted
christopher_Ai
christopher_Ai@SphereSuc32514·
Most multimodal setups generate text and images separately, then stitch them together afterward. SenseNova U1 Lite Series generates them together. The difference in coherence is real. Star the repo: github.com/OpenSenseNova/…
SenseTime@SenseTime_AI


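The stitched-vs-native distinction these posts lean on can be illustrated with a toy sketch. This is pure stand-in code, not SenseNova's actual API: `stitched_pipeline` and `native_interleaved` are hypothetical, and the string tags only mark which context each generation step could see.

```python
# Toy illustration of "stitched" vs. "native interleaved" generation.
# All functions are hypothetical stand-ins; SenseNova's real API may differ.

def stitched_pipeline(prompt: str) -> list[str]:
    # Two separate models: the image model never sees the generated text,
    # so its output can drift from the surrounding prose.
    text = f"text({prompt})"      # stand-in for a text-model call
    image = f"image({prompt})"    # stand-in for a separate image-model call
    return [text, image]

def native_interleaved(prompt: str) -> list[str]:
    # One model, one decode loop: each segment conditions on everything
    # generated so far, which is what keeps text and images coherent.
    context = prompt
    segments = []
    for kind in ("text", "image", "text"):
        seg = f"{kind}({context})"        # stand-in for one shared decoder
        segments.append(seg)
        context = context + " | " + seg   # shared state carries forward
    return segments

stitched = stitched_pipeline("how to brew pour-over coffee")
unified = native_interleaved("how to brew pour-over coffee")
# Stitched: the image step saw only the original prompt.
print("text(" in stitched[1])   # False
# Unified: the image step also saw the text generated before it.
print("text(" in unified[1])    # True
```

The point of the sketch is only the flow of context: in the stitched case the image call is conditioned on the prompt alone, while in the interleaved case every segment is conditioned on the full history, which is the coherence difference the post describes.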
Alexander Jackson
Alexander Jackson@AI_Workflow_Hub·
I've been approximating native interleaved generation by stitching models together for two years. SenseNova U1 Lite Series just made that unnecessary. Give it a star: github.com/OpenSenseNova/…
SenseTime@SenseTime_AI


Sam Gelish
Sam Gelish@Rubab59f·
AI ENGINEERING FROM SCRATCH
> 416 Lessons
> 20+ Chapters
> In Python, Julia, Rust, TypeScript
5,000 GitHub stars · Completely open source · 100% #AI_generated