Sam Gelish
@Rubab59f
⚡ Exploring AI tools & future tech. Sharing tips, trends & growth hacks. Learn • Build • Grow. DM for collab & inquiries

SenseNova U1 Lite Series is now open source! Built on the NEO-Unify architecture, it natively unifies multimodal understanding and generation, delivering:
• SOTA Efficiency Among Open-Source Models: Compact models (8B & A3B) delivering commercial-grade performance and exceptional cost efficiency. Leading performance among open-source models across a wide range of understanding, reasoning, and generation benchmarks.
• Native Image-Text Interleaved Generation: Generate coherent interleaved text and images in a single flow using one model; ideal for practical applications like guides, where visuals turn complex information into intuitive insights.
• High-Density Information Rendering: Strong capabilities in dense visual communication, generating richly structured layouts for knowledge illustrations, posters, PPTs, comics and other information-rich formats.
Hugging Face: huggingface.co/collections/se…
GitHub: github.com/OpenSenseNova/…
Discord: discord.gg/cxkwXWjp @huggingface @github
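
For anyone who wants to try the weights, a minimal loading sketch in Python using the generic Hugging Face transformers flow might look like the following. The repo id is a placeholder, and the exact model classes, prompt format, and any image-output API are assumptions; the model cards in the collection linked above are the authoritative reference.

# Hypothetical sketch: text-only smoke test of an open-weight SenseNova checkpoint.
# The repo id is a placeholder; pick a real one from the Hugging Face collection above.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "sensenova/placeholder-8b"  # assumption, not a real repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True, device_map="auto")

prompt = "Write a short illustrated guide to making pour-over coffee."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))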

The "look what AI did" reels skip the part that matters: how it was directed. Vishal Balsara, our Creative Director, built a 7-min Hachiko short in 3 days on Agent One and recorded the full 41-minute tutorial. Context, treatment, shot-by-shot. Film below. Full tutorial in the next tweet๐

The "look what AI did" reels skip the part that matters: how it was directed. Vishal Balsara, our Creative Director, built a 7-min Hachiko short in 3 days on Agent One and recorded the full 41-minute tutorial. Context, treatment, shot-by-shot. Film below. Full tutorial in the next tweet๐

The "look what AI did" reels skip the part that matters: how it was directed. Vishal Balsara, our Creative Director, built a 7-min Hachiko short in 3 days on Agent One and recorded the full 41-minute tutorial. Context, treatment, shot-by-shot. Film below. Full tutorial in the next tweet๐

The "look what AI did" reels skip the part that matters: how it was directed. Vishal Balsara, our Creative Director, built a 7-min Hachiko short in 3 days on Agent One and recorded the full 41-minute tutorial. Context, treatment, shot-by-shot. Film below. Full tutorial in the next tweet๐

The "look what AI did" reels skip the part that matters: how it was directed. Vishal Balsara, our Creative Director, built a 7-min Hachiko short in 3 days on Agent One and recorded the full 41-minute tutorial. Context, treatment, shot-by-shot. Film below. Full tutorial in the next tweet๐

The "look what AI did" reels skip the part that matters: how it was directed. Vishal Balsara, our Creative Director, built a 7-min Hachiko short in 3 days on Agent One and recorded the full 41-minute tutorial. Context, treatment, shot-by-shot. Film below. Full tutorial in the next tweet๐

The future belongs to proactive agents. But without real-time perception, they're stuck reacting. "World2Agent" isn't a product. It's an open protocol and an invitation to build the perception layer for AI agents, together. We're open-sourcing everything: the protocol, the SDK, and the first sensors. GITHUB + DEMO in comments.
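
To make "perception layer" concrete, here is a purely illustrative sketch in Python of a sensor pushing a timestamped observation to an agent endpoint. This is not the actual World2Agent protocol or SDK (the GitHub link in the comments has the real spec); every field name and the URL below are assumptions.

# Illustrative only: NOT the World2Agent schema. All field names and the endpoint
# URL are assumptions; the real protocol and SDK live in the linked GitHub repo.
import json
import time
import urllib.request

event = {
    "sensor_id": "webcam-01",          # which sensor produced the observation
    "timestamp": time.time(),          # when it was observed (unix seconds)
    "modality": "vision",              # kind of signal
    "observation": {"labels": ["person", "open door"], "confidence": [0.92, 0.81]},
}

request = urllib.request.Request(
    "http://localhost:8080/events",    # assumed local agent endpoint
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status)             # agent acknowledges the event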

Students now get 50% off Typeless Pro with a school email. Typeless is not just voice-to-text. It turns your messy thoughts into clear, polished writing wherever you write. Notes, emails, essays, applications, messages - all by speaking. The keyboard is a bottleneck. Typeless is the way out.
