
Heba AI
@SubarcticRec
Generative AI specialist / Motion Graphics. Working on AI-first projects, XR games, documentaries, and films. ComfyUI, After Effects, Resolve, Ableton, Antigravity.
Finland · Joined March 2016
663 Following · 831 Followers

@francolli @BrentLynch The $8 Imagine tier was quite good value, giving something like 1,000 videos a month for just $8. The new model, not so much.

@SubarcticRec @BrentLynch I get that, but the choice is: free with Premium+ on a site that is opaque and runs on the whims of Nikita, for $50 per month, or standalone without Premium+ at $30 per month, for a video tool whose lip sync isn't close to other models.
Mmmm, choices, choices.
I'll take my chances with other models.

😲GASP!
The paywalls are going up.
You'll now need a SUPERGROK
or PREMIUM+ ACCOUNT (includes SUPERGROK)
for GROK IMAGINE App Usage.
At the moment you can still create low resolution images inside of Grok on X with a Premium account.
While I'm a fan of Grok Imagine, until they roll out all the features available via the API (key among them 15-second clips), I will most likely use Grok Imagine via the API.


@francolli @BrentLynch It's less censored than other closed models.

@BrentLynch Nah, I'm done with Grok Imagine.
It's like sitting in a room filled with Lego Technic sets when all you're allowed to use is a bucket of Duplo bricks.
For $30 a month.
It's shit.

Graphic designers, look away.
- Adobe just dropped a new AI model.
- It generates fully editable layered files from simple text.
- No more flat AI images.
- Real RGBA layers with perfect transparency.
- It even rips flat JPEGs back into separated layers.
- The architecture is honestly beautiful.
And... the open-source GitHub link?
- no, it's Adobe. Enjoy reading the PDF.
arxiv.org/pdf/2603.17965
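To make "real RGBA layers with perfect transparency" concrete, here is a minimal sketch of Porter-Duff "over" compositing in plain Python. This is a generic illustration of what layered output means versus a flat export; it has no connection to Adobe's actual model, and all names here are made up for the example.

```python
def over(src, dst):
    """Porter-Duff 'over': composite the src pixel on top of the dst pixel.
    Each pixel is an (r, g, b, a) tuple with channels in 0..1."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    out_a = sa + da * (1 - sa)
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda s, d: (s * sa + d * da * (1 - sa)) / out_a
    return (blend(sr, dr), blend(sg, dg), blend(sb, db), out_a)

# A flat JPEG bakes this result in; layered output keeps both inputs editable.
background = (1.0, 0.0, 0.0, 1.0)   # opaque red layer
overlay    = (0.0, 0.0, 1.0, 0.5)   # half-transparent blue layer
flat = over(overlay, background)
print(flat)  # (0.5, 0.0, 0.5, 1.0) -- the blended purple a flat export would store
```

The point of layered generation (and of decomposing a flat JPEG back into layers) is that the two input tuples stay separately editable instead of being collapsed into `flat`.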



@OfficialLoganK Maybe try to fix it so that Gemini 3.1 Pro would not still be so outdated when used in AI Studio.


@SubarcticRec Haha, I wish it were true, but I think it does have a lot of power and I can automate a lot of things right now.

@AvinashAila @ziwenxu_ OC works with the OpenAI subscription OAuth and can then use the Gemini CLI via Google OAuth. No API tokens needed. The 5.4 Pro sub is very generous via OAuth.

@ziwenxu_ They are not working with subscription OAuth. Did it work for you?

Most of you missed this: OpenClaw has a hidden ACP Agent.
It lets OpenClaw tap into Claude Code, Codex, OpenCode, and the Gemini CLI without burning tokens on endless back-and-forth, just like running Claude Code natively.
Drop this config into your OpenClaw and watch it unlock:
"acpx": {
  "enabled": true,
  "dispatch": { "enabled": true },
  "backend": "acpx",
  "defaultAgent": "claude",
  "allowAgents": ["claude", "codex", "opencode"],
  "maxConcurrentSessions": 8,
  "config": {
    "permissionMode": "approve-all",
    "nonInteractivePermissions": "fail"
  }
}

@SlipperyGem Yep, I now have two Comfy installs: one to run the new FP4 models and another for the rest. But Nvidia's latest things require the latest drivers, and it's always a risk to update. Some driver versions start to crash the computer.

@SubarcticRec Aye indeed.
Every time you click update, it could be disastrous.
In fact, ever since the recent update, a lot of sub-graphs from the default workflows no longer work for me.

For the Nvidia GPU chads out there: you can give this Comfy node a go for RTX Video Super Resolution upscaling.
It's much faster than SeedVR2 and good for videos, but SeedVR2 is still the king of creative upscaling, especially of lower-res images.
github.com/Comfy-Org/Nvid…

"We ship faster than they can clone."
😎
OpenClaw🦞@openclaw
OpenClaw 2026.3.11 🦞 🏹 Hunter & 🩹Healer Alpha — free 1M context models via @OpenRouter 🧠 GPT 5.4 stops stopping mid-thought 💎 Gemini Embedding 2 for memory 💻 OpenCode Go support 🔒 Security hardening sprint We ship faster than they can clone. github.com/openclaw/openc…

You thought generating code was crazy? Welcome to the next level of vibecoding.
Meta just dropped the blueprint for real autonomous coding agents.
> AI doesn't just write your code anymore
> It *is* the debugger
> Step into, step over, set breakpoints
> Zero Python runtime actually needed
> It simulates the entire execution state
> It even time travels backwards to find the inputs
Literally a world model for code execution
arxiv.org/abs/2603.09951
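For contrast with the paper's simulated approach, here is what a conventional debugger-style view of execution looks like in plain CPython, using `sys.settrace` to observe the program line by line. This is just a baseline sketch of the "execution state" such an agent would have to model; nothing here involves Meta's system, and `demo` is a made-up example function.

```python
import sys

trace = []

def tracer(frame, event, arg):
    if event == "line":
        # Record (function name, line number, local vars) at each step:
        # the per-line execution state a code world model would predict.
        trace.append((frame.f_code.co_name, frame.f_lineno, dict(frame.f_locals)))
    return tracer  # keep tracing inside this frame

def demo(n):
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(tracer)
result = demo(3)
sys.settrace(None)

print(result)          # 3  (0 + 1 + 2)
print(len(trace) > 0)  # True: every executed line in demo was observed
```

The key difference claimed by the paper is that the model predicts those per-line states (including stepping backwards) without any Python runtime actually executing the code.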

