Spencer Sterling (@cerspense)
Artist and researcher building autonomous creative systems. Founder, Out of Distribution Labs
Oakland, CA · Joined August 2022
484 Following · 5K Followers
236 posts
Spencer Sterling retweeted
Julien | MJM (@JulienAIArt):
@cerspense has been developing a tool (Sentinel) that allows live diffusion. I have been playing with it in conjunction with TouchDesigner (for audio reactivity) and my Teenage Engineering OP-XY (for additional effects trigger). Being able to make animations on the fly based on a song feels like playing an instrument that makes visuals instead of sounds. It is awesome! This has been made possible thanks to my Dell Pro Max T2 tower equipped with an NVIDIA RTX PRO GPU (you can check out the spec here: lnkd.in/gQ-S9cF3). #DellProPrecision #DellTech #NVIDIA
Spencer Sterling (@cerspense):
Here's what the graph looked like at the end
[image]
Spencer Sterling (@cerspense):
This is what directing an AI creative studio looks like. I gave my orchestration system a reference image and had it deploy three different interpretations across three machines running Blender simultaneously, all visible from a single dashboard. When one direction looks promising, I branch from it. The system copies the files to the other machines and keeps exploring from that new starting point. Branch, direct, branch again. Each split opens up a new creative path while I give notes in real time. The number of creative directions I can explore simultaneously is only limited by how many computers I have.
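For a sense of the mechanics, here is a rough Python sketch of that branch-and-distribute loop. Everything in it (the Worker layout, the scp/ssh transport, the explore_variation.py script) is an assumption for illustration, not the actual orchestrator.

```python
"""Hypothetical sketch of the branch-and-distribute loop described above.
Assumptions (not from the original post): each worker is reachable over SSH,
shares a project directory layout, and runs Blender headlessly on whatever
.blend file the orchestrator copies over."""
from dataclasses import dataclass
import shlex
import subprocess

@dataclass
class Worker:
    host: str         # e.g. "worker-01" (hypothetical)
    project_dir: str  # where .blend files and assets live on that machine

def push_branch(src: Worker, blend_file: str, workers: list[Worker]) -> None:
    """Copy the promising .blend file from one worker to all the others,
    so every machine keeps exploring from the new starting point."""
    for w in workers:
        if w.host == src.host:
            continue
        subprocess.run(
            ["scp", f"{src.host}:{blend_file}", f"{w.host}:{w.project_dir}/"],
            check=True,
        )

def explore(worker: Worker, blend_file: str, notes: str) -> None:
    """Kick off one headless Blender exploration run on a worker.
    The variation script itself is out of scope for this sketch."""
    subprocess.run(
        ["ssh", worker.host,
         "blender", "--background", blend_file,
         "--python", f"{worker.project_dir}/explore_variation.py",
         "--", shlex.quote(notes)],
        check=True,
    )

# Usage sketch: three machines, branch from whichever direction looks best.
workers = [Worker("worker-01", "/srv/studio"),
           Worker("worker-02", "/srv/studio"),
           Worker("worker-03", "/srv/studio")]
best = workers[1]  # the direction that looked promising
push_branch(best, "/srv/studio/branch_b.blend", workers)
for w in workers:
    explore(w, "/srv/studio/branch_b.blend", "push the palette warmer, keep the silhouette")
```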
ComfyUI (@ComfyUI):
Most GenAI workflows were built for screens. @MomentFactory needed one built for buildings. They used ComfyUI to pre-map outputs to architectural surfaces with spatial logic enforced from day one. As a result:
✦ Concept work moved from days to hours
✦ Over 20 directions explored with 20–40 iterations each
✦ One artist operating the system, with two others focused on direction and alignment
✦ Upscaling to 18K in ~20 minutes post-approval
This is when generative AI stopped being a black box and started behaving like controllable, spatially-aware material. Full case study here -> bit.ly/4r06X3A
Spencer Sterling (@cerspense):
A few weeks ago I gave Opus 4.6 a pack of stage assets in Unreal Engine, had it analyze all the example levels, then told it to build a stage. It assembled the whole thing, checked its own work from multiple camera angles, and when I gave it notes (trusses too short, stairs clipping through a crowd barrier) it fixed everything autonomously. It's gotten a lot better since then
Spencer Sterling (@cerspense):
@LinusEkenstam
1. Used my Claude Code sub for this
2. Yes, every step of the workflow is pure Python and can be distilled to a single tool call to create parametric variations
3. Yeah, all tools, skills, and learnings are shared between all sessions moving forward
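Point 2 is the key idea: once a workflow is pure Python, variations are just arguments. Below is a minimal sketch of what such a single parametric tool call could look like using Blender's bpy API; the function name and parameters are invented for illustration, not the tool the system actually generated.

```python
# Minimal sketch of a "single tool call" that produces parametric donut
# variations inside Blender. Hypothetical; run with:
#   blender --background --python make_donut.py
import bpy

def make_donut(major_radius: float = 1.0,
               minor_radius: float = 0.35,
               icing_color=(0.9, 0.4, 0.6, 1.0)) -> bpy.types.Object:
    """Create one donut variation. Every knob learned from the tutorial
    becomes a parameter, so variations are just different arguments."""
    bpy.ops.mesh.primitive_torus_add(major_radius=major_radius,
                                     minor_radius=minor_radius,
                                     major_segments=48,
                                     minor_segments=16)
    donut = bpy.context.active_object

    # Smooth it out with a subdivision modifier, as the tutorial does.
    subsurf = donut.modifiers.new(name="Subsurf", type='SUBSURF')
    subsurf.levels = 2

    # Very simplified "icing": a single colored material on the whole torus.
    mat = bpy.data.materials.new(name="Icing")
    mat.diffuse_color = icing_color
    donut.data.materials.append(mat)
    return donut

# Parametric variations: same call, different arguments.
for i, minor in enumerate((0.25, 0.35, 0.45)):
    d = make_donut(minor_radius=minor)
    d.location.x = i * 3.0
```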
Linus ✦ Ekenstam (@LinusEkenstam):
A few questions:
1. How much did this copy/paste cost?
2. If you ask it to make a donut again, will it make one instantly?
3. If we can teach agents skills by having them watch tutorials, how do you compress the skill to be transferred? Feels extremely wasteful to do this more than once.
Cool work, nice demonstration.
Spencer Sterling (@cerspense):
I built an agentic system that taught itself the Blender donut tutorial by watching it on YouTube. It watched the tutorials, extracted the steps, filled in the gaps in its own tooling, and completed the entire thing autonomously.
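The post doesn't show the pipeline itself, but the "watch the tutorial, extract the steps" stage can be pictured roughly as below. The transcript file format, the TutorialStep shape, and the keyword heuristic standing in for the real model call are all assumptions for illustration.

```python
"""Illustrative sketch of the "watch the tutorial, extract the steps" stage.
Assumptions: the tutorial transcript has already been saved to a local text
file (one caption per line, prefixed with a timestamp). None of this is the
author's actual code."""
from dataclasses import dataclass

@dataclass
class TutorialStep:
    timestamp: str      # e.g. "03:12"
    instruction: str    # e.g. "add a torus and shade smooth"
    done: bool = False

def load_transcript(path: str) -> list[tuple[str, str]]:
    """Parse 'MM:SS caption text' lines into (timestamp, text) pairs."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            ts, _, text = line.strip().partition(" ")
            if text:
                pairs.append((ts, text))
    return pairs

def extract_steps(transcript: list[tuple[str, str]]) -> list[TutorialStep]:
    """Turn raw captions into a checklist of concrete actions.
    A keyword heuristic stands in here for the model call that would
    actually summarize each chunk into an executable Blender step."""
    steps = []
    for ts, text in transcript:
        if any(verb in text.lower() for verb in ("add", "select", "scale", "apply", "render")):
            steps.append(TutorialStep(timestamp=ts, instruction=text))
    return steps

steps = extract_steps(load_transcript("donut_tutorial_transcript.txt"))
for step in steps:
    print(f"[{step.timestamp}] {step.instruction}")
```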
Spencer Sterling (@cerspense):
@InternalDamn @em0tionull Yeah totally. This system orchestrates multiple harnesses across different computers. The MCP tools it developed for itself work in a single harness just fine. Just a lot slower to develop/create with only one harness at a time
Internal Damnation (@InternalDamn):
@cerspense @em0tionull you can use any agentic harness to achieve the same results though: Claude Code, opencode, Codex, etc. Still pretty cool you built this, but it doesn't need to be built
Spencer Sterling (@cerspense):
@2abstract4me Each worker computer has Blender, Unreal, ComfyUI, and Windows MCPs at the moment, all of them custom. The orchestrator is able to spawn Claude Code instances on these worker computers with access to all of these.
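As a hypothetical sketch of that orchestrator-to-worker fan-out: one non-interactive agent session per machine, launched over SSH. The hostnames, the worker-to-MCP mapping, and the exact CLI invocation are assumptions for illustration only, not the real setup.

```python
"""Hypothetical orchestrator-side sketch of spawning agent sessions on worker
machines. Assumes each worker is reachable over SSH, has Claude Code installed,
and has its Blender / Unreal / ComfyUI / Windows MCP servers registered locally.
Command names and hostnames are illustrative."""
import shlex
import subprocess

WORKERS = {
    "worker-01": ["blender", "comfyui", "windows"],
    "worker-02": ["unreal", "windows"],
    "worker-03": ["blender", "unreal", "comfyui", "windows"],
}

def spawn_session(host: str, task: str) -> subprocess.Popen:
    """Start one non-interactive agent run on a worker and return the handle
    so the orchestrator can monitor many sessions in parallel."""
    return subprocess.Popen(
        ["ssh", host, "claude", "-p", shlex.quote(task)],  # assumed CLI shape
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )

# Fan the same brief out to every worker and collect results as they finish.
sessions = {host: spawn_session(host, "Build variation 3 of the stage scene and render a preview")
            for host in WORKERS}
for host, proc in sessions.items():
    out, err = proc.communicate()
    print(f"--- {host} ---\n{out or err}")
```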
anil (@2abstract4me):
@cerspense Is the computer use through the Blender MCP, or is it more generic?
Spencer Sterling (@cerspense):
@amaratatva Yeah! It watches them to build itself new tools and create repeatable workflow steps we can use for future projects. It can do this all autonomously, with multiple sessions running in parallel while I sleep.
Amara (@amaratatva):
@cerspense Can you tell me why it had to "watch" a YouTube tutorial when its LLM would have already ingested Blender tutorials in its training data and would know right away how to make a donut? What was missing that it learnt from a YT video?
Shannon Potter (@cifilter):
@cerspense Stupid question, but what is the workflow visualizer setup at the bottom?
Spencer Sterling (@cerspense):
@shaneswrld_ Pretty much! Properly guiding it through that transcript and giving it the right tools to keep itself on track is the hard part
shane (@shaneswrld_):
@cerspense it just read the YouTube video transcript and followed directions, but yeah, still cool
Spencer Sterling (@cerspense):
@shadowdefense Yeah, the orchestrator has an MCP of its own as well as the Blender MCP directly. Absolutely possible to be guiding its creation in real time as it's building.
Shadow Defense (@shadowdefense):
@cerspense can you make the agent a two-way audio chatbot AI with an API from Blender, or preferably Unity, to make the things we ask for?
Anthropic (@AnthropicAI):
We're proud to support @LACMA's Art + Technology Lab—a program that empowers artists to prototype ideas at the edges of art, science, and emerging technology. The 2026 call for proposals is open to artists worldwide. Grants up to $50K. Apply by Apr 22: lacma.org/art/lab/grants
Spencer Sterling (@cerspense):
@dominicditanna 1 hour. Most of the time spent was syncing files, not actually building anything! Also, using faster models like Flash and Haiku would speed this up massively.
Spencer Sterling (@cerspense):
@em0tionull It built its own Blender MCP and runs in an agentic loop, improving its techniques and tools autonomously.
emotionull (@em0tionull):
@cerspense This is just Blender MCP, I don't see how this is agentic.
Huang I Lan (@Huang_I_Lan):
@cerspense Does it work with other Blender tutorials? Please let us know if you tested it 😱
Maulik Maanche (@MaulikShakya8):
@cerspense Which API did you use and how much did it spend in the process?
Spencer Sterling (@cerspense):
@michaelgold Yeah, it uses both visual evaluation and programmatic evaluation at different steps! Screenshots are also extracted at different points in the tutorial for reference.
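In the spirit of that reply, a per-step check might combine a cheap structural assertion with a rendered snapshot for a visual judge, roughly like the sketch below. The expected-object check, render path, and visually_matches placeholder are invented for illustration, not the author's actual evaluator.

```python
"""Illustrative per-step check combining a programmatic assertion with a
visual one. Runs inside Blender's Python (bpy); the reference screenshots
and the visual judge are placeholders."""
import bpy

def programmatic_check(expected_objects: set[str]) -> bool:
    """Cheap structural check: did the step actually create what it should?"""
    names = {obj.name for obj in bpy.context.scene.objects}
    return expected_objects.issubset(names)

def render_snapshot(path: str) -> str:
    """Render the current camera view to disk so a vision-capable model can
    compare it against the screenshot extracted from the tutorial."""
    bpy.context.scene.render.filepath = path
    bpy.ops.render.render(write_still=True)
    return path

def visually_matches(rendered: str, reference: str) -> bool:
    """Placeholder for the visual judge (e.g. an image-understanding model
    comparing the render to a frame grabbed from the tutorial)."""
    raise NotImplementedError

def step_completed(expected_objects: set[str], reference_frame: str) -> bool:
    if not programmatic_check(expected_objects):
        return False
    snapshot = render_snapshot("/tmp/step_check.png")
    return visually_matches(snapshot, reference_frame)
```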
Michael Gold (@michaelgold):
@cerspense What was the feedback loop that it needed to understand when a step was completed successfully?