
🚨BREAKING: The 177,000 signature threshold has now been passed, officially clearing the requirement for an Alberta independence referendum on October 19th. This is a historic moment for Alberta and signature collection is still continuing.




Title suggestions?

In our research lab, we are building "real-time dreaming": the ability to generate fully playable video worlds prompted from any text or image. Our real-time, action-conditioned world model (currently running internally at 16 fps at 832x480) is trained on a combination of data, including proprietary Roblox 3D avatar/world interaction data.

World models differ from multiplayer engines in that they store state and memory in video latents. Roblox is multiplayer, and we are actively researching optimal ways to simultaneously store state for thousands of players and keep them in sync with their environment. Our world model leverages database technology that stores all user interactions on Roblox in a vector format, which can be used to re-render video and interaction from any camera angle.

We see several immediate uses for our Roblox world model. We will use it side by side with text, image, and video prompts as a way to launch auto-generation of immersive worlds. In Roblox Studio, a creator could walk around and use prompts to "paint" a world, then convert it into a 3D representation, or go direct to native Roblox, so that many people can play simultaneously. All of this comes alive as we explore the notion of a "Dream Theater," where one user is dreaming while others watch and prompt them. 2/4
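The thread's core claim is that a world model keeps state and memory in a latent rather than in an explicit multiplayer engine, advancing it one tick per player action and decoding a frame. The sketch below is a minimal, hypothetical illustration of that loop, not Roblox's system: the class, method names, latent size, and "learned" weights are all invented stand-ins; only the 16 fps and 832x480 figures come from the post.

```python
# Illustrative sketch of an action-conditioned world-model tick loop.
# NOT Roblox's implementation: LatentWorldModel, step, decode_frame and all
# dimensions are assumptions made for this example.

import numpy as np

LATENT_DIM = 512             # assumed size of the video latent that carries state/memory
FRAME_SHAPE = (480, 832, 3)  # matches the quoted 832x480 internal resolution


class LatentWorldModel:
    def __init__(self, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Random stand-ins for learned weights; a real model would load trained parameters.
        self.transition = rng.standard_normal((LATENT_DIM, LATENT_DIM)) * 0.01
        self.action_proj = rng.standard_normal((4, LATENT_DIM)) * 0.01
        self.latent = np.zeros(LATENT_DIM)  # world state lives here, not in an engine

    def step(self, action: np.ndarray) -> np.ndarray:
        """Advance the world one tick, conditioned on a player action; return a frame."""
        self.latent = np.tanh(self.latent @ self.transition + action @ self.action_proj)
        return self.decode_frame()

    def decode_frame(self) -> np.ndarray:
        """Toy decoder: map the latent to an RGB frame (a real decoder is a neural net)."""
        pixels = np.resize(self.latent, FRAME_SHAPE).astype(np.float32)
        return ((np.tanh(pixels) + 1.0) * 127.5).astype(np.uint8)


if __name__ == "__main__":
    model = LatentWorldModel()
    # 16 ticks ~ one second at the quoted 16 fps; the action vector is a made-up
    # [forward, strafe, yaw, jump] encoding.
    for tick in range(16):
        frame = model.step(np.array([1.0, 0.0, 0.1, 0.0]))
        print(f"tick {tick}: frame {frame.shape}, latent norm {np.linalg.norm(model.latent):.3f}")
```

The multiplayer question the post raises would amount to keeping many such latents (or one shared latent) consistent across players each tick, which is exactly the synchronization problem they say is still open.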



AI Turns Brain Activity Into Sentences

Researchers have created a "mind-captioning" AI that translates brain activity into full descriptive sentences, not just keywords. Using fMRI scans and deep language models, the system decodes what people see or imagine in surprising detail, effectively narrating mental scenes. Beyond its sci-fi appeal, the technology could unlock new ways to understand perception and help people with speech loss regain communication.
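One way to picture the decoding step described above: map an fMRI voxel pattern into a text-embedding space and pick the sentence whose embedding best matches. The sketch below is a toy illustration of that idea only, not the researchers' pipeline; the captions, dimensions, and the linear brain-to-text mapping are all invented for the example.

```python
# Toy illustration of "mind-captioning" style decoding: project brain activity into
# a text-embedding space, then select the nearest candidate caption.
# Every name, dimension, and data value here is an assumption for the sketch.

import numpy as np

VOXELS, EMBED_DIM = 2000, 64
rng = np.random.default_rng(0)

# Pretend embeddings for a few candidate captions (a real system would use a deep
# language model to embed many candidates or generate text directly).
captions = [
    "a person walking a dog on a beach",
    "a red car driving down a city street",
    "two people eating dinner at a table",
]
caption_embeddings = rng.standard_normal((len(captions), EMBED_DIM))

# Pretend a regression from voxel activity to the same embedding space has been fit.
brain_to_text = rng.standard_normal((VOXELS, EMBED_DIM)) * 0.01


def decode_caption(fmri_pattern: np.ndarray) -> str:
    """Project the fMRI pattern into text-embedding space and return the closest caption."""
    decoded = fmri_pattern @ brain_to_text
    sims = caption_embeddings @ decoded / (
        np.linalg.norm(caption_embeddings, axis=1) * np.linalg.norm(decoded) + 1e-9
    )
    return captions[int(np.argmax(sims))]


if __name__ == "__main__":
    # Simulate a scan whose decoded features align with the second caption.
    fake_scan = caption_embeddings[1] @ np.linalg.pinv(brain_to_text)
    print(decode_caption(fake_scan))  # expected: "a red car driving down a city street"
```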