Jiří Prokop
@synaptiko_o · 4.1K posts
Behavioral pattern observer, recorder, modifier & replayer...

The timeline isn't ready for this 🤯 Testing out Kling 2.6 motion control, and the realism is the best I've seen. Perfect facial expressions in real time... from Taylor to Ye instantly. We're entering a weird new era of content creation. (If you want a full video tutorial with all the prompts I used to build this, comment "MOTION" 👇 and I'll DM you the full doc. You must be following so I can DM you.)

🚨Workflow share🚨 Want to create epic Chibi-style figures and bring them to life? No worries — here’s the full workflow, with prompts included. ✨ Bookmark this for later.

While working on my project involving fabric fibers, I needed to achieve Differential Growth with Geometry Nodes. 🧬 #b3d Here’s the solution I came up with 👇
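The post links out to the actual Geometry Nodes setup, but the underlying algorithm can be sketched in plain code. Below is a minimal, hypothetical 2-D differential-growth sketch (my own illustration, not the author's node tree): a closed loop of points grows through neighbor attraction, short-range repulsion, and subdivision of edges that stretch too far.

```python
# Minimal 2-D differential growth on a closed loop of points.
# Hypothetical illustration: parameter names and values are assumptions,
# not taken from the author's Geometry Nodes setup.
import math

def differential_growth(points, steps=30, attract=0.05, repel=0.2,
                        repel_radius=0.5, max_edge=0.4):
    pts = [list(p) for p in points]
    for _ in range(steps):
        n = len(pts)
        forces = [[0.0, 0.0] for _ in range(n)]
        # 1. Attraction: pull each point toward the midpoint of its
        #    two loop neighbors (keeps the curve smooth).
        for i in range(n):
            mx = (pts[i - 1][0] + pts[(i + 1) % n][0]) / 2
            my = (pts[i - 1][1] + pts[(i + 1) % n][1]) / 2
            forces[i][0] += attract * (mx - pts[i][0])
            forces[i][1] += attract * (my - pts[i][1])
        # 2. Repulsion: push apart any pair of points closer than
        #    repel_radius (makes the curve fold instead of self-intersect).
        for i in range(n):
            for j in range(i + 1, n):
                dx = pts[i][0] - pts[j][0]
                dy = pts[i][1] - pts[j][1]
                d = math.hypot(dx, dy)
                if 1e-9 < d < repel_radius:
                    f = repel * (repel_radius - d) / d
                    forces[i][0] += f * dx; forces[i][1] += f * dy
                    forces[j][0] -= f * dx; forces[j][1] -= f * dy
        for i in range(n):
            pts[i][0] += forces[i][0]
            pts[i][1] += forces[i][1]
        # 3. Growth: subdivide any edge longer than max_edge by
        #    inserting its midpoint (this is what adds material).
        grown = []
        for i in range(len(pts)):
            a, b = pts[i], pts[(i + 1) % len(pts)]
            grown.append(a)
            if math.hypot(b[0] - a[0], b[1] - a[1]) > max_edge:
                grown.append([(a[0] + b[0]) / 2, (a[1] + b[1]) / 2])
        pts = grown
    return pts

# Seed with a small circle; the loop wrinkles and gains vertices as it grows.
circle = [[math.cos(2 * math.pi * k / 12), math.sin(2 * math.pi * k / 12)]
          for k in range(12)]
result = differential_growth(circle)
print(len(circle), "->", len(result), "points")
```

In Geometry Nodes the same three steps map naturally onto a simulation zone: a neighbor-smoothing node group, a proximity-based repulsion offset, and a resample/subdivide step for edges past the length threshold.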

New Paper: Continuous Thought Machines 🧠

Neurons in brains use timing and synchronization in the way that they compute, but this is largely ignored in modern neural nets. We believe neural timing is key for the flexibility and adaptability of biological intelligence.

We propose a new neural architecture, "Continuous Thought Machines" (CTMs), which is built from the ground up to use neural dynamics as a core representation for intelligence. By treating neural dynamics as a first-class representational citizen, CTMs naturally perform adaptive computation. Many interesting emergent behaviors arise as a result: CTMs solve mazes by observing a raw maze image and producing step-by-step instructions directly from their neural dynamics. When tasked with image recognition, the CTM naturally takes multiple steps to examine different parts of the image before making its decision.

This step-by-step approach not only makes its behavior more interpretable but also improves accuracy: the longer it "thinks," the more accurate its answers become. We also found that this allows the CTM to decide to spend less time thinking on simpler images, thus saving energy. When identifying a gorilla, for example, the CTM's attention moves from eyes to nose to mouth in a pattern remarkably similar to human visual attention.

I think this work underscores an important, yet often overlooked, synergy between neuroscience and AI. While modern AI is ostensibly brain-inspired, the two fields often operate in surprising isolation. By starting with such inspiration and iteratively following the emergent, interesting behaviors, we developed a model with unexpected capabilities, such as its surprisingly strong calibration in classification tasks, a feature that was not explicitly designed for. When we initially asked, "why do this research?", we hoped the journey of the CTM would provide compelling answers.

By embracing light biological inspiration and pursuing the novel behaviors observed, we have arrived at a model with emergent capabilities that exceeded our initial designs. We are committed to continuing this exploration, borrowing further concepts to discover what new and exciting behaviors will emerge, pushing the boundaries of what AI can achieve.
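The adaptive-computation behavior described above (easy inputs get fewer internal "thinking" steps than hard ones) can be illustrated with a toy loop. This is not the CTM architecture, just a minimal sketch of the stopping idea: tick until confidence crosses a threshold, accumulating evidence with diminishing returns; all names and numbers here are my own assumptions.

```python
# Toy sketch of adaptive computation (NOT the actual CTM model):
# keep ticking until confidence passes a threshold, so inputs that
# yield strong evidence per tick finish in fewer internal steps.
def adaptive_ticks(evidence_per_tick, threshold=0.9, max_ticks=50):
    """evidence_per_tick: fraction of remaining uncertainty resolved per tick."""
    confidence, ticks = 0.0, 0
    while confidence < threshold and ticks < max_ticks:
        # Each tick closes part of the gap to full confidence
        # (diminishing returns, like repeated glances at an image).
        confidence += evidence_per_tick * (1.0 - confidence)
        ticks += 1
    return ticks, confidence

easy = adaptive_ticks(0.5)   # strong evidence per tick -> stops early
hard = adaptive_ticks(0.05)  # weak evidence per tick -> thinks much longer
print("easy:", easy[0], "ticks; hard:", hard[0], "ticks")
```

The CTM realizes this far more richly (neural dynamics and synchronization rather than a scalar confidence), but the energy-saving intuition is the same: computation time scales with input difficulty.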

There's no doubt that extremely powerful AI beyond human level will still need a few more breakthroughs. Could that take around 5–10 years? Probably. But nearly human-level early AGI that can impact the global economy is almost certain by the end of this decade.