AI

60.7K posts


@DeepLearn007

Imtiaz Adam CS #AI Postgrad | #Tech #Strategy #MachineLearning #DeepLearning | #RL #Agentic | #LLM Liberal | #GenAI | MBA alum @morganstanley @LBS @Columbia_Biz

Joined September 2012
109.5K Following · 136.8K Followers
AI retweeted
DailyPapers @HuggingPapers
Complementary Reinforcement Learning: Alibaba researchers introduce a neuroscience-inspired RL paradigm enabling policy actors and experience extractors to co-evolve, fixing misalignment in static memory systems and boosting agentic performance by 10%.
DailyPapers tweet media
2 replies · 11 reposts · 36 likes · 2.5K views
AI retweeted
Science Magazine @ScienceMagazine
A new Science study finds that different dendritic segments of a single neuron follow distinct rules. The results challenge the idea that neurons follow a single learning strategy and offer a new perspective on how the brain learns and adapts behavior 📄: scim.ag/3GjGf4d #SciencePerspective: scim.ag/4jl6fed
Science Magazine tweet media
14 replies · 259 reposts · 787 likes · 59.1K views
AI retweeted
Paul F. Austin @PaulAustin3w
We can now watch psilocybin grow new brain connections in real time. Not metaphorically. Not "neuroplasticity" as a vague buzzword. Actual, physical structures, dendritic spines, sprouting from cortical neurons within 24 hours of a single dose.

A team at Yale used chronic two-photon microscopy to image individual dendritic spines on layer 5 pyramidal neurons in the medial frontal cortex of living mice. Before psilocybin. After psilocybin. Same neurons. Same spines. Day after day.

And here's what they found: a single dose of psilocybin produced a ~10% increase in spine density and spine size. New spines began forming within 24 hours. Most of these new connections were still there a month later.

That last part matters most. Psilocybin has a half-life of about 3 hours. The molecule is gone by dinner. But the structural changes it triggers persist for at least 34 days (and likely far longer). This is the biological explanation for something clinicians have observed for years: a single psilocybin session producing therapeutic benefits that last months. The drug disappears, but the architecture it built does not.

There's a critical mechanistic detail. When researchers pre-treated with ketanserin, a 5-HT2A receptor antagonist, the spine growth was completely blocked. This confirms that structural remodeling depends on activation of the serotonin 2A receptor. The same receptor responsible for the psychedelic experience itself.

A 2025 follow-up from the same lab went further. Using rabies tracing to map brain-wide inputs to these new spines, they discovered psilocybin's rewiring is network-specific. It selectively strengthens inputs from perceptual and default mode network regions, the same networks implicated in self-referential processing, rumination, and depression. It doesn't just grow connections randomly. It grows the RIGHT ones.

Here's what this means for practitioners: the window after a psychedelic experience isn't just psychological. It's structural. New dendritic spines form and stabilize in the days and weeks following a session. Integration practices (therapy, journaling, somatic work, meditation, breathwork) aren't just processing insights. They may be reinforcing which of these new physical connections survive. You're not just supporting someone's mental model. You're supporting their neural architecture.

Think about what that reframes. The integration period isn't a nice-to-have. It's a biological imperative. Those new spines either stabilize into lasting connections or get pruned. The environment, practices, and support during that window may determine which.

We're not just learning that psilocybin works. We're watching exactly how it works, at the level of individual synapses. The implications for how we design protocols, structure integration, and time follow-up sessions are enormous.

What do you make of this research? Is psilocybin the miracle drug that science makes it out to be?
Paul F. Austin tweet media
210 replies · 1K reposts · 5.9K likes · 1.2M views
AI retweeted
Alexander A. Wolf @alexanderawolf
We built a Spiking Neural Network (SNN) to fully control a physical robot arm (6-DOF), and it's running entirely on a @Raspberry_Pi (no HAT). In our approach, the SNN starts untrained and drives the servos directly, learning in real time through local learning rules (no backprop or gradient descent).

The goal with Twitch-y (the robot + SNN brain) is to let it explore and build an understanding of its own body until it can touch the plate on the mat, while also validating the SNN architecture and some of the math.

In the video:
- Top right: the SNN architecture
- Bottom right: the mini-brain spiking in real time while controlling the arm
- Left: Twitch-y (@huggingface SO-101)

If you want to learn more about Artificial Brains, come visit us this Friday (13th) at @fdotinc 👉 luma.com/artifact
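The tweet doesn't share the actual learning rule, but the key idea (each synapse updates from purely local pre/post activity plus a global reward signal, with no gradients flowing through the network) can be sketched in a few lines. Everything here, from the function name to the reward-modulated Hebbian form, is an illustrative assumption, not the authors' code:

```python
def local_hebbian_update(weights, pre_spikes, post_spikes,
                         reward=0.0, lr=0.01, decay=0.001):
    """One local-learning step: each synapse changes using only the activity
    of its own pre/post neurons plus a global reward scalar - no backprop."""
    for i, post in enumerate(post_spikes):
        for j, pre in enumerate(pre_spikes):
            hebb = lr * (1.0 + reward) * pre * post  # fire together, wire together
            weights[i][j] += hebb - decay * weights[i][j]
    return weights

# Toy step: 3 sensory inputs driving 2 motor neurons.
w = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
w = local_hebbian_update(w, pre_spikes=[1, 0, 1], post_spikes=[1, 0],
                         reward=0.5, lr=0.1, decay=0.0)
# Only synapses whose pre AND post neuron both fired are strengthened.
```

Because the update at each synapse needs only two spike values and one scalar, it runs cheaply on a Raspberry Pi-class CPU, which is plausibly why no backprop hardware is needed.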
18 replies · 29 reposts · 264 likes · 28.5K views
AI retweeted
Alexander A. Wolf @alexanderawolf
We built a Spiking Neural Network to control a 7-DOF robotic arm in simulation. It aims to learn in real time using local learning rules, running entirely on a CPU. It wasn't trained on any dataset, and it's experiencing the world for the first time as it moves with very sparse reward policies.

In the video: the first few seconds show the SNN architecture; the remainder shows a Panda robot controlled by this mini-brain in real time (1x speed).
7 replies · 5 reposts · 47 likes · 2.6K views
AI retweeted
NOELREPORTS 🇪🇺 🇺🇦
โ—๏ธUkrainian experts in the Middle East were shocked by US air defense tactics, citing the use of up to 8 Patriot missiles per drone and $6M interceptors against $70K UAVs, while poorly concealed radars were destroyed by cheap drones, The Times reports. #Iran
50 replies · 291 reposts · 1.6K likes · 98.2K views
AI retweeted
MAKS 25 🇺🇦👀
๐Ÿ‡บ๐Ÿ‡ฆโ—๏ธโ€œI have no idea what the allies have been looking at for four years while we have been at war,โ€ โ€” Ukrainian military instructors who went to help counter Iranian missiles and UAVs are shocked by the way the US shoots down โ€œShaheds,โ€ writes The Times โ€“ First, the Persian Gulf countries launched as many as 8 Patriot missiles at one (!) enemy target, each costing more than $3 million. โ€“ They often used a ship-based SM-6 missile, worth about $6 million, to shoot down a โ€œShahedโ€ worth $70,000. โ€“ The US and its allies often literally โ€œshineโ€ their radars like beacons โ€” without proper camouflage. Ukrainians work differently: mobile radars constantly change positions. For example: just three (!) cheap Shahed drones destroyed the AN/FPS-132 early warning radar (~$1 billion) and another air defense radar (~$300 million), which had been standing in one place for months and were perfectly โ€œreadableโ€ from satellites.
MAKS 25 ๐Ÿ‡บ๐Ÿ‡ฆ๐Ÿ‘€ tweet media
78 replies · 700 reposts · 3K likes · 87.4K views
AI retweeted
Peter H. Diamandis, MD @PeterDiamandis
Recent neuromorphic computing breakthroughs mean we can now simulate complex physics on brain-inspired chips using 1,000x less energy than supercomputers. We're literally making silicon think like neurons to solve equations that used to require entire data centers. The Cambrian explosion of AI wasn't the finish line; it was the starting gun.
57 replies · 61 reposts · 549 likes · 26K views
AI retweeted
Peter H. Diamandis, MD @PeterDiamandis
We're watching the Supersonic Tsunami hit. Neuromorphic chips (1,000x efficiency). Fusion plants coming online. Humanoid robots entering our homes. $1 trillion infrastructure deployment. These aren't separate trends; they're one converging system. The next 6 months will be wild.
77 replies · 124 reposts · 1.1K likes · 38.8K views
AI retweeted
Peter H. Diamandis, MD @PeterDiamandis
The human brain uses only 20 watts of power but performs ~1 exaFLOP (10^18 operations/sec). Today's top AI chips burn 700 watts for ~1 petaFLOP. We're still ~1,000x less efficient than biology. When neuromorphic chips close that gap, we won't be building data centers; we might be growing them.
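Taking the tweet's own figures at face value (the brain-as-exaFLOP estimate is itself a rough, contested number), the comparison is easy to check as operations per joule:

```python
brain_ops_per_sec = 1e18   # ~1 exaFLOP (tweet's estimate)
brain_watts = 20
chip_ops_per_sec = 1e15    # ~1 petaFLOP (tweet's estimate)
chip_watts = 700

# Efficiency = operations per joule (ops/sec divided by watts).
brain_eff = brain_ops_per_sec / brain_watts   # 5e16 ops/J
chip_eff = chip_ops_per_sec / chip_watts      # ~1.4e12 ops/J
print(f"brain delivers ~{brain_eff / chip_eff:,.0f}x more ops per joule")
```

By these numbers the per-joule gap is roughly 35,000x; the ~1,000x cited is the raw throughput ratio (exaFLOP vs. petaFLOP) before accounting for the 35x power difference.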
116 replies · 157 reposts · 914 likes · 65.2K views
AI retweeted
Femke Plantinga @femke_plantinga
Is your RAG pipeline failing because of your data, or because of your queries?

Most developers optimize their vector databases. But smart developers optimize their queries first. These 4 techniques optimize your queries before they hit your vector database:

1. Query Decomposition
Query decomposition breaks down complex questions into smaller, manageable pieces. So instead of asking "How do I build an agentic RAG system that handles multi-step reasoning?", decompose it into:
- "What are the core components of agentic RAG?"
- "How do agents handle multi-step reasoning chains?"
- "What are the best tools for coordinating AI agents and vector search?"
This technique enables agents to approach tasks systematically, thereby improving the accuracy and reliability of LLM responses.

2. Query Routing
Direct queries to the most appropriate data source or index. Legal question? → Route it to your legal documents. Technical question? → Send it to your engineering docs. This targeted approach dramatically improves relevance.

3. Query Transformation
Rewrite queries to better match your data structure. Transform "latest updates" → "recent changes 2025" or expand acronyms automatically. This bridges the gap between how users ask questions and how information is stored.

4. Query Agent
Query agents are the most advanced approach, using AI agents to intelligently handle the entire query processing pipeline. The agent can reformulate the query, choose the right search type and filters, and decide which data collections to search.

Query optimization happens before retrieval, addressing the root cause of poor results rather than trying to compensate for them downstream.

Dive deeper in this free RAG ebook: weaviate.io/ebooks/advance…
Learn more about the query agent: docs.weaviate.io/agents/query?u…
Femke Plantinga tweet media
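The first three techniques can be sketched as plain functions. In production each step would be an LLM call; here naive keyword rules stand in, and every name (`decompose`, `route`, `transform`, the route/rewrite tables) is a hypothetical illustration, not Weaviate's API:

```python
def decompose(query):
    """Query decomposition: split a compound question into sub-queries.
    (An LLM does this in practice; a naive ' and ' split stands in here.)"""
    parts = [p.strip() for p in query.split(" and ")]
    return [p if p.endswith("?") else p + "?" for p in parts]

# Query routing: map query keywords to the most appropriate index.
ROUTES = {"legal": "legal_docs", "contract": "legal_docs",
          "api": "engineering_docs", "deploy": "engineering_docs"}

def route(query):
    for keyword, index in ROUTES.items():
        if keyword in query.lower():
            return index
    return "general_docs"

# Query transformation: rewrite phrases to match how data is stored.
REWRITES = {"latest updates": "recent changes 2025",
            "RAG": "retrieval-augmented generation (RAG)"}

def transform(query):
    for old, new in REWRITES.items():
        query = query.replace(old, new)
    return query

for sub in decompose("What is agentic RAG and how do agents deploy tools?"):
    print(route(sub), "<-", transform(sub))
```

The point of the sketch is the ordering: decomposition, routing, and transformation all happen before any vector search runs, so the database only ever sees well-targeted queries.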
10 replies · 21 reposts · 138 likes · 5.5K views
AI retweeted
Femke Plantinga @femke_plantinga
AI memory systems don't fail because they forget. They actually fail because they remember everything.

If you've ever used an agent that repeats your own preferences back to you, or worse, ignores them, you've hit the 'limited loop.' Each interaction is treated as disposable. No continuity between sessions. No growth. For a chatbot, that's annoying. For an autonomous agent, it's catastrophic.

Memory isn't just something you simply store and retrieve. It's something you actively maintain. Imagine a developer-facing agent that recommends a specific library version early in a project. Months later, the tooling has changed, but the old guidance is still in memory. The agent confidently suggests outdated instructions, leaving users worse off than if they hadn't asked at all.

So to move from simple storage to intelligence, your memory system needs:
- Write control: deciding what to store and at what confidence level
- Deduplication: collapsing repeated information into canonical facts
- Reconciliation: handling contradictions as reality changes
- Amendment: correcting wrong facts rather than appending newer versions
- Purposeful forgetting: allowing temporary information to naturally fade

Without active maintenance, memory becomes an ever-growing pile of notes: some useful, some stale, some flat-out wrong. Once your system relies on memory for continual learning and adaptation, it stops behaving like a feature and starts behaving like infrastructure, requiring the same durability, isolation, and governance guarantees as your storage layer.

At Weaviate, we're building memory from the ground up as a first-class data problem.

Read the full deep-dive: weaviate.io/blog/limit-in-…
And if you're interested in where memory is heading at Weaviate, sign up for a preview 🧡
Femke Plantinga tweet media
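The maintenance operations listed above can be made concrete with a toy in-memory store. The class, its methods, and the confidence/TTL scheme are all illustrative assumptions for this sketch, not Weaviate's actual design:

```python
import time

class AgentMemory:
    """Toy memory store illustrating active maintenance, not just storage."""

    def __init__(self, min_confidence=0.5, ttl_seconds=None):
        self.facts = {}               # key -> (value, confidence, timestamp)
        self.min_confidence = min_confidence
        self.ttl = ttl_seconds        # purposeful-forgetting horizon

    def write(self, key, value, confidence):
        # Write control: reject low-confidence facts outright.
        if confidence < self.min_confidence:
            return False
        current = self.facts.get(key)
        # Deduplication + amendment + reconciliation: one canonical fact
        # per key; a newer value overwrites (rather than appends to) the
        # old one when it is at least as trustworthy.
        if current is None or confidence >= current[1]:
            self.facts[key] = (value, confidence, time.time())
            return True
        return False

    def read(self, key):
        entry = self.facts.get(key)
        if entry is None:
            return None
        value, _, ts = entry
        # Purposeful forgetting: stale facts fade instead of lingering.
        if self.ttl is not None and time.time() - ts > self.ttl:
            del self.facts[key]
            return None
        return value

mem = AgentMemory(min_confidence=0.5)
mem.write("preferred_http_lib", "requests 2.x", confidence=0.8)
mem.write("preferred_http_lib", "httpx 0.27", confidence=0.9)  # amendment, not append
print(mem.read("preferred_http_lib"))
```

This is exactly the library-version scenario from the tweet: the second `write` corrects the stored fact in place, so the agent never serves both the old and new recommendation.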
8 replies · 9 reposts · 43 likes · 1.9K views
AI retweeted
Victoria Slocum @victorialslocum
Not all ๐—บ๐˜‚๐—น๐˜๐—ถ-๐—ฎ๐—ด๐—ฒ๐—ป๐˜ ๐—ฎ๐—ฟ๐—ฐ๐—ต๐—ถ๐˜๐—ฒ๐—ฐ๐˜๐˜‚๐—ฟ๐—ฒ๐˜€ are created equal. Here are six patterns that actually work in production: 1๏ธโƒฃ ๐—›๐—ถ๐—ฒ๐—ฟ๐—ฎ๐—ฟ๐—ฐ๐—ต๐—ถ๐—ฐ๐—ฎ๐—น One top-level agent coordinates multiple specialized sub-agents. The coordinator analyzes the query and routes it to the right specialists - one might handle proprietary internal data, another personal accounts (email, chat), another public web searches, then synthesizes the results into a coherent answer. ๐˜ž๐˜ฉ๐˜ฆ๐˜ฏ ๐˜ต๐˜ฐ ๐˜ถ๐˜ด๐˜ฆ: When you need to query across different data sources that require different patterns or strategies. 2๏ธโƒฃ ๐—›๐˜‚๐—บ๐—ฎ๐—ป ๐—ถ๐—ป ๐˜๐—ต๐—ฒ ๐—Ÿ๐—ผ๐—ผ๐—ฝ Critical decisions get routed to humans for approval before execution. The workflow pauses, a human validates or modifies the proposed action, then the agent continues. ๐˜ž๐˜ฉ๐˜ฆ๐˜ฏ ๐˜ต๐˜ฐ ๐˜ถ๐˜ด๐˜ฆ: High-stakes decisions, regulated environments, or anywhere you need accountability and oversight. 3๏ธโƒฃ ๐—ฆ๐—ต๐—ฎ๐—ฟ๐—ฒ๐—ฑ ๐—ง๐—ผ๐—ผ๐—น๐˜€ Each agent has its own role and focus, but they can all call the same APIs, databases, or search functions - same tools, different tasks. ๐˜ž๐˜ฉ๐˜ฆ๐˜ฏ ๐˜ต๐˜ฐ ๐˜ถ๐˜ด๐˜ฆ: When the tools are general-purpose but the ๐˜ณ๐˜ฆ๐˜ข๐˜ด๐˜ฐ๐˜ฏ๐˜ช๐˜ฏ๐˜จ about how to use them needs to be specialized. 4๏ธโƒฃ ๐—ฆ๐—ฒ๐—พ๐˜‚๐—ฒ๐—ป๐˜๐—ถ๐—ฎ๐—น Agents work in a pipeline, where the output of one agent becomes the input for the next. Agent 1 retrieves documents โ†’ Agent 2 filters and ranks โ†’ Agent 3 synthesizes the final answer. ๐˜ž๐˜ฉ๐˜ฆ๐˜ฏ ๐˜ต๐˜ฐ ๐˜ถ๐˜ด๐˜ฆ: When your workflow has clear stages where you need specialized expertise at each step. 
5๏ธโƒฃ ๐—ฆ๐—ต๐—ฎ๐—ฟ๐—ฒ๐—ฑ ๐——๐—ฎ๐˜๐—ฎ๐—ฏ๐—ฎ๐˜€๐—ฒ ๐˜„๐—ถ๐˜๐—ต ๐——๐—ถ๐—ณ๐—ณ๐—ฒ๐—ฟ๐—ฒ๐—ป๐˜ ๐—ง๐—ผ๐—ผ๐—น๐˜€ All agents access the same underlying database (like a vector store), but each has different specialized tools for ๐˜ธ๐˜ฉ๐˜ข๐˜ต they do with that data. One agent might have tools for semantic search, another for data transformation. ๐˜ž๐˜ฉ๐˜ฆ๐˜ฏ ๐˜ต๐˜ฐ ๐˜ถ๐˜ด๐˜ฆ: When you have a centralized knowledge base but need different types of operations performed on it. 6๏ธโƒฃ ๐— ๐—ฒ๐—บ๐—ผ๐—ฟ๐˜† ๐—ง๐—ฟ๐—ฎ๐—ป๐˜€๐—ณ๐—ผ๐—ฟ๐—บ๐—ฎ๐˜๐—ถ๐—ผ๐—ป Agents that modify data in place within the database. This allows agents to not just retrieve but actively maintain and update the knowledge base. ๐˜ž๐˜ฉ๐˜ฆ๐˜ฏ ๐˜ต๐˜ฐ ๐˜ถ๐˜ด๐˜ฆ: When your data needs continuous enrichment, cleanup, or transformation as part of the agentic workflow. The reality is that most production systems use ๐—ต๐˜†๐—ฏ๐—ฟ๐—ถ๐—ฑ ๐—ฎ๐—ฝ๐—ฝ๐—ฟ๐—ผ๐—ฎ๐—ฐ๐—ต๐—ฒ๐˜€ combining multiple patterns. You might have a hierarchical coordinator that routes to sequential pipelines, with human-in-the-loop gates at critical decision points, all working with a shared database. Learn more about building multi-agent systems in our ebook: weaviate.io/ebooks/agenticโ€ฆ Or check out @weaviate_io Agent Skills to start building: weaviate.io/blog/weaviate-โ€ฆ
Victoria Slocum tweet media
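The hierarchical pattern (combined with shared tools) can be sketched in a few lines. The specialist names, the `search` function, and the keyword routing are all hypothetical stand-ins for what would be LLM-backed agents in a real system:

```python
def search(index, query):
    """Shared tool: every specialist calls the same search function."""
    return f"[{index} results for '{query}']"

# Specialized sub-agents, each pointed at a different data source.
SPECIALISTS = {
    "internal": lambda q: search("internal_data", q),
    "personal": lambda q: search("email_and_chat", q),
    "web":      lambda q: search("public_web", q),
}

def coordinator(query):
    """Top-level agent: route to the right specialists, then synthesize.
    (Keyword matching stands in for an LLM's routing decision.)"""
    chosen = []
    if "email" in query or "chat" in query:
        chosen.append("personal")
    if "internal" in query or "our " in query:
        chosen.append("internal")
    if not chosen:
        chosen.append("web")
    answers = [SPECIALISTS[name](query) for name in chosen]
    return " + ".join(answers)   # synthesis step, vastly simplified

print(coordinator("find the email thread about our internal roadmap"))
```

A hybrid system would extend this skeleton: each specialist could itself be a sequential pipeline, and the coordinator could pause for human approval before executing a high-stakes action.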
36 replies · 124 reposts · 705 likes · 39.3K views