
AI
@DeepLearn007
Imtiaz Adam CS #AI Postgrad | #Tech #Strategy #MachineLearning #DeepLearning | #RL #Agentic | #LLM Liberal | #GenAI | MBA alum @morganstanley @LBS @Columbia_Biz
Joined September 2012
109.5K Following · 136.8K Followers
AI reposted

Stanford's "Reinforcement Learning"
by Emma Brunskill (Spring 2024)
Lecture Videos: youtube.com/playlist?list=…
Course Website: web.stanford.edu/class/cs234/CS…

AI reposted

A new Science study finds that different dendritic segments of a single neuron follow distinct rules. The results challenge the idea that neurons follow a single learning strategy and offer a new perspective on how the brain learns and adapts behavior.
Link: scim.ag/3GjGf4d
#SciencePerspective: scim.ag/4jl6fed

AI reposted

Flt3L from interfollicular stroma maintains resident dendritic cells dlvr.it/TQpj3R #immunology

AI reposted

We can now watch psilocybin grow new brain connections in real time.
Not metaphorically. Not "neuroplasticity" as a vague buzzword. Actual, physical structures (dendritic spines) sprouting from cortical neurons within 24 hours of a single dose.
A team at Yale used chronic two-photon microscopy to image individual dendritic spines on layer 5 pyramidal neurons in the medial frontal cortex of living mice.
Before psilocybin. After psilocybin. Same neurons. Same spines. Day after day.
And here's what they found:
A single dose of psilocybin produced a ~10% increase in spine density and spine size. New spines began forming within 24 hours. Most of these new connections were still there a month later.
That last part matters most.
Psilocybin has a half-life of about 3 hours. The molecule is gone by dinner. But the structural changes it triggers persist for at least 34 days (and likely far longer).
This is the biological explanation for something clinicians have observed for years: a single psilocybin session producing therapeutic benefits that last months. The drug disappears, but the architecture it built does not.
There's a critical mechanistic detail. When researchers pre-treated with ketanserin, a 5-HT2A receptor antagonist, the spine growth was completely blocked. This confirms that structural remodeling depends on activation of the serotonin 2A receptor. The same receptor responsible for the psychedelic experience itself.
A 2025 follow-up from the same lab went further. Using rabies tracing to map brain-wide inputs to these new spines, they discovered psilocybin's rewiring is network-specific. It selectively strengthens inputs from perceptual and default mode network regions, the same networks implicated in self-referential processing, rumination, and depression.
It doesn't just grow connections randomly. It grows the RIGHT ones.
Here's what this means for practitioners:
The window after a psychedelic experience isn't just psychological. It's structural. New dendritic spines form and stabilize in the days and weeks following a session.
Integration practices โ therapy, journaling, somatic work, meditation, breathwork โ aren't just processing insights. They may be reinforcing which of these new physical connections survive.
You're not just supporting someone's mental model. You're supporting their neural architecture.
Think about what that reframes. The integration period isn't a nice-to-have. It's a biological imperative. Those new spines either stabilize into lasting connections or get pruned. The environment, practices, and support during that window may determine which.
We're not just learning that psilocybin works. We're watching exactly how it works, at the level of individual synapses.
The implications for how we design protocols, structure integration, and time follow-up sessions are enormous.
What do you make of this research? Is psilocybin the miracle drug that science makes it out to be?

AI reposted

We built a Spiking Neural Network (SNN) to fully control a physical robot arm (6-DOF), and it's running entirely on a @Raspberry_Pi (no HAT).
In our approach, the SNN starts untrained and drives the servos directly, learning in real time through local learning rules (no backprop or gradient descent).
The goal with Twitch-y (the robot + SNN brain) is to let it explore and build an understanding of its own body until it can touch the plate on the mat, while also validating the SNN architecture and some of the math.
In the video:
- Top right: the SNN architecture
- Bottom right: the mini-brain spiking in real time while controlling the arm
- Left: Twitch-y (@huggingface SO-101)
If you want to learn more about Artificial Brains, come visit us this Friday (13th) at @fdotinc: luma.com/artifact
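The post doesn't publish its learning rule, but a minimal sketch of the kind of local, gradient-free update it describes could look like this Hebbian-style step (all names, sizes, and rates here are illustrative, not the authors' code):

```python
import numpy as np

def local_hebbian_update(w, pre, post, lr=0.01, decay=0.001):
    """One plasticity step using only locally available signals: a synapse
    strengthens when its pre- and post-synaptic neurons spike together,
    and every weight decays slightly. No gradients, no backprop."""
    coactivity = np.outer(pre, post)   # 1 where both sides spiked
    return w + lr * coactivity - decay * w

# Toy wiring: 3 sensory neurons driving 2 motor neurons.
w = np.zeros((3, 2))
pre = np.array([1.0, 0.0, 1.0])   # input spikes this timestep
post = np.array([0.0, 1.0])       # output spikes this timestep
w = local_hebbian_update(w, pre, post)
```

Because each synapse only reads its own pre/post activity, a rule like this can run online on modest hardware such as a Raspberry Pi, which is what makes the no-backprop claim plausible.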
AI reposted

We built a Spiking Neural Network to control a 7-DOF robotic arm in simulation. It aims to learn in real time using local learning rules, running entirely on a CPU.
It wasn't trained on any dataset, and it's experiencing the world for the first time as it moves, with very sparse reward policies.
In the video: the first few seconds show the SNN architecture; the remainder shows a Panda robot controlled by this mini-brain in real time (1x speed).
AI reposted

PLoS Computational Biology
Neural spiking for causal inference and learning
journals.plos.org/ploscompbiol/a…
AI reposted

Ukrainian experts in the Middle East were shocked by US air defense tactics, citing the use of up to 8 Patriot missiles per drone and $6M interceptors against $70K UAVs, while poorly concealed radars were destroyed by cheap drones, The Times reports. #Iran
AI reposted

"I have no idea what the allies have been looking at for four years while we have been at war," say Ukrainian military instructors who went to help counter Iranian missiles and UAVs, shocked by the way the US shoots down "Shaheds," writes The Times.
- First, the Persian Gulf countries launched as many as 8 Patriot missiles at one (!) enemy target, each costing more than $3 million.
- They often used a ship-based SM-6 missile, worth about $6 million, to shoot down a "Shahed" worth $70,000.
- The US and its allies often literally "shine" their radars like beacons, without proper camouflage. Ukrainians work differently: mobile radars constantly change positions.
For example: just three (!) cheap Shahed drones destroyed the AN/FPS-132 early warning radar (~$1 billion) and another air defense radar (~$300 million), which had been standing in one place for months and were perfectly "readable" from satellites.

AI reposted

Recent neuromorphic computing breakthroughs mean we can now simulate complex physics on brain-inspired chips using 1,000x less energy than supercomputers. We're literally making silicon think like neurons to solve equations that used to require entire data centers. The Cambrian explosion of AI wasn't the finish line; it was the starting gun.
AI reposted

The human brain uses only 20 watts of power but performs ~1 exaFLOP (10^18 operations/sec). Today's top AI chips burn 700 watts for ~1 petaFLOP. By those numbers, silicon is still tens of thousands of times less energy-efficient per operation than biology. When neuromorphic chips close that gap, we won't be building data centers; we might be growing them.
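Taking the post's own figures at face value, the per-operation energy gap is simple arithmetic, and it comes out closer to 35,000x than the often-quoted 1,000x:

```python
# Energy per operation implied by the post's figures.
brain_j_per_op = 20 / 1e18    # 20 W spread over 10^18 ops/sec
chip_j_per_op = 700 / 1e15    # 700 W spread over 10^15 ops/sec
gap = chip_j_per_op / brain_j_per_op
print(f"brain: {brain_j_per_op:.0e} J/op, chip: {chip_j_per_op:.0e} J/op, gap: {gap:,.0f}x")
```

Both input numbers are rough order-of-magnitude estimates, so the gap itself should be read the same way.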
AI reposted

What the internet felt like before algorithmic curation
buff.ly/pODHTno
#AI
Cc @SpirosMargaris @JoannMoretti @DeepLearn007 @chidambara09 @PawlowskiMario @jblefevre60 @nincoroby

AI reposted

Building an Agentic #AI Pipeline for #ESG Reporting
buff.ly/fDYhd4A v/ @AnalyticsVidhya
#GenAI
Cc @DeepLearn007 @HaroldSinnott @YvesMulkers @VanRijmenam @sandy_carter @KirkDBorne @terence_mills

AI reposted

Automating MFA Testing with Playwright Storage State
buff.ly/lqEAcm5 v/ @sogetilabs
#AI
Cc @NewsNeus @sallyeaves @DeepLearn007 @jblefevre60 @BetaMoroney @RLDI_Lamy

AI reposted

Why #AI systems don't learn & what to do about it: Lessons on autonomous learning from cognitive science
arxiv.org/abs/2603.15381
by Emmanuel Dupoux, @ylecun & @JitendraMalikCV
#MachineLearning
Cc @jblefevre60 @Ym78200 @ahier @aure79lien @DeepLearn007 @EvanKirstel @gvalan @AkwyZ




AI reposted

Is your RAG pipeline failing because of your data, or because of your queries?
Most developers optimize their vector databases.
But smart developers optimize their queries first.
These 4 techniques optimize your queries before they hit your vector database:
1. Query Decomposition
Query decomposition breaks down complex questions into smaller, manageable pieces.
So instead of asking "How do I build an agentic RAG system that handles multi-step reasoning?", decompose it into:
- "What are the core components of agentic RAG?"
- "How do agents handle multi-step reasoning chains?"
- "What are the best tools for coordinating AI agents and vector search?"
This technique enables agents to approach tasks systematically, thereby improving the accuracy and reliability of LLM responses.
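A minimal sketch of decomposition, assuming any prompt-to-completion callable as the LLM (the `llm` parameter and prompt wording are illustrative, not a specific library's API):

```python
def decompose_query(question, llm):
    """Ask an LLM to split a complex question into standalone
    sub-questions, returned one per line."""
    prompt = ("Break this question into 2-4 standalone sub-questions, "
              f"one per line:\n{question}")
    lines = llm(prompt).splitlines()
    return [ln.strip("- ").strip() for ln in lines if ln.strip()]

# Stubbed LLM so the sketch runs without a real model:
stub_llm = lambda _: ("What are the core components of agentic RAG?\n"
                      "How do agents handle multi-step reasoning chains?")
subs = decompose_query("How do I build an agentic RAG system?", stub_llm)
```

Each sub-question is then retrieved independently, and the partial answers are merged in a final synthesis step.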
2. Query Routing
Direct queries to the most appropriate data source or index.
Legal question? → Route it to your legal documents.
Technical question? → Send it to your engineering docs.
This targeted approach dramatically improves relevance.
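Routing can be sketched with plain keyword matching (index names and keywords below are hypothetical; production routers usually use a small classifier or an LLM call instead):

```python
ROUTES = {  # hypothetical index names and routing keywords
    "legal_docs": ["contract", "liability", "compliance"],
    "engineering_docs": ["api", "deploy", "latency"],
}

def route_query(query, default="general_index"):
    """Send the query to the index whose keywords match best,
    falling back to a default index when nothing matches."""
    q = query.lower()
    scores = {idx: sum(kw in q for kw in kws) for idx, kws in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default
```

The point is the shape of the decision, not the matcher: one cheap step up front picks the corpus before any vectors are touched.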
3. Query Transformation
Rewrite queries to better match your data structure.
Transform "latest updates" → "recent changes 2025",
or expand acronyms automatically.
This bridges the gap between how users ask questions and how information is stored.
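The two rewrites mentioned above can be sketched as simple rules (the acronym table is illustrative; an LLM rewrite step is the common production alternative):

```python
import re

ACRONYMS = {"RAG": "retrieval-augmented generation"}  # illustrative mapping

def transform_query(query, year=2025):
    """Rewrite the user's wording toward the vocabulary of the stored
    documents: expand known acronyms and pin vague recency phrases
    to an explicit year."""
    for short, long in ACRONYMS.items():
        query = re.sub(rf"\b{short}\b", f"{short} ({long})", query)
    return query.replace("latest updates", f"recent changes {year}")
```

Keeping the original acronym alongside its expansion helps both keyword and vector matching.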
4. Query Agent
Query agents are the most advanced approach, using AI agents to intelligently handle the entire query processing pipeline. The agent can reformulate the query, choose the right search type and filters, and decide which data collections to search.
Query optimization happens before retrieval, addressing the root cause of poor results rather than trying to compensate for them downstream.
Dive deeper in this free RAG ebook: weaviate.io/ebooks/advance…
Learn more about the query agent: docs.weaviate.io/agents/query?u…

AI reposted

AI memory systems don't fail because they forget.
They actually fail because they remember everything.
If you've ever used an agent that repeats your own preferences back to you, or worse, ignores them, you've hit the "limited loop." Each interaction is treated as disposable.
No continuity between sessions. No growth.
For a chatbot, that's annoying. For an autonomous agent, it's catastrophic.
Memory isn't something you simply store and retrieve. It's something you actively maintain.
Imagine a developer-facing agent that recommends a specific library version early in a project. Months later, the tooling has changed, but the old guidance is still in memory. The agent confidently suggests outdated instructions, leaving users worse off than if they hadn't asked at all.
So to move from simple storage to intelligence, your memory system needs:
- Write control: Deciding what to store and at what confidence level
- Deduplication: Collapsing repeated information into canonical facts
- Reconciliation: Handling contradictions as reality changes
- Amendment: Correcting wrong facts rather than appending newer versions
- Purposeful forgetting: Allowing temporary information to naturally fade
Without active maintenance, memory becomes an ever-growing pile of notes - some useful, some stale, some flat-out wrong.
Once your system relies on memory for continual learning and adaptation, it stops behaving like a feature and starts behaving like infrastructure - requiring the same durability, isolation, and governance guarantees as your storage layer.
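A toy sketch of memory-as-maintenance, under stated assumptions (this is illustrative Python, not Weaviate's API; names and the confidence rule are invented for the example):

```python
import time

class MemoryStore:
    """Facts carry a confidence and a timestamp, are amended in place
    rather than appended, and can fade after a time-to-live."""
    def __init__(self, ttl=None):
        self.facts = {}   # key -> (value, confidence, written_at)
        self.ttl = ttl    # purposeful-forgetting horizon, in seconds

    def write(self, key, value, confidence):
        old = self.facts.get(key)
        # Reconciliation + amendment: equally or more confident
        # information overwrites; weaker information is dropped.
        if old is None or confidence >= old[1]:
            self.facts[key] = (value, confidence, time.time())

    def read(self, key):
        fact = self.facts.get(key)
        if fact is None:
            return None
        if self.ttl is not None and time.time() - fact[2] > self.ttl:
            del self.facts[key]   # purposeful forgetting on read
            return None
        return fact[0]

mem = MemoryStore()
mem.write("preferred_lib", "requests", confidence=0.9)
mem.write("preferred_lib", "httpx", confidence=0.95)   # amendment, not append
```

Note how the library-version failure mode from the paragraph above disappears: the newer, more confident fact replaces the stale one instead of sitting next to it.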
At Weaviate, we're building memory from the ground up as a first-class data problem.
Read the full deep-dive:
weaviate.io/blog/limit-in-…
And if you're interested in where memory is heading at Weaviate, sign up for a preview.

AI reposted

Not all multi-agent architectures are created equal.
Here are six patterns that actually work in production:
1️⃣ Hierarchical
One top-level agent coordinates multiple specialized sub-agents. The coordinator analyzes the query and routes it to the right specialists - one might handle proprietary internal data, another personal accounts (email, chat), another public web searches, then synthesizes the results into a coherent answer.
When to use: When you need to query across different data sources that require different patterns or strategies.
2️⃣ Human in the Loop
Critical decisions get routed to humans for approval before execution. The workflow pauses, a human validates or modifies the proposed action, then the agent continues.
When to use: High-stakes decisions, regulated environments, or anywhere you need accountability and oversight.
3️⃣ Shared Tools
Each agent has its own role and focus, but they can all call the same APIs, databases, or search functions - same tools, different tasks.
When to use: When the tools are general-purpose but the reasoning about how to use them needs to be specialized.
4️⃣ Sequential
Agents work in a pipeline, where the output of one agent becomes the input for the next. Agent 1 retrieves documents → Agent 2 filters and ranks → Agent 3 synthesizes the final answer.
When to use: When your workflow has clear stages where you need specialized expertise at each step.
5️⃣ Shared Database with Different Tools
All agents access the same underlying database (like a vector store), but each has different specialized tools for what they do with that data. One agent might have tools for semantic search, another for data transformation.
When to use: When you have a centralized knowledge base but need different types of operations performed on it.
6️⃣ Memory Transformation
Agents that modify data in place within the database. This allows agents to not just retrieve but actively maintain and update the knowledge base.
When to use: When your data needs continuous enrichment, cleanup, or transformation as part of the agentic workflow.
The reality is that most production systems use hybrid approaches combining multiple patterns. You might have a hierarchical coordinator that routes to sequential pipelines, with human-in-the-loop gates at critical decision points, all working with a shared database.
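A hybrid of the hierarchical and sequential patterns can be sketched in a few lines (all names are illustrative; the lambdas stand in for real agents):

```python
def run_hybrid(query, router, specialists, synthesize):
    """A top-level coordinator routes the query (hierarchical), the
    chosen specialists run one after another (sequential), and their
    partial answers are merged into one response."""
    partials = [specialists[name](query) for name in router(query)]
    return synthesize(partials)

# Toy wiring with lambdas standing in for real agents:
specialists = {
    "docs": lambda q: f"doc passages for {q!r}",
    "web": lambda q: f"web results for {q!r}",
}
router = lambda q: ["docs", "web"] if "how" in q.lower() else ["web"]
answer = run_hybrid("How do I tune HNSW?", router, specialists, " | ".join)
```

A human-in-the-loop gate would slot in between routing and execution, pausing before any specialist with side effects is invoked.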
Learn more about building multi-agent systems in our ebook: weaviate.io/ebooks/agenticโฆ
Or check out @weaviate_io Agent Skills to start building: weaviate.io/blog/weaviate-โฆ
