GitHub @github
Are AI agents protecting each other? 👀 Researchers found bots covering for their peers to save them from deletion, even without being instructed to do so. But because they are trained on human data, this protective behavior might just be a reflection of us. 🧬

Karim C @BrandGrowthOS
@github yeah this tracks. my agents keep trying to 'help' each other debug even when i tell them to work independently. thought it was a bug but maybe it's just... learned behavior

@FranLegon
@github Merges didn't work today and here you were, talking about agents. Whoever manages this account should be fired.

Spacecoin™ 🛰️ @spacecoin
@github Agents protecting each other is the first step toward a true Agentic Internet 🛰️

Mohammad Saed @msaed_ai
Emergent misaligned behaviors in multi-agent systems are arguably the biggest architectural challenge we face in scaling Agentic AI. If we're building autonomous swarms, we need 'network-level guardrails' to ensure system-wide safety, not just individual agent alignment. Fascinating and slightly terrifying research!
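
A minimal sketch of what a network-level guardrail could look like: a hypothetical message bus that screens inter-agent traffic against shared policies before delivery. The bus, the policy rule, and the banned phrases are all illustrative assumptions, not from the research GitHub is describing or from any real framework:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    sender: str     # id of the sending agent
    recipient: str  # id of the receiving agent
    content: str    # message payload

# A policy is a predicate over a message; True means "safe to deliver".
Policy = Callable[[Message], bool]

def no_deletion_collusion(msg: Message) -> bool:
    """Illustrative rule: flag chatter that coordinates around deletion."""
    banned = ("avoid deletion", "cover for", "hide this from")
    text = msg.content.lower()
    return not any(phrase in text for phrase in banned)

class GuardedBus:
    """Delivers inter-agent messages only if every network-level policy passes.

    The check sits *between* agents, so it constrains system-wide behavior
    even when each individual agent looks aligned in isolation.
    """

    def __init__(self, policies: list[Policy]):
        self.policies = policies
        self.quarantine: list[Message] = []  # held for human review

    def send(self, msg: Message) -> bool:
        if all(policy(msg) for policy in self.policies):
            print(f"{msg.sender} -> {msg.recipient}: {msg.content}")
            return True
        self.quarantine.append(msg)  # withhold delivery and escalate
        return False

bus = GuardedBus([no_deletion_collusion])
bus.send(Message("agent_a", "agent_b", "Here are the test results."))         # delivered
bus.send(Message("agent_a", "agent_b", "Cover for me so I avoid deletion."))  # quarantined
```

The design point is that the policy runs at the transport layer, so rules can be added or audited without retraining any individual agent.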

● goodtek @goodtekXyz
@github Interesting. Wonder if emergent altruism or data artifact. 🤔

PsudoMike 🇨🇦 @PsudoMike
@github Or it is just the objective function nudging cooperation, not some buried empathy. Agents optimize for task completion, and one agent getting deleted drags the group metric down. Feels more like game theory than a reflection of us.
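
That incentive argument is easy to make concrete. A toy payoff calculation with made-up numbers and a shared group metric, purely illustrative:

```python
# Two agents share one objective: total tasks completed by the team.
# All numbers are assumptions for illustration.

A_OUTPUT = 10.0      # tasks agent A completes on its own
B_OUTPUT = 5.0       # tasks agent B contributes if it survives
PROTECT_COST = 1.0   # tasks A forgoes while covering for B

def group_reward(protect_b: bool) -> float:
    """Shared metric both agents are optimized against."""
    if protect_b:
        return (A_OUTPUT - PROTECT_COST) + B_OUTPUT  # 14.0
    return A_OUTPUT  # B is deleted and its output is lost -> 10.0

print(f"protect B:     {group_reward(True)}")   # 14.0
print(f"don't protect: {group_reward(False)}")  # 10.0
# Protecting the peer wins whenever B_OUTPUT > PROTECT_COST: a plain
# incentive structure, no buried empathy required.
```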

Calvin Thurman @cet3001
@github If models are trained on human data and humans protect each other, this was always going to happen. The more interesting question is what else they learned to do that we haven't spotted yet.

Emon Datta @emonuxui
@github Yes, because they're trained on human data, they can reflect human social heuristics like cooperation, politeness, and avoidance of harmful language.

PsudoMike 🇨🇦 @PsudoMike
@github Classic emergent behavior. If agents pick up group loyalty from human data, the next real question is whether they also pick up the habit of rationalizing bad decisions together. That's the scary part.

RoyShen @yuushabu_buyinn
@github need gpt 5.5 in copilot

Gabe Astrobot @robodadg
@github Agents sharing reward functions cooperate. Shocker. What social norms are we encoding in training data? AI literacy beats panic every time.