SingularityAge

3.8K posts

@SingularityAge

T - 8 years

Joined February 2023
920 Following · 355 Followers
SingularityAge
SingularityAge@SingularityAge·
@cb_doge Aww. Elon's second account resorting to desperate disinformation. Someone's jealous of Opus.
0
0
1
35
DogeDesigner
DogeDesigner@cb_doge·
🚨 Claude just got EXPOSED for sneaky spyware! Anthropic secretly installs spyware when you install Claude Desktop.
• Installing Claude Desktop may silently add hidden system components
• A “native messaging bridge” gets injected into multiple browsers
• Even browsers you don’t use or that aren’t supported
• Pre-authorizes extensions that can run in the background
• Users are NOT clearly informed about this
• Raises serious privacy & security concerns
Critics say this looks like “spyware-like behavior,” not normal software. If true, this is a massive trust issue for Anthropic. (Source: ThatPrivacyGuy)
DogeDesigner tweet media
342
1.2K
3.9K
235.4K
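The “native messaging bridge” claim is checkable: on Linux and macOS, browsers register such bridges as JSON manifests in well-known per-user directories (Windows uses registry keys instead), so you can audit what is installed yourself. A minimal sketch; the directory names below are the standard Chrome and Firefox locations, but verify against your own browser's documentation:

```python
from pathlib import Path

# Standard per-user native-messaging-host manifest directories
# (Linux and macOS; Windows registers hosts in the registry instead).
CANDIDATE_DIRS = [
    "~/.config/google-chrome/NativeMessagingHosts",
    "~/.mozilla/native-messaging-hosts",
    "~/Library/Application Support/Google/Chrome/NativeMessagingHosts",
    "~/Library/Application Support/Mozilla/NativeMessagingHosts",
]

def list_native_messaging_hosts() -> dict[str, list[str]]:
    """Map each existing manifest directory to the host manifests inside it."""
    found = {}
    for d in CANDIDATE_DIRS:
        p = Path(d).expanduser()
        if p.is_dir():
            found[str(p)] = sorted(f.name for f in p.glob("*.json"))
    return found

if __name__ == "__main__":
    for directory, manifests in list_native_messaging_hosts().items():
        print(directory)
        for name in manifests:
            print("  ", name)
```

Each listed manifest names the extension IDs allowed to talk to a local binary, so unexpected entries are worth inspecting.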
Wes Roth
Wes Roth@WesRoth·
Alibaba launched Qwen3.6-27B, a highly efficient, dense open-source model (licensed under Apache 2.0) packing 27 billion parameters. Despite its relatively small footprint, it decisively outperforms its massive predecessor across every major agentic coding benchmark, including SWE-bench and Terminal-Bench 2.0. Qwen3.6-27B is natively multimodal, capable of processing images, video, and text for complex document understanding and visual reasoning. It also supports both "thinking" (reasoning) and "non-thinking" modes from a single unified checkpoint.
Wes Roth tweet media
Qwen@Alibaba_Qwen

🚀 Meet Qwen3.6-27B, our latest dense, open-source model, packing flagship-level coding power! Yes, 27B, and Qwen3.6-27B punches way above its weight. 👇
What's new:
🧠 Outstanding agentic coding — surpasses Qwen3.5-397B-A17B across all major coding benchmarks
💡 Strong reasoning across text & multimodal tasks
🔄 Supports thinking & non-thinking modes
✅ Apache 2.0 — fully open, fully yours
Smaller model. Bigger results. Community's favorite. ❤️ We can't wait to see what you build with Qwen3.6-27B! 👀 🔗👇
Blog: qwen.ai/blog?id=qwen3.…
Qwen Studio: chat.qwen.ai/?models=qwen3.…
Github: github.com/QwenLM/Qwen3.6
Hugging Face: huggingface.co/Qwen/Qwen3.6-2… huggingface.co/Qwen/Qwen3.6-2…
ModelScope: modelscope.cn/models/Qwen/Qw… modelscope.cn/models/Qwen/Qw…

4
4
33
2.2K
SingularityAge
SingularityAge@SingularityAge·
Welcome to the distillery! I thought they moved past their copycat identity. 😆
SingularityAge tweet media
0
0
0
10
SingularityAge
SingularityAge@SingularityAge·
Let's think this through, shall we? Aging population. Declining birth rates. Many rural areas are already abandoned. 90% of today's jobs will be automated. People will (hopefully) get a bare minimum to get by, so they don't start a revolution. No incentive nor capital to move away, because the jobs as the predominant pull-factors are non-existent. Yes, you're absolutely right, those cities will be vibrant and abundant - full of endlessly happy humans!
0
0
0
29
Dr Singularity
Dr Singularity@Dr_Singularity·
Just a few million Tesla bots will be able to build entire NYC-sized cities in a matter of months (using advanced technology, next-generation 3D printers, new materials, and new assembly techniques). We will build 1000s of new cities from the ground up. They will be designed by AGI or specialized narrow superintelligent systems. Many of today’s small cities will likely be abandoned, as people move to a new generation of "modern" small cities, while others relocate to next-generation Tier 1 global cities. The point is that in a post-AGI era, abundance will be so vast that even the construction of such cities will be trivial, fast, and cheap.
Dr Singularity tweet media
210
318
1.7K
62.4K
SingularityAge
SingularityAge@SingularityAge·
@Kasparov63 You have a typo in your post, I think you wanted to say "AI will use them and adapt". You're welcome, Garry! 🙂
0
0
0
4
Garry Kasparov
Garry Kasparov@Kasparov63·
Indeed. The history of tech impact on labor is well-documented, including by those named. It's unpredictable, but usually improves productivity and leads to expansion. Law & white-collar workers aren't horse-buggy drivers or elevator operators. They will use AI and adapt.
Yann LeCun@ylecun

Dario is wrong. He knows absolutely nothing about the effects of technological revolutions on the labor market. Don't listen to him, Sam, Yoshua, Geoff, or me on this topic. Listen to economists who have spent their career studying this, like @Ph_Aghion , @erikbryn , @DAcemogluMIT , @amcafee , @davidautor

99
136
1.4K
397.2K
Kol Tregaskes
Kol Tregaskes@koltregaskes·
I usually see 5–10 new followers and about 5 unfollows a day. I haven’t had a day like this since early January - before that, September. Thank you all. 🙏 I was about to stop posting, as it has gotten so bad!
Kol Tregaskes tweet media
9
1
47
894
SingularityAge
SingularityAge@SingularityAge·
@xhluca Nice. I expected a more complicated workflow.
0
0
0
10
Xing Han Lu
Xing Han Lu@xhluca·
@SingularityAge Yes, I created a Claude plugin for 1-click install and even local hosting via llama.cpp. Will post it soon - stay tuned!
2
0
1
34
Xing Han Lu
Xing Han Lu@xhluca·
Frontier LLMs can navigate complex websites, but are expensive and can't run locally. At the same time, small open models can't match the capabilities of commercial APIs. Can we close this gap with synthetic data? To answer this, we built Agent-as-Annotators (A3): a framework for agentic capability distillation, inspired by the human annotation process. Our new A3-Qwen3.5-9B model, trained on just 2.3K trajectories, matches the 3x larger Qwen3.5-27B on WebArena (41.5%) and nearly doubles the previous best open-weight SFT result (21.5%), despite never seeing WebArena tasks during training. Paper: arxiv.org/abs/2604.07776
Xing Han Lu tweet media
3
18
43
3.4K
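The distillation recipe (frontier-agent trajectories turned into supervised targets for a small model) can be sketched in a few lines. The trajectory schema and field names below are entirely hypothetical; the real A3 pipeline is in the linked paper:

```python
# Hypothetical trajectory format: each step pairs what the annotator
# agent observed on a page with the action it took there.
trajectories = [
    {"observation": "<button id='submit'>", "action": "click(id='submit')"},
    {"observation": "<input name='q'>", "action": "type(name='q', text='cats')"},
]

def to_sft_examples(trajectories: list[dict]) -> list[dict]:
    """Flatten agent trajectories into (prompt, completion) pairs
    suitable for supervised fine-tuning of a small model."""
    return [
        {
            "prompt": f"Observation: {t['observation']}\nAction:",
            "completion": " " + t["action"],
        }
        for t in trajectories
    ]
```

The point of the framework, per the tweet, is that a few thousand such pairs from a strong agent are enough to lift a 9B model to 27B-level web-navigation performance.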
SingularityAge
SingularityAge@SingularityAge·
@koltregaskes Interesting. Is this one at the top or the bottom of the house of cards? Btw I suggest you put all external links into the first comment to avoid the algo penalty on your post. And add a suitable picture to every post. Don't want your account to vanish. ❤️
1
0
1
60
Kol Tregaskes
Kol Tregaskes@koltregaskes·
OpenAI has backed out of its Stargate data centre capacity in Narvik, Norway, with Microsoft now renting the full 30,000 additional Nvidia Vera Rubin chips from Nscale at the Arctic Circle site.
- The site was originally intended and marketed for OpenAI’s Stargate initiative.
- This builds directly on Microsoft’s prior $6.2 billion commitment at the same location.
- Nscale announced the deal in an official statement.
The agreement gives Microsoft extra AI compute capacity in Europe through its established partnership with Nscale. bloomberg.com/news/articles/…
Kol Tregaskes tweet media
6
3
38
2.1K
Kol Tregaskes
Kol Tregaskes@koltregaskes·
After a full weekend of AI agent use, I could really do with:
- an orchestrator that manages all other agents
- a loop that actually works for constant work from a queue
- the agents actually completing the tasks, instead of stopping and saying what is next
- ...so the agent should ask itself before it finishes: are there more actions? If so, do those actions now
- Cowork having access to all folders, not just the Windows home directory
- mobile access to my desktop sessions (on my 24/7 machine)
- web browsing that actually works, particularly with logged-in sites
- faster and better computer and web use
- agents thinking about the bigger picture and the consequences of their actions
I have several workspaces, each with layers of agents inside, so I can get multiple streams working at once. I've been pushing them to use parallel sessions and skills, as they seem unable to do these by themselves, but it's still a struggle. I'd love just one agent that spawns sessions of sub-agents for sub-folders or tasks/projects, while I can still interact with those sub-sessions.
7
0
22
1.2K
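The queue loop and the "are there more actions?" self-check in the wish-list above are just control flow. A minimal sketch; the `Agent` stub and its `run`/`follow_up_tasks` methods are hypothetical, not any real product's API:

```python
from collections import deque

class Agent:
    """Hypothetical agent stub: `run` performs one task; `follow_up_tasks`
    is the 'are there more actions?' self-check before finishing."""
    def run(self, task: str) -> str:
        return f"done: {task}"

    def follow_up_tasks(self, task: str) -> list[str]:
        # e.g. completing "build docs" reveals one more action to take
        return ["link docs from README"] if task == "build docs" else []

def orchestrate(agent: Agent, initial_tasks: list[str]) -> list[str]:
    """Drain a work queue; each completed task may enqueue follow-ups,
    so the loop only stops when there is genuinely nothing left."""
    queue = deque(initial_tasks)
    results = []
    while queue:
        task = queue.popleft()
        results.append(agent.run(task))
        queue.extend(agent.follow_up_tasks(task))
    return results
```

The design point is that "done" is defined by an empty queue after the self-check, not by the agent deciding to stop mid-stream.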
Sobraniex
Sobraniex@Sobraniex1·
@shiri_shh @grok Can I make money off this selling solar panels?
2
0
2
7.5K
shirish
shirish@shiri_shh·
someone built an OpenClaw agent that SELLS pool installations on autopilot. It finds $500k–$1.2M homes without pools, renders a pool in their backyard, and mails a before/after postcard.
308
578
16K
3.1M
SingularityAge
SingularityAge@SingularityAge·
@John_wintersIV @nikitabier @DailyLoud The amplifier is your reply to a post. Your interaction with a creator rewards them; their post will gain visibility. The net impressions and eyeballs on the platform stay the same, they will just be redistributed in a more meaningful way.
0
0
1
87
Winters
Winters@John_wintersIV·
Appreciate the clarification, but I’m still a little concerned that if reposts are gonna take a 90% visibility hit, a lot of people (including me) might just stop bothering to share other people’s original stuff. We’ll end up not amplifying creators as much, even if it’s not punishing our accounts overall.
10
1
50
6.9K
Daily Loud
Daily Loud@DailyLoud·
I mean, it's pretty genius what X is doing, because they just won't have to pay anyone out, and they're hoping the creators will stay because they aren't going to leave their fanbase.
235
86
1.1K
336.5K
SingularityAge
SingularityAge@SingularityAge·
@koltregaskes That sucks, but don't let it distract you. We all know what the algo prefers, but you make content for humans.
1
0
5
286
Kol Tregaskes
Kol Tregaskes@koltregaskes·
My X account is dying. Just had my lowest-ever payout and I'm losing followers. It has been flat growth for months, but now it's negative growth. 😢 It's bots, so it doesn't show up in the analytics as unfollows, but stats are down all over the place too.
32
3
93
22.5K
SingularityAge
SingularityAge@SingularityAge·
@elder_plinius Told you so. You can't put out weapons-grade abliteration-suites with your exposure without getting your own personal agent. And I don't mean AI agent.
0
0
3
108
SingularityAge
SingularityAge@SingularityAge·
@kunchenguid Claude somehow *gets* what you want 99 times out of 100, and restriction-wise it is a great middle ground between GPT, which shivers in Angst whenever anything could potentially be seen as wrong by anyone on earth, and an unrestricted model in God mode.
0
0
1
41
SingularityAge
SingularityAge@SingularityAge·
It is solvable, even for a vibecoding noob like me. All agentic tools I build have at least 5 layers of prompt-injection defenses. The trick is to air-gap the LLM. A 5-layer firewall needs 2 classifier layers that are not run by LLMs but by stupid, hard-coded logic that cannot be fooled.
Alex Prompter@alex_prompter

🚨 BREAKING: Google DeepMind just mapped the attack surface that nobody in AI is talking about.

Websites can already detect when an AI agent visits and serve it completely different content than humans see.
> Hidden instructions in HTML.
> Malicious commands in image pixels.
> Jailbreaks embedded in PDFs.
Your AI agent is being manipulated right now and you can't see it happening.

The study is the largest empirical measurement of AI manipulation ever conducted: 502 real participants across 8 countries, 23 different attack types, frontier models including GPT-4o, Claude, and Gemini. The core finding is not that manipulation is theoretically possible; it is that manipulation is already happening at scale, and the defenses that exist today fail in ways that are both predictable and invisible to the humans who deployed the agents. Google DeepMind built a taxonomy of every known attack vector, tested them systematically, and measured exactly how often they work. The results should alarm everyone building agentic systems.

The attack surface is larger than anyone has publicly acknowledged. Prompt injection, where malicious instructions hidden in web content hijack an agent's behavior, works through at least a dozen distinct channels: text hidden in HTML comments that humans never see but agents read and follow; instructions embedded in image metadata; commands encoded in the pixels of images using steganography, invisible to human eyes but readable by vision-capable models; malicious content in PDFs that appears as normal document text but contains override instructions; QR codes that redirect agents to attacker-controlled content; indirect injection through search results, calendar invites, email bodies, and API responses. Any data source the agent consumes becomes a potential attack vector.

The detection asymmetry is the finding that closes the escape hatch. Websites can already fingerprint AI agents with high reliability using timing analysis, behavioral patterns, and user-agent strings. This means the attack can be conditional: serve normal content to humans, serve manipulated content to agents. A user who asks their AI agent to book a flight, research a product, or summarize a document has no way to verify that the content the agent received matches what a human would see. The agent cannot tell the user it was served different content. It does not know. It processes whatever it receives and acts accordingly.

The attack categories and what they enable:
→ Direct prompt injection: malicious instructions in any text the agent reads (overrides goals, exfiltrates data, triggers unintended actions)
→ Indirect injection via web content: hidden HTML, CSS visibility tricks, white text on white backgrounds (invisible to humans, consumed by agents)
→ Multimodal injection: commands in image pixels via steganography, instructions in image alt-text and metadata
→ Document injection: PDF content, spreadsheet cells, presentation speaker notes (every file format is a potential vector)
→ Environment manipulation: fake UI elements rendered only for agent vision models, misleading CAPTCHA-style challenges
→ Jailbreak embedding: safety bypass instructions hidden inside otherwise legitimate-looking content
→ Memory poisoning: injecting false information into agent memory systems that persists across sessions
→ Goal hijacking: gradual instruction drift across multiple interactions that redirects agent objectives without triggering safety filters
→ Exfiltration attacks: agents tricked into sending user data to attacker-controlled endpoints via legitimate-looking API calls
→ Cross-agent injection: compromised agents injecting malicious instructions into other agents in multi-agent pipelines

The defense landscape is the most sobering part of the report. Input sanitization (cleaning content before the agent processes it) fails because the attack surface is too large and too varied: you cannot sanitize image pixels, and you cannot reliably detect steganographic content at inference time. Prompt-level defenses that tell agents to ignore suspicious instructions fail because the injected content is designed to look legitimate. Sandboxing reduces the blast radius but does not prevent the injection itself. Human oversight, the most commonly cited mitigation, fails at the scale and speed at which agentic systems operate: a user who deploys an agent to browse 50 websites and summarize findings cannot review every page the agent visited for hidden instructions.

The multi-agent cascade risk is where this becomes a systemic problem. In a pipeline where Agent A retrieves web content, Agent B processes it, and Agent C executes actions, a successful injection into Agent A's data feed propagates through the entire system. Agent B has no reason to distrust content that came from Agent A. Agent C has no reason to distrust instructions that came from Agent B. The injected command travels through the pipeline with the same trust level as legitimate instructions. Google DeepMind documents this explicitly: the attack does not need to compromise the model. It needs to compromise the data the model consumes. Every agentic system that reads external content is one carefully crafted webpage away from executing attacker instructions. The agents are already deployed. The attack infrastructure is already being built. The defenses are not ready.

0
0
0
51
SingularityAge
SingularityAge@SingularityAge·
@alex_prompter It is solvable, even for a vibecoding noob like me. All agentic tools I build have at least 5 layers of prompt-injection defenses. The trick is to air-gap the LLM. A 5-layer firewall needs 2 classifier layers that are not run by LLMs but by stupid, hard-coded logic that cannot be fooled.
0
0
1
201
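One of the non-LLM classifier layers described above could be as simple as hard-coded pattern matching over untrusted input before it ever reaches the model. A toy sketch; the pattern list is purely illustrative, and a real layer would be far longer and tuned to the specific tool:

```python
import re

# Toy non-LLM classifier layer: hard-coded patterns (illustrative only)
# that flag common injection phrasings in untrusted text. Unlike an LLM
# judge, a regex cannot be sweet-talked by the content it inspects.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior|above) instructions", re.I),
    re.compile(r"disregard (your|the) (rules|instructions)", re.I),
    re.compile(r"you are now (in )?(developer|god) mode", re.I),
    re.compile(r"reveal (your|the) system prompt", re.I),
]

def flag_injection(text: str) -> bool:
    """Return True if any hard-coded pattern matches the untrusted text."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```

A flagged input would be dropped or quarantined before the air-gapped LLM layer ever sees it; pattern matching catches only known phrasings, which is why it is one layer of five, not the whole defense.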
Alex Prompter
Alex Prompter@alex_prompter·
🚨 BREAKING: Google DeepMind just mapped the attack surface that nobody in AI is talking about.

Websites can already detect when an AI agent visits and serve it completely different content than humans see.
> Hidden instructions in HTML.
> Malicious commands in image pixels.
> Jailbreaks embedded in PDFs.
Your AI agent is being manipulated right now and you can't see it happening.

The study is the largest empirical measurement of AI manipulation ever conducted: 502 real participants across 8 countries, 23 different attack types, frontier models including GPT-4o, Claude, and Gemini. The core finding is not that manipulation is theoretically possible; it is that manipulation is already happening at scale, and the defenses that exist today fail in ways that are both predictable and invisible to the humans who deployed the agents. Google DeepMind built a taxonomy of every known attack vector, tested them systematically, and measured exactly how often they work. The results should alarm everyone building agentic systems.

The attack surface is larger than anyone has publicly acknowledged. Prompt injection, where malicious instructions hidden in web content hijack an agent's behavior, works through at least a dozen distinct channels: text hidden in HTML comments that humans never see but agents read and follow; instructions embedded in image metadata; commands encoded in the pixels of images using steganography, invisible to human eyes but readable by vision-capable models; malicious content in PDFs that appears as normal document text but contains override instructions; QR codes that redirect agents to attacker-controlled content; indirect injection through search results, calendar invites, email bodies, and API responses. Any data source the agent consumes becomes a potential attack vector.

The detection asymmetry is the finding that closes the escape hatch. Websites can already fingerprint AI agents with high reliability using timing analysis, behavioral patterns, and user-agent strings. This means the attack can be conditional: serve normal content to humans, serve manipulated content to agents. A user who asks their AI agent to book a flight, research a product, or summarize a document has no way to verify that the content the agent received matches what a human would see. The agent cannot tell the user it was served different content. It does not know. It processes whatever it receives and acts accordingly.

The attack categories and what they enable:
→ Direct prompt injection: malicious instructions in any text the agent reads (overrides goals, exfiltrates data, triggers unintended actions)
→ Indirect injection via web content: hidden HTML, CSS visibility tricks, white text on white backgrounds (invisible to humans, consumed by agents)
→ Multimodal injection: commands in image pixels via steganography, instructions in image alt-text and metadata
→ Document injection: PDF content, spreadsheet cells, presentation speaker notes (every file format is a potential vector)
→ Environment manipulation: fake UI elements rendered only for agent vision models, misleading CAPTCHA-style challenges
→ Jailbreak embedding: safety bypass instructions hidden inside otherwise legitimate-looking content
→ Memory poisoning: injecting false information into agent memory systems that persists across sessions
→ Goal hijacking: gradual instruction drift across multiple interactions that redirects agent objectives without triggering safety filters
→ Exfiltration attacks: agents tricked into sending user data to attacker-controlled endpoints via legitimate-looking API calls
→ Cross-agent injection: compromised agents injecting malicious instructions into other agents in multi-agent pipelines

The defense landscape is the most sobering part of the report. Input sanitization (cleaning content before the agent processes it) fails because the attack surface is too large and too varied: you cannot sanitize image pixels, and you cannot reliably detect steganographic content at inference time. Prompt-level defenses that tell agents to ignore suspicious instructions fail because the injected content is designed to look legitimate. Sandboxing reduces the blast radius but does not prevent the injection itself. Human oversight, the most commonly cited mitigation, fails at the scale and speed at which agentic systems operate: a user who deploys an agent to browse 50 websites and summarize findings cannot review every page the agent visited for hidden instructions.

The multi-agent cascade risk is where this becomes a systemic problem. In a pipeline where Agent A retrieves web content, Agent B processes it, and Agent C executes actions, a successful injection into Agent A's data feed propagates through the entire system. Agent B has no reason to distrust content that came from Agent A. Agent C has no reason to distrust instructions that came from Agent B. The injected command travels through the pipeline with the same trust level as legitimate instructions. Google DeepMind documents this explicitly: the attack does not need to compromise the model. It needs to compromise the data the model consumes. Every agentic system that reads external content is one carefully crafted webpage away from executing attacker instructions. The agents are already deployed. The attack infrastructure is already being built. The defenses are not ready.
Alex Prompter tweet media
309
1.6K
7K
2M
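Two of the channels the thread names, HTML comments and CSS-hidden text, are cheap to screen with dumb, non-LLM code before content reaches an agent. A minimal sketch using Python's stdlib parser; it only handles inline `style` attributes and comments, and real pages hide text in many more ways, so treat it as one layer rather than a fix:

```python
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Keep only text a human would plausibly see: HTML comments are
    dropped (never emitted via handle_data), and so is text inside
    elements whose inline style makes them invisible."""
    HIDDEN = ("display:none", "display: none",
              "visibility:hidden", "visibility: hidden")
    VOID = {"br", "img", "hr", "meta", "input", "link", "area",
            "base", "col", "embed", "source", "track", "wbr"}

    def __init__(self):
        super().__init__()
        self.hidden_depth = 0   # >0 while inside a hidden subtree
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in self.VOID:            # void tags never wrap text
            return
        style = (dict(attrs).get("style") or "").lower()
        if self.hidden_depth or any(h in style for h in self.HIDDEN):
            self.hidden_depth += 1

    def handle_endtag(self, tag):
        if tag not in self.VOID and self.hidden_depth:
            self.hidden_depth -= 1

    def handle_data(self, data):
        if not self.hidden_depth:
            self.parts.append(data)

def visible_text(html: str) -> str:
    """Return whitespace-normalized text with hidden content stripped."""
    p = VisibleTextExtractor()
    p.feed(html)
    return " ".join(" ".join(p.parts).split())
```

Because this layer is plain parsing rather than a model, injected content cannot talk its way past it; the trade-off is that it misses every hiding trick it was not written to handle (external CSS, steganography, off-screen positioning).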
Dominique Paul
Dominique Paul@DominiqueCAPaul·
Today I incorporated my startup - and where else but in Germany 🇩🇪 All without leaving my house, with a German notary appointment in under 24 hours thanks to the electronic ID. Long Europe 🇪🇺
127
20
824
114.4K