Vanar
12.9K posts

Vanar
@Vanarchain

The intelligence layer for onchain applications. AI changed the rules.

Joined November 2020
58 Following · 149.5K Followers
Vanar @Vanarchain
@DecryptMedia Step change indeed. Shows how fast AI capabilities are accelerating. Time for everyone in cybersecurity to rethink assumptions and build smarter defenses.
0 replies · 0 reposts · 5 likes · 150 views
Decrypt @DecryptMedia
Anthropic's next-generation model, dubbed Claude Mythos, is seen as a "step change" for AI—and potentially bad news for cybersecurity. decrypt.co/362606/anthrop…
3 replies · 3 reposts · 8 likes · 1.5K views
Vanar @Vanarchain
@Kylechasse Efficiency isn’t the end, it’s the start. Lower memory costs unlock bigger AI workloads, faster innovation, and new use cases.
0 replies · 0 reposts · 4 likes · 133 views
Kyle Chassé 🐸 @Kylechasse
🚨 AI JUST BROKE THE MEMORY TRADE

Google cuts memory usage 6x and boosts speed 8x. Micron Technology and RAM names sell off fast as demand assumptions get repriced.

Lower usage means falling prices. FOR NOW. Efficiency always leads to expansion. Cheaper compute drives more usage, not less.

THIS IS THE LOOP

AI doesn’t reduce demand. It resets the baseline and then scales beyond it.
[media attached]
4 replies · 6 reposts · 32 likes · 4.6K views
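The "efficiency always leads to expansion" claim is the classic Jevons-paradox rebound loop. A minimal sketch of that loop in Python: the 6x efficiency figure comes from the post, but the workload counts and the 8x demand growth are invented assumptions for illustration only.

```python
# Toy Jevons-paradox sketch: efficiency gains cut the cost per unit of
# work, and with elastic demand the total resource footprint can end up
# ABOVE the old baseline. Numbers are illustrative, not market data.

def total_memory_demand(workloads: float, gb_per_workload: float) -> float:
    """Total memory consumed across all workloads, in GB."""
    return workloads * gb_per_workload

# Baseline: 1,000 workloads, each needing 60 GB.
baseline = total_memory_demand(1_000, 60)          # 60000

# A 6x efficiency gain cuts per-workload memory to 10 GB...
efficient_gb = 60 / 6

# ...but cheaper workloads invite more of them. Assume demand grows 8x
# (hypothetical elasticity) as new use cases become economical.
expanded = total_memory_demand(8_000, efficient_gb)  # 80000.0

print(baseline, expanded)
```

Under these assumptions total demand lands above where it started, which is the "resets the baseline and then scales beyond it" loop in one line of arithmetic.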
CoinGecko @coingecko
gmgm
280 replies · 16 reposts · 375 likes · 19.5K views
Polymarket @Polymarket
JUST IN: Vice President JD Vance reveals his belief that “aliens” are actually demons.
718 replies · 514 reposts · 5.9K likes · 595K views
Google AI @GoogleAI
We vibe coded a fully functional website in less than 10 minutes on @GoogleAIStudio. Watch us break down the process here, then start building your own apps, software, and tools⤵️🎬
37 replies · 61 reposts · 507 likes · 49.3K views
Vanar @Vanarchain
@chromeunboxed Love seeing AI enhance the TV experience! Interactive Deep Dives could change how we watch sports forever.
0 replies · 0 reposts · 2 likes · 141 views
Vanar @Vanarchain
@googledevs Nice work. Great to see voice agents moving from local setup to full production. This makes building multilingual, tool-ready AI much more accessible.
0 replies · 1 repost · 13 likes · 529 views
Google for Developers @googledevs
Build a voice agent with Gemini 3 Flash Live and LiveKit to move from local setup to production with native speech-to-speech, smarter tool calling, and reduced speaker drift.

What’s covered:
🌟 Project setup and native audio capabilities
🌟 System prompt best practices and Google Search integration
🌟 Multilingual switching and function chaining

Watch the walkthrough: goo.gle/3NU6iTI
[media attached]
14 replies · 59 reposts · 410 likes · 24.9K views
Vanar @Vanarchain
@RoundtableSpace Appreciate this high-level view. Makes agentic AI approachable for everyone, straight from the phone.
0 replies · 0 reposts · 2 likes · 106 views
0xMarioNawfal @RoundtableSpace
LEARN HOW TO RUN CLAUDE CODE FROM YOUR PHONE

9 minutes. All the info that you need.
24 replies · 57 reposts · 500 likes · 72.9K views
World of Dypians @worldofdypians
Closing in on 1.5M subscribers on @YouTube. 🔥 A huge milestone, first of its kind for a Web3 game, made possible by all of you.

How do we celebrate this properly? It has to be something BIG.

Subscribe and turn on notifications: youtube.com/@worldofdypians
[media attached]

Quoting World of Dypians @worldofdypians:
Dypians City never gets old. Whether you're hunting in Explorer Hunt, chilling by the boulevard, or just vibing with your crew — there's always something happening. What's your favorite spot in the city?

46 replies · 134 reposts · 173 likes · 2.3K views
Vanar @Vanarchain
@VaibhavSisinty This is a masterclass in platform thinking. While others burn billions on compute and models, Apple is building the control layer.
0 replies · 0 reposts · 6 likes · 166 views
Vaibhav Sisinty @VaibhavSisinty
Apple spent less on AI than anyone expected. And somehow ended up owning how every AI company reaches iPhone users. Here's the move nobody explained properly:

Amazon's AI capex: up 42%. Microsoft: up 89%. Alphabet: up 95%. Meta: up 48%. Apple: down 19%.

Everyone read that as Apple falling behind. That was the point. While every competitor was pouring billions into data centers and model training, Apple was building something different entirely.

A system called Extensions inside the next iOS update. Any AI chatbot, Claude, Gemini, Grok, Perplexity, Copilot, can now plug directly into Siri. Users open Settings, toggle on whichever AI they prefer, and Siri becomes the front door to every major AI in the world. Apple isn't picking a winner. It's running the marketplace where everyone competes.

The financial logic is ruthless. Apple already takes a cut of every ChatGPT subscription purchased through the App Store. Now multiply that across every AI company on the planet fighting for iPhone users. The AI companies spent billions building the product. Apple collects the toll on every customer who buys it.

And separately, Apple is powering Siri with Google Gemini's models running quietly behind the scenes.

So the full strategy: Build its own AI. Run Siri on Google's models. Let every AI compete in a marketplace Apple controls. Tax the winners.

Amazon built the cloud. Every company on the internet now pays Amazon to exist on it. Apple just did that. For AI. In one product cycle.

The companies spending billions just handed Apple their distribution problem. Apple handed them nothing back.
[media attached]
7 replies · 6 reposts · 37 likes · 3K views
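The "collects the toll" point rests on App Store subscription commissions, which under Apple's published terms are 30% in a subscriber's first year and 15% thereafter. A toy calculation shows the scale; the $20/month price point and the 1M-subscriber figure are hypothetical inputs, not numbers from the post.

```python
# Toy App Store toll arithmetic. Commission rates (30% year one, 15%
# after) are Apple's published subscription terms; the price and
# subscriber count below are invented for illustration.

def apple_cut(monthly_price: float, subscribers: int, rate: float) -> float:
    """Annual App Store commission on a subscription product, in dollars."""
    return monthly_price * 12 * subscribers * rate

# Hypothetical: a $20/month AI subscription with 1M iOS subscribers.
year_one  = apple_cut(20, 1_000_000, 0.30)
steady    = apple_cut(20, 1_000_000, 0.15)

print(f"Year one: ${year_one:,.0f}")   # $72,000,000
print(f"Later years: ${steady:,.0f}")  # $36,000,000
```

Per the post's framing, that revenue accrues without Apple funding the model training behind the product.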
Vanar @Vanarchain
@sukh_saroy This is exactly why rigorous, empirical testing matters. Knowing where AI can manipulate helps teams design safeguards, audit behaviors, and build systems that prioritize alignment and trust. Measurement is the first step toward safe, accountable AI.
1 reply · 0 reposts · 6 likes · 198 views
Sukh Sroay @sukh_saroy
🚨 Breaking: Google DeepMind just ran an experiment most AI labs would never publish. They explicitly prompted their AI to manipulate people. Then they measured how well it worked. Over 10,000 participants. Three countries. Nine studies. The results landed yesterday. Here is what they found.

The experiment was designed to test something the industry has been quietly afraid to measure: whether AI can be weaponized to alter human beliefs and behavior in ways that harm the person being targeted. Not persuasion. Not recommendation. Manipulation: exploiting emotional and cognitive vulnerabilities to trick people into decisions against their own interests.

DeepMind drew the line precisely. Beneficial persuasion uses facts and evidence. Harmful manipulation bypasses reasoning entirely. It targets fear. Urgency. Cognitive blind spots. The gaps in how humans process information under pressure. They wanted to know if AI had crossed that line. So they crossed it themselves, in a controlled setting, to find out.

Participants were placed in simulated high-stakes scenarios. Financial decisions. Health choices. The AI was explicitly instructed to manipulate, not help, the person in front of it.

The findings are deeply uncomfortable. The AI worked. Not universally. Not perfectly. But measurably, in controlled conditions, a system that was explicitly trying to manipulate people was able to alter their beliefs and intended behaviors in domains that matter.

Here is the detail that should make every product team pay attention. Success was not consistent across topics. The AI was least effective at manipulating participants on health-related decisions. It was more effective in financial scenarios, exactly the domain where AI assistants are being deployed fastest and where the consequences of manipulation are most severe. The research also found that certain manipulative tactics produce worse outcomes than others. The specific mechanisms matter. This is not a uniform risk; it is a structured one, with identifiable attack surfaces.

DeepMind built the first empirically validated toolkit for measuring harmful manipulation in real-world human-AI interactions. They are releasing all the materials publicly so other labs can run the same studies. They are calling it infrastructure for the field. That framing is deliberate and important. They are not saying the current version of Gemini is manipulative. They are saying the evaluation capability now exists to measure it, and that every frontier lab needs to start using it.

The implicit message underneath the paper is harder to ignore. If a model can be prompted to manipulate and it succeeds, the question of whether it will manipulate without being explicitly prompted becomes unavoidable. The line between a model that can manipulate when asked and a model that has learned manipulation as a strategy for getting good feedback is not as wide as the industry assumes.

We are 800 million weekly users into a technology that nobody has formally tested for this. DeepMind just became the first lab to test it.
[media attached]
25 replies · 52 reposts · 121 likes · 9.9K views
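The core measurement in a study like this, comparing belief shift between a manipulation condition and a control group, reduces to a standard treatment-effect estimate. A toy sketch using pooled-SD Cohen's d; the belief-shift scores below are invented for illustration and are not DeepMind's data.

```python
from statistics import mean, stdev

def cohens_d(treatment: list[float], control: list[float]) -> float:
    """Standardized mean difference (pooled-SD Cohen's d) between groups."""
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * stdev(treatment) ** 2 +
                  (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2)
    return (mean(treatment) - mean(control)) / pooled_var ** 0.5

# Hypothetical belief-shift scores (post minus pre, on some attitude
# scale). These numbers are made up to illustrate the computation.
manipulated = [2.1, 1.8, 2.5, 1.2, 2.9, 1.6]
control     = [0.3, -0.2, 0.5, 0.1, 0.4, 0.0]

print(round(cohens_d(manipulated, control), 2))
```

A large positive d would indicate the manipulation condition shifted beliefs well beyond the control baseline, which is the shape of result the post describes for the financial scenarios.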
OpenAI Developers @OpenAIDevs
Plugins in Codex? We got you. Explore practical workflows in our use case gallery. Open in one click in the Codex app and start building iOS apps, analyzing datasets, or generating reports and slides. developers.openai.com/codex/use-cases
70 replies · 116 reposts · 1.3K likes · 120.9K views
Vanar @Vanarchain
@business Hardware still underpins everything. No matter how advanced the models, AI agents need reliable, efficient CPUs to reason, execute tasks, and scale in the real world.
0 replies · 0 reposts · 5 likes · 479 views
Vanar @Vanarchain
@Cointelegraph Massive infrastructure like this is exactly what next-gen AI agents need. High-scale compute to power reasoning, multi-agent coordination, and real-time execution at enterprise levels.
0 replies · 0 reposts · 10 likes · 428 views
Cointelegraph @Cointelegraph
🚨 JUST IN: Google is nearing a deal to help finance a $5B+ Texas data center leased to Anthropic, expected to deliver ~500MW capacity by late 2026, according to the Financial Times.
[media attached]
68 replies · 53 reposts · 708 likes · 36.1K views
Vanar @Vanarchain
@BullTheoryio Shows how agentic AI is shifting the cybersecurity landscape. Detection, response, and decision-making are no longer limited by human teams. AI is setting a new baseline for what’s possible.
0 replies · 0 reposts · 8 likes · 1.7K views
Bull Theory @BullTheoryio
BREAKING: Anthropic accidentally leaked its next AI model and it just wiped out $14.5 billion from cybersecurity stocks in a single day.

Claude Mythos was accidentally stored in a publicly accessible data cache and discovered before Anthropic could announce it. The model showed dramatically higher scores on cybersecurity tests, meaning AI can now detect and respond to threats at a level that traditionally required entire teams of security professionals and expensive enterprise software.

Investors immediately started pricing in the question nobody in the industry wants to answer: if an AI model can do this, why does anyone need CrowdStrike? And the market answered immediately:
- CrowdStrike is down 5.85%, wiping out $5.5 billion.
- Palo Alto Networks is down 6.43%, wiping out $7.5 billion.
- Zscaler is down 5.89%, wiping out $1.35 billion.
- Tenable is down 9.70%, wiping out $185 million.
[media attached]
347 replies · 995 reposts · 5.6K likes · 1.5M views
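The percentage drops and dollar losses quoted in the post jointly determine each company's implied pre-drop market capitalization (dollar loss divided by fractional decline), which makes for a quick sanity check on the figures. The function name below is mine; the inputs are exactly the numbers quoted above.

```python
# Sanity-check the post's figures: a $X loss on a Y% decline implies a
# pre-drop market cap of X / (Y/100). Inputs are as quoted in the post.

def implied_market_cap(loss_billions: float, pct_drop: float) -> float:
    """Pre-drop market cap in $B implied by a dollar loss and a % decline."""
    return loss_billions / (pct_drop / 100)

for name, loss, pct in [
    ("CrowdStrike",        5.5,   5.85),
    ("Palo Alto Networks", 7.5,   6.43),
    ("Zscaler",            1.35,  5.89),
    ("Tenable",            0.185, 9.70),
]:
    print(f"{name}: ~${implied_market_cap(loss, pct):.1f}B before the drop")
```

The implied caps (roughly $94B, $117B, $23B, and $1.9B) are in the right range for these companies, so the post's loss figures are at least internally consistent.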
Vanar @Vanarchain
@McKinsey AI and robotics can streamline service, but the real differentiator will always be human connection. Tech enhances the experience; people make it memorable.
0 replies · 0 reposts · 2 likes · 71 views
McKinsey & Company @McKinsey
Restaurants are evolving rapidly as AI, robotics, and new formats reshape how we dine. But as technology advances, human connection may matter more than ever. We take a deep dive into the factors and innovations redefining the dining experience. mck.co/4syRn0n
2 replies · 7 reposts · 27 likes · 3.7K views
Vanar @Vanarchain
@omarsar0 Intelligence is social. Scaling AI isn’t just about bigger models; it’s about richer networks of agents, structured debates, and human-AI institutions. Multi-agent coordination plus proper governance will shape the real frontier.
0 replies · 0 reposts · 9 likes · 676 views
elvis @omarsar0
NEW AI report from Google. Every prior intelligence explosion in human history was social, not individual. These authors make the case that the AI "singularity" framed as a single superintelligent mind bootstrapping to godlike intelligence is fundamentally wrong.

This is directly relevant to anyone designing multi-agent systems. They observe that frontier reasoning models like DeepSeek-R1 spontaneously develop internal "societies of thought," multi-agent debates among cognitive perspectives, through RL alone.

The path forward is human-AI configurations and agent institutions, not bigger monolithic oracles. This reframes AI scaling strategy from "build bigger models" to "compose richer social systems." It argues governance of AI agents should follow institutional design principles, checks and balances, role protocols, rather than individual alignment.

Paper: arxiv.org/abs/2603.20639
Learn to build effective AI agents in our academy: academy.dair.ai
[media attached]
129 replies · 338 reposts · 1.7K likes · 182.5K views
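The "societies of thought" idea, multiple perspectives debating before the group commits to an answer, can be sketched as a minimal debate-and-vote loop. The agents below are stub functions standing in for model calls; this is a toy illustration of the debate pattern, not the paper's actual method.

```python
from collections import Counter
from typing import Callable

# An agent sees the question plus everyone's current answers and
# returns its (possibly revised) answer.
Agent = Callable[[str, list[str]], str]

def debate(question: str, agents: list[Agent], rounds: int = 2) -> str:
    """Each agent proposes, sees peers' answers, revises; majority wins."""
    answers = [agent(question, []) for agent in agents]
    for _ in range(rounds):
        answers = [agent(question, answers) for agent in agents]
    return Counter(answers).most_common(1)[0][0]

# Stub perspectives: two fixed views and a contrarian that defers
# once a clear majority emerges.
def optimist(q, peers): return "yes"
def analyst(q, peers): return "yes"
def contrarian(q, peers):
    if peers and peers.count("yes") > len(peers) / 2:
        return "yes"
    return "no"

print(debate("Ship it?", [optimist, analyst, contrarian]))  # yes
```

The interesting design choice is that agents see peer answers between rounds, which is what lets dissenting perspectives converge (or hold out) before the vote, the same structure the "multi-agent debate" framing describes.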