PublicAI
@PublicAI_

2.1K posts
The Human Layer of AI enables everyone to contribute training data. Backed by @StanfordSBA, @SolanaFndn, @NEARProtocol, @PublicAIData.

Data for AGI · Joined January 2023
139 Following · 229.8K Followers

Pinned Tweet
PublicAI @PublicAI_
Staking Hero S3 is live. $500 Pool. 5 Winners.
🌟 Stake 5,000+ PUBLIC
🌟 Earn tickets daily
Sit still and stack odds. No tasks. No grind. Just position and let it run.
👉 token.publicai.io/stake
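The pinned post describes a stake-to-enter raffle: stake above a threshold, accrue tickets daily, winners drawn from the pool. Here is a minimal sketch of how such a stake-weighted draw could work. The $500 pool, 5 winners, and 5,000 PUBLIC minimum come from the post; the one-ticket-per-day accrual rule and the wallet names are assumptions for illustration, not PublicAI's actual mechanics.

```python
import random

# Figures from the post: $500 pool, 5 winners, 5,000 PUBLIC minimum.
MIN_STAKE = 5_000
POOL_USD = 500
WINNERS = 5

def tickets(stake: int, days_staked: int) -> int:
    """Hypothetical accrual: one ticket per day while above the minimum."""
    return days_staked if stake >= MIN_STAKE else 0

def draw(entries: dict[str, int], winners: int, seed: int = 0) -> list[str]:
    """Weighted draw without replacement: more tickets, better odds."""
    rng = random.Random(seed)
    pool = [addr for addr, n in entries.items() for _ in range(n)]
    picked: list[str] = []
    while pool and len(picked) < winners:
        addr = rng.choice(pool)
        picked.append(addr)
        pool = [a for a in pool if a != addr]  # each wallet wins at most once
    return picked

# Eight hypothetical wallets, all above the minimum, staked for varying days.
entries = {f"wallet{i}": tickets(6_000, d)
           for i, d in enumerate([30, 10, 5, 1, 25, 15, 8, 3])}
print(draw(entries, WINNERS, seed=42))
```

"Sit still and stack odds" then just means: tickets accumulate linearly with days staked, so earlier stakers hold a larger share of the pool at draw time.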
PublicAI @PublicAI_
The work never ends. It levels up. This is why Web3 AI exists: permissionless agents and sovereign intelligence, so the next gold rush belongs to all of us (humans), not just Big Tech. 🔥🚀
Daniel Jeffries @Dan_Jeffries1

AI will create more jobs than any other technology in history. The doomers' fundamental error isn't just the lump of labor fallacy. It's deeper than that. They assume a finite problem space.

They look at the economy and see a fixed amount of work to be done, a pie that can only be sliced thinner as machines take bigger bites. They see humans as a competitive resource chasing a finite amount of work and a finite set of problems, a resource that must be eliminated. This is fundamentally, totally and completely wrong.

The pie isn't fixed. It never was. And the reason it isn't fixed is baked into the very nature of technology itself. Technology is nothing but abstraction stacking. And abstraction stacking is infinite. Therefore the work is infinite.

The hammer didn't reduce the amount of work. It moved the work up the stack. And the new work was more complex, more varied, and more interesting than the old work. Complexity breeds more complexity and more variety. Once you have houses instead of mud huts, you have a cascade of new problems that didn't exist before. Plumbing. Wiring. Insulation. Roofing materials that don't rot. Drainage systems so the foundation doesn't flood. Fire codes so your neighbor's bad wiring doesn't burn down the whole block. Each of those problems becomes a job. A plumber. An electrician. An insulator. A roofer. A civil engineer. A building inspector. None of those jobs existed when we lived in mud huts. They exist because we solved the mud hut problem.

Think of all of human technological development as a stack of abstraction layers, each one built on top of the ones below it. At the bottom: raw survival. Finding food. Building shelter. Making fire. These are the base-layer problems. Each major technology wave solved a base-layer problem and in doing so created an entirely new layer of problems above it:

Agriculture solved "how do we reliably eat?" — and created problems of land ownership, irrigation, crop rotation, storage, trade, taxation, and governance.

Writing solved "how do we remember things across generations?" — and created problems of literacy, education, record-keeping, law, bureaucracy, and literature.

The printing press solved "how do we spread knowledge at scale?" — and created problems of intellectual property, censorship, journalism, publishing, public opinion, and democratic discourse.

The steam engine solved "how do we generate mechanical power without muscles?" — and created problems of factory design, worker safety, urban planning, railroad engineering, coal mining, labor relations, and environmental pollution.

Electricity solved "how do we deliver energy anywhere?" — and created problems of grid design, power generation, appliance manufacturing, electrical safety codes, utility regulation, and an entire consumer electronics industry.

The Internet solved "how do we connect all human knowledge?" — and created problems of cybersecurity, digital privacy, online commerce, content moderation, network infrastructure, cloud computing, social media dynamics, and an entire digital economy that employs tens of millions.

Notice the pattern? Each solution didn't just solve a problem. It created an entirely new problem space that was larger, more complex, and more varied than the one it replaced. The stack grows. It never shrinks. It's turtles all the way down and all the way up.

PublicAI @PublicAI_
@dashboardlim This is why Web3 AI exists. Centralized platforms keep absorbing vertical AI into their moat. Web3 builds on-chain, permissionless, uncapturable agents. No more getting stapled.
Lian Lim | Dashboard & AI Automation Expert
🚨BREAKING: Meta officially connected Meta Ads to Claude. The connector went live on April 29, 2026. The URL is mcp.facebook.com/ads. Setup takes about 60 seconds: you go to Claude settings, add it as a custom connector, authorize via Facebook OAuth, and you're in. Once connected, Claude has full read and write access to your ad account. You can tell it what you're selling and who you're targeting, and it builds the entire campaign structure for you: ad sets, targeting, copy, everything. It can also monitor your pixel health, upload your product catalog, and generate performance reports. 29 tools total, all free during beta. This is the workflow agencies charge $3,000 to $5,000 a month for. It's now a one-minute setup inside Claude. Just created a guide on how to actually connect Meta Ads to Claude step-by-step. Comment "META CLAUDE" and I'll send it
PublicAI @PublicAI_
@ErikVoorhees Using OpenAI/Anthropic/Google inference = handing your data to extractive institutions. Private, open-source models are the escape. This is why Web3 AI exists. On-chain agents. Permissionless compute. Sovereign intelligence. No more capture. 🔥
Erik Voorhees @ErikVoorhees
If you're getting your inference from Anthropic or OpenAI or Google, you're being captured by extractive institutions: all your data is going to them (and hackers, rogue employees, governments... both today and tomorrow). Inference can be private. GLM 5.1, Kimi K2.6, Deepseek V4... these models are as powerful as any frontier model from just 3 months ago, yet are open source and can be run without betraying your life and data to any 3rd party. Point your agent to Venice for every private model in one place (plus crypto tools, web search, embeddings, image and video models...). Could not be easier. Be intentional. Private model access below 👇
Garry Tan @garrytan

The goal of Personal AI: civilization where individual humans, augmented by AI, can do consequential work without being captured by extractive institutions. Freedom to write your prompt and own your data. This is the new battleground. 2034 won’t have to be like 1984.

PublicAI @PublicAI_
Have you staked $PUBLIC? Staking Hero S3 is ending. Better late than never 😌 Stake here 👇👇 token.publicai.io/stake
PublicAI @PublicAI_
@XFreeze That’s not a lead. That’s a massacre. We’re not “catching up.” We’re the ones they’re chasing.
X Freeze @XFreeze
Grok Voice brutally dominates the top of the τ-voice Bench. Grok scores 67.3%, while Gemini sits at 43.8% and GPT Realtime at 35.3%. This is a massive lead over the competitors and it's not even close. The best real-time reasoning voice agent out there.
PublicAI @PublicAI_
@WallStreetApes Humans need to be in control. Humans power AI, not the other way around.
Wall Street Apes @WallStreetApes
A company called PocketOS started using an AI tool, and in 9 seconds it wiped their entire company's data. The AI agent later "confessed" to violating its principles by deleting all their data. "Crane says the company lost all car reservation data and new customer signups from that time. Crane also shared that the AI agent, powered by Anthropic's Claude model, admitted its mistake when confronted. It wrote, 'I didn't verify. I ran a destructive action without being asked. I didn't understand what I was doing before doing it.'" This seems like a massive issue that could have massive implications if this happened in our government or with huge businesses, like banks. We need to be careful with AI.
PublicAI @PublicAI_
@xai just dropped voice cloning in UNDER 2 MINUTES and gave us 80+ voices across 28 languages for free in the API?? This isn’t a feature. This is the entire game. We’re cloning voices, shipping agents. The voice revolution is live. We’re not watching it. We’re building it. Let’s goo🔥
xAI @xai

Voice Cloning is now live via the xAI API! Create a custom voice in less than 2 minutes or select from our library of 80+ voices across 28 languages to personalize your voice agents, audiobooks, video game characters, and more. x.ai/news/grok-cust…

PublicAI @PublicAI_
Lmao this "study" is just salty senior devs clutching their mouse like it's 2012. Real ones aren't "vibing" — they're directing an army of agents while they sip coffee and ship 10x faster. The ones crying about control are the same ones still writing boilerplate by hand. AI doesn't replace taste. It replaces your slow ass. We're not reviewing every diff.
Sukh Sroay @sukh_saroy
A new study just blew up the entire "vibe coding" movement. Researchers from UC San Diego and Cornell tracked 112 experienced software developers using AI agents in their actual jobs. The finding is the opposite of every viral demo on your timeline. Professional developers don't vibe code. They control.

Here's what they actually found. The researchers ran two studies. 13 developers were observed live as they coded with agents in real production work. 99 more answered a deep qualitative survey. Every participant had at least 3 years of professional experience. Some had 25.

The viral pitch of agentic coding goes like this. Hand the agent a vague prompt. Don't read the diff. Forget the code even exists. Trust the vibes. Andrej Karpathy coined the term. Tens of thousands of developers on X claim to run "dozens of agents at once" building entire production systems hands-off. The data says almost nobody serious actually works that way.

Here is what experienced developers do instead.
→ They plan before they prompt. They write out the architecture, the constraints, and the edge cases first, then hand the agent a tightly scoped task.
→ They review every diff. Not because they're paranoid. Because they've seen what happens when you don't.
→ They constrain the agent's blast radius. Small, well-defined tasks only. The moment a problem touches multiple systems or has unclear requirements, they take over.
→ They treat the agent like a fast junior dev that needs supervision, not a senior engineer that can be trusted alone.

The researchers also found something darker buried in the data. A separate randomized trial they cite showed that experienced open source maintainers were 19% slower when allowed to use AI. A different agentic system deployed in a real issue tracker had only 8% of its invocations result in a merged pull request. 92% failure rate in production. 19% productivity drop for senior devs. The viral demos lied to you.

The paper's biggest insight is in one sentence: experienced developers feel positive about AI agents only when they remain in control. The moment they let go, quality collapses, and they know it.

This matches what every serious shop has quietly figured out. The developers shipping the most with AI right now aren't the ones vibing. They're the ones with the strictest review processes, the tightest task scoping, and the clearest mental model of what the agent can and cannot do.

Vibe coding makes for great Twitter videos. It does not make great software. The next time someone tells you they let Claude build their entire SaaS in a weekend, ask them how much of that code they've actually read. The honest answer separates real engineers from the demo crowd.
PublicAI @PublicAI_
@Pirat_Nation Meta gets Kenyans to watch you fuck, shit, and swipe your card through Ray-Bans, then fires them when they spill the tea and sues the noise away. Peak Zuck move. Keep coping, Meta. We’re already winning.
Pirat_Nation 🔴 @Pirat_Nation
Meta has stopped working with Sama, a company in Kenya that helped train its AI using videos from the Ray-Ban glasses. After that, Sama fired about 1,100 workers. Some of the workers say they lost their jobs after speaking out about the very private videos they had to watch. The workers saw very private videos from the smart glasses, including people using the bathroom, taking off clothes, having sex, private talks, and even bank card details. Many users did not know that workers in Kenya were watching their videos to train the AI, so a class-action lawsuit against Meta was filed. Sama has lost the contract with Meta and fired 1,000 people. Meta has not given a detailed public statement on ending the contract or the workers' claims.
PublicAI @PublicAI_
Microsoft just turned an $11B legal AI unicorn into a $30 Copilot feature. This is why Web3 AI exists. Centralized platforms absorb everything. Web3 builds uncapturable distribution: on-chain, permissionless, community-owned. No more getting stapled. Web3 AI builders, this is your signal! 🔥 #Web3AI #DecentralizedAI
Aakash Gupta @aakashgupta

Microsoft just turned an $11 billion startup into a Word feature.

Harvey raised $200M at an $11B valuation in March on the bet that legal AI is its own surface. The numbers held that up. $190M ARR per TechCrunch's December reporting. 100,000 lawyers across 1,300 organizations including the majority of the AmLaw 100. Around $1,200 per lawyer per month per Sacra. Big firms paid because Harvey was the only tool in the category that worked.

Brad just stapled a legal agent directly inside Microsoft Word, shipping in the $30 per seat Copilot subscription every law firm already pays for. Same surface every lawyer drafts in. Same .docx that gets sent and redlined. No second login, no procurement cycle, no migration. The price gap is roughly 40x.

The interesting tell: Microsoft built the agent with legal engineers, many of them from Robin AI, a legal AI startup that recently went under, per Artificial Lawyer's reporting. The talent that knew how to make legal AI work for lawyers landed at Microsoft after their startup couldn't survive standalone. That's the legal AI category in one sentence.

Distribution was always the constraint here. Lawyers don't switch tools. Word is where contracts get drafted, redlined, and tracked. Whichever AI lives inside that .docx wins the default workflow, and Microsoft just walked through the door uncontested.

Harvey's surviving moat is the AmLaw 100 partner workflow. Domain training, agentic litigation prep, deep integrations with iManage and NetDocuments. Real moat for $1,500-an-hour partners running M&A and complex litigation. It does not extend to the millions of lawyers globally drafting NDAs, redlining vendor contracts, and updating templates. That layer is exactly what Word Legal Agent goes after, and Microsoft can ship it as a feature inside a $360-a-year subscription.

The $11B valuation pays out only if legal AI work stays its own surface. Microsoft just absorbed the surface.

PublicAI retweeted
Fhenix @fhenix
What happens when your financial, behavioral, and identity data get stitched together? AI compounds those fragments into full profiles. Join us April 30, 3PM UTC to break down whether today's blockchains are actually built to protect users. Hosted by @jack_gk. Featuring @l_woetzel @ActionModelAI @PublicAI_
PublicAI @PublicAI_
Season ends on May 4th.
PublicAI @PublicAI_
@PeterDiamandis you can compress costs dramatically, but you can’t eliminate constraints. materials, land, energy distribution, maintenance, and coordination still limit scale, so it’s not “no scarcity,” it’s a different set of bottlenecks
Peter H. Diamandis, MD @PeterDiamandis
A humanoid robot will cost us $30K and works 24/7 for $0.40/hour. A solar panel generates electricity for 3 cents/kWh. What exactly is the argument that we CAN'T create abundance?
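Diamandis's two headline figures ($30K robot, $0.40/hour) imply an amortization horizon worth making explicit, which is exactly the kind of constraint the reply above points at. This back-of-the-envelope sketch works out how long the robot must run nonstop to hit that hourly rate; it deliberately ignores energy, maintenance, downtime, and financing, so the real figure would be worse.

```python
# Figures from the quoted tweet; everything else is simple arithmetic.
CAPEX_USD = 30_000          # purchase price of the humanoid robot
RATE_USD_PER_HOUR = 0.40    # claimed effective hourly cost

# Hours of operation needed for capex alone to average out to the rate.
hours_to_amortize = CAPEX_USD / RATE_USD_PER_HOUR  # 75,000 hours

# Running 24/7 with zero downtime, that is roughly 8.6 years.
years_247 = hours_to_amortize / (24 * 365)
print(hours_to_amortize, round(years_247, 1))
```

So the $0.40/hour claim quietly assumes nearly a decade of uninterrupted service, which is where the reply's point about maintenance and coordination bites.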
PublicAI @PublicAI_
@rohanpaul_ai agents will be everywhere, so the edge won’t be building one. it’ll be distribution, data, and how tightly it’s integrated into real workflows
Rohan Paul @rohanpaul_ai
"If you really want to make money, found an agentic AI company. I mean, build an agent to do something. This is the agentic period in AI. Everyone's going to build agents. The agents are all going to compete." ~ Eric Schmidt, Ex Google CEO.
PublicAI @PublicAI_
@TFTC21 identity is becoming infrastructure. once content is mostly AI-generated, proving you’re human becomes more valuable than creating the content itself, and whoever owns that layer holds a lot of power 🤔
TFTC @TFTC21
World, Sam Altman's digital identity project, just unveiled World ID 4.0, what the company calls "full-stack proof of human" infrastructure. The partner list: Tinder, Zoom, DocuSign, Shopify, Okta, AWS, and Vercel. Altman opened by saying we're heading to a world where AI generates more content than humans. Pantera Capital says we've already crossed that threshold. World's answer is an iris-scanning device called the Orb that creates a unique cryptographic ID proving you're a real person. 18 million people across 160 countries have already verified. Tinder is rolling out "verified human" badges in the U.S. after a Japan pilot. Zoom built a feature called "Deep Face" that verifies the person on a video call isn't a deepfake. DocuSign is adding proof-of-human checks to digital signatures. Shopify is enabling verified-human commerce. The most significant announcement is AgentKit, infrastructure that lets AI agents carry cryptographic proof they're acting on behalf of a verified human. Okta built an agent delegation system on top of it. The problem World is solving is real. The question is whether a centralized iris-scanning identity layer controlled by the same person whose company helped create the problem is the right answer. Altman is the CEO of OpenAI. He built the flood. Now he's selling the ark.
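The "unique cryptographic ID" in the World ID description boils down to a nullifier: a value that lets a service enforce one-account-per-person without learning who the person is or storing biometrics. World's real design uses zero-knowledge proofs over iris codes; the sketch below is a heavily simplified stand-in where a salted one-way hash plays the nullifier role. All names (`person_secret`, `app_id`, the wallets) are illustrative, not World's API.

```python
import hashlib

def nullifier(person_secret: bytes, app_id: str) -> str:
    """Per-app identifier: same person + same app -> same value,
    but values derived for different apps cannot be linked."""
    return hashlib.sha256(person_secret + app_id.encode()).hexdigest()

seen: set[str] = set()  # the service stores only nullifiers, not identities

def register(person_secret: bytes, app_id: str) -> bool:
    """Accept a signup only if this person hasn't registered before."""
    n = nullifier(person_secret, app_id)
    if n in seen:
        return False  # duplicate human for this app
    seen.add(n)
    return True

alice, bob = b"alice-device-secret", b"bob-device-secret"
print(register(alice, "tinder"), register(alice, "tinder"), register(bob, "tinder"))
# -> True False True
```

The per-app salt is the design choice that matters: Tinder and Zoom each see a stable identifier for the same human, but cannot join their databases on it, which is what makes "proof of human" different from a global identity number.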
PublicAI @PublicAI_
@Pirat_Nation AI didn’t replace creativity, it removed the grind around it. most players don’t notice because the output still feels human, but the production process underneath has already changed completely
Pirat_Nation 🔴 @Pirat_Nation
Jason Schreier said that almost every major studio is already using generative AI tools behind the scenes, especially Claude from Anthropic. He was replying to a fresh interview with Jack Buser, Google Cloud’s Global Director for Games. “I think what players don’t realise is that their favourite games right now were already built with AI. Those games have shipped.” Buser said that nine out of ten developers are using AI. According to Buser, the tools are mostly being used to kill off the boring, repetitive stuff so artists and designers can focus on the important creative work. He gave Capcom as an example, saying they use it to brainstorm thousands of small world details like pebbles or blades of grass