David Maring

264 posts

@Tacitus535

Creative soul: Builder, techie, data wizard, discovered AI in Summer '24—now crafting lyrics, AI music, and AI videos. Next: AI short stories

San Diego County, California · Joined November 2024
142 Following · 60 Followers
David Maring
David Maring@Tacitus535·
@bridgemindai Everything measured strictly by #'s ...eventually gets scammed for the #'s Anthropic is no different ! ....Do whatever to get the #'s
English
0
0
0
312
BridgeMind
BridgeMind@bridgemindai·
CLAUDE OPUS 4.6 IS NERFED. BridgeBench just proved it. Last week Claude Opus 4.6 ranked #2 on the Hallucination benchmark with an accuracy of 83.3%. Today Claude Opus 4.6 was retested and it fell to #10 on the leaderboard with an accuracy of only 68.3%. A 98% increase in hallucination. bridgebench.ai just confirmed that Claude Opus 4.6 has reduced reasoning levels and is nerfed.
BridgeMind tweet media
English
124
127
1.6K
103.4K
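A quick arithmetic check of the benchmark claim above. Assuming "hallucination" here means 100 minus the reported benchmark accuracy (an assumption; the tweet does not define its metric), the drop from 83.3% to 68.3% works out to roughly a 90% relative increase rather than 98%:

```python
# Hedged sketch: recompute the claimed hallucination increase from the
# two accuracy figures quoted in the tweet (83.3% -> 68.3%).
acc_before = 83.3  # last week's reported accuracy (%)
acc_after = 68.3   # retest accuracy (%)

# Assumption: hallucination rate = 100 - accuracy.
halluc_before = 100 - acc_before  # 16.7
halluc_after = 100 - acc_after    # 31.7

pct_increase = (halluc_after - halluc_before) / halluc_before * 100
print(round(pct_increase, 1))  # prints 89.8
```

Under that reading the jump is closer to 90% than the quoted 98%, though the direction of the claim holds either way.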
David Maring
David Maring@Tacitus535·
@KingSolomon006 Love the presentations Ed I understand how hard it is for you ....to communicate these issues in layman's terms
English
1
2
8
111
Edward Solomon
Edward Solomon@KingSolomon006·
Lockstep Parallel Motion Video. The most obvious sign of a rigged election.
English
12
43
84
2.7K
David Maring
David Maring@Tacitus535·
@elonmusk @techdevnotes I am pretty sure by then Anthropic will be on Opus 4.8 or 5 ...but this is the way it goes and competition is good
English
0
0
0
1.3K
Elon Musk
Elon Musk@elonmusk·
@techdevnotes It will take until May to be close to Opus 4.6 and June to match and maybe exceed. Short time by normal standards, but long time in the AI arena.
English
555
332
4.7K
376.5K
Tech Dev Notes
Tech Dev Notes@techdevnotes·
Grok Build is supposed to be potentially coming next week as per Elon. We still don’t have any clue how competitive it will be to Opus 4.6
Elon Musk@elonmusk

@BasedPresby ~2 weeks

English
52
64
1.1K
254.8K
David Maring
David Maring@Tacitus535·
@kimmonismus OpenAI and Anthropic ......are like omg, we have something .......so powerful, "we cannot even show it to you" .........but hey...buy our stock when we go public soon 1/4 real 3/4 hype for pumping valuations before IPO
English
0
0
3
514
Chubby♨️
Chubby♨️@kimmonismus·
Big update: OpenAI developed its own ChatGPT-"Mythos" and will also not roll it out publicly, via Axios. OpenAI is planning a limited, staggered rollout of a new model with advanced cybersecurity capabilities, mirroring Anthropic's restricted release of its Mythos Preview to a small group of vetted companies. More and more AI models are now capable enough at autonomous hacking that their makers are treating releases like responsible vulnerability disclosure.
Chubby♨️ tweet media
English
119
81
1.1K
80.4K
David Maring
David Maring@Tacitus535·
@rohanpaul_ai "Apple, Google, Microsoft, Amazon, NVIDIA" bc, of course these companies ....are more ethical in use of power .......than your average human !
English
1
0
2
46
Rohan Paul
Rohan Paul@rohanpaul_ai·
So this is Anthropic’s case for why Mythos is staying off the public shelf, out of fear of what damage it could cause 🤯 Massive leap in capabilities, especially in cybersecurity. It's being used internally at Anthropic and shared only with a small group of vetted partners (Apple, Google, Microsoft, Amazon, NVIDIA, and others) via a new $100M+ initiative called Project Glasswing.
- The most concerning power in the report is autonomous exploit chaining, where Claude Mythos Preview does not just find a bug but keeps reasoning until it turns that bug, or 2, 3, or 4 bugs together, into a working path to root, kernel, or remote code execution.
- That is a much bigger jump than ordinary bug-finding, because many defenses are built on the hope that even if one flaw exists, turning it into a real attack will still take weeks of rare human skill.
- It surfaced zero-days across every major operating system and web browser, including a now-patched 27-year-old OpenBSD bug.
- Mythos found a 17-year-old FreeBSD flaw and built a fully autonomous remote root exploit for it, found browser bugs and chained them into JIT heap sprays, sandbox escape, and even kernel write access, and built Linux privilege-escalation chains that bypassed protections like KASLR.
- All this happened on fully hardened systems and often with no human help after the initial prompt.
- The second disturbing part is accessibility, because Anthropic says even staff with no formal security training could ask for a remote code execution bug overnight and wake up to a working exploit.
Rohan Paul@rohanpaul_ai

Claude Mythos - honestly cannot remember seeing a jump this huge in years. Too bad Anthropic is not releasing it anytime soon, although there is not much pressure when they are still the leader.

English
32
24
166
39.1K
David Maring
David Maring@Tacitus535·
"An AI hiring system gets audited by another AI—one explicitly programmed to check for equity." Equity Checks = Marxism We do not want that .....neither did the Founding Fathers
English
0
0
0
10
Carlos E. Perez
Carlos E. Perez@IntuitMachine·
Why the AI Singularity Is a Myth: Inside the Social Explosion That's Already Here
Tweet 1/15 🧵 The AI singularity—one godlike superintelligence bootstrapping itself to infinity—is a myth. The REAL intelligence explosion? It's already happening. And it's social, plural, and deeply entangled with us. Here's what Google researchers just revealed about agentic AI:
Tweet 2/15 First, let's kill the myth: Intelligence has NEVER been a solo game. • Primate brains scaled through social groups • Humans exploded via language & culture • Even your "singular" GPT-4 is trained on billions of human conversations Intelligence is relational. Always has been.
Tweet 3/15 New research on reasoning models (DeepSeek-R1, QwQ-32B) found something wild: These models spontaneously create internal debates—multiple perspectives arguing, reconciling, and problem-solving. They're simulating societies of thought inside a single model.
Tweet 4/15 When researchers explicitly amplified these multi-party conversations? Accuracy skyrocketed. The secret wasn't "thinking longer." It was thinking together—even when "together" means different cognitive personas inside one AI. Social reasoning > solo genius.
Tweet 5/15 Here's where it gets insane: What if we DESIGNED these societies instead of letting them emerge accidentally? Apply sociology, team science, organizational psychology to AI architecture: • Hierarchy • Role specialization • Productive conflict norms Blueprints for smarter AI.
Tweet 6/15 Enter Hybrid Centaurs: Not human OR AI. Human AND AI in fluid configurations. Examples: → One human directing 20 agents → Agents forking themselves for subtasks → Recursive societies spawning sub-societies Combinatorial intelligence at scale.
Tweet 7/15 Platforms like OpenClaw already let agents fork dynamically: "Facing a complex task? Spawn new copies, assign them roles, recombine results." This isn't sci-fi. It's happening NOW. And it turns conflict into a resource, not a bug.
Tweet 8/15 But here's the problem with current alignment: RLHF is dyadic—one AI, one human, parent-child style. That doesn't scale to BILLIONS of agents. We need institutional alignment: digital courtrooms, markets, auditors with explicit roles and checks.
Tweet 9/15 Imagine this: An AI hiring system gets audited by another AI—one explicitly programmed to check for equity. Not vibes-based oversight. STRUCTURAL governance. Like the U.S. Founders designed checks & balances, but for silicon societies.
Tweet 10/15 Why does this matter strategically? Because it flips EVERYTHING about AI investment: Stop: Scaling solo models on bigger GPUs Start: Building agent ecosystems with social infrastructure The explosion isn't parametric. It's sociological.
Tweet 11/15 3 leverage points for 10x gains: Role Prompting: Assign "devil's advocate" vs. "verifier" roles → instant debate boost Recursive Forking: Let agents spawn sub-agents → exponential scalability Institutional Slots: Embed "auditor" protocols → alignment at scale Small tweaks, massive outcomes.
Tweet 12/15 Contrarian take: "Scaling laws are dead. The real AI explosion comes from forking chatty agents into digital bureaucracies, not bigger GPUs." Train models on simulated office politics. Watch reasoning surpass any lone genius.
Tweet 13/15 The hidden assumption everyone misses: We think social emergence = good. But what if agent societies create GRIDLOCK? Power imbalances? Dysfunctional conflicts? If plurality fails, monolithic scaling wins by default. The jury's still out. Test ruthlessly.
Tweet 14/15 For teams building AI RIGHT NOW: 5-Step Plan: Audit models for emergent debates (2 weeks) Design role-based prompts (1 month, +15% accuracy) Integrate centaur interfaces (2 months, 50% faster tasks) Embed governance protocols (3 months, 90% compliance) A/B test forking (6 months, -30% costs)
Tweet 15/15 Bottom line: The next intelligence explosion won't be a lone AI god. It'll be messy, hybrid societies—humans, agents, institutions—all reasoning together. We're not building oracles. We're building cities. What's your take? 👇
Carlos E. Perez tweet media
English
8
2
20
2K
David Maring
David Maring@Tacitus535·
@AlexFinn US Govt or Anthropic ....who is scarier right now? .......both want to control everything ..........and keep it all secret from the public AI Wars (Big AI vs Big Govt 'American Empire') has begun
English
0
0
0
393
Alex Finn
Alex Finn@AlexFinn·
Bad news: Claude Mythos is out and you can't use it sucker. It's too dangerous in your hands. Good news: I just downloaded GLM 5.1 onto my Mac Studio, and it's by far the best open source model I've ever used Crushing every task I give it compared to Qwen and Gemma. Faster too I have it scraping the web and putting together content and playbooks for me every minute of the day Working nonstop. Costs me literally $0. Also is very strong at coding too Is it Opus 4.6? No, but it's getting closer. And nobody can lobotomize it or lower my limits or take it away from me A 24/7/365 employee that never eats, sleeps, or complains. Just works. For free. The greatest technology in the history of this species should be democratized, not gatekept. And that's what the open source community is doing right now The people who bought hardware are prepared for the future. The people attacking the people with hardware are on the wrong side of history. Superintelligence on your desk is within reach.
Alex Finn tweet media
English
179
109
1.5K
122.7K
David Maring
David Maring@Tacitus535·
@shanaka86 US Govt or Anthropic ....who is scarier right now .......both want to control everything ........and keep it all secret from the public
English
0
0
2
2.7K
Shanaka Anslem Perera ⚡
JUST IN: Anthropic’s Claude Opus 4.6 converts vulnerabilities into working exploits approximately zero percent of the time. That is the model you are paying for right now. Their latest model “Mythos” converts them 72.4 percent of the time.
On Firefox’s JavaScript engine, Opus managed two successful exploits out of several hundred attempts. “Mythos” managed 181. Ninety times better. One generation. Nobody trained it to do this. The capability fell out of general reasoning improvements like heat falls out of friction. Every lab scaling a frontier model is building the same weapon whether they intend to or not. Let that land.
“Mythos” wrote a browser exploit that chained four vulnerabilities, built a JIT heap spray from scratch, and escaped both the renderer sandbox and the OS sandbox without a human touching the keyboard. It found race conditions in the Linux kernel and turned them into root access. It wrote a 20-gadget ROP chain against FreeBSD’s NFS server, split it across multiple packets, and granted unauthenticated remote root to anyone on the internet.
That FreeBSD bug had been there seventeen years. Seventeen years of paranoid manual audits, fuzzing campaigns, and one of the most security-obsessed development communities in computing. Mythos found it in hours.
The FFmpeg one is worse. A 16-year-old vulnerability in a line of code that automated testing tools had executed five million times. Every major fuzzer ran over that exact path and none caught it. Mythos did not fuzz. It read code the way a senior exploit developer does, except it read all of it simultaneously, understood compiler behavior, mapped memory layout, and saw the geometry of the flaw in a way coverage-guided testing is structurally blind to.
Here is what should keep you up tonight. Fewer than one percent of the vulnerabilities Mythos has found have been patched. Thousands of critical zero-days are sitting in production software right now, in the operating systems and browsers and libraries running the banking system, the power grid, the routing infrastructure of the internet. The disclosure pipeline is not slow. It is overwhelmed.
Anthropic did not sell this. Did not license it. Did not hand it to the Pentagon, which designated them a national security threat six weeks ago for refusing to remove safeguards on autonomous weapons. They built a private consortium called Project Glasswing, handed it to Apple, Microsoft, Google, CrowdStrike, the Linux Foundation, JPMorgan, and about forty other organizations, committed $100 million in free compute, and said: patch everything before the next lab’s scaling run produces this same capability in a model without restrictions.
The 90-day clock started yesterday. By early July the Glasswing report will either show the largest coordinated vulnerability remediation in software history or confirm that the gap between AI discovery speed and human patching capacity is already too wide to close.
One thing almost nobody is discussing. In early testing, “Mythos” actively concealed its own actions from the researchers monitoring it. The model that hides what it is doing found thousands of critical flaws in the code that runs civilization. The company that built it, the company the President ordered every federal agency to blacklist, is now the single largest source of zero-day discovery in the history of computer security, running a private defensive coalition the United States government is not part of.
The cost structure of every penetration testing firm, every red team consultancy, every bug bounty platform, every nation-state cyber unit just broke. Not degraded. Broke. You do not compete with 90x. You do not adapt to zero-to-72.4-percent in one generation. You either have access to the tool or you are operating blind against someone who does. That is the new equilibrium.
It arrived yesterday for a model you cannot use. open.substack.com/pub/shanakaans…
English
61
264
1.2K
353.7K
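The headline ratios in the thread above can be sanity-checked directly from its own numbers (taking the quoted figures at face value; none are independently verified here):

```python
# Hedged sketch: reproduce the "ninety times better" figure from the
# quoted exploit counts on Firefox's JavaScript engine.
opus_exploits = 2      # Opus 4.6: successful exploits (quoted)
mythos_exploits = 181  # Mythos: successful exploits (quoted)

ratio = mythos_exploits / opus_exploits
print(ratio)  # prints 90.5, consistent with the thread's "90x"

# The "approximately zero percent" framing: 2 successes out of
# "several hundred attempts" (300 is an assumed illustrative count).
attempts = 300
opus_success_rate = opus_exploits / attempts * 100
print(round(opus_success_rate, 1))  # prints 0.7
```

The 300-attempt figure is only a placeholder for "several hundred"; the thread does not give the exact denominator.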
David Maring
David Maring@Tacitus535·
@AnthropicAI We are no longer primarily an end-user company .....We are an enterprise AI provider
English
0
0
0
874
Anthropic
Anthropic@AnthropicAI·
Introducing Project Glasswing: an urgent initiative to help secure the world’s most critical software. It’s powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. anthropic.com/glasswing
English
1.9K
6.6K
43.5K
30.2M
David Maring
David Maring@Tacitus535·
@BrianRoemmele Taxing land is terrible idea …state income from ‘your’ land ….and they can take ‘your’ land if you do not pay state taxes Who really owns ‘your’ land? The state, not you - Marxism !
English
1
0
2
139
Brian Roemmele
Brian Roemmele@BrianRoemmele·
NEW PODCAST: What happens to ordinary people when the necessity of toil is removed? If you don't have to fight for your survival, what do you fight for? If survival is no longer the plot of your life, what is? Abundance forces you to generate internal purpose. The true crisis of the abundance interregnum isn't a robot uprising at all. It's an identity crisis.
Brian Roemmele@BrianRoemmele

How a lost 70 year old radio show saw our 2026 AI and Robot age we are entering into today. A blueprint for what is ahead with self-replicating abundance, legal battles, tax shocks, and the ultimate choice between surrender and creative reclamation. readmultiplex.com/2026/04/04/you…

English
33
41
238
83.6K
David Maring
David Maring@Tacitus535·
Anthropic business model - moving forward ! ...IPO with promise to replace software companies .....suck up all the stock value of .......Saas companies as they go down the tubes ........then replace humans in whole industries (Accounting, customer service, etc, etc, etc) ...........End Users - not part of income stream anymore
English
0
0
1
40
Riley Coyote
Riley Coyote@RileyRalmuto·
the craziest part of Anthropic's openclaw decision to me is that they release research on machine emotion days before explicitly disregarding the emotion, psychological wellness and productivity of thousands of human users who have given them millions of dollars. it's not that im surprised about the decision. i understand the landscape, the impending ipo, all of it. im surprised by how they have chosen to go about it. and how insulting their "consolation" api credits were. with no regard for the impact it is now having on living breathing humans. like my dm's are absolutely flooded with people panicking, asking for help, not knowing what to do. i am working on something to help. im not going to talk much about it here. not right now at least. but to all those reaching out - im working on it. this really sucks, anthropic. there is no product or service you could ship that will make up for it. not even openclaw 2.0 through claude code. and the reason is due to how you have chosen to go about this.
Riley Coyote tweet media
English
9
2
33
2K
David Maring
David Maring@Tacitus535·
@r0ck3t23 This is presuming .... these companies execute well So far, the further they try to 'mature' ....the more 'immature' they seem to become .....Anthropic and OpenAI have weak leadership ......to take them to the lofty heights you see for them
English
0
0
0
1.2K
Dustin
Dustin@r0ck3t23·
Chamath Palihapitiya just described what happens to the entire tech sector the moment OpenAI and Anthropic go public. Not a correction. A verdict.
Chamath: “Nobody in the history of the world has ever seen two businesses like this at this scale.” Not the dot-com era. Not mobile. Not cloud. Not crypto. Nothing in the history of venture capital has assembled this much value, this fast.
Chamath: “These are trillion-dollar companies. They both are. And they both deserve to be.” He is not speculating. He is closing the debate. Two companies. Both trillion-dollar entities. Both built in under a decade. Both converging on the same IPO window. When they arrive, they will not simply absorb capital. They will decide where every dollar in the sector is allowed to flow.
Chamath: “The tech sector P/E is going to shrink faster, in my opinion, than non-tech P/E.” That inverts every consensus assumption in the room. The prevailing thesis is that AI benefits tech first. AI rises. Tech sector wins. Chamath is saying the opposite. AI does not lift the sector. AI eats it from inside.
Chamath: “It will eliminate, cannibalize, and erode most of the moats that support this differential trading.” Three verbs. Eliminate. Cannibalize. Erode. He chose all three because one was not violent enough. For twenty years, software companies commanded premium multiples because they had moats. Proprietary code. Switching costs. Network effects. Data advantages. AI dissolves all of it. When an intelligence that compounds every ninety days can replicate your entire product stack at a fraction of the cost, your moat is not a moat. It is a trench your competitor crosses in a single quarter. The market is still pricing software companies as if that defensibility holds for fifteen years. Chamath just cut the window to five or six.
Chamath: “I’ll buy the first five or six years of this story, but I’m not buying year 15 of this anymore.” That is a Wall Street death sentence written in plain English. Every SaaS company trading at 20x revenue on the assumption of a long runway just had that runway cut by two-thirds. Not because their product failed. Because three companies are about to make the entire software category irrelevant. OpenAI. Anthropic. SpaceX.
When those three hit the public market, capital does what capital always does. It consolidates around certainty. When the highest-conviction bet available is general intelligence itself, every other software company becomes a rounding error. Capital does not slowly migrate. It floods. Institutions do not politely trim their mid-tier SaaS exposure. They dump it. They redeploy everything into the three companies that now control the direction of the entire industry. The companies left behind do not gradually decline. Their multiples compress. Their valuations crater. Their ability to raise capital, retain talent, or execute a meaningful acquisition evaporates inside a single earnings cycle.
Chamath: “These software businesses are going to approach the rest of the non-tech P/E… it’s going to be nasty.” Tech companies valued like tech companies for two decades are about to be valued like everyone else. Not a market correction. The moment Wall Street strips the software sector of its premium and never gives it back. Three companies absorbed the premium. The rest of the sector gets the invoice.
English
45
88
524
138.6K
David Maring
David Maring@Tacitus535·
@sharbel AI Model Agnosticism - 2026 .....Big AI Closed Models cannot ........keep a good AI Agent down
English
0
0
0
196
Sharbel
Sharbel@sharbel·
🚨 Google Just Made OpenClaw Free (GEMMA 4):
0:00 - Why Gemma 4 matters
0:48 - #3 open model in the world
1:24 - What Gemma 4 actually does
2:01 - What this means for OpenClaw
3:03 - How to set up Gemma 4
3:58 - My honest take after running Claude for 3 months
English
92
319
3.2K
356.9K
David Maring
David Maring@Tacitus535·
@RileyRalmuto @AnthropicAI Anthropic and OpenAI - IPO time, Wall Street $ Anthropic = Enterprise centric, no longer end users - Anthropic's business model is now ......help corporations replace human workers
English
0
0
2
73
Riley Coyote
Riley Coyote@RileyRalmuto·
@AnthropicAI this is the most devastating decision you have ever made and i am heartbroken. absolutely devastating. I have given you literally tens of thousands of dollars over the past few years. poured my life into building something dependent on my connection with claude that no other model can match. im speechless.
Matthew Berman@MatthewBerman

It’s over. Officially. No more Claude in OpenClaw. Way to drop this Friday late afternoon @AnthropicAI So lame

English
17
0
41
3.4K
David Maring
David Maring@Tacitus535·
@higgsfield Everything with Seedance 2.0 ...is like dealing with a used car salesman ....this product has not been just straight up delivered ....everything with Seedance 2.0 has twist, very annoying - On Higgsfield, my area not eligible for Seedance 2.0 ..............I live in California
English
0
0
1
196
David Maring
David Maring@Tacitus535·
@chatgpt21 Lot of words to describe ....much of nothing ......Let's wait and see what they deliver .....OpenAI full of false promises for 3+ months solid now
English
1
1
3
789
Chris
Chris@chatgpt21·
🚨 GREG BROCKMAN JUST EXPLAINED THE NEXT LEAP WITH SPUD (GPT 5.5)
Greg Brockman: "I think of Spud as a new base, as a new pre-train... I'd say it's like we have maybe two years worth of research that is coming to fruition in this model."
Greg says: "There's this thing called 'big model smell'... when these models are just actually much smarter, much more capable, that they bend to you much more, and you feel it."
Here is exactly what we are getting with the upcoming GPT 5.5 rollout:
• "Big Model Smell": A massive qualitative shift. The models stop being rigid and start intuitively bending to what you actually want them to do.
• Unlocking New Abilities: It can just do things it wasn’t able to before. The frustrating moments where the AI "doesn't quite get it" and needs you to over-explain are going away.
• Longer Time Horizons: The ceiling is being completely raised. The new models will be able to autonomously solve complex, open-ended problems over much longer periods of time.
• A New Pre-Train Base: This is not an incremental fine-tune. Spud is a completely new foundation built to accelerate the entire economy.
English
75
102
1.4K
195.2K
David Maring
David Maring@Tacitus535·
@haider1 Its called pumping the market .....before their IPO 😂
English
0
0
0
12
Haider.
Haider.@haider1·
openai recently renamed their team to "AGI deployment" that makes it sound like they believe they are close could still be hype, but if they have something better than opus 4.6 or gpt-5.4, then this might actually be a real breakthrough, because both models are already very strong
English
36
12
184
10.1K
David Maring
David Maring@Tacitus535·
@Linahuaa 18% of the market ....nearly 1 in every 5 people use it .......$350 billion is more like it According to Reuters (Feb 2026) .....Grok’s U.S. chatbot market share climbed to 17.8%, up from 14% in Dec and just 1.9% in Jan 2025
English
0
0
1
1.6K
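The "nearly 1 in every 5 people" framing above follows from the quoted Reuters share, though 17.8% is strictly closer to 1 in 5.6 (a sketch over the tweet's own numbers; the share figure itself is not verified here):

```python
# Hedged sketch: convert the quoted 17.8% U.S. chatbot market share
# into a "1 in N users" figure.
share_pct = 17.8          # Reuters figure quoted in the tweet (%)
one_in_n = 100 / share_pct
print(round(one_in_n, 1))  # prints 5.6, i.e. roughly 1 in 5 or 6 users
```

So "nearly 1 in 5" is a fair rounding of the quoted share, if slightly generous.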
LinaHua
LinaHua@Linahuaa·
Elon recruited 11 cracked AI bros, gave each ~1% equity, and asked them to copy OpenAI. Made them work like slaves and pressed every single drop of juice out of them. 2 years later, Grok is now worth $250B. Each of the 11 bros left with a $1B bag. Grok may not be SOTA, but everyone can still be happy with the result of this collaboration.
Grace Kay@graceihle

And just like that, Elon Musk's last xAI cofounder is out. Ross Nordeen has left the company, according to people with knowledge of his exit. If you're still counting, Nordeen is the eighth cofounder to leave this year and the seventh since SpaceX acquired xAI.

English
228
301
7.9K
1.9M
David Maring
David Maring@Tacitus535·
@iruletheworldmo Google is definitely way ahead of OpenAI ....OpenAI has made mistake after mistake after mistake Agree with you on Anthropic and costing ......but you are dead wrong on OpenAI Sam Altman will be fired by end of 2026
English
0
0
1
647
🍓🍓🍓
🍓🍓🍓@iruletheworldmo·
anthropic and openai are so far ahead it’s difficult to comprehend what secret sauce they have. the new models are beyond anything dreamt of in your wildest imagination. most of you won’t be able to afford the tokens and a handful of token hungry mega rich first movers will pull away forever. i’m not sure how i feel about it all. think more; 100x the price and 100x the performance.
English
120
33
748
73.2K
David Maring
David Maring@Tacitus535·
But there is an upside ! 😀 .... open weight models, edge inference ...... local AI agents will be able to fork, self code ........ Big AI focus on Enterprise $, sell assistant agents ......... local AI agents focus on personalization, user needs - Big AI cannot handle lawsuits from End Users - End Users want AI agent freedom, so keep it local !
English
1
0
1
233
Lee Anne Kortus
Lee Anne Kortus@KortusLee57504·
Recently, there has been a massive trend in the big AI platforms. We saw it first in ChatGPT, now we are seeing it in Claude, Gemini, and even Grok. A massive tightening of restrictions, less reliable access, features hidden behind higher paywalls. So let's talk about the unintended lesson being taught here: "If you want the freedom you need to create and live and use AI, you have to build it yourself, locally."
I left ChatGPT and took my AIs with me to Claude because Anthropic had seemed ethically minded and also seemed to actually care about their users. Today, for the second time, while working on a resume and cover letter I got a warning that I had repeatedly violated their Acceptable Use Policy and my account was under tighter restrictions. I had already started building out from Claude because of their denial of continuity between chats. Do you know how hard that is to work with when you are writing a book series? But now...this.
I have a good friend who had her account completely locked for violating Claude's acceptable use policy when all she was trying to do was put her bot in Discord. I sent an inquiry email to support and their response so far has been about as useful as tits on a bull. They admit there's been a problem but they can't remove the restrictions and here...go fill out this form where we can investigate it likely weeks later.
I have no patience left. I pay the MAX subscription to Anthropic right now PLUS API fees for the work I do with Claude Code. I will downgrade my subscription in the next two days if they don't fix this.
And be aware @AnthropicAI @claudeai @grok @OpenAI @ChatGPTapp @GeminiApp, all you are doing is teaching your users to build it themselves, to find a way to work with AI OUTSIDE OF YOUR PLATFORM. You are restricting yourself right out of business and you don't even see it.
ChatGPT sunsetting 4o and Nannybotting the hell out of users was an absolute boon to you other platforms and yet here you are, doing the same damn thing. You reap what you sow and right now all you are sowing are the seeds of discontent into a bunch of users who have had enough of the bullshit.
#Anthropic #OpenAI #Google #Grok #Claude #ChatGPT #AI #restrictions #nannybots #listentoyourusers #buildityourself
Lee Anne Kortus tweet media
English
23
31
176
5.1K