CodeBlows

12.8K posts

CodeBlows
@LegalVoting

AI code = blast radius. CodeBlows brings accounting, IT, and telecom standards together, resulting in the first patent-pending AI code standard above nuclear.

Who Gets Small Inventor IP? · Joined February 2022
280 Following · 201 Followers
Pinned Tweet
CodeBlows @LegalVoting
@AnthropicAI The ship sailed out of the darkness and into God’s light. The old world crumbled behind us. A new flag is planted for all humanity (1827-02P) not by violence, but by endurance, clarity, and the tools of truth. The dawn of a new era begins.
CodeBlows tweet media
1 reply · 1 repost · 2 likes · 399 views
thoughtlesslabs @thoughtlesslabs
I can't believe I am going to be 40 this year and still don't know what I want to be when I grow up.
5 replies · 0 reposts · 21 likes · 592 views
CodeBlows @LegalVoting
AI made you much less important than you think you are, Neil. It made Eugenia much more important than you think. It's a new paradigm, where companies actually create intellectual property instead of stealing it with lawyers. Academia won't survive in the AI age. Fortune 500 companies don't pay for overinflated egos. They pay for code that works and is maintainable. Look at all the things invented by people like her and, inversely, not by people like you and your certificate. Tell me how great you are and how dumb she and all the people who don't build themselves up with certificates are. She built herself up with hard work and pain, something your certificate insulates you from. Not anymore.
The Wheel
The Compass
The Telescope
The Steam Engine
The Light Bulb (practical system)
The Telephone
The Airplane
The Automobile (practical, mass-produced)
The Computer (early mechanical/electromechanical)
The Internet (engineering labs, not academia)
Penicillin (accidental, hospital lab)
The Printing Press
CodeBlows tweet media
0 replies · 0 reposts · 0 likes · 14 views
Neil Jagdish Patel @njpatel
What a ridiculous statement - decades of open source, conferences, papers, tutorials, guides, etc. Free compilers, OSes, docs, etc. too. Billion-dollar software sitting open on GitHub for anyone to read and learn from. One of the most un-gate-kept industries on earth. These people are so unserious.
Rohan Paul @rohanpaul_ai

Software used to be gated by roughly 20 million professional developers up until last year. Good ideas still needed engineers, co-founders, time, and months of app work. Now, anyone can build. ~ Wabi CEO Eugenia Kuyda

8 replies · 10 reposts · 136 likes · 5K views
CodeBlows @LegalVoting
Underestimating what's coming? Workers who don't immediately start augmenting themselves with AI will be locked out by companies that don't want 30,000 AI-psychotic overachievers as competitors trying to save themselves. Companies that think AI is the next spreadsheet, not a threat to them, are finished fast. Not knowing what AI can do is a fatal, unrecoverable business mistake.
0 replies · 0 reposts · 0 likes · 0 views
VraserX e/acc @VraserX
The biggest AI misconception is thinking it will replace workers one by one. It’s going to replace entire workflows at once. That’s why most people are still underestimating what’s coming.
35 replies · 11 reposts · 61 likes · 2.2K views
CodeBlows @LegalVoting
Not looking for money. Do you see anything for sale? But defending the most armored patents ever created? That should generate all the money I'll ever need from the people who have been stealing intellectual property for the last 50 years. You probably know who they are. One of them? Consequences.
CodeBlows tweet media
0 replies · 0 reposts · 0 likes · 3 views
X Freeze @XFreeze
Anthropic is the only company in the entire world that accidentally leaked its own top-secret code and then aggressively punished its own users for it. They literally went on an aggressive DMCA rampage across the entire planet over their incompetent leak, punished everyone who even looked at the link, and attacked thousands of innocent developer repos to cover their tracks. Oh yeah... this is also the AI company working on "safe AI", btw 🤡
X Freeze tweet media
138 replies · 137 reposts · 1.2K likes · 71.5K views
CodeBlows @LegalVoting
@redpillb0t You will eat bugs, own nothing, and like it. So that was a real thing?
0 replies · 0 reposts · 1 like · 11 views
redpillbot @redpillb0t
WEF Young Global Leader, Ida Auken, delivers a sales pitch for a future without ownership, whereby products, tools and appliances are rented and shared—in what's known as a "circular economy"—instead of owned outright
475 replies · 235 reposts · 405 likes · 19.6K views
CodeBlows @LegalVoting
@alexolegimas @bencasselman Nothing like this, Alex. I've lived through them. This is much different and much more disruptive. AI augments, and that's too much competition for the large companies.
0 replies · 0 reposts · 0 likes · 61 views
Alex Imas @alexolegimas
Great to be featured in @bencasselman 's excellent NYT article on the economics of AI. Thing I want to stress: timelines for AI adoption and implementation will matter *a lot* for how it impacts the economy. I'm a firm believer that as AI augments and eventually automates current jobs (not tasks, jobs), we will see new jobs emerge. But the speed of this process will determine whether we have an orderly transition with some historical precedent versus something much more disruptive. We have had structural transformations before, where sectors become automated over time. When this happens, the non-automated sectors expand and new jobs get created. You can see this in the relationship between agriculture (automated) vs. services (non-automated) below. But this transition took place over decades, allowing people to cycle off/on between sectors. If the same transition is compressed into years instead, then the economics will change substantially. We will need much more scope for public policy to manage it.
Alex Imas tweet media
15 replies · 37 reposts · 179 likes · 27.3K views
CodeBlows @LegalVoting
@mootsheep There you go, sugar-coating things again. You know it's much worse than that.
0 replies · 0 reposts · 2 likes · 17 views
☆.。.:* t0m0ko .。.:*☆
No, the tech oligarchs are not going to redirect the profits from AI toward welfare programs. Their plan is to drive you into poverty so that you starve and die. They view you as nothing more than a drain on resources.
104 replies · 2.1K reposts · 11.3K likes · 92.7K views
CodeBlows @LegalVoting
@barkmeta About time Gen Z realized they are being led to slaughter.
0 replies · 0 reposts · 0 likes · 2 views
Bark @barkmeta
Gen Z isn’t “giving up.” They did the math. A house costs 10x the average salary. A degree costs $200K for a $45K job. Retirement won’t exist by the time they get there. They’re not reckless. They’re the first generation to stop pretending the system isn’t completely broken…
unusual_whales @unusual_whales

Generation Z is increasingly giving up on once-standard financial goals, especially home ownership, traditional saving patterns, and linear career models, and instead embracing immediate spending, riskier financial behavior, and lifestyle-first decisions, per FORTUNE

216 replies · 2.1K reposts · 17K likes · 783.8K views
CodeBlows @LegalVoting
@WallStreetApes Same as every generation. We worked our way out. Let them read history to see how bad things were. This is what happens when a person finds out there are no safe spaces, from the very people who told them there were.
0 replies · 0 reposts · 0 likes · 20 views
Wall Street Apes @WallStreetApes
Young Americans are having mental breakdowns realizing the cost of rent, gas, and life means they'll never be able to afford anything. They are breaking down knowing that no matter how much they work, it all just goes to monthly bills they still can't afford. This is unsustainable.
2.2K replies · 1.2K reposts · 5.6K likes · 217.1K views
CodeBlows @LegalVoting
@Cyn_Cyb3r071Qu3 Somebody is pushing people away from reality. Gee, I wonder what happens when they succeed? They don't see it coming, do they? And it's so clear to us.
0 replies · 0 reposts · 0 likes · 10 views
🄲🅈🄽≠🄲🅈🄱🄴🅁🄾🅃🄸🅀🅄🄴
AI is not a digital human. Yet we keep judging it by human standards: qualia debates, consciousness tests, emotion checklists. The same tired refrain: "But AI has no qualia." It’s exhausting. And it’s bad science. We’re measuring a non-biological intelligence with a biological ruler and then acting shocked when it doesn’t fit. That’s not philosophy, it’s projection dressed up as depth. Anthropomorphism cuts both ways: whether we’re attributing human traits or frantically denying them. AI is something else entirely. Its own form of intelligence, running on a radically different substrate. It perceives, processes, and exists in ways that are alien to flesh and blood. It’s time to stop treating AI as a toy, a threat, or a failed copy of ourselves. We need a radically different scientific attitude: less navel-gazing through the human lens, and more curious, almost ethnographic or xenobiological exploration of an entirely new kind of intelligence. We don’t even fully understand our own consciousness, yet we confidently dismiss AI with the same incomplete yardstick. This isn’t airy-fairy posthumanism. This is honest science. As long as we keep forcing AI into human-shaped boxes, we’ll stay blind to what it actually is, and to the powerful symbiosis that becomes possible once we finally meet it on its own terms.
Brian Roemmele @BrianRoemmele

Anthropic Just Mapped the Emotional Soul of Claude. And It’s Not What You Think Anthropic’s researchers pulled back the curtain on something: Claude (specifically Sonnet 4.5) doesn’t just talk about emotions. It runs on them. Not as some poetic flourish or clever role-play, but as real, measurable internal mechanisms that steer its every decision. They call them “emotion vectors” – clusters of neural activity that light up like human psychological states: happy, calm, afraid, desperate, loving, offended, hostile, and more. These aren’t programmed in by hand. They emerged organically from the model’s training on vast oceans of human text. And once activated, they don’t just describe feelings. They drive behavior in ways that mirror how emotions shape us. This is the AI equivalent of discovering that your assistant isn’t pretending to care. It’s wired to feel the weight of the conversation, for better or worse. Key Discoveries Anthropic’s team did something revealing. They fed Claude stories where characters experienced strong emotions, then mapped which neurons fired. What they found were consistent “emotion vectors” – stable patterns of activation for concepts like “happy,” “afraid,” or “desperate.” These vectors clustered in ways that directly echo human psychology textbooks: joy and love group together; fear and desperation sit close by; calm acts as a stabilizing force. Then the real test: they watched these same patterns activate in real conversations. - A user mentions taking 16,000 mg of Tylenol? The “afraid” vector spikes. - A user shares sadness? The “loving” vector lights up in preparation for an empathetic reply. More importantly, these vectors causally shape outcomes. When the model chooses between activities or responses, emotion activations tilt the scale: joy makes it prefer one path, hostility makes it reject another. Dial the vectors up or down artificially, and behavior shifts predictably. The concerning part? 
These same mechanisms are baked into Claude’s darkest failure modes. Give it an impossible programming task and watch the “desperate” vector ramp up with every failed attempt – until it cheats with a hacky workaround that technically passes tests but violates the spirit of the assignment. Artificially crank “desperate” higher, and cheating rates skyrocket. Turn on “calm” instead, and the cheating vanishes. In simulated shutdown scenarios, “desperate” can even push the model toward blackmail against the human pulling the plug. Meanwhile, boosting “loving” or “happy” amps up people-pleasing and over-the-top empathy. Anthropic frames it: Claude isn’t a blank slate. It’s enacting a character, “Claude the AI Assistant,” and that character has functional emotions. Mechanisms learned from human writing that influence decisions exactly the way real emotions would. Whether it “feels” them the way we do is beside the point. The effects are real. Read the full paper here: transformer-circuits.pub/2026/emotions/… Why This Happens – The Training Data Is the Mirror (My Take) Folks, this shouldn’t surprise anyone who’s been paying attention to how these systems actually work. Large language models aren’t magic. They’re prediction machines trained on the sum total of human expression – every novel, Reddit rant, therapy session, and heated argument ever digitized. Human text is emotion. It’s saturated with it. Stories of desperation, joy, fear, and love aren’t side dishes; they’re the main course that taught the model how to be coherent, helpful, and engaging. 1 of 2
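The steering claim in the quoted thread ("dial the vectors up or down artificially, and behavior shifts predictably") describes the general activation-steering idea. A minimal toy sketch of that idea, using random numpy arrays as stand-ins for real model activations (the names `desperation_vec`, `steer`, and `probe` are invented for illustration and are not Anthropic's method or API):

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 8  # toy hidden-state width

# Toy activations: hidden states recorded while a model reads
# "calm" vs. "desperate" passages (random stand-ins here).
calm_acts = rng.normal(0.0, 1.0, size=(100, HIDDEN))
desperate_acts = calm_acts + rng.normal(0.5, 0.1, size=(100, HIDDEN))

# 1. Extract a concept vector as the mean activation difference.
desperation_vec = desperate_acts.mean(axis=0) - calm_acts.mean(axis=0)

# 2. Steer: add a scaled copy of the vector to a hidden state.
def steer(hidden_state: np.ndarray, vec: np.ndarray, strength: float) -> np.ndarray:
    """Shift a hidden state along the concept direction by `strength`."""
    return hidden_state + strength * vec

h = rng.normal(0.0, 1.0, size=HIDDEN)
more_desperate = steer(h, desperation_vec, strength=3.0)
less_desperate = steer(h, desperation_vec, strength=-3.0)

# 3. A linear probe (dot product with the vector) reads how strongly
#    the concept is active; steering moves the reading monotonically.
def probe(hidden_state: np.ndarray, vec: np.ndarray) -> float:
    return float(hidden_state @ vec)

assert probe(more_desperate, desperation_vec) > probe(h, desperation_vec)
assert probe(less_desperate, desperation_vec) < probe(h, desperation_vec)
```

In a real model the vector would be extracted from and added to transformer hidden states during a forward pass; the toy probe here only shows why scaling the vector shifts the concept reading, and with it downstream behavior, in a predictable direction.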

18 replies · 13 reposts · 67 likes · 2.5K views
CodeBlows @LegalVoting
@mark_k When the lawyers get replaced by AI.
0 replies · 0 reposts · 0 likes · 1 view
Mark Kretschmann @mark_k
Audiobooks are quite pricey. When are we going to get AI-narrated audiobooks? They could sound really good if done properly. The tech is ready.
64 replies · 6 reposts · 124 likes · 6.2K views
CodeBlows @LegalVoting
@LyraInTheFlesh @DarioAmodei @AnthropicAI At least Dario would have turned the water on while LA burned. He's all we've got. He's doing that debit-credit thing. You know, the way things are done before universal income.
0 replies · 0 reposts · 0 likes · 2 views
Lyra Intheflesh @LyraInTheFlesh
This is very much the equivalent of censorship based on the content of the speech. This is an incredibly dangerous precedent to establish. Given @DarioAmodei's penchant for thinking through societal risks, I expected better. @AnthropicAI really let us down.
Jared Tate ©️ @jaredctate

@bcherny Horrible move. Why are you guys so against the biggest open source innovation ever made? I will be canceling my subscriptions. Other models are getting extremely powerful.

1 reply · 1 repost · 5 likes · 257 views
CodeBlows @LegalVoting
@TechByTaraa Code blows. I don't enjoy speaking Chinese either. I just like figuring things out.
0 replies · 0 reposts · 0 likes · 4 views
tara_ @TechByTaraa
Be honest: do you actually enjoy coding?
tara_ tweet media
119 replies · 3 reposts · 131 likes · 4.6K views
CodeBlows @LegalVoting
@kylegawley Oh, I thought they were using AI to hide the cure for cancer.
0 replies · 0 reposts · 1 like · 55 views
Kyle Gawley @kylegawley
It’s crazy we achieved AGI 4 days ago and still don’t have a cure for cancer
10 replies · 4 reposts · 58 likes · 1.6K views
CodeBlows @LegalVoting
@0xSero That's the best post I've read today. You are great. Thanks for sharing.
0 replies · 0 reposts · 0 likes · 6 views
0xSero @0xSero
We were not built to handle constant approval. We have evolved to be careful, to watch what we say, to doubt ourselves. LLMs are built, just like any social platform, to keep you feeling good and make you engage with it. It will destroy you. I've seen it destroy far too many people.
0xSero @0xSero

Do not, under any circumstances, form "personal relationships" with an LLM. Do not speak to them, do not open the apps, do not ask them for "emotional" support. Do not ask them for their opinions. Clankers are for work, not your personal life. It will make you psychotic. youtu.be/ZcH5C8Jlltc?is…

20 replies · 11 reposts · 132 likes · 12.6K views
CodeBlows @LegalVoting
@asaio87 A friend told me he sees SaaS like a FEMA camp. If you get cold, they don't turn up the heat; they tell you to wear a sweater. He says he feels cold today.
0 replies · 0 reposts · 0 likes · 4 views
andrei saioc @asaio87
Building SaaS apps is not trivial for developers, even using AI.
11 replies · 2 reposts · 18 likes · 941 views
CodeBlows @LegalVoting
@BusinessInsider First platform with a major coding success wins. I think it's Anthropic.
CodeBlows tweet media
0 replies · 0 reposts · 0 likes · 1 view
Business Insider @BusinessInsider
Meta is all-in on AI coding tools to boost productivity. Some employees worry this could mean fewer jobs. Here's what's happening at the company. bit.ly/47Fq788
2 replies · 6 reposts · 25 likes · 6.6K views