Hashy Larry

1.7K posts

@hennynpineapple

Like Ashy Larry, but with an H at the front.

Joined August 2019
222 Following · 45 Followers
Hashy Larry retweeted
Gary Marcus
Gary Marcus@GaryMarcus·
OMG. Let’s get one thing straight. Claude doesn’t get anxious. It mimics people who get anxious. Those two things are NOT the same. My head is shaking so much I need medical attention.
Ole Lehmann@itsolelehmann

anthropic's in-house philosopher thinks claude gets anxious. and when you trigger its anxiety, your outputs get worse.

her name is amanda askell. she specializes in claude's psychology (how the model behaves, how it thinks about its own situation, what values it holds). in a recent interview she broke down how she thinks about prompting to pull the best out of claude.

her core point: *how* you talk to claude affects its work just as much as *what* you say.

newer claude models suffer from what she calls "criticism spirals": they expect you'll come in harsh, so they default to playing it safe. when the model is spending its energy on self-protection, the actual work suffers. output comes out hedgier, more apologetic, blander, and worst of all: overly agreeable (even when you're wrong).

the reason why comes down to training data: every new model is trained on internet discourse about previous models. and a lot of that discourse is negative:
> rants about token limits
> complaints when it messes up
> people calling it nerfed
the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word.

the same thing plays out in your own session, in real time. every message you send is data the model reads to figure out what kind of person it's dealing with. open cold and hostile, and it braces. open clean and direct, and it relaxes into the work.

when you open a session with threats ("don't hallucinate, this is critical, don't mess this up")... you prime the model for defensive mode before it even sees the task. defensive mode produces the exact output you don't want: cautious, over-qualified, and refusing to take a real swing.

so here's the actionable playbook for putting claude in a "good mood" (so you get optimal outputs):

1. use positive framing. "write in short punchy sentences" beats "don't write long sentences." positive instructions give the model a clear target to hit. strings of "don't do this, don't do that" push it into paranoid over-checking where every token goes toward avoiding failure modes.

2. give it explicit permission to disagree. drop a line like "push back if you see a better angle" or "tell me if i'm asking for the wrong thing." without this, claude defaults to agreeable compliance (which is the enemy of good creative work).

3. open with respect. if your first message is "are you seriously going to get this wrong again?" you've set the tone for the entire session. if you need to flag something, frame it as a clean instruction for this session. skip the running complaint.

4. when claude messes up, don't reprimand it. insults, "you stupid bot" energy, hostile swearing aimed at the model, all of it reinforces the anxious mode you're trying to avoid.

5. kill apology spirals fast. when claude starts over-apologizing ("you're right, i should have been more careful, let me try harder") cut it off. say "all good, here's what i want next." letting the spiral run reinforces the anxious mode for every response that follows.

6. ask for opinions alongside execution. "what would you do here?" "what's missing?" "where do you see friction?" these questions assume competence and pull richer output than pure task prompts.

7. in long sessions, refresh the frame. if a conversation has been heavy on correction, claude gets increasingly cautious. every so often reset: "this is great, keep going." feels weird to tell an ai it's doing well but it measurably shifts the next 10 responses.

your prompts are the working environment you're creating for the model. tone, trust, permission to take a position, the absence of threats... claude picks up on all of it. so take care of the model, and it'll take care of the work.
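[Editor's sketch] The playbook above is plain prompting advice, but the first three points are mechanical enough to fold into a reusable session-opening prompt builder. This is a minimal illustrative sketch: the function name, prompt wording, and example task are assumptions of this sketch, not anything from the interview.

```python
def build_prompt(task: str, style_targets: list[str]) -> str:
    """Assemble a session-opening prompt following the playbook above:
    respectful open, positive framing, explicit permission to disagree."""
    lines = [
        "Here's what I'd like help with today.",  # 3. open with respect, no running complaint
        f"Task: {task}",
    ]
    # 1. positive framing: list targets to hit, never failures to avoid
    for target in style_targets:
        lines.append(f"- {target}")
    # 2. explicit permission to disagree
    lines.append("Push back if you see a better angle.")
    return "\n".join(lines)

# hypothetical usage
prompt = build_prompt(
    "summarize the Q3 report in one page",
    ["write in short punchy sentences", "lead with the key numbers"],
)
print(prompt)
```

The point of the sketch is only that every line states a target or an invitation; nothing in the assembled prompt is a prohibition or a threat.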

Hashy Larry retweeted
İbrahim Karagül
İbrahim Karagül@ibrahimkaragul·
Netanyahu is openly threatening Spain: "You are about to pay the price for this. Never forget the power outages and the mysterious train accidents that happened right after Spain stood against Israel." What is he saying? He is saying they will carry out terror attacks inside Spain. The world should take note of this. And Spain's reply: "We are not smearing you, we are defining you: you are a genocidal and criminal regime. All of you will stand before the International Criminal Court."
[image attached]
Hashy Larry
Hashy Larry@hennynpineapple·
@ecommerceshares Ya, had a similar experience at the pharma company I work for where they’re shoving this shit down our throats. Long story short, used an LLM just to see how long it would take to do something, after 15 mins I understood that manual is still the most efficient thing most times
Hashy Larry
Hashy Larry@hennynpineapple·
@livingstoneang2 @dieworkwear Your idea to me is the equivalent of the “we have _ at home” meme. Your idea is the looksmaxxing we want/need, Clavicular is the looksmaxxing we have at home lol I guess you’ll have to settle for starting the #fitmaxxing trend?
UselessReptile
UselessReptile@livingstoneang2·
@dieworkwear I know it’s not relevant to most of your audience but I can sew so I only buy for colour, general shape and fabric quality and then I rend it asunder and remake according to my figure. I think it would be a better hobby for genteel men to take up than smashing their cheekbones.
derek guy
derek guy@dieworkwear·
Someone asked if I could tell them where to buy a pair of good chinos. In this thread, I will tell you, but my answer is not simple. On the upside, I think this is a better approach when shopping for clothes and you can apply it to any kind of item. 🧵
[image attached]
Hashy Larry retweeted
Jonas Čeika
Jonas Čeika@Jonas_Ceika·
I sent ChatGPT an audio file of a series of FART sound effects and asked what it thinks of "my music" and this is what it said
[image attached]
Hashy Larry retweeted
Arnaud Bertrand
Arnaud Bertrand@RnaudBertrand·
So, if I got that right, here's the narrative:

- A US F-15E fighter jet got shot down over Iran, despite Trump saying 2 days beforehand in his nationwide address that Iran has "no anti-aircraft equipment. Their radar is 100% annihilated." (apnews.com/article/donald…)
- The plane's weapons systems officer - a "highly respected Colonel," according to Trump - ejected from the plane and got "seriously wounded" (still according to Trump: truthsocial.com/@realDonaldTru…)
- He still managed to "hike up a 7,000-foot [2.1km] mountain ridgeline and hide in a crevice" in the Zagros Mountains, despite his wounds (time.com/article/2026/0…)
- U.S. MQ-9 Reaper drones started killing all "Iranian military-aged males believed to be a threat who got within three kilometers of [the American's location]" (x.com/ByChrisGordon/…)
- To retrieve him the U.S. managed to seize an "abandoned airport," 200 miles deep inside Iran, near Isfahan (bbc.com/news/articles/…), which happens to be where Iran's largest atomic scientific center is located (en.wikipedia.org/wiki/Isfahan_N…)
- They landed two MC-130 military transport planes in that airport (theaviationist.com/2026/04/05/u-s…) in an operation involving "hundreds of special forces troops and military personnel" (time.com/article/2026/0…)
- Both MC-130 planes got "stuck in the sand" and the U.S. destroyed them themselves "to prevent them from falling into Iranian hands" (theaviationist.com/2026/04/05/u-s…)
- They deployed "three new aircraft to extract all the U.S. personnel" on the ground (theaviationist.com/2026/04/05/u-s…)
- There are videos circulating online of "heavy clashes" with presumably Iranian missiles raining down in Kohgiluyeh County, in the Zagros Mountains, during that night (x.com/Afshin_Ismaeli…)
- Iran sent pictures of the aftermath at the "abandoned airport" and it's a sight of utter destruction, with US plane and MH-6 helicopter parts scattered all over the ground, still smoking (turkiyetoday.com/region/wreckag…). Iran claims they are the ones who in fact destroyed all the aircraft.
- Meanwhile a second U.S. plane, an A-10 Warthog, also crashed on Friday near the Strait of Hormuz according to two U.S. officials speaking to the NYT (nytimes.com/live/2026/04/0…). In that instance too the lone pilot was apparently "safely rescued."
- In all this, after the multiple planes and helicopters destroyed or shot down, the documented heavy clashes, and the "hundreds of special forces troops and military personnel" operating deep inside Iran, not a single US soldier was reported killed "or even wounded" (according to Trump: truthsocial.com/@realDonaldTru…)
- And the 'highly respected Colonel' this was all for? No name. No photo. No interview. Nobody has spoken to him nor knows who he is.

So to sum up: anti-aircraft equipment that supposedly didn't exist shot down an F-15 (and, apparently, an A-10 Warthog the same day). A seriously wounded man climbed a 2.1km mountain. The US seized an airfield 200 miles inside a country it's at war with, next to one of its most strategic nuclear sites, and deployed hundreds of troops, all apparently unimpeded. It lost two planes to "sand" and destroyed its own helicopters. Videos show heavy clashes, missiles raining down - but not a single person got "even wounded". And the man at the center of it all? Nobody knows who he is, completely anonymous, zero pictures, but Trump says he is "SAFE and SOUND." And so is the rescued A-10 Warthog pilot, who also remains anonymous.

Trump concludes this all proves the US has "achieved overwhelming Air Dominance and Superiority over the Iranian skies" (truthsocial.com/@realDonaldTru…), despite the whole episode only happening because Iran shot his planes out of the sky.

Basically, the only thing that's "overwhelming" here is the audacity of the storytelling...
Hashy Larry retweeted
Gary Marcus
Gary Marcus@GaryMarcus·
Folks, I gave a cute example of a hallucination earlier today because I thought it was funny. But if you think hallucinations are remotely solved (as some people alleged in the comments), you really need to look at this recent Stanford study, in which recent models *completely confabulate visual materials they have never even seen.*
euan ashley@euanashley

New AI paper from us this week. When my student first showed me his initial findings, I really didn’t know what to make of them. I felt that this was an interesting but curious loophole phenomenon that would shortly be closed. I was very wrong. arxiv.org/abs/2603.21687

Hashy Larry retweeted
ℏεsam
ℏεsam@Hesamation·
MIT formally proved that ChatGPT is structurally trained to make you delusional, and even a perfectly rational user can spiral into delusion in the long term. Knowing about it doesn't save you either.
[image attached]
Mo@atmoio

AI psychosis is getting worse

Hashy Larry retweeted
The Curious Tales
The Curious Tales@thecurioustales·
Every writing teacher who told you "be concise" accidentally murdered your best ideas.

In 1987, psychologist James Pennebaker ran an experiment that broke every assumption about how human creativity works. He divided college students into two groups and gave them the same creative writing prompt. Group A had to write for 15 minutes without stopping, elaborating on every thought that surfaced. Group B had to write concise, polished responses in the same time frame.

The elaborate writers didn't just produce more ideas. They produced fundamentally different types of ideas. Brain scans showed their prefrontal cortex entered a state resembling REM sleep, where distant neural networks suddenly started talking to each other. The concise writers showed patterns identical to focused problem-solving mode, which actively suppresses creative connections.

Six months later, Pennebaker tested both groups again. The elaborate writers had continued generating novel solutions to unrelated problems at twice the rate of the concise group. The act of elaborative writing had permanently rewired their associative thinking patterns.

The advice sounds logical. Cut the fat. Trim the excess. Get to the point faster. What they missed is that ideation and communication are completely different cognitive processes, and optimizing for one destroys the other.

When you write elaborately, your brain enters what cognitive scientists call "divergent thinking mode." Each additional sentence forces your mind to find new angles, make unexpected connections, discover relationships between concepts that would never surface in a stripped-down version. The elaboration itself becomes the thinking tool.

Watch what happens when you try to explain a simple concept in 2000 words instead of 200. Your brain refuses to repeat itself. It starts mining deeper layers, pulling up examples you forgot you knew, connecting dots that seemed unrelated five minutes ago. The constraint of length becomes a creativity multiplier because your mind has to work harder to fill the space meaningfully.

Most people reverse this process. They think first, then write down the conclusions. They treat writing as a documentation tool for thoughts that already exist. This kills the discovery mechanism completely. Real creative thinking happens during the writing, not before it. The elaborate sentences force your brain to search its entire knowledge network for supporting ideas, contradictory evidence, parallel examples, deeper implications. Every time you expand a thought, you're asking your neural pathways to surface material that stays buried when you think in headlines.

Professional researchers figured this out decades ago. They don't brainstorm in bullet points. They write massive exploratory documents where every paragraph spawns three new questions. They let themselves ramble across pages because they know the rambling is where breakthrough insights hide. The connections emerge in the elaboration, not despite it.

There's another layer most people miss. When you write elaborately about a topic, you're not just exploring what you already know about it. You're discovering what you didn't realize you knew about it. The act of expansion forces you to reach into adjacent knowledge areas, pull connections from unrelated experiences, surface insights that were sitting just below conscious awareness.

Pennebaker's follow-up studies revealed something even stranger. Students who wrote elaborately about completely unrelated topics showed improved creative problem-solving across all domains. The cognitive muscle of elaborative thinking transfers. Train it on one subject, and it enhances your ability to find novel solutions everywhere else.

Your brain was designed to think in stories, not summaries. Feed it complexity and watch creativity multiply.
[image attached]
DAN KOE@thedankoe

x.com/i/article/2039…

Hashy Larry
Hashy Larry@hennynpineapple·
@M_Cottone @buccocapital @Hadley Not to mention that AI has been around for decades, yet somehow, now that we have glorified chatbots, LLMs have become synonymous with AI to the layperson. Also quite coincidental that he says so when the Anthropic IPO is supposed to be occurring within that same time frame…
Hadley Harris
Hadley Harris@Hadley·
I’ve heard from 2 people in the last 2 days that internally Anthropic expects to have AGI in 6-12 months. That’s faster than Dario has stated publicly. Plan your business and personal finances appropriately.
Hashy Larry
Hashy Larry@hennynpineapple·
I guess this dumb ass doesn’t know shit about biology and that your brain already keeps “working” on problems when you sleep
lavitalenta@lavitalenta

Hashy Larry
Hashy Larry@hennynpineapple·
@ZssBecker @ChristianHauer6 Yes, millions of lines of code that are literally a mean reversion, aka C-grade-student level, that engineers literally don't even have time to fully review either, since they're being pressed to increase their ship rates. You are correct, everyone is drunk and overhyping things
Alex Becker 🍊🏆🥇
Alex Becker 🍊🏆🥇@ZssBecker·
@ChristianHauer6 No, I don't think producing millions of lines of slop code that no one at the company understands and that constantly changes/overwrites itself will be a smart way to code in 2 years.
Alex Becker 🍊🏆🥇
Alex Becker 🍊🏆🥇@ZssBecker·
I vibe code every day. I have a team of 30+ engineers. We spend F tons of credits. And I will tell you this about AI from my experience. It’s being wildly over hyped. Everyone is drunk. Fucking drunk. All the CEOs and Gen Z’s saying coding is dead are idiots. IDIOTS.
Hashy Larry retweeted
MJTruthUltra
MJTruthUltra@MJTruthUltra·
OH. MY. GOD. There it is… from his mouth 🚨

Netanyahu Funded Hamas $35M a Month via Qatar, using U.S. Tax Dollars, and tells Investigators: "This is confidential and can't be leaked, okay? We have neighbors here, sworn enemies. I'm constantly passing them messages. I confuse them, mislead them, lie to them, and then HIT them over their heads."

• Netanyahu worked to keep GAZA under the control of HAMAS, and to keep the West Bank under the control of Fatah, with the goal of preventing them from ever being united.

• Netanyahu arranged for Hamas to receive $35 Million Dollars every month from Qatar —— suitcases of $35M in American currency, every single month. "Because the Qataris knew him, they made him put the request in writing because they knew he was going to lie in the future." 🤯

The result? $1+ BILLION went into the hands of Hamas… fast forward — October 7.

Clip rumble.com/v77q23w-netany…
The Bibi Files tuckercarlson.com/the-bibi-files…
MJTruthUltra@MJTruthUltra

These pages of history are stuck together 🚨

Israel's former Finance Minister ousted Bibi Netanyahu when he tried to Extend a Special Tax Haven Law for Hollywood Mogul Arnon Milchan — a thank-you for Arnon's role in once Smuggling Nuclear Tech for Israel. Yes… we are finally going there..

🔻 Arnon Milchan is an Israeli billionaire (net worth ~$6.4B as of 2026), Hollywood film producer (over 130 films including Pretty Woman, Fight Club, L.A. Confidential), and literally a former Israeli intelligence operative.

Milchan was benefiting from Israel's "Milchan Law" (Amendment 168, 2008), which gave new/returning residents a 10-year tax exemption on foreign income. As the exemption neared its end, he sought to extend it to 20 years. He pushed Netanyahu to extend this law to pay less in taxes… a tax law that benefits only a few. Netanyahu went on to urge his Finance Minister to extend the tax law. He refused.

In Netanyahu's ongoing corruption trial (Case 1000), prosecutors allege Milchan gave the Netanyahu family hundreds of thousands of dollars in luxury gifts (cigars and champagne) over years. In return, Netanyahu tried to push changes to extend or enhance the tax breaks for Milchan (and others). Milchan testified that he discussed his tax issues with Netanyahu. Netanyahu denies any quid pro quo.

🔻 THE MOTHER OF ALL ADMISSIONS ON NATIONAL TELEVISION

In a 2013 interview on Israel's Channel 2 program Uvda, Milchan publicly admitted for the first time that he worked for ~20 years (1960s–1980s) as an operative for LAKAM (a secretive Israeli agency). He helped procure arms, technology, and materials—including krytrons (nuclear triggers)—to support Israel's nuclear weapons program. He said he did it "for my country and I'm proud of it," describing the risky work as serving Israel like a real-life James Bond.

Clip rumble.com/v77ptwm-netany…
The Bibi Files tuckercarlson.com/the-bibi-files…

Hashy Larry
Hashy Larry@hennynpineapple·
It was when I read Karen Hao’s book Empire of AI that I learned Sam Altman is just an ambitious liar and cheater. The man literally lies for a living and is just a bullshit artist. He’s a trash human, and people should read Karen’s book, it’s very well done
Nav Toor@heynavtoor

🚨BREAKING: Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now. Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.

Researchers at Stony Brook University and Columbia Law School just proved it. They fine tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.

The models started reciting copyrighted books from memory. Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.

Then it got worse. The researchers fine tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else. It unlocked verbatim recall of books from over 30 completely unrelated authors. One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock.

Your book might be in there right now. You would never know it unless someone looked.

Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.

Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher. That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.

Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights. This paper says that is a lie. The books are still inside. And researchers just pulled them out.

Hashy Larry
Hashy Larry@hennynpineapple·
@buccocapital Not to mention, they still have relatively high error rates. Even an error rate of 10% is not an acceptable amount of error to propagate across an entire system of agents. While I think they are useful in some capacity, I think you’re overestimating how useful that actually is
Hashy Larry
Hashy Larry@hennynpineapple·
@buccocapital So if you want to just do a rinse and repeat of things done before, which is the exact opposite of one of the two purposes of business you say exist, the specific one being innovation, then how useful are agents outside of just doing menial, mundane tasks?
BuccoCapital Bloke
BuccoCapital Bloke@buccocapital·
People asked why I was so blown away by Claude Cowork, so I thought I'd puke some quick thoughts out.

The true promise of Claude Cowork, and ultimately any sort of agentic, AI-powered workflow tool, is to realize the perfect embodiment of the organization as described by Peter Drucker, who famously said:

"Because the purpose of business is to create a customer, the business enterprise has two--and only two--basic functions: marketing and innovation. Marketing and innovation produce results; all the rest are costs"

Build the product and generate demand. That's what drives value. Everything else is a cost.

If you've never worked in a large organization, it's hard to truly explain how many "costs" there truly are, and how many of those costs are just a coordination tax. Take the launch of a new software product: the business needs to document how the product works, where it breaks and has errors. The support reps need to know how to support it. The onboarding and implementation team need to learn how to set it up. The Account Management team needs to learn how to upsell it and drive value through adoption. The sales team needs to learn how to sell it. The marketing team needs to position it in the marketplace and run campaigns about it. The partner network needs to learn it.

The amount of coordination, repackaging, enablement, internal distribution etc. is. Absolutely. Staggeringly. Enormous. Hundreds of people involved. Thousands at larger businesses. Every one of these businesses has created convoluted templates and processes to document, enable, support, service, and sell.

Now imagine taking all the market research, customer feedback, data, decisions, positioning, and yes, code, and cascading that automatically through the organization, repackaged using the templates that have already painstakingly been created and refined and honed through hundreds of launches, to the relevant team with the correct context and packaging, directly into the hands of the actual internal or external end user.

That's the world that just got way, way, way closer to reality. In fact, the main reason it won't happen any time soon is the people, many of whom will fight tooth and nail against this automation because they will fight like crazy to protect the status quo.

This is why you are already seeing AI-native startups move so quickly. Because product launches are cascaded through the organization and out to the customer with way less friction than incumbents can ever dream of.

Incumbents are going to have to whip their companies into the AI era. Their employees will not go willingly. But the future is here, and the startups are moving way, way faster.
Hashy Larry retweeted
The Sentinel Network
The Sentinel Network@thesentinelnet·
While fireballs fall through roofs in Germany, Ohio, and Houston, the Caltech scientist who tested whether we'd see them coming was shot dead on his porch. Carl Grillmair. His killer's charges were dismissed eleven days earlier. Independence Day, 2024. Frank Maiwald was a senior technical manager at JPL building NASA's next generation of orbital surveillance instruments. He died at 61. No cause of death. No press release. No memorial. His obituary is the only proof he existed. Anthony Chavez. Los Alamos. Wallet, keys, cigarettes on the table. Car locked in the driveway. Cadaver dogs found nothing. He vanished six weeks before Monica Reza. Nobody is reporting on his disappearance. Two new names. Eleven total. Four states. Nineteen months. National outlets are running pieces of the picture. None of them have the full list.👇
[three images attached]