Ezra
@Revan
228 posts
US · Joined March 2007
1.2K Following · 359 Followers
Ezra retweeted
Καλός
Καλός@realKalos·
Got my answer. Full body chills.
[image]
KickChamp👑@Kick_Champ

#2 ranked CHAD Androgenic has LANDED in AMERICA and is actively SEARCHING for the ASU FRAT LEADER who FRAME MOGGED Clavicular 👀

52 replies · 309 retweets · 13.8K likes · 469.9K views
Ezra retweeted
JT
JT@jiratickets·
Knew a dude who emailed the IT guy directly instead of submitting a ticket and they blew his shit smoove off
486 replies · 8K retweets · 181.7K likes · 5.8M views
Ezra
Ezra@Revan·
@MurrayHillGuy1 Until dating services have financial incentives that align with partnering people instead of maximizing screen time for lonely men, none of these or other well intentioned ideas will be implemented. As it is, you're the guy politely requesting a thermostat adjustment in hell.
0 replies · 0 retweets · 0 likes · 67 views
Murray Hill Guy
Murray Hill Guy@MurrayHillGuy1·
Here’s what I think could make dating apps better:
• 20 swipes a day. If you can’t decide from 20 people, you don’t really want anything, you want options.
• 50 inbound likes max. Once you hit it, you’re off the feed for future people until you clear…
• 3 months on, 3 months off (said this before). If you’ve been “on Hinge” for years, that’s a lifestyle issue and you’re the problem.
• Max 5 messages. Get off the app… stop pen paling.
• No “how was your day/weekend.” Blocked. Be interesting or be gone. Auto ban.
• Photos from the last 18 months only. No pre-COVID, pre-weight-gain, pre-hairline fraud.
• Anti-validation mode. No like counts. Dating apps should optimize for dates, not endless dopamine likes!
Anything to add?
139 replies · 19 retweets · 1.7K likes · 277.3K views
Ezra
Ezra@Revan·
@fleshsimulator Not that two things can't be true at the same time but I'm not interested in hearing about federal jackboots from people who never gave a shit about Vicki Weaver
0 replies · 0 retweets · 0 likes · 10 views
Simulator di tutti i Simulatori
Simulator di tutti i Simulatori@fleshsimulator·
These CBP/ICE shootings make me sad because most of these seem like people who think they’re doing the right thing and have just been dangerously, dangerously misled about the reality of fucking with the feds. Right-wing people already know that the Feds will truly, inexcusably, viciously fuck you up in ways that you can’t even comprehend, and then get away with it. Left-wingers are learning that currently.
180 replies · 247 retweets · 4.2K likes · 149.2K views
Landeur 🏴󠁧󠁢󠁥󠁮󠁧󠁿
I've slowed this down. In the huddle, one or more ICE officers shout 'GUN'. This causes another officer to draw his sidearm. Another officer removes the man's sidearm and then walks away, but it is discharged negligently in the process (see for yourself). The Sig P320 is notorious for NDing, but typically only when dropped. That then triggers the other agents to think that the man on the ground, Alex Pretti, has begun shooting, so they neutralise the perceived threat. If this is what happened, it's an incredibly unfortunate accident in Minneapolis.
4.2K replies · 3.9K retweets · 26.8K likes · 6.2M views
Ezra
Ezra@Revan·
@9mmsmg If sig making a striker fire fulfills a prophecy, just imagine the butterfly effect if keltec made something useful
0 replies · 0 retweets · 2 likes · 235 views
9mmSMG
9mmSMG@9mmsmg·
I don't know if that's what happened but can you imagine if a sig p320 went off on its own and the downstream effect was a literal civil war? Sig entering the striker fire market collapses the entire country. Butterfly effect
Chase Geiser@realchasegeiser

1. One of the officers shouts, “GUN”
2. Another officer disarms the man in response
3. The shitty gun goes off uncommanded
4. The remaining officers shoot the protester in response to hearing the discharge

315 replies · 863 retweets · 14.6K likes · 794.1K views
Ezra
Ezra@Revan·
@KelTecOfficial I mean. Yeah I believe you when you said you didn't ask a focus group.
0 replies · 0 retweets · 0 likes · 6 views
KelTec
KelTec@KelTecOfficial·
Announcing the all-new PR-3AT, the stripper-clip-fed pistol in .380 AUTO. Available now! As KelTec celebrates its 35th anniversary, we introduce the all-new PR-3AT: the evolution of the pistol that defined the ultra-compact category with the original P-3AT. Alongside the launch, we are also releasing the PR-3AT Defender, a Factory Exclusive variant available now in limited quantities online.
342 replies · 337 retweets · 3.8K likes · 709.7K views
Ezra retweeted
Ink Blot
Ink Blot@inkblotistan·
looksmatched couple
11 replies · 35 retweets · 678 likes · 23K views
Ezra
Ezra@Revan·
@Bricktop_NAFO Did you think there's like an Indiana Jones warehouse full of every car somebody's died in, preserved for future generations? Or, what, are you not convinced she's dead?
0 replies · 0 retweets · 0 likes · 3 views
Bricktop_NAFO
Bricktop_NAFO@Bricktop_NAFO·
They didn’t even wrap the car to preserve evidence. They didn’t even cover the window. They just drove the car in which they assassinated an American citizen down a highway with the window open. They had no intention of doing a thorough investigation.
2.6K replies · 12.7K retweets · 71.3K likes · 2.5M views
Ezra
Ezra@Revan·
@utacult The other thing captured well here, surely just a coincidence, is the total indifference and narcissism of doctors in the face of people with serious medical problems more difficult to diagnose than a broken arm. Hmm.
0 replies · 0 retweets · 0 likes · 4 views
Irene
Irene@utacult·
pitt reminding you of the casual racism doctors face every day #thepitt
[image]
1.3K replies · 3K retweets · 84.2K likes · 9.4M views
Ezra
Ezra@Revan·
This is largely my experience both with using AI for software development and for personal work. Getting good results usually happens within the first 3 prompts or not at all, and depends heavily on narrowly defined success criteria and, usually, brevity.
Robert Youssef@rryssf_

This paper quietly explains why so many people feel like LLMs are “almost smart, but somehow wrong.”

The core claim in this paper is very uncomfortable: most failures are not about missing information. They are about misreading intent even when all the relevant context is present. The authors show that LLMs are very good at mapping text to plausible responses, but surprisingly weak at inferring what the user is trying to achieve. Two prompts can contain nearly identical information, yet imply very different goals. Humans pick this up instantly. Models often do not.

The paper separates “context understanding” from “intent understanding.” Context is the literal content: entities, constraints, instructions. Intent is latent: priorities, tradeoffs, what matters most if things conflict. Current models optimize for surface-level alignment, not goal inference.

One experiment makes this painfully clear. Users asked questions that could reasonably be interpreted as either exploratory or decision-oriented. The models answered confidently but chose the wrong mode at high rates, giving verbose explanations when users wanted a recommendation, or giving a decisive answer when users were clearly still exploring. The information was correct. The response was wrong.

Another failure mode is over-literal instruction following. When users implicitly expect the model to fill gaps or challenge assumptions, the model instead treats the prompt as a closed specification. The result looks obedient but misses the point. This is not hallucination. It is misaligned helpfulness.

The authors also test paraphrasing. When the same intent is expressed with different phrasing, model behavior shifts significantly. That tells us the model is anchoring on linguistic form, not reconstructing an underlying goal. “Humans normalize phrasing differences. Models react to them.”

What’s striking is that longer context often worsens intent alignment. Adding more background increases the chance the model optimizes for local relevance instead of global purpose. More tokens give the illusion of understanding while diluting the signal of what the user actually wants.

The paper argues this is not solvable by bigger context windows or better prompting alone. Intent is not explicitly stated most of the time. It has to be inferred, tracked, and sometimes revised mid-conversation. That requires models to reason about users, not just text.

The implication is brutal for agents and copilots. If a system cannot reliably infer intent, autonomy becomes dangerous. Tool use amplifies mistakes. Confident execution based on a misunderstood goal is worse than asking a clarifying question.

The authors suggest future work should treat intent as a first-class object: something to model, update, and verify explicitly. Not just “what was said,” but “what outcome is being optimized.”

Until then, many AI systems will continue to feel smart, fast, and subtly wrong. This paper explains why that feeling keeps coming up.

Paper: Beyond Context: Large Language Models Failure to Grasp Users Intent

0 replies · 0 retweets · 1 like · 169 views
Ezra retweeted
Lomez
Lomez@L0m3z·
@RichardHanania You should seriously reckon with the fact that your inability to understand fundamental moral intuitions felt by 99% of humanity disqualifies you from making recommendations for how those people organize their societies
107 replies · 491 retweets · 14.1K likes · 346.1K views
Ezra
Ezra@Revan·
@DJSnM That would be meaningful and helpful though
0 replies · 0 retweets · 0 likes · 315 views
Ezra
Ezra@Revan·
@WizardGoesBoom I don't know if I can really articulate this but there was something about the vibe of magic and spells in the AD&D books that just isn't there anymore. Maybe it was the inaccessibility of the system itself enhancing the feel of "magic".
0 replies · 0 retweets · 2 likes · 15 views
Wizard Goes Boom
Wizard Goes Boom@WizardGoesBoom·
Cantrips first came out in Unearthed Arcana in 1985. They were just simple little spells. While 5e cantrips are OP by comparison, I still appreciate that in 5e wizards are more useful at lower levels, especially when you burn through leveled spells.
[image]
49 replies · 13 retweets · 257 likes · 7.3K views
Ezra
Ezra@Revan·
@jxnlco Honestly don't try to help these animals.
0 replies · 0 retweets · 1 like · 88 views
jason liu
jason liu@jxnlco·
holy shit
[image]
466 replies · 271 retweets · 6K likes · 1.3M views
Reno May
Reno May@RenoMayGuns·
The P320: it practically shoots for you.
12 replies · 1 retweet · 93 likes · 5.1K views