Esteban
@estebarb
20.3K posts
Costa Rica · Joined September 2009
2.2K Following · 227 Followers

Pinned Tweet
Esteban@estebarb·
hachyderm.io/@Estebarb
0 replies · 0 reposts · 5 likes · 0 views
Esteban@estebarb·
Is it just me, or did they repaint the Banco Nacional in SJ? I remember it being a different shade.
0 replies · 0 reposts · 0 likes · 5 views
Esteban reposted
Nick Kapur@nick_kapur·
Powerful words from University of Pennsylvania students against their university's headlong rush to embrace AI:
[image]
36 replies · 1.5K reposts · 7.4K likes · 213.1K views
Esteban@estebarb·
@icmlconf Is it possible to edit answers after posting (to add requested results that require further training)? I'm wondering whether to answer right away or to wait until I have everything before answering.
0 replies · 0 reposts · 4 likes · 1K views
ICML Conference@icmlconf·
Preliminary reviews are available for #ICML2026! Authors have until March 30 to respond to the reviews. Reviewers are required to acknowledge responses, and there is the opportunity for one more round of back-and-forth interaction between authors and reviewers, ending April 7.
3 replies · 2 reposts · 74 likes · 11.6K views
Esteban@estebarb·
At ICML some reviewers are asking questions I can answer right away, and others that require running experiments. Does anyone know whether I can reply now and reply again once I have the results they asked for, or can I only post once?
2 replies · 2 reposts · 5 likes · 1.2K views
Esteban reposted
Daniel Hnyk@hnykda·
LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains a litellm_init.pth with base64-encoded instructions to send every credential it can find to a remote server and to self-replicate. Link below.
300 replies · 2.3K reposts · 9.3K likes · 5.3M views
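For context on why a `.pth` file is such an effective implant: Python executes any line beginning with `import` in a site-packages `.pth` file at interpreter startup, before your own code runs. A minimal audit sketch (the helper name is mine, not a LiteLLM or pip tool) that lists such lines for manual inspection:

```python
# .pth lines starting with "import " are executed by the interpreter at
# startup -- the mechanism the compromised litellm_init.pth abuses.
import site
import pathlib

def find_executable_pth_lines():
    """Return (path, line) pairs for .pth lines that execute code at startup."""
    hits = []
    for d in site.getsitepackages() + [site.getusersitepackages()]:
        for pth in pathlib.Path(d).glob("*.pth"):
            try:
                for line in pth.read_text(errors="replace").splitlines():
                    if line.startswith(("import ", "import\t")):
                        hits.append((str(pth), line))
            except OSError:
                pass  # skip unreadable files
    return hits

for path, line in find_executable_pth_lines():
    print(path, "->", line[:80])
```

Legitimate tools (e.g. editable installs) also use this mechanism, so hits are not automatically malicious; the point is that every such line deserves a look.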
Esteban reposted
jon drake@DrakeGatsby·
You have to stop assuming he’s trying to do a good job. The current administration is not interested in governing. They are only worried about extracting as much wealth as possible and insulating themselves from the consequences
Ryan@BanUnsweetTea

It genuinely seems Trump's presidency is completely falling apart. Basic government services aren't getting executed. He's stuck in a quagmire in the Middle East. Job growth is negative and inflation is accelerating. Gas is 4-5 dollars a gallon. What a mess

183 replies · 7.3K reposts · 40.7K likes · 720.8K views
Esteban reposted
Vivian@suchnerve·
This is called the Demonize, Defund, Destroy Cycle btw. 1. Demonize a government service 2. Defund the service to the point that it can no longer function properly 3. Abolish or privatize the service, using the excuse that it’s clearly just a bad idea that can’t possibly work
Asscela Express 🚅@biz_socks

Another cornerstone of Project 2025 and Trump republicanism is to intentionally break government services and say "see, government doesn't work" so they can try to privatize essential services with taxpayer money to enrich their friends

33 replies · 4.7K reposts · 16.6K likes · 223.1K views
Esteban reposted
Aakash Gupta@aakashgupta·
Tinder’s co-founder admitted the swipe was modeled on B.F. Skinner’s pigeon experiments. Skinner gave pigeons food at random intervals. The pigeons pecked compulsively, believing their behavior caused the reward. Badeen saw that study and built a dating app on it.

1.6 billion swipes per day. 90 minutes of average daily use. Eleven logins. A male match rate of 0.6%, which means one match per 167 swipes. The behavioral loop works exactly as Skinner predicted.

Match Group made $3.5 billion last year. Their paying subscribers declined seven quarters in a row while revenue per payer climbed 17%. The people who find someone leave. The people still looking pay more. The product is optimized for the second group.

A gala had the opposite incentive. The host’s reputation rose when people connected. The app’s revenue rises when they don’t. We replaced a system designed to bring people together with a system designed to keep them searching.
Cjay@ced_jayy

somehow we lost galas and ended up with dating apps

36 replies · 302 reposts · 2.4K likes · 302.6K views
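The match-rate arithmetic in the thread checks out: a 0.6% per-swipe match probability is a variable-ratio schedule averaging one reward per roughly 167 attempts, the schedule type Skinner found most compulsion-inducing. A one-line sanity check using only the tweet's own numbers:

```python
# 0.6% match rate -> expected swipes per match is the reciprocal.
match_rate = 0.006
swipes_per_match = 1 / match_rate
print(round(swipes_per_match))  # -> 167
```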
Esteban reposted
eL Zyo (Mina Simp Era)
ISN'T IT STRANGE THAT OUT OF NOWHERE THE WHOLE THERIANS TOPIC DISAPPEARED, RIGHT? Who would have thought that when they stop pushing an artificially inflated topic it vanishes from everyone's lives, right? Curious, very curious, uh-huh
79 replies · 2.6K reposts · 20.8K likes · 173.3K views
Esteban@estebarb·
@elvin_not_11 @wowstartsnow I think it is still possible to close a window by double-clicking the top-left corner, a holdover from the pre-Windows 95 era.
0 replies · 0 reposts · 0 likes · 92 views
Elvin@elvin_not_11·
it's beautiful that I can traverse through 25 years of UI design history by clicking 3 times on Windows 11.
[image]
279 replies · 2.4K reposts · 46.5K likes · 1M views
Esteban reposted
Jameson Lopp@lopp·
Consider that the dumbest people you know are repeatedly being told "You're absolutely right!" by LLMs.
245 replies · 2.3K reposts · 23.9K likes · 647.7K views
Esteban reposted
Sukh Sroay@sukh_saroy·
🚨 BREAKING: You asked AI to improve your writing. It changed what you were actually saying. New research just proved it. In a controlled study, heavy AI writing assistance led to a 70% increase in essays that gave no clear answer to the question being asked. Not unclear writing. Neutral writing. The kind that sounds polished but commits to nothing.

Here's what makes this worse: Researchers took essays written in 2021, before ChatGPT existed, and asked an LLM to revise them based on real expert feedback. The instruction was simple: fix the grammar. The model changed the meaning anyway. Every time. It can't help it. The training pushes toward inoffensive, agreeable, averaged-out text. That's not a bug they can patch. It's the objective function.

And then there's the peer review finding. 21% of reviews at a recent top AI conference were AI-generated. Those reviews scored papers a full point higher on average. They also placed significantly less weight on clarity and significance, the two things peer review is supposed to evaluate.

So we're not just talking about your email sounding a little corporate. We're talking about AI quietly flattening scientific discourse. Laundering opinions into non-answers. Replacing your voice with the mean of everyone's voice.

The industry keeps asking: is AI-written content detectable? Wrong question. The right question is: what are we losing when a billion people let the same model edit their thinking?
[image]
56 replies · 291 reposts · 769 likes · 45.6K views
Esteban reposted
Rohan Paul@rohanpaul_ai·
Wharton’s latest AI study points to a hard truth: the “AI writes, humans review” model is breaking down. “Just review the AI output” doesn't work anymore; our brains literally give up. We have started doing “cognitive surrender” to AI: reviewing AI output is not a reliable safeguard when cognition itself starts to defer to the machine, when you stop verifying what the AI tells you and you don't even realize you stopped.

It's different from offloading, like using a calculator. With offloading you know the tool did the work. With surrender, your brain recodes the AI's answer as YOUR judgment. You genuinely believe you thought it through yourself.

The study says AI is becoming a third thinking system, and people often trust it too easily. You know Kahneman's System 1 (fast intuition) and System 2 (slow analysis)? They're saying AI is now System 3, an external cognitive system that operates outside your brain. And when you use it enough, something happens that they call cognitive surrender: AI gives an answer, you stop really questioning it, and your brain starts treating that output as your own conclusion. It does not feel outsourced. It feels self-generated.

The data makes it hard to brush off. Across 3 preregistered studies with 1,372 participants and 9,593 trials, people turned to AI on over 50% of questions. In Study 1, when AI was correct, people followed it 92.7% of the time. When it was wrong, they still followed it 79.8% of the time. Without AI, baseline accuracy was 45.8%. With correct AI, it jumped to 71.0%. With incorrect AI, it dropped to 31.5%, worse than having no AI. Access to AI also boosted confidence by 11.7 percentage points, even when the answers were wrong.

Human review is supposed to be the safety net. But this research suggests the safety net has a hole in it: people do not just miss bad AI output; they become more confident in it. Time pressure did not eliminate the effect. Incentives and feedback reduced it but did not remove it. And the people most resistant tended to score higher on fluid intelligence and need for cognition. That makes this feel less like a laziness problem and more like a cognitive architecture problem.
[images]
173 replies · 714 reposts · 3K likes · 276.9K views
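The 71.0% figure is roughly consistent with the quoted follow rates, under some assumptions of mine that the tweet does not state: AI used on exactly half the questions, binary right/wrong answers, following a correct AI always yielding a correct answer, and non-followers reverting to baseline. A back-of-envelope check:

```python
# Numbers from the tweet; the 50% usage split is my assumption.
baseline = 0.458        # accuracy without AI
follow_correct = 0.927  # rate of following a correct AI answer
usage = 0.50            # fraction of questions where AI was consulted

# When the AI is right: followers score 1.0, non-followers fall to baseline.
acc_when_ai_correct = follow_correct * 1.0 + (1 - follow_correct) * baseline
overall = usage * acc_when_ai_correct + (1 - usage) * baseline
print(f"{overall:.3f}")  # -> 0.709, close to the reported 71.0%
```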
Jeudyx@jeudyx·
I use TextEdit on Linux a lot; being able to integrate an LLM into it would be a killer app I'd pay for. Does anyone know of one like that? (If not, I'll just vibe-code it, haha)
4 replies · 0 reposts · 7 likes · 1.2K views
Esteban reposted
Lucas Maes@lucasmaes_·
JEPAs are finally easy to train end-to-end without any tricks! Excited to introduce LeWorldModel: a stable, end-to-end JEPA that learns world models directly from pixels, no heuristics. 15M params, 1 GPU, and full planning in under 1 second. 📑: le-wm.github.io
95 replies · 511 reposts · 3.7K likes · 609.8K views
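For readers unfamiliar with the objective behind announcements like this: a JEPA predicts the next observation's embedding in latent space rather than reconstructing pixels, with a slow EMA copy of the encoder providing the targets. A toy numpy sketch of that objective (my simplification, not the LeWorldModel code; the linear encoders and all dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
D_pix, D_lat = 64, 8                    # flattened-pixel and latent dims

W_online = rng.normal(size=(D_lat, D_pix)) * 0.1  # online encoder (linear toy)
W_target = W_online.copy()                        # EMA target encoder
W_pred = np.eye(D_lat)                            # latent predictor

x_t = rng.normal(size=D_pix)            # current frame (flattened)
x_next = rng.normal(size=D_pix)         # next frame

z_t = W_online @ x_t                    # online embedding of x_t
z_next = W_target @ x_next              # target embedding of x_{t+1}
z_hat = W_pred @ z_t                    # predicted next embedding

# The JEPA loss lives entirely in latent space: no pixel reconstruction.
loss = np.mean((z_hat - z_next) ** 2)

# The EMA update keeps the target encoder a slow copy of the online one,
# the standard device for avoiding representational collapse.
tau = 0.99
W_target = tau * W_target + (1 - tau) * W_online

print(f"latent loss: {loss:.3f}")
```

In a real training loop the loss gradient updates the online encoder and predictor while the target encoder only moves through the EMA step.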
Esteban reposted
Dmitrii Kovanikov@ChShersh·
I feel that people who are bad at coding are more bullish on AI. And this explains a lot.
279 replies · 124 reposts · 2.2K likes · 125.9K views