Eren Bilen

407 posts

@Ernbilen

Assistant Professor | Data Analytics @DickinsonCol | Applied Microeconomics, Data Science, Tournaments, Competition, Academic Integrity, Chess♟

Carlisle, PA · Joined February 2015
394 Following · 311 Followers
Pinned Tweet
Eren Bilen@Ernbilen·
🚨🚨New publication alert 🚨🚨 So delighted that my JMP "The Queen's Gambit: Explaining the Superstar Effect Using Evidence from Chess" joint with my advisor Alexander Matros has found its home at Journal of Economic Behavior & Organization. authors.elsevier.com/c/1hqeZc24b3Sar A thread🧵1/n
2 replies · 6 reposts · 39 likes · 8.6K views

Eren Bilen reposted
Misha Teplitskiy | Science of Science
How do LLMs affect literature search in science? Will they point people to only famous papers, as some studies suggested? Seems like no. When scientists start using LLMs for search and for writing papers, they cite more books, older works, and lower-cited works
10 replies · 33 reposts · 98 likes · 11.5K views

Eren Bilen reposted
Abdul Șhakoor@abxxai·
I found a way to read a research paper the way academics actually read them. A friend of mine at Cambridge showed me her Claude workflow. I thought she was just fast. Then I watched her pull apart a methodology section in twenty minutes that her seminar group had spent a week discussing without fully understanding. Here's exactly what she did:

First: she didn't ask Claude to summarise the paper. That's what everyone does. They paste in a paper and ask for a summary. They get a clean paragraph. They feel like they've read it. They move on. That's not reading. That's skimming with extra steps.

She did something completely different. She read the paper herself first. All of it. Without Claude. Then she asked: "Based on the methodology and results sections alone, what can and cannot be legitimately concluded from this study? Now read the abstract and tell me where the authors overreach."

She wasn't asking Claude to read the paper for her. She was using it to test whether the paper was actually saying what it claimed to be saying. The gap between those two things is where most students get lost. They read what the authors claim and treat it as what the authors found. An experienced academic never does that. She learned not to in twenty minutes.

But the next part is what I keep thinking about. She asked: "What did this study not measure that would have significantly strengthened or weakened the central claim? What is the authors' methodology quietly assuming without ever stating it?"

Most students read a methodology section to understand what the researchers did. She read it to find what they didn't do and what they hoped nobody would notice. Those are completely different acts of reading. One produces a student who can describe a study. The other produces a researcher who can evaluate one. Her seminar group spent a week on the same paper and never reached that question.

Then she did something most students never think to do. She tested the paper against itself: "If I tried to replicate this study with a different population in a different context, what would most likely change about the results? What does that tell me about how far the authors' conclusions actually travel?"

Most published claims are presented as general. Most are actually specific. That question finds the line between the two every time. Once you see it you cannot read a paper without looking for it. It changes what you take from every study you ever read after that.

Then she mapped the paper's place in the conversation. She asked: "What debate is this paper entering? Who wrote the work this paper is responding to, and what would those authors say back? Where does this paper sit in the argument that was already happening before it was written?"

She stopped reading papers as standalone objects that day. Every paper is a reply to something. Most students never find out what. She found out in five minutes, and it entirely changed what the paper meant. A paper you understand in isolation is information. A paper you understand inside its conversation is knowledge.

Then she ran the final check. Before closing the paper she asked: "What is the single most important citation missing from this paper that every serious researcher in this field would consider essential? What conversation is this author not in that they should be?"

She found a foundational paper the authors had never cited. Not because they were careless, but because they came from a slightly different tradition and had a blind spot they weren't aware of. That blind spot explained a gap in their argument she hadn't been able to name until that moment.

She walked into the seminar and named it. Her supervisor stopped the discussion and asked her to explain how she'd found it. She told him she'd asked the right questions of the paper instead of just reading it. He told her that was exactly what twenty years in academia teaches you to do. She'd been doing it for three weeks.

Here is the actual workflow. Five questions, in order:

1. What can and cannot be legitimately concluded from the methodology and results alone? Where does the abstract overreach?
2. What did this study not measure that would have changed what it found? What is the methodology quietly assuming that it never defends?
3. If you replicated this with a different population or context, what changes? How far do the conclusions actually travel?
4. What debate is this paper entering? Who is it responding to, and what would those people say back?
5. What is the most important paper missing from the bibliography? What conversation is this author not in?

Most students spend three years at university reading papers from the outside. Those five questions put you on the inside in twenty minutes. Claude didn't read the paper for her. It taught her the questions that experienced academics ask automatically after years in a field. She just learned them earlier. The papers didn't change. The questions did.

Most students finish a paper feeling like they've understood it. She finished a paper knowing exactly what it proved, what it didn't prove, where it sat in the field, and what it was quietly hoping nobody would ask. That is not a faster way to read. It's a completely different thing to do with a paper. And almost nobody teaches it directly.
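The five-question workflow above can be written down as reusable prompt templates. A minimal sketch follows: the question wording paraphrases the thread, `build_prompts` is a name chosen here for illustration, and the commented-out `ask_model` call is a hypothetical stand-in for whatever LLM client you actually use, not a real API.

```python
# The five critical-reading questions from the thread, in order.
QUESTIONS = [
    "Based on the methodology and results sections alone, what can and "
    "cannot be legitimately concluded? Where does the abstract overreach?",
    "What did this study not measure that would have changed what it found? "
    "What is the methodology quietly assuming that it never defends?",
    "If this were replicated with a different population or context, what "
    "would most likely change? How far do the conclusions actually travel?",
    "What debate is this paper entering? Who is it responding to, and what "
    "would those authors say back?",
    "What is the most important paper missing from the bibliography? "
    "What conversation is this author not in?",
]

def build_prompts(paper_text: str) -> list[str]:
    """Pair each question, in order, with the full paper text."""
    return [f"{q}\n\n---\n\n{paper_text}" for q in QUESTIONS]

# Usage: read the paper yourself first, then send the prompts one at a time.
# for prompt in build_prompts(open("paper.txt").read()):
#     print(ask_model(prompt))  # ask_model: your own LLM call (hypothetical)
```

The point of keeping the questions in a fixed list is the thread's own point: the order matters, and the same five prompts apply to every paper.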
59 replies · 238 reposts · 1.3K likes · 142.2K views

Eren Bilen reposted
nxthompson@nxthompson·
A fascinating result from a new Anthropic study on how AI influences skill formation. That second graph is the key. anthropic.com/research/AI-as…
12 replies · 54 reposts · 354 likes · 41.6K views

Eren Bilen reposted
Millie Marconi@MillieMarconnni·
Holy shit...Stanford just built a system that converts research papers into working AI agents. It’s called Paper2Agent, and it literally: • Recreates the method in the paper • Applies it to your own dataset • Answers questions like the author This changes how we do science forever. Let me explain ↓
91 replies · 824 reposts · 4.2K likes · 299.1K views

Eren Bilen reposted
Nicholas Decker@captgouda24·
This paper is one of the most astonishing feats of sustained data wizardry I have ever seen. Using data from Uber, they are able to estimate the roughness of every road in America and precisely estimate the value people place on it, and so much more. 1/
72 replies · 861 reposts · 8.4K likes · 582K views

Eren Bilen reposted
Florian Ederer@florianederer·
Research output after tenure drops off a cliff for business, economics, sociology, and other non-lab fields. But it remains high post-tenure in lab-based fields such as chemistry, physics, computer science, and engineering.
54 replies · 127 reposts · 825 likes · 119.5K views

Eren Bilen reposted
Misha Teplitskiy | Science of Science
How do star scientists affect their departments? - Could be making colleagues better, i.e. positive "peer effects". But peer effects literature finds inconsistent results (and publication bias) - This paper points to more important channel: hiring of new scientists
3 replies · 7 reposts · 20 likes · 1.5K views

Eren Bilen reposted
Adam Grant@AdamMGrant·
Men become more caring when they become girldads. Data on >12k CEOs: After having a firstborn daughter, men spend 10% more on social responsibility initiatives—especially culture, inclusion, and the environment. They also pay female employees better. Daughters activate empathy.
96 replies · 560 reposts · 4.7K likes · 472.6K views

Eren Bilen reposted
Misha Teplitskiy | Science of Science
Books authored by women spend about 50% of space on female characters, while books by men spend only about 30%
1 reply · 1 repost · 10 likes · 669 views

Eren Bilen reposted
Misha Teplitskiy | Science of Science
One of the craziest soc sci papers of all time: an email nudge generated an extra $184M in taxes, or 0.22% of Dominican Republic's GDP
11 replies · 84 reposts · 668 likes · 94.4K views

Eren Bilen reposted
Misha Teplitskiy | Science of Science
Argument: Because famous scientists are more trusted, their discoveries do more good for society. Therefore it's rational that they get more credit for the same discovery as less famous ones
6 replies · 5 reposts · 30 likes · 2.7K views

Eren Bilen reposted
David Almog@davidalmog25·
🚨New working paper🚨 (link in reply) Are workers hesitant to use AI because they worry how it makes them look? In an online experiment, I find that social image concerns, EVEN when such concerns are not instrumental, lead people to reject helpful AI advice and perform worse.
1 reply · 12 reposts · 68 likes · 11.7K views

Eren Bilen reposted
Misha Teplitskiy | Science of Science
Does professor quality matter for training the next generation of researchers? Or will good PhD students do good research either way? Quality matters
9 replies · 49 reposts · 262 likes · 27K views

Eren Bilen reposted
Sinan Aral@sinanaral·
🚨New Working Paper!🚨 We just ran 12,000 search queries across 7 countries, generating 80,000 real-time GenAI and traditional search results, to understand current global exposure to GenAI search. We then used a preregistered, randomized experiment on a large study sample to understand when humans trust AI Search. The results were surprising and a bit unnerving...
6 replies · 29 reposts · 115 likes · 26.6K views

Eren Bilen reposted
Cassidy Laidlaw@cassidy_laidlaw·
We built an AI assistant that plays Minecraft with you. Start building a house—it figures out what you’re doing and jumps in to help. This assistant *wasn't* trained with RLHF. Instead, it's powered by *assistance games*, a better path forward for building AI assistants. 🧵
86 replies · 208 reposts · 2.3K likes · 490.1K views

Eren Bilen reposted
NBER@nberpubs·
A 10 μg/m3 increase in daily PM2.5 air pollution causes a 5.7 percent increase in full-day student absences, a 13.1 percent increase in teacher absences, and a 28 percent increase in behavior referrals, from Sarah Chung, @ClaudiaLPersico, and Jing Liu nber.org/papers/w33549
1 reply · 34 reposts · 106 likes · 16.4K views