Matthew Hindman

2.4K posts


@MattHindman

Professor at @SMPAGWU and @GWIDDP | Author of The Internet Trap and The Myth of Digital Democracy (both @PrincetonUPress) | @[email protected]

Washington, DC · Joined August 2009
2.4K Following · 1.8K Followers
Matthew Hindman retweeted
Ethan Porter @EthanVPorter
Some people find politics interesting. Others do not. In a new paper, I show that appealing to MEANING increases political interest. In 6 experiments, connecting what people find meaningful in their lives to politics increases political interest. Link: osf.io/preprints/soca…
[image]
1 reply · 4 reposts · 18 likes · 997 views
Matthew Hindman retweeted
Avi Chawla @_avichawla
ML researchers just built a new ensemble technique. It even outperforms XGBoost, CatBoost, and LightGBM. Here's a complete breakdown (explained visually):
31 replies · 242 reposts · 2.7K likes · 488.4K views
Derek Holliday @d_e_holliday
It's official -- I'll be taking my talents to Foggy Bottom this Fall! Excited to announce I've accepted an offer to join the Department of Political Science at the George Washington University as an Assistant Professor of American Politics!
[image]
8 replies · 4 reposts · 129 likes · 7.5K views
Matthew Hindman retweeted
Rasmus Kleis Nielsen @rasmus_kleis
Striking chart from recent @pewresearch work on how US Americans think about the news. Just 7% say that the news they get often makes them feel empowered - a fraction of the many more who say it makes them angry, sad, scared, and/or confused. pewresearch.org/journalism/202…
[image]
1 reply · 1 repost · 5 likes · 874 views
Matthew Hindman retweeted
Brendan Nyhan (@BrendanNyhan on 🟦☁️)
Levitsky, Way, and Ziblatt: "No One Has Ever Defeated Autocracy From the Sidelines" (gift link in thread) Key points: 1. We are now living under competitive authoritarianism. 2. We need ALL of civil society to resist the temptation of appeasement and defend democracy.
[images]
5 replies · 38 reposts · 66 likes · 10.9K views
Matthew Hindman retweeted
Yunkang Yang, PhD @yangyunkang
Happy to see our paper out in @SciReports! We introduce a statistical method for detecting coordination and apply it to Facebook, using 11.2 million link posts shared by 16k pages that discussed US politics in 2021. nature.com/articles/s4159…
1 reply · 3 reposts · 9 likes · 468 views
Matthew Hindman @MattHindman
@ScottNover @washingtonpost Congrats Scott! Well deserved. I’m just sorry that this opportunity comes at such a slow time for the media beat. Should be a sleepy six months—good luck finding something to report on!
1 reply · 0 reposts · 1 like · 60 views
Scott Nover @ScottNover
Some personal news! Today is my first day covering media for the @washingtonpost. I'll be here for the next six months writing about news organizations, press freedom, and how people get their information. Send tips, press releases, and friendly barbs to scott.nover@washpost.com
64 replies · 41 reposts · 857 likes · 64.2K views
Matthew Hindman retweeted
Christian von Sikorski @CvSikorski
🚨 Job Alert: 2 PhD Positions Available at 📍 @FU_Berlin 🚨 I recently started a new position as Prof of Communication @FU_Berlin, honored to lead the Media Effects Research Group. I have 2 PhD positions available—if you're passionate about media effects, political communication
2 replies · 14 reposts · 23 likes · 2.1K views
Matthew Hindman retweeted
Eric Goulet @EJGoulet
🚨 🚨 🚨 🚨 🚨🔥 ❗️❗️❗️ @HouseGOP Republicans are launching an unprecedented attack on the residents of DC. This is the most significant Congressional interference that I’ve seen in my 22 years living in DC, & it is being done maliciously to harm our city. (1/8)
19 replies · 92 reposts · 336 likes · 47.2K views
Matthew Hindman retweeted
Martin Austermuhle @maustermuhle
NEWS: For the first time, @DCAttorneyGen is suing three non-D.C. drivers for tens of thousands of dollars worth of unpaid traffic camera tickets racked up over a decade. The three drivers collectively have 226 tickets worth more than $90,000 in fines.
31 replies · 127 reposts · 1.9K likes · 457.6K views
Matthew Hindman retweeted
Rohan Paul @rohanpaul_ai
This paper addresses inefficient inference with long contexts in transformer models and introduces a method to reduce inference costs: at each step it attends only to the most important tokens via top-k selection, improving efficiency for long contexts.

📌 By offloading the key/value cache to CPU and using top-k selection, the method drastically reduces GPU memory needs, enabling million-token context inference on commodity GPUs.
📌 The work empirically validates inherent attention sparsity in LLMs. Top-k attention exploits this, focusing on crucial tokens while maintaining performance with minimal overhead.
📌 Architecturally, top-k attention shifts the key/value cache to CPU and uses approximate nearest-neighbor search, decoupling context-length scaling from GPU memory limits for practical deployment.

Methods explored in this paper 🔧:
→ Select only the most relevant key-value pairs for attention computation at each layer.
→ Store key and value vectors in CPU memory within a vector database.
→ Use approximate k-nearest-neighbor search to retrieve the top-k keys from the CPU cache for each query during decoding.
→ Move the retrieved keys and values to GPU for attention computation.
→ This reduces computation and memory overhead by focusing on crucial tokens.
→ The value of k can be adjusted per layer to balance performance and efficiency.

Key insights 💡:
→ Modern LLMs exhibit sparse attention patterns.
→ Only a small fraction of tokens contributes significantly to the attention mechanism.
→ Top-k attention effectively exploits this sparsity.
→ Models maintain performance even when attending to a very small percentage of input tokens.
→ This sparsity holds across model sizes, architectures, and training types.
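The retrieval step the thread describes can be sketched in a few lines of NumPy. This is a toy, single-query stand-in: it scores every cached key exactly, whereas the paper's method uses an approximate nearest-neighbor index over a CPU-resident cache; the names here (`topk_attention`, `keys_cpu`, `values_cpu`) are illustrative, not from the paper.

```python
import numpy as np

def topk_attention(query, keys_cpu, values_cpu, k):
    """One decoding step of top-k sparse attention.

    keys_cpu / values_cpu stand in for a key/value cache held in CPU
    memory; a real system would query an approximate kNN index here
    instead of scoring every key.
    """
    # Score every cached key against the current query (exact search).
    scores = keys_cpu @ query / np.sqrt(query.shape[0])
    # Keep only the k highest-scoring cached tokens.
    top_idx = np.argpartition(scores, -k)[-k:]
    # Only these k key/value rows would be moved to the GPU.
    top_scores = scores[top_idx]
    weights = np.exp(top_scores - top_scores.max())
    weights /= weights.sum()
    # Weighted sum over just k values instead of the full context.
    return weights @ values_cpu[top_idx]
```

With k equal to the full cache size this reduces to ordinary softmax attention; shrinking k trades a little accuracy for touching far fewer key/value rows, which is where the memory and compute savings come from.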
[image]
1 reply · 5 reposts · 22 likes · 2.4K views
Matthew Hindman retweeted
Jonathan Berk @berkie1
🚨After eliminating parking requirements last year, Cambridge, Massachusetts has passed one of the more ambitious rezoning efforts in America tonight allowing 6-stories citywide by an 8-1 vote. Prior to this update, the City estimated only 350 new units would be built by 2040.
[image]
Burhan Azeem, Cambridge Vice Mayor @realBurhanAzeem

I can’t believe it - after years of advocacy, exclusionary zoning has ended in Cambridge. We just passed the single most comprehensive rezoning in the US—legalizing multifamily housing up to 6 stories citywide, Paris-style. Here are the details 🧵

10 replies · 151 reposts · 2K likes · 217.1K views
Matthew Hindman retweeted
Tom Goldstein @tomgoldsteincs
New open source reasoning model! Huginn-3.5B reasons implicitly in latent space 🧠 Unlike O1 and R1, latent reasoning doesn’t need special chain-of-thought training data, and doesn't produce extra CoT tokens at test time. We trained on 800B tokens 👇
[image]
48 replies · 267 reposts · 2.1K likes · 268.6K views
Matthew Hindman retweeted
Gabriel S. Lenz @GabeLenz
An underappreciated reason why Harris lost was flat real disposable income growth in the election year. RDI was above trend, but voters care about growth in the election year itself. Between Jan. and Nov. 2024, nothing happened.
[image]
3 replies · 8 reposts · 39 likes · 4.4K views
Matthew Hindman @MattHindman
@ML_Burn Congratulations! Well deserved, and a great spot to continue your work.
0 replies · 0 reposts · 1 like · 155 views
Mike Burnham @ML_Burn
This Fall I'm joining the Texas A&M political science department as an assistant professor! Thrilled to be joining such a fantastic program and can't wait to get to work.
21 replies · 2 reposts · 163 likes · 12.5K views
Matthew Hindman retweeted
Dan McAteer @daniel_mac8
everyone comparing deepseek-r1 to o1 and forgetting about Gemini 2 Flash Thinking which is better than r1 on every cost and performance metric
[image]
262 replies · 276 reposts · 3.5K likes · 569.5K views