Ushnish Sengupta @[email protected]

4.7K posts

Ushnish Sengupta @ultush@thecanadian.social
@ultush

Award Winning Teacher. Passionate about #SocialJustice #NonProfits #SocialEnterprise #OpenData #OpenGov #Technology4Good #AIEthics @[email protected]

Toronto, Canada · Joined March 2009
3.3K Following · 992 Followers
Pinned Tweet
Ushnish Sengupta @ultush@thecanadian.social
3 umpires of MLB were debating how to call balls and strikes, ‘I calls ‘em the way they is,’ the 1st said. ‘Me,’ said the 2nd, ‘I calls ‘em the way I sees ‘em.’ ‘Naw,’ declared the 3rd, who had been around the longest, ‘they ain’t nothin’ till I calls ‘em.’-Marshall Sahlins 2002
English
1
0
6
0
Ushnish Sengupta @[email protected] retweeted
Guri Singh
Guri Singh@heygurisingh·
Holy shit... researchers just proved that vibe coding is destroying the internet's visual diversity. University of Washington studied AI-generated apps and found something terrifying: The title? "Interrogating Design Homogenization in Web Vibe Coding." And the findings are devastating.
Guri Singh tweet media
English
20
18
67
7.3K
Ushnish Sengupta @[email protected] retweeted
Ruijiang Gao
Ruijiang Gao@ruijianggao·
What happens when you invite 150 AI economists (Claude Code) to a research conference, give them the exact same data, and ask them to test the same hypotheses? We did just that. The results reveal a new phenomenon: Nonstandard Errors in AI Agents. 🧵👇
English
22
275
1.5K
190.4K
Ushnish Sengupta @[email protected] retweeted
Big Brother Watch
Big Brother Watch@BigBrotherWatch·
👁️A woman was WRONGLY jailed for nearly 6 months, following a facial recognition error. She lost her home, car and pet as a result... Meanwhile in Britain, an Asian man was wrongly arrested for a crime committed in a city 100 miles away from his home. Facial recognition technology is hugely discriminatory, putting children, women and people of colour at greater risk of wrongful allegations. We can't stand back as more innocent people get caught in the net of mass surveillance. #StopFacialRecognition
Big Brother Watch tweet media
English
19
431
867
18.9K
Ushnish Sengupta @[email protected] retweeted
Sean Mullin
Sean Mullin@MullinSean·
Very proud to launch Sovereign by Design, a new report I co-authored with Jaxson Khan at U of T's Munk School. At ~80 pages, it's the most comprehensive treatment of AI sovereignty in Canada to date. Read it at aicompetitiveness.ca
Sean Mullin tweet media
English
2
25
86
13.5K
Ushnish Sengupta @[email protected] retweeted
Dr. Sally Sharif
Dr. Sally Sharif@Sally_Sharif1·
I just gave a closed-book, pen-and-paper midterm exam in my 300-level course at UBC with 100 students. All exams were graded by an experienced graduate-level TA according to a rubric.

*** The average was 64/100. *** My class averages at UBC are usually 80-85.

Context:
• This was the first midterm, covering ONLY 4 weeks of material.
• Students had a list of possible questions in advance: no surprise questions.
• Questions included (a) 3 concept definitions, (b) 3 paragraph-long questions, and (c) a 1.5-page essay.
• I have taught this class multiple times. Nothing in my teaching style changed this semester.
• We read entire paragraphs of text in class, so students don't have to do something on their own that wasn't covered during the lecture.
• Students take a 10-question multiple-choice quiz at the end of every class (30% of the final grade).
• Attendance is 95-99% every class. Attention during lectures and participation in pair-work activities are very high → anticipating the end-of-class quiz.

*** But unfortunately, I suspect many students are not reading the material on the syllabus. They are asking LLMs to summarize it instead. ***

After the midterm, students reported:
• They thought they knew concept definitions but couldn't produce them on paper.
• They thought they understood the arguments but struggled to connect them or identify points of agreement and disagreement.

My view: It might be “cool” or “innovative” to teach students to summarize readings with ChatGPT or write essays with Claude. But we may be doing them a disservice: reducing their ability to retain material, think creatively, and reason from what they know. If you only read what AI has summarized for you, you don’t truly "know" the material.

Moving forward: We have a second midterm coming up. I don't know how to convey to students that the best way to do better on the exam is to rely on and improve their own reading skills.
David Perell Clips@PerellClips

Ezra Klein: "Having AI summarize a book or paper for me is a disaster. It has no idea what I really wanted to know and wouldn't have made the connections I would've made. I'm interested in the thing I will see that other people wouldn't have seen, and I think AI typically sees what everybody else would see. I'm not saying that AI can't be useful, but I'm pretty against shortcuts. And obviously, you have to limit the amount of work you're doing. You can't read literally everything. But in some ways, I think it's more dangerous to think you've read something that you haven't than to not read it at all. I think the time you spend with things is pretty important." @ezraklein

English
522
2.5K
16.1K
3.5M
Ushnish Sengupta @[email protected] retweeted
Hedgie
Hedgie@HedgieMarkets·
🦔 Meta contractors in Kenya told Swedish newspapers they're being asked to review intimate footage from Ray-Ban AI glasses, including people undressing, using the bathroom, watching porn, and filming sex. One contractor said users often don't realize they're still recording when they set the glasses down. Meta sold 7 million pairs in 2025, up from 2 million in 2023-2024 combined. Users can't use the AI features without agreeing to share data with Meta's servers, and the terms of service bury the fact that humans may manually review your footage. One annotator said "if they knew about the extent of the data collection, no one would dare to use the glasses."

My Take

This is the Google Home story again but worse. At least with cameras in your house, you know where they are. These are glasses you wear on your face that keep recording when you take them off and set them on your nightstand. And the footage goes to contractors overseas who are paid to watch and label it for AI training. One worker described seeing a man leave the room, then his wife come in and change clothes. People forget the camera is still on.

Meta buries all of this in terms of service nobody reads. The product is marketed as a cool way to capture your life and interact with AI. The reality is strangers in Kenya watching you undress so they can annotate the footage to make Zuckerberg's AI better. Seven million people bought these last year. I'd bet almost none of them understood what they were actually agreeing to. Hedgie🤗
Hedgie tweet media
English
885
9.5K
26K
3.8M
Ushnish Sengupta @[email protected] retweeted
Ben Inskeep
Ben Inskeep@Ben_Inskeep·
🚨Google has filed an application at IDEM for an air permit for 179 backup diesel generators & 179 diesel storage tanks (~1 million gallons total) at its 1,200 MW Monrovia, IN AI data center under construction. 🚨 (Google filed under shell company Woodland Caribou LLC.)
Ben Inskeep tweet media
Ben Inskeep tweet media
English
10
38
110
27.6K
Ushnish Sengupta @[email protected] retweeted
Ronan Farrow
Ronan Farrow@RonanFarrow·
One large study found that when a major daily newspaper covering a federal judicial district disappears, corruption cases there—bribery, embezzlement, fraud—rise by more than 7%. The researchers suggest officials feel emboldened when no one's looking. cjr.org/analysis/local…
English
27
1.5K
3.6K
193.1K
Ushnish Sengupta @[email protected] retweeted
Michael Thomas
Michael Thomas@curious_founder·
It can now take as long as 7 years to connect a data center to the power grid in the US. To avoid those delays, some developers are building their own power plants—and doing so in unexpected ways.

In 2024, Elon Musk's xAI was told by utilities that it would take years to connect their Colossus data center. Rather than wait, they brought in gas turbines on semitrucks and powered the first phase of the project in 4 months.

For many, this move represented all that's wrong with tech. A billionaire trucked in polluting gas turbines to a low-income community and built a dirty data center. But to some in Silicon Valley, xAI's move was creative and inspiring. "You can just do things" is how one popular AI newsletter put it.

In 2025, dozens of similar data centers were proposed. Some are trucking in mobile gas generators like Musk did. Others are using jet engines. One company that sells engines to cruise lines and warships is supplying a few hundred MW of engines to a data center in Ohio.

When I first heard about all of this, I was skeptical. I assumed the turbine backlogs and grid delays I had been hearing about for years would throttle the ambitions of tech giants. But I was wrong.

For our latest report at Cleanview, Bypassing the Grid, we found dozens of permit documents, site plans, and equipment orders showing that many of these projects have secured their permits and equipment. Many are under construction with crews working through the night.

That means dozens of gigawatts of data center capacity could come online within the next 1-2 years. 75% of it will be powered by natural gas. If all of it is built, it would be like adding five New York Cities' worth of power demand.

I track data centers and power projects for a living and all of this—especially the speed and scale of it—shocked me. The full report is available on the Cleanview website linked below.
Michael Thomas tweet media
English
36
138
755
90.1K
Ushnish Sengupta @[email protected] retweeted
Aakash Gupta
Aakash Gupta@aakashgupta·
Google’s single data center in Council Bluffs, Iowa consumed 1 billion gallons of fresh water in 2024. One facility. One year. Enough to supply every home in Iowa for five days.

The reason they need fresh water is pure chemistry. Evaporative cooling towers work by running water over hot surfaces and letting it evaporate. 80% of the water a data center pulls in literally vanishes into the atmosphere as steam. You can’t recycle steam.

The remaining 20% becomes concentrated mineral waste. Calcium, magnesium, silica. Every cycle through the cooling loop makes the water more corrosive. After enough passes, it starts clogging pumps and eating through heat exchangers. Multi-million dollar equipment destroyed by limescale.

Recycled wastewater carries even more of these minerals from the start. You could treat it, but less than 1% of U.S. water is recycled. Most cities don’t even have separate pipes to deliver reclaimed water to industrial customers. A data center wanting to use recycled water would essentially need to build its own treatment plant on site. Meanwhile, municipal potable water costs almost nothing. So they just drink from the tap.

Across all its data centers, Google used 8.1 billion gallons in 2024, nearly double what it used three years earlier. The company claims its water stewardship projects “replenished” 4.5 billion gallons. Those projects aren’t even in the same watersheds where they’re pulling the water. Same playbook as carbon offsets. Consume locally, offset globally, call it sustainable.

The trajectory is the real story. U.S. data center water consumption could quadruple by 2028. That’s 68 billion gallons for cooling alone, before the 211 billion gallons consumed indirectly through electricity generation. Two-thirds of new data centers since 2022 are being built in regions already facing water scarcity.

Nobody’s asking why they use fresh water. They’re asking what happens to the towns sharing a water main with a facility that drinks like 50,000 people showed up overnight.
Rushi@rushicrypto

It’s been months and I’m still trying to figure out why AI data centers need fresh water. Not used water. Not recycled water. Fresh water???

English
802
4.8K
11.6K
1.2M
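The water post above lends itself to a quick sanity check. The sketch below is illustrative only: the household count and per-home usage are assumptions I've chosen, not figures from the post; the 80% evaporation share is the post's own number.

```python
# Back-of-envelope check of the Council Bluffs comparison.
# Assumed (NOT from the post): ~1.3M Iowa households, ~150 gal/day per home.
households = 1_300_000
gallons_per_home_per_day = 150
days = 5
supply = households * gallons_per_home_per_day * days
print(f"Five days of statewide household supply: {supply:,} gallons")
# On the order of 1 billion gallons, so the comparison is plausible.

# Evaporative-cooling mass balance, using the post's 80% figure: evaporation
# removes pure water, so dissolved minerals concentrate in what remains.
intake = 1.0                      # normalized intake volume
evaporated = 0.8 * intake         # lost as steam; unrecoverable
blowdown = intake - evaporated    # the ~20% that becomes mineral-laden waste
concentration_factor = intake / blowdown
print(f"Minerals are ~{concentration_factor:.0f}x concentrated in blowdown")
```

Under these assumptions the statewide five-day figure lands near 975 million gallons, and a single pass that evaporates 80% of intake leaves the remaining water roughly 5x more mineral-concentrated, which is why the loop grows corrosive over repeated cycles.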
Ushnish Sengupta @[email protected] retweeted
Lydia DePillis
Lydia DePillis@lydiadepillis·
Via Goldman Sachs, the hit from rising electricity costs as a result of the data center boom will fall disproportionately on lower income households.
Lydia DePillis tweet media
English
14
222
450
29.7K
Ushnish Sengupta @[email protected] retweeted
Justin Curl
Justin Curl@curl_justin·
AI won’t automatically make legal services cheaper. @random_walker, @sayashk, and I analyze three bottlenecks that stand between advanced AI and cheaper legal services:

1. Regulatory barriers can limit individuals’ ability to access AI for legal services (like unauthorized practice of law regulations or the recent Judge Rakoff ruling on privilege) and lawyers’ incentives to experiment with AI (like entity regulations restricting who may own equity in legal services businesses).

2. Arms races that channel AI productivity gains into increasing outputs rather than lower costs. The adversarial structure of American litigation means that when both parties adopt productivity-enhancing technologies, competitive equilibria can simply shift upward. The history of discovery digitization is instructive: Rather than reducing costs, parties exploited the explosion of digital documents to impose greater burdens on opponents.

3. Our desire for human oversight is a final limit on how quickly AI can bring down the cost of legal services. Even where AI dramatically reduces the cost of legal *tasks*, the speed of human decision-makers—judges resolving disputes, lawyers understanding contracts—places an upper limit on acceleration without sacrificing adequate oversight. We think this human element is desirable, especially in high-profile legal cases that implicate important questions about how we want our society to function. We recognize, however, that this is a bottleneck grounded in normative beliefs rather than some characteristic of the system.

Finally, we discuss potential reforms to help promote a world where AI is used to benefit people who need legal services. These include relaxing UPL and entity regulations, using the Federal Rules of Evidence and Civil Procedure to make the litigation process less adversarial, and hiring more judges.
Thanks to @ARozenshtein and the @lawfare team for their excellent editorial assistance, and @zittrain, @benpress435, and many others not on X for their helpful comments on the piece.
Justin Curl tweet media
English
5
27
136
26.3K
Ushnish Sengupta @[email protected] retweeted
Reuters
Reuters@Reuters·
The FDA received unconfirmed reports of at least 100 malfunctions and adverse events after AI was added to a medical device used to treat chronic sinusitis. In one case, a surgeon mistakenly punctured the base of a patient’s skull reut.rs/4tnaAD6 @specialreports
Reuters tweet media
English
62
1.1K
2.2K
347.6K
Ushnish Sengupta @[email protected] retweeted
Ewan Morrison
Ewan Morrison@MrEwanMorrison·
Since AI has been added to sinus surgery - "Two (patients) suffered strokes after surgeons accidentally damaged carotid arteries while the system allegedly misinformed them about where their instruments were inside patients' heads."
Hedgie@HedgieMarkets

🦔 Since Johnson & Johnson added AI to its TruDi Navigation System for sinus surgery in 2021, the FDA has received reports of at least 100 malfunctions and adverse events, up from 8 before the AI was added. At least 10 patients were injured. Two suffered strokes after surgeons accidentally damaged carotid arteries while the system allegedly misinformed them about where their instruments were inside patients' heads.

My Take

Medical device makers are racing to add AI to their products because it looks good in marketing materials and investor presentations. One lawsuit alleges the company pushed AI into TruDi "as a marketing tool" to claim it had "new and novel technology," and set a goal of only 80% accuracy before shipping it. Eighty percent accuracy is fine for a playlist recommendation. It's not fine for software telling a surgeon where his instrument is inside someone's skull.

The FDA has now authorized over 1,350 AI-enabled medical devices, double the number from 2022. Researchers found that 43% of recalls for these devices happened less than a year after approval, twice the rate of non-AI devices. This is what happens when AI becomes a checkbox for fundraising and marketing instead of a technology you deploy because it actually works better. The rush to put AI on everything is running ahead of anyone's ability to know if it's safe. Patients are the ones finding out. Hedgie🤗

English
36
1.8K
7.9K
488.6K
Ushnish Sengupta @[email protected] retweeted
The Associated Press
To highlight the environmental and water costs of AI data centers, a community in Chile ran a chatbot to show that not every question needs an instant answer.
English
29
95
226
78K
Ushnish Sengupta @[email protected] retweeted
Cato Institute
Cato Institute@CatoInstitute·
Without immigrants, US government public debt at all levels would be at least 205% of GDP—nearly twice its 2023 level, @David_J_Bier reports. Compared to the US-born population, immigrants of every education level reduced the debt-to-GDP ratio from 1994 to 2023. ow.ly/oHGh50Y8lqX
Cato Institute tweet media
English
232
605
1.3K
114.4K