Yogesh Rathi

1.1K posts

@Yogi_Spoke

Associate Professor at Harvard; neuroscience; MRI, AI; Yoga; alumni BITS Pilani, Georgia Tech

Massachusetts, USA · Joined July 2017
378 Following · 892 Followers
Pinned Tweet
Yogesh Rathi@Yogi_Spoke·
Our new pre-print on mesoscale image reconstruction from multiple views, termed Rover-MRI. We reconstruct a T2w image at an isotropic resolution of 180 µm in just 17 minutes of scan time. This work will be presented as an oral at #ISMRM this year arxiv.org/abs/2502.08634
3 replies · 5 reposts · 19 likes · 1.3K views
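Rover-MRI's actual reconstruction is described in the preprint linked above; purely as a toy illustration of the generic multi-view super-resolution idea it builds on (everything below, including the sizes, shifts, and noise level, is a made-up 1-D sketch, not the paper's method), several sub-sample-shifted low-resolution views can be stacked into one joint least-squares problem:

```python
import numpy as np

# Toy 1-D sketch of multi-view super-resolution (NOT Rover-MRI's algorithm):
# each low-res "view" box-averages the high-res signal after a sub-sample
# shift; stacking all views gives one joint least-squares problem.
rng = np.random.default_rng(0)
N, R = 64, 4                                   # high-res length, downsampling factor
x_true = np.sin(2 * np.pi * 3 * np.arange(N) / N)

def view_matrix(shift, N, R):
    """Shift-then-box-average acquisition operator as an explicit (N//R, N) matrix."""
    A = np.zeros((N // R, N))
    for i in range(N // R):
        for j in range(R):
            A[i, (i * R + j + shift) % N] = 1.0 / R
    return A

shifts = [0, 1, 2, 3]                          # one low-res view per sub-sample offset
A = np.vstack([view_matrix(s, N, R) for s in shifts])
y = A @ x_true + 0.01 * rng.standard_normal(A.shape[0])

# Joint least-squares reconstruction of the high-res signal from all views
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With enough distinct shifts the stacked operator becomes (nearly) invertible, which is the intuition behind trading one slow high-resolution acquisition for several fast low-resolution ones.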
Yogesh Rathi retweeted
Ricardo@Ric_RTP·
Amazon just got caught running a secret price manipulation operation with Levi's, Home Depot, Walmart, and many more. Every time you "comparison shopped" online, you were looking at prices that were already rigged.

Here's what happened: Amazon would monitor prices on Walmart, Target, Best Buy, Home Depot, and Chewy in real time. The second a competitor listed a product cheaper than Amazon, they'd contact the brand directly and tell them to "fix it." And the exact emails are now PUBLIC.

Amazon sent Levi's links to two Walmart listings with the subject line "styles of concern." They basically said the prices on Walmart are too low and we have a problem. The next day, Levi's responded: "I talked to Walmart and they have partnered with us to take Easy Khaki Classic fit back up to ladder SPP price, $29.99 immediately." Levi's literally called Walmart and told them to raise the price. Because Amazon told Levi's to make the call. Walmart complied. Then Amazon matched the HIGHER price. Both retailers ended up charging more. The customer paid extra. Nobody competed.

Same playbook with Hanes: Amazon sent them links showing Target and Walmart prices were lower. Hanes confirmed they "reached out to Target and Walmart to have the prices increased." Target increased the prices. Walmart increased the prices. Amazon kept their margins.

But it gets even worse... Amazon told Allergan (the company that makes eye drops) that their product was "suppressed" on Amazon because it was cheaper on another site. Allergan responded: "Walmart got their price back up to $16.99." Amazon then unsuppressed the listing. They did this with pet treats on Chewy. Furniture on Home Depot. Products across dozens of categories spanning YEARS.

The mechanism is simple but terrifying: if you're a brand and you sell cheaper on Walmart than on Amazon, Amazon suppresses your product, removes you from the Buy Box, buries you in search results, and effectively makes you invisible to 300 million customers. Brands can't afford that. So they call Walmart and Target and say "raise your prices or we'll lose our Amazon listings." Walmart and Target comply because they need the brand's products.

Amazon captures 40 cents of every dollar spent online in America. That gives them the leverage to set prices across THE ENTIRE internet. Not just their own platform. So it turns out you were never comparison shopping. You were looking at a coordinated price floor set by Amazon through backroom phone calls between brands and their competitors. "Amazon is working to make your life more unaffordable."

3 separate antitrust trials are now scheduled for 2027. The FTC has its own case. 18 states plus the DOJ are piling on. This is literally happening during the WORST affordability crisis in a generation. Groceries up 25% since 2020. Housing unaffordable. Wages flat. And the largest ecommerce company on Earth has been secretly coordinating with brands to make sure you can't find a cheaper price ANYWHERE. "Competition" in retail is just a fantasy.
1.8K replies · 25.1K reposts · 53.9K likes · 2.2M views
Paul Thompson@PTenigma·
Interesting to take part in today's panel debate with @NIH Director, Dr Jay Bhattacharya @DrJBhattacharya, at a USC event hosted by Dean Carolyn Meltzer and Dr Neeraj Sood (USC School of Public Policy). Some takeaways:
💡 The NIH budget will likely remain flat next year, but an increase in "forward funding" (now 37% of grants, up from 20%) will likely mean fewer grants awarded. Forward funding is intended to allow projects to spend much more in their first year than if the budget were constant in all years, and some investigators, e.g. a junior researcher, may need initially larger funding to set up their lab.
💡 If a clinical trial is fully funded at the start, it avoids having nothing to show if each continuation year depends on annual funding appropriations (which are unstable). Forward funding leads to a temporary very large drop in the number of grants awarded, but "at equilibrium" the same number of grants will be awarded.
💡 Calls to increase the $500k/year budget for R01s (the standard type of NIH grant) have to be carefully balanced with the recognition that raising this would lead to fewer grants overall.
💡 There is very high priority on reproducibility of research, as not prioritizing this has led to loss of public trust in science. Consortia can address this. Some audience members (incl Rob McConnell) noted it might make sense to expand reproducibility to include experimental work where multiple lines of converging evidence point to a conclusion, from multiple different approaches, rather than just repeating the experiment. But this can be expensive.
💡 Paylines (where the top x% of grants are funded, and x is known) are being replaced by a system where NIH institute directors and POs have more discretion. This is because the top N by score may not necessarily be more impactful than the best selection of grants that, collectively as a portfolio, could make the greatest impact. If there is redundancy, this can be traded off by funding a lower-scoring grant with higher risk/higher impact.
💡 Foreign components on grants are welcome, but must use the new PF5 format with higher expectations of auditing and reporting for the foreign site, including making primary research records available to the funder via the prime site. Foreign subawards are no longer being used as they involve less oversight, which can lead to loss of public trust.
💡 Panelists noted the need to speed up reviews and the time-to-funding, which has greatly increased. Dr Bhattacharya noted that some ideas take time to incubate in the community before they can be reliably funded, whereas others (perhaps AI, clinical trials) can lose impact if delayed or if review is too slow.
💡 There is a proposal being entertained that K awards (for junior faculty) be given as an allocation to the institution to give out to people they vet, rather than directly awarded.
💡 NIH wants to "spread out" funding to more institutions, across more of the country, to reduce the concentration of funding at some institutions.
💡 Innovation is sometimes killed by reviewers who put too much emphasis on the certainty of the approach working. Often Aims 2 and 3 of a project depend on a high-risk, high-payoff Aim 1 working. Forward funding of 2-3 years can allow a checkpoint to be included on a high-risk Aim, before more funding.
💡 Thank you to @KECKSchool_USC for hosting the event, and to Neeraj Sood for his "Open Dialogues" project.
Paul Thompson tweet media
9 replies · 26 reposts · 85 likes · 17.8K views
Yogesh Rathi retweeted
Markets by Zerodha@zerodhamarkets·
India runs one of the most unusual policy experiments in the world. Since 2014, any sufficiently large Indian company is legally required to spend a fixed share of its profits on social causes. Not just disclose it. Actually spend. No other major economy on earth does this.🧵👇
21 replies · 80 reposts · 832 likes · 125.7K views
Yogesh Rathi retweeted
Bo Wang@BoWang87·
This week, the "AI replacing doctors" debate is back. The CEO of America's largest public hospital system says he's ready to replace radiologists with AI. The Stanford-Harvard NOHARM study shows top models outperforming generalists. The discourse is moving fast.

I run AI at @UHN, the largest hospital in Canada. Here's what I actually see. We've developed AI models across imaging, pathology, and clinical decision support. In controlled conditions, the accuracy numbers are real. In some narrow tasks, models genuinely outperform. That's not hype. But the operational reality of running these systems inside a large hospital teaches you things benchmarks never will.

The errors that hurt patients aren't the confident wrong answers. They're the quiet omissions, i.e., the thing the model didn't flag because it wasn't in the training distribution. NOHARM found 76.6% of AI errors were omissions. We see this too. And in a hospital, a missed finding doesn't just affect one case. It propagates: the downstream physician trusts the AI read, the patient waits, the window closes.

The accountability structure also doesn't exist yet. When an AI-assisted diagnosis leads to harm, who is responsible: the physician, the hospital, the vendor? In Canada, we don't have a clear answer. No hospital system deploying AI at scale does. That's not a regulatory delay. That's a fundamental gap in the infrastructure for AI-in-medicine.

What I'm genuinely optimistic about: AI is already changing how our radiologists work. Not replacing them, but changing the shape of the job. Routine reads get faster. Their time shifts toward complex cases, clinical correlation, cases where the AI flags uncertainty. That's the right direction.

But "ready to replace radiologists" skips 10 hard years of work on deployment infrastructure, liability frameworks, clinician training, and failure mode monitoring that nobody wants to talk about because it's less exciting than accuracy benchmarks. The capability question is nearly answered. The deployment question has barely been asked.

CEO story: beckershospitalreview.com/radiology/nyc-…
NOHARM paper: arxiv.org/abs/2512.01241
35 replies · 94 reposts · 353 likes · 63.3K views
Yogesh Rathi retweeted
Dr. Sethuraman (Panch) Panchanathan
Indian & US partnership presents a chance to accelerate global good. With Priyamvada Natarajan (Yale) + Shivkumar Kalyanaraman (Anusandhan National Research Foundation), we talked about the power of discovery & translation research, & the importance of innovation, at @IndiasporaForum.
Dr. Sethuraman (Panch) Panchanathan tweet media
0 replies · 2 reposts · 4 likes · 209 views
Yogesh Rathi retweeted
Paul Thompson@PTenigma·
Working on a new tutorial paper, a sequel to [1]: How Much Data Is Enough? A Zeta Law of Discoverability, featuring the enigmatic Riemann Zeta Function.

Abstract: This article is the second in a series examining how much data we need to make different kinds of scientific discoveries. Now that we have large-scale biomedical datasets from millions of patients, and AI models with billions of parameters, will we soon discover factors and tests that reliably diagnose all brain diseases? How soon will we discover all the genetic variants that affect the brain? Will we identify new drugs if we have enough data to learn from? When we use deep learning to make discoveries, do we need more data or less? How do we know?

For many of today's problems in data science and biomedicine, a central question is: how much data is enough to discover a signal? Will discoverability accelerate as we add more data, or do we need to be more ingenious with our models? Empirically, increasing sample size improves performance in some settings, such as classifying Alzheimer's or Parkinson's disease from brain images, yet gains are often modest or inconsistent for other disorders. How much data would we need to diagnose the full range of brain diseases? How does this depend on the types of input data and the models used? Current mathematics of sample complexity and scaling laws does not properly explain this variability, particularly for high-dimensional cross-modal problems where signals must be aligned across heterogeneous data types.

We propose a unifying framework for cross-modal "discoverability" based on the spectral structure of data, signals, and their alignment. Many standard performance metrics such as AUC can be expressed in terms of an effective signal-to-noise parameter that accumulates across spectral modes of an encoder and a cross-modal operator (analogous to those used in canonical correlation analysis (CCA) and vision-language models, VLMs). Under mild assumptions, the growth of this parameter with sample size follows a zeta-like law, involving the enigmatic Riemann zeta function, governed by the decay rates of the signal spectrum (spectral slopes) and the data covariance spectrum.

The framework offers several insights. First, discoverability depends on effect size and sample size, but also on the spectral alignment between modalities, which can be learned and optimised. This clarifies when multimodal learning (e.g., adding genetics or text) improves performance. Second, encoder choice re-maps the spectrum, explaining why sparse models, low-rank embeddings, and vision-language models (based on contrastive pre-training methods, such as CLIP) can massively improve sample efficiency. Third, heterogeneity and subtyping can map diffuse, high-rank signals into lower-rank structure, improving scaling behaviour and data efficiency. When data requirements are formulated in terms of the spectral slopes of these encoders and cross-modal operators, unexpected benefits emerge: adding a new modality, such as text or natural language, can vastly improve discoverability (even if it is just a text embedding of the other data).

After reviewing some relevant classical results such as the Davis-Kahan theorem, we illustrate this principle in applications including multimodal brain disorder classification in large consortia, imaging genetics, tractometry, and quantile-based distributional effects. Together, these results suggest a general law: the success of data scaling is governed by the spectral geometry of the cross-modal signals, the encoders, and the cross-modal operator. A resulting "zeta law of cross-modal discoverability" gives us a practical way to predict when additional data, or new modalities, or better models (and which encoders) are needed to yield meaningful gains, and when they will not. As a corollary, this approach also provides a power analysis for some types of high-dimensional deep learning models.
[1] This paper is a sequel to this one: x.com/PTenigma/statu…
2 replies · 3 reposts · 18 likes · 1.5K views
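The abstract's central quantity can be sketched in toy form (the symbols, constants, and exact functional form below are my own assumptions for illustration, not taken from the paper): if per-mode signal power decays as \(k^{-\alpha}\) and the data covariance eigenvalues as \(k^{-\beta}\), an effective SNR that accumulates over spectral modes saturates at a Riemann-zeta value:

```latex
% Illustrative sketch only; notation is assumed, not the paper's.
% Per-mode signal power: c k^{-\alpha}; covariance eigenvalues: \lambda_k \propto k^{-\beta}.
\mathrm{SNR}_{\mathrm{eff}}^{2}(n)
  \;=\; \sum_{k \ge 1} \frac{c\,k^{-\alpha}}{1 + \sigma^{2}/\bigl(n\,\lambda_{k}\bigr)},
  \qquad \lambda_{k} \propto k^{-\beta}.
% Each term rises toward its noiseless value as the sample size n grows, so
\lim_{n \to \infty} \mathrm{SNR}_{\mathrm{eff}}^{2}(n) \;=\; c\,\zeta(\alpha),
  \qquad \alpha > 1,
% with the rate of approach governed by the spectral slopes \alpha and \beta.
```

This is only meant to show where a zeta value can appear: gains from more samples are front-loaded on the strong low-\(k\) modes and saturate at a ceiling set by the signal's spectral slope.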
Yogesh Rathi retweeted
Dr. Catharine Young@DrCatharineY·
We were told NIH funding cuts were about eliminating DEI. But the data now shows grants are down across nearly every field of medicine: cancer, diabetes, mental health, brain disorders. With the greatest cuts hitting Alzheimer’s research, down more than 50%.
Dr. Catharine Young tweet media
117 replies · 2K reposts · 3.6K likes · 192.8K views
Yogesh Rathi retweeted
Dr. Catharine Young@DrCatharineY·
The pipeline that drives discovery - new knowledge and treatments for diseases that affect us all - is collapsing in the United States. New NIH funding opportunities are down 91% this fiscal year.
Dr. Catharine Young tweet media
60 replies · 773 reposts · 1.5K likes · 216.3K views
Yogesh Rathi retweeted
Bo Wang@BoWang87·
A physics textbook says certain particle interactions can't happen. GPT-5.2 said "what if they can — under these specific conditions?" Then it conjectured a formula. Then it proved it. 12 hours of reasoning. One new result in theoretical physics. The preprint has IAS, Harvard, Cambridge, Vanderbilt authors alongside OpenAI. The AI wasn't just a tool — it's listed as having contributed the key conjecture. This feels like a phase change.
OpenAI@OpenAI

GPT-5.2 derived a new result in theoretical physics. We’re releasing the result in a preprint with researchers from @the_IAS, @VanderbiltU, @Cambridge_Uni, and @Harvard. It shows that a gluon interaction many physicists expected would not occur can arise under specific conditions. openai.com/index/new-resu…

144 replies · 405 reposts · 4.6K likes · 789.3K views
Yogesh Rathi@Yogi_Spoke·
@DrPanch Very well said Dr. Sethuraman. All the more relevant in today’s times.
0 replies · 0 reposts · 0 likes · 19 views
Dr. Sethuraman (Panch) Panchanathan
The AI of today, I often remind people, is the product of five to six decades of sustained investment by NSF, even during the so-called AI winters. Any technology we celebrate today is because of the sustained investment of the past.
1 reply · 0 reposts · 3 likes · 220 views
Yogesh Rathi@Yogi_Spoke·
I haven’t gotten a refund yet for the flight you cancelled @IndiGo6E. I called customer service at least 5 times and they say the same thing, “please send us an email”, which I did. It’s been 3 weeks and I haven’t seen an email that says what the refund will be and by when! @RamMNK @MoCA_India - please take some action against @IndiGo6E asap.
2 replies · 0 reposts · 1 like · 72 views
Yogesh Rathi@Yogi_Spoke·
@IndiGo6E cancelled my flight without informing me. I came to know 2 days before the flight when I tried to check the status. After 4 hours with customer service, they told me they cannot put me on any other flight except one that is 3 days later, which would not work for me. I agreed to take 2 hops, but they won’t agree. Time to dismantle @IndiGo6E for the sake of India @PMOIndia @narendramodi
3 replies · 1 repost · 0 likes · 171 views
Yogesh Rathi@Yogi_Spoke·
@PTenigma Looks like you are having quite a vacation on the sidelines of the conference Paul 😀
1 reply · 0 reposts · 1 like · 95 views
Paul Thompson@PTenigma·
I have borrowed a cheerful dog to walk in the Himalayas
Paul Thompson tweet media (4 images)
1 reply · 0 reposts · 17 likes · 572 views
Yogesh Rathi@Yogi_Spoke·
@venkmurthy @MRIvikas @elonmusk What if we are able to use AI models to provide the uncertainty/probability of true/false positives, which can help radiologists decide if a biopsy is needed or not.
1 reply · 0 reposts · 0 likes · 49 views
Venk Murthy MD PhD@venkmurthy·
Although @elonmusk is brilliant, this is wrong. In people at average risk, anxiety from false positives as well as complications from biopsies would almost certainly outweigh the benefits. Happy to discuss further!
Elon Musk@elonmusk

@PalmerLuckey Widespread MRI usage done at least annually with AI reviewing the data would greatly improve wellbeing and mortality

469 replies · 56 reposts · 876 likes · 150.8K views
Yogesh Rathi retweeted
Elon Musk@elonmusk·
@PalmerLuckey Widespread MRI usage done at least annually with AI reviewing the data would greatly improve wellbeing and mortality
1.2K replies · 621 reposts · 18.9K likes · 1.1M views
Yogesh Rathi@Yogi_Spoke·
@IndiGo6E Thank you for the call - but you did not resolve the issue nor compensate me for cancelling my flight. @RamMNK - please consider removing their monopoly and bringing sanity to the Indian skies.
0 replies · 0 reposts · 0 likes · 17 views