Shuang Fu

116 posts

@judefffff

Interested in SMLM/computational super-resolution techniques | Postdoc in LiLab @YimingLi_SZ, SUSTech

Joined May 2022
129 Following · 20 Followers
Shuang Fu reposted
Nav Toor@heynavtoor·
🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math. Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent.

Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
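The scoring incentive the tweet describes (a wrong answer and an "I don't know" both earn zero) can be illustrated with a toy expected-score calculation. The function and numbers below are illustrative only, not taken from the paper:

```python
def expected_score(p_correct, abstain=False, r_correct=1.0, r_wrong=0.0, r_abstain=0.0):
    """Expected benchmark score for one question.

    A model that answers is right with probability p_correct;
    a model that abstains always earns r_abstain.
    """
    if abstain:
        return r_abstain
    return p_correct * r_correct + (1.0 - p_correct) * r_wrong

# Under 1/0/0 scoring, even a 10%-confident guess beats "I don't know" ...
assert expected_score(0.10) > expected_score(0.10, abstain=True)

# ... but partial credit for abstaining (here 0.2 points) flips the incentive
# whenever confidence falls below that threshold.
assert expected_score(0.10) < expected_score(0.10, abstain=True, r_abstain=0.2)
```

Under binary scoring, guessing weakly dominates abstaining for any nonzero chance of being right, which is exactly the "always guess" strategy the tweet attributes to trained models.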
Nav Toor tweet media
1.4K replies · 8.9K reposts · 33.7K likes · 3.3M views
Shuang Fu reposted
Yongdeng Zhang@YongdengZhang·
Happy to share our latest work, 4Pi-SIMFLUX, which combines structured illumination with interferometric detection to achieve near-isotropic 3D localization precision of 2–3 nm and resolve sub-10 nm structural features in cells. nature.com/articles/s4159… rdcu.be/eQVxt
Yongdeng Zhang tweet media (3 images)
10 replies · 19 reposts · 71 likes · 8.2K views
Shuang Fu reposted
Yiming Li@YimingLi_SZ·
We’re excited to share LiteLoc — a lightweight and scalable deep learning framework for high-throughput single-molecule localization microscopy, enabling analysis speed of >500 MB/s on 8× RTX 4090 GPUs without compromising accuracy. rdcu.be/eztp6
Yiming Li tweet media
0 replies · 5 reposts · 12 likes · 983 views
Shuang Fu reposted
Nature Methods@naturemethods·
Really excited to share this Perspective from John Danial covering how technology is moving the field of fluorescence microscopy toward structural biology. nature.com/articles/s4159…
1 reply · 53 reposts · 154 likes · 14.5K views
Shuang Fu@judefffff·
This is perfectly compatible with LUNAR, which leverages temporal context up to 9 frames to achieve ultra-high localization precision. Imagine the possibilities of large-DOF in situ multicolor 3D reconstructions!
Massive Photonics@massphoton

🎊Fantastic work!!! This pre-print by @LE_Laurent_ @S_LevequeFort et al. introduces Brightness Demixing for simultaneous 3-target imaging using 1 laser🤯 Our #DNAPAINT-ready secondary antibodies & imagers were used for vimentin, clathrin, & tubulin imaging biorxiv.org/content/10.110…

0 replies · 0 reposts · 1 like · 72 views
Shuang Fu reposted
Volodymyr Kuleshov 🇺🇦@volokuleshov·
If you're at #AAAI2025, try to catch Cornell PhD student @yingheng_wang, who just presented a poster on Diffusion Variational Inference. The main idea is to use a diffusion model as a flexible variational posterior in variational inference (e.g., as the q(z|x) in a VAE) [1/3]
Volodymyr Kuleshov 🇺🇦 tweet media
4 replies · 21 reposts · 325 likes · 27.9K views
Shuang Fu reposted
Gabriel Peyré@gabrielpeyre·
The Laplacian pyramid is the ancestor of the wavelet transform. It defines a compact multiscale representation by iterative lowpass/highpass filtering. en.wikipedia.org/wiki/Pyramid_(…
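The iterative lowpass/highpass scheme the tweet describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the classic implementation: it uses a 3-tap binomial filter and nearest-neighbour upsampling in place of the usual 5-tap Burt–Adelson kernel, and the function names are my own:

```python
import numpy as np

def blur_downsample(x):
    # Lowpass with a separable [1, 2, 1]/4 binomial filter, then 2x subsample.
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    x = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, x)
    x = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, x)
    return x[::2, ::2]

def upsample(x, shape):
    # Nearest-neighbour upsampling back to the finer grid.
    up = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    # Each level stores the highpass detail lost by downsampling;
    # the final entry is the coarsest lowpass residual.
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = blur_downsample(cur)
        pyr.append(cur - upsample(small, cur.shape))
        cur = small
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    # Reconstruction is exact by construction: add each detail band
    # back onto the upsampled coarser level.
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = detail + upsample(cur, detail.shape)
    return cur
```

Because each detail band is defined as exactly what the lowpass/downsample step discards, summing the bands back up recovers the input perfectly, which is what makes the pyramid a valid (if overcomplete) representation.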
Gabriel Peyré tweet media
5 replies · 83 reposts · 356 likes · 18.5K views
Shuang Fu reposted
Math Cafe@Riazi_Cafe_en·
MIT's "Street Fighting Mathematics". This course teaches the art of guessing results and solving problems without doing a proof or an exact calculation. Book: ocw.mit.edu/courses/18-098…
Math Cafe tweet media
15 replies · 340 reposts · 2.8K likes · 175.6K views
Shuang Fu reposted
Volodymyr Nechyporuk-Zloy
Microscopy Nodes: versatile 3D microscopy visualization with Blender Oane Gros, Chandni Bhickta, Granita Lokaj, Yannick Schwab, Simone Köhler and Niccolò Banterle biorxiv.org/content/early/…
Volodymyr Nechyporuk-Zloy tweet media
2 replies · 13 reposts · 68 likes · 7.4K views
Shuang Fu reposted
Reto Fiolka@RetoPaul·
Maybe the most amazing microscope paper of the year just dropped: Live 4Pi-SIM (a.k.a. I5S*): nature.com/articles/s4159… Having worked with Mats Gustafsson, I can attest that this is both extremely hard and extremely cool! Kudos to the authors! *cell.com/AJHG/fulltext/…
Reto Fiolka tweet media (2 images)
11 replies · 60 reposts · 240 likes · 23.3K views
Shuang Fu reposted
Michael 英泉 Eisen@mbeisen·
This is - unintentionally - an absolutely damning critique of modern biology.
Sam Rodriques@SGRodriques

One of the remarkable things for me about NeurIPS this year was how quickly the entire AI for Biology community has gone all-in on biological foundation models. Virtual cell models will enable us to predict how cell states will change in response to chemical perturbations. Protein language models will enable us to identify better enzymes for degrading plastics, and so on. Everyone wants bigger data on more things to throw into bigger models. These models are going to be awesome, but real biology discoveries look somewhat different.

Contrast these dreams of foundation models with the latest table of contents from Science or Nature:

--"A long noncoding eRNA forms R-loops to shape emotional experience–induced behavioral adaptation" — The authors identified a lncRNA in mice that is expressed in response to neuronal activity and modulates the 3D structure of chromatin, thereby activating genes that are involved in neuronal plasticity. The authors further showed that this lncRNA is essential for certain forms of learning.

--"Cancer cells impair monocyte-mediated T cell stimulation to evade immunity" — The authors found that mouse melanoma cells secrete a lipid metabolite that prevents monocytes from activating CD8+ T cells.

--"Postsynaptic competition between calcineurin and PKA regulates mammalian sleep–wake cycles" — By generating mouse knockout lines, the authors identified phosphatases and kinases that are critical for regulating the sleep–wake cycle, and showed that they act through regulation of proteins at excitatory postsynaptic sites.

I struggle to imagine how any of these discoveries could fall out of a multimodal biology foundation model. This is not intended to be a straw-man argument. A foundation model could potentially identify the lncRNA from the first paper, but I am not sure how such a model would associate it with chromatin remodeling. A multimodal foundation model with enough data could also potentially identify metabolic changes associated with melanoma cells subjected to certain kinds of treatments, but I don't see how it could identify the effect of those metabolites in preventing CD8+ T cell activation. Indeed, I do not think that any of the foundation models being developed today would be capable of generating rich new biological insights of the kind described in these papers. And yet, these are the kinds of insights that new therapies are made from.

The issue, I think, is that machine learning models work extremely well on structured data, and so all the foundation models being built are highly structured. Take a protein sequence as input and produce a protein sequence as output. Take a cell state and a chemical perturbation as input and produce a new cell state as output. Biology, however, is poorly structured. The lncRNA insight is a case in point: what structured representation can we use for the action of the lncRNA in modulating chromatin architecture? Protein models cannot represent it; DNA models cannot represent it; virtual cell models cannot represent it. Perhaps a model that incorporates RNA expression and 3D genome state could represent it, but then how would that model represent the lipid modulation of the monocytes? I worry that every discovery may need its own representation space. Indeed, the nature of biology is such that there is likely no representation, short of an atomic-resolution real-space model of the entire organism, sufficient to represent the diversity of biological phenomena that are relevant for disease. Except, of course, for natural language, which evolved to represent all concepts that humans are capable of contemplating.

Indeed, I think natural language has an essential role to play in representing biology, and is ultimately unavoidable, insofar as it is the only medium we know of that is sufficiently structured for machine learning and sufficiently flexible to represent the full diversity of biological concepts. At FutureHouse, we work on language agents, which is one way of combining language and biology, but it is not the only way. Models that combine natural language with protein, DNA, transcriptomics, and so on will also be extremely productive, provided the addition of the structured datatypes does not restrict their ability to represent unstructured concepts. However we do it, I think this essential role of natural language in representing biology is currently underappreciated.

The history of biology is built on tools that we have found in nature to study biological phenomena. As all biologists know, trying to engineer things from scratch (almost) never works; what works is finding things in nature and repurposing them. It would be aesthetically pleasing if it turned out that our engineered representations are yet again insufficient for studying biology, and that natural language is simply another such tool found in nature that must be applied instead.

13 replies · 34 reposts · 289 likes · 82.5K views
Shuang Fu reposted
Nikon Microscope Solutions@NikonInst·
New scattering compensation method is compatible with widefield #Fluorescence #Microscopy, requiring tens of frames of raw data acquired using random illumination patterns, but without the need for a spatial light modulator or target sparsity: bit.ly/4i9ivOW
Nikon Microscope Solutions tweet media
0 replies · 4 reposts · 23 likes · 1.3K views
Shuang Fu reposted
Reto Fiolka@RetoPaul·
PetaKit is great; it deals efficiently with large data, and it is easy to use. I can highly recommend it if you need deskewing, rotation and/or deconvolution for your post-processing. The repository is also not large, so it does not take much to clone it and try it out.
HHMI | Janelia@HHMIJanelia

Introducing PetaKit5D: a comprehensive suite of software tools for processing petabyte-scale microscopy data from Janelia Sr Fellow/HHMI Investigator @Eric_Betzig, Xiongtao Ruan, Matthew Mueller, Srigokul Upadhyayula @ABCUCBerkeley @UCBerkeley & colleagues nature.com/articles/s4159…

0 replies · 8 reposts · 34 likes · 3.4K views