TorchIO
@TorchIOLib

211 posts

Medical image preprocessing and augmentation for deep learning. Part of the @PyTorch Ecosystem. Learn more on YouTube: https://t.co/SUOEefPx9B. Tweets by @fepegar_

Joined May 2021
374 Following · 325 Followers
TorchIO retweeted
PyTorch @PyTorch
We are announcing that PyTorch will stop publishing Anaconda packages on PyTorch’s official anaconda channels. For more information, please refer to the following post on dev-discuss: dev-discuss.pytorch.org/t/pytorch-depr…
TorchIO @TorchIOLib
You can now slice TorchIO images and subjects!

>>> import torchio as tio
>>> subject = tio.datasets.FPG()
>>> subject[50:200, 50:-30].plot()

The image origin in the spatial metadata is updated accordingly, as the Crop transform is used under the hood. (The orientation here is PIR+.)
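A sketch of the idea, under the assumption that slicing is lowered to a crop: `slices_to_crop_margins` below is a hypothetical helper, not part of TorchIO's API, that turns NumPy-style slices into the per-axis (low, high) margins a `Crop`-style transform would consume.

```python
import numpy as np

def slices_to_crop_margins(shape, slices):
    """Hypothetical helper (not TorchIO API): convert NumPy-style slices
    into per-axis (low, high) crop margins, the form a Crop-style
    transform consumes."""
    margins = []
    for size, sl in zip(shape, slices):
        start, stop, step = sl.indices(size)  # resolves negative indices
        assert step == 1, "only contiguous slices can be expressed as a crop"
        margins.append((start, size - stop))
    return margins

# The tweet's subject[50:200, 50:-30] on a 256x256 spatial grid:
shape = (256, 256)
margins = slices_to_crop_margins(shape, (slice(50, 200), slice(50, -30)))
assert margins == [(50, 56), (50, 30)]

# Cropping by those margins is the same as slicing directly:
img = np.arange(256 * 256).reshape(shape)
(l0, h0), (l1, h1) = margins
assert np.array_equal(img[l0:shape[0] - h0, l1:shape[1] - h1],
                      img[50:200, 50:-30])
```

In a real medical image, cropping would also shift the origin stored in the spatial metadata (roughly, by the direction matrix times the spacing-scaled low margins), which is what the tweet means by the origin being updated.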
TorchIO retweeted
Soumith Chintala @soumithchintala
PyTorch's design origins: its connection to Lua, its intertwined deep connection to JAX, its symbiotic connection to Chainer.

The groundwork for PyTorch originally started in early 2016, online, among a band of Torch7's contributors.

Torch7 (~2010-2017)

These days, we also commonly refer to Torch7 as LuaTorch, as it was used via Lua. Torch7 was written by Ronan Collobert, @clmt and @koraykv in ~2010. I was deeply involved in Torch7 since 2012, with official "maintainer" status, joining these three original authors in April 2014.

Refactoring LuaTorch to be language agnostic (late 2015 to mid 2016)

LuaTorch's C backend, with all the CPU and CUDA code for linear algebra and neural networks, was deeply intertwined with Lua. So a bunch of us, led by @lantiga @neurosp1ke @szagoruyko5 me @apaszke @fvsmassa, refactored these backends to be agnostic of Lua and usable independently. We did this after discussing online that we should move LuaTorch to a new, modern design, but hadn't quite framed what that design should be.

Writing a new Python-based Torch (mid 2016)

@apaszke reached out to me in early 2016 looking for internships. At that time, the entire LuaTorch team at @AIatMeta was ~3 people (@GregoryChanan @TrevorKilleen and me). I asked Adam to come do an internship to build the next version of LuaTorch, with a modern design. @colesbury was in between projects, so he joined full-time as well.

We started from a fork of the LuaTorch and LuaTorch-nn codebases specifically for two things:
1. the TH/THC and THNN/THCUNN C backends
2. building compatibility with LuaTorch's checkpoints, so that LuaTorch users could smoothly continue into PyTorch. We did this by transpiling LuaTorch's `nn` code to Python. We called this package in PyTorch `torch.legacy.nn`.

Then, coming to the design itself, we debated a lot of designs. The strong inspirations were:
1. torch-autograd (written by @awiltschko and @clmt)
2. Chainer (written by the team at @PreferredNet).
@ebetica, who loved Chainer, would obsessively tell us it's the best thing ever, so he came on board to build this together with us. Quite a few others, such as Natalia Gimelshein and @adamlerer (part-time), got involved in various ways. We wrote the code for the new design of PyTorch from scratch.

The connection to JAX: inspiration from HIPS/autograd

@awiltschko's torch-autograd (which was a big inspiration for PyTorch's design) was directly inspired by @SingularMattrix, @DougalMaclaurin, @DavidDuvenaud and @ryan_p_adams's HIPS/autograd library, so in that indirect sense, we had strong inspiration from Ryan's library. In fact, we were so oblivious to certain origins that we named our autodiff engine `torch.autograd` because we thought it was the norm within the autodiff community to call things "autograd". We later had to apologize to @SingularMattrix and team about the name of our subpackage conflicting with their `autograd` package. Later, @SingularMattrix, @DougalMaclaurin and others went on to create JAX, continuing down their design exploration of HIPS/autograd.

The inspiration from Chainer -> PyTorch and from PyTorch -> Chainer v2

Chainer was a strong inspiration; we really liked the concept of Chains. The Chainer devs were friends of ours, and we interacted with them a lot as well. I visited them in Japan in 2017. Chainer's design is, in my opinion, a revolutionary design -- very original for its time and pretty awesome. We are proud to have been inspired by it. However, contrary to what people commonly misunderstand and misattribute, we didn't simply replicate Chainer's design as-is. People have posted online about how PyTorch's design looks exactly like Chainer's and hence its origins are just copy-paste -- and that's because they don't understand the co-evolution. After PyTorch's release, Chainer evolved to include some of PyTorch's good ideas, and eventually the two converged to look the same.
For example, Chainer's nn Chains required you to pass all the modules to the constructor (or use an add_link). The concept of self-assignment (i.e., `self.conv = nn.Conv2d(...)`) and the concept of `Parameter` were things we introduced as an evolved upgrade over Chainer v1. We also innovatively changed the way the autodiff engine was implemented -- things like "variable versioning" to detect correctness issues with in-place operations, and a few other new ideas that eventually went back into Chainer in their v2. When Chainer's community wanted to stop development, @PreferredNet amicably and proactively joined the PyTorch community (link in references).

Post-launch evolution (2017 to present)

This post doesn't have the space to cover PyTorch's:
* evolution to add in ideas from Caffe2 (@jiayq @dzhulgakov et al.)
* its 5 compiler designs before we landed on what seems great (Zach DeVito, @ezyang @apaszke @jamesr66a Jason Ansel, Christian Sarofeen et al.)
* our inspirations from JAX and designing functorch (Richard Zou, @cHHillee @vfdev_5 Animesh Jain)
* our entire distributed design and evolution
* the origins of the sparse package (@braizh) and its evolution (@cpuhrsch et al.)
* PyTorch's domain libraries
* data loading (@colesbury @TongzhouWang)
* community design, community growth, innovation in the design of incentives (@ptrblck_de Alban Desmaison, me)
* several innovations in GPU code (several key folks from NVIDIA and Meta)

Many other parts of PyTorch I didn't include -- it's become somewhat of a monolith at this point.

Attributing ideas is healthy, awesome, and should be done more often

Since PyTorch launched, several new libraries have used the designs and ideas from PyTorch -- the particular new ideas that we introduced eventually propagated to many other libraries -- and this is awesome. We are proud to have been inspired by work before us, and we are proud to have inspired work after us.
We also take pride in always attributing our inspirations clearly -- torch-autograd, Chainer and many other projects that have inspired us in lesser ways. I think people don't do this enough -- attribute their origins clearly; either ego or corporate controls come into play to erase history -- and people should do more here. In that sense, I'm really proud of my JAX friends, who see framework design as a scientific endeavor, openly discussing ideas and evolutions, and proudly attributing their origins and inspirations.

References:
1. My reply in March '17 on the origins of PyTorch: discuss.pytorch.org/t/pytorch-tuto…
2. Chainer's v1 design: github.com/chainer/chaine…
3. pytorch.org/blog/pytorch-a…
4. PyTorch's autodiff innovations in a short paper: openreview.net/pdf?id=BJJsrmf…
5. The PyTorch paper: proceedings.neurips.cc/paper_files/pa…
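The "self-assignment" and `Parameter` ideas described in the thread can be sketched in plain Python. The toy `Module`/`Parameter` pair below is only an illustration of the registration-by-assignment pattern, not PyTorch's actual implementation; all class names here are stand-ins.

```python
class Parameter(float):
    """Toy stand-in for a learnable tensor."""

class Module:
    """Toy stand-in for an nn.Module: assignment *is* registration."""
    def __init__(self):
        self._params = {}
        self._modules = {}

    def __setattr__(self, name, value):
        # The self-assignment idea: assigning a Parameter or Module to an
        # attribute automatically registers it; no add_link() call needed.
        if isinstance(value, Parameter):
            self._params[name] = value
        elif isinstance(value, Module):
            self._modules[name] = value
        object.__setattr__(self, name, value)

    def parameters(self):
        # Yield own parameters, then recurse into registered submodules.
        yield from self._params.values()
        for child in self._modules.values():
            yield from child.parameters()

class Linear(Module):
    def __init__(self):
        super().__init__()
        self.weight = Parameter(0.5)  # registered by assignment
        self.bias = Parameter(0.0)

class Net(Module):
    def __init__(self):
        super().__init__()
        self.fc = Linear()            # submodule registered by assignment

assert len(list(Net().parameters())) == 2  # weight and bias found via fc
```

The point of the pattern is that `Net().parameters()` can enumerate every learnable value for an optimizer without the user ever listing them explicitly.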
TorchIO retweeted
Alex Carlier @alexcarliera
EfficientSAM was just released and it's fast! 💨 With 20x fewer params, it is now 20x faster than the original SAM segmentation model, while staying in the same accuracy range. See below for the project page and an interactive @huggingface space to try it out! ⬇️⬇️
TorchIO retweeted
Nina Montaña Brown @public_enembe
No autonomous surgical robots without synthetic data! Our latest paper introduces SARAMIS, the first large-scale dataset capturing human anatomy as 3D rendering assets for robot-assisted and minimally invasive surgery, to be presented at @NeurIPSConf shorturl.at/qTUY1 🧵
TorchIO retweeted
Polina Kirichenko @polkirichenko
Excited to share our #NeurIPS paper analyzing the good, the bad and the ugly sides of data augmentation (DA)! DA is crucial for computer vision but can introduce class-level performance disparities. We explain and address these negative effects in: openreview.net/pdf?id=yageaKl… 1/9
TorchIO retweeted
Stanford AIMI @StanfordAIMI
🚀 Introducing the Stanford AIMI Dataset Index – a diverse inventory of AI-ready datasets for machine learning in healthcare! We hope this resource will help democratize access to health data for AI development. Explore & contribute here: bit.ly/3RmEGFL #AIMIDatasetIndex
TorchIO retweeted
Machine Learning for Biomedical Imaging
A $3,200 publication fee is still a lot! Many folks can't afford this, and we believe such high fees are unjustified. At MELBA, our goal is to bring this down to ZERO. Currently, we charge a $10 submission fee and that's it. And we are fully open access. Consider supporting the independent non-profit journals in your field.
Thomas Yeo @bttyeo

The publication fees for HBM have dropped from $3850 to $3200. I might be presumptuous here but I believe this is the success of our collective resignation from the two NeuroImage journals and the establishment of Imaging Neuroscience as a high quality lower cost alternative.

TorchIO retweeted
Peyman Milanfar @docmilanfar
The Radon Transform (RT) was formulated in 1917 but remained useless in practice until CT scanners were invented in the 1960s. But the RT isn't just for CT: you can use it for measuring motion! RT g(p,θ): shoot rays at angle θ+90° with offset p, and measure the line integrals of f(x,y) along each ray. 1/6
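The ray-summing description above can be sketched numerically. Below is a minimal, NumPy-only discrete Radon transform for square images (nearest-neighbour rotation; an illustration, not production CT code): for each angle θ, rotate the image about its centre and sum along one axis to obtain the line integrals g(p, θ).

```python
import numpy as np

def radon(image, angles_deg):
    """Toy discrete Radon transform for a square image: for each angle,
    resample the image on a grid rotated about the centre (nearest
    neighbour) and sum along one axis, giving line integrals g(p, theta)."""
    n = image.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    sinogram = np.zeros((n, len(angles_deg)))
    for i, th in enumerate(np.deg2rad(angles_deg)):
        # Rotate the sampling grid by theta around the image centre.
        xr = np.cos(th) * (xs - c) - np.sin(th) * (ys - c) + c
        yr = np.sin(th) * (xs - c) + np.cos(th) * (ys - c) + c
        xi = np.clip(np.round(xr).astype(int), 0, n - 1)
        yi = np.clip(np.round(yr).astype(int), 0, n - 1)
        inside = (xr > -0.5) & (xr < n - 0.5) & (yr > -0.5) & (yr < n - 0.5)
        rotated = np.where(inside, image[yi, xi], 0.0)
        sinogram[:, i] = rotated.sum(axis=0)  # integral at each offset p
    return sinogram

# Sanity check: theta = 0 gives column sums, theta = 90 gives row sums.
phantom = np.zeros((8, 8))
phantom[1:4, 2:7] = 1.0
sino = radon(phantom, [0.0, 90.0])
assert np.allclose(sino[:, 0], phantom.sum(axis=0))
assert np.allclose(sino[:, 1], phantom.sum(axis=1))
```

Estimating global translation from such projections works because a shift of f(x,y) shifts each projection g(·, θ) along p, reducing a 2D registration problem to a family of 1D ones.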
TorchIO retweeted
Woojin Kim @woojinrad
There is a saying in #radiology, “One view is no view.” When generating radiology reports for most exam types, one image is no image. #GPT4V is impressive, but you must ask the right questions & be careful not to overhype #AI, esp. in #healthcare. #GenAI #RadTwitter #MedTwitter
TorchIO retweeted
Greg Zaharchuk @GregZ_MD
GPT4 and images. Exciting times, but still some work to be done. Be careful about how you name your files. 😉
TorchIO retweeted
Jakob Wasserthal @JakobWasserthal
TotalSegmentator v2 is here:
- 33 new classes
- up to 32x faster runtime on CPU
- improved label quality
- larger training dataset (n=1559)
- uses nnU-Net v2
Details: github.com/wasserth/Total…