Alfred Ssekagiri
@alfred_ug
2.4K posts
Kampala, Uganda · Joined May 2011
918 Following · 278 Followers
Alfred Ssekagiri retweeted
Centre for Epidemic Response & Innovation (CERI)
Metagenomic virus discovery is vital for early detection and global health preparedness. Applications and #scholarships for the 2026 MVD Workshop are now open. Join leading experts for hands-on training across sequencing, analysis pipelines, and real-world pathogen surveillance. 📍 Stellenbosch, South Africa, 12–16 Oct 2026 📅 Deadline: 30 April 2026. Apply here: ceri.org.za/news/?token=962 #CERITrainings
Alfred Ssekagiri retweeted
Seqera @SeqeraLabs
Do you work in diagnostics, or are you interested in scalable workflow solutions for clinical applications? Join our upcoming webinar 💻 We'll explore the operational challenges diagnostics teams encounter as testing demands scale, and how @nextflowio and Seqera help address them. 🔗 hubs.la/Q047h4Q_0
Alfred Ssekagiri retweeted
Naval @naval
A “computer” used to be a job title. Then a computer became a thing humans used. Now a computer is becoming a thing computers use.
Alfred Ssekagiri retweeted
UniProt @uniprot
UniProt webinar reminder: join us to learn how automatic annotation tools enrich UniProt protein data. "Automatic annotation systems in UniProt", 15.00 (GMT), Thursday 12th March. Registration is free but essential: ebi.ac.uk/training/event…
Alfred Ssekagiri retweeted
Seqera @SeqeraLabs
Introducing the Seqera AI CLI, your AI-powered terminal assistant for @nextflowio development. 🚀 Build locally, deploy to the cloud, debug intelligently. All from one interface. 🧵 hubs.la/Q045kG1M0
Alfred Ssekagiri retweeted
Seqera @SeqeraLabs
@nextflowio Podcast episode: Pipeline chaining, meta pipelines, and automation 🎙️ In this episode, we discuss strategies for connecting multiple Nextflow pipelines, from meta pipelines and unified DAGs to external orchestration approaches. The discussion covers technical considerations for meta pipelines in @nf_core, plus practical chaining approaches including @SeqeraLabs APIs, nf-cascade, and automation frameworks like #NodeRED. Watch now: hubs.la/Q045cnL70
Alfred Ssekagiri retweeted
Stanley E. Strawbridge @StanStrawbridge
I strongly encourage ECRs to apply for opportunities like the Travelling Fellowships from the ever supportive @Co_Biologists. They are fantastic for learning new skills, building collaborative networks, and honing scientific writing. doi.org/10.1242/dev.20…
Alfred Ssekagiri retweeted
Mushtaq Bilal, PhD @MushtaqBilalPhD
An excellent and free tutorial on conducting systematic reviews by University College London. Covers:
• What a systematic review is
• Types of systematic reviews
• Stages in a systematic review
• Apps for systematic reviews
linkedin.com/feed/update/ur…
Alfred Ssekagiri retweeted
Segata Lab @cibiocm
The Rising Stars Symposium (by @ibfg_es in Salamanca 🇪🇸) is an innovative meeting designed to identify & attract outstanding young researchers. We're happy that our lab member Liviana Ricci will take part in this event & present her work. Read more: readymag.website/u1842310954/58…
Alfred Ssekagiri retweeted
Segata Lab @cibiocm
🥁 NEW publication out nature.com/articles/s4158… based on the ZOE PREDICT cohorts involving 33,596 healthy people from 5 cross-sectional studies from 🇬🇧 & 🇺🇸 with gut microbiome samples, detailed individual information and dietary habits! Check out the 🧵 of key findings ⬇️
Francesco Asnicar @fasnicar

I'm thrilled to share our latest manuscript, published in @Nature!! ➡️ doi.org/10.1038/s415... We analyzed data from over 34,000 participants from UK and US, revealing strong associations between specific gut bacteria and markers of cardiometabolic health and dietary habits.

Alfred Ssekagiri retweeted
Dr. Diana Atwine @DianaAtwine
@CanaryMugume We have control of our data as health, and no personal data will be shared. Aggregated data will be shared.
Alfred Ssekagiri retweeted
BURKOV @burkov
This paper really is groundbreaking. It solves a long-standing embarrassment in machine learning: despite all the hype around deep learning, traditional tree-based methods (XGBoost, CatBoost, random forests, etc.) have dominated tabular data—the most common data format in real-world applications—for two decades. Deep learning conquered images, text, and games, but spreadsheets remained stubbornly resistant. This paper's (published in Nature, by the way) main contribution is a foundation model that finally beats tree-based methods convincingly on small-to-medium datasets, and does so very fast. TabPFN in 2.8 seconds outperforms CatBoost tuned for 4 hours—a 5,000× speedup. That's not incremental; it's a different regime entirely.

The training approach is also fundamentally different. GPT trains on internet text; CLIP trains on image-caption pairs. TabPFN trains on entirely synthetic data—over 100 million artificial datasets generated from causal graphs. TabPFN generates training data by randomly constructing directed acyclic graphs where each edge applies a random transformation (using neural networks, decision trees, discretization, or noise), then pushes random noise through the root nodes and lets it propagate through the graph—the intermediate values at various nodes become features, one becomes the target, and post-processing adds realistic messiness like missing values and outliers. By training on millions of these synthetic datasets with very different structures, the model learns general prediction strategies without ever seeing real data.

The inference mechanism is also unusual. Rather than finetuning or prompting, TabPFN performs both "training" and prediction in a single forward pass. You feed it your labeled training data and unlabeled test points together, and it outputs predictions immediately. There's no gradient descent at inference time—the model has learned how to learn from examples during pretraining. The architecture respects tabular structure with two-way attention (across features within a row, then across samples within a column), unlike standard transformers that treat everything as a flat sequence.

So, the transformer has basically learned to do supervised learning.

Talk to the paper on ChapterPal: chapterpal.com/s/a1899430/acc… Download the PDF: nature.com/articles/s4158…
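For anyone who wants to try the in-context workflow the thread describes, here is a minimal sketch assuming the open-source tabpfn Python package and its scikit-learn-style TabPFNClassifier; the dataset and variable names below are illustrative, not taken from the tweet or the paper.

# Minimal sketch of TabPFN-style in-context tabular prediction,
# assuming the `tabpfn` package exposes a scikit-learn-compatible
# TabPFNClassifier. There is no gradient descent on your data:
# fit() stores the labeled rows as context, and predict() runs a
# single forward pass over the labeled + unlabeled rows together.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier  # assumed import path

X, y = load_breast_cancer(return_X_y=True)  # small tabular dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

clf = TabPFNClassifier()        # weights pretrained on synthetic datasets
clf.fit(X_train, y_train)       # store the labeled context
pred = clf.predict(X_test)      # predictions in one forward pass

print(f"Accuracy: {accuracy_score(y_test, pred):.3f}")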
Alfred Ssekagiri retweeted
GA4GH @GA4GH
The GA4GH Genomics Knowledge Standards (GKS) Work Stream aims to develop a “common language” to describe and share variants, helping to deliver on the promise of scalable genomic medicine. Watch the animation to learn more! buff.ly/9iA52Mf
Alfred Ssekagiri retweeted
nature @Nature
Getting Rosalind Franklin’s story right is crucial, because she has become a role model for women going into science nature.com/articles/d4158…