Norbert Siegmund

1.3K posts


@Norbsen

@[email protected] Full Professor for Software Systems at Leipzig University. Please use mastodon in the future...

Leipzig, Germany · Joined January 2010
172 Following · 432 Followers
Norbert Siegmund retweeted
Belinda Schantong@schanlin·
Been having a splendid time at #ICSE2025, where I presented our paper on "programmer's block", joint work with @Norbsen and Janet Siegmund. Interested in learning how problems from software developers compare to writer's block? Check out the paper → rdcu.be/ektLk
Norbert Siegmund retweeted
Sebastian Simon@SimiSimon17·
Just presented our paper "Themes of Building LLM-based applications for Production" at @CAINconf in Ottawa. Find a preprint, including a thematic map of relevant topics when engineering LLM-based systems 🤖, here: arxiv.org/pdf/2411.08574
Norbert Siegmund retweeted
Chip Huyen@chipro·
During the process of writing AI Engineering, I went through so many papers, case studies, blog posts, repos, tools, etc. This repo contains ~100 resources that really helped me understand various aspects of building with foundation models. github.com/chiphuyen/aie-…

Here are the highlights:

1. Anthropic's Prompt Engineering Interactive Tutorial
The Google Sheets-based interactive exercises make it easy to experiment with different prompts and see immediately what works and what doesn't. I'm surprised other model providers don't have similar interactive guides: docs.google.com/spreadsheets/d…

2. OpenAI's best practices for finetuning
While this guide focuses on GPT-3, many techniques are applicable to full finetuning in general. It explains how finetuning works, how to prepare training data, how to pick training hyperparameters, and common finetuning mistakes: docs.google.com/document/d/1rq…

3. Llama 3 paper
The section on post-training data is a gold mine, as it details different techniques they used to generate 2.7 million examples for supervised finetuning. It also covers a crucial but less talked-about topic: data verification, i.e., how to evaluate the quality of synthetic data: arxiv.org/abs/2407.21783

4. Efficiently Scaling Transformer Inference (Pope et al., 2022)
An amazing paper co-authored by Jeff Dean about inference optimization for transformer models. It covers not only different optimization techniques and their tradeoffs, but also provides a guideline for what to do if you want to optimize for different aspects, e.g. lowest possible latency, highest possible throughput, or longest context length: arxiv.org/abs/2211.05102

5. Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models (Lu et al., 2023)
My favorite study on LLM planners, how they use tools, and their failure modes. An interesting finding is that different LLMs have different tool preferences: arxiv.org/abs/2304.09842

6. AI Incident Database
For those interested in seeing how AI can go wrong, this contains over 3,000 reports of AI harms: incidentdatabase.ai

7. Enterprise case studies
I find case studies from teams that have successfully deployed AI applications extremely educational. Here are some of my favorites. I'll add more case studies soon!
- LinkedIn: linkedin.com/blog/engineeri…
- Pinterest's Text-to-SQL: medium.com/pinterest-engi…
- Gmail's Smart Compose (2019): arxiv.org/abs/1906.00080
- Grab: engineering.grab.com/llm-powered-da…
Norbert Siegmund retweeted
Andreas Zeller@AndreasZeller·
“The Devil's Guide to Doing your PhD - 10 tips for despair, dismay, and disappointment” - this was the keynote I gave today at the @issta_conf 2024 doctoral symposium. My slides are now available at slideshare.net/slideshow/the-… - enjoy!
Norbert Siegmund retweeted
Richard Hanania@RichardHanania·
Strong evidence showing that getting a PhD is extremely bad for your mental health.

A new paper uses Swedish medical records and matches them to the full population of PhD students for which the authors could get gender and birth year data from 2006 to 2017. After some exclusion criteria, they end up with a sample size of 20,085 individuals. The paper compares PhD students to those who have master's degrees and don't start a PhD program.

Before starting a PhD program, people who stop at a master's and those who go on to seek a PhD have similar rates of psychiatric medication use and hospitalization. A few years into a PhD program, however, 40% more individuals are on psychiatric medications, before the number falls off as people leave or finish their studies. You see the same pattern with psychiatric hospitalizations: PhD students are up to 150-175% more likely to be hospitalized after starting a program!

These are incredible numbers, too massive to be the result of chance or a flaw in the methodology. This is comparing the same people over time.

If you're considering a PhD program, and the terrible job prospects and waste of time aren't enough, here's yet another reason to stay away.
Norbert Siegmund retweeted
Belinda Schantong@schanlin·
Ever encountered a block during programming? Feel like you can't produce any useful line of code? You are not alone! Programmer's block really exists and is similar to writer's block. Check out our recently accepted paper @emsejournal: mytuc.org/gsqc
Norbert Siegmund retweeted
Belinda Schantong@schanlin·
#devs, do you sometimes get stuck on a programming task like you've hit a block? Or do you hear about your colleagues being stuck? Great! Help us research blocks during programming and share your experience with us by completing this 15-minute survey: mytuc.org/gmrk
Norbert Siegmund@Norbsen·
#devs, do you sometimes get stuck on a programming task like you've hit a block? Or do you hear about your colleagues being stuck? Great! Help us research blocks during programming and share your experience with us by completing this 15-minute survey: mytuc.org/gmrk
Norbert Siegmund retweeted
David Lo@davidlo2015·
Teaching can be challenging and demanding. @JanetSiegmund is presenting valuable tips on how to manage and overcome challenges and thrive in teaching at @ICSEconf New Faculty Symposium :)
Norbert Siegmund retweeted
Johannes Dorn@Joh4nnesDorn·
Thanks everyone at #FOSD2024 for the great talks and discussions!
David Lo@davidlo2015·
Thank you very much @TheOfficialACM for including me in the 2023 ACM Fellow list. Thank you very much to the nominator and endorsers, selection committee, advisors, mentors, students, collaborators, and colleagues for your kind help and support to make this possible😀
Association for Computing Machinery@TheOfficialACM

Announcing the 2023 #ACMFellows! This year's 68 inductees include the inventor of the World Wide Web, the "godfathers of AI," and other colleagues whose contributions have been important building blocks in forming the digital society that shapes our world. bit.ly/3Ohwo06

Norbert Siegmund retweeted
Greg Brockman@gdb·
People often ask if ML or software skills are more the bottleneck to AI progress. It’s the wrong question—both are invaluable, and people with both sets of skills can have outsized impact. We find it easier, however, to teach people ML skills as needed than software engineering.
roon@tszzl

coming up with good ml research ideas is significantly easier than implementing them in a complex codebase — the returns to being an extremely good engineer are super high
