Albert Thomas @albertcthomas.bsky.social

334 posts


Research engineer in machine learning at Huawei

Joined November 2013

181 Following · 178 Followers

Albert Thomas @albertcthomas.bsky.social reposted
Rachel Thomas
Rachel Thomas@math_rachel·
Vibe coding is the creation of large quantities of complex AI-generated code. Executives push lay-offs claiming AI can handle the work. Managers pressure employees to meet quotas of how much of their code must be AI-generated... yet results are far from what was promised 1/
Greg Yang
Greg Yang@TheGregYang·
modern recommendation systems are by far the most successful examples of continual learning in production
Albert Thomas @albertcthomas.bsky.social reposted
Gael Varoquaux 🦋
Gael Varoquaux 🦋@GaelVaroquaux·
@fpedregosa @amuellerml @agramfort Code generation, or even worse agents tackling issues, is a major problem in @scikit_learn these days. The devs are drowned with contributions that don't solve the actual problem, because of poor quality or failure to account for broader context and discussion.
Albert Thomas @albertcthomas.bsky.social
"Because solveit dialogs are fluid and editable, it’s much easier to go back and edit/remove mistakes, dead ends, and unrelated explorations. You can even edit past AI responses, to steer it into the kinds of behaviour you’d prefer" This is such a good idea!
Jeremy Howard@jeremyphoward

It's a strange time to be a programmer—easier than ever to get started, but easier to let AI steer you into frustration. We've got an antidote that we've been using ourselves with 1000 preview users for the last year: "solveit" Now you can join us.🧵 answer.ai/posts/2025-10-…

Albert Thomas @albertcthomas.bsky.social reposted
Abdelhakim Benechehab
Abdelhakim Benechehab@abenechehab·
🚀 I'm happy to share that our latest paper has been accepted at #ICML2025 🌟 📌 "AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting" 👉 paper & code: github.com/abenechehab/Ad… See you in Vancouver! 🍁
Albert Thomas @albertcthomas.bsky.social reposted
Abdelhakim Benechehab
Abdelhakim Benechehab@abenechehab·
Excited to be heading to Singapore for #ICLR2025 this week! 🇸🇬 I will be presenting our two latest works across the main conference and a workshop. 🧵
Albert Thomas @albertcthomas.bsky.social reposted
Scientific Python
Scientific Python@scientific_py·
Hello friends! 👋 🔥 We’re working on ecosystem recommendations called specs to foster collaboration and shared best practices across the scientific Python ecosystem. 🤝 ⭐ Support this effort by starring: 🔗 github.com/scientific-pyt…
Albert Thomas @albertcthomas.bsky.social
@GuillaumeRozier @france_identite Why is it easier to use the digital identity that sends a code to my phone (to log in via France Connect), while France Identité asks me to go scan my ID card, which I don't always have on me if I'm sitting in my armchair, for example?
GRZ
GRZ@GuillaumeRozier·
The Carte Vitale app is currently the most downloaded on the App Store in France. Come on, one more push to get @france_identite up to number 2! 🇫🇷 Public-sector innovation is in good shape.
Albert Thomas @albertcthomas.bsky.social reposted
Abdelhakim Benechehab
Abdelhakim Benechehab@abenechehab·
🚀 I'm happy to share that our latest paper has been accepted at #ICLR2025 🌟 📌 "Zero-shot Model-based Reinforcement Learning using Large Language Models" See you in Singapore! 🇸🇬
Abdelhakim Benechehab@abenechehab

Looking to leverage LLMs for multivariate time series forecasting? 🎉 Search no more! You can do exactly that with our new package DICL (Disentangled In-Context Learning). 🖥️ Code & Demo: github.com/abenechehab/di… 📜 New preprint: DICL for RL arxiv.org/pdf/2410.11711 1/🧵

Balázs Kégl
Balázs Kégl@balazskegl·
I think ChatGPT is a pretty good writing companion if we replace each "leveraging" by "using", and each "delve into" by "explore". Isn't there an add-on that does this automatically?
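No such add-on appears in the thread; as a hypothetical illustration, the substitution the post describes is a couple of lines of Python (the word list and function name are my own, not any existing tool):

```python
import re

# Hypothetical replacements for the ChatGPT-isms named in the post.
REPLACEMENTS = {
    r"\bleveraging\b": "using",
    r"\bdelve into\b": "explore",
}

def deflate(text: str) -> str:
    """Apply each word substitution, case-insensitively."""
    for pattern, plain in REPLACEMENTS.items():
        text = re.sub(pattern, plain, text, flags=re.IGNORECASE)
    return text

print(deflate("We are leveraging AI to delve into data."))
```

A real add-on would also need to preserve capitalization at sentence starts; this sketch lowercases everything it replaces.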
Albert Thomas @albertcthomas.bsky.social
@sh_reya I understand the concern and can agree with it. But I would tend to rely on one of them before starting from scratch and having to maintain everything. What would you recommend implementing from scratch for someone who just wants to try something and play with LLM agents?
Shreya Shankar
Shreya Shankar@sh_reya·
been thinking about writing a blog post about this. my take is that it's a mess of the following:
- if the goal is to minimize the time to working prototype, the framework will try to pydantic-ify or dataclass-ify everything (eg decorators for tools, memory objects)
- validation is not a first-class citizen; usually (incorrectly) treated as an afterthought. maybe this is a consequence of the previous point, because simple LLM wrappers get you 90% of the way there
- validation needs to be hierarchical. for example, the first failure may result in a retry, the second may try a different prompt, the third may resort to a hard-coded fallback
- design patterns are implemented in agent frameworks as features (eg retries), so these frameworks seem like a hodgepodge of features
- people want to build agentic systems with a microservices mentality but this is a bad idea
- if there's no UI/playground, there's no easy way to iterate. iteration is required because nobody knows what the LLM is capable of until they try some prompts/agents
- minimal support for parallelization (a basic requirement)…users are left to reason about where and how to parallelize
- no support for progressively executing queries, ie where you return some results fast but keep running and slowly returning better results

will keep thinking about it and write something if I have anything new to add to the conversation
anton@abacaj

agent frameworks are useless (why are there so many)

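The hierarchical-validation point in the post above can be sketched as an escalation ladder. Everything here (the `call_llm` stub, the validator, the fallback string) is a hypothetical illustration, not any framework's API:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; a hypothetical stub."""
    return "some response"

def is_valid(response: str) -> bool:
    """Hypothetical task-specific validator."""
    return "response" in response

def answer(question: str) -> str:
    # Escalation ladder: first try, then a retry, then a different prompt;
    # if everything fails, fall back to a hard-coded answer.
    attempts = [
        lambda: call_llm(question),                         # first try
        lambda: call_llm(question),                         # simple retry
        lambda: call_llm(f"Answer concisely: {question}"),  # different prompt
    ]
    for attempt in attempts:
        response = attempt()
        if is_valid(response):
            return response
    return "Sorry, I could not answer that."  # hard-coded fallback

print(answer("What is hierarchical validation?"))
```

The point of structuring it this way is that each level of the ladder is an explicit step rather than a framework "feature" like a retry decorator.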
:probabl.
:probabl.@probabl_ai·
We will be going live later today to explore how the timeseries stack of Nixtla can work together with scikit-learn. Feel free to join us live here at 12:30: eu1.hubs.ly/H0dW9-30
Albert Thomas @albertcthomas.bsky.social
@HamelHusain Don't you have to handle (and be good at) many more areas when doing independent consulting than when employed at a FAANG company: administrative work, marketing, communication, finding clients, …?
Albert Thomas @albertcthomas.bsky.social reposted
Haitham Bou Ammar
Haitham Bou Ammar@hbouammar·
I am excited to present Agent K, the first end-to-end agent (i.e., autonomous from Kaggle URL to submissions that win competitions) to achieve the equivalent of Kaggle grandmaster level. Our agent codes the whole data science pipeline from a natural language description of the competition and raw data! It does at least the following, all automatically:
1. Cleans and pre-processes the data;
2. Does feature engineering if needed;
3. Writes machine learning models that it thinks can solve the task;
4. Trains the models and optimises their hyperparameters with HEBO;
5. Writes Kaggle submission files and decides whether to upload them to Kaggle to get the score.
It uses this score to improve its pipeline and submissions automatically.

Regarding results, we win six gold, three silver, and seven bronze medals. We also score in the top 38% against Kagglers. Since we win medals in all competition types, we make a fair comparison to human participants by awarding them extra medals if needed. Here, we also see that our Agent K is more likely to earn more medals than humans. The difference is particularly significant for bronze medals, where Agent K outperforms in 42% of match-ups and underperforms in only 23%. Similarly, for gold medals, the agent's winning rate of 14% is over twice its losing rate of 6%.

How's that for LLMs that can't reason ;) Whoop whoop! #AI #machine_learning #MachineLearning #DataDriven #DataScientist arxiv.org/pdf/2411.03562
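The submit-score-improve loop the post describes can be sketched roughly as follows; every function here is a hypothetical placeholder standing in for a whole subsystem, not the paper's actual implementation:

```python
def run_pipeline(config: dict) -> str:
    """Hypothetical stand-in: clean data, engineer features, train models,
    and write a submission file; returns the submission path."""
    return f"submission_{config['trial']}.csv"

def submit_and_score(submission: str) -> float:
    """Hypothetical stand-in for uploading to Kaggle and reading the score."""
    trial = int(submission.split("_")[1].split(".")[0])
    return 0.5 + 0.1 * trial  # pretend each trial scores a bit better

def improve(config: dict, score: float) -> dict:
    """Hypothetical improvement step, e.g. hyperparameter tuning with HEBO."""
    return {**config, "trial": config["trial"] + 1}

# The outer loop: build a pipeline, submit, and use the score to improve.
config, best = {"trial": 0}, float("-inf")
for _ in range(3):
    submission = run_pipeline(config)
    score = submit_and_score(submission)
    best = max(best, score)
    config = improve(config, score)
print(best)
```

The structural point is that the leaderboard score closes the feedback loop: it is the only signal the agent needs to keep refining its own pipeline.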
Albert Thomas @albertcthomas.bsky.social
uv and mamba make package management much faster. I already use mamba and am considering moving to uv instead of pip. One thing I will miss from pip and conda: they are implemented in Python, so it was easier for me to read and debug their code when needed.
Hamel Husain
Hamel Husain@HamelHusain·
I'm trying to upgrade my Python version in my base conda env. Wish me luck