Brunel IDA

838 posts

@IDABrunel

The IDA group is a leading centre of excellence for multidisciplinary work involving Artificial Intelligence, Data Science, and Statistics. Brunel University.

Hillingdon, London · Joined November 2019
399 Following · 186 Followers
Brunel IDA @IDABrunel
Happening now! Brunel University's Computer Science PhD Symposium '25. Keynote speaker Dr Michael Schlichtkrull (@michael_sejr) introduced the novel NLP task of finding and summarising indicators of reliability and trust, rather than truthfulness: automated source criticism.
0 replies · 1 repost · 7 likes · 287 views
Brunel IDA @IDABrunel
Welcome to the PhD Symposium 2025!
0 replies · 1 repost · 1 like · 179 views
Brunel IDA retweeted
Massimo @Rainmaker1973
How ice cream was invented [🍨 thestellarsagas]
23 replies · 366 reposts · 1.6K likes · 165.4K views
Brunel IDA retweeted
Andrew Ng @AndrewYNg
I am so sorry that the U.S. is letting down our friends and allies. Broad tariffs, implemented not just against adversaries but also steadfast allies, will damage the livelihoods of billions of people, create inflation, make the world more fragmented, and leave the U.S. and the world poorer. AI isn’t the solution to everything, but even amidst this challenging environment, I hope our community can hold together, keep building friendships across borders, keep sharing ideas, and keep supporting each other.

Much has been written about why high, widespread taxes on imports are harmful. In this letter, I’d like to focus on their possible effects on AI.

One silver lining of the new tariffs is that they focus on physical imports, rather than digital goods and services, including intellectual property (IP) such as AI research inventions and software. IP is difficult to tax, because each piece of IP is unique and thus hard to value, and it moves across borders with little friction via the internet. Many international AI teams collaborate across borders and timezones, and software, including specifically open source software, is an important mechanism for sharing ideas. I hope that this free flow of ideas remains unhampered, even if the flow of physical goods is.

However, AI relies on hardware, and tariffs will slow down AI progress by restricting access to it. Even though a last-minute exception was made for semiconductors, taxing imports of solar panels, wind turbines, and other power-generation and -distribution equipment will diminish the ability to provide power to U.S. data centers. Taxing imports of servers, cooling hardware, networking hardware, and the like will also make it more expensive to build data centers. And taxing consumer electronics, like laptops and phones, will make it harder for citizens to learn and use AI.

With regard to data-center buildouts, another silver lining is that, with the rise of generative AI, data gravity has decreased because compute processing costs are much greater than transmission costs, meaning it’s more feasible to place data centers anywhere in the world rather than only in close proximity to end-users. Even though many places do not have enough trained technicians to build and operate data centers, I expect tariffs will encourage data centers to be built around the world, creating more job opportunities globally.

Finally, tariffs will create increased pressure for domestic manufacturing, which might create very mild tailwinds for robotics and industrial automation. As U.S. Vice President J.D. Vance pointed out in 2017, the U.S. should focus on automation (and education) rather than on tariffs. But the U.S. does not have the personnel, know-how, or supply chain to manufacture many of the goods that it currently counts on allies to make. Robotics can be helpful for addressing a small part of this large set of challenges. Generative AI’s rate of progress in robotics is also significantly slower than in processing text, visual data, audio, and reasoning. So while the tariffs could create tailwinds for AI-enabled robotics, I expect this effect to be small.

My 4-year-old son had been complaining for a couple of weeks that his shoes were a tight fit (he was proud that he’s growing!). So last Sunday, we went shoe shopping. His new shoes cost $25, and while checking out, I paused and reflected on how lucky I am to be able to afford them. But I also thought about the many families living paycheck-to-paycheck, for whom tariffs pushing shoes to $40 a pair would mean letting their kids wear ill-fitting shoes longer. I also thought about people I’ve met in clothing manufacturing plants in Asia and Latin America, for whom reduced demand would mean less work and less money to take home to their own kids.

I don’t know what will happen next with the U.S. tariffs, and plenty of international trade will happen with or without U.S. involvement. I hope we can return to a world of vibrant global trade with strong, rules-based, U.S. participation. Until then, let’s all of us in AI keep nurturing our international friendships, keep up the digital flow of ideas (including, specifically, open source software), and keep supporting each other. Let’s all do what we can to keep the world as connected as we are able.

[I had written this letter before the 90-day pause on the tariffs, but am sharing it here since many of the points are still relevant whatever happens next.] Original text: deeplearning.ai/the-batch/issu…
192 replies · 359 reposts · 2.9K likes · 334.9K views
Brunel IDA retweeted
Jason Wei @_jasonwei
New benchmark for deep research agents! An agent that is creative and persistent should be able to find any piece of information on the open web, even if it requires browsing hundreds of webpages. Models that exercise this ability are like a frictionless interface to the internet.

BrowseComp, which stands for “Browsing Competition”, has questions that leverage the asymmetry of verification. Humans start from a piece of information, and then write a question in such a way that it would be challenging to find the answer, but easy to verify the answer.

This is an example of a BrowseComp-style question: Give me the paper published in EMNLP between 2018-2023 where the first author did their undergrad at Dartmouth and the fourth author did their undergrad at UPenn. (Answer: Frequency Effects on Syntactic Rule Learning in Transformers)

BrowseComp can be seen as an incomplete but useful benchmark for browsing agents. While BrowseComp sidesteps challenges of a true user query distribution, like generating long answers or resolving ambiguity, it measures important core browsing abilities of persistence and creativity in finding information. As a loose analogy, models that solve programming competitions like Codeforces demonstrate high coding capabilities that likely, but not necessarily, generalize to other coding tasks. Similarly, to solve BrowseComp, the model must be proficient at locating hard-to-find pieces of information, but it's not guaranteed that this generalizes to all browsing tasks.

A cool property of BrowseComp is that performance seems to scale with the amount of test-time compute spent trying to solve it!
24 replies · 64 reposts · 634 likes · 52.3K views
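A minimal sketch of the "asymmetry of verification" described in the post above (a hypothetical illustration, not the benchmark's actual grading code): a browsing agent may need to visit hundreds of pages to produce a candidate answer, but checking that candidate against the known reference answer is a single cheap comparison.

```python
# Hypothetical sketch of BrowseComp-style verification, not OpenAI's grader.
# Finding the answer is the hard, open-web part; verifying it is trivial.
import re


def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace for a lenient match."""
    no_punct = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", no_punct).strip()


def verify(candidate: str, reference: str) -> bool:
    """Easy direction of the asymmetry: one string comparison."""
    return normalize(candidate) == normalize(reference)


# Example item, with the question and answer quoted from the post above.
item = {
    "question": (
        "Give me the paper published in EMNLP between 2018-2023 where the "
        "first author did their undergrad at Dartmouth and the fourth author "
        "did their undergrad at UPenn."
    ),
    "answer": "Frequency Effects on Syntactic Rule Learning in Transformers",
}

# An agent would spend nearly all of its effort producing `candidate`;
# scoring it afterwards costs almost nothing.
candidate = "Frequency Effects on Syntactic Rule Learning in Transformers"
print(verify(candidate, item["answer"]))  # True
```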
Brunel IDA retweeted
Steven Feng @stevenyfeng
We are bringing back Stanford’s CS 25 Transformers Course (cs25.stanford.edu) today! It’s open to everybody! This is one of @Stanford's hottest seminar courses, and we open it to the public through Zoom. Lectures start today (Tuesdays), 3-4:20pm PDT, at stanford.zoom.us/j/91661468474?…. Talks will be recorded and released ~2 weeks afterward.

Each week, we invite folks at the forefront of Transformers research to discuss the latest breakthroughs, from LLM architectures like GPT and Gemini to creative use cases in generating art (e.g. DALL-E and Sora), biology and neuroscience applications, robotics, and so forth! Past speakers have included folks from @OpenAI, @GoogleDeepMind, @nvidia, @Meta, @AnthropicAI, etc., such as @karpathy, @geoffreyhinton, @DrJimFan, @ashVaswani, @_jasonwei, @hwchung27, @xiao_ted, @janleike, @YejinChoinka, @douwekiela, and many more! [Attached photos with some of them😎]

Our class has been incredibly popular within and outside Stanford, with over a million total views of our recordings [web.stanford.edu/class/cs25/rec…] on YouTube. Our class with @karpathy was the second most popular YouTube video [youtube.com/watch?v=XfpMkf…] uploaded by Stanford in 2023, with over 750k views!

Also, livestreaming and auditing are available to all. Feel free to audit in person or by joining the Zoom livestream. We also have a Discord server [discord.gg/2vE7gbsjzA] (over 5000 members) used for Transformers discussion. We open it to the public as more of a "Transformers community". Feel free to join and chat with hundreds of others about Transformers!

Thanks to my co-instructors @DivGarg9 @_KaranPS_ @boson2photon Jenny Duan and the course's faculty advisor @chrmanning! More details: cs25.stanford.edu

@StanfordAILab @stanfordnlp @StanfordHAI @agihouse_org #AI #ArtificialIntelligence #ML #DeepLearning #NLP #NLProc #Transformers #Stanford #Education #Innovation #TechEd #Community #naturallanguageprocessing
1 reply · 87 reposts · 444 likes · 42.7K views
Brunel IDA retweeted
Pierpaolo Vivo @PierpaoloVivo
📌 Summer School Announcement 📌 I have the great pleasure to announce the first Lake Como Summer School on "Legal Complexity and Technology: Advanced Tools for the Legal Sphere" - all info below
1 reply · 8 reposts · 10 likes · 1.4K views
Brunel IDA retweeted
Stanford NLP Group @stanfordnlp
An introductory talk by Christopher Manning @chrmanning on “Large Language Models in 2025 – How much understanding and intelligence?” at the Workshop on a Public AI Assistant to Worldwide Knowledge at @Stanford, covering 3 eras of LLMs, RAG, Agents, DeepSeek-R1, using LLMs, ….
7 replies · 156 reposts · 834 likes · 74.2K views
Brunel IDA retweeted
Jason Wei @_jasonwei
In today’s competitive product landscape, scientific understanding of models often lags behind the speed of model deployment. If the goal is to train a deployable model (especially when bottlenecked by compute), it totally makes sense to make several changes at a time without studying the effect of each one in isolation.

Although this is not ideal, the bright side is that when we find time to go back and ablate how each component contributes to model training, it can be a gold mine of interesting ML phenomena. It's like being an adventurer on an unexplored planet filled with undocumented, never-before-seen alien life forms.
16 replies · 21 reposts · 292 likes · 35.7K views
Brunel IDA retweeted
Andrew Ng @AndrewYNg
To all my AI friends: You must be a good prompt, because whenever we chat, you complete me. Happy Valentine's Day! ❤️
174 replies · 130 reposts · 1.8K likes · 105.2K views
Brunel IDA retweeted
Stanford NLP Group @stanfordnlp
The final admission that the 2023 strategy of OpenAI, Anthropic, etc. (“simply scaling up model size, data, compute, and dollars spent will get us to AGI/ASI”) is no longer working!
Sam Altman @sama

OPENAI ROADMAP UPDATE FOR GPT-4.5 and GPT-5:

We want to do a better job of sharing our intended roadmap, and a much better job simplifying our product offerings. We want AI to “just work” for you; we realize how complicated our model and product offerings have gotten. We hate the model picker as much as you do and want to return to magic unified intelligence.

We will next ship GPT-4.5, the model we called Orion internally, as our last non-chain-of-thought model. After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks.

In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model.

The free tier of ChatGPT will get unlimited chat access to GPT-5 at the standard intelligence setting (!!), subject to abuse thresholds. Plus subscribers will be able to run GPT-5 at a higher level of intelligence, and Pro subscribers will be able to run GPT-5 at an even higher level of intelligence. These models will incorporate voice, canvas, search, deep research, and more.

55 replies · 113 reposts · 1.3K likes · 310.8K views
Brunel IDA retweeted
Soumith Chintala @soumithchintala
i'm comically impressed that people are coping on deepseek by spewing bizarre conspiracy theories -- despite deepseek open-sourcing and writing some of the most detail-oriented papers ever. read. replicate. compete. don't be salty; it just makes you look incompetent.
94 replies · 392 reposts · 4.1K likes · 313.2K views
Brunel IDA retweeted
SIGKDD 2026 @kdd_news
We are excited to announce the launch of the Datasets and Benchmarks Track! It aims to serve as a venue for the presentation of high-quality datasets, benchmarks, and tools that are essential for advancing research and applications in data science, data mining, and data-centric ML.
1 reply · 9 reposts · 30 likes · 2.9K views
Brunel IDA retweeted
Yann LeCun @ylecun
To people who think "China is surpassing the US in AI," the correct thought is "Open source models are surpassing closed ones." See ⬇️⬇️⬇️
484 replies · 1.2K reposts · 12K likes · 1.2M views
Brunel IDA retweeted
Jason Wei @_jasonwei
Somewhat meta, but there is a dopamine cycle in doing AI research that is pretty interesting.

Every day you wake up and you think about what experiment to run. You think thing X matters, so you decide to improve it or ablate it. Then you write the code and pay some compute to find out the answer. Then you get dopamine or confusion depending on whether the result matched expectations.

For experiments that are small scale, you can get several reward signals quickly and develop your intuition. Other times you make a big bet that is non-obvious or controversial and go through a period of hard work, dopamine starvation, and uncertainty to do it. If it works out it's an immense high, but if it fails it's natural to be consumed by helplessness.

Sometimes there is an element of ego. You may believe in X or not, and keep working to find evidence to support it or excuses for why it doesn't apply in that experiment. There can be an underlying urge either to prove your abilities or to show that you're still relevant in the quickly changing landscape. Science is supposedly objective, but the reality is that there is a huge social aspect to it, and the respect of peers is a valuable reward signal to seek.

That's how I feel about it, at least. Every day is a small journey further into the jungle of human knowledge. Not a bad life at all, and one I'm willing to live for a long time.

Also cool if your research gets deployed into product.
25 replies · 32 reposts · 482 likes · 43.8K views
Brunel IDA retweeted
Brunel University of London
This Saturday, we will pause to reflect on a year since the passing of @BZephaniah, our beloved Professor of Creative Writing. When asked about his legacy, he simply said: "love". We honour that and extend our thoughts and love to all who feel his loss most deeply. 🤍🙏
0 replies · 39 reposts · 142 likes · 4K views