Oscar Balcells Obeso
@OBalcells

41 posts
Joined February 2022
522 Following · 974 Followers

Pinned Tweet
Oscar Balcells Obeso @OBalcells
Imagine if ChatGPT highlighted every word it wasn't sure about. We built a streaming hallucination detector that flags hallucinations in real-time.
Oscar Balcells Obeso retweeted
Leo Gao @nabla_theta
@boazbaraktcs - what happens when the model/safety stack refuses DoW queries? if the DoW gets mad and strongarms openai, like they just did to anthropic, how is openai going to resist? especially if openai doesn't even have the strong contractual protection
Oscar Balcells Obeso retweeted
Anthropic @AnthropicAI
A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War. anthropic.com/news/statement…
Oscar Balcells Obeso retweeted
roon @tszzl
it’s just so clear humans are the bottleneck to writing software. number of agents we can manage, information flow, state management. there will just be no centaurs soon as it is not a stable state
Oscar Balcells Obeso retweeted
Ethan Perez @EthanJPerez
Fellows grads have started to get a reputation as some of the steepest trajectory researchers at Anthropic. So we’re excited to expand the program and help mentor more new AI safety researchers
Anthropic @AnthropicAI

We’re opening applications for the next two rounds of the Anthropic Fellows Program, beginning in May and July 2026. We provide funding, compute, and direct mentorship to researchers and engineers to work on real safety and security projects for four months.

Oscar Balcells Obeso retweeted
Leo Gao @nabla_theta
New post: An Ambitious Vision for Interpretability. Understanding is essential for ensuring things don't break unexpectedly. AMI is a big risky bet, but so is all ambitious research. AMI is tractable: it has good empirical feedback loops, and we've already made a lot of progress.
Leo Gao tweet media
Oscar Balcells Obeso retweeted
Neel Nanda @NeelNanda5
The GDM mechanistic interpretability team has pivoted to a new approach: pragmatic interpretability. Our post details how we now do research, why now is the time to pivot, why we expect this way to have more impact, and why we think other interp researchers should follow suit.
Neel Nanda tweet media
Oscar Balcells Obeso retweeted
@levelsio
🇪🇺 As a European citizen and AI founder, I can apparently use these "AI Factories", so I just signed up to use them!

Every "supercomputer" has an [ ACCESS NOW ] button which made me very excited

I expected to sign up, maybe pay a discounted H100 rate (funded by EU, that'd be nice?) and get a Jupyter notebook, or some SSH login so I can access my GPU like I'd do on @lambdaapi or @awscloud or @Hetzner_Online

But I celebrated too early. I signed up, confirmed my email, then ended up in a "Supercomputer Access Calls" page, where I had to select from a tedious list of "Call For Proposals" to get access to a GPU

So I could NOT just access an H100 GPU, I have to make sure my project (in this case my business) fits a specific proposal, ok fair

This process was already tedious enough, but then when I tried to actually go through with it, it started asking me if I had "Respect for Human Agency?", I do I think, and if I was mindful of "Individual, and Social and Environmental Well-Being?", well I am, right guys??? Right???

The questions didn't stop, just endless pages of this

Look, I get what they're doing: they pivoted the classic university "I need to rent a giant computer for my research" to an EU-wide thing and then present it as the "European AI plan"

But this isn't really how AI works in production? As a founder in AI, if I wanna do stuff I'd rent a whole bunch of H100 GPUs again at @lambdaapi or @awscloud or @Hetzner_Online and SSH into a box

Or if I want it more simple I run AI models on @FAL, @wavespeed or @replicate, which is just an API call or a web front end where I can click stuff and run a model

The EU has the right intentions here but it's just the wrong execution. This thing will 100% go nowhere, and I'm a born optimist, I want to believe, I'm also a proud European, and I'm in AI a bit and not a complete idiot.

There are just better ways to do this. If you really want to have the GPU servers in Europe (which arguably isn't that important), then let me rent a GPU box with SSH access at @Hetzner_Online or @OVHcloud that's hosted in Europe and subsidize that for European citizens and European businesses. I don't even believe in that, but at least that'd make it accessible for Europeans. Now it really isn't?

What's REALLY much more important though, if you want to be a part of the AI race (and I've posted for years here with @euaccofficial), is to make Europe a really extremely attractive place to start and run an AI business. Remove regulatory obstructions and give tax discounts for startups. Let them build a business first that can compete worldwide, and once they make enough money (let's say $100M/y), then slowly start adding regulation.

Because right now the regulation only benefits the European incumbents, the dinosaur companies, while making it very difficult for European citizens to start new AI companies here. Which is why we literally have none left.

Anyway, I applied to get my GPU, let's see if I get it!
@levelsio

What in the F is an AI factory? I had to investigate what the unelected @EU_Commission is talking about today

So according to them, it's some data centers (which they call supercomputers) in 6 different EU countries

I checked out the most powerful one: Karolina, a Czech data center. It mostly has CPUs though (see pic), not GPUs, so mostly useless for AI

The GPUs it does have are 72x 8x NVIDIA A100 GPUs, so 576x A100, or the equivalent of 240x H100s (an H100 has about 2.4x the compute power of an A100)

So let's compare that: @xAI has 200,000x H100 GPUs

So the xAI data center has 800x more compute than the Czech one

If we combine xAI, Meta, AWS, etc. it's about 750,000 H100s

If we assume the other 5 data centers in the EU are equivalent to the Czech one (which is a massive stretch because most of the others seem to be AI consultancy services, they don't even HAVE chips!), the EU's new "AI factories" have a total of 1,440x H100 GPUs, let's round up to 1,500 to be nice

So the EU is trying to compete with 750,000 GPUs with their own 1,500 GPUs, so 500x less??

Correct me if I'm wrong but it just seems very low impact and another ridiculous idea and burning of EU taxpayers' money that will end up with local cronies and bureaucrats and will do NOTHING to improve the AI business climate for Europe

The best way to improve it is to deregulate, make it super easy and low-tax (especially when starting out) to start AI companies in Europe

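The compute comparison in the thread above can be sanity-checked in a few lines. This is a sketch using only the tweet's own figures and its assumed H100:A100 ratio of 2.4, none of which are independently verified here:

```python
# Sanity-check of the tweet's compute comparison; all figures are the
# tweet's own estimates, not independently verified numbers.
A100_PER_H100 = 2.4                 # tweet's assumed H100:A100 compute ratio

karolina_a100s = 72 * 8             # 576 A100s in the Czech Karolina cluster
karolina_h100_eq = karolina_a100s / A100_PER_H100   # ~240 H100-equivalents

eu_h100_eq = 6 * karolina_h100_eq   # assume all 6 sites match Karolina: ~1,440
us_h100s = 750_000                  # tweet's combined xAI + Meta + AWS estimate

print(round(karolina_h100_eq))      # 240
print(round(eu_h100_eq))            # 1440
print(round(us_h100s / 1_500))      # 500
```

The arithmetic checks out internally: 576 A100s ≈ 240 H100s, six such sites ≈ 1,440 (rounded up to 1,500 in the tweet), and 750,000 / 1,500 = 500x.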
Oscar Balcells Obeso retweeted
Andy Arditi @andyarditi
We found "misaligned persona" features in Llama and Qwen that mediate emergent misalignment. Fine-tuning on bad medical advice strengthens these pre-existing features, causing broader undesirable behavior. lesswrong.com/posts/NCWiR8K8…
Oscar Balcells Obeso @OBalcells
@koltregaskes Ah I see. The annotations are quite expensive to do: ~1M tokens and 15 Google searches to annotate a single completion. You could scale this up with a larger token (or API) budget.
Kol Tregaskes @koltregaskes
@OBalcells Perhaps I'm misunderstanding but you say "We built a large-scale dataset with 40k+ annotated long-form samples across 5 different open-source models".
Oscar Balcells Obeso @OBalcells
Imagine if ChatGPT highlighted every word it wasn't sure about. We built a streaming hallucination detector that flags hallucinations in real-time.
Oscar Balcells Obeso @OBalcells
@MacGraeme42 It’s not based on the token probabilities. What we train is a simple binary linear classifier (or a more complex one) on the internal activations of the model.
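A minimal sketch of what such a linear probe could look like. This is illustrative, not the authors' code: the real probe would be trained on hidden states captured from the LLM during generation, which are replaced here by synthetic activations with synthetic labels:

```python
import numpy as np

# Illustrative sketch (not the authors' code) of a binary linear probe
# on model activations. Real activations would be LLM hidden states;
# here they are random vectors with synthetically separable labels.
rng = np.random.default_rng(0)
d_model, n = 32, 500
X = rng.normal(size=(n, d_model))       # stand-in "activations"
w_true = rng.normal(size=d_model)
y = (X @ w_true > 0).astype(float)      # synthetic "hallucinated?" labels

# Logistic-regression probe trained with plain gradient descent.
w = np.zeros(d_model)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))      # predicted P(hallucination)
    w -= 0.1 * X.T @ (p - y) / n        # cross-entropy gradient step

acc = ((X @ w > 0) == (y == 1)).mean()
print(f"train accuracy: {acc:.2f}")
```

At inference time such a probe is just a dot product per token, which is what makes flagging spans in a streaming setting cheap.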
Scott Graham @MacGraeme42
@OBalcells is this based on the token probabilities output by the LLM? Or is it RLHF post-training on what is/is not factual?
MrUmberto @MrUmberto_
@OBalcells Useless if it highlights half of the text.
Oscar Balcells Obeso @OBalcells
@thelokasiffers @antirez Yep, I have found the logprobs to be quite useful in some cases to spot-check the factuality of completions. We include this as a baseline in our paper.
Oscar Balcells Obeso @OBalcells
@_aftz Perplexity (or equivalently the logprobs) are a baseline we compare to.
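For reference, a logprob/perplexity baseline of the kind mentioned above works roughly like this (a generic sketch, not the paper's implementation): take the model's per-token logprobs, compute perplexity over a span, and flag tokens the model itself assigned low probability. The token strings and logprob values below are made up for illustration:

```python
import math

def perplexity(logprobs):
    """Perplexity is exp of the mean negative logprob over the span."""
    return math.exp(-sum(logprobs) / len(logprobs))

def flag_uncertain(tokens, logprobs, threshold=-4.0):
    """Crude baseline: flag tokens the model itself found unlikely."""
    return [t for t, lp in zip(tokens, logprobs) if lp < threshold]

tokens = ["The", "Eiffel", "Tower", "opened", "in", "1887"]
logprobs = [-0.1, -0.5, -0.2, -1.0, -0.1, -5.2]   # made-up values
print(perplexity(logprobs))
print(flag_uncertain(tokens, logprobs))   # ['1887']
```

Unlike an activation probe, this baseline conflates surprisal with factual uncertainty, which is one reason it can only spot-check factuality rather than reliably detect hallucinations.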
fish @_aftz
@OBalcells How is this different from model perplexity?
Oscar Balcells Obeso @OBalcells
We use some well-known prompt datasets such as HealthBench and Longfact. We also generate our own set of prompts (we call it Longfact++ in the paper). With these prompt datasets we do rollouts with each model and then annotate the completions (i.e. fact-check them) using Claude + search.
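The pipeline described above (prompts → per-model rollouts → fact-checked annotations) can be sketched in outline. Every name here (`generate_rollout`, `fact_check`, the prompt text, the model names) is an illustrative placeholder, not the paper's actual code or data:

```python
# Hedged outline of the annotation pipeline described above.
# All functions and names are hypothetical stand-ins.

def generate_rollout(model, prompt):
    """Placeholder: sample a long-form completion from `model`."""
    return f"<completion from {model} for: {prompt}>"

def fact_check(completion):
    """Placeholder for the Claude + web-search annotation step,
    which labels each claim in a completion as supported or not."""
    return [{"claim": completion, "supported": True}]

prompts = ["Describe the history of the Eiffel Tower."]  # e.g. HealthBench / Longfact items
dataset = []
for model in ["model-a", "model-b"]:
    for prompt in prompts:
        completion = generate_rollout(model, prompt)
        dataset.append({"model": model,
                        "prompt": prompt,
                        "annotations": fact_check(completion)})
print(len(dataset))  # 2
```

The expensive step is `fact_check` (roughly ~1M tokens and 15 searches per completion, per the reply above), which is why the dataset size is bounded by annotation budget rather than rollout compute.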
Oscar Balcells Obeso @OBalcells
This is something we wanted to check but haven’t yet; it would be interesting follow-up work. We’d like to try it out on some honesty datasets to see if it can detect lying. I don’t think the model internally represents lying (deceptively) in the same way as hallucination, but who knows.
Loquacious Bibliophilia @LocBibliophilia
@OBalcells Could it be used to find where the AI may be deceptive, or other forms of misalignment?