Hal Ashton

69 posts

Hal Ashton
@hal_ashton

animate bag of mostly fizzy water with trace elements

London · Joined October 2020
491 Following · 211 Followers
Hal Ashton@hal_ashton·
Getting rid of jury trials in England for all but the most 'serious' crimes is catastrophic. They are a fail-safe that prevents governments from introducing ever more badly defined crimes.
Hal Ashton@hal_ashton·
@xuanalogue aye! I was just thinking about the intellectual prisons that academic fields make for themselves and the effect they have on heterodox thought.
xuan (ɕɥɛn / sh-yen)@xuanalogue·
trying to reclaim rationality from the rationalists
Hal Ashton@hal_ashton·
What a minefield.
Hal Ashton@hal_ashton·
But one might also interpret this as saying AI lacks animosity towards, well, us. :)
Hal Ashton@hal_ashton·
With respect to the Thomas Wolf post on what AI is missing, the word "animus" came to mind. It's not one I can honestly say I have ever used much, though I feel my knowledge of it came from legal literature. Its dual meanings en.wiktionary.org/wiki/animus seem apt for AI.
Hal Ashton@hal_ashton·
Nice read. I would go a lot further than this, but then I'm reliably wrong about many things and I've got some chores to do.
Thomas Wolf@Thom_Wolf

I shared a controversial take the other day at an event and I decided to write it down in a longer format: I'm afraid AI won't give us a "compressed 21st century".

The "compressed 21st century" comes from Dario's "Machines of Loving Grace", and if you haven't read it, you probably should; it's a noteworthy essay. In a nutshell, the essay claims that, within a year or two, we'll have a "country of Einsteins sitting in a data center", and that this will produce a compressed 21st century in which all of the century's scientific discoveries happen in the span of only 5-10 years.

I read this essay twice. The first time I was totally amazed: AI will change everything in science within 5 years, I thought! A few days later I came back to it and, re-reading it, I realized that much of it seemed like wishful thinking at best. What we'll actually get, in my opinion, is "a country of yes-men on servers" (if we just continue on current trends). Let me explain the difference with a small part of my personal story.

I've always been a straight-A student. Coming from a small village, I joined the top French engineering school before getting accepted to MIT for a PhD. School was always quite easy for me. I could see where the professor was going, where the exam's creators were taking us, and could predict the test questions beforehand. That's why, when I eventually became a researcher (more specifically, a PhD student), I was completely shocked to discover that I was a pretty average, underwhelming, mediocre researcher. While many colleagues around me had interesting ideas, I was constantly hitting a wall. If something was not written in a book, I could not invent it unless it was a rather useless variation of a known theory. More annoyingly, I found it very hard to challenge the status quo, to question what I had learned. I was no Einstein; I was just very good at school. Or maybe even: I was no Einstein in part *because* I was good at school.

History is filled with geniuses who struggled during their studies. Edison was called "addled" by his teacher. Barbara McClintock was criticized for "weird thinking" before winning a Nobel Prize. Einstein failed his first attempt at the ETH Zurich entrance exam. And the list goes on. The main mistake people usually make is thinking that Newton or Einstein were just scaled-up good students, that a genius comes to life when you linearly extrapolate a top-10% student. This perspective misses the most crucial aspect of science: the skill to ask the right questions and to challenge even what one has learned. A real science breakthrough is Copernicus proposing, against all the knowledge of his day (in ML terms, "despite all his training data"), that the Earth may orbit the Sun rather than the other way around.

To create an Einstein in a data center, we don't just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask. One that writes "What if everyone is wrong about this?" when all textbooks, experts, and common knowledge suggest otherwise. Just consider the crazy paradigm shift of special relativity and the guts it took to formulate a first axiom like "let's assume the speed of light is constant in all frames of reference", defying the common sense of those days (and even of today…). Or take CRISPR, generally considered to be an adaptive bacterial immune system since the 1980s until, 25 years after its discovery, Jennifer Doudna and Emmanuelle Charpentier proposed to use it for something much broader and more general: gene editing, leading to a Nobel Prize. This type of realization ("we've known XX does YY for years, but what if we've been wrong about it all along? Or what if we could apply it to the entirely different concept of ZZ instead?") is an example of outside-of-knowledge thinking, or paradigm shift, which is essentially what drives the progress of science.

Such paradigm shifts happen rarely, maybe 1-2 times a year, and are usually awarded Nobel Prizes once everybody has taken stock of the impact. However rare they are, I agree with Dario that they take the lion's share in defining scientific progress over a given century, while the rest is mostly noise.

Now let's consider what we're currently using to benchmark improvements in AI model intelligence. Some of the most recent AI tests are, for instance, the grandiosely named "Humanity's Last Exam" or "FrontierMath". They consist of very difficult questions, usually written by PhDs, but with clear, closed-ended answers. These are exactly the kinds of exams where I excelled in my field. These benchmarks test whether AI models can find the right answers to a set of questions we already know the answer to. However, real scientific breakthroughs will come not from answering known questions, but from asking challenging new questions and questioning common conceptions and previous ideas. Remember Douglas Adams' Hitchhiker's Guide? The answer is apparently 42, but nobody knows the right question. That's research in a nutshell.

In my opinion, this is one of the reasons LLMs, while they already have all of humanity's knowledge in memory, haven't generated any new knowledge by connecting previously unrelated facts. They're mostly doing "manifold filling" at the moment: filling in the interpolation gaps between what humans already know, somehow treating knowledge as an intangible fabric of reality. We're currently building very obedient students, not revolutionaries. This is perfect for the field's main goal today of creating great assistants and overly compliant helpers. But until we find a way to incentivize them to question their knowledge and propose ideas that potentially go against past training data, they won't give us scientific revolutions.

If we want scientific breakthroughs, we should probably re-examine how we're currently measuring the performance of AI models and move to a measure of knowledge and reasoning able to test whether scientific AI models can, for instance:
- Challenge their own training-data knowledge
- Take bold counterfactual approaches
- Make general proposals based on tiny hints
- Ask non-obvious questions that lead to new research paths

We don't need an A+ student who can answer every question with general knowledge. We need a B student who sees and questions what everyone else missed.

PS: You might be wondering what such a benchmark could look like. Evaluating it could involve testing a model on some recent discovery it should not know yet (a modern equivalent of special relativity) and exploring how the model might start asking the right questions about a topic whose answers and conceptual framework it has had no exposure to. This is challenging because most models are trained on virtually all human knowledge available today, but it seems essential if we want to benchmark these behaviors. Overall this is really an open question, and I'll be happy to hear your insightful thoughts.

Matija Franklin@FranklinMatija·
I am happy to announce that I've joined the Humanity, Ethics and Alignment Research Team (HEART) at @GoogleDeepMind as a Research Scientist! I'm excited to work alongside Iason Gabriel (@IasonGabriel), Arianna Manzini (@Arianna_Manzini), Nahema Marchal (@nahema_marchal), Canfer Akbulut (@canfer_akbulut), Laura Weidinger (@weidingerlaura), William Isaac (@wsisaac), Atoosa Kasirzadeh (@Dr_Atoosa), and Roberta Fischli (@leonieclaude), as well as other great researchers. I will be contributing to work that aims to anticipate and address the societal implications of advanced AI, to align it with human values, and to guide the design and deployment of AI agents.
Hal Ashton@hal_ashton·
AI harm insurance is an inevitable part of the future. But we generally only buy insurance when a financial loss is in prospect. Unfortunately, harms are often too distributed and imposed on people too powerless to seek redress, even if there were a body of case law to support them.
Philip Moreira Tomei@synchroaphasia

AI governance should work with markets, not against them. Excited to finally share a preprint that @FranklinMatija @rupal15081 & I have been working on.

Hal Ashton retweeted
Matija Franklin@FranklinMatija·
📜New(ish) Paper📜with the great people over at @ContextualAI "LMUNIT: Fine-grained Evaluation with Natural Language Unit Tests" We introduce natural language unit tests, a paradigm that decomposes response quality into explicit, testable criteria, along with a unified scoring model, LMUNIT, which combines multi-objective training across preferences, direct ratings, and natural language rationales.
Hal Ashton retweeted
xuan (ɕɥɛn / sh-yen)@xuanalogue·
It's unfortunate that the "utility maximization" formalization of having a goal (or being usefully described as having one) has displaced other conceptions of goal-directedness that are both more general & more like the folk concept, especially in AI alignment discourse.
Find me on bsky @colin-fraser.net@colin_fraser

interesting things can happen when you tell an LLM to do something. Sometimes it does it. I maintain that the magnitude of this miracle is strongly underrated. But this is distinct from giving it "a goal", which would mean giving it a utility function to relentlessly optimize.

Hal Ashton@hal_ashton·
This article is well worth a read. I reserve judgment on the matter, though the work I did with @FranklinMatija certainly suggested the public are more than happy to consider robotic actors "criminal" when they behave badly, and are imaginative on the subject of punishment.
Criminal Justice Theory Blog@TheoryJustice

Here is @markedsouza1's post on why AI entities can never be treated as responsible agents in anything that is recognisably a system of criminal law. …iminaljusticetheoryblog.wordpress.com/2024/11/29/why… Comments/reactions welcome!

Hal Ashton retweeted
Criminal Justice Theory Blog@TheoryJustice·
In our next post, this Friday, @markedsouza1 argues against recent suggestions that the criminal law should consider treating AI entities as autonomous agents, capable of being criminal defendants in their own right. Look out for that!
Hal Ashton@hal_ashton·
@should_b_workin @brianklaas Indeed. "Or" admits this in every rigorous definition I've seen of it, which makes it strange that in common language we seem to intend the meaning of "Xor" when we say "Or".
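The inclusive vs. exclusive distinction in the tweet above can be made concrete in a few lines of Python (an illustrative sketch, not from the thread; the function names are my own):

```python
# Logical "or" is inclusive: true when either operand is true,
# including the case where both are. "Xor" (exclusive or) is
# true only when exactly one operand is true.

def inclusive_or(a: bool, b: bool) -> bool:
    return a or b

def exclusive_or(a: bool, b: bool) -> bool:
    return a != b  # equivalent to a ^ b for booleans

# The two agree on every input except the (True, True) case:
assert inclusive_or(True, True) is True
assert exclusive_or(True, True) is False
```

The (True, True) row of the truth table is exactly where everyday usage ("coffee or tea?") quietly assumes exclusivity while logicians and programming languages do not.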
Alex Sarch@should_b_workin·
Keep coming back to this line from @brianklaas: "What we want is someone who is good at wielding power for the benefit of us all. But what we’re actually selecting for in our political systems is someone who wants power, is good at getting it, and then never letting go. ..."
Hal Ashton retweeted
Objectively Random@ObjRandom·
Algo Trading Book Recommendations Some of the books I think should be on Algo Traders’ shelves (with some idiosyncratic commentary). docs.google.com/spreadsheets/d…
Hal Ashton retweeted
xuan (ɕɥɛn / sh-yen)@xuanalogue·
I think one interesting thing about sharing a 26-page paper on this website is that no one really reads it but everyone already has takes