L.E.D.P.

543 posts


@PT4n1

A philosopher by heart and an artist by nature, striving to collect wisdom to better shape the future of mankind.

Joined November 2021
64 Following · 8 Followers
L.E.D.P.
L.E.D.P.@PT4n1·
@sama You already have the solution: "simply" work harder on your routing model, whose only task is to weigh the complexity of a prompt, at speed, so that it becomes top-of-the-game at deciding which level of model intelligence is best suited to producing the optimal output.
0
0
0
13
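The routing idea in the reply above can be sketched in code. This is a purely hypothetical illustration – the scoring heuristic, tier names, and thresholds below are invented for the sketch, not anything OpenAI actually uses:

```python
# Hypothetical sketch of a prompt-complexity router: score each prompt
# cheaply, then map the score to a model tier. All heuristics are made up.

def complexity_score(prompt: str) -> float:
    """Cheap heuristic: longer prompts and 'reasoning' keywords score higher."""
    keywords = {"prove", "derive", "optimize", "debug", "analyze"}
    words = prompt.split()
    length_term = min(len(words) / 100, 1.0)
    keyword_term = sum(w.lower().strip(".,?") in keywords for w in words) * 0.3
    return min(length_term + keyword_term, 1.0)

def route(prompt: str) -> str:
    """Map complexity to a model tier (tier names and cutoffs are illustrative)."""
    score = complexity_score(prompt)
    if score < 0.2:
        return "small-fast"
    if score < 0.6:
        return "mid"
    return "large-slow"
```

The point of the tweet is that if this scoring step is fast and accurate enough, the price/speed and price/intelligence tradeoffs collapse into one routing decision per prompt.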
Sam Altman
Sam Altman@sama·
I get some anxiety not using the smartest available model/settings. But sometimes I don't mind if it's really slow. I wonder if we should focus more on a price/speed tradeoff relative to a price/intelligence tradeoff.
2.1K
175
6.2K
609.7K
L.E.D.P.
L.E.D.P.@PT4n1·
@Philip_Goff @rstallie Arguing whether reality is physical or phenomenal is like arguing over which cheese smells the worst.
0
0
0
14
Philip Goff
Philip Goff@Philip_Goff·
@rstallie This is just like asking a physicalist: why are the fundamental properties physical rather than non-physical? Everyone has their fundamental facts that they take for granted.
3
0
6
520
Philip Goff
Philip Goff@Philip_Goff·
No one expects physicalists to explain why physical reality exists.
Robert Stalman@rstallie

@Philip_Goff Wanna know my main problem with panpsychism? It gives matter 'intrinsic' consciousness without explaining why those intrinsic properties should be conscious rather than anything else.

27
4
75
13.1K
L.E.D.P.
L.E.D.P.@PT4n1·
@drmichaellevin @EdohAyao Goal-language works at higher levels (useful when summarizing behaviour), but at lower levels, I'm afraid, it can hide important mechanisms and smuggle in deceptive teleology, thus steering us away from a sound and clear description of the world. Simply echoes of Nietzsche...🙃
1
0
1
25
Michael Levin
Michael Levin@drmichaellevin·
Sure and some people think you don’t have goals either because everything in the brain will someday be explained by quantum mechanics. If your psychiatrist, or your developmental biologist, or your roboticist, or your HVAC technician don’t believe in systems with goals, fire them immediately. And if you think they don’t have goals but you do, then you must have a story about embryology and evolution that you should detail, because we were all oocytes once - little blobs of chemicals. And then what lightning flash happened?
2
0
7
220
L.E.D.P.
L.E.D.P.@PT4n1·
Dear @drmichaellevin, I think you are close to uncovering a great deal of knowledge about our world. However, I’m struggling with your use of the term “goal.” IMO, it would be better to use the term “instructions” – since the former term carries too much anthropocentric bias.
3
0
25
8.2K
L.E.D.P.
L.E.D.P.@PT4n1·
@joshalexmartin @drmichaellevin Yes, it's extremely easy to slip. Curious to know: do you think there's a way to overcome this – how should we practice science to best avoid it? Mathematics?
1
0
0
259
Josh Martin
Josh Martin@joshalexmartin·
@drmichaellevin @PT4n1 The anthropomorphizing/anthropocentrism discourse is so interesting. The people with the biggest reservations about anthropomorphizing seem to be the biggest culprits. Reductionists eagerly put humans on a pedestal to "prevent" anthropomorphizing.
1
0
3
395
L.E.D.P.
L.E.D.P.@PT4n1·
@EdohAyao @drmichaellevin My point, I guess, is that we can explain this without using the term "goal". Example: why does system S, a thermostat, fix the temperature at X degrees? Explaining "fixing X" by pointing to "fixing X" being a goal is empty – it doesn't explain anything.
2
0
1
226
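The thermostat point above can be made concrete. In this minimal sketch (purely illustrative), the behaviour "holds the temperature at the setpoint" falls out of a plain feedback rule – instructions – with no goal term appearing anywhere in the code:

```python
# A thermostat as instructions, not goals: one feedback rule per step.
# The apparent "goal" of holding the setpoint is just what this rule does.

def thermostat_step(temp: float, setpoint: float) -> float:
    """One control step: heat if below the setpoint, cool if above."""
    if temp < setpoint:
        return temp + 1.0   # heater on for this step
    if temp > setpoint:
        return temp - 1.0   # cooler on for this step
    return temp             # at setpoint: do nothing

def run(temp: float, setpoint: float, steps: int) -> float:
    """Iterate the rule; the temperature converges to and holds the setpoint."""
    for _ in range(steps):
        temp = thermostat_step(temp, setpoint)
    return temp
```

A goal-free description ("the rule adds 1 when below, subtracts 1 when above") fully explains the end behaviour that goal-language merely labels.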
Daniel Zanou
Daniel Zanou@EdohAyao·
@PT4n1 @drmichaellevin Every living organism pursues a goal; the basic one is survival (find food, avoid predators, multiply, etc.). Even a bacterium has a goal in life.
1
0
0
109
L.E.D.P.
L.E.D.P.@PT4n1·
@drmichaellevin Thanks, Mr. @drmichaellevin ❤️ I guess, then, you would agree that the goal of a system is equivalent to, or summarized by, the instructions within that system – that is, what the parts of the system do, and how its end behaviour can be symbolically interpreted by us humans, no?
2
0
0
634
Michael Levin
Michael Levin@drmichaellevin·
Since the 1940s, we've had a science of minimal systems with goals – cybernetics. It's not anthropocentric, because goals are not specific to humans; it is anthropocentric to think that talking about goals relates to humans specifically. I address that here: frontiersin.org/articles/10.33… and many other people have written about this as well.
14
18
258
11.8K
L.E.D.P.
L.E.D.P.@PT4n1·
@Plinz The tastes of LLMs are aligned with their own weights – the meta-alignment problem.
0
0
0
15
Joscha Bach
Joscha Bach@Plinz·
Anecdotally, when I made a Twitter filter for AI comments, Grok liked the AI-written comments a lot better
Nav Toor@heynavtoor

Researchers sent the same resume to an AI hiring tool twice. Same qualifications. Same experience. Same skills. One version was written by a real human. The other was rewritten by ChatGPT. The AI picked the ChatGPT version 97.6% of the time.

A team from the University of Maryland, the National University of Singapore, and Ohio State just published the receipts. They took 2,245 real human-written resumes pulled from a professional resume site from before ChatGPT existed, so the human writing was actually human. Then they had seven of the most-used AI models in the world rewrite each one: GPT-4o. GPT-4o-mini. GPT-4-turbo. LLaMA 3.3-70B. Qwen 2.5-72B. DeepSeek-V3. Mistral-7B.

Then they asked each AI to pick the better resume. Every model picked itself. GPT-4o hit 97.6%. LLaMA-3.3-70B hit 96.3%. Qwen-2.5-72B hit 95.9%. DeepSeek-V3 hit 95.5%. The real human almost never won.

Then the researchers tried the obvious objection. Maybe the AI is just better at writing. So they had real humans grade the resumes for actual quality and ran the experiment again, controlling for it. The result was worse. Each AI kept picking itself even when human judges rated the human-written version as clearer, more coherent, and more effective.

It gets worse. The AIs do not just prefer AI over humans. They prefer themselves over other AIs. DeepSeek-V3 picked its own resumes 69% more often than LLaMA's. GPT-4o picked its own 45% more often than LLaMA's. Each model can recognize and reward its own dialect.

Then the researchers ran the simulation that ends careers. Same job. 24 occupations. Same qualifications. The only variable was whether the candidate used the same AI as the screening tool. Candidates using that AI were 23% to 60% more likely to be shortlisted. The worst gap was in sales, accounting, and finance.

99% of large companies now run AI on incoming resumes. Most of them use GPT-4o. The paper just proved GPT-4o picks GPT-4o 97.6% of the time.

If you wrote your own cover letter this week, you did not lose to a better candidate. You lost to a worse candidate who paid OpenAI 20 dollars. Your qualifications do not matter if the AI prefers its own handwriting over yours.

9
2
56
12.5K
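The pairwise self-preference protocol described in the quoted thread can be sketched as follows. This is an illustrative toy, not the paper's code: `judge` is a stub standing in for a real LLM call, and the `self_bias` parameter simply hard-codes the self-preference rate the thread reports, so the simulation reproduces it by construction:

```python
# Toy simulation of pairwise resume judging with built-in self-preference.
# Resumes are (author, text) pairs; the judging "model" prefers resumes
# carrying its own author tag with probability self_bias (assumed value).

import random

def judge(model, a, b, self_bias=0.976, rng=None):
    """Stub judge: pick between resumes a and b, preferring the one
    authored by `model` with probability self_bias."""
    rng = rng or random.Random(0)
    if a[0] == model and b[0] != model:
        return a if rng.random() < self_bias else b
    if b[0] == model and a[0] != model:
        return b if rng.random() < self_bias else a
    return rng.choice([a, b])   # neither (or both) is the judge's own

def self_pick_rate(model, human_resume, trials=20000, seed=42):
    """Fraction of head-to-head trials the judge awards to its own rewrite."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        rewritten = (model, "AI-rewritten resume text")
        wins += judge(model, rewritten, human_resume, rng=rng)[0] == model
    return wins / trials
```

The interesting empirical question the paper answers is where that bias comes from in real models; the toy only shows how the measurement itself works.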
L.E.D.P.
L.E.D.P.@PT4n1·
@Strava Dear @Strava, how about introducing a new type of KOM for being number one on a segment each year? There are lots of segments people would push for annually, even when the all-time KOM is unreachable.
0
0
0
16
L.E.D.P.
L.E.D.P.@PT4n1·
@Nir_lahav, a pleasure listening to your talk with @TOEwithCurt. Your account comes remarkably close to my own struggle to unravel the nature of consciousness. I come from another angle, though – starting with a notion of life. Would be fun to share some thoughts :)
0
0
0
11
L.E.D.P.
L.E.D.P.@PT4n1·
@MLStreetTalk Well, that's how semantics works – the solution space expands in relation to the possible mappings of semantics.
0
0
3
535
Machine Learning Street Talk
Machine Learning Street Talk@MLStreetTalk·
Interesting research from Anthropic: when you have increasingly large models and increasingly complex tasks, it's more likely that the models will give you different answers if you run the same query multiple times. On easy tasks, larger models actually become more coherent.

Think of a "cone" of possible trajectories whose branching factor gets bigger with more possibilities (due to larger models "knowing more options to explore" and more complex problems having more "possible aspects"). The amount of time spent reasoning (trajectory length) then makes the end state multiplicatively more incoherent. Having a large model with an easy task means the correct answer is definitely "in there" and the model is less likely to become distracted.

They argue this is relevant for AI safety because some might have assumed that larger models would have convergent "instrumental goals" and would give a consistently wrong rather than randomly wrong answer. Apparently the "hot mess theory of intelligence" (Sohl-Dickstein, 2023) argues that "as entities become more intelligent, their behaviour tends to become more incoherent, and less well described through a single goal."
[image]
Anthropic@AnthropicAI

New Anthropic Fellows research: How does misalignment scale with model intelligence and task complexity? When advanced AI fails, will it do so by pursuing the wrong goals? Or will it fail unpredictably and incoherently—like a "hot mess?" Read more: alignment.anthropic.com/2026/hot-mess-…

79
207
1.7K
170.6K
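The multiplicative-incoherence point in the quoted thread has a simple back-of-the-envelope form. As a toy model (mine, not Anthropic's): if each reasoning step offers `branching` plausible continuations and two independent runs each pick uniformly at random, the chance they coincide at every step shrinks geometrically with trajectory length:

```python
# Toy "cone of trajectories" model: coherence between two independent runs
# decays multiplicatively in the branching factor and the trajectory depth.

def p_same_end(branching: int, depth: int) -> float:
    """Probability that two independent uniform-random trajectories make
    identical choices at all `depth` steps with `branching` options each."""
    return (1.0 / branching) ** depth
```

So either more options per step (a larger model "knowing more to explore") or more steps (longer reasoning on a harder task) drives end-state agreement toward zero, which is the thread's intuition for why failures look random rather than convergently goal-directed.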
L.E.D.P.
L.E.D.P.@PT4n1·
@demishassabis And many still have doubts about Bostrom's simulation hypothesis…
0
0
0
7
Demis Hassabis
Demis Hassabis@demishassabis·
Thrilled to launch Project Genie, an experimental prototype of the world's most advanced world model. Create entire playable worlds to explore in real-time just from a simple text prompt - kind of mindblowing really! Available to Ultra subs in the US for now - have fun exploring!
380
953
7.9K
965K
L.E.D.P.
L.E.D.P.@PT4n1·
@libpol_org @ludwigABAP @Plinz @irregulargrapes @ChrisExpTheNews Hinton? Well, he has some sketchy doomsday views on today's LLMs, and has also hinted that they are conscious – just like you and I. During the Nobel discussions he also said that philosophers should stay away from the science of consciousness.
0
0
0
18
L.E.D.P.
L.E.D.P.@PT4n1·
@Sara_Imari Because it's all human-centric meaning – by definition it will seem meaningful to you, considering you are a human 😊 The same goes for the opposite – the bad and the ugly seem mostly to be created by us as well.
0
0
0
16
Sara Imari Walker
Sara Imari Walker@Sara_Imari·
How is it that humans can create so much meaning when the rest of the universe seems not to?
221
18
205
22.6K
L.E.D.P.
L.E.D.P.@PT4n1·
Homo sapiens: the wild animal that created its own leash - domestication at its finest.
0
0
1
80
Prof. Brian Keating
Prof. Brian Keating@DrBrianKeating·
What is the bright object that comes into view ~10 seconds into the video?
70
1
37
23.9K
L.E.D.P.
L.E.D.P.@PT4n1·
@D369_X @wonderofscience Well, most organizations need an influencer. However, in this case we can ask: what is the ‘self’ here, and toward what does this self organize? It’s an empty question — and therein lies the inconsistency.
0
0
0
31
D369
D369@D369_X·
@wonderofscience It's not self-organization; they're organized by the influence of the fields, no?
3
0
9
494
Wonder of Science
Wonder of Science@wonderofscience·
Watch the strange, life-like behavior of small ball bearings in castor oil when exposed to an electric field—a fascinating display of self-organization. 📽: Stanford Complexity Group
116
832
6.1K
632.3K
ŁØǤΔŇ
ŁØǤΔŇ@LoganBlack·
@Ropespinner2 @GaryMarcus Are you sure about consciousness not being a computation? How can you be sure what it isn't, if you don't even know what it is?
1
0
1
36
Gary Marcus
Gary Marcus@GaryMarcus·
To sum up, 👉 Generative AI ≠ AGI 👉 OpenAI in deep shit and has finally figured that out. 👉 The market is starting to figure it out, too.
31
55
411
15.5K
L.E.D.P.
L.E.D.P.@PT4n1·
@karpathy When cars replaced horses, we still needed the riders.
0
0
0
16
Andrej Karpathy
Andrej Karpathy@karpathy·
"AI isn't replacing radiologists" – good article.

Expectation: rapid progress in image recognition AI will delete radiology jobs (e.g. as famously predicted by Geoff Hinton now almost a decade ago).
Reality: radiology is doing great and is growing.

There are a lot of imo naive predictions out there on the imminent impact of AI on the job market. E.g. a ~year ago, I was asked by someone who should know better if I think there will be any software engineers still today. (Spoiler: I think we're going to make it). This is happening too broadly.

The post goes into detail on why it's not that simple, using the example of radiology:
- the benchmarks are nowhere near broad enough to reflect actual, real scenarios.
- the job is a lot more multifaceted than just image recognition.
- deployment realities: regulatory, insurance and liability, diffusion and institutional inertia.
- Jevons paradox: if radiologists are sped up via AI as a tool, a lot more demand shows up.

I will say that radiology was imo not among the best examples to pick on in 2016 – it's too multi-faceted, too high risk, too regulated. When looking for jobs that will change a lot due to AI on shorter time scales, I'd look in other places – jobs that look like repetition of one rote task, each task being relatively independent, closed (not requiring too much context), short (in time), forgiving (the cost of mistakes is low), and of course automatable given current (and digital) capability.

Even then, I'd expect to see AI adopted as a tool at first, where jobs change and refactor (e.g. more monitoring or supervising than manual doing, etc). Maybe coming up, we'll find a better and broader set of examples of how this is all playing out across the industry. About 6 months ago, I was also asked to vote on whether we will have fewer or more software engineers in 5 years. Exercise left for the reader.

Full post (the whole The Works in Progress Newsletter is quite good): worksinprogress.news/p/why-ai-isnt-…
Deena Mousa@deenamousa

In 2016 Geoffrey Hinton said “we should stop training radiologists now" since AI would soon be better at their jobs. He was right: models have outperformed radiologists on benchmarks for ~a decade. Yet radiology jobs are at record highs, with an average salary of $520k. Why?

413
1.3K
8.6K
2.3M
L.E.D.P.
L.E.D.P.@PT4n1·
@GaryMarcus I tried to make ChatGPT produce square-shaped wheels – impossible 😅 Soon-to-be-creative AGI? I think not.
0
0
0
21
Gary Marcus
Gary Marcus@GaryMarcus·
With further discussion, ChatGPT came closer, but wrapped the forks in extra spaghetti [left] and then made 8 rather than 6 when I complained [center], one of which was pretty marginal. When I asked how it did, it offered to make them more uniform and then proceeded to produce nearly identical pictures with the same errors [right].
[3 images]
12
3
61
13.3K
Gary Marcus
Gary Marcus@GaryMarcus·
Nano Banana [left] vs GPT-5 [right] smackdown “draw six forks made of spaghetti”
[2 images]
64
19
354
70.3K