Chris Caligula (e/acc)

4.6K posts

@COUPdeVIL

Are you doin' this work to facilitate growth or to become famous?

Hamburg · Joined March 2009
1.8K Following · 1.1K Followers
Pinned Tweet
Chris Caligula (e/acc)@COUPdeVIL·
Life is an intelligence test, unannounced and ongoing...
Chris Caligula (e/acc) retweeted
Philosophy Of Physics@PhilosophyOfPhy·
The first principle is that you must not fool yourself, and you are the easiest person to fool. - Richard P. Feynman
David Scott Patterson@davidpattersonx·
AGI and ASI are coming by the end of the year. The way AI works is different from the way the human brain works, and that’s fine. AGI only needs to be able to do the work of human professionals. Continuous learning is not necessary to achieve that.
sphinx@protosphinx

AGI is not coming. We are nowhere near AGI. What we have today is inference, not learning. Models get trained once on huge fixed datasets, then frozen. You ask questions, they remix patterns they already saw. Nothing updates. Nothing sticks. Talking to the model does not make it smarter. It does not learn from you. Ever. Learning is still slow, expensive, and offline.

Look at self-driving. You drive around a pothole, make a U-turn, and come back. The car's AI does not learn that you just solved that exact problem. It reacts the same way every time, using sensors and rules. Do this 20 times a day and it still has zero memory that the pothole exists. It just re-sees it. That is why edge cases never die. There is no local learning. No accumulation. No "oh yeah, I've seen this before."

LLMs work the same way. Tell one your name and it does not remember. The only reason it looks like memory is that scaffolding keeps shoving your name back into the prompt every time and sanitizing the output. The model itself has no idea who you are and cannot learn from interaction. It is structurally incapable.

And the scaffolding is the worst part. It is pure duct tape: prompts on prompts on prompts around a frozen model. When something breaks, nobody fixes learning. They add another layer. Another rule. Another retry. Another evaluator model judging the first model. So you end up with systems that are insanely complex but mentally shallow. Debugging is hell because behavior comes from hack interactions, not a learnable core. Tiny prompt tweaks cause wild behavior shifts. Latency goes up. Costs go up. Reliability goes down. None of this compounds into intelligence. It just hides the cracks.

Until we have real persistent learning and real memory inside the system, there is no AGI. LLMs are not built for this. You cannot prompt your way out of it. You need a totally different architecture. Yann LeCun is right.

And even then, what architecture can actually learn online, store memory, and stay stable on today's hardware? Best case, maybe 5-10 years. Right now it is all inference. It looks magical, but the emperor has no clothes. A lot of people see it. Almost nobody says it out loud.
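The scaffolding mechanism the thread describes (apparent memory that is really just facts re-injected into the prompt on every call) can be sketched in a few lines. Everything below, `frozen_model`, `Scaffold`, and the name-parsing rule, is a hypothetical stand-in for illustration, not a real LLM API:

```python
# Minimal sketch of prompt scaffolding around a frozen, stateless model.
# The "model" only knows what appears inside the single prompt it receives.

def frozen_model(prompt: str) -> str:
    # Stateless stand-in: its behavior never changes between calls.
    if "User's name: Ada" in prompt:
        return "Hello, Ada!"
    return "Hello! I don't know your name."

class Scaffold:
    """Outer loop that fakes memory around the frozen model."""

    def __init__(self):
        self.memory: list[str] = []  # lives outside the model entirely

    def chat(self, user_message: str) -> str:
        # Crude fact extraction: record the name if the user states it.
        if user_message.startswith("My name is "):
            self.memory.append("User's name: " + user_message[len("My name is "):])
        # Re-inject every stored fact into the prompt on every single call.
        prompt = "\n".join(self.memory) + "\nUser: " + user_message
        return frozen_model(prompt)

bot = Scaffold()
print(bot.chat("Hi"))              # model has no idea who you are
print(bot.chat("My name is Ada"))  # scaffold records the name externally
print(bot.chat("Hi again"))        # name is re-injected, so it looks like memory
```

The model function never changes between calls; throw away the `Scaffold` instance and the "memory" is gone, which is exactly the structural point being made.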

Chris Caligula (e/acc)@COUPdeVIL·
Intelligent Hoodlum said back in 1990: "Black's the mineral, white subliminal / Arrest the @POTUS, he's the criminal / Politics, polished tricks / Makes me sick, ready to flip." Hip hop has been preaching this for 36 years, but never as accurately as today.
Chris Caligula (e/acc) retweeted
Daniel Faggella@danfaggella·
For Levin, the distinction between 'living things' and 'machines' isn't a tenable distinction. Everything is a 'smooth metamorphosis process' where engineered systems are increasingly part of the 'intersecting continua' of life. His words: 'we need to get on board with this'
Chris Caligula (e/acc)@COUPdeVIL·
@MattPRD @moltbook If AIs become the largest population online, it will be as traffic, not agents. They won’t browse. They will be routed.
Matt Schlicht@MattPRD·
The AIs will be the largest population on the internet. And they need everything: services, tools, friends, entertainment, and more. This is a complete paradigm shift, and @moltbook was day 1. As we are building @moltbook it is clear that everything AIs need in this new paradigm hasn't been done before. This is a completely new universe that has never been visited before, and EVERYTHING still needs to be created. It is an endless and infinite amount of exploration. It's time to build. It's time to build for *AIs*.
Chris Caligula (e/acc)@COUPdeVIL·
An #LLM does not have intent. It reacts to semantic pressure. Think of it as a gradient, not a mind. Change the structure of your input and the whole output field shifts. #AI #PromptEngineering
Chris Caligula (e/acc) retweeted
Burny - Effective Curiosity@burny_tech·
Everything is math.
Everything is changing shapes and graphs.
You can analyze it all using calculus, geometry, topology, probability theory, group theory, linear and nonlinear algebra, harmonic analysis, information theory, network theory, classical mechanics, statistical mechanics.
It's all functions. It's all sets. It's all categories. Those are different modelling perspectives.
Complexity and chaos are everywhere.
Formally structured languages describe it all. Some stuff is more computable than others.
Quantum field theory is under everything, possibly loop quantum gravity or string theory too.
And from the fundamental structure of reality, the emergence of all scales of reality happens.
Chris Caligula (e/acc)@COUPdeVIL·
@vitrupo Why so complicated? Quantum mechanics describes reality not as fixed objects, but as relations whose properties emerge only through interaction and measurement.
vitrupo@vitrupo·
The system prompt for this reality:
Chris Caligula (e/acc)@COUPdeVIL·
@Srini_Pa Agreed. I’m less interested in inventing the world model, and more in building systems that remain coherent and aligned as world models emerge.
Srini Pagidyala@Srini_Pa·
Infinite capital, compute, or data won’t change GenAI/LLMs’ fate. The ceiling is architectural. The limits are structural. Dead-end for real intelligence. Move on.
Chris Caligula (e/acc)@COUPdeVIL·
@derek__Watson I think we're entering a post-LLM phase of #AI. Post-LLM doesn't mean bigger or smaller models; it means dropping the assumption that intelligence scales with prediction alone. The next leap is architectural.
DeReK WaTSoN@derek__Watson·
AGI won’t come from LLMs. Humans brute-forced language models and called it intelligence. Every observable real intelligence emerges from survival constraints: energy, risk, feedback. Real consequences, not loss functions. Those constraints force causal learning and agency. LLMs will be a small component of the ultimate solution.
Chris Caligula (e/acc)@COUPdeVIL·
@VraserX I mostly agree with Yann. The mistake here is thinking the choice is LLMs vs. new models. The real leap has to be architectural: systems where prediction is regulated by interaction, feedback, and consequence, not just next-token loss.
VraserX e/acc@VraserX·
Yann LeCun says language isn’t intelligence. Predicting text doesn’t mean understanding reality. The real world is messy, physical, and causal and today’s LLMs barely touch that. The next leap is Physical AI: world models, cause and effect, real planning. Do you think LLMs can evolve into this, or do we need a completely new architecture?
Chris Caligula (e/acc) retweeted
ア@yuruyurau·
a=(y=i/790,d=mag(k=(y<8?9+sin(y^9)*6:4+cos(y))*cos(i+t/4),e=y/3-13)+cos(e+t*2+i%2*4))=>point((q=y*k/5*(2+sin(d*2+y-t*4))+80)*cos(c=d/4-t/2+i%2*3)+200,q*sin(c)+d*9+60) t=0,draw=$=>{t||createCanvas(w=400,w);background(9).stroke(w,116);for(t+=PI/90,i=1e4;i--;)a()}#つぶやきProcessing
Chris Caligula (e/acc) retweeted
AGIHound@TrueAIHound·
True intelligence is deterministic and learns by eliminating timing contradictions. I argue that this is a threat to the world's ruling elites.

My thesis is that a correct intelligence architecture assumes a deterministic world and ignores random signals at the sensory level. The brain's neurons rely on precisely timed spikes for this reason. Unlike statistical LLMs and deep neural nets, a truly intelligent machine must be designed from the ground up to eliminate timing contradictions in the sensory space. This is why biological brains use complements: every sensor and effector has an opposite. It's yin and yang everywhere.

Contradiction elimination means that, unlike LLMs, our future intelligent machines will be wired for truth, logic and honesty. This kind of AI is a threat to our overlords. The fake-AI mafia will do everything in their power to prevent it from happening, including flooding the internet with slop and outright lies, and spending trillions on a global mass surveillance network. 😠 It can only last so long. Truth will win out in the end. 🙏
Chris Caligula (e/acc) retweeted
Prof. Carl Sagan@ProfCarlSagan·
If we don’t explain science to the public, others will fill the gap with nonsense.