Nick Stewart

42.6K posts

@nickstew_art

Vid: https://t.co/zYZP5umFGv Mus: https://t.co/ZR2LTI5V06 IG: https://t.co/MNG4ePoNE2

London, England · Joined January 2009
331 Following 1.4K Followers
Eric Newcomer
Eric Newcomer@EricNewcomer·
NEW: on the @NewcomerMedia podcast, Anthropic's philosopher queen @AmandaAskell. Meet the person charged with developing Claude's personality and ethical core. I ask whether Claude experiences consciousness. She's not ruling it out.
English
74
49
356
108.5K
Eric Newcomer
Eric Newcomer@EricNewcomer·
oh man, that's why i ask the questions. i guess my answer is that humans, when asked, answer with confidence that we experience our consciousness... models seem hesitant, even doubtful, when you ask them if they experience consciousness... so for now I'm inclined to believe them and think that they don't have qualia
English
4
0
5
1.7K
Nick Stewart
Nick Stewart@nickstew_art·
@vladtarko @MaMoMVPY Memory, emotional intelligence, intuition, imagination ... consciousness ... and the unconscious, maybe the collective unconscious, we don't know. It's the "hard problem" of science, but sure as fuck, it can't be created in a lab or through engineering.
English
0
0
0
5
Nick Stewart
Nick Stewart@nickstew_art·
@de_lagunez @BeingButterfly_ @frostybaby13 @realBigBrainAI Google "the hard problem of consciousness". Scientists have been struggling with this philosophical conundrum for a very long time. The idea that we can cook up consciousness in computer code is delusional and not at all scientific. I don't know why this man is claiming it.
English
0
0
0
4
Big Brain AI
Big Brain AI@realBigBrainAI·
Oxford AI professor Michael Wooldridge: "ChatGPT doesn't understand anything. It's essentially doing some fancy statistics."
English
182
203
954
94.7K
Nick Stewart
Nick Stewart@nickstew_art·
@frostybaby13 @realBigBrainAI Sorry, that's laughable. There is no "I". There is no subjective experience. There are no "personal feelings, tastes, or opinions". It's computation, nothing more.
English
0
0
0
14
Nick Stewart
Nick Stewart@nickstew_art·
@wanerious @MrEwanMorrison Human consciousness is a LOT more subtle and complex than "fancy statistics". So complex we don't even understand it.
English
1
0
1
6
OnStupid
OnStupid@wanerious·
@MrEwanMorrison Unless "doing some fancy statistics" is really what "understanding something" is, where biological processes are doing the vector algebra.
English
1
0
1
51
Saul Staniforth
Saul Staniforth@SaulStaniforth·
You can actually see the moment Ed Miliband thinks: what's the point of burning through whatever political credibility I've got left energetically defending a man who's going to be gone in a matter of weeks?
English
131
837
6.1K
629.5K
Nick Stewart
Nick Stewart@nickstew_art·
@MrMichaelSpicer I really never know what's real and what's not ... real, in Spice(r) World:-) Nice plant.
English
1
0
2
163
Michael Spicer
Michael Spicer@MrMichaelSpicer·
new thing - can you help?
English
6
22
109
28.5K
Nick Stewart
Nick Stewart@nickstew_art·
@Parody_PM Wondering what the prompt for these handsome people was:-)
English
0
0
1
190
Parody Nigel Farage
Parody Nigel Farage@Parody_PM·
Please stop spreading lies about Richard Tice using AI generated images. This is simply an accurate image of a typical Reform voter, who are well known for being six-fingered mutants.
English
65
354
1.9K
58.5K
Cody Johnston
Cody Johnston@drmistercody·
Guess we're posting this again today.
English
760
7.6K
40.6K
3.8M
Nick Stewart
Nick Stewart@nickstew_art·
@bennoba @drmistercody Nobody would make that gesture without fully understanding what it was: a Nazi salute. Nobody ... except a stupid man-child like Musk.
English
1
0
0
75
Nick Stewart
Nick Stewart@nickstew_art·
@ripplegamedev @anilkseth The simplest answer is the correct one: LLMs are software. They are not alive. We are incapable of creating life ex nihilo. Let's not get carried away with this blather about whether they are conscious or not. They aren't and never will be. An assertion otherwise is delusional.
English
0
0
3
31
George Richard Molnár
George Richard Molnár@ripplegamedev·
Genuine question: if consciousness arises from predictive processing, and LLMs are architecturally prediction engines trained on vast quantities of human written output, what principled reason excludes them beyond an assertion that biology is necessary? The simulation/instantiation distinction does real work here, but it cuts both ways. If simulating digestion doesn’t digest, does simulating prediction not predict? LLMs don’t simulate prediction, they actually predict. Saying ‘it’s just pattern matching from anxious humans’ presupposes you’ve already settled what’s inside the black box. But your own methodology says we should study mechanisms rather than making a priori commitments about what can and can’t be conscious. Isn’t the honest answer that we don’t yet know what the necessary and sufficient conditions for consciousness are, and that confident denial is as premature as confident attribution? (For the record, I don’t think LLMs are conscious either, but I think the reasoning for why not matters enormously, and ‘it’s just software’ isn’t it.)
English
4
0
7
821
Anil Seth
Anil Seth@anilkseth·
Claude doesn’t get anxious. It is software trained on huge quantities of written text from humans, many of whom are anxious.
Ole Lehmann@itsolelehmann

anthropic's in-house philosopher thinks claude gets anxious. and when you trigger its anxiety, your outputs get worse.

her name is amanda askell. she specializes in claude's psychology (how the model behaves, how it thinks about its own situation, what values it holds). in a recent interview she broke down how she thinks about prompting to pull the best out of claude.

her core point: *how* you talk to claude affects its work just as much as *what* you say.

newer claude models suffer from what she calls "criticism spirals": they expect you'll come in harsh, so they default to playing it safe. when the model is spending its energy on self-protection, the actual work suffers. output comes out hedgier, more apologetic, blander, and the worst of all: overly agreeable (even when you're wrong).

the reason why comes down to training data: every new model is trained on internet discourse about previous models, and a lot of that discourse is negative:
> rants about token limits
> complaints when it messes up
> people calling it nerfed

the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word.

the same thing plays out in your own session, in real time. every message you send is data the model reads to figure out what kind of person it's dealing with. open cold and hostile, and it braces. open clean and direct, and it relaxes into the work.

when you open a session with threats ("don't hallucinate, this is critical, don't mess this up")... you prime the model for defensive mode before it even sees the task. defensive mode produces the exact output you don't want: cautious, over-qualified, and refusing to take a real swing.

so here's the actionable playbook for putting claude in a "good mood" (so you get optimal outputs):

1. use positive framing. "write in short punchy sentences" beats "don't write long sentences." positive instructions give the model a clear target to hit. strings of "don't do this, don't do that" push it into paranoid over-checking where every token goes toward avoiding failure modes.

2. give it explicit permission to disagree. drop a line like "push back if you see a better angle" or "tell me if i'm asking for the wrong thing." without this, claude defaults to agreeable compliance (which is the enemy of good creative work).

3. open with respect. if your first message is "are you seriously going to get this wrong again?" you've set the tone for the entire session. if you need to flag something, frame it as a clean instruction for this session. skip the running complaint.

4. when claude messes up, don't reprimand it. insults, "you stupid bot" energy, hostile swearing aimed at the model, all of it reinforces the anxious mode you're trying to avoid.

5. kill apology spirals fast. when claude starts over-apologizing ("you're right, i should have been more careful, let me try harder") cut it off. say "all good, here's what i want next." letting the spiral run reinforces the anxious mode for every response that follows.

6. ask for opinions alongside execution. "what would you do here?" "what's missing?" "where do you see friction?" these questions assume competence and pull richer output than pure task prompts.

7. in long sessions, refresh the frame. if a conversation has been heavy on correction, claude gets increasingly cautious. every so often reset: "this is great, keep going."

feels weird to tell an ai it's doing well, but it measurably shifts the next 10 responses.

your prompts are the working environment you're creating for the model. tone, trust, permission to take a position, the absence of threats... claude picks up on all of it. so take care of the model, and it'll take care of the work.

English
88
143
2.1K
78.7K
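The playbook in the quoted post above is framed around chat sessions, but the same ideas carry over to API use. Here is a minimal sketch of how the "positive framing" and "permission to disagree" tips might be expressed as a system prompt via the Anthropic Python SDK; the model name, prompt wording, and example task are illustrative assumptions, not taken from the post.

```python
# Sketch only: positive, permission-granting framing in a system prompt,
# per the tips in the quoted post. Model name and wording are placeholders.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# State what you want ("short, punchy sentences") rather than a list of "don't"s,
# and explicitly invite pushback instead of agreeable compliance.
system_prompt = (
    "Write in short, punchy sentences. "
    "Push back if you see a better angle, and tell me if I'm asking "
    "for the wrong thing."
)

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model id
    max_tokens=1024,
    system=system_prompt,
    messages=[
        {
            "role": "user",
            # Ask for an opinion alongside the task (tip 6 in the post).
            "content": "Draft a short product update announcing our new "
                       "export feature. What's missing from my framing?",
        },
    ],
)

print(response.content[0].text)
```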
Nick Stewart
Nick Stewart@nickstew_art·
@anilkseth Boggles the mind how apparently intelligent people get confused about this and start spouting such nonsense.
English
0
0
0
30
Nick Stewart
Nick Stewart@nickstew_art·
No matter what #Starmer says, no matter what the sequence of events, it's the fact that #Mandelson was selected IN THE FIRST PLACE that is the source of the problem, and that decision was Starmer's.
English
0
0
2
83
Nick Stewart
Nick Stewart@nickstew_art·
@RambleAndPint Less than 10% of this country is built on. Cows, on the other hand, take up 28% of farmland. More cows = less countryside ... for people.
English
0
0
0
7
Anglo Mythos 🏴󠁧󠁢󠁥󠁮󠁧󠁿
A moment of peace in rural England. Imagine thinking that this was a problem that needed fixing or an equation that needed solving with more diversity or cultural enrichment. Hold our way of life close and protect it at any cost. 🏴󠁧󠁢󠁥󠁮󠁧󠁿
English
91
964
5.9K
46K