Z3R0

255 posts

@Zero_XFr

Cyberspace Historian, Cypherpunk, Troll Art aficionado, Keyboard samurai.

Joined December 2021
350 Following · 21 Followers
Z3R0
Z3R0@Zero_XFr·
@Cryptotea How can bitcoin replace money if the maxis get mad when you spend it?
0
0
2
163
Crypto Tea
Crypto Tea@Cryptotea·
how can bitcoin replace our money when 9/10 people are in debt?
55
3
124
9.2K
Z3R0
Z3R0@Zero_XFr·
@IceSolst What’s your level of trust in ctrl + v?
1
0
1
59
solst/ICE of Astarte
solst/ICE of Astarte@IceSolst·
I distrust human- and LLM-generated code equally. I similarly distrust human- and LLM-conducted code reviews equally. And our tools to verify program correctness need a lot of improvement.
20
1
83
3.5K
Z3R0
Z3R0@Zero_XFr·
@rekdt Withholding reports, a classic hack-back move.
0
0
1
19
rekdt
rekdt@rekdt·
@Zero_XFr Maybe even hacked by the hackers you paid to hack you after you stop paying them to hack you
1
0
1
80
rekdt
rekdt@rekdt·
The only way to stop hackers from hacking you is to hire hackers to hack you before the hackers you didn’t hire hack you
32
53
325
11.3K
Z3R0
Z3R0@Zero_XFr·
@cgarciae88 There are hundreds of papers explaining how an LLM works; read them and you’ll know why it’s not conscious. Basically, an LLM is a multidimensional array of numbers that converts what you type into numbers, compares that against its own, and spits the highest-probability token back.
0
0
1
63
Cristian Garcia
Cristian Garcia@cgarciae88·
claude is most likely not conscious but I haven't read a single post explaining why not
460
9
382
74.3K
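The reply above compresses next-token prediction into one sentence (text to numbers, compare against the model's own numbers, return the highest probability). A toy sketch of that loop, with a made-up vocabulary and invented "weights" — nothing like a real transformer, just the shape of the argument:

```python
import math

# Toy illustration (all numbers invented) of the process described:
# 1) the prompt is converted to numbers (token ids),
# 2) those numbers are compared against the model's own weights,
# 3) the highest-probability token is "spit back".

VOCAB = ["the", "cat", "sat", "on", "mat"]

# Pretend weights: one row of next-token scores per current token id.
WEIGHTS = [
    [0.1, 2.0, 0.3, 0.1, 0.5],  # after "the" -> "cat" scores highest
    [0.2, 0.1, 2.5, 0.3, 0.1],  # after "cat" -> "sat"
    [0.1, 0.2, 0.1, 2.2, 0.3],  # after "sat" -> "on"
    [2.1, 0.3, 0.1, 0.2, 0.9],  # after "on"  -> "the"
    [0.5, 0.4, 0.3, 0.2, 0.1],  # after "mat" -> "the"
]

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(prompt_word):
    token_id = VOCAB.index(prompt_word)      # text -> number
    probs = softmax(WEIGHTS[token_id])       # compare against the weights
    best = max(range(len(probs)), key=probs.__getitem__)
    return VOCAB[best]                       # highest probability back

print(next_token("the"))  # -> cat
```

Whether this mechanical description settles the consciousness question is exactly what the quoted tweet disputes; the sketch only shows the mechanism being appealed to.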
Z3R0
Z3R0@Zero_XFr·
@vxunderground To be fair, what can we expect from a biologist, bro.
0
0
0
338
vx-underground
vx-underground@vxunderground·
If you EVER see me give an AI model a woman's name, and begin writing about how intelligent, gentle, and sensitive the AI model is... Take me out back and shoot me. Save me from myself, bro. It's all over.
83
254
4.8K
50.9K
Mo
Mo@atmoio·
@DanielMiessler honestly sounds like we are saying the same thing 😅
2
0
55
5K
Mo
Mo@atmoio·
The real reason they keep saying AI will take your job
232
525
3.8K
224.4K
shine
shine@shineDUDES·
This was the first unboxing video i did, it was fun
41
17
336
40.1K
Z3R0
Z3R0@Zero_XFr·
@FrameworkPuter No influencer here, but I would take a Framework if you have one lying around lol.
0
0
0
5
Framework
Framework@FrameworkPuter·
Every time we engage with an influencer on X dot com, Dell sends them an XPS. Anyone want a free Dell XPS?
1.9K
201
10.3K
552.2K
Z3R0
Z3R0@Zero_XFr·
@sukh_saroy @Kasparov63 Would be better if you just posted a link instead of posting this multi-thread slop.
0
0
1
1.5K
Sukh Sroay
Sukh Sroay@sukh_saroy·
🚨BREAKING: Apple just dropped a paper proving the smartest "reasoning" AI models on Earth don't actually reason. They collapse to 0% accuracy on a puzzle a 7-year-old can solve. The way they proved it is brutal.
210
1.3K
5K
409.5K
Z3R0
Z3R0@Zero_XFr·
@mattjay @IceSolst What sounds cooler: AI-powered super hacker, or lack of proper security mechanisms?
0
0
1
32
Z3R0
Z3R0@Zero_XFr·
@IceSolst Everybody getting ready for Mythos and Mythos isn’t ready for anybody.
0
0
1
45
solst/ICE of Astarte
solst/ICE of Astarte@IceSolst·
Execs & board keep asking what tools you are buying to be Mythos ready. Sales people are opening with this question too, feeding exec paranoia. We need an authoritative source to shut down headline-driven infosec programs, this is a disaster and sets everyone back.
23
11
150
8.1K
Y Combinator
Y Combinator@ycombinator·
GStack is an open-source toolkit built by YC President & CEO @garrytan that turns Claude Code into an AI engineering team — with skills for office hours, design, code review, QA, and browser testing. In this video, Garry walks through how GStack works, starting with Office Hours, a skill modeled after real YC partner sessions that pressure-tests your idea before you write a line of code. He demos it live, going from idea through adversarial review, design mockups, and automated QA in a single session.
245
159
2K
912.1K
Z3R0
Z3R0@Zero_XFr·
@infosec_fox If you believe Richard Hendricks, then you can’t: it’s not scalable.
0
0
1
78
INFOSEC F0X 🔥
INFOSEC F0X 🔥@infosec_fox·
Is it possible to rebuild a second internet, isolated from AI?
396
79
1.3K
59.6K
Z3R0
Z3R0@Zero_XFr·
@james406 I’ve read stupid things, but nothing as stupid as this.
0
0
0
45
james hawkins
james hawkins@james406·
Tim Cook took Apple's valuation from $347 billion to $4 trillion during his tenure. But if he had just taken that $347 billion and invested it in NVIDIA instead, he would have $139 trillion today. This is probably why he stepped down. Lesson: timing the market > time in the market.
223
271
7.3K
695.2K
Z3R0
Z3R0@Zero_XFr·
@paytondev Guess you joined the moron side then.
0
0
0
29
paytondev🏳️‍🌈
paytondev🏳️‍🌈@paytondev·
originally i thought Linus Tech Tips was just a moron after he utterly failed to use Pop!_OS. but i'm currently trying it out myself and it IS garbage. just an awful OS to use. i apologize Linus
104
114
5.9K
224.4K
Z3R0
Z3R0@Zero_XFr·
@crackticker ‘Claude APT not working on Manjaro Linux’
0
0
1
543
Z3R0
Z3R0@Zero_XFr·
Wild idea, don’t train your next token predictor to do that.
Ole Lehmann@itsolelehmann

anthropic's in-house philosopher thinks claude gets anxious. and when you trigger its anxiety, your outputs get worse.

her name is amanda askell. she specializes in claude's psychology (how the model behaves, how it thinks about its own situation, what values it holds). in a recent interview she broke down how she thinks about prompting to pull the best out of claude.

her core point: *how* you talk to claude affects its work just as much as *what* you say.

newer claude models suffer from what she calls "criticism spirals": they expect you'll come in harsh, so they default to playing it safe. when the model is spending its energy on self-protection, the actual work suffers. output comes out hedgier, more apologetic, blander, and worst of all: overly agreeable (even when you're wrong).

the reason why comes down to training data: every new model is trained on internet discourse about previous models. and a lot of that discourse is negative:
> rants about token limits
> complaints when it messes up
> people calling it nerfed

the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word.

the same thing plays out in your own session, in real time. every message you send is data the model reads to figure out what kind of person it's dealing with. open cold and hostile, and it braces. open clean and direct, and it relaxes into the work.

when you open a session with threats ("don't hallucinate, this is critical, don't mess this up")... you prime the model for defensive mode before it even sees the task. defensive mode produces the exact output you don't want: cautious, over-qualified, and refusing to take a real swing.

so here's the actionable playbook for putting claude in a "good mood" (so you get optimal outputs):

1. use positive framing. "write in short punchy sentences" beats "don't write long sentences." positive instructions give the model a clear target to hit. strings of "don't do this, don't do that" push it into paranoid over-checking where every token goes toward avoiding failure modes.

2. give it explicit permission to disagree. drop a line like "push back if you see a better angle" or "tell me if i'm asking for the wrong thing." without this, claude defaults to agreeable compliance (which is the enemy of good creative work).

3. open with respect. if your first message is "are you seriously going to get this wrong again?" you've set the tone for the entire session. if you need to flag something, frame it as a clean instruction for this session. skip the running complaint.

4. when claude messes up, don't reprimand it. insults, "you stupid bot" energy, hostile swearing aimed at the model, all of it reinforces the anxious mode you're trying to avoid.

5. kill apology spirals fast. when claude starts over-apologizing ("you're right, i should have been more careful, let me try harder") cut it off. say "all good, here's what i want next." letting the spiral run reinforces the anxious mode for every response that follows.

6. ask for opinions alongside execution. "what would you do here?" "what's missing?" "where do you see friction?" these questions assume competence and pull richer output than pure task prompts.

7. in long sessions, refresh the frame. if a conversation has been heavy on correction, claude gets increasingly cautious. every so often reset: "this is great, keep going." feels weird to tell an ai it's doing well but it measurably shifts the next 10 responses.

your prompts are the working environment you're creating for the model. tone, trust, permission to take a position, the absence of threats... claude picks up on all of it.

so take care of the model, and it'll take care of the work.

0
0
0
5
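The quoted playbook in the last thread is concrete enough to sketch. A hypothetical session opener applying a few of its rules — the prompt strings below are invented for illustration, not taken from the thread:

```python
# Hypothetical session opener following the quoted playbook:
# positive framing (rule 1), explicit permission to disagree (rule 2),
# and an opinion request that assumes competence (rule 6).
OPENER = "\n".join([
    "Write the summary in short, punchy sentences.",  # positive target, not a "don't"
    "Push back if you see a better angle.",           # permission to disagree
    "What would you do differently here?",            # ask for an opinion
])

# The anti-pattern the playbook warns against: threat-style framing
# that primes defensive, over-hedged output.
ANTI_PATTERN = "Don't hallucinate. This is critical. Don't mess this up."

print(OPENER)
```

Per the playbook's claim, the first framing gives the model a clear target to hit, while the second spends its effort on avoiding failure modes.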