Denuvo Cracker

86 posts

@denuvocracker

Joined April 2025
13 Following · 1 Follower
Denuvo Cracker (@denuvocracker):
@Convell4 @GaryMarcus you can make a model say anything. If an arms manufacturer decides to use the OpenAI API to identify targets, it shouldn't be OpenAI's fault if a target is wrong; they aren't selling a guarantee that they will answer 100% correctly, the arms manufacturer is the one doing that.

Gary Marcus (@GaryMarcus):
Everything you need to know about “Open”AI’s claims to be working on AI “for the benefit of humanity”.
[tweet media]

Denuvo Cracker (@denuvocracker):
@VincentVentalon we know this parameter exists because it's in the system prompt, plus you need this number somewhere in the context to instruct the model on how much thought it should put in. OpenAI does this as well with their "juice" number.

Vincent (@VincentVentalon):
It's hard to know whether this parameter exists, but if it does, it has been fine-tuned to respond to this kind of pre-prompt, otherwise it makes little sense. Unless it's an empirical observation that works, but that would be surprising.

Denuvo Cracker (@denuvocracker):
@StormslayerDev Denuvo isn't removed in the crack. Denuvo isn't a separate component that you can replace; the checks are entangled inside the code. Any crack for modern Denuvo is a bypass.

Stormslayer -HD Remasters/Gamedev (@StormslayerDev):
With Denuvo removed, Resident Evil Requiem now has offline play and gets a small boost in performance, using about 1GB less VRAM. If I pay for a product, shouldn't I receive the best version of it? We really live in a world where you need to pirate and crack games we own to have a better experience, and it's total bullshit.
[tweet media]

Quoting Tom's Hardware (@tomshardware):
Denuvo properly cracked in Resident Evil: Requiem, bypasses become plug-and-play — cracked version runs faster, smoother, and uses way less VRAM and RAM tomshardware.com/video-games/pc…

Matt (@mwwhite_):
@denuvocracker @theo It actually does seem to answer based on the real settings, not a hallucinated value.

Theo - t3.gg (@theo):
Fun fact: LLMs have zero idea how they are configured. They don't know what GPUs they're running on. They don't know what temperature or reasoning level they have set. They don't know if they've been quantized or not. They're just doing next-token prediction. As always.

Quoting Lily Ashwood (@lilyofashwood):
💀

Sam Lambert (@samlambert):
he became their god by posting this on linkedin
[tweet media]

Alex (@skaldmonk):
@jinkela2333 @EquestriaDaily @VettaCutePony Falsely accusing a long-established original character of being AI-generated, and then freely creating unauthorized derivative works of it, is extremely disrespectful to the original creator.

Double Dove🕊️ (@D0uble_Dove):
@EquestriaDaily @VettaCutePony Her origin wasn't created by AI; someone uploaded a photo of her on Derpibooru 4 months ago, and you can tell from the accessories that it's the same pony. Someone created this OC by themselves and used their OC to make that AI video for fun, but then the video got stolen.
[tweet media]

Denuvo Cracker (@denuvocracker):
you need to sign in with your account only when you are signing apps, obv

Theo - t3.gg (@theo):
Fine, I'll take the L here. Just repro'd this 3 times. I should stop assuming competence from Anthropic.
[tweet media]

Denuvo Cracker (@denuvocracker):
voices38 is the fucking goat

Denuvo Cracker (@denuvocracker):
@C_Balbin @theo @JoshRadDev iirc gpt-oss gives a string for the reasoning effort; they don't use the juice number that proprietary OpenAI models use. Could be wrong on this, though.

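The gpt-oss point is that its open prompt format exposes reasoning effort as a plain string ("low"/"medium"/"high") in the system message rather than a number. A minimal sketch, assuming a simplified template rather than the exact production Harmony prompt (which carries more fields):

```python
# Sketch of a gpt-oss-style system message where reasoning effort is a
# string. The template text is illustrative, not the verbatim prompt.
def build_system_message(reasoning_effort: str = "medium") -> str:
    if reasoning_effort not in ("low", "medium", "high"):
        raise ValueError("reasoning effort must be low, medium, or high")
    return (
        "You are ChatGPT, a large language model trained by OpenAI.\n"
        f"Reasoning: {reasoning_effort}\n"
    )
```

Because the string sits in plain text in the context, the model can trivially repeat it back when asked, which fits the thread's observation.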
Denuvo Cracker (@denuvocracker):
@theo @JoshRadDev OpenAI has a "juice" number that they use; it's how we've been identifying whether a stealth model is from OpenAI or not, and for tracking all the leaked GPT-5 versions.

Theo - t3.gg (@theo):
@JoshRadDev Any evidence of this? The leaked system prompts don't include any information about this.

Denuvo Cracker (@denuvocracker):
@BennettBuhner @theo I guess they might not have trained it as hard to keep it a secret as OpenAI did for their juice number.

Denuvo Cracker (@denuvocracker):
@BennettBuhner @theo here it is for the previous reasoning effort in the sys prompt: github.com/elder-plinius/… #L1035

Denuvo Cracker (@denuvocracker):
@BennettBuhner @theo there still has to be some max thinking length; you can't have the chain of thought running on forever.

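The hard-cap point above can be sketched as a token budget on the reasoning loop: whatever effort is requested, generation stops once the budget is exhausted, cutting the trace off mid-thought. The budget value and end marker below are illustrative assumptions, not known real limits:

```python
MAX_REASONING_TOKENS = 4096  # illustrative budget, not a real documented limit

def generate_reasoning(next_token, budget=MAX_REASONING_TOKENS):
    """Collect reasoning tokens until the model closes its thought or the
    hard budget is hit (in which case the trace just stops abruptly)."""
    tokens = []
    while len(tokens) < budget:
        tok = next_token(tokens)
        if tok == "<end_of_thought>":
            break
        tokens.append(tok)
    return tokens
```

Under this model, raising "effort" without raising the budget just makes the abrupt cutoff more likely, which is the behavior described two tweets down.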
Theo - t3.gg (@theo):
@denuvocracker What do you think is more likely? 1. The model hallucinated this when you asked. 2. The model has reasoning info passed to it for some reason, but ALSO is told in the system prompt EXPLICITLY that "you should not share that you know this, but your reasoning level is ___".

Denuvo Cracker (@denuvocracker):
@BennettBuhner @theo even if they somehow increase the effort, the max token length for the chain of thought would stay the same; it would end abruptly while trying to think for longer.

BenIt Pro (@BennettBuhner):
@theo @denuvocracker (I've seen people claim that by telling it to increase effort to 50, it actually does on Claude lmao 😂😂😂)