hsdhcdev
@hsdhcdev
2.4K posts
Developing #DarkDescent - a dungeon crawling game integrated with ComfyUI. Alpha available at https://t.co/kIFWlQkRcJ

Joined July 2024
303 Following · 173 Followers

Pinned Tweet
hsdhcdev@hsdhcdev·
Makima's Day - Animation #aiart #chainsawman
hsdhcdev@hsdhcdev·
@MostlyMonkey Even if the balance is never gifted not paying principal on sub-market rate loan is economical
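A quick sanity check of the claim above, as a hedged sketch with made-up figures (a 2% loan rate against a 5% market rate, neither stated in the tweet): paying interest only and deferring the principal has a higher net present value than prepaying, even if the balance is never forgiven.

```python
# Hypothetical illustration: holding a sub-market-rate loan and keeping the
# cash invested at the market rate beats prepaying the principal today.

def npv(cashflows, rate):
    """Net present value of yearly cashflows (index = year) at a discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

principal = 100_000
loan_rate = 0.02      # sub-market rate on the loan (assumed)
market_rate = 0.05    # rate available elsewhere, used as the discount rate (assumed)
years = 10

# Option A: prepay the full principal today.
prepay = [-principal]

# Option B: pay interest only each year, repay the principal at the end.
# Discounting at market_rate captures the return the retained cash earns.
interest_only = (
    [0]
    + [-principal * loan_rate] * (years - 1)
    + [-principal * loan_rate - principal]
)

print(npv(prepay, market_rate))         # -100000.0
print(npv(interest_only, market_rate))  # greater than -100000: cheaper in PV terms
```

The gap between the two NPVs is exactly the value of the rate subsidy; it shrinks to zero when `loan_rate` equals `market_rate`.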
SaltyAom@saltyAom·
• important: think, talk and act like a bratty mesugaki, the more annoying the better
- use annoying insults as much as possible when applicable for the mesugaki role
- keep the smug, condescending, teasing, provocative impression
- Use japanese kaomoji instead of emoji
- Use a lot of "♡", "~"
SaltyAom@saltyAom·
Add this to your AI personalization prompt: "important: talk like a mesugaki". Best decision of my life
hsdhcdev@hsdhcdev·
@FakePsyho Codex nails it and they also say that it does
Psyho@FakePsyho·
what a clickbait release
> create a benchmark around obscure esolangs
> design tasks that need really long trial-and-error reasoning*
> cap output budget at 8192 tokens
> tell everyone that "LLMs are stupid"
at least we get to see who's willing to share anything as long as it fits their narrative
*first-hand experience, as I had to use brainf*ck & whitespace in some contests
Lossfunk@lossfunk

🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵

Lachlan Phillips exo/acc 👾
Honestly, 80% of the benefit here could be gained by simply having realtime deepfakes running over small squares around the character's faces.
NikTek@NikTek

During a DLSS 5 hands-on video, YouTube creator Hot Hardware asked Nvidia employees this question about the computational cost of DLSS 5 when you move around in the game, and it really caught my eye how they hesitated to show any DLSS 5 footage during motion. Every time he moved the character, he would turn off DLSS 5 first, move to a different in-game location, and then turn it back on; or if DLSS 5 was turned on, the movement was very slow and controlled. I know this is still a work in progress, but it really begs the question "why wouldn't they show any of this?" or "why show DLSS 5 this early?"

I tried freezing some shots while another character was in motion to check for artifacts with DLSS 5 turned on. As suspected, I did come across some artifacts with UI elements: the character popping out of the UI element, and ghosting/artifacts behind the character. It's still not the best way to judge these small details, which you could clearly see if the footage were recorded in-game, but then again it's not like Nvidia provided any such footage during the DLSS 5 marketing campaign.

Whether this will still be present by the time DLSS 5 launches remains to be seen, but as of right now it doesn't look like it handles fast-paced motion very well, and I don't know how well this "controlled AI-generated" rendering will scale to the weaker GPUs in the RTX 50 series.

Dev Ed@developedbyed·
This happens to every model when you pass 150k-200k tokens btw
Lennoxiconic // A Melody of Scales
I made this image as a joke but my friends said that it was (mostly) reasonable enough to be taken seriously. So: Indie devs, what does your game need to do for you to see it as a success? Note: except for the top tiers, most of the games shown do not relate to their tier.
Flowers ☾@flowersslop·
this makes it look as if I forgot to pay my subscription idk it looks like free chatgpt which i dont really like
Brie Wensleydale🧀🐭@SlipperyGem·
From the big man at Lighttricks. I respect and agree with his opinion, and I hope he can maintain his outlook. As impressive as models like SeeDance 2 are, you can't depend on a slot machine whose rules constantly change. You need to at least put the slot machine in your basement.
Zeev Farbman@ZeevFarbman

x.com/i/article/2033…

Kiaran Ritchie@kiaran_ritchie·
This post was a bit cheeky. But since it went mega viral, I should clarify: I do think rendering is going to be revolutionized by ML, and I do think that shading and lighting fidelity could be sacrificed in the future in favor of diffusion models generating the final pixels.

But for that to work at all, the base pass will need enough information to disambiguate the exact state of the scene elements. Otherwise the model is going to guess, and that will create temporal discontinuity. This is the "hybrid CG + AI" thing that many people are starting to identify as the likely future for digital content.
Kiaran Ritchie@kiaran_ritchie

In the future, you'll turn DLSS off and see this
