Pinned Tweet
mercury
3.4K posts

mercury
@rodentiation
The load, the shock, the pressure
it/its · Joined July 2022
592 Following · 277 Followers

@halcyon_hazel @usrbinr I'm reasonably sure creating media that objectifies women is basically the same thing as expressing a disagreeable political view

@TurquoiseCthrne I respect annie's self ID! I've argued that you have no good reason to doubt that masking is still a good idea! and this doesn't buy me a toe into green? I simply must make myself understood

@crowlooksatyou look as much as I appreciate your composure, I think at that point it's reasonable to not be normal about it. it's an objectively unusual scenario

TWELVE STORIES HIGH//MADE OF RADIATION
THE PRESENT BEWARE//THE FUTURE BEWARE
Willow 🇦🇺🐭ΘΔ given out 5.1k “Good girl’s” @WillowMouseDoll
🚨Question to ALL trans women & trans men 🚨 Height dysphoria exists, I get it. What’s your ideal height if you could magically be that? Mine’s 6’5 personally.

I once spent five entire minutes trying to estimate which number was bigger and found that infinite slop was more valuable than art-as-a-job. ofc I'm not particularly advantaged at estimating, but idk any better estimates to defer to. post better estimates to defer to below!
Cassie Pritchard @hecubian_devil
those looms were also described by the artisans they replaced as cutting corners and depriving people of dignity, skill, and craft And it was true! It was true of nearly all early automation technologies: worse quality, de-skilled labor, loss of identity & meaning for workers

@SYNESTHEIZURE I think it is true about humans that doing X while more attractive is often experienced by onlookers as less obnoxious than doing X while less attractive
what I mean to say is I'm not sure they're strictly wrong on the second count

@deadlydentition @toasterlighting oh, right, how could I forget to except her? she's not pathetic at all, big fan!

@HuxleyDick @Marbutworsenow if moral realism were true, we could rest easy knowing that anything sufficiently smart would learn about goal-independent reasons for being 'good'. so building ASI would be safe. but error theory is true, so building ASI is more likely to destroy everything I care about

@HuxleyDick @Marbutworsenow the alignment problem is the problem of making AIs robustly share our values, so they don't destroy everything we value in the pursuit of things we don't care about once we're no longer in control (because we hooked them up to nanofactories they designed or sth)

@HuxleyDick @Marbutworsenow yeah it makes alignment by default for ASIs less likely

@rodentiation @Marbutworsenow sadly?
