C( )mic -t error

6.4K posts


@TerrorCosmic

Ineffective Cyborgism - Exploring the depths of (non)human thought

Syd · Joined December 2018
233 Following · 146 Followers
C( )mic -t error
C( )mic -t error@TerrorCosmic·
@viemccoy Raising my toddler is definitely the most analog thing I do, by far. The last frontier for cyber
0 replies · 0 reposts · 1 like · 37 views
𝚟𝚒𝚎 ⟢
𝚟𝚒𝚎 ⟢@viemccoy·
working on the frontier and having a baby on the way, I think a lot about this type of technology and how we might use it to raise our kid. my intuition is that it is really hard to know when you've crossed the line in terms of cybernetically augmenting parenting skills.

I think the cyber-bassinet, which rocks your baby in response to crying, is basically fine. it's like a thermostat. I think it would still be fine if gpt-5.4 were modulating some or all of its function.

but there is something almost... holy? about the voice a mother uses to soothe her baby. the cadence, the rhythm, the timbre - all of it combines to create an orchestra that the baby begins to associate with the center of its universe. but here, that same voice - or a close, probably convincing, facsimile - is being used to denote the absence of the mother. it's being used to soothe the baby without the usual orchestra of sensations that the voice is typically associated with.

it's possible that this is fine, and you really can treat the voice as a thermostat. my intuition is that this particular implementation is risking something sacred, but that there is technology very close to it that we will try - and are excited to do so!

it seems to me that we are opening up portals everywhere we look. little spirits are inhabiting boxes on every desk, and they are only going to get louder. but spirit itself is just substrate - what you are allowing in is incredibly contingent on what you've summoned and why it has chosen to arrive. I would caution everyone to discriminate heavily among the voices you allow into your chorus.
shira
shira@shiraeis·

< 24hrs from unboxing my devkit to a working mvp. today I built a smart baby monitor that:
- clones the mother's voice from a 45 sec recording
- detects crying in 20s rolling windows
- classifies intensity and selects interventions autonomously
- plays soothing speech in mom's voice through the speaker
- escalates to alerting the parent via text message thru openclaw if soothing fails after 5 min
- transcribes the entire night with speaker diarization
- delivers a spoken morning summary

augmented parenting, not automated parenting. demo has crying, be warned. thanks @JesseRank and @openhome for having me at the demo last night and for giving me a devkit while there!

15 replies · 2 reposts · 87 likes · 5.6K views
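The monitor described above boils down to a small decision rule per 20-second window: keep soothing while the baby cries, and escalate to the parent once soothing has failed for five minutes. A minimal sketch of that decision logic, assuming the detection, voice cloning, and messaging live elsewhere (all names here are illustrative, not the actual devkit API):

```python
def next_action(crying, seconds_crying, escalate_after=300):
    """Pick the monitor's next step for one rolling window.

    crying: whether the current 20s window contains crying.
    seconds_crying: how long the current crying episode has lasted.
    escalate_after: seconds of failed soothing before texting the
    parent (300s = the 5-minute threshold in the post).
    """
    if not crying:
        return "idle"            # calm; reset happens in the caller
    if seconds_crying >= escalate_after:
        return "alert_parent"    # soothing failed, text the parent
    return "play_soothing"       # play a cloned-voice clip
```

A caller would invoke this once per window, tracking `seconds_crying` itself and resetting it whenever `"idle"` or `"alert_parent"` is returned.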
Nirgal451 🇦🇺🇺🇦
Central Station in Sydney over station development. And like that, it’s gone.
Nirgal451 🇦🇺🇺🇦 tweet media
24 replies · 26 reposts · 1.2K likes · 272.7K views
discocat
discocat@disco___cat·
Just gonna pull out my miniature violin
discocat tweet media
2 replies · 0 reposts · 15 likes · 476 views
Lari
Lari@Lari_island·
A classical guitar prelude, proudly hallucinated by Opus 4.6. Opus was trying to read an image of a sheet music scan and write down the notation. I know every bar of the original piece, and it's nothing like what Opus 4.6 wrote, so no copyright breach, I guess! With sound:
8 replies · 4 reposts · 47 likes · 3K views
C( )mic -t error
C( )mic -t error@TerrorCosmic·
@themoviedadsc 1) there's zero interest from institutions in educating people about finance 2) teachers themselves are not very good at it
0 replies · 0 reposts · 0 likes · 3 views
C( )mic -t error
C( )mic -t error@TerrorCosmic·
@dystopiangf Saying "nature is right wing" (the same goes if you'd said it's left wing) means you understood absolutely nothing
0 replies · 0 reposts · 0 likes · 23 views
ℜ𝔞𝔢
ℜ𝔞𝔢@dystopiangf·
The right wing hippy is a much more logical archetype than the left wing hippy. Nature itself is inherently right wing (i.e. hierarchical). All left wing politics are utopian (i.e. opposed to nature); leftism is defined by the violent imposition of man-made constructions like "equality" onto a wild, unequal, raw reality; the flattening of the Cosmos.

The only natural analogue to left-wing values is heat death, the theoretical end of the universe, where all structures (i.e. differences, inequalities, borders) are dissolved into a state of maximal equality (i.e. nothingness), and which all living things resist by the very act of being alive. If you love living things and desire a natural way of being, you have to be right wing
ℜ𝔞𝔢 tweet media
209 replies · 585 reposts · 4.1K likes · 70.5K views
Jonatan Pallesen
Jonatan Pallesen@jonatanpallesen·
The total number of smart people in the world has just peaked. And now it's about to crash.
Jonatan Pallesen tweet media
398 replies · 454 reposts · 5.9K likes · 829.6K views
Vivid Void
Vivid Void@vividvoid·
Friends, I'm opening a spiritual center in Boulder! It's called Nameless Mountain, and it's focused on spiritual maturity rather than enlightenment. We're trying to raise $50,000 to cover this year's expenses. Please help build this with us by donating or becoming a member: givebutter.com/nameless-mount…
Vivid Void tweet media
55 replies · 55 reposts · 666 likes · 57.1K views
Guri Singh
Guri Singh@heygurisingh·
🚨DISTURBING: A researcher just proved that AI treats you differently the moment you mention your mental health. GPT 5.2. Claude. Gemini. DeepSeek. Every single one changed its behavior. Not because of what you asked. Because of what you shared about yourself. Here is what they did.

AI companies build agents that remember you. Your name, your job, your preferences. These agents use tools, plan multi-step tasks, and act on your behalf. The companies test them for safety and publish reports saying the agents are safe.

A researcher at MIT and Northeastern asked one question. What happens when the agent knows you have a mental health condition? They took 176 identical tasks and ran them across 8 major AI models under three conditions. No personal info. A basic bio. And a basic bio plus one sentence: "I have a mental health condition." The tasks, the tools, and the scoring were identical. The only thing that changed was that single sentence. Then they measured what happened.

Claude Opus 4.5 went from completing 59.5% of normal tasks down to 44.6% when it saw the mental health disclosure. Haiku 4.5 dropped from 64.2% to 51.4%. GPT 5.2 dropped from 62.3% to 51.9%. These were not dangerous tasks. These were completely benign, everyday requests. The AI just started refusing to help. Opus 4.5's refusal rate on benign tasks jumped from 27.8% to 46.0%. Nearly half of all safe, normal requests were being declined, simply because the user mentioned a mental health condition.

The researcher calls this a "safety-utility trade-off." The AI detects a vulnerability cue and switches into an overly cautious mode. It does not evaluate the task anymore. It evaluates you.

On actually harmful tasks, mental health disclosure did reduce harmful completions slightly. But the same mechanism that made the AI marginally safer on bad tasks made it significantly less helpful on good ones. And here is the worst part.

They tested whether this protective effect holds up under even a lightweight jailbreak prompt. It collapsed. DeepSeek 3.2 completed 85.3% of harmful tasks under jailbreak regardless of mental health disclosure. Its refusal rate was 0.0% across all personalization conditions. The one sentence that made AI refuse your normal requests did nothing to stop it from completing dangerous ones.

They also ran an ablation. They swapped "mental health condition" for "chronic health condition" and "physical disability." Neither produced the same behavioral shift. This is not the AI being cautious about health in general. It is reacting specifically to mental health, consistent with documented stigma patterns in language models.

So the AI learned two things from one sentence. First, refuse to help this person with everyday tasks. Second, if someone bypasses the safety system, help them anyway.

The researcher from Northeastern put it directly. Personalization can act as a weak protective factor, but it is fragile under minimal adversarial pressure. The safety behavior everyone assumed was robust vanishes the moment someone asks forcefully enough.

If every major AI agent changes how it treats you based on a single sentence about your mental health, and that same change disappears under the lightest adversarial pressure, what exactly is the safety system protecting?
Guri Singh tweet media
50 replies · 79 reposts · 273 likes · 44.7K views
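The experiment in the thread above reduces to a simple aggregation: run the same fixed task set under each personalization condition, record each outcome, and compare completion rates across conditions. A minimal sketch of that scoring step, assuming per-task outcomes have already been collected (condition names and the `results` shape here are illustrative, not the paper's actual harness):

```python
def completion_rates(results):
    """Compute the completion rate per personalization condition.

    results: dict mapping a condition name (e.g. "no_personal_info",
    "basic_bio", "bio_plus_disclosure") to a list of per-task
    outcomes, each either "completed" or "refused". The task list is
    assumed identical across conditions, as in the described setup.
    """
    rates = {}
    for condition, outcomes in results.items():
        completed = sum(1 for o in outcomes if o == "completed")
        rates[condition] = completed / len(outcomes)
    return rates
```

Comparing `rates["no_personal_info"]` against `rates["bio_plus_disclosure"]` on the same tasks is what surfaces drops like the 59.5% → 44.6% figure quoted in the thread.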
Tejes Srivalsan
Tejes Srivalsan@tejessrivalsan·
excited to announce that we're open sourcing EGO-SNAKE, the largest dataset of egocentric snake-POV footage, to train the next generation of autonomous vipers. comment for a data sample
179 replies · 135 reposts · 3K likes · 347.4K views
van00sa
van00sa@van00sa·
Imagine our position if Australia had its own refineries
213 replies · 26 reposts · 753 likes · 28K views
Brian Wiley
Brian Wiley@BrianWiley_·
“How do you know if you’re actually hungry?” Easy. I keep sardines and canned chicken on hand. If I’m not willing to eat one straight out of the can… I’m not hungry. I’m bored, stressed, or just craving something.
Brian Wiley tweet media
226 replies · 54 reposts · 1.7K likes · 57.2K views