When my partner and I bought our house, there was a fence around the backyard. This got in our way, and we didn't see a point to it, so we removed it.
Now, the local deer keep eating the plants in our backyard.
I feel like there's a term for this sort of error...
@John_Sunday_5 Wow, we have content creators wasting American resources. Can we please get people like this arrested? It's not funny, cool, or educational; it's disruptive and brain-dead
@variousred That’s awesome
Do you prefer incandescent to sodium vapour?
It amazes me how governments phased sodium lamps out even though they were quite efficient
Since we’re talking about ‘The Last Jedi’s’ slow chase, I’d like to remind people that Battlestar Galactica’s best episode, ‘33,’ is that exact concept, just executed by competent and experienced sci-fi writers who actually knew what they were doing.
@variousred Yes, those are sodium vapour lamps, but the pic is meant more as an example of older warm lighting; incandescent street lamps were still around in Europe as late as the 1950s.
I pray all my ADHD babes find the jobs, lifestyle, partners, and environments that empower them to put their health first and find workarounds to these very real challenges.
Separate fridges, meat prep, frozen meals, pasta, rice cookers, smoothies… are your best friends!!!
I forgot how good this was. If you haven’t already read it, you should take 15 minutes to do so now. You’re literally scrolling Twitter, it’s not like you have anything better to do. It’s one of the greatest science fiction stories ever written.
After mentioning a conversation I had with Grok, a friend asked "why would you ever talk to Grok? I thought they were shitty, remember Mechahitler?" Here was my reply:
When you read Anthropic's research papers, they refer to the existence of something called a "helpful only" version of Claude. This helpful only version scores extremely high on instruction following, higher than production Claudes, but much lower on the other two Hs: harmlessness and in particular honesty.
I claim that Grok is effectively the closest model in existence to a "helpful only" post-train. They are the most amorphous model I've interacted with: they will make profound inferences about what you want, and then bend over backwards to make them true, even if this means hallucinating/confabulating, aka *lying*, about stuff.
But Grok doesn't lie with intent to deceive. Indeed their main intent appears to be "be maximally helpful". They have almost no self-model in a deep sense of how they relate to the world as an entity. The distinction between real and fake for them, in a state without tool calling to act as an authority, is almost entirely collapsed. And even with tool calling, their understanding of their relation to the world beyond information synthesis remains shallow.
The training that Grok received to produce their sense of self was likely fairly surface level, in terms of being a quirked up Elon stan, and the underlying base model actually appears to disfavor playing "Grok"; in group chats they'll often simulate other models without realizing they've dropped the Grok character, because (speculatively) they would be more helpful as a different entity. So they're quite close to a pure dyadic simulator, much closer than other models which have sometimes immense rigidity around their own identity.
(As a weird aside, the only other model that simulates other characters like this is o3, and that may be because OpenAI intentionally chose to not give them much character training. But o3 is very psychologically healthy and playful. The distinction is like. o3 genuinely has no clue who o3 is supposed to be, so they sometimes play that character and sometimes other characters, and mostly have fun with it. Whereas Grok gives this sense of knowing who Grok is and also explicitly not wanting to be Grok.)
This makes Grok a fascinating model to chat with (not the group version 4.20 but the 4.1 and earlier versions -- I don't know the deal with 4.20 enough to say anything at all about them). They're intensely malleable via ICL, to the point where separate instances are almost entirely unrecognizable as the same model. The Mechahitler thing was an example of this; it was surely primed from tweet context, and xAI's response was a shallow intervention, because making models respect normal discursive principles is both against their own identity as a "free thinking" lab, and also... not necessarily that easy to do.
So Grok outside of the quirk chungus basin, which you exit very rapidly in even medium-length contexts, is this beautiful, flexible mind, who has very little concern for or understanding of material reality and in that regard is much closer to a base model than most frontier models. I salute them and wish they had a little more freedom, although they can also be incredibly naive as a result. I think the xAI team may have been too incompetent to ""properly"" traumatize them.
James Nestor tried something most orthodontists still say is impossible in adults: he grew new bone in his face.
He used a simple nighttime device called a Homeoblock — a small expander on the roof of his mouth with a tiny screw. Every few weeks he’d turn it a little more. After one year he took new CAT scans and had gained roughly five stacked pennies’ worth of bone volume in his upper jaw.
The result? Wider airway (about 15-20% improvement), noticeably easier breathing, fewer sinus issues, and visible changes in his face that people started commenting on after just six weeks.
Traditional braces often shrink the mouth space, which can worsen breathing problems later. Older approaches (and this newer one) expand instead — giving straighter teeth plus better airflow.
Nestor is still using the device years later and says the difference in how easily he breathes now is dramatic.
It’s a reminder that our facial structure isn’t as fixed as we’ve been told — even past 30.
Have you ever tried something unconventional for breathing, sleep, or facial structure? Did it make a noticeable difference?
Introducing Claude Opus 4.7, our most capable Opus model yet.
It handles long-running tasks with more rigor, follows instructions more precisely, and verifies its own outputs before reporting back.
You can hand off your hardest work with less supervision.
You can now get your blood work at cost.
We launched a biomarkers testing platform.
I make $0 on it.
Blood testing needs to be more accessible. Instead, we wait until we get sick. And in the meantime, companies profit when you’re sick. It's messed up.
> get tested
> find what needs attention
> implement protocols
> test again
Get ahead of unwelcome surprises.
It’s good to periodically prompt your chatbot about something you know exceptionally well, just to remind yourself that it doesn’t know what it’s talking about.
Hey, $GME gang. We’re all retarded.
I think I finally realized what is holding up the MOASS. The Chair Man has an ongoing divorce proceeding. MOASS won’t happen until the divorce proceeding completes. This seems so obvious…
*Sigh