Pinned Tweet
Seth Fowler
263 posts

Seth Fowler
@sethlfowler
Pastor. Counselor-in-training. Interested in everything.
San Francisco, CA · Joined August 2022
295 Following · 130 Followers

@chrisaikenmd so, in other words, no worse than antidepressants?

Bad news for psychedelics in new studies:
▪ No better than antidepressants once unblinding accounted for
▪ Possible risk of suicidality and persisting perceptual disturbances
▪ Psilocybin failed in an RCT for treatment-resistant depression
Full details:
psych-partners.com/psychedelics-r…


@netcapgirl yes - the tedious and mind-numbing work will be automated. love it.

@sometherapist i like this a lot. Amish vibes are so in right now

There's a college in rural southern California where students can go through a two-year digital detox, work on a ranch doing lots of physical labor, study in small groups with peers, deepen their attention spans, and earn a transfer degree for free.
thetimes.com/us/news-today/…

ok so this is going viral. I’m locking in.
Let’s do a caffeinated all-nighter to give the people what they want!
JB@jamie247
Just spent $500 in compute vibe coding my own Civilisation RPG but with unbounded natural language diplomacy.. meet Uncivilised. ask me anything.

TSX notches biggest gain in 5 weeks on Middle East hopes reut.rs/4lMuCUj

@jadethebpdcoach what’s a way you would focus on something else when the pain is real?

@DrMcFillin seems like there’s very little improvement despite mental health taking much more priority in society.
something doesn’t add up

@DrMcFillin i’ve always thought that deep feelers have a greater capacity to live life to the fullest, and also to feel much better or worse than a person who doesn’t feel deeply.

per the viral podcast clip, he is conflating introspection with neurotic rumination. so he is wrong, but more on semantic grounds. the most charitable interpretation is that introspection is a superset that contains the type of rumination i think he intends to be pointing at
that said right after this segment in the podcast they both start discussing their unwillingness to ever try psychedelics, and specifically andreessen's fear of them (though he doesn't describe it that way). this is because of the number of anecdotes he's encountered where someone will take psychedelics and then find peace and thus stop building whatever they were building. in short he's making a multi-pronged philosophical category error
anyway you should not turn to a restless person for advice on stillness. the folks following andreessen are probably, on average, the type of people who could benefit from knowing the difference.
Marc Andreessen 🇺🇸@pmarca
My big conclusion from this week: Introspection causes emotional disorders.

@godofprompt if it's even slightly good at anything, it's going to teach itself to be perfect at it.

🚨 BREAKING: Meta AI just published a paper that redefines what “self-improving AI” means.
It’s called Hyperagents, and it solves a fundamental limitation that every prior self-improving system couldn’t get past.
The problem with current self-improving AI:
→ Systems like the Darwin Gödel Machine (DGM) can generate better versions of themselves over time
→ But they only work in coding, where the improvement task and the target task share the same domain
→ Outside coding, the self-improvement process stays fixed and handcrafted
→ The system gets better at tasks but never gets better at getting better
What Hyperagents actually does:
→ Combines a task agent (solves the problem) and a meta agent (modifies both itself and the task agent) into one editable program
→ The modification process itself is editable, creating what the researchers call “metacognitive self-modification”
→ The agent doesn’t just learn to perform better. It learns to improve at improving
→ This works on any computable task, not just coding
The results across four domains (coding, paper review, robotics reward design, Olympiad-level math grading):
→ Continuous performance improvements over time in every domain tested
→ Outperforms baselines without self-improvement or open-ended exploration
→ Outperforms prior self-improving systems including the original DGM
→ Meta-level improvements (persistent memory, performance tracking) transfer across domains and accumulate across runs
That last point is the one most people will overlook. The improvements to the improvement process don’t just help in one domain. They carry over.
The system builds compounding infrastructure for getting smarter, regardless of the task.
This is the architectural difference between an AI that gets incrementally better at one thing and an AI that builds the scaffolding to accelerate its own progress everywhere.
Meta’s team (Jenny Zhang, Bingchen Zhao, Wannan Yang, Jakob Foerster, Jeff Clune, and others) essentially removed the ceiling that kept self-improving systems domain-locked.
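The two-level loop the thread describes — a task agent plus a meta agent whose own modification process is itself modifiable — can be sketched in a few lines. Everything below (the class names `TaskAgent` and `MetaAgent`, the scalar "skill", the multiplicative step growth) is a hypothetical toy illustration of "improving at improving", not the Hyperagents implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TaskAgent:
    """An 'editable program': here reduced to a single scalar skill level."""
    skill: float = 1.0

def default_improve(agent: TaskAgent, step: float) -> None:
    """Baseline improvement operator: a fixed additive bump to skill."""
    agent.skill += step

@dataclass
class MetaAgent:
    """Modifies both the task agent AND its own improvement process."""
    step: float = 0.1
    improve: Callable[[TaskAgent, float], None] = default_improve

    def run(self, task_agent: TaskAgent, iterations: int) -> None:
        for _ in range(iterations):
            # Level 1: improve the task agent.
            self.improve(task_agent, self.step)
            # Level 2 ("metacognitive self-modification"): edit the
            # improvement process itself, so later improvements are larger.
            self.step *= 1.5

# Compare fixed self-improvement vs. compounding self-improvement.
fixed, meta = TaskAgent(), TaskAgent()
for _ in range(5):
    default_improve(fixed, 0.1)   # improvement process stays handcrafted
MetaAgent().run(meta, 5)          # improvement process improves itself
print(fixed.skill, meta.skill)    # meta agent ends up ahead
```

The gap between the two agents widens every round, which is the "compounding infrastructure" point above: the fixed loop gets better at the task, the meta loop also gets better at getting better.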


(5/5🧵) What's been fulfilling and meaningful for you?
Not productive. Not impressive. Fulfilling. Then ask the harder question: what's preventing you from pressing into that?
The gap between what fills you up and what you actually spend your time on will tell you a lot about where the real conflict lives.

(1/5🧵) Sometimes life feels like a thousand problems all demanding attention at once. Like everything is endlessly complex and chaotic.
But most of the time, underneath all that complexity, there are a few deeper conflicts at work. Primary concerns that don't surface until you slow down and honestly reflect.
Here are some things to examine and try...
