min 🧠

56.8K posts

min 🧠

@agoodmintality

A calm and unhurried mad scientist. Tankies dni

they/he/她/他 • 24 • Joined December 2017
315 Following · 318 Followers
Pinned Tweet
min 🧠 @agoodmintality
Never kill yourself!!!
[image]
1 reply · 1 retweet · 27 likes · 4.1K views
Erin ._o @boundforthestar
@ashitmintality @archmaniac1 hi, hello. i'm pretty annoyed with all the inaccuracies about Catholicism/Christianity in The Boys, just because it's lazy af, and I just got flabbergasted cus I remembered that Kripke showran 100 seasons of spn. How is religion portrayed in spn?
3 replies · 0 retweets · 2 likes · 49 views
min 🧠 @agoodmintality
@3ikuobaj :((( wanted to see you in Atlanta but the dates didn't match up with my schedule
0 replies · 0 retweets · 0 likes · 46 views
min 🧠 retweeted
Snappy @ art hiatus @sweatybird
I think what makes me mad about the whole "Asians are naturally effeminate" thing is that it plainly isn't true; your only access to e asians is just. fucking anime, kpop, and models lmao. The average e asian has the same type of variation you see in other populations
17 replies · 336 retweets · 2.4K likes · 67.2K views
min 🧠 retweeted
Brooks Otterlake @i_zzzzzz
This is just like being alive in the 1600s when they got good at making complicated clocks and deduced that every complicated thing in the universe probably functioned exactly like a clock
Dwarkesh Patel @dwarkesh_sp

There's a quadrillion-dollar question at the heart of AI: why are humans so much more sample-efficient than LLMs? There are three possible answers:

1. Architecture and hyperparameters (aka transformer vs whatever 'algo' cortical columns are implementing)
2. Learning rule (backprop vs whatever the brain is doing)
3. Reward function

@AdamMarblestone believes the answer is the reward function. ML likes to use pretty simple loss functions, like cross-entropy. These are easy to work with, but they might be too simple for sample-efficient learning. Adam thinks that, in humans, the large number of highly specialised cells in the 'lizard brain' might actually be encoding information for sophisticated loss functions, used for 'training' the more sophisticated areas like the cortex and amygdala.

Like: the human genome is barely 3 gigabytes (compare that to the TBs of parameters that encode frontier LLM weights). So how can it include all the information necessary to build highly intelligent learners? Well, if the key to sample-efficient learning resides in the loss function, even very complicated loss functions can still be expressed in a couple hundred lines of Python code.
107 replies · 1K retweets · 13.1K likes · 805.1K views
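
That last claim is concrete enough to sketch. Below is a minimal, hypothetical illustration, not from the thread and not Marblestone's actual proposal: a toy composite loss that layers a few hand-built shaping terms on top of plain cross-entropy, as a scale check on "a couple hundred lines of Python." Every term, weight, and name here is an assumption made up for the sketch.

import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def composite_loss(logits, targets, prev_probs=None):
    # Toy 'complicated' loss: cross-entropy plus hand-built shaping terms.
    # All terms and weights are hypothetical, chosen only for illustration.
    probs = softmax(logits)
    n = logits.shape[0]
    # 1. Plain cross-entropy: the 'simple' baseline the tweet mentions.
    ce = -np.log(probs[np.arange(n), targets] + 1e-12).mean()
    # 2. Entropy bonus: mildly reward uncertainty (a curiosity-like term).
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1).mean()
    # 3. Consistency penalty: discourage large jumps from earlier predictions.
    consistency = 0.0
    if prev_probs is not None:
        consistency = np.abs(probs - prev_probs).sum(axis=-1).mean()
    return ce - 0.01 * entropy + 0.1 * consistency

# Usage: 4 examples, 3 classes, random logits.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
targets = np.array([0, 2, 1, 0])
print(composite_loss(logits, targets))

The sketch only makes the scale argument: each extra shaping term costs a handful of lines, so even a loss with dozens of such terms fits comfortably within a couple hundred lines of Python.
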
min 🧠 @agoodmintality
the costume design is blue-green
[GIF]
1 reply · 0 retweets · 1 like · 39 views
min 🧠 @agoodmintality
@archmaniac1 what a wonderful young actor in that clip! would love to see him in more TV shows. in a cowboy hat perhaps
1 reply · 0 retweets · 1 like · 29 views
min 🧠 retweeted
the government man @me_irl
i'm developing an experimental new emoji
[image]
37 replies · 1.7K retweets · 28.2K likes · 229.9K views
min 🧠 @agoodmintality
he's actually contractually obligated to include incest jokes because his boomer brain thinks affection and eusocial connections between men are inherently funny
0 replies · 0 retweets · 1 like · 32 views
min 🧠 @agoodmintality
Comforting to know that Kripke is still Like That when it comes to his fixations within his shows. He's like Sam Levinson except Sydney Sweeney is Jensen Ackles (an unattainably perfect, toxic paragon of masculinity) and instead of drug addictions his characters have daddy issues
1 reply · 0 retweets · 2 likes · 190 views
min 🧠 retweeted
Chris Stephens @ChrisStephensMD
My man
2.6K replies · 13K retweets · 145.1K likes · 11M views
k ཐི₍^. ̞.^₎ཋྀ @wormcider
dear beloved oomfs. i am eager to hear from you all. drop what's been going on with you lately in the replies below
7 replies · 0 retweets · 4 likes · 147 views
min 🧠 retweeted
Ishaan @sexyishaan
Get me out of San Francisco
27 replies · 95 retweets · 2.6K likes · 259.6K views