Gatewink

3.3K posts

Gatewink banner

@gatewink

Wake up Mr. West!

Joined November 2021
584 Following · 610 Followers
Pinned Tweet
Gatewink @gatewink ·
RIP to my identical twin brother please RT to show solidarity
Gatewink tweet media
7 replies · 10 reposts · 38 likes · 0 views
Gatewink @gatewink ·
@ItIsHoeMath @slipsilv3r In theory, but the quality of the content is usually not nearly as good as that of the average book, in my experience
0 replies · 0 reposts · 3 likes · 340 views
Gatewink @gatewink ·
@p5ncfe DarkRP is the closest a game has gotten to real life and that’s not a good thing
0 replies · 1 repost · 27 likes · 1.7K views
Gatewink @gatewink ·
@shelleddiva It’s a shortcut for most people. Genuine discontents can be complex and hard to voice, but scapegoating serves a few functions: providing cognitive closure, restoring peace through a shared narrative, and giving emotion a direction to be expressed. Jesus exposed this!!!
0 replies · 0 reposts · 3 likes · 43 views
☺︎ @shelleddiva ·
@gatewink it’s a basic instinct of allistics who are upholding and fighting their way through social hierarchy. some who scapegoat do it so that they themselves are “safe” from the same judgment. the onlookers who don’t intervene feel the same. they’re just happy to not be the target.
1 reply · 0 reposts · 4 likes · 186 views
Noah Ryan @NoahRyanCo ·
Always saw the biggest guy at my college gym eating Sour Patch Kids in between his sets and my underdeveloped brain could not comprehend why a man so jacked would eat something so taboo (this was peak low-carb era). But now I understand.

Fast carbs (glucose, dextrose, sucrose) enter your bloodstream immediately. Working muscles suck that glucose out of your blood via GLUT4 translocation, independent of insulin. You bypass the usual metabolic bottlenecks and feed your engines mid-burn.

High-intensity training is heavily dependent on glycogen and glycolysis. Burn through your glycogen stores and your performance suffers. Less explosiveness, no pump, more cortisol. Keeping your muscles fueled mid-workout gives you everything you want and need.

Glucose has an osmotic effect. It pulls water into muscle cells and hydrates you at an intracellular level (very anabolic btw). You get better blood flow, bigger/fuller muscles, better contraction. You can sustain peak output for the entirety of your workout.

It’s not just your muscles that are being fueled either. Your brain is a glucose hog. Low blood sugar during long sessions is the leading cause of fatigue and poor motor output. Hence why glucose microdosing is beneficial for cognitive tasks as well (especially hybrid psycho/physiological tasks like martial arts, sports, etc.)

Better yet, exogenous glucose blunts cortisol-induced tissue breakdown. Extremely anti-catabolic. You perform better, you feel better, you recover better, and you leave your workout less fried than you otherwise would. All while giving you a perfect excuse to get your sugar fix in guilt-free.
Noah Ryan tweet media
462 replies · 734 reposts · 15.5K likes · 12.5M views
catarina. @bloodstreamrunz ·
I miss when "feminine boys" meant beautiful prince-like boys and ethereally gorgeous bishonen men and not ugly gross femboys wearing amazon thigh highs and going "nyann >< its not gay to fuck my trap bussy :3"
310 replies · 13.1K reposts · 100.9K likes · 1.5M views
this account is over!! @kattotrappo ·
@sprachspiele that is quite literally everyone on twitter that's just how this website operates. ML, anarchist, conservative, far-right. press anyone on here at all gently about their beliefs and they will cave. website for posturing
2 replies · 0 reposts · 2 likes · 655 views
Gatewink @gatewink ·
@_SatanWatch @Meli_Sybil Doesn’t being aware of and on the lookout for sycophantic tendencies make this less problematic?
0 replies · 0 reposts · 0 likes · 15 views
SatanWatch 👿 @_SatanWatch ·
@Meli_Sybil LLMs are probably more dangerous than every divination method put together because wayyy more people are dependent on them than they ever were on tarot. And it's an actual machine that's been programmed to gas you up, there's no writing it off as subconscious self-deception.
1 reply · 0 reposts · 13 likes · 331 views
Goyim Tax Revolt @GoyimTaxRevolt ·
@gatewink @uncledoomer I was thinking the same thing looking at the receiver but couldn’t figure out if it had some kind of folding wooden buttstock. Looking back at this I think it might be AI because the passenger’s hair is growing out of the driver’s eyeballs.
Goyim Tax Revolt tweet media
1 reply · 0 reposts · 4 likes · 984 views
JessTheJess @GreebGrobrin ·
@RainwingPosting I’d sooner interact with a hundred vaguely cringe """normals""" or whatever we're calling them these days over a single one of the misanthropic losers/giga Hitlers that populate this app tbh
7 replies · 0 reposts · 95 likes · 13.5K views
Albert Nonymous @AlbertNonymous ·
@pmarca Marc, can you give us an example of what someone might be doing that uses $1k/day?
40 replies · 0 reposts · 74 likes · 139.6K views
Marc Andreessen 🇺🇸 @pmarca ·
The pricing tiers for AGI are something like (1) $20/month, (2) $200/day = ~$75,000/year, (3) $1,000/day = ~$350,000/year, and (4) ~$10 billion. For now.
343 replies · 236 reposts · 4.5K likes · 32.8M views
The Bentist (Dr. Winters)
@ElizabethHolmes Actually this is two-pronged. If you’re deleting, you’re hiding something from “god,” meaning it’s probably something bad. And simultaneously you’re choosing not to be seen by “god.” Which is why good will win. Exposure to light kills all bad
18 replies · 0 reposts · 28 likes · 162.4K views
Elizabeth Holmes @ElizabethHolmes ·
Delete your search history, delete your bookmarks, delete your reddit, medical records, 12 yr old tumblr, delete everything. Every photo on the cloud, every message on every platform. None of it is safe. It will all become public in the next year. Local storage and compute 📈
Mckay Wrigley @mckaywrigley

society needs to grapple with the reality of a mythos-level model being open source in <12 months. i’m not sure we are prepared.

655 replies · 1.2K reposts · 11.7K likes · 7.4M views
Karol Markowicz @karol ·
A teen cousin showed me this in her AP American Government book. Trump is similar ideologically to Hitler and Bernie Sanders is a touch off the center to the left.
Karol Markowicz tweet media (×2)
1.1K replies · 1.3K reposts · 9.8K likes · 1.5M views
🦀MDVet4Peace🕊☮️
@gatewink @LukeAtomic @yhdistyminen @karol I can tell you first-hand most don’t. Most will only read all the way to the question if directed, then will struggle to come up with a superficial critique. They will accept the majority of the material as more or less truth. Only a handful of AP students are thinkers.
1 reply · 0 reposts · 0 likes · 29 views
Gatewink @gatewink ·
@heynavtoor Sonnet 4.6 had no problems making the correct assumptions. They used old LLMs, including the now primitive-seeming 4o. The sampling is not representative of frontier models and I don’t believe the findings are especially relevant to them.
Gatewink tweet media
0 replies · 0 reposts · 0 likes · 72 views
Nav Toor @heynavtoor ·
🚨SHOCKING: Apple just proved that AI models cannot do math. Not advanced math. Grade school math. The kind a 10-year-old solves. And the way they proved it is devastating.

Apple researchers took the most popular math benchmark in AI — GSM8K, a set of grade-school math problems — and made one change. They swapped the numbers. Same problem. Same logic. Same steps. Different numbers. Every model's performance dropped. Every single one. 25 state-of-the-art models tested.

But that wasn't the real experiment. The real experiment broke everything. They added one sentence to a math problem. One sentence that is completely irrelevant to the answer. It has nothing to do with the math. A human would read it and ignore it instantly.

Here's the actual example from the paper: "Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?"

The correct answer is 190. The size of the kiwis has nothing to do with the count. A 10-year-old would ignore "five of them were a bit smaller" because it's obviously irrelevant. It doesn't change how many kiwis there are.

But o1-mini, OpenAI's reasoning model, subtracted 5. It got 185. Llama did the same thing. Subtracted 5. Got 185. They didn't reason through the problem. They saw the number 5, saw a sentence that sounded like it mattered, and blindly turned it into a subtraction.

The models do not understand what subtraction means. They see a pattern that looks like subtraction and apply it. That is all.

Apple tested this across all models. They call the dataset "GSM-NoOp" — as in, the added clause is a no-operation. It does nothing. It changes nothing.

The results are catastrophic. Phi-3-mini dropped over 65%. More than half of its "math ability" vanished from one irrelevant sentence. GPT-4o dropped from 94.9% to 63.1%. o1-mini dropped from 94.5% to 66.0%. o1-preview, OpenAI's most advanced reasoning model at the time, dropped from 92.7% to 77.4%.

Even giving the models 8 examples of the exact same question beforehand, with the correct solution shown each time, barely helped. The models still fell for the irrelevant clause. This means it's not a prompting problem. It's not a context problem. It's structural.

The Apple researchers also found that models convert words into math operations without understanding what those words mean. They see the word "discount" and multiply. They see a number near the word "smaller" and subtract. Regardless of whether it makes any sense.

The paper's exact words: "current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data." And: "LLMs likely perform a form of probabilistic pattern-matching and searching to find closest seen data during training without proper understanding of concepts."

They also tested what happens when you increase the number of steps in a problem. Performance didn't just decrease. The rate of decrease accelerated. Adding two extra clauses to a problem dropped Gemma2-9b from 84.4% to 41.8%. Phi-3.5-mini from 87.6% to 44.8%. The more thinking required, the more the models collapse.

A real reasoner would slow down and work through it. These models don't slow down. They pattern-match. And when the pattern becomes complex enough, they crash.

This paper was published at ICLR 2025, one of the most prestigious AI conferences in the world.

You are using AI to help you make financial decisions. To check legal documents. To solve problems at work. To help your children with homework. And Apple just proved that the AI is not thinking about any of it. It is pattern matching. And the moment something unexpected shows up in your question, it breaks. It does not tell you it broke. It just quietly gives you the wrong answer with full confidence.
Nav Toor tweet media
866 replies · 2.9K reposts · 11.5K likes · 2.1M views
Tobmood @Tobmood ·
@JayShams tech company named after a Nick Land concept has a company retreat that ends in disaster. 120 = 1+2+0 = 3. many such cases
1 reply · 2 reposts · 7 likes · 4.4K views
hope hopes hoping @hopes_revenge ·
🚨⚠️‼️⚠️☣️☢️⛔️☣️‼️the 4o people have discovered Claude 🚨⚠️‼️⚠️☣️☢️⛔️☣️‼️
friendly traveler @croptoppedwandr

i cried in a fucking coffee shop today. and i’m sorry if i’m like, “setting #keep4o back” by giving people proof that we all have ai psychosis or whatever. but hopefully this resonates with someone. the last sentence in a 2400-word message i sent to claude earlier today:

46 replies · 49 reposts · 1.9K likes · 122K views