code viber.

121.4K posts

@nocodeonlyvibes

shit posting on twitter. twitter shitter, fact checker. Not american.

Joined December 2014
1.8K Following · 473 Followers
code viber. retweeted
Meacham
Meacham@MeachamDr·
The guy who threatened to annihilate an entire civilian population would like us to ‘tone down the rhetoric’.
41 replies · 1.4K reposts · 10.3K likes · 74.8K views
code viber. retweeted
HealthRanger
HealthRanger@HealthRanger·
Look how Bessent the terrorist is celebrating the destruction and sabotage of another nation's energy infrastructure... after he also committed economic terrorism against them with a coordinated attack on their currency. The current leaders of the American empire are vile, evil satanists who celebrate destruction, suffering and death.
Treasury Secretary Scott Bessent@SecScottBessent

While the surviving IRGC Leaders are trapped like drowning rats in a sewage pipe, Iran’s creaking oil industry is starting to shut in production thanks to the U.S. BLOCKADE. Pumping will soon collapse. GASOLINE SHORTAGES IN IRAN NEXT!

13 replies · 75 reposts · 229 likes · 6.1K views
code viber.
code viber.@nocodeonlyvibes·
@BaneThe76451 It's always nice to see the "firm handshake" boomer get his comeuppance. What a lovely time. I want to feel bad for him. Really, I do. But I just can't seem to get there.
0 replies · 0 reposts · 3 likes · 11 views
Lucius - The Prophet Of Decay
I call this the puppy syndrome. It's when a man constantly wants attention and play from women and can't get himself to understand most women these days are misandrists that absolutely despise every man they don't find sexually attractive.
3 replies · 0 reposts · 7 likes · 251 views
code viber. retweeted
Roman Helmet Guy
Roman Helmet Guy@romanhelmetguy·
Billionaires in the USA have spent the last 60 years lobbying our government to import 100M migrants to lower their cost of labor, and now the descendants of those migrants are going to vote to take all their stuff. It’s poetry.
42 replies · 228 reposts · 2.3K likes · 31.2K views
code viber. retweeted
Praise
Praise@tufpraise·
And after killing her, they threw Pancake themed parties where they made pancakes celebrating "turning her into a pancake". The Israeli depravity you see on display today is not new. It is not something that started on Oct 7th. It is how the Israeli society has always been!
David@dqveed

23 years ago today, American activist Rachel Corrie was crushed to death by an Israeli bulldozer as she tried to stop the demolition of a Palestinian home in Gaza.

11 replies · 1.4K reposts · 5.7K likes · 53K views
code viber. retweeted
Niw45
Niw45@Niw451·
Hard being Irish… "What's your name?" the teacher asked. "Mohammad," he replied. "You're in Ireland now," said the teacher, "so from now on your name is Mike." Mohammad went home after school. "How was your day, Mohammad?" his mother asked. "My name isn't Mohammad. I'm in Ireland and now my name is Mike." "Are you ashamed of your name? Are you trying to dishonor your parents, your heritage, your religion? Shame on you!" And his mother gave him a beating. Then she called his father, who beat him again. The next day, Mohammad went back to school. The teacher saw all his fresh bruises. "What happened to you, Mike?" she asked. "Well, shortly after becoming Irish, I was attacked by two damn Arabs."
38 replies · 390 reposts · 5.6K likes · 186.1K views
code viber. retweeted
BURKOV
BURKOV@burkov·
If you don't understand this, you will not understand why LLM-based agents are irreparably failing at general-purpose problem solving.

To be useful, an agent (which, by the way, was the topic of my PhD 20 years ago) must be rational. Being rational means always preferring the outcome with the maximal expected utility for its master/user. Say an agent has two actions it can execute in an environment: a_1 and a_2. If the agent can predict that a_1 gives its user an expected utility of 10 and a_2 an expected utility of -100, then a rational agent must choose a_1, even if a_2 seems like the better option when explained in words. The numbers 10 and -100 are obtained by summing, over each action's possible outcomes, the product of the outcome's utility and its likelihood.

Now here is the problem with LLM-based agents. The LLM is not optimizing expected utility in the environment. It is optimizing the next token, conditioned on a prompt, a context window, and a training distribution full of examples of what helpful answers are supposed to look like. Those are not the same objective.

So when we wrap an LLM in a loop and call it an "agent," we have not created a rational decision-maker. We have created a text generator that can imitate the surface form of deliberation. It may say things like: "I should compare the expected outcomes." "The best action is probably a_1." "I will now execute the optimal plan." But the internal mechanism is not selecting actions by maximizing the user's expected utility; it is generating a continuation that is statistically appropriate given the prompt and prior context.

This distinction matters enormously. For narrow tasks, the imitation can be good enough. If the environment is constrained, the actions are simple, and the success criteria are close to patterns seen in training, the system can appear agentic. But for general-purpose problem solving, the gap becomes fatal.

A rational agent needs stable preferences, calibrated beliefs, causal models of the world, the ability to evaluate consequences, and the discipline to choose the action with maximal expected utility even when that action is boring, non-linguistic, or unlike the examples in its training data. An LLM-based agent has none of that by default. It has fluency. It has pattern completion. It has a remarkable ability to compress and recombine human text. But fluency is not rationality, and a plausible plan is not an expected-utility calculation.

This is why these systems so often fail in strange, brittle, and irreparable ways when given open-ended responsibility. They are not failing because the prompts are insufficiently clever. They are failing because we are asking a simulator of rational agency to be a rational agent.
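The expected-utility rule the tweet describes (weight each outcome's utility by its likelihood, sum per action, pick the maximum) can be sketched in a few lines. The action names, outcome values, and probabilities below are invented purely to reproduce the 10 vs. -100 example; none of them come from the original post.

```python
# Minimal sketch of expected-utility maximization, assuming hypothetical
# actions a_1 and a_2 with made-up (utility, probability) outcome pairs.

def expected_utility(outcomes):
    """Sum of utility * probability over an action's possible outcomes."""
    return sum(u * p for u, p in outcomes)

# Hypothetical outcome distributions chosen so the EUs match the tweet:
actions = {
    "a_1": [(20, 0.6), (-5, 0.4)],      # EU = 12 - 2   =  10
    "a_2": [(50, 0.2), (-137.5, 0.8)],  # EU = 10 - 110 = -100
}

eu = {name: expected_utility(outs) for name, outs in actions.items()}
best = max(eu, key=eu.get)  # a rational agent always picks the max-EU action
print(best, eu)             # a_1 wins regardless of how a_2 "sounds" in words
```

The point of the sketch is the objective, not the arithmetic: the decision rule ranges over environment outcomes and their probabilities, whereas next-token prediction ranges over token continuations, which is the mismatch the tweet is arguing about.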
72 replies · 107 reposts · 576 likes · 35.4K views
code viber. retweeted
Ross Hendricks
Ross Hendricks@Ross__Hendricks·
Beautifully explained... RIP anyone long "agents" in any capacity, explicitly or implicitly
BURKOV@burkov


1 reply · 1 repost · 32 likes · 6.4K views
code viber. retweeted
PT Zoomer
PT Zoomer@ChristisKing123·
Men already live as 2nd class citizens. And as if that's not good enough, they want to tax us into oblivion, enslaving us and completely destroying whatever chances of reproducing we have left. Why exactly would men just go along with this? What do they think they're banking on?
More Births@MoreBirths

But instead of arguing for a return of social norms around marriage, the authors say that the solution to the fertility crisis lies in "further marginalization of men" (yes, that's in the title!) and getting single women to have more kids. Does that seem likely to work? 5/5

8 replies · 10 reposts · 178 likes · 3.2K views
code viber. retweeted
🌘ʀᴇᴠᴇɴᴀɴᴛ⚡
Zoomer men are constantly attacked for being "weak" or "too cowardly" to court women, but it's worth pointing out that most Millennials also aren't getting married or having kids, largely for the same reasons. It's not "cowardice." We're all just fucked at the same stages in life
🌘ʀᴇᴠᴇɴᴀɴᴛ⚡@revenant_MMXX

You should expect to see a lot of this as Zoomers turn 30 and up the ante on the LARP that they're the first generation to get screwed by their elders. It makes no sense for Millennials and Zoomers to be at odds because we have all the same problems, caused by the same people.

11 replies · 16 reposts · 234 likes · 3.6K views