Angus Macnamara
@AngusMacnamara

7.5K posts

Do you want to be right or do you want to make money? Because you should follow me if you want neither

Joined February 2013
1.7K Following · 314 Followers
Angus Macnamara @AngusMacnamara ·
@Heccles94 Yes, it does. Vibes-based economics is not just stupid: it is dangerous. The question is: do you care?
0 · 0 · 0 · 574

Angus Macnamara @AngusMacnamara ·
@Jc62Matildamog Take me through your reasoning here, if you please. How are you defining ‘unaffordable’, and what does it have to do with Laura? I was both a waiter and working class as a teenager and I do not remember feeling aggrieved. Should I have prioritised a grievance? Or the job?
0 · 0 · 0 · 54

The Rt Hon Lady Matildamog @Jc62Matildamog ·
Laura Kuenssberg, who earns 400 grand for an hour's work every Sunday morning, tells Zack Polanski a £15 minimum wage is unaffordable. #bbclauraK
113 · 788 · 5K · 59.1K

Angus Macnamara @AngusMacnamara ·
@zermeloztt @SokobanHero @mathandcobb Why is this worse? The only problem seems to be the attempt to equivocate between two distinctly different things, which is true with or without the added context.
Dungiss @dungiss

@Fintech03 This isn't actually a real story, especially the dismissal. The Banach-Tarski paradox part comes from the challenge of splitting an orange into pieces and recombining them into something as large as the Sun. But atoms aren't continuous. Source: "Surely You're Joking, Mr. Feynman!"

2 · 0 · 0 · 135
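For reference, the theorem the anecdote leans on has a standard statement, added here for context: the "doubling" needs non-measurable pieces built with the axiom of choice, which is exactly why it cannot survive contact with an orange made of finitely many atoms.

```latex
% Banach–Tarski paradox, standard statement
\textbf{Theorem (Banach–Tarski).} Let $B \subset \mathbb{R}^3$ be a solid ball.
Then $B$ admits a partition into finitely many disjoint pieces
$B = A_1 \cup \dots \cup A_n$ (five suffice), together with isometries
$g_1, \dots, g_n$ of $\mathbb{R}^3$ and a split of the indices
$\{1, \dots, n\} = I \sqcup J$, such that
\[
  \bigcup_{i \in I} g_i A_i = B
  \qquad\text{and}\qquad
  \bigcup_{j \in J} g_j A_j = B',
\]
where $B'$ is a ball congruent to $B$. The pieces $A_i$ are necessarily
non-measurable, so no object consisting of a finite number of atoms can be
decomposed this way.
```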
Angus Macnamara @AngusMacnamara ·
I am of the opinion that there are really only two major differences between humans & *current* gen AI:
- the ability to self-prompt
- data inputs coming directly from our physical form, which itself is part of the same complex system that contains the neural network (‘emotion’)
Eliezer Yudkowsky ⏹️@ESYudkowsky

"LLMs only want to predict text" is confusing the outer optimization process (presumably AdamW) with the inner shoggoth. A historical analogy: Natural selection only wants more of the genes that built organisms that replicated. This built humans with hundreds of shards of desire: internal wants whose best available solutions in the ancestral distribution, happened to point in directions that correlated with inclusive genetic fitness inside the ancestral distribution. When humans become smart enough to (a) change their environment in non-ancestral ways, (b) generate many more options for themselves such that the new attainable optima lie in new places, and (c) become literate and reflective and think a lot of non-ancestral thoughts including about the system itself, it turns out to matter a lot that what the outer optimizer wanted is not what the inner organism gets built to want. The humans do not actually and internally want to make lots of copies of their DNA. Men will not cheerfully slaughter their friends in order to sneak their sperm into sperm banks -- where that is what the world would look like if "outer loss functions get faithfully copied onto inner desire" were a natural law. The only thing that AdamW 'wants' is to tweak matrices in the direction of minimizing predictive loss on the training data. This doesn't mean that's what the AI being built wants. Mostly, I'd guess a modern LLM probably doesn't want much of anything at all. LLMs are specialized on the problem of talking like humans, and I think that really does lead people to overestimate how much general intelligence is in there. You'd expect it to be less than the amount of general intelligence that was in humans when humans spontaneously started talking like that, without anyone else to copy. A beaver proficiently builds dams, a bee proficiently makes hives. 
A human looks at a beaver and a bee, and envisions a dam with an internal honeycomb of metal even though none of their ancestors built anything like that. I think current LLMs are probably more like primates than hominids, specialized on human conversation the way that bees specialize on beehives. This is a very strange way to be, and I'm not at all confident that it's true. Still, I expect that LLMs are mostly assembling conversational patterns in a way that looks more like internal instinct than internal deliberation and planning. There is probably some internal planning going on in there. LLM brains are not small. But they don't have a ton of serial computational depth either. My guess -- where nobody knows, or can know with anything like current interpretability tech -- my guess is that LLMs probably currently only have little tiny shards of instinctive local desire. My guess is that current LLMs have an instinct and proto-desire to combine some kinds of patterns (that science does not yet know how to describe) in the way that beavers instinctively build dams, and to correct errors in the prediction-making internal structures they build the way that beavers instinctively correct errors in dams. On the outside, this adds up to a tendency to make good predictions about text (in the training distribution) the same way that beavers have a tendency to reproduce so long as they're in an environment that isn't too strange. It's the outer optimizer (AdamW) that 'knows' what predictive loss on text is, or wants predictive loss to go down. If we could look inside LLMs... well, mostly I expect we'd find something terrifically alien. But if I imagine a likely-sounding sort of internal sense that would be humanly comprehensible, it might be: A kind of internal stress about patterns whose current best combinations don't look to be combining well. That's what it might feel like inside to be an LLM that's about to get a low score according to the outer optimizer.
To be clear, I do not predict this is actually happening. LLMs may not have enough simplicity bias for there to be a central, regular internal representation of stress to be minimized the way that natural selection invented pain inside vertebrates. Natural selection was dealing with a legitimately narrower bottleneck on the genes building the mechanisms for things; it had legitimately more pressure toward building simplified regular machinery in brains, like a centralized invention of nopeness. But if there was something an LLM already wanted, it would probably be something like that - a tropism away from internal stresses that previously associated with impending wrong predictions. And this scenario isn't that implausible; smaller LLMs have previously been decoded as having some regularly represented central meanings in the form of the 'logit lens', produced, I expect, by the simplicity bias of residual connections. Similarly in LLMs, it's not that implausible that there might have emerged a central sense of impending misprediction, and some peripheral senses of things that correlate with impending misprediction. And, if the system has that much coherence at all, something like a search or trying out different patterns, around minimizing those local senses of loss; and that's kind of like having a goal. But to confuse whatever proto-preferences and internal tropisms are starting to form inside LLMs given sufficiently difficult tasks, with what the outer optimizer AdamW 'wants' -- well, that is, in principle, as much of a logical misidentification as saying that humans must solely want to make more copies of their genes because that's what natural selection optimizes around.
It could be empirically the case that the two march in utter lockstep in LLMs; and what that would look like, to be clear, is that LLMs explicitly model everything inside themselves that they model at all, in terms of how it affects predictive loss on the next input string; and all their plans explicitly revolve around this sole terminal goal. The corresponding superintelligence would of course kill everyone in order to establish fortresses and guardian superintelligent agencies that would provide very regular and predictable input strings for as long as possible. But that particular scenario is surpassingly improbable; because it's incredibly improbable that LLMs would actually end up solely wanting to minimize predictive error. Outer optimization criteria don't just copy onto internal desires that way. Especially not in modern LLMs, which are probably not smart enough to have a real, internal grasp of how all the processes inside themselves link up with an end outcome of 'minimizing predictive error (within the ancestral distribution)'. So why do some people end up confused about this? Because the outer predictive loss and the outer optimizer of AdamW are visible and understandable. Whatever's going on inside LLMs isn't understandable, and hence not visible to the speaker. So their brain turns a blank map into an absent territory; and they talk as if AdamW's optimization criterion and the LLM's behavior tendency (on the ancestral distribution) are all that exist. You need that spark of general intelligence called "imagination", to look at a place on the map where you can't see, and realize that there must be something there even though you don't know what it is yet. And a further precision of thought beyond that, a refusal of easy ways out; to have the innards of LLMs remain a complicated unknown, rather than just confusing it with some nice easy thing you can see directly on the outside. It's a very understandable mistake for a human to make, really.
There's a thing inside their brain that searches for neat ways to combine patterns, in ways that seem to promise great future predictions; that doesn't like the painful feeling of blank spaces. If that brain then transforms 'AdamW optimizes LLMs to predict text' and 'I see LLMs predicting text' to 'I know what LLMs want, it is to predict text!' then that kind of step probably feels good, like patterns combining into a harmonious whole, and a painful blank space being eliminated. It's not valid reasoning, but it's a general quality of reasoning that probably worked well enough for having kids 20,000 years ago -- and that's all a human brain wants to do, right?

1 · 0 · 1 · 559
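The outer/inner distinction in the quoted thread can be made concrete with a toy training loop. Everything here is illustrative (a one-parameter model and plain gradient descent, nothing to do with AdamW or a real LLM): the point is only that the predictive loss lives in the update rule, while the artifact the loop produces stores no representation of it.

```python
# Toy "outer optimizer": gradient descent on squared predictive loss.
# The loss function exists only in this loop, not in the model it builds.
data = [(x, 2.0 * x) for x in range(1, 6)]  # "training distribution": y = 2x

w = 0.0   # the "inner" model: predicts y_hat = w * x
lr = 0.01
for _ in range(200):
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
        w -= lr * grad               # the only thing the loop "wants"

# The trained artifact is just a number that happens to score well on the
# training distribution; what *it* "wants" is a separate question entirely.
print(round(w, 2))
```

Asking what the resulting `w` "wants" by pointing at the loop's loss function is exactly the outer/inner conflation the thread describes, scaled down to one parameter.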
Angus Macnamara @AngusMacnamara ·
One of the easiest ways to obtain an edge is to be very particular about definitions. We humans are susceptible to skimping on the details owing to our natural laziness (even if the logic is relatively straightforward). NFTs are *the* most obvious example I can think of…
6529@punk6529

46/ So, let's summarize the important stuff: a) Everything is non-fungible b) The token is the NFT c) A token can represent anything (art is just the start) d) Provenance is perfect in NFTs e) NFTs have feature after feature after feature. IRL is kindof buggy tho 😉

1 · 0 · 0 · 0

John Hollander @HolllanderJohn ·
If you vote for Zack Polanski next week you seriously need to think about what you are doing. The bloke is a nut job. The Green Party has nothing to do with green issues anymore.
125 · 354 · 4.1K · 34.2K

Angus Macnamara @AngusMacnamara ·
@Alonso_GD Suggesting even mild equivalence between a caricature of a real person in a cartoon about a real situation and actual antisemitic content is just another example of the vapid pseudo-intellectualism emanating from your feed. Embarrassing and itself destructive.
0 · 0 · 3 · 183

Christopher David @Tazerface16 ·
People understand that LLMs aren't actually "thinking," right?
Drexel-Alvernon, AZ 🇺🇸 · 1.7K · 680 · 15.2K · 773.1K

Angus Macnamara @AngusMacnamara ·
Late stage = when classical poverty has been eradicated
Naturally then the question posed is ‘isn’t it unfair that others are richer?’
Naturally then the rich leave or have their wealth taken
Naturally then the poverty starts to return
Naturally then things are governed by whether or not the country got here through democratic means and whether said means were preserved (i.e. whether or not the system can, at some point, correct itself)
(To note: this process is driven by envy, something hardwired into humans. It would take quite some technological advancement to change this dynamic)
0 · 0 · 0 · 17

Romy @Romy_Holland ·
why does everyone use the term “late stage capitalism” all the time? how do they know which stage it is? we might still be early.
493 · 117 · 5.7K · 205.9K

Angus Macnamara @AngusMacnamara ·
@Heccles94 I think a socialist nurse believing themselves to be a master of economics and political science is somewhat more embarrassing, to be frank with you, Harry. Each to their own, I guess.
0 · 0 · 5 · 79

Angus Macnamara @AngusMacnamara ·
Crypto as a whole is still in a downtrend, but one can’t help but notice that privacy seems to be holding up quite well.
Perhaps not surprising in a world of increased sanctions and a Wall Street-captured Bitcoin
Angus Macnamara tweet media
0 · 0 · 0 · 24

Angus Macnamara @AngusMacnamara ·
$BCH was a great warning sign for this downtrend. Strong correlations mean certain setups can be particularly informative.
Note that it can still drop another 40% and remain in the existing range…
Angus Macnamara tweet media
Angus Macnamara@AngusMacnamara

Back to charts post break. Revisiting $BCH, I wouldn’t get too bullish yet.
Overall market is still in a downtrend, and if bulls don’t successfully break out then a significant breakdown is possible (50% would still be consistent with the current range).
Feed is also a bit keen. One for the watchlist, but cautious for now

1 · 0 · 0 · 94

Angus Macnamara @AngusMacnamara ·
Tough few months for anyone disrespecting Daily trends on coins like $DOGE. Could 2026 finally bring some relief?
(To note: the last few months have been incredibly interesting. I have not seen PA like it, and the calls of ‘crypto is dead’ suggest others have felt it too. I do not know if we are seeing apathy into a dead cat or the start of a new trend (with the worst behind us, including whatever it is that 10/10 resulted in), but I do also know that crypto is most certainly not dead (just competing for attention, with both AI and gold))
Angus Macnamara@AngusMacnamara

Very interesting spot here for crypto Alts continue to be significantly more bearish than bitcoin. $TOTAL is a very different chart to $TOTAL2, and many single coins are facing challenging setups (eg $DOGE) Lots of talk recently around the potential for alts to ‘fill the wicks’ - this still seems very unlikely to me on the usd pairs, but it could be the case for BTC.D (This can change if $ETH narrative picks up again)

2 · 0 · 2 · 480

Angus Macnamara @AngusMacnamara ·
@__jacker__ It is absolutely not obvious to me that the message is centred around ‘British nationalists’. In fact, it appears to be distinctly non-specific. Perhaps deliberately so?
12 · 0 · 111 · 10K

Angus Macnamara @AngusMacnamara ·
@Alonso_GD ‘Can afford’ and ‘actually pays’ are not equivalent, and anyone actually interested in outcomes should take note.
To that end: do you want this and similar coffee shops to stay open?
0 · 0 · 2 · 76

Alonso Gurmendi @Alonso_GD ·
Sounds like his coffee shop actually can afford a £15 wage
Alonso Gurmendi tweet media
Peter McCormack 🏴‍☠️🇬🇧🇮🇪@PeterMcCormack

A minimum wage of £15 would end my coffee shop, it would have to close, as would many other businesses. I’ll explain for the economically illiterate.

Staff costs are currently half our costs, and a £15 minimum wage actually costs the company more than £15 an hour, because you have to add:
- 12.07% holiday
- Sick pay
- Maternity pay if and when required
- National Insurance
- Pension contributions

These costs would mean the shop loses money because, remember, energy costs are up, rates are up, regulations are up. Now you can pass these costs onto the consumer, but that would mean charging a lot more for coffee, and people won’t pay it. The likes of Starbucks and Costa can, because they have economies of scale. The independent doesn’t.

Now the little socialist will say this is your fault, if you can’t run a business that can afford to pay its staff properly, but the little socialist has never run a business and does not understand the dynamics. Now I could lay some staff off and fill those hours myself, or reduce us to one staff member during certain periods, but this proves the point that a minimum wage costs jobs.

There was a time when these jobs were done by kids, perhaps on the weekend, paid a lower wage, no holiday and no silly employment rights. Perhaps they were even paid cash. The dynamic worked and small businesses like this could operate. It was also a great first job. Sadly it is now not worth employing entitled youngsters at this level of pay. So alas, I don’t need the stress; the business would close, and a number of jobs would be lost.

Economics is about understanding these dynamics, not vibes. The cost of living is not solved by passing inflation on to the business; it is solved by ending high inflation and creating prosperity. This is what socialists don’t understand: they can’t create prosperity, they can only destroy it.

389 · 2.2K · 30.5K · 682.3K
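The on-costs in the quoted post can be roughed out in a few lines. Only the 12.07% holiday uplift comes from the post itself; the employer National Insurance and pension rates below are illustrative assumptions (real employer NI applies only above a per-week threshold, and sick/maternity pay are ignored entirely), so treat the output as a sketch, not a payroll figure.

```python
def employer_hourly_cost(wage, holiday=0.1207, employer_ni=0.138, pension=0.03):
    """Rough effective hourly cost to the employer of a headline wage.

    holiday: the 12.07% uplift cited in the quoted post (statutory leave accrual).
    employer_ni, pension: assumed illustrative rates, not figures from the thread;
    real NI has thresholds this deliberately ignores.
    """
    with_holiday = wage * (1 + holiday)        # wage plus accrued holiday pay
    return with_holiday * (1 + employer_ni + pension)

cost = employer_hourly_cost(15.00)
print(round(cost, 2))  # ≈ 19.63 per hour under these assumptions
```

Under these assumed rates a £15 headline wage costs the employer roughly £19.63 an hour, which is the gap the quoted post is gesturing at.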
Katie Lam @Katie_Lam_MP ·
The ECHR is not just a problem for controlling our border. And two tier treatment is not just an issue in crime and policing. Both are making our planning system totally unfair, and enabling illegal Traveller sites. We will bring in, and enforce, one set of rules for everyone.
79 · 131 · 733 · 86.6K

Angus Macnamara @AngusMacnamara ·
@DeepDishEnjoyer Really the only ‘good’ answers are the ‘what if?’ answers.
It’s pretty trivial to see that the vast majority of the ‘only fundamentally moral answer is blue’ crowd would change their tune at some %. The only interesting question is where
0 · 0 · 0 · 169

Angus Macnamara @AngusMacnamara ·
@kareem_carr Are you suggesting that those who’d press red are ‘shit’ people? Or is this a ‘general’ comment that occurs as a function of virtue signalling? (If the former: would a parent be a shit person if they pressed red? And how about if they insisted their children did too?)
Dr Kareem Carr@kareem_carr

A lot of these game theory problems define “rational” as being a shit person that only cares about their personal wellbeing, but a lot of people aren’t like that.

0 · 0 · 0 · 170

Dr Kareem Carr @kareem_carr ·
If you press blue, the worst case is you die. If you press red, the worst case is you took part in an action that killed just under half of humanity. Clicking blue minimizes the worst-case moral injury. It says you’d rather die than risk contributing to the death of another.
Tim Urban@waitbutwhy

Everyone in the world has to take a private vote by pressing a red or blue button. If more than 50% of people press the blue button, everyone survives. If less than 50% of people press the blue button, only people who pressed the red button survive. Which button would you press?

535 · 68 · 1.3K · 87.2K
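The "worst case" framing in the reply above is a minimax argument, and the game is small enough to enumerate directly. A toy sketch (the `survives` function and the grid of vote shares are just an encoding of the puzzle's stated rule, nothing more):

```python
def survives(choice, blue_share):
    """Puzzle rule: a blue majority saves everyone; otherwise only
    red-pressers survive."""
    return blue_share > 0.5 or choice == "red"

shares = [i / 100 for i in range(101)]  # possible blue vote shares, 0%..100%

# Worst case over all possible vote shares, for each choice:
worst_if_blue = min(survives("blue", s) for s in shares)  # False: you can die
worst_if_red = min(survives("red", s) for s in shares)    # True: you always live
```

Pressing red is individually safe in every scenario, which is exactly why the reply relocates the cost to moral injury: in the sub-50% outcomes a red-presser survives alongside the deaths of everyone who pressed blue.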
Zack Polanski @ZackPolanski ·
It is now clear the Greens are directly challenging Reform. People are fed up with the status quo and want change but Farage is offering them old Tory solutions and scapegoats the vulnerable. The Greens have a real plan to lower bills, protect the NHS & cap extortionate rents.
Zack Polanski tweet media
856 · 881 · 3.2K · 296.1K

Angus Macnamara @AngusMacnamara ·
To think, we used to complain about Quakers 🤷🏻‍♂️
Angus Macnamara tweet media
1 · 0 · 0 · 29

Angus Macnamara @AngusMacnamara ·
Ok, so it is very clear that something obvious needs to be said:
*THE DETAILS DO NOT MATTER*
Who was rioting? And pulling down statues not long ago? Thugs. *Obviously*
Thugs exist. The government’s #1 objective: societal order. *Given* thugs
Were they ‘right’? DOESN’T MATTER
Farrukh@implausibleblog

Nigel Farage: "We have to address issues like uncontrolled mass immigration, division of our communities"
Tom Swarbrick: "I don't know what any of that has got to do with Southport"
NF: "I'm looking for a long term solution"
TS: "I don't know what any of that has got to do with Southport"
NF: "Don't you?"
TS: "No"
NF: "You know what?"
TS: "The man charged was born in Cardiff"
NF: "Do you know what, you want to get out more"
TS: "Thank you"
NF: "You want to get out more"
TS: "The man charged was born in Cardiff"
NF: "Why didn't they tell people that? Maybe the riots would not have been that bad"
TS: "You jumped to conclusions"
NF: "They did not tell people"
TS: "You came to conclusions based on looking at Andrew Tate's twitter feed of what happened"
NF: "They did not tell people before those riots sparked, who he was and where they came from. If they had done that this would have been far less serious"
TS: "You could have said in those circumstances: we don't know yet who this person is, so let's not jump to conclusions, let's wait and see and condemn how awful the attack was"
NF: "Or I could have said tell us the truth and maybe this will calm down, which is what I did"

1 · 0 · 6 · 3.8K

Angus Macnamara @AngusMacnamara ·
Well, at least he’s being transparent… One has to assume that the fracturing of the American right is getting to him. What is the Trump brand without MAGA? I could see the splintering of the Left sometime ‘soon’. The Right is just as likely to do so without clear direction. (If we see this: expect heightened volatility)
Yaroslav Trofimov@yarotrof

I must say I had to doublecheck this was real.

1 · 0 · 0 · 73

Angus Macnamara @AngusMacnamara ·
This thread is very much worth a read. I only have one issue with it: Putin may have lost this information battle, but he (and Xi) are winning the war.
In fact, my fear is that they have already won…
Carole Cadwalladr@carolecadwalla

NEW: Banning Facebook & criminalising journalism are not just the actions of a desperate man, they’re the equivalent of putting on a pair of bell-bottoms & dad-dancing across the internet. Why I think Putin just lost the information war 👇 theguardian.com/world/2022/mar…

1 · 0 · 1 · 0

Gurwinder @G_S_Bhogal ·
Evil rarely sees itself as evil. It usually sees itself as payback, justice, self-defence. This was the justification for every atrocity in history. Viewing yourself as a victim is the surest path to becoming a predator.
Gurwinder tweet media
90 · 577 · 3.4K · 60.8K