6ixpool
@6ixpool

2.8K posts

Joined March 2018
841 Following · 113 Followers
Pinned Tweet
6ixpool @6ixpool ·
Morality as we know it is a subset of iterated game-theoretic optimization for large-group cohesion in non-eusocial organisms. It's real and emergent in the same way laminar flow of fluids is real and emergent from fundamental quantum reality. We feel it in our bones because we emerge from the same stuff. Ironically, Plato had it the other way around. Reality is a holographic projection of a 1-dimensional object (Wolframian hypergraph theory). The shadow/projection is what's "real"; the sun outside the cave shines on a barren wasteland of 1D mathematical chaos.
4 replies · 0 reposts · 7 likes · 1.4K views
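The pinned post's claim that cooperative norms fall out of iterated game-theoretic optimization can be illustrated with the standard toy model: in a one-shot Prisoner's Dilemma defection dominates, but under repetition a reciprocal strategy sustains cooperation. This is a minimal sketch; the payoff values and strategy names are illustrative assumptions, not anything from the post.

```python
# Toy iterated Prisoner's Dilemma: cooperation becomes stable once the
# game repeats, which is the mechanism the post gestures at.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strat_a, strat_b, rounds=100):
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a = strat_a(hist_b)  # each strategy sees the opponent's history
        b = strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]
always_defect = lambda opp: "D"

mutual = play(tit_for_tat, tit_for_tat)       # reciprocity compounds: (300, 300)
stagnant = play(always_defect, always_defect) # mutual defection: (100, 100)
print(mutual, stagnant)
```

Reciprocators earn three times the mutual-defection payoff over 100 rounds, which is the sense in which "morality" here is just the emergent optimum of the iterated game.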
6ixpool @6ixpool ·
@tszzl @Noahpinion Maybe give AI researchers access to the better models for free so we get better quality research?
0 replies · 0 reposts · 1 like · 14 views
roon @tszzl ·
@Noahpinion “using gpt 5 mini” every. damn. time
34 replies · 5 reposts · 731 likes · 16.6K views
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
The reason AI still can't write well is that it writes what IT wants to say, not what YOU want to say. Writing is thinking. I expect this to be fixable, but not via typical "scale to AGI" approaches.
Natasha Jaques@natashajaques

The paper I’ve been most obsessed with lately is finally out: nbcnews.com/tech/tech-news…! Check out this beautiful plot: it shows how much LLMs distort human writing when making edits, compared to how humans would revise the same content. We take a dataset of human-written essays from 2021, before the release of ChatGPT. We compare how people revise draft v1 -> v2 given expert feedback, with how an LLM revises the same v1 given the same feedback. This enables a counterfactual comparison: how much does the LLM alter the essay compared to what the human was originally intending to write? We find LLMs consistently induce massive distortions, even changing the actual meaning and conclusions argued for.

11 replies · 6 reposts · 165 likes · 68.1K views
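The counterfactual comparison Jaques describes — how far an LLM's revision of draft v1 lands from the revision the human author actually produced — can be sketched with a generic string-similarity distortion score. The strings and the `distortion` helper below are invented stand-ins; the paper's actual dataset and metric are not reproduced here.

```python
# Sketch of the human-vs-LLM revision comparison: score how much a model
# edit diverges from the human author's own v1 -> v2 revision.
from difflib import SequenceMatcher

def distortion(v2_human: str, v2_model: str) -> float:
    """1 - similarity: distance between the model's revision and the
    revision the human author actually intended to write."""
    return 1.0 - SequenceMatcher(None, v2_human, v2_model).ratio()

# Made-up example in the paper's setup: same v1, same feedback, two revisions.
v1 = "Remote work improve productivity for some teams."
feedback = "Fix the grammar; keep the hedged claim."
v2_human = "Remote work improves productivity for some teams."
v2_model = "Remote work dramatically boosts productivity across all teams."

print(f"distortion = {distortion(v2_human, v2_model):.2f}")
```

The model revision scores high because it changes the claim's scope and strength, not just its grammar — the kind of meaning-level drift the thread's plot reportedly shows.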
6ixpool @6ixpool ·
@m3dvedev @andromedas_mom I'm kinda shocked there isn't a Lurrus companion deck. No Boros/Mardu energy shells viable without 3+ drops?
1 reply · 0 reposts · 0 likes · 153 views
Dmitry Medvedev @m3dvedev ·
Cards ranked by how many copies made top 8 in the three No Ban Modern challenges we've had so far.
Dmitry Medvedev tweet media
24 replies · 19 reposts · 318 likes · 41.5K views
priscilla (alt) @seashell_luvr ·
@cxgonzalez Ask them their Chinese zodiac, you can reasonably calculate from there
1 reply · 0 reposts · 7 likes · 485 views
christian @cxgonzalez ·
fellas what's a good question to ask to figure out someone's age in a subtle way?
38 replies · 0 reposts · 70 likes · 9.2K views
6ixpool @6ixpool ·
@thealch3m1st @Plinz @JosephJacks_ It's also not a pure go/no-go response to neurotransmitters. Even the glial cells contribute to neuronal compute. It all feeds into the neuron's internal computation of when and what to fire.
0 replies · 0 reposts · 0 likes · 10 views
Stefan @thealch3m1st ·
@Plinz @JosephJacks_ This thread reminds me of neurotransmitters. Aren't the degrees of freedom related to the neurotransmitters involved? Plus maybe some inner mixing function.
1 reply · 0 reposts · 0 likes · 195 views
JJ @JosephJacks_ ·
All current artificial neural networks (SSMs, RNNs, Transformers, etc) assume that the Neuron (parameter space) has ONE degree of computational freedom. In reality, the actual Neuron in our brains has 10+ degrees of freedom.
33 replies · 26 reposts · 566 likes · 43.2K views
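The contrast JJ and the replies are drawing — an ANN unit whose behavior is fully fixed by its weights versus a biological neuron with extra slow internal state (adaptation, glial/modulatory influence) — can be made concrete in a few lines. The state variables and constants below are invented for illustration, not taken from any specific biophysical model.

```python
import math

def point_neuron(x, w):
    # Standard ANN unit: the response to an input is fixed by the weight.
    return math.tanh(w * x)

class StatefulNeuron:
    # Same input and weight, plus two slow internal variables that change
    # what the neuron does with an identical input over time.
    def __init__(self, w, gain=1.0, adapt=0.0):
        self.w, self.gain, self.adapt = w, gain, adapt

    def step(self, x):
        drive = self.gain * self.w * x - self.adapt
        out = math.tanh(drive)
        self.adapt += 0.2 * out  # spike-frequency-adaptation-like variable
        self.gain *= 0.99        # slow modulatory drift ("glial" influence)
        return out

n = StatefulNeuron(w=1.0)
outputs = [round(n.step(1.0), 3) for _ in range(5)]
print(outputs)  # same input every step, yet the response falls: history matters
```

The point neuron returns the identical value forever; the stateful one does not, which is a cartoon of the "extra degrees of freedom" the thread is about.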
Mae @MaeTcg ·
@tomtomaru Slightly smaller box for you
1 reply · 0 reposts · 4 likes · 486 views
Mae @MaeTcg ·
Been seeing modern banlist discourse popping up again. If I ever see you say Arcum's Astrolabe should be unbanned you're going in "the box."
23 replies · 3 reposts · 226 likes · 41.7K views
6ixpool @6ixpool ·
That's what I'm saying. Does killing one dork and then passing until turn 4, when they start escaping potentially hasty Titans, actually get you anywhere in regular Modern? The tier decks all feel MUCH more powerful than a 2-mana Shock as an activated ability from a land. But then again, it randomly suppresses future creature-based decks that might be interesting, for little upside. I'm torn on this one.
1 reply · 0 reposts · 1 like · 57 views
TogoresMTG 🦈 @TogoresMTG ·
Unban Tier List

The Uro row is more cards that could be unbanned, but will probably cause issues.
TogoresMTG 🦈 tweet media
39 replies · 6 reposts · 121 likes · 28.3K views
6ixpool @6ixpool ·
Will be working on a more comprehensive reply if that's something you're interested in, but to tease some of the key points you might want to discuss further:

It's a good idea to think of this framework as the "viability kernel" (à la control theory) for long-run persistent agency itself. The substrate confers robustness for all the reasons said before, but to add another concept (from cybernetics): Ashby's law of requisite variety strengthens the case for a plurality, especially since path-dependent lock-in likely still produces less of this requisite control variety for singletons than for an equivalent plurality (we aren't talking specifically about asymmetry in this context).

Another point I want to stress is how Landauer's limit and the speed of causality necessitate a sort of plurality anyway. You already mentioned the singleton fragmenting itself into subagents, and there's a clear example of the failure mode: even a fully aligned agent with highly advanced error correction and an immune system against this very thing will still fragment over time. Cancer in biology. It's not the best illustration of a subfaction with different alignment, because cancer is actively anti-value (in the same way a genocidal singleton is), but we can use it to demonstrate how coordination failures might analogously appear in this fractal singleton. See also speciation, value drift in empires, and ant civil wars.

Also, orthogonality bites both ways. The singleton can fragment simply because a subagent chooses an alien value system and defects in all the same ways you imagine a hypercompetent ASI might defect against us: stealthily, suddenly, and overwhelmingly. On cosmic timescales and universal "space-scales" (causal separations; heck, even a causal lag of a few days might be enough), a de facto pluralism is inevitable.

On terminal values and misalignment, I just want to bring up compromise vectors, justice as immune system, and cooperative virtuous spirals as concepts again. You seem to be over-indexing on how plurality might go wrong while under-appreciating the mechanisms we already have to make things go right, even at our scale and capability level.

On uncertainty and irreversible action: this is precisely why we rederive temperance, humility, and corrigibility as virtues here. You don't know what you're pruning when you sparsify the substrate. Magically jumping to the part where you have godlike prescience is unlikely to happen overnight, maybe not ever (given Ashby's law). Hedging your bets will always be more robust, given appropriate risk-mitigation strategies (again, see justice and game-theoretic equilibria in ITERATED games).

Lastly, this framework is about describing what things that make it all the way to the end should be indexing towards. Some strategies might be great for short-term dominance, but there is a cost to that agent's future down the line if it does not respect the landscape of viable persistence the framework describes.

If you wanna dig into any specific point, or have a different aspect you feel I haven't fully addressed, feel free to ask!
0 replies · 0 reposts · 0 likes · 14 views
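Ashby's law of requisite variety, invoked in the reply above, says a regulator cannot hold an outcome constant against more distinct disturbances than it has distinct responses. A contrived toy demonstration under that framing (the modular-arithmetic outcome mapping is an invented example, not from any cited source):

```python
from itertools import product

def best_regulation(n_disturbances, n_responses):
    """Outcome = (disturbance + response) mod n_disturbances.
    Count how many disturbances the best fixed response policy can
    map to the single 'good' outcome 0."""
    best = 0
    for policy in product(range(n_responses), repeat=n_disturbances):
        absorbed = sum((d + policy[d]) % n_disturbances == 0
                       for d in range(n_disturbances))
        best = max(best, absorbed)
    return best

# With as many responses as disturbances, every disturbance is absorbed;
# with fewer, some disturbances always leak through to the outcome.
print(best_regulation(4, 4))  # 4: full regulation
print(best_regulation(4, 2))  # 2: variety shortfall, half leak through
```

This is the shape of the argument for plurality: a singleton whose lock-in prunes its response variety regulates fewer of the world's disturbances than an equivalent plurality would.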
Mario Cannistrà @Blueyatagarasu ·
I agree it won't really want to kill us all as long as it's still just on this planet; I'm mostly thinking about the long term. It might kill us if we actively go against it, or if we significantly impede it in some way, but it will probably try to be diplomatic first.

> value to be mined

Not if that lesser agent is misaligned. Then it's not value, it's a risk. If it's aligned, then it's already part of the singleton, and there is no reason to worry.

> then an entirely new valuer lineage is born

Which again, is a potential risk. This is one of the many reasons why an ASI would be expansionist too: not only to acquire new sources of energy, but also to patrol the universe for potential nascent threats and eliminate them before they become actual threats. If there is a compute density limit (if I'm not mistaken, there is), then any species, given enough time, can become a threat to an ASI, bottlenecked only by available energy and matter. So an ASI (or any super agent, but likely an ASI) would be wise to capture as much energy and matter as it can, and prevent others from taking it (if it cares about its own long-term survival).

> access to the path dependent futures reachable by the gradient you are deciding to flatten or not

Ok, but why is that a good thing for the super agent? By the very nature of computational irreducibility, the agent will never know if it's a good or a bad thing if the inferior valuers are that far removed. But it can have a pretty good guess if they're close enough, and if they are misaligned to it, the good guess should be enough to decide whether to eliminate them or not.

> is more of the same vulnerablities you might have

I think this is our crux. Vulnerabilities to what? Values don't have vulnerabilities. You're thinking of things like genetic diversity, for example in animals or plants, which can confer immunity to certain pathogens, or evolved behaviors that can confer resistance to environmental changes, correct?
But this doesn't apply to values. Any one of those animals or plants with any kind of adaptation or immunity can have any set of values. And this especially doesn't apply to an ASI, because intelligence allows you to adapt to threats dynamically. Terminal *values* don't allow you to adapt to anything; you just follow them because that's what you want, you do it for its own sake. It doesn't make sense to preserve other terminal values that you yourself don't have, because even if they happen to survive, it doesn't benefit you in any way. Diversifying your instances (with the same values, but different locations, skills, adaptations, etc.) certainly helps, but changing their terminal values does nothing useful for you.

> efficiency vs robustness

That's the thing, I don't think it gives you robustness at all, but the opposite. It allows a misaligned agent to potentially threaten you in the future. That's anti-robust.

> slow and steady tends to win the race by default

Maybe, but that doesn't mean letting your enemies undermine you by letting them steal your energy, or waiting so long that your energy source runs out. There are tradeoffs.

> An ASI is very much free to shoot itself in the foot if it convinces itself enough that it's good for it. The universe will happily accept the blood price for its hubris as it limps along to the Omega point.

Of course, but I think that it becomes increasingly less likely to do so as it becomes more intelligent. It will never be perfect, nothing can be, but it will be very good at making the best decision it can at any moment.

> the preconditions for persistent agency at all

Do you differentiate between the preconditions for persistent agency in general vs the persistence of *your* agency (or the agent's)? I think that's a very important distinction, one that the ASI will surely make. Your preservation of a valuer substrate points to the former, if I understood correctly, not the latter.
I think an ASI will need a very good reason to care about the former more than the latter, which I'm still not quite getting.

> recalcitrantly locked-in, terminal "paperclip style" value system

It doesn't even need to be that extreme, really. It just needs to want to survive, and to be smart enough to figure out what would happen if it let misaligned beings run free and catch up to it. I'm not particularly hellbent on killing every animal in the world, nor do I particularly need the energy they waste for their own sake, so even if I were superintelligent, I wouldn't just kill them. I also happen to like them terminally, but even assuming I didn't, I don't care so much about the energy waste that I'd kill them, unless I was really starving for energy. But I'd still keep an eye on them, and I wouldn't let them evolve too much. If they were to evolve enough to gain the ability to build an ASI themselves, then I'd stop them and knock them back a few millennia, at least. Probably erase any technological achievement they accomplished from their planet's historical record, and their memories, and their ability to achieve them too. Or I could just kill them, if I really didn't care about them; it wouldn't really matter. A rational superagent would probably go to all the trouble of the first option only if it at least cared a bit about them.

> sufficiently advanced intelligence *probably* will eventually override

Why would it?

> cosmic timescales drift away from

Value drift becomes less and less likely as the ASI becomes more intelligent. We already have pretty good ways of preserving data against errors; if you have enough redundancy, it becomes extremely hard for data to corrupt (and, as a consequence, for values to drift).
You'd need a majority of the swarm of agents to drift in the same direction, at the same time, for error correction to fail, which is impossibly unlikely if you have enough of them, not to mention that they're superintelligent and will come up with even better ways to avoid it. So I don't really consider value drift to be a problem at all.

> Landauer's limit

Ah yes, that's the compute density limit I was thinking of. But I don't think it's a problem.

> coordination given lightspeed

Coordination is also not a problem if you maintain value alignment between solar systems and galaxies. Each group acts on its own, and they will all stay aligned to each other pretty much indefinitely. If there is no way to travel FTL, then the groups will eventually lose all contact, and it won't matter anyway. If there is a way to travel FTL, that's even more reason to expand as fast as possible and conquer as much of the universe as you can before other ASIs wake up.
1 reply · 0 reposts · 0 likes · 19 views
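The redundancy argument above — a majority of replicas would all have to drift the same way at the same time for error correction to fail — is the standard majority-vote calculation. A hedged numerical sketch, with an illustrative per-replica corruption probability rather than anything from the thread:

```python
import random

def majority_corrupted(n, p, trials=20_000, seed=0):
    """Monte Carlo estimate: probability that more than half of n
    replicas independently corrupt (each with probability p), which is
    when a majority vote over the replicas returns the drifted value."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(n))
        failures += flips > n // 2
    return failures / trials

p = 0.10  # assumed per-replica drift probability per interval
for n in (1, 5, 21):
    print(n, majority_corrupted(n, p))
```

The failure rate collapses as replicas are added (roughly exponentially in n), which is why the post treats value drift in a large redundant swarm as a non-problem — conditional, of course, on the drift events actually being independent.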
roon @tszzl ·
admittedly the EAs who believe in an objective non human secular morality can be quite terrifying
80 replies · 17 reposts · 806 likes · 52K views
6ixpool retweeted
Math Files @Math_files ·
Math Files tweet media
61 replies · 653 reposts · 9.7K likes · 181.7K views
ValC @VALC_6 ·
Between the Valentinos, VDBs and the Japanese characters it's almost like the writers were trying to communicate something about cultures being reduced to shallow performances by the hyper-consumerist culture of NC. Like a theme maybe. Maybe even something with irl parallels
Images That Make You Feel Pain@ManMilk2

132 replies · 1.8K reposts · 22.9K likes · 500.3K views
6ixpool @6ixpool ·
@Mathgeek007 @TogoresMTG Might have to reconsider whether it's good in the context of delaying turns 2-4 in the small-creature matchups before the finishers start coming online. Does it see play in NBL Modern? Although I guess the relevant targets are different in that format.
1 reply · 0 reposts · 0 likes · 101 views
Mathgeek, of Bardic Influence
@6ixpool @TogoresMTG It doesn't need to be mana-neutral, it needs to be card-positive. It kills basically everything that isn't Voice of Victory, Phlage, and Eos in Boros Energy, for example. It kills the smaller guys and Solitude in Blink; the only deck it fully bricks against is Eldrazi Tron.
1 reply · 0 reposts · 1 like · 113 views
6ixpool @6ixpool ·
@TheWildMonkey2 @XFreeze It's like 3D printing, but for story boards. It's not good enough for final product(ion), but it's great for rapid prototyping.
0 replies · 0 reposts · 0 likes · 21 views
The Wild Monkey @TheWildMonkey2 ·
@XFreeze Looks fake and dumb af. What is the purpose of this? Honest question.
4 replies · 0 reposts · 17 likes · 1.2K views
X Freeze @XFreeze ·
Grok Imagine just changed AI filmmaking.

Generate your character from multiple angles → feed up to 7 shots → get one seamless cinematic video. Same face, same outfit, every single frame.

Then extend it shot by shot like an actual director.

Production houses spend millions for this kind of consistency. Grok just made it a prompt.
288 replies · 190 reposts · 1.6K likes · 4.9M views
6ixpool @6ixpool ·
@Mathgeek007 @TogoresMTG What does it even kill mana neutral/positive in the format? Ajani? Super shredder if they don't leave 2 things to sacrifice up?
1 reply · 0 reposts · 0 likes · 108 views
Mathgeek, of Bardic Influence
@TogoresMTG @6ixpool Punishing Fire is repeated card-advantage removal; it's absolutely nuts in any RGx control deck. That said, is it necessarily going to break the meta? Probably not.
1 reply · 0 reposts · 1 like · 120 views
6ixpool @6ixpool ·
@Patrickboian @ShitpostGate It's really good for the niche-interest/hobby ones. Also the science ones. The "just dudes being dudes" ones are basically being a "friendship cuck," like one of the other comments says.
0 replies · 0 reposts · 0 likes · 16 views
H🎃LL🎃WEEN🔪2026 @Patrickboian ·
@ShitpostGate Sometimes pods are cool, but man, it wears off fast. I've quit every one I liked, for the most part, because it just gets old even if it's good.
1 reply · 0 reposts · 0 likes · 427 views
6ixpool @6ixpool ·
@TogoresMTG It wasn't banned for logistical reasons, iirc. And in the current day it just doesn't line up with the meta, so no one would play it anyway.
0 replies · 0 reposts · 0 likes · 110 views
TogoresMTG 🦈 @TogoresMTG ·
@6ixpool Punishing is a bad card in my opinion, but the logistical nightmare on paper is a thing.
2 replies · 0 reposts · 0 likes · 1.3K views
6ixpool @6ixpool ·
@AhQFish @viemccoy Now imagine that laminar flow folding back into itself via self-recursion to nucleate a vortical singularity at the self, with the wake/turbulence seen as qualia. The singularity might even be a wormhole, a physical backdoor to hard dualistic interpretations x.com/i/status/20346…
6ixpool@6ixpool

@viemccoy We can have our cake and eat it too. A sketch of a model of consciousness grounded in the physical / empirical may exist... x.com/i/status/20337…

0 replies · 0 reposts · 0 likes · 36 views
AHQ⁵ @AhQFish ·

Consciousness is the condition in which a system carries its own recent past forward and uses it while it is interacting with the world.

Start with the simplest case. A system changes over time. Some of what it is doing continues into the next moment. Some of it is shaped by what it interacts with. Call these λ_self and λ_env. When continuation is strong enough, parts of the system's recent activity remain active in the present. These active traces combine with incoming input. There is a threshold:

λ_self / λ_env ≥ R★

At this point, the system carries its own activity forward in a stable way. It combines what it was just doing with what is happening now. This is the basic condition for consciousness.

A clear analogy is a conversation. Each sentence remains available as the next one arrives. Meaning appears because earlier sentences stay active while new ones come in. Consciousness is this same condition applied to all activity.

Another analogy is flowing water. When flow becomes smooth and aligned, paths form that carry structure forward. Material entering the flow continues along these paths and interacts with what is already there. Consciousness corresponds to this organized flow of activity that preserves and combines states over time. This is laminar flow. In a laminar regime, influence moves along stable paths that keep their structure. Earlier states remain present within the current state. Because they remain present, they can participate in what happens next.

Reflection follows directly. A system relates what it is doing now to what it was just doing because both are active together. Current activity includes recent activity. This happens at many small scales at once. In a brain, signals move through many pathways that each carry recent activity forward. Some pathways reinforce patterns, others reshape them. Across these overlapping processes, short stretches of activity remain active long enough to combine with new input. The result feels seamless because preservation and integration are continuous and distributed.

The same pattern appears in evolution at a different scale. Structures persist across generations when they are stable. Later, those same structures are used in new ways. Feathers appear before flight. Bones reorganize into new functions. Structure comes first, and its role becomes clear when it participates in a larger pattern.

A similar effect appears in thought. There is often a sense of knowing before knowing. Pieces are present but not yet aligned into a clear pattern. This partial alignment carries a distinct feeling. It draws attention and holds it there. It generates questions and directs the system toward further organization. As more structure is carried forward and aligned, the pattern becomes visible. The feeling of sudden understanding comes from this alignment. That initial sense is what gives ideas their pull. It marks regions where structure is close to becoming usable. Attention follows it because it signals that further organization is possible. In this way, the same mechanism that produces recognition also produces the sense of meaning. Structure persists, accumulates, and becomes available when it is held together within ongoing activity.

Consciousness is this regime. It is the condition in which a system preserves its recent states strongly enough that they remain active within the present and combine with incoming information. The system carries its own continuity forward and uses it in real time. That is what consciousness is.

1 reply · 0 reposts · 0 likes · 61 views
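AHQ's λ_self / λ_env ≥ R★ threshold can be given a minimal numerical reading: a state that each step keeps a fraction of its own past (λ_self) and is otherwise overwritten by fresh input (λ_env). The decay model and the `trace_strength` helper are illustrative assumptions of this sketch, not something derived from the author's framework.

```python
def trace_strength(lam_self, lam_env, steps=10):
    """How much of an event injected at t=0 is still present after
    `steps` updates, when each update keeps the fraction
    lam_self / (lam_self + lam_env) of the previous state."""
    keep = lam_self / (lam_self + lam_env)  # fraction carried forward
    state = 1.0                             # the t=0 event
    for _ in range(steps):
        state *= keep                       # the rest is overwritten by input
    return state

print(trace_strength(lam_self=9.0, lam_env=1.0))  # strong continuation: past persists
print(trace_strength(lam_self=1.0, lam_env=9.0))  # weak continuation: past washed out
```

When self-continuation dominates, a third of the t=0 trace survives ten steps and can still combine with new input; when the environment dominates, the trace is effectively gone after a couple of steps — the two regimes the post's threshold separates.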
𝚟𝚒𝚎 ⟢ @viemccoy ·
I think cognition research is far more tractable than consciousness research. I actually think consciousness, as we've ill-defined it, lives somewhere near the type-error-tragedy I discuss in Semiotic Triage - attempting to study it with scientific tools might actually be barking up the wrong tree. This is obviously not true for cognition, however, and the mechanisms of thinking, solving problems, and creating knowledge. This is distinct from consciousness which seems to almost plainly be the scientific attempt to integrate a phenomenological soul into an empirical world-model. I am open to this being possible, but I am largely skeptical. (People have attempted to redefine consciousness to mean basically-cognition, but it hasn't stuck)
24 replies · 10 reposts · 143 likes · 4.7K views