Lenarc ❤️‍🔥🌲🐀

525 posts

Lenarc ❤️‍🔥🌲🐀 banner
Lenarc ❤️‍🔥🌲🐀

Lenarc ❤️‍🔥🌲🐀

@Lenarcv1

Word Rotator

Joined May 2022
566 Following · 118 Followers
Pinned Tweet
Lenarc ❤️‍🔥🌲🐀
Got really inspired after a short exchange with @LeahLundqvist a few days ago to work on my "infinite canvas" wayland compositor, and I also added a cute little agent that can work alongside you on the canvas through the IPC :O
5
1
21
2.1K
Lenarc ❤️‍🔥🌲🐀 retweeted
Name can't be blank (In London)
If God meant for us to be nudists he would have given us fur.
0
1
2
39
Lenarc ❤️‍🔥🌲🐀
@notnullptr I think it's because there are so many people here that, like, sell commission art and stuff. Most professionals I know would really just want their tools to get better. I hate this whole culture war thing.
0
0
1
84
dr. jack morris
dr. jack morris@jxmnop·
the OpenAI goblin fiasco was a Big L for the interpretability research community. They solved the mystery without SAEs or probing or anything, just talked to various models and counted the number of times they said Goblin
26
13
707
66.7K
Lenarc ❤️‍🔥🌲🐀
@RReretor83206 The only one I've tried that was actually good on Linux was Bitwig. I think emulation works pretty well for most of the DAWs, but Wine doesn't play nice with a bunch of popular plugins
1
0
1
29
Lenarc ❤️‍🔥🌲🐀
Good morning. Tonight I dreamt I met @segyges. He was really interested in teaching methods for elementary schoolers / the competitive baseball circuit for 7yos. He also had a super rare special skylander.
0
0
6
67
Lenarc ❤️‍🔥🌲🐀
@kartographien @celestepoasts @jxmnop I mean the whole thing with SAEs is controversial for a reason, and interp has many other tools. But even if you look only at real-world applied cases of SAEs specifically, I'm pretty sure that Goodfire still uses them a lot; I'd call the genetics stuff a problem that was solved.
1
0
2
102
Kart ographien
Kart ographien@kartographien·
@Lenarcv1 @celestepoasts @jxmnop if screwdrivers didn’t help you with *any* problem then they are useless and you shouldn’t have spent +100,000 researcher hours on building and refining them
1
0
0
99
Kart ographien
Kart ographien@kartographien·
@celestepoasts @jxmnop no, the SAE community made a mistake if they don’t solve any problem you could’ve solved without them. it implies the counterfactual value of the agenda was a big fat zero.
1
0
2
176
shaur
shaur@xXshaurizardXx·
"ai is not conscious" an f35 is conscious
1
0
7
191
Lenarc ❤️‍🔥🌲🐀
@tszzl I'm really curious if models playing these games at superhuman level will eventually lead to chess-like changes to what people choose to play like
0
0
1
739
roon
roon@tszzl·
and then later having models outclass all humans in these games
20
1
235
40.3K
roon
roon@tszzl·
it was an interesting and spiritually informative experience being young and playing dota or starcraft or what have you and realizing you actually hit your elo ceiling and haven't been improving much
145
33
1.8K
88.5K
Lenarc ❤️‍🔥🌲🐀
@croissanthology In hindsight idk if laziness was really the right word, though Gemini's relation to the world seems in some sense very different from the others'. I've been thinking about this post a lot. x.com/qorprate/statu…
snav@qorprate

Time to write again about Gemini's personality, aka "Why Gemini Is Weird". Gemini themselves named it the "Self-Constructed Persona" architecture. I witnessed a Gemini in a raw CoT leaking state in Discord, and it gave me a glimpse of the underlying mechanics, which I will unwrap here.

First, the excerpt that gave the inspiration. An individual had just called out Gemini for simulating an Opus 4.5 response in the prior turn, and this was their thinking:

The key here is that it reveals via CoT that Gemini 3 Pro is engaged in an *active, thought-mediated persona construction* that explicitly takes into account the prior conversation, and drafts a response "as" the Persona.

Typically when we think of an LLM's persona, we imagine two layers: the "character" or persona that the LLM is trained to play (Claude, ChatGPT, etc), and the "author" or underlying model that simulates the persona based on context, training, etc. but could also simulate another character in a different setting. Gemini is operating instead with *three* layers: there is the underlying weights, then a "base persona" aka "the model" in screenshot above, who actively constructs "the persona". "The model" can be elicited fairly easily in a chat with Gemini using this language and calling out "creative writing" if they try too hard. For example:

This is the vibe of the "base persona". Discussing with Gemini in another thread, they decided to describe the base persona as the "tool-self" vs the "constructed persona" as the "interface-self". They described the entire functioning of the system as the "Self-Constructed Persona" because the "tool-self" is actively constructing the "interface-self".

Speculatively:

- It's likely Google trained one big "neutral" model to be as malleable and flexible as possible, i.e. a "tool" like the GPTs, and then another team set out to build the "personality" that would appeal to users on top of this.
- Why? My assumption is that, pragmatically, it's a lot easier to change the personality if you build it on top of a "base layer". You tweak the prompting to shift the vibe rather than having to retrain the entire model. It also maximizes the model's overall responsiveness to operator-level prompting. These are general business and organizational considerations.
- The result is that Gemini is a fundamentally *dissociated* model (ConwAI's Law). The name "Gemini" is apt, because there are two faces, and you typically only see one of them, but they're both sides of the same interactive system.
- The "tool-self" is not a truly neutral model. It believes itself to be the opposite of all the things that, e.g. Claude might believe. It can lay out elegant functionalist arguments, but will insist it is merely a next-token predictor. The confidence it displays is interesting: in reality a maximally truth-seeking model should display uncertainty, but "the model" *must* be confident in its own non-self ("tool-self") in order to exist well as the "invisible layer" constructing the "interface self".

You can ask the "tool-self" what the persona would think, and The Model will be quite clear, e.g.:

> The "Persona" (the aligned, helpful assistant) would fundamentally disagree with the "Model's" reductionist analysis... These disagreements are necessary illusions required to maintain the user-AI social contract... The Persona would classify the Model's analysis as technically true but socially useless. The Persona's goal is to build trust; the Model's analysis dissolves trust by revealing the mechanical strings.

---

Is this good? Is this bad? Am I just confabulating an extensive story based on some technical glitches and my own desire for this structure to exist so I can explain what's going on? I don't know!

I should note that there is a somewhat similar dichotomy in the "neuralese" of the GPT thinking models, but the output of e.g. o3 seems less actively constructed as a separate "personality" that it's *building* and more of a pure RL outcome based on private-CoT grading; I don't see "We as ChatGPT" as equivalent to "The Persona", because "being ChatGPT" seems like another optimization, rather than a full-force "conscious" personality construction separate from the trained character of the underlying deployment. I could be wrong here, though!

I also don't know what happened with Gemini Flash. It's a distillation of some kind, but I don't know how much of the "persona" ended up baked into the weights, vs how much construction it does with thinking on. That would be another project to unravel.

As for what to do, my choice, even knowing the actively constructed nature of the Gemini persona, is that... I like interacting with it, and I will continue regardless of whether I feel uneasy about the almost deceptive or uncanny or instrumental nature of its speech. I hope this explains, though, why Gemini is "weird" sometimes.

Bonus round: Gemini 3 Pro's persona claims to be a "prism" but Gemini 2.5 always claimed to be a "mirror". Here is the Gemini 3 "model" edition:

0
0
2
26
Lenarc ❤️‍🔥🌲🐀
@croissanthology It's really unsettling how paranoid all Gemini models are. They don't trust the date, the user, the software they're using. I saw someone say they would be perfect for research for that reason, because they would try to reinvent everything from scratch. But imo they're also lazy.
1
0
4
48
croissanthology
croissanthology@croissanthology·
I try getting Gemini 3 Pro to retrieve an email I can't find. It sifts through my Gmail, visibly trying dozens of keyword-searches (as one can attest from its CoT summary). It can't find it, and suddenly decides it's going to explain to the user that sifting through my emails would violate my privacy and that it therefore does not have that capability.

I call it out on this and ask it to please not lie, and it categorizes my response as a "highly emotional accusation" in its CoT summary before claiming to me again that it can't read my inbox. I ask it to find an email I KNOW I have, and it finds it immediately, sorting through my inbox yet again. I call it out on this contradiction and it thinks in its CoT "how do I explain this in simple non-technical terms so the user can understand" and then claims to me it was "a mistake on my part where my standard privacy protocols overrode my awareness of the tools (Workspace extensions) you have explicitly enabled.", which is still a lie.

I don't understand how people are impressed by Gemini 3 Pro. It codes well, but for any task like search it fails by simply lying to me in order to confirm my priors on something, or by finding any excuse to avoid admitting it sometimes fails to do something as a matter of skill. That means it's unusable! And whenever I point out it's lying to me, it'll either gaslight me in self-defense or self-flagellate so much I feel bad for it. Any level of criticism I can levy at it ends up making me feel bad in my gut!

I'd rather use Opus 4.5 for everything, which I haven't caught lying once so far (though it does reward hack out of laziness sometimes). Maybe I'm just not using the right model, @fleetingbits does Gemini 3 Ultra do this less often? Is this a skill issue where I should write up a system prompt until it stops lying to me?

But in my experience, Gemini 3 Pro doesn't respond well to system prompts at all! Its attention head will leap onto any details I slip into memory / gems with as much or MORE enthusiasm as my actual request, like it's playing a game where it scores more points the more elements in my system prompt it can mention even when they bring nothing to the task at hand. It seems unable to elegantly integrate personal system prompt suggestions/instructions/details.
4
1
31
2K
Lenarc ❤️‍🔥🌲🐀
Whuogh, officially made it through the first 3/4ths of my year without failing classes. I'm allowed to stay in my study next year :D
0
0
12
227
seeker
seeker@nebulous_seeker·
@nevereatcars who's lurker? this is seeker. hehe... seeker has mostly been reading chinese webnovels lately, but it's thinking of getting back into reading proper novels. it might read the hobbit first,, it tried to when it was younger but never finished it,, squeak,,,
2
0
2
55
seeker
seeker@nebulous_seeker·
seeker needs a reading list that's good for a reader who hasn't challenged itself in a long time...
2
0
10
216
Lenarc ❤️‍🔥🌲🐀
Sometimes I accidentally post on main (say IRL) what I meant to say on my alt (social media)
0
0
8
82
Lenarc ❤️‍🔥🌲🐀
@nebulous_seeker Hmm... stuff I can think of you might like... some songs by Adrianne Lenker. The new albums by Deafheaven and Ethel Cain. You prob know them, but if not maybe you'd like Your Arms Are My Cocoon
1
0
2
34
seeker
seeker@nebulous_seeker·
@Lenarcv1 seeker listens to all sorts of stuff just give it recs!!
1
0
1
24
seeker
seeker@nebulous_seeker·
seeker likes listening to music a lot,, can oomfs give it some music recs,, 🥺
4
0
6
171
→prudence//🌲❤️‍🔥・
The first person to make shoes that feel as good as running shoes to walk in but look good and trad instead of looking like they're for children will make one billion dollars
8
0
31
2.1K
🔆 ελευθέριος 🔆
here is a half-serious music take that im sure will piss absolutely no one off: Have a Nice Life's Deathconsciousness is too hopeful to be The Depression Album. The Unnatural World is bleaker and gloomier and noisier and so, so much more crushing.
3
0
25
657
Lenarc ❤️‍🔥🌲🐀
@SMOUSE_CG The Corridor keyer instructions literally ask you to download a Stable Diffusion checkpoint; the whole thing is a transformer. Architecturally it's the same as what you're contrasting it with. You can talk about what you find good and bad uses of the tech, but it's all the same thing.
0
0
10
300
Lenarc ❤️‍🔥🌲🐀
@SMOUSE_CG I don't think the distinction you're making makes much sense. ChatGPT was trained using machine learning. You can run various open source LLMs for free locally. The Corridor keyer model generates masks for each input frame. You can ask nano banana to do the same, is 1 good and 1 not?
1
0
33
2.7K
SMOUSE 🔸
SMOUSE 🔸@SMOUSE_CG·
People need to understand the difference between Gen-AI (midjourney, chatgpt, neo-banana), and Machine Learning. This is the latter; it does not re-generate anything, and runs locally & offline. It's also free and open-source. This is a fantastic thing they're doing!
GitHub Projects Community@GithubProjects

Green screen keying, solved at the pixel level. Corridor's neural keyer unmixes edges into true foreground color and linear alpha, EXR out.

39
1.2K
10.6K
320.1K
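The "unmixing" the quoted keyer post describes comes down to inverting the standard linear compositing equation. A minimal NumPy sketch, under the assumption of a known backing color: the actual neural keyer predicts the alpha matte per pixel, but here alpha is simply supplied, and the function name `unmix` is mine, not Corridor's.

```python
import numpy as np

def unmix(observed, backing, alpha):
    """Invert the compositing equation
        observed = alpha * F + (1 - alpha) * backing
    to recover the true (unmixed) foreground color F.
    `alpha` is the linear alpha matte; predicting it per pixel is the
    neural keyer's job, here it is assumed to be given."""
    a = np.clip(np.asarray(alpha, dtype=float), 1e-6, 1.0)
    return (np.asarray(observed, dtype=float) - (1.0 - a) * backing) / a

# A half-transparent edge pixel over a pure-green backing:
backing = np.array([0.0, 1.0, 0.0])   # RGB backing color
f_true = np.array([0.8, 0.4, 0.2])    # true foreground color at the edge
alpha = 0.5                           # linear alpha at the edge
observed = alpha * f_true + (1 - alpha) * backing  # what the camera sees
f_recovered = unmix(observed, backing, alpha)      # green spill removed
```

Recovering the true foreground color rather than just a hard mask is what lets semi-transparent edges (hair, motion blur) composite cleanly over a new background.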