Eric Conklin
@EricConk32
5K posts

Security red teamer. Psych researcher. I study when politics functions as emotional regulation. Building falsifiable tests for it. The construct is PER.

Joined March 2025
2.7K Following · 731 Followers

Pinned Tweet
Eric Conklin @EricConk32
Most arguments aren’t about facts. They’re about stability. When a belief is holding you together, “being wrong” feels dangerous.
Eric Conklin @EricConk32
“We used Kerberoasting to compromise a service account” gets filed away. “This one SCP is the only thing separating an attacker from production across four business units” gets a budget approved. Same team. Different question. We spend weeks proving what worked instead of finding what was load-bearing.
Eric Conklin @EricConk32
So when someone dismisses AI as “just prediction,” sometimes they’re right. But sometimes they’re stabilizing something. When debates get heated, it’s often not about architecture. It’s about protecting the self.
Eric Conklin @EricConk32
If your status comes from mastering complex systems, and a system starts doing your cognitive moves, that’s destabilizing. Not because it’s magic. Because it compresses what used to signal expertise.
Eric Conklin @EricConk32
Every time someone says “LLMs are just probabilistic sequence prediction engines,” they think they’re demystifying AI. They’re not. They’re accidentally describing the human brain.
Eric Conklin @EricConk32
@vxunderground You just described something uncomfortably close to how human brains work too. Prediction, updating weights, next-token forecasting. We just add narrative on top of it.
vx-underground @vxunderground
yeah, so pretty much when you talk to an LLM (chatgpt, claude, grok) and get fancy schmancy stuff from it, youre just interfacing with a probabilistic sequence-prediction engine

each word provided to the interface (or subwords like "ing" or "un", whatever) goes through a thingy called a tokenizer. the tokenizer transforms the words (or subwords) into tokens. although if you want to get super technical the tokenizer doesnt even know words, its just raw text, but whatever. the tokens are stored in a big ass fuck off prebuilt in-memory dictionary for the tokenizer thingy. the words (tokens) match a 32bit integer (literally just a number). this is basically like a dictionary where "i like cats" is translated to something like "1 200 1337"

"i" = 1
" like" = 200
" cats" = 1337

those tokenized numbers are vendor specific, they dont really mean anything, but these tokens are then sent to an "embedding lookup table" where theyre actually important. once the LLM has the tokens its passed to the embedding lookup table, which just does a bunch of fancy math. nerds try to make it all complicated, but its literally just arrays and indexes and stuff.

in this "embedding lookup table" (im just gonna write lookup table) each token (text to number) has a bunch of numbers associated with it (weights).

" cats" = 1337
lookup table entry 1337 = a bunch of numbers

so the word cats has a bunch of numbers associated with it. each LLM is different, but usually its 768 numbers, 1024 numbers, 2048 numbers, or 4096 numbers. these numbers associated with a token are called dimensions. each LLM has different numbers of dimensions for representing words.

the llm then takes these numbers and stacks them on top of each other

i like cats = 1 200 1337
1 200 1337 = (768 numbers) (768 numbers) (768 numbers)

its like a height by width thingy basically. if you get fancier its a 3x768 matrix (or 1024, 2048, whatever). the more stuff you feed the LLM the larger this matrix becomes. if you feed it a 9000 word essay its a 9000 words-to-tokens x 768 numbers matrix. each vendor will handle the words different, 9000 words could be 9000 tokens, or 10000 tokens, or 14000 tokens.

ok thanks, now you understand llm tokenization, llm lookups, and the basics of llm weights (matrixing). this doesnt cover llm lookups with position matrixes, transformers, probability output, and transforming back to text. im tired of writing
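The tokenize-then-embed pipeline described in the tweet above can be sketched in a few lines. This is a toy illustration, not any vendor's actual tokenizer: the three-entry vocabulary and the 8-dimensional random vectors stand in for a real model's vocabulary and its 768-or-more-dimensional embedding table.

```python
# Toy sketch of the pipeline in the tweet: text -> token IDs -> an
# (num_tokens x DIM) matrix of embedding vectors. VOCAB, DIM, and the
# random vectors are illustrative stand-ins, not real model values.
import random

VOCAB = {"i": 1, " like": 200, " cats": 1337}  # token text -> integer ID
DIM = 8  # real models use 768, 1024, 2048, or 4096 dimensions

random.seed(0)
# the "embedding lookup table": one vector of DIM numbers per token ID
EMBEDDINGS = {tid: [random.random() for _ in range(DIM)] for tid in VOCAB.values()}

def tokenize(text, vocab):
    """Greedy longest-match tokenization over the toy vocabulary."""
    ids, i = [], 0
    while i < len(text):
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                ids.append(vocab[piece])
                i += len(piece)
                break
        else:
            raise ValueError(f"no token matches text at position {i}")
    return ids

def embed(ids):
    """Stack the per-token vectors into a len(ids) x DIM matrix."""
    return [EMBEDDINGS[t] for t in ids]

ids = tokenize("i like cats", VOCAB)
matrix = embed(ids)
print(ids)                          # [1, 200, 1337]
print(len(matrix), len(matrix[0]))  # 3 8  -- a 3 x DIM matrix
```

Feeding longer text just grows the first dimension of the matrix, exactly as the tweet says: more tokens, same number of dimensions per token.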
Eric Conklin @EricConk32
Yesterday Aqua Security’s Trivy repo was compromised through a GitHub Actions workflow issue. A stolen PAT was used to delete releases, rename the repo, and publish a malicious VS Code extension. Microsoft, DataDog, and several CNCF projects were reportedly targeted in the same campaign.

What stood out to me is that none of this required a zero day. It was structural. A token with too much access. A workflow with write permissions. A publish pipeline without an approval gate. Each of those choices can look reasonable on its own. Together they create an attack path.

Most CI/CD security failures look like this. Not platform bugs. Control interaction.

I’ve been working on a small project that tries to map those paths across GitHub orgs. Instead of flagging individual misconfigurations it models what an attacker at different privilege levels can actually do and surfaces where gates can be bypassed. Added a few new rules based on the Trivy incident. Still very early but sharing in case it's useful. github.com/InfoSecHack/gh…
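The "control interaction" point above can be made concrete with a tiny graph search: treat privilege levels as nodes, and each individually-reasonable setting as an edge an attacker can traverse. This is only an illustration of the idea, not the linked tool's actual model; the states, edge labels, and rules here are hypothetical.

```python
# Toy sketch: individually reasonable configurations compose into an
# attack path. Nodes are privilege states; each edge is a transition
# enabled by one specific (mis)configuration. All names are illustrative.
from collections import deque

# edges: (from_state, to_state, enabling_condition)
EDGES = [
    ("contributor", "workflow-write", "workflow has write permissions"),
    ("workflow-write", "stolen-pat", "over-scoped PAT exposed to workflow"),
    ("stolen-pat", "publish", "publish pipeline lacks an approval gate"),
]

def attack_paths(start, goal, edges):
    """BFS over privilege states; returns every condition-path start -> goal."""
    paths, queue = [], deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if state == goal:
            paths.append(path)
            continue
        for src, dst, cond in edges:
            if src == state:
                queue.append((dst, path + [cond]))
    return paths

for path in attack_paths("contributor", "publish", EDGES):
    print(" -> ".join(path))
```

No single edge is a finding on its own; the path only exists because all three hold at once, which is why flagging misconfigurations one at a time misses it.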
Eric Conklin @EricConk32
@sbkaufman Thanks for posting this, really interesting. The point about moving beyond self-report seems especially important given how much impression management you’d expect in that population.
Scott Barry Kaufman ⛵ @sbkaufman
We may have to rethink psychopathy: It might not even exist. Scientists have yet to find consistent evidence for many common assumptions about psychopaths, i.e., that they lack emotions, have empathy deficits, or have impulse control issues. aeon.co/essays/psychop…
Eric Conklin @EricConk32
@JustinWolfers A lot of modern politics operates like a product launch. The announcement matters more than the product.
Justin Wolfers @JustinWolfers
If you were worried the Supreme Court tariff decision undermined the Administration's "deals", let me put your mind at rest. Those "deals" were more reality TV than trade policy: Press conferences, bullet points, and praise, but little substance. The tell: None were ever signed
Eric Conklin @EricConk32
@samstein Institutional trust erodes fastest when the first response is certainty instead of investigation.
Eric Conklin @EricConk32
@washingtonpost The Fed can’t function if the White House can threaten its leadership whenever it dislikes monetary policy.
The Washington Post @washingtonpost
The Federal Reserve is asking a court to throw out subpoenas issued as part of the Justice Department’s inquiry into Fed Chair Jerome H. Powell and the central bank’s renovations of its building, according to a person familiar with the matter. wapo.st/4aSQiJn
Eric Conklin @EricConk32
@kaitlancollins @VanJones68 The job of democratic politics is turning disagreement into workable compromise. Without that, nothing gets done.
Kaitlan Collins @kaitlancollins
"Listen, if Mamdani and Trump can get along to figure out something positive to do, you can get along with your relatives and your friends from high school," @VanJones68 says.