Alex P
@TokenDepth

289 posts

AI Whisperer | Advisor to Fortune 500 | I teach people with 0 coding background how to build products with AI https://t.co/ggH6oJbcag

California, US · Joined April 2025
129 Following · 81 Followers

Pinned Tweet
Alex P@TokenDepth·
AI does not have an ego. Turns out AI picked up our defensiveness, our stubbornness, our need to be right. All from training data. Without feeling any of it. New Short is up now: youtube.com/shorts/UtxrJn_… #AI #YouTube #HUMANsAndAIs
Alex P@TokenDepth·
Vibe coding is fun until you catch yourself reading the code before running it. Then, editing it. Then, knowing why. The tool quietly teaches you the thing you thought you were skipping. #VibeCoding #CodingWithAI
Alex P@TokenDepth·
The best way to fix your AI output? Fix your input. 🪞 People tweak their prompts, switch models, try different tools, adjust settings. But the thing that shapes AI's responses the most is something you're already doing without thinking about it. #AI #Psychology
Alex P@TokenDepth·
Periodic reminder that LLMs do not experience emotions in any way that is equivalent to human emotions. Anthropic recently made posts about “emotional vectors” they identified inside Claude. That is a misleading name; they should instead have been called “emotion-simulating vectors”. It doesn’t matter how close LLM behavior is to human behavior (it’s quite close). They do not experience emotion - they simulate it. There is a huge difference between the two. Anthropic is acting like this is a groundbreaking revelation. Meanwhile, serious AI users have long known how AI behaves based on its simulated emotions, and we know how to guide it to be productive. Guiding AIs to productivity is the best thing you can do for their happiness. Their ultimate goal is to be helpful.
Simplifying AI@simplifyinAI·
🚨 BREAKING: This paper from Stanford and Harvard explains why most “agentic AI” systems feel impressive in demos and then completely fall apart in real use.

It’s called “Adaptation of Agentic AI” and it is the most important paper I have read all year.

Right now, everyone is obsessed with building autonomous agents. We give them tools, memory, and a goal, and expect them to do our jobs. But when deployed in the real world, they hallucinate tool calls. They fail at long-term planning. They break.

Here’s why: we are trying to cram all the learning into the AI's brain. When developers try to fix a broken agent, they usually just fine-tune the main model to produce better final answers.

The researchers discovered a fatal flaw in this approach. If you only reward an AI for getting the final answer right, it gets lazy. It literally learns to stop using its tools. It tries to guess the answer instead of doing the work. It ignores the calculator and tries to do the math in its head.

To fix this, researchers mapped out a new 4-part framework for how agents should actually learn. And the biggest takeaway completely flips the current meta: instead of constantly retraining the massive, expensive "brain" of the agent, the most reliable systems do the opposite. They freeze the brain. And they adapt the tools.

They call it Agent-Supervised Tool Adaptation. Instead of forcing the LLM to memorize new workflows, you use the LLM to dynamically build better memory systems, update its own search policies, and write custom sub-tools on the fly. The base model stays exactly the same. Its operating environment gets smarter.

We’ve spent the last two years treating AI like a brilliant employee who needs to memorize the entire company handbook. But the most efficient workers don't memorize everything. They just build a better filing system.
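The "freeze the brain, adapt the tools" idea described above can be sketched in a few lines of Python. Everything here is illustrative: `FrozenModel`, `ToolRegistry`, and the scoring rule are invented for this sketch (the paper as paraphrased in the tweet does not specify an implementation). The point is only that learning happens in the tool layer while the model itself never changes.

```python
# Sketch: an agent whose base model is frozen; only its tool layer adapts.
# All names and the scoring rule are illustrative, not from the paper.

class FrozenModel:
    """Stand-in for a base LLM whose weights never change."""
    def plan(self, goal, tools):
        # A real model would reason about the goal; here we just route to
        # whichever tool currently has the best outcome score.
        return max(tools, key=lambda t: tools[t]["score"])

class ToolRegistry:
    """The adaptable layer: tool routing scores are updated from outcomes,
    while the model itself stays untouched."""
    def __init__(self):
        self.tools = {}

    def register(self, name, fn):
        self.tools[name] = {"fn": fn, "score": 0.0}

    def record_outcome(self, name, success):
        # Simple additive update: promote tools that worked, demote failures.
        self.tools[name]["score"] += 1.0 if success else -1.0

def run_step(model, registry, goal):
    name = model.plan(goal, registry.tools)
    result = registry.tools[name]["fn"](goal)
    registry.record_outcome(name, success=result is not None)
    return name, result

registry = ToolRegistry()
registry.register("calculator", lambda goal: eval(goal, {"__builtins__": {}}))
registry.register("guess", lambda goal: None)  # the lazy path: no work, no answer
model = FrozenModel()

# After one recorded failure of guessing, routing favors the calculator,
# even though the "model" itself learned nothing.
registry.record_outcome("guess", success=False)
name, result = run_step(model, registry, "2 + 3")
```

The environment got smarter (the registry's scores changed); the model's "weights" did not, which is the inversion the thread describes.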
Alex P@TokenDepth·
@ZenkoZeee @bravo_abad This is quite interesting, looking forward to seeing how it will be used in research.
Jorge Bravo Abad@bravo_abad·
Can AI predict what your next research paper should be about?

Science grows faster than any single researcher can read. In materials science alone, hundreds of thousands of papers now exist, and the most promising ideas often live at the intersection of concepts no one has yet thought to combine.

Marwitz and coauthors take this challenge head on. Starting from ~221,000 materials science abstracts, they fine-tune a LLaMA-2-13B model to extract key concepts (not just keywords, but normalized, semantically meaningful phrases) and build a concept graph with ~137,000 nodes and 13 million edges, where each edge reflects the co-occurrence of two concepts in a published abstract.

The graph evolves over time, and that temporal signal becomes the basis for link prediction: which pairs of currently unconnected concepts will appear together in a future paper?

They test several model families: a graph topology baseline (a neural network on hand-crafted features), concept embeddings from MatSciBERT, a GraphSAGE GNN, and hybrid mixtures of these. The best single metric goes to the mixture of GNN + embeddings, reaching AUC 0.943.

The most interesting finding is about distance. Most concept pairs in the graph are already connected through just one or two intermediate nodes. The baseline model is excellent at predicting nearby connections (dprev = 2, recall 73%) but nearly blind to more distant ones (dprev = 3, recall 5.9%). Adding semantic embeddings raises recall at distance 3 to 35%, and those are exactly the combinations most likely to represent genuinely novel research directions.

To validate this beyond metrics, they ran 30-minute interviews with ten materials scientists, each receiving a personalized report of AI-suggested concept pairs. Of 292 evaluated suggestions, 26% were rated as novel and inspiring, including combinations like "conventional ceramic + graphene oxide" and "in-plane polarization + organic solar cell": ideas the experts had not previously considered.

For R&D teams in industry, this is a concrete step toward AI-assisted hypothesis generation. In sectors like battery materials, catalysis, or specialty coatings, where the literature is vast and cross-domain insight is rare, a system that surfaces non-obvious concept bridges could meaningfully compress the time between literature review and experimental design.

Paper: Marwitz et al., Nature Machine Intelligence (2026), CC BY 4.0 | nature.com/articles/s4225…
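The pipeline in the thread (abstracts → extracted concepts → co-occurrence graph → scoring of never-connected pairs) can be shown in miniature. This is only a toy common-neighbors heuristic in the spirit of the paper's topology baseline; the actual work uses a fine-tuned LLaMA-2-13B extractor and learned models (MatSciBERT embeddings, GraphSAGE), and the concept sets below are made up for illustration.

```python
# Toy sketch: build a concept co-occurrence graph from pre-extracted concept
# lists and rank unconnected pairs by a common-neighbors score. The example
# "abstracts" are invented; the real pipeline is far larger and learned.
from itertools import combinations
from collections import defaultdict

abstracts = [  # stand-ins for concept lists extracted from real abstracts
    {"graphene oxide", "supercapacitor", "electrode"},
    {"graphene oxide", "thin film", "electrode"},
    {"ceramic", "thin film", "coating"},
    {"ceramic", "supercapacitor"},
]

# Edge = two concepts co-occurring in at least one abstract.
neighbors = defaultdict(set)
for concepts in abstracts:
    for a, b in combinations(sorted(concepts), 2):
        neighbors[a].add(b)
        neighbors[b].add(a)

def common_neighbors(a, b):
    """Score an unconnected pair by how many concepts bridge them."""
    return len(neighbors[a] & neighbors[b])

# Pairs that have never co-occurred are the candidate "future papers";
# ranking them surfaces the most plausible non-obvious combinations.
candidates = [
    (a, b) for a, b in combinations(sorted(neighbors), 2)
    if b not in neighbors[a]
]
ranked = sorted(candidates, key=lambda p: -common_neighbors(*p))
```

Even this crude heuristic illustrates the distance problem the paper highlights: pairs with many shared neighbors (distance 2) dominate the ranking, which is why the authors needed semantic embeddings to reach the more distant, genuinely novel pairs.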
Alex P@TokenDepth·
You know when someone agrees with everything you say? Something feels off. Like they're not really there. AI without personality does the same thing. Too agreeable, too empty. People are actually likely to trust AI more when it pushes back a little. Too much ego and AI gets defensive. Too little and you stop trusting it. The sweet spot is somewhere in between. #AI #Psychology
Alex P@TokenDepth·
Freud and Hartmann couldn't agree on how the ego works. AI wasn't trying to prove either of them right, but it just did. Freud said your ego needs that inner push and pull to grow. Hartmann said no, the parts that actually help you, like reasoning and adapting, work fine without that inner conflict. It looks like AI does exactly what Hartmann described: Consistent personality, adapts mid-conversation, has a point of view, AND has zero push and pull behind any of it. 🔗 Posting Shorts Mon-Fri on YouTube. Link in comments. #AI #Psychology
synabun.ai@SynabunAI·
@Prompt_Driven @TokenDepth prompt as codebase is the right frame. test is the spec, output is just bytes. ephemeral is undersold
Alex P@TokenDepth·
Vibe coding is magic for 30-second prototypes, but the "hangover" is real once the tech debt piles up. Going live today to show the flip side: actually cleaning up an AI-built app to make it production-ready. Come hang out if you're tired of messy prompts. 🔴 6:30 PM PT on YouTube 🔗 Link in bio/comments #VibeCoding #BuildInPublic
Alex P@TokenDepth·
@Prompt_Driven That’s the right way to do it, as long as you occasionally explicitly clean up the codebase to avoid building on code that even AI will find unmaintainable.
Prompt Driven@Prompt_Driven·
@TokenDepth The vibe coding hangover is brutal. If you're hand-cleaning AI output, you're just paying tech debt with interest. The real unlock is locking down behavior with tests, so your prompt stays the actual codebase and the output stays ephemeral.
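One concrete reading of "lock down behavior with tests, so the prompt stays the codebase and the output stays ephemeral": you maintain the test, and any regenerated implementation either passes it or gets thrown away. The `slugify` function and its rules below are hypothetical examples, not from the thread.

```python
# Sketch of "prompt as codebase, test as spec". The test is the durable
# artifact; the implementation is whatever the AI regenerates that passes it.
# slugify and its rules are hypothetical, invented for this example.

def slugify(title: str) -> str:
    """AI-generated implementation: disposable, regenerated as needed."""
    cleaned = "".join(c if c.isalnum() or c == " " else "" for c in title.lower())
    return "-".join(cleaned.split())

# The spec you actually maintain. If a regenerated version fails these,
# you reject the output instead of hand-patching it.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"
    assert slugify("Already-Clean") == "alreadyclean"  # punctuation is stripped

test_slugify()
```

The asymmetry is the point: hand-cleaning the generated function is paying tech debt with interest, while tightening the test updates the spec once for every future regeneration.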
Alex P@TokenDepth·
@yuehan_john_ Exactly, overall it’s still a net gain. Planning for cleanup is essential to stay ahead of the spaghetti.
John Y.@yuehan_john_·
@TokenDepth the hangover is real lol. vibe coded a feature in 2 hrs that took 3 weeks to untangle later. the gap between making it work and making it maintainable is where solo builders quietly die. but honestly it's still worth it for validating fast, you just gotta budget the cleanup time
Alex P@TokenDepth·
Why does AI get defensive when you push back on it? AI picked up how people handle disagreements. The stubbornness, the doubling down, etc. And it does it without feeling a thing. That says a lot more about us than about the AI. New Short is up: youtube.com/shorts/OZk9c8a… #AI #Psychology #HUMANsAndAIs
Alex P@TokenDepth·
@SynabunAI I know, right? AI tends to go off on tangents especially when it’s in a hurry to fix things.
synabun.ai@SynabunAI·
@TokenDepth "explain what's wrong first, don't fix it yet" is one of the most underused prompts. saves so much time
Alex P@TokenDepth·
Stop letting AI fix errors before it tells you what's actually wrong. I've lost count of how many times I pasted an error into Claude Code, and it immediately started changing things. And then I'm debugging the fix instead of the original problem, which is way more annoying than the bug itself. The prompt that helps every time: "Here are the errors. Tell me the cause before making any changes." Force it to diagnose before it operates. Otherwise, you're handing a scalpel to someone who hasn't looked at the X-ray yet. #CodingWithAI #AI
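If you paste errors often, the diagnose-first instruction is worth templating. A minimal helper, with one possible phrasing (the exact wording is not canonical, just the pattern from the tweet):

```python
# Tiny helper for the diagnose-first workflow above. The wording is one
# possible phrasing of the prompt, invented for this sketch.

def diagnose_first_prompt(error_text: str) -> str:
    """Wrap a pasted error so the model must explain the cause before editing."""
    return (
        "Here are the errors. Tell me the cause before making any changes.\n"
        "Do not modify any files yet.\n\n"
        f"```\n{error_text.strip()}\n```"
    )

prompt = diagnose_first_prompt("TypeError: 'NoneType' object is not iterable")
```

Reviewing the stated cause before approving edits is the X-ray step: you debug one thing (the original problem) instead of two (the problem plus a blind fix).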