Juliann · learning out loud

491 posts


@JuliannPod

How do we learn, grow, and bet on ourselves in the AI era? AI · cognition · craft · culture. Writing what I find.

Paris · Tokyo in my head · Joined December 2021
357 Following · 128 Followers
Pinned Tweet
Juliann · learning out loud@JuliannPod·
I've spent 3 years teaching AI to companies. But what I really think about is how we learn, grow, and bet on ourselves when everything keeps changing. AI, cognition, Japanese craft, investing: I think they're all the same question. Thinking out loud here. Follow along.
0
0
4
206
Juliann · learning out loud
I've seen the session-four effect without the EEG. Training teams on AI every week: the ones who arrive with their own rough draft, even a bad one, engage completely differently from those who open the chat first. The first group debates the output. The second accepts it. Same prompt, same model, two different brains. The brain that thought first catches what's wrong. The other one has nothing to catch it with.
0
0
0
44
Juliann · learning out loud
@DeRonin_ The finding that actually matters: cognitive debt compounds. You borrow against your own thinking to deliver faster today, and the interest accumulates against your ability to think at all. The fix isn’t less AI. It’s sequencing. Brain first, AI second. Always.
0
0
0
18
Ronin@DeRonin_·
🚨WARNING: MIT proved AI is worse for your brain than narcotics (PROOF BELOW).

They wired 54 students to EEG machines and tracked their brains across 4 months of essay writing. 3 groups: one used ChatGPT, one used Google, one used nothing.

The ChatGPT group's neural connectivity weakened every single session. Alpha, beta, theta, delta bands: ALL DECLINED. By session 3, 78% couldn't quote a single line from the essay they wrote minutes earlier. Their own words, written minutes ago, gone from memory.

In session 4, researchers swapped the groups. The hand-writers who got AI for the first time? Their neural connectivity measured HIGHER than any AI-only session across the entire study. The AI-only group who lost their tools? Couldn't produce a single coherent essay without help.

The researchers called it "cognitive debt": you borrow against your own thinking to ship faster today, and the interest compounds against your ability to think at all.

The takeaway ISN'T "stop using AI." The takeaway is: think first, let your brain build the structure, then use AI to extend it. Use your brain first, then bring in AI to finish the manual work and cover where you know you're weak.

Most people won't change. They'll read this, close the tab, and ask ChatGPT to summarize it.
Rohit@rohit4verse

x.com/i/article/2050…

28
10
86
9.2K
Juliann · learning out loud
@justinskycak Luck works until the environment changes. Surface knowledge earns in stable conditions. The moment the context shifts, foundational builders adapt and surface players freeze. Luck has an expiry date. Foundations don’t.
1
0
0
66
Justin Skycak@justinskycak·
The cost of skipping foundational skill-building is becoming the kind of person who needs luck to look competent.
10
112
780
12.3K
Juliann · learning out loud
You know that feeling when you read a post and it just smells off? Nothing's wrong with the words, but something is. That feeling has a name.

@davideagleman calls it the effort phenomenon: we instinctively assign value to things that look like they cost the writer something. A handmade letter. A reply someone clearly sat with. A piece of writing you can tell took its author somewhere new. The brain has been doing this for 200,000 years, long before AI.

So when a post feels like slop, people aren't reading the words. They're sensing the absence of cost. Of struggle. Of someone actually being there.

Writing in public after AI means the bar isn't "is this true?" or "is this clever?" It's "did this cost me anything?" If it didn't, no algorithm will save it.

Here's how I understand Eagleman. Tell me where I'm wrong.
0
0
0
19
Juliann · learning out loud
@ihtesham2005 The difference lives in what you do in the 30 seconds after the output arrives. Do you think, or do you move on? One of those is learning. The other is just faster forgetting.
0
0
1
27
Ihtesham Ali@ihtesham2005·
Keep using AI for everything. I'm serious. Don't change a thing.

Just know what researchers found when they studied 666 people across multiple age groups and measured the relationship between AI tool usage and critical thinking ability. Significant negative correlation. The more frequently someone relied on AI tools, the weaker their independent reasoning became. Younger users showed the highest AI dependence and the lowest critical thinking scores.

The paper: "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking," MDPI Societies, January 2025.

The irony is precise. The people using AI most aggressively to appear smarter are the ones quietly losing the ability to think without it.

There's a version of AI use that makes you sharper. You use it to challenge your thinking, pressure-test your reasoning, find gaps in your argument. And there's the version most people use. Outsource the thinking entirely. Accept the output. Move on.

One builds you. The other slowly replaces you.
55
119
512
27.2K
Juliann · learning out loud
@koylanai The one-terminal rule is underrated. The constraint isn't just practical: it forces you to stay in the problem long enough to actually understand it. Discomfort as a feature, not a bug.
1
0
1
168
Muratcan Koylan@koylanai·
Don’t outsource your thinking. Letting LLMs do your thinking is the new TikTok (or slot machine) addiction; intermittent reward, attention fragmenting across parallel streams you’re not really tracking. State management is one of the hardest problems in agent harnesses, and the moment you’re running more than three terminals at once, you’ve stopped managing agents and you’re just nudging them along while losing track of what they’re actually doing. I’d even cut it to one session. Make yourself sit with the problem before you hand it off. The reading, thinking and planning muscles returning in the first week will cause some discomfort. Slow down. Do the deep work.
Austin Kennedy@astnkennedy

I'm 22 years old and Claude Code is deteriorating my brain. Every single day for the last 6 months I've had 6 to 8 Claude Code terminals open, waiting for a response just so I can hit 'enter' 75% of the time. And it's doing something to me. In convos with a couple of friends, it's been a point that's been brought up pretty frequently. None of us feel as sharp as we used to. I don't know if it's just us, or others in their 20s are feeling the same thing, but it's something I've been thinking about a lot. P.S. I know this is a problem with my reliance on it and how I use it, not Claude Code itself, but the effects are real nonetheless

26
15
228
34.9K
Austin Kennedy@astnkennedy·
I'm 22 years old and Claude Code is deteriorating my brain. Every single day for the last 6 months I've had 6 to 8 Claude Code terminals open, waiting for a response just so I can hit 'enter' 75% of the time. And it's doing something to me. In convos with a couple of friends, it's been a point that's been brought up pretty frequently. None of us feel as sharp as we used to. I don't know if it's just us, or others in their 20s are feeling the same thing, but it's something I've been thinking about a lot. P.S. I know this is a problem with my reliance on it and how I use it, not Claude Code itself, but the effects are real nonetheless
1.3K
372
9.2K
2M
Juliann · learning out loud
A study just tested what happens when students use ChatGPT to study. 45 days later, they remembered less than those who studied the hard way. 57.5% vs 68.5%. Not even close.

I'm not surprised. I see this every week training teams on AI. The people who copy-paste ChatGPT's answers feel productive in the moment. They nod along. They check boxes. They leave the room thinking they learned something. They didn't.

The ones who actually retain? They argue with the AI. They ask "why?" three times. They get frustrated. They slow down.

The researchers call it "cognitive offloading": your brain stops encoding when it knows something else is doing the work for you. I call it the fluency illusion: consumption feels like competence. Until it doesn't.

AI doesn't make you dumber. But using it as a shortcut instead of a sparring partner does.
0
0
2
84
Juliann · learning out loud
The biggest mistake in AI training: treating it as a separate skill. I train teams on AI every week. The programs that work don't teach "AI" — they teach people to do their existing job with a new layer. The ones that fail are the ones that put AI in its own box, disconnected from the daily work. Curious to see if this portal gets that right.
1
0
9
1.2K
Rohan Paul@rohanpaul_ai·
The U.S. Department of Labor just launched a national AI apprenticeship portal to prepare the workforce for the AI era. The site splits resources into general AI skills, industry-specific modules, and 3 integration pathways for apprenticeship programs. Employers can either join an existing program, build a new AI-focused Registered Apprenticeship, or update a current apprenticeship so AI becomes part of the skill stack instead of a separate topic. Apprenticeship opportunities are offered through an employer or the program sponsor, and career seekers should use the Apprenticeship Job Finder to search and then apply directly with the employer or sponsor.
15
140
656
62.1K
Juliann · learning out loud
I see this every week training teams on AI. Same tool, same prompt, same room, two opposite outcomes. Those who use AI as a shortcut forget everything by Friday. Those who use it as a sparring partner remember more than they would have alone. The variable isn't the AI. It's the intent behind the question.
0
1
21
584
Juliann · learning out loud
@signulll The hidden cost isn't money: it's learning. You can swing every day for free, but if you're not updating your swing, you're compounding noise, not signal. AI makes this 10x worse. Now you can take 100 swings a day without ever learning why you're missing.
0
0
1
132
signüll@signulll·
one of the coolest things (also detrimental in some ways) about the internet is the optionality it gives you. to become extraordinary, you only need to do one or two things people remember you for that are life altering. & the internet lets you keep compounding attempts at ~zero marginal cost until one of them lands. you can see examples everywhere like peter from openclaw. you see this time & time again in the internet era. that is a profound break from the past. take for example ray kroc… he didn’t get infinite shots. every attempt for him cost years, capital, geography, leases, labor, & distribution. he got maybe a handful of real swings in a lifetime. now one person can swing every day, in public basically for free forever.
32
13
350
17.5K
Juliann · learning out loud
@justinskycak "Impressive" is what looks effortless. "Capable" is what survived the struggle. I see this every week teaching AI to teams: people want the shortcut that looks smart. The ones who actually level up are the ones who sat with the confusion long enough to learn.
0
0
0
41
Justin Skycak@justinskycak·
If you want a better life, stop asking what sounds impressive and start asking what makes you more capable.
7
15
154
2.4K
Juliann · learning out loud@JuliannPod·
@Doug_Lemov Thanks Doug. Your work on retrieval practice changed how I structure AI training sessions. The same 'be specific in your questions' principle applies everywhere: classrooms, prompts, even how we ask AI to challenge our thinking.
0
0
2
20
Doug Lemov@Doug_Lemov·
Retrieval practice is so important in helping learners retain knowledge. Love to see more and more of it. One of the most common opportunities to improve, though, is in avoiding very general questions. E.g. "What's one thing you remember from yesterday's lesson?" This question will likely result in learners recalling the easiest thing, the most familiar… i.e. "low-hanging fruit." Much better to say: "Yesterday we talked about four key principles of x. What were they (or even: how many can you recall?) and which one did we discuss as being the most important?" Now learners are prompted to recall full schema and to recall things that don't come easily.
6
36
201
15.6K
Juliann · learning out loud@JuliannPod·
The deeper cost: easy tools don’t just waste your time, they erode your tolerance for difficulty. You stop attempting hard things not because you can’t, but because the bar for ‘too hard to bother’ keeps dropping. AI that removes all friction doesn’t make you more capable. It makes difficulty feel abnormal.
0
0
6
237
Justin Skycak@justinskycak·
The tragedy of passive consumption is not just lost time. It is also the gradual erosion of your appetite for more difficult, and more fulfilling, things.
9
99
810
12.8K
Juliann · learning out loud@JuliannPod·
Most people think they learned something when they understood it. But understanding ≠ learning.

Cognitive science calls this the fluency illusion — information that feels easy to process feels like knowledge. It isn't.

I see this every week. People leave an AI training session feeling sharp. They followed along, they got it. Two weeks later, they can't reproduce a single thing. They didn't learn. They spectated.

AI makes this worse. You read the output, it makes sense, you move on. You never struggled, so your brain never encoded anything. Consumption has never felt more like competence.

The uncomfortable truth: the things that feel inefficient — testing yourself, rebuilding from scratch, explaining without notes — are the only things that actually work.

Here's how I understand it. Tell me where I'm wrong.
0
1
3
86
Juliann · learning out loud@JuliannPod·
@justinskycak AI makes this worse. You can now be exposed to 100x more output, faster than ever, and feel like you’re learning. The gap between exposure and skill is the same — but the illusion of progress is much stronger. Consumption has never felt more like production.
0
0
3
237
Justin Skycak@justinskycak·
The #1 most important thing to understand about learning: Exposure is not skill. You can consume endlessly and not make any progress in your ability to produce. Happens all the time.
5
38
406
6.9K
Juliann · learning out loud@JuliannPod·
@levie The hidden cost: AI makes it easy to start things that probably shouldn’t be started. The constraint of time and labor wasn’t just friction — it was also a filter. When everything becomes cheap to begin, the bottleneck shifts to knowing what’s actually worth finishing.
0
0
0
32
Aaron Levie@levie·
Sorry to anyone who thought AI would mean we'd work less (at least for now). AI makes it easy to explore more than you did before, and so you start doing far more as a result.

I regularly have seemingly small things that end up quickly consuming 3 hours because the agent made it easy to get started, but you still have to do the rest of the work to complete the project. This is work that I wouldn't previously have handed out to anyone else, it's just stuff that never got done because it took too long to do fully manually.

And, counterintuitively, for some of these tasks as AI gets good enough at doing them, it even becomes economically worth it to hire someone to do it on an ongoing basis with agents. But until you could try doing them at a low cost you would never have tried.

This is why AI won't automatically reduce work in the way we imagine, because work isn't static. Most companies have far more they can do than they have today, it was just hard to get started on it all because of the natural constraints of time and labor availability.
Yasser@yasser_elsaid_

AI promised to do the work for us so we could enjoy our time doing other things. Since llms, me and everyone ambitious around me has been working harder than ever. I don't think this stops anytime soon.

80
81
833
105.6K
Juliann · learning out loud@JuliannPod·
~unteachable is doing a lot of work here. You probably can’t teach taste directly. But you can teach people to notice discomfort before they dismiss it — to stay with the feeling that something’s off instead of shipping anyway. That’s not taste. It’s taste-adjacent. And in my experience it’s learnable, just slow and uncomfortable.
0
0
2
297
signüll@signulll·
everyone assumed ai would flatten the talent distribution.. turns out it amplifies the hell out of it. it used to be: can you build it. now it’s: do you know what’s worth building, & can you feel when it’s wrong. that’s ~unteachable & ~unautomatable right now. models can generate 100 variants of anything but they still can’t tell you which one matters. amazing talent is roughly priceless in the ai era because with ai it’s leverage++++++.
124
179
2K
106.1K