Ripsimé K. Bledsoe, Ph.D

47 posts

@ripbledsoe

UIW | Director of Academic Success | Rehab Sciences

San Antonio, TX · Joined November 2015

153 Following · 47 Followers
Ripsimé K. Bledsoe, Ph.D @ripbledsoe
How do we prepare students to WORK with AI rather than around it? Presenting this week at the TXST AI in Teaching & Learning Symposium:
- AI-Resilience in OTD capstone writing
- AI-Assisted qualitative analysis
Sharing pedagogy + methodology! @txst @uiwcardinals
0 replies · 0 reposts · 0 likes · 58 views
Ripsimé K. Bledsoe, Ph.D @ripbledsoe
This is excellent because it is nuanced and realistic. You have hit on some of the most challenging aspects of ensuring that assessment truly gives you a snapshot of student learning, starting from practice!
Paul Kirschner@New_Old_Paul

Transfer-appropriate processing tells us what kind of thinking the learning should involve. Contextual interference ensures that the thinking actually takes place. If learning looks a bit worse in the moment, that may be exactly what we want. kirschnered.nl/2026/02/13/de-…

0 replies · 0 reposts · 1 like · 54 views
Ripsimé K. Bledsoe, Ph.D @ripbledsoe
Number 2 is perhaps at the core of all of this: not only does it establish worth, but curriculum also needs to afford intentional opportunities for retrieval and consolidation. Coverage largely dominates teaching and leaves little room for much else!
Carl Hendrick@C_Hendrick

Retrieval practice is a powerful tool but it's rapidly becoming a lethal mutation. Here are five principles to think about when trying to apply it more effectively.

0 replies · 1 repost · 2 likes · 943 views
Ripsimé K. Bledsoe, Ph.D reposted
Andrej Karpathy @karpathy
+1 for "context engineering" over "prompt engineering".

People associate prompts with short task descriptions you'd give an LLM in your day-to-day use. In every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step. Science because doing this right involves task descriptions and explanations, few-shot examples, RAG, related (possibly multimodal) data, tools, state and history, compacting... Too little or of the wrong form and the LLM doesn't have the right context for optimal performance. Too much or too irrelevant and the LLM costs might go up and performance might come down. Doing this well is highly non-trivial. And art because of the guiding intuition around LLM psychology of people spirits.

On top of context engineering itself, an LLM app has to:
- break up problems just right into control flows
- pack the context windows just right
- dispatch calls to LLMs of the right kind and capability
- handle generation-verification UIUX flows
- a lot more: guardrails, security, evals, parallelism, prefetching, ...

So context engineering is just one small piece of an emerging thick layer of non-trivial software that coordinates individual LLM calls (and a lot more) into full LLM apps. The term "ChatGPT wrapper" is tired and really, really wrong.
tobi lutke@tobi

I really like the term “context engineering” over prompt engineering. It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM.

529 replies · 2.1K reposts · 14.3K likes · 2.4M views
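The "pack the context windows just right" step above can be sketched in code. This is a minimal, hypothetical illustration (not Karpathy's or any product's implementation): each candidate piece of context gets a priority, and low-priority pieces are packed first until a token budget is exhausted. The function names and the 4-characters-per-token heuristic are assumptions for the sketch.

```python
# Hypothetical sketch of one "context engineering" step: pack a context
# window from several sources under a token budget, keeping the
# highest-priority pieces and skipping anything that would overflow.

def approx_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def pack_context(pieces, budget_tokens):
    """pieces: list of (priority, label, text); lower priority packs first."""
    packed, used = [], 0
    for priority, label, text in sorted(pieces, key=lambda p: p[0]):
        cost = approx_tokens(text)
        if used + cost > budget_tokens:
            continue  # skip pieces that would overflow the window
        packed.append(f"## {label}\n{text}")
        used += cost
    return "\n\n".join(packed)

pieces = [
    (0, "Task", "Summarize the customer ticket below in two sentences."),
    (1, "Few-shot example", "Ticket: ... -> Summary: ..."),
    (2, "Retrieved docs", "Refund policy: ..." * 50),  # oversized RAG payload
    (3, "History", "Previous turns: ..."),
]
prompt = pack_context(pieces, budget_tokens=60)
# The oversized retrieved-docs piece is dropped; task, example, and
# history fit within the budget.
```

A real system would use an actual tokenizer and smarter compaction (summarizing rather than dropping), but the budget-and-priority shape is the core idea.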
Ripsimé K. Bledsoe, Ph.D @ripbledsoe
The zombie myth of learning styles that just won't go away! There is no evidence whatsoever that using or being taught in one's "learning style" produces better learning outcomes. Strategies, prior knowledge and the domain dictate far more. Stick to tech pls!
Mustafa Suleyman@mustafasuleyman

Biggest career accelerator in the next decade: get really, really good at learning - figure out your learning style - use AI to convert material into that format/style (podcasts, quizzes, etc.) - apply knowledge - repeat Learn fast, grow fast

0 replies · 0 reposts · 1 like · 65 views
Ripsimé K. Bledsoe, Ph.D @ripbledsoe
@karpathy Your list is why I have moved away from long threads and often start new conversations with different LLMs. I intentionally turned off the memory function when ChatGPT first rolled it out because I found it stifled innovative responses and got stuck on a few knowledge pieces.
0 replies · 0 reposts · 0 likes · 78 views
Andrej Karpathy @karpathy
When working with LLMs I am used to starting "New Conversation" for each request. But there is also the polar opposite approach of keeping one giant conversation going forever. The standard approach can still choose to use a Memory tool to write things down in between conversations (e.g. ChatGPT does so), so the "One Thread" approach can be seen as the extreme special case of using memory always and for everything.

The other day I came across someone saying that their conversation with Grok (which was free to them at the time) has now grown way too long for them to switch to ChatGPT, i.e. it functions like a moat, hah.

LLMs are rapidly growing in the allowed maximum context length *in principle*, and it's clear that this might allow the LLM to have a lot more context and knowledge of you, but there are some caveats. A few of the major ones as an example:

- Speed. A giant context window will cost more compute and will be slower.
- Ability. Just because you can feed in all those tokens doesn't mean that they can also be manipulated effectively by the LLM's attention and its in-context-learning mechanism for problem solving (the simplest demonstration is the "needle in the haystack" eval).
- Signal to noise. Too many tokens fighting for attention may *decrease* performance due to being too "distracting", diffusing attention too broadly and decreasing the signal-to-noise ratio in the features.
- Data, i.e. train-test data mismatch. Most of the training data in the finetuning conversation is likely ~short. Indeed, a large fraction of it in academic datasets is often single-turn (one single question -> answer). One giant conversation forces the LLM into a new data distribution it hasn't seen that much of during training. This is in large part because...
- Data labeling. Keep in mind that LLMs still primarily and quite fundamentally rely on human supervision. A human labeler (or an engineer) can understand a short conversation and write optimal responses or rank them, or inspect whether an LLM judge is getting things right. But things grind to a halt with giant conversations. Who is supposed to write or inspect an alleged "optimal response" for a conversation of a few hundred thousand tokens?

Certainly, it's not clear if an LLM should have a "New Conversation" button at all in the long run. It feels a bit like an internal implementation detail that is surfaced to the user for developer convenience and for the time being. The right solution is probably a very well-implemented memory feature, along the lines of active, agentic context management, something I haven't really seen at all so far. Anyway, curious to poll if people have tried One Thread and what the word is.
668 replies · 550 reposts · 6.6K likes · 829.7K views
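The Memory-tool middle ground described above (short conversations plus durable notes carried between them) can be sketched very simply. This is a hypothetical illustration under assumed names, not ChatGPT's actual memory implementation: notes are written out during one conversation and a small, recency-capped digest is injected at the start of the next.

```python
# Hypothetical sketch of a Memory tool between short conversations:
# each conversation stays short, but durable notes persist and seed
# the next conversation's context.

class MemoryStore:
    def __init__(self):
        self.notes = []

    def write(self, note: str):
        if note not in self.notes:  # avoid storing duplicate facts
            self.notes.append(note)

    def render(self, limit: int = 5) -> str:
        """Most recent notes first, capped so the injected context stays small."""
        recent = self.notes[-limit:][::-1]
        return "\n".join(f"- {n}" for n in recent)

def start_conversation(memory: MemoryStore, user_message: str) -> str:
    """Build the opening prompt for a fresh conversation seeded with memory."""
    return f"Known about the user:\n{memory.render()}\n\nUser: {user_message}"

mem = MemoryStore()
mem.write("Prefers concise answers")
mem.write("Teaches first-year college students")
prompt = start_conversation(mem, "Draft a syllabus policy on AI use.")
```

The "active, agentic" version Karpathy asks for would let the model itself decide what to write, rewrite, and expire; this sketch only shows the passive store-and-inject baseline.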
Ripsimé K. Bledsoe, Ph.D reposted
Carl Hendrick @C_Hendrick
What is learning? @P_A_Kirschner and I discuss why learning is inextricably linked with memory.
9 replies · 77 reposts · 191 likes · 18.5K views
Ripsimé K. Bledsoe, Ph.D reposted
Ethan Mollick @emollick
New randomized, controlled trial of students using GPT-4 as a tutor in Nigeria. 6 weeks of after-school AI tutoring = 2 years of typical learning gains, outperforming 80% of other educational interventions. And it helped all students, especially girls who were initially behind
[tweet media]
357 replies · 2.2K reposts · 11.5K likes · 4.3M views
Ripsimé K. Bledsoe, Ph.D @ripbledsoe
Have been sharing this research, along with attention residue, with my first year college students as I help them develop executive management skills. It has made a big difference as they understand why I build certain class norms for focused learning!
Andrew Watson@AndrewWatsonTTB

We have research showing that #attention-contagion is a thing in the psychology lab. Turns out: newer research shows that it's also a thing in real classrooms. Here's the story: ow.ly/8fAa50TUzaA

0 replies · 0 reposts · 1 like · 60 views
Ripsimé K. Bledsoe, Ph.D @ripbledsoe
Absolutely! Features like this could remove the time-consuming implementation burden for educators presented with endless guides, so many of whom choose not to allow use at all. It could also remove the need for external AI tools.
Matt Beane@mattbeane

Stumper: why publish user guides instead of just changing your features to enable "student writing mode"? Encode this stuff! Then it's just a flip of a switch and students get the scaffolding they need to learn and write better with less risk of deskilling, demotivation, etc?

0 replies · 0 reposts · 1 like · 55 views
Ripsimé K. Bledsoe, Ph.D @ripbledsoe
@emollick Absolutely. Code Interpreter is useful for qualitative research, especially given constraints that often limit its use. OpenAI's custom GPTs can be extremely useful for educators, with the ability to feed in quality content and a means to scale use across whole departments. But improvements are much needed.
0 replies · 0 reposts · 0 likes · 220 views
Ethan Mollick @emollick
A side effect of OpenAI’s focus on AGI above all is that many of their most practical & useful tools are mostly left to rot. Code Interpreter is still the best AI analyst, but barely changed in a year. GPTs are still the best prompt sharing tool, but same. 💰 left on the table.
43 replies · 23 reposts · 623 likes · 62.6K views
Ripsimé K. Bledsoe, Ph.D @ripbledsoe
@BradleyKBusch I'm intrigued! I'm getting ready to do some professional development for faculty on the myth of learning styles and GenAI.
1 reply · 0 reposts · 0 likes · 53 views
Brad Busch @BradleyKBusch
Putting the finishing touches to a blog to come out on Thursday. It explores an age-old problem (learning styles) in a modern world (AI). I think it is my favourite blog I've ever written and I can't wait to share it with everyone 😀🧠
2 replies · 4 reposts · 16 likes · 3.3K views
Ripsimé K. Bledsoe, Ph.D @ripbledsoe
Great reminders in this article for those of us in the classroom as we approach the Fall semester!
Inside Higher Ed@insidehighered

#StudentSuccess | Harnessing the Power of Generative AI: A Call to Action for Educators If educators steer the integration of AI with intention and purpose, it can reach its potential as an immense and versatile student success tool, writes faculty member Ripsimé K. Bledsoe. She offers six areas on which to focus efforts. #HigherEd bit.ly/4byz6r4

0 replies · 0 reposts · 3 likes · 263 views
Ripsimé K. Bledsoe, Ph.D reposted
elvis @omarsar0
Very interesting study comparing RAG and long-context LLMs. Main findings:

- Long-context LLMs outperform RAG on average performance.
- RAG is significantly less expensive.

On top of this, they also propose Self-Route, leveraging self-reflection to route queries to RAG or LC. They report that Self-Route significantly reduces computational cost while maintaining comparable performance to LC.

Interesting result: "On average, LC surpasses RAG by 7.6% for Gemini-1.5-Pro, 13.1% for GPT-4O, and 3.6% for GPT-3.5-Turbo. Noticeably, the performance gap is more significant for the more recent models (GPT-4O and Gemini-1.5-Pro) compared to GPT-3.5-Turbo, highlighting the exceptional long-context understanding capacity of the latest LLMs."

Again, not sure why Claude was left out of the analysis. I would love to see that, including other custom LLMs trained to perform better at RAG. I am not entirely convinced that long-context LLMs generally can outdo RAG systems today, but I think it's interesting to see a combination of the approaches, which is something I've been advocating for recently.
[tweet media]
10 replies · 172 reposts · 679 likes · 50.8K views
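The Self-Route idea quoted above (try the cheap RAG path first; fall back to the expensive long-context path only when the model reflects that the retrieved chunks are insufficient) can be sketched as follows. This is a minimal illustration of the routing shape, not the paper's implementation; `call_llm` and `toy_llm` are stand-ins for a real model API.

```python
# Hypothetical sketch of Self-Route: attempt RAG with retrieved chunks;
# if the model judges them insufficient, retry with the full document
# (the long-context path).

UNANSWERABLE = "unanswerable"

def self_route(query, chunks, full_document, call_llm):
    rag_prompt = (
        "Answer from these chunks, or reply 'unanswerable' if they are "
        f"insufficient.\n\nChunks:\n{chr(10).join(chunks)}\n\nQ: {query}"
    )
    answer = call_llm(rag_prompt)
    if UNANSWERABLE not in answer.lower():
        return answer, "rag"  # cheap path succeeded
    # Fallback: feed the whole document to the long-context path
    lc_prompt = f"Document:\n{full_document}\n\nQ: {query}"
    return call_llm(lc_prompt), "long-context"

# Toy stand-in model: can only answer when the relevant fact is in the prompt.
def toy_llm(prompt):
    return "30 days" if "refund: 30 days" in prompt else "unanswerable"

ans, path = self_route("What is the refund window?",
                       ["shipping: 5 days"], "refund: 30 days", toy_llm)
# The retrieved chunk misses the answer, so the query routes to the
# long-context path.
```

The cost savings come from the fraction of queries the RAG path answers without escalation; the paper's reported numbers above quantify the remaining accuracy gap.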
Ripsimé K. Bledsoe, Ph.D @ripbledsoe
Presenting at the Annual Teaching & Learning with AI conference in Orlando, FL (July 22-24), hosted by @UCFDigitalLearn. Presentation topics: AI and study support, and AI in the classroom with a HumanAIzing learning cycle.
0 replies · 1 repost · 4 likes · 134 views
Ripsimé K. Bledsoe, Ph.D @ripbledsoe
Excited to present some of my recent work--Uncovering Student Insights: Using GenAI to Analyze Qualitative Feedback.
[tweet media]
0 replies · 0 reposts · 1 like · 85 views
Ripsimé K. Bledsoe, Ph.D @ripbledsoe
My recent article on how faculty can lead the way to shape GenAI in education especially as a student success tool. @TAMUSanAntonio @StanfordHAI @TingLiu_TAMUSA @HsunYuChan @krwickersham @XueliWang1 @britwag13 @VaSansone @BeverlyIrby
Inside Higher Ed@insidehighered

#StudentSuccess | Harnessing the Power of Generative AI: A Call to Action for Educators If educators steer the integration of AI with intention and purpose, it can reach its potential as an immense and versatile student success tool, writes faculty member Ripsimé K. Bledsoe. She offers six areas on which to focus efforts. #HigherEd bit.ly/4byz6r4

0 replies · 1 repost · 10 likes · 1K views