
Shakeel (@ShakeelHashim):
I have been doing a lot of interviews recently and have asked LLMs to try to clean up the transcripts: no substantive changes, just removing filler words and adding paragraph breaks to make them more readable. But every time I've asked, the models have hallucinated HUGE chunks of the transcripts, totally inventing sections that don't exist. This is happening with both Claude and ChatGPT, with all sorts of prompts telling them not to do this. Just completely unreliable!
Shakeel (@ShakeelHashim):
The issue seems to be that the model truncates the original transcript, so just skips over the middle bit (even when told repeatedly to read the whole thing). These aren't particularly long transcripts though; well within the context limit.
rishi (@RishiBommasani):
@ShakeelHashim Yes, I often have this problem as well when processing (not so) long documents. Getting models to handle the text verbatim and not truncate it (and so avoid the downstream errors that truncation causes) seems super hard to ensure.
Peter Hartree (@peterhartree):
@ShakeelHashim For this kind of task, I've seen reliability fall off sharply between 20% and 50% of the context window length. Keep your chunks below 20% of the context length and it should be fine. If it isn't, send me an example prompt and transcript; happy to help figure it out.
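A minimal sketch of the chunking approach Peter suggests: split the transcript on paragraph boundaries so each chunk stays well under the context window, then send chunks to the model one at a time. The 4-chars-per-token ratio and the 200k-token window here are assumptions for illustration, not figures from the thread; a real tokenizer would give exact counts.

```python
CONTEXT_WINDOW_TOKENS = 200_000   # assumed context window size, not from the thread
CHARS_PER_TOKEN = 4               # rough heuristic; varies by model and tokenizer
MAX_CHUNK_TOKENS = CONTEXT_WINDOW_TOKENS // 5  # stay below 20% of the context

def chunk_transcript(text: str, max_tokens: int = MAX_CHUNK_TOKENS) -> list[str]:
    """Split on paragraph breaks so no chunk exceeds the token budget."""
    max_chars = max_tokens * CHARS_PER_TOKEN
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # +2 accounts for the paragraph break re-inserted when joining
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Joining the chunks with a blank line reconstructs the original text, so each cleaned chunk can be verified against its source before reassembly.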