Cheng-Yu Hsieh

211 posts

Cheng-Yu Hsieh
@ChengYuHsieh1

Born & raised in Taiwan 🇹🇼 PhD student at @RHULPsychology. Interested in the science of reading and statistics.

Egham, South East · Joined April 2020
415 Following · 161 Followers

Pinned Tweet
Cheng-Yu Hsieh @ChengYuHsieh1
🎉 Excited to share my 1st PhD paper, co-authored w/ @MarelliMar & @Kathy_Rastle 🎉 It shows that the quantity of word experience is not everything: it is also important that the experience is consistently meaningful! Intrigued? Read the thread 👇 and the paper at: tinyurl.com/54rkmbud
6 replies · 8 reposts · 39 likes · 7.7K views
Cheng-Yu Hsieh retweeted
Roan @RohOnChain
This 2-hour Stanford lecture will teach you more about how LLMs like ChatGPT & Claude are built than most people working at top AI companies learn in their entire careers. Bookmark this & give it 2 hours today, no matter what. It'll be the most productive thing you do this week.
134 replies · 3.2K reposts · 19.1K likes · 2.1M views
Cheng-Yu Hsieh retweeted
Kathy Rastle @Kathy_Rastle
✨ Delighted to discuss our recent findings in Tes ✨ The subtitles myth: why children don’t learn to read from TV tes.com/magazine/teach…
0 replies · 5 reposts · 12 likes · 880 views
Cheng-Yu Hsieh retweeted
Cognition @CognitionJourn
Why do languages share common properties? Adults learned novel quantifiers satisfying semantic universals faster than those violating them. This suggests that learnability helps explain why certain meanings are lexicalized across cultures. @Logic_Cognition
1 reply · 27 reposts · 116 likes · 6.1K views
Cheng-Yu Hsieh retweeted
Matt Dancho (Business Science)
RIP data analysis and modelling jobs. Don't take my word for it. Take Nature's.
30 replies · 193 reposts · 1.1K likes · 133.2K views
Cheng-Yu Hsieh retweeted
Guri Singh @heygurisingh
🚨BREAKING: If you've used ChatGPT for writing or brainstorming in the last 6 months, your creative ability may already be permanently damaged. A controlled experiment just proved the effect doesn't reverse when you stop using it.

3,302 creative ideas. 61 people. 30 days of tracking. Researchers split students into two groups. Half used ChatGPT for creative tasks. Half worked alone.

For five days, the ChatGPT group outperformed on every metric. Higher scores. More ideas. Better output. AI was making them better.

Then day 7. ChatGPT removed. Every creativity gain vanished overnight. Crashed to baseline. Zero lasting improvement.

But that's not the bad part. ChatGPT users' ideas became increasingly identical to each other over time. Same content. Same structure. Same phrasing. The researchers called it homogenization. Everyone using ChatGPT started producing the same ideas wearing different clothes.

When ChatGPT was removed, the creativity boost disappeared -- but the homogenization stayed. 30 days later, same result. Their creative range had been permanently compressed. Five days of use. Permanent damage 30 days later.

A separate trial confirmed it. 120 students. 45-day surprise test. ChatGPT users scored 57.5%. Traditional learners scored 68.5%.

AI reduces cognitive effort. Less effort means weaker encoding. Weaker encoding means less creative raw material. You're not renting a productivity boost. You're financing it with your originality. The interest rate is permanent.
324 replies · 1.3K reposts · 5.6K likes · 829.5K views
Cheng-Yu Hsieh retweeted
Tal Linzen @tallinzen
New version of @linluqiu's heroic Google Student Research project, with a lot more experiments! I think it's a nice demonstration of why LLM fine-tuning works so well: you fine-tune the models to adapt to users by having them mimic the optimal Bayesian way to adapt, and they generalize this ability to other contexts:
Quoted tweet: Linlu Qiu @linluqiu

LLMs are increasingly used as agents that interact with users. To do so successfully, LLMs need to form beliefs and update them when new information becomes available. Do LLMs do so as expected from an optimal strategy? If not, can we get them to follow this strategy? 🧵

4 replies · 7 reposts · 72 likes · 17.2K views
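The "optimal Bayesian way to adapt" that Linzen mentions can be illustrated with a toy conjugate update (my own sketch, not code or notation from the paper): an agent keeps a Beta prior over a binary user preference and updates it as 0/1 feedback arrives.

```python
def beta_update(alpha, beta, observations):
    """Conjugate Bayesian update: a Beta(alpha, beta) prior over a
    Bernoulli parameter, updated with a sequence of 0/1 observations."""
    for obs in observations:
        alpha += obs       # a success raises the alpha pseudo-count
        beta += 1 - obs    # a failure raises the beta pseudo-count
    return alpha, beta

def posterior_mean(alpha, beta):
    """Point estimate of the preference after updating."""
    return alpha / (alpha + beta)

# Start from a uniform prior Beta(1, 1); observe 3 "likes" and 1 "dislike".
a, b = beta_update(1, 1, [1, 1, 0, 1])
print(posterior_mean(a, b))  # ≈ 0.667
```

Each observation just increments one of two pseudo-counts, which is part of why this strategy is easy for a model to mimic: the posterior after any sequence of feedback depends only on the running totals of successes and failures.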
Cheng-Yu Hsieh retweeted
Joachim Schork @JoachimSchork
The ggalign package in R brings flexibility to your visualizations by helping align multiple plots and incorporate complex hierarchical data structures, like dendrograms, alongside your primary charts. This extension of ggplot2 makes it easier to organize heatmaps with clustering and other detailed data views, all in a clean, cohesive layout. With ggalign, you can combine multiple plot types effectively, making it a powerful tool for exploring high-dimensional data.

Here’s why it’s worth exploring:
✔️ Seamless integration with ggplot2: works alongside ggplot2 for consistent, customizable visuals.
✔️ Multi-plot alignment: aligns heatmaps with other elements like dendrograms, gene annotations, or K-means clusters, as seen in this example, for clear, layered insights.
✔️ Enhanced data exploration: the ability to layer information enables users to observe patterns across different groupings, making it easier to spot relationships.

In this visualization from the package website, ggalign combines a heatmap of gene expression data for various cell types, clustering information, and additional data panels for gene annotations, distribution, and K-means groupings. The heatmap uses a gradient from dark to light to show gene expression intensity, with accompanying panels for cell type and gene attributes, offering a comprehensive view of this complex data set.

If you're interested in learning more about creating powerful visuals in R, check out my Data Visualization in R Using ggplot2 & Friends course. Take a look here for more details: statisticsglobe.com/online-course-…

#database #datavis #RStats #DataViz #ggplot2 #RStudio #tidyverse #Data #Rpackage
0 replies · 35 reposts · 173 likes · 7K views
Cheng-Yu Hsieh retweeted
Kathy Rastle @Kathy_Rastle
Brilliant news that our RCT looking at whether same-language subtitles improve children’s reading fluency was accepted. Mammoth effort involving a great deal of help from families and schools, and I’m so pleased for lead author @a_lopukhina who brought it all together. 👏
3 replies · 4 reposts · 25 likes · 2.9K views
Cheng-Yu Hsieh retweeted
Daniël Lakens @lakens
A great new preprint on the importance of pilot studies for the validity of the studies that are performed. Such an important topic, and one that is discussed too little. I especially liked the section on the need for transparent reporting. osf.io/t968e_v1 By @YashvinSeetahul et al.
0 replies · 16 reposts · 55 likes · 6.4K views
Cheng-Yu Hsieh retweeted
Yu Togashi @yutogashi158722
Excited to share our new preprint w/@yukoyy, Chihiro Hiramatsu, @NaoTsuchiya & @oizumim! We present the first empirical evidence supporting that "my red" is "your red", through an unsupervised alignment of color qualia structures at the individual level. x.gd/oX1To
9 replies · 56 reposts · 199 likes · 48.1K views
Cheng-Yu Hsieh retweeted
Oliver Prompts @oliviscusAI
Google just dropped another banger 🤯 It’s called PaperBanana, a new tool that generates publication-ready academic illustrations directly from your methodology text. 100% Free.
49 replies · 697 reposts · 4.4K likes · 347.5K views
Cheng-Yu Hsieh retweeted
Cognition @CognitionJourn
Do language users really represent abstract syntactic structures and rules? Or can constraints on linguistic form be fully explained by communicative function and general cognitive pressures?
3 replies · 12 reposts · 43 likes · 3.4K views
Cheng-Yu Hsieh retweeted
Daniël Lakens @lakens
You can sign up until January 23 for the free Paul Meehl Graduate School workshop on Scientific Criticism and Peer review taught by René Bekkers. It promises to be an extremely interesting day, so do join us in Eindhoven on January 30th! paulmeehlschool.github.io/workshops/crit…
0 replies · 7 reposts · 21 likes · 2.8K views
Cheng-Yu Hsieh retweeted
Griffiths Computational Cognitive Science Lab
Excited to announce a new book telling the story of mathematical approaches to studying the mind, from the origins of cognitive science to modern AI! The Laws of Thought will be published in February, and is available for pre-order now.
36 replies · 261 reposts · 1.7K likes · 94.9K views
Cheng-Yu Hsieh retweeted
Daniël Lakens @lakens
There are many dubious papers on correcting for multiple comparisons. And the issue is so simple: 1) You need to control the error rates of claims. If multiple tests inflate error rates, you must lower alpha. 2) You adjust at the level of a claim. lakens.github.io/statistical_in…
0 replies · 13 reposts · 48 likes · 8.5K views
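The two-step recipe in the tweet above (lower alpha when multiple tests support a single claim; adjust at the level of the claim) can be sketched in a few lines. This is a generic illustration of the Bonferroni and Holm corrections, not code from Lakens's materials:

```python
def bonferroni_alpha(alpha, m):
    """Per-test threshold so the familywise error rate across the
    m tests supporting one claim stays at most alpha."""
    return alpha / m

def holm_reject(p_values, alpha=0.05):
    """Holm's step-down procedure: test p-values from smallest to
    largest against increasingly lenient thresholds alpha/m, alpha/(m-1),
    ...; it controls the same familywise error rate as Bonferroni but
    rejects at least as many tests."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    rejected = [False] * len(p_values)
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (len(p_values) - rank):
            rejected[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return rejected

# Four tests supporting one claim: each must clear alpha/4.
print(bonferroni_alpha(0.05, 4))         # 0.0125
print(holm_reject([0.001, 0.04, 0.03]))  # [True, False, False]
```

Note how the correction is scoped to the tests behind a single claim, exactly as point 2 says, rather than to every test a researcher will ever run.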