Andrew Barrett

196 posts


@AndrewBarrettCa

Leaving this platform

Joined June 2009
700 Following · 396 Followers
Andrew Barrett (@AndrewBarrettCa):
@emollick I shared your prompt and images with o3 and asked for a 90s style inspirational poster that could be hung in the Mordor office. Pretty good.
[image attached]
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 56
Ethan Mollick (@emollick):
"o3, You are a consultant hired by the Dark Lord, analyze the org chart of Mordor. How would you improve it for today's changing Middle Earth" o3 does some actual humor: “One Org to rule them all, One Org to find them, One Org to bring them all, And in the darkness, align them.”
[3 images attached]
Replies: 30 · Reposts: 73 · Likes: 878 · Views: 86.3K
Andrew Barrett (@AndrewBarrettCa):
@Annettelevesqu_ Thanks for hosting such a thoughtful conversation! It was a privilege to discuss AI’s potential in reshaping education and share @scalelearning's mission to create impactful, human-centric solutions for our clients. Looking forward to continuing the dialogue!
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 52
Annette Levesque, Elearning Strategist, M.Ed., OCT
Reimagining Learning Through the Lens of AI 🎙️

In this episode of The Global Education Station Presents, host Annette Levesque engages in a thought-provoking discussion with @AndrewBarrettCa, PhD, Co-Founder and Director of R&D at Scale Learning. Together, they delve into the transformative role of AI in education, offering practical insights and ethical considerations for its implementation.

📺 Watch the full episode now 👉 link in the comments!

🌍 What to Expect:
• Andrew's journey in educational technology and the mission of Scale Learning.
• The importance of understanding problems before jumping into AI solutions.
• Common mistakes organizations make when adopting AI in education.
• Practical strategies for leveraging AI in formative assessment and brainstorming.
• The boundaries and ethical guidelines necessary for responsible AI use.
• Why empathy, diversity, and a human-centric approach are crucial in AI-driven education.

Whether you're an educator, administrator, or ed-tech enthusiast, this episode is packed with actionable insights to help you navigate the intersection of AI and education responsibly.

📌 Connect with Us:
• Subscribe to The Global Education Station for more episodes
• Follow us on social media for the latest updates and resources
• Leave your thoughts and feedback in the comments below!

🌟✏️ Share: Enjoyed this episode? Please share it with a fellow education & training enthusiast!

#GlobalEducationStation #Podcast #Education #AnnetteLevesque #ElearningGold #Edtech #TheGlobalEducationStationPresents #AndrewBarrett #ScaleLearning #AIinEducation #EducationalTechnology #HumanCentricAI
Replies: 2 · Reposts: 0 · Likes: 2 · Views: 81
Spencer Greenberg 🔍 (@SpencrGreenberg):
I'm running a small census to help understand who follows me on Twitter. It would be great if you'd take a moment to participate! What country do you live in? (if it's "Other", and you have a moment, it would be great if you'd leave your country in a comment below)
Replies: 116 · Reposts: 1 · Likes: 10 · Views: 10.5K
Nathan 🔎 (@NathanpmYoung):
you can use a bluetooth mouse to disconnect itself, but not to reconnect
Replies: 20 · Reposts: 3 · Likes: 76 · Views: 6.2K
Andrew Barrett (@AndrewBarrettCa):
@emollick Could be a timing play. Cash out before larger models or AGI eats them. If they really believe AGI is coming soon from one of the big labs, they don’t have great alternatives.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 75
Ethan Mollick (@emollick):
VCs are betting against (1) continued scaling, where larger models beat specialized models while rapidly decreasing costs, and (2) AGI, which would invalidate a lot of the underlying assumptions for AI startups (and many firms). It is an okay bet, but I wonder if it is a conscious one.
PitchBook (@PitchBook):

Poolside AI, which has an LLM for software developers, is courting a $450M inside-led round from Bain Capital Ventures. It would be a big step from its August seed of $126M at a $526M valuation, reflecting the prices genAI can get even at early stages. pitchbook.com/news/articles/…

Replies: 22 · Reposts: 10 · Likes: 169 · Views: 42K
Tamay Besiroglu (@tamaybes):
Language models have come a long way since 2012, when recurrent networks struggled to form coherent sentences. Our new paper finds that the compute needed to achieve a set performance level has been halving every 5 to 14 months on average. (1/10)
[image attached]
Replies: 8 · Reposts: 51 · Likes: 291 · Views: 43.2K
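Besiroglu's halving-time claim reduces to simple exponential arithmetic. As a minimal sketch (the function name and the 36-month horizon are illustrative assumptions, not from the paper):

```python
def efficiency_gain(elapsed_months: float, halving_months: float) -> float:
    """Factor by which the compute needed to reach a fixed performance
    level shrinks after `elapsed_months`, given a halving time."""
    return 2.0 ** (elapsed_months / halving_months)

# Over 3 years (36 months), the reported 5-to-14-month halving range implies
# the required compute shrinks by somewhere between roughly 6x and 147x:
fast = efficiency_gain(36, 5)    # 2**7.2, about 147x less compute needed
slow = efficiency_gain(36, 14)   # 2**(36/14), about 5.9x less compute needed
```

The wide spread between the two endpoints is the point: the 5-vs-14-month uncertainty compounds into more than an order of magnitude of difference over a few years.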
Andrew Barrett (@AndrewBarrettCa):
@GaryMarcus @emollick What's the recent data that no longer fits the curve? IMO, the paper focuses more on LLM efficiency ("compute required to reach a set performance") than on the growth of new capabilities. Efficiency seems easier to measure than new capability growth.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 55
Gary Marcus (@GaryMarcus):
That paper is old news. You are ignoring the fact that the recent data no longer fit the curve by any reasonable metric. You are also ignoring statements by Demis, Bill Gates, myself, etc., all pointing to or predicting a plateau (because of data, power consumption, outliers), without acknowledging that there is an alternative view. Things could pick up again, but recent data simply don't fit, despite many billions invested and many companies trying very hard.
Replies: 4 · Reposts: 1 · Likes: 7 · Views: 886
Linch (@LinchZhang):
Creating a for-profit C corp owned by B corp owned by public benefit corporation owned by 501c4 owned by 501c3 is not always as glamorous as it sounds...
[image attached]
Replies: 1 · Reposts: 0 · Likes: 28 · Views: 1.2K
Andrew Barrett (@AndrewBarrettCa):
@dwarkesh_sp #1 reminds me of the quote: "Not everything that counts can be counted, and not everything that can be counted counts." It's easier to eval AI (& humans) on knowing stuff that can be counted. Easier to train on too. There's fuzzy stuff that counts that's hard to count/eval.
Replies: 0 · Reposts: 1 · Likes: 0 · Views: 681
Dwarkesh Patel (@dwarkesh_sp):
Not what I believe, but if I had to steelman the scaling bear case:

1. We are being fooled by evals and use cases that just test the model on knowing stuff, aka the exact thing it was trained to do: predicting random wikitext. And we're not paying attention to how bad these models are at everything else. It took multiple IOI winners a year after GPT-4 was released to get that model to tree-search its way to a coding agent that's not atrocious.

2. Maybe people are too optimistic about lunging over the data wall. As far as I'm aware, there's no compelling public evidence that we can substitute synthetic data or RL for the language tokens we would have gotten from a bigger internet.

3. People aren't taking power laws seriously. Each model generation takes orders of magnitude more compute, which means if you don't get automated AI researchers by GPT-7, no intelligence explosion for you.
Replies: 34 · Reposts: 5 · Likes: 244 · Views: 38K
Andrew Barrett (@AndrewBarrettCa):
@DonaldClark @NickCorston This is accurate. Faculty that do invest in their teaching do so despite recommendations to the contrary from peers, course evaluations that often aren't aligned with promoting teaching & learning, and a tenure and promotion process where research output rules.
Replies: 2 · Reposts: 0 · Likes: 2 · Views: 23
Donald Clark (@DonaldClark):
@NickCorston Teaching in Higher Education is almost universally done by people whose primary focus is research. It has a completely secondary function with people not recruited to teach at all. Views?
Replies: 4 · Reposts: 0 · Likes: 4 · Views: 224
Nick Corston FRSA 🔥🚀 (@NickCorston):
Teaching is the most important job in the world and should be respected as such. IMHO it should only be done by people who care and think about it, and about how it could be the best it can be. And who say and DO SOMETHING. One day maybe…
Replies: 1 · Reposts: 0 · Likes: 3 · Views: 648
Mark Gilson 🌱 (@markwgilson):
@emollick I thought Yann LeCun's comments about the limits of LLMs in Lex's podcast were very interesting. He thinks LLMs will never be capable of handling the environment like we can (self driving cars, routine human activities, etc...) youtu.be/5t1vTLU7s40?si…
[YouTube video link]
Replies: 2 · Reposts: 1 · Likes: 6 · Views: 664
Ethan Mollick (@emollick):
There is real disagreement over this issue among AI insiders, including among much less prominent people. It makes it hard for outsiders to take the position that AGI is definitely possible or definitely impossible, especially in the medium term, based on the evidence available.
[image attached]
Replies: 35 · Reposts: 14 · Likes: 207 · Views: 27.6K
Andrew Barrett (@AndrewBarrettCa):
@emollick Agreed. People want simple answers but assessing intelligence of AIs (and people for that matter) is tricky business with all sorts of tensions & high stakes consequences. Standardized benchmarks are important but, by design, may not predict performance in specific contexts.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 129
Ethan Mollick (@emollick):
AIs are getting more capable but the methodology behind this chart, which I see everywhere, is really not good. It doesn’t provide a useful comparison with humans, or a useful ranking between AIs, or a real statistical analysis. There are better benchmarks (flawed as they are)
[image attached]
Replies: 11 · Reposts: 18 · Likes: 191 · Views: 20.9K
Ethan Mollick (@emollick):
A direct comparison between the Big Five personality test, Myers-Briggs & astrology in predicting life outcomes finds that the Big Five (the standard psychology test) is the best, astrology is useless, and the MBTI (& Enneagram) fall in between. Nice discussion: scientificamerican.com/article/person…
[image attached]
Replies: 46 · Reposts: 241 · Likes: 1.2K · Views: 269.5K
Pradyumna (in Bay Area) (@PradyuPrasad):
serious question: what does it take to self learn a CS degree?
Replies: 56 · Reposts: 0 · Likes: 118 · Views: 22.1K
Andrew Barrett (@AndrewBarrettCa):
@aigs_ca 😍 Loving Chart 1! It clearly articulates 3 areas of uncertainty in a straightforward way. 1. How many technical breakthroughs are needed to reach AGI 2. Whether the current exponential rate of progress will continue 3. How much time governments have to prepare...
[image attached]
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 15
AI Governance and Safety Canada
AIGS Canada is proud to announce its new white paper "Governing AI: A Plan for Canada"! aigs.ca/advocacy/#white-paper
[image attached]
Replies: 3 · Reposts: 2 · Likes: 6 · Views: 385
Siméon (@Simeon_Cps):
@tobi Thanks for sharing your point of disagreement. I think that independently from takeoff, if you assume an entity capable of doing any task at human expert level (AGI), there are many problems that arise. Do you disagree with that?
Replies: 1 · Reposts: 0 · Likes: 5 · Views: 113
Siméon (@Simeon_Cps):
There are papers on AI extreme risks. Is there one single paper out there trying to argue for why it is not an actual issue?
Replies: 23 · Reposts: 6 · Likes: 61 · Views: 16.8K
Andrew Barrett (@AndrewBarrettCa):
Great thread exploring potential negative impacts of AI in education.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 206
Andrew Barrett (@AndrewBarrettCa):
Our second post in the weekly #WisdomGap series is out! Learning from COVID-19 to Prepare for AI Disruption: Wisdom Gap Impacts on L&D. Let us know what you think! scalelearning.com/learning-from-…
Replies: 0 · Reposts: 1 · Likes: 0 · Views: 145
tobi lutke (@tobi):
Canadian government is announcing a code of conduct on AI today, another case of EFRAID. I won’t support it. We don’t need more referees in Canada. We need more builders. Let other countries regulate while we take the more courageous path and say “come build here.”
Yann LeCun (@ylecun):

The UK Prime Minister has caught the Existential Fatalistic Risk from AI Delusion disease (EFRAID). Let's hope he doesn't give it to other heads of state before they get the vaccine. "An AI safety summit at Bletchley Park in November is expected to focus almost entirely on existential risks and how to negate them." telegraph.co.uk/business/2023/…

Replies: 92 · Reposts: 197 · Likes: 1.7K · Views: 502K