Jane Chamberlain

704 posts


@ChambJane

College & Career Coach. I help students identify and earn admission to great-fit colleges and majors at https://t.co/9GNH4kiJF6

Grand Rapids, MI · Joined December 2023
122 Following · 76 Followers
Jane Chamberlain retweeted
Justin Skycak@justinskycak·
If you’re asking someone to be your mentor, you’re doing it wrong. Mentorship is not a favor you ask for. It’s a relationship that develops when you demonstrate that you are a cannon worth pointing at a big problem.
8
30
323
18.2K
Kyle Saunders@profgoose·
NEW FREE & SEARCHABLE RESOURCE: I mapped all 4-year colleges in US higher education along 2 dimensions (institutional resilience & post-college market position) using 8 federal data indicators PLUS a new institutional AI exposure measure (@AnthropicAI). kylesaunders.com/university-map/
18
51
421
1.3M
Jane Chamberlain@ChambJane·
@thetect0nic @pmarca It’s also not dystopian for the user who has the app that does exactly what they want without a lot of bloat.
0
0
1
5
The Tectonic@thetect0nic·
Andreessen is right and it is also his business model. a16z has invested in the infrastructure that makes personal apps possible: the models, the APIs, the deployment layer. "Everyone has an app, zero shared users" is not a dystopia for the people who own the picks and shovels. The app economy consolidated into platforms. The platform economy is consolidating into infrastructure. Everyone gets their own app. One or two people own everything underneath it.
1
0
0
105
Jane Chamberlain@ChambJane·
@johnarnold Greed rationalizes corruption. Having engaged a lot of young men interested in finance, Robinhood, instead of supporting an investment mindset, is corrupting it by promoting self-destructive habits.
0
0
0
30
John Arnold@johnarnold·
To stay aligned with its stated mission to “democratize finance for all,” Robinhood has to persuade both itself and its users that sports betting isn't actually gambling, but trading. This conflation of investing, trading, and gambling is anything but democratizing finance.
John Arnold tweet media
16
13
157
27.6K
Jane Chamberlain@ChambJane·
@PrestonCooper93 Yep. “Unintended consequences”…easily anticipated…incentives to get students to enroll without concern for impact on student, and often family, finances.
0
0
1
25
Preston Cooper@PrestonCooper93·
Why we have a Pell shortfall: "Disbursement data for the 2024–25 award year which ended June 30, 2025, shows that students received $39 billion in Federal Pell Grants, which represents a 24% increase in Pell Grant disbursements... from the same period for award year 2023–24."
2
0
2
294
Robert Bortins@TheRobertBshow·
It is time for March Madness. Who's your pick to win it all?
1
0
0
317
Jane Chamberlain@ChambJane·
As a college consultant, I often encourage students interested in computer science to combine it with another field of interest. As a human, his ability to work and think across domains will help him guide AI, know what questions to ask, and recognize the typical failure points of AI output.
0
0
2
113
Natasha Crain@Natasha_Crain·
My son is a high school junior who plans to major in computer science. It would be hard to imagine him doing anything else because computers have been his passion since he was in third grade. But that said, posts like the one below concern me and I don't really know what to do with the info. Tell him to find a totally different career? Forge ahead hoping that things are clearer when he graduates five years from now? Make sure he picks a school offering an AI focus? Don't send him to college and buckle down for an apocalypse? I'd love to hear from anyone working in tech as to what you'd tell my son from your view. If anyone has kids currently or recently in comp sci college programs, would LOVE to hear if your experience matches up with what this guy is saying.
Tech Layoff Tracker@TechLayoffLover

Spoke to a career counselor at a top 10 CS program yesterday. Her graduating class of 410 computer science majors just hit the market; 67 have full-time offers. That's a 16% placement rate.

Last year the same program placed 340 out of 380 students, an 89% rate. The remaining 343 new grads are now competing for "junior developer" roles that require 2-3 years' experience and AI tool proficiency.

Average student debt in the cohort: $174k. Average starting salary for the lucky 67: $89k, down from $145k two years ago.

The kids who got hired? All at companies building AI infrastructure. They're literally coding their own career's extinction.

One grad told me he's been applying since February. 1,240 applications sent. 8 phone screens. 2 final rounds. Zero offers. His last rejection email said they went with "an experienced developer who can work autonomously with AI assistance rather than requiring traditional mentorship."

The career center is still advertising a 94% placement rate on their website. They're counting food delivery and retail as "technology adjacent careers."

Three students in the program already switched to nursing school mid-semester when they saw the job market data.

Parents who took out Parent PLUS loans for these degrees are watching their kids compete with offshore contractors using Claude who work for $28k annually.

But hey, at least they learned data structures really well. The knowledge that's already obsolete.

27
0
15
9K
Jane Chamberlain@ChambJane·
“AI-enhanced productivity is not a shortcut to competence…[If] leaning on AI during [learning] damages how real competence forms, then we've got a generation building careers on ground that was never properly packed down. [And] the skills you'll need to properly supervise AI [deep understanding, the ability to read between the lines, to catch its mistakes] are exactly the ones getting eroded right now.”
Muhammad Ayan@socialwithaayan

🚨 BREAKING: Anthropic just published a study proving their own AI makes you worse at learning new skills. Not some outside critic taking shots. The company that made Claude themselves.

They put together this experiment with about fifty developers learning a brand new programming library they'd never seen before. One group had AI help the whole way through. The other group went at it without any assistance.

The ones with AI felt productive as hell. Answers came quick. They were shipping code left and right. Everything felt smooth. Then the tests hit. Real understanding of the library? The AI group got crushed. Weaker conceptual grasp. They struggled more just reading through code. Debugging became a nightmare. The AI had been doing the thinking, so their own brains never had to step up.

I caught myself doing the exact same thing a while back when I was forcing through a new framework. Felt like a genius until I had to explain it without the chat open. Brutal.

They went deeper and mapped out the different ways people actually interact with these tools while coding. Only some of those ways let real learning happen. The others give you this fake sense of progress: you're moving fast, tasks are getting done, but your actual skill level stays zero. The worst offender by far was full delegation. People who just handed the whole thing over to the AI got a little speed boost but walked away knowing less than they did at the start. They used the tool. The tool used their time.

And here's what really lands different. This isn't some random researcher warning about AI from the outside. These folks work at Anthropic. They build the models. They put this line straight in the paper: AI-enhanced productivity is not a shortcut to competence. That sentence is going to stick with a lot of people.

The thing is, this isn't just about developers. Every field right now is pushing beginners to use AI to "learn faster." Law, medicine, writing, data stuff, finance, engineering, you name it. But if leaning on AI during the actual learning phase quietly damages how real competence forms, then we've got a generation building careers on ground that was never properly packed down. They can get the model to spit out answers. Thinking for themselves when it counts? Different story.

What they also pointed out, and most people are missing it completely, is that the skills you'll need to properly supervise AI in the future (the deep understanding, the ability to read between the lines, to catch its mistakes) are exactly the ones getting eroded right now. You can't audit what you never learned to build yourself.

It's kind of like learning guitar by only ever playing along with perfect backing tracks and auto-tune. You can perform songs pretty quick, but take the training wheels off in a real jam session and suddenly your ear and timing never developed the way they should have.

Anthropic isn't out here saying ditch the AI completely. They're saying learn the thing first on its own terms. Bring the AI in after. If you're starting something new, maybe sit in the suck for a bit longer than feels comfortable before calling in the assistant.

0
0
0
128
Jane Chamberlain@ChambJane·
@PrestonCooper93 @AEIeducation It will be interesting to consider how the FAFSA debacle that impacted new enrollments of lower income students for 2024-2025 plays out. It may be tempting to interpret enrollment increases at less selective schools in 2025-2026 as something other than a return to the mean.
0
0
1
44
Preston Cooper@PrestonCooper93·
Last year, I published a report showing that most of the recent decline in college enrollment occurred at low-quality colleges. But the most recent data shows these schools might be making a comeback.
Preston Cooper tweet media
2
4
13
1.9K
Jane Chamberlain@ChambJane·
Don’t expect originality from AI models, and don’t allow them to flatten yours.
Alex Prompter@alex_prompter

🚨 BREAKING: Researchers at UW Allen School and Stanford just ran the largest study ever on AI creative diversity. 70+ AI models were given the same open-ended questions. They all gave the same answers.

They asked over 70 different LLMs the exact same open-ended questions. "Write a poem about time." "Suggest startup ideas." "Give me life advice." Questions where there is no single right answer. Questions where 10 different humans would give you 10 completely different responses. Instead, 70+ models from every major AI company converged on almost identical outputs. Different architectures. Different training data. Different companies. Same ideas. Same structures. Same metaphors.

They named this phenomenon the "Artificial Hivemind." And the paper won the NeurIPS 2025 Best Paper Award, which is the highest recognition in AI research, handed to a small number of papers out of thousands of submissions. This is not a blog post or a hot take. This is award-winning, peer-reviewed science confirming something massive is broken.

The team built a dataset called Infinity-Chat with 26,000 real-world, open-ended queries and over 31,000 human preference annotations. Not toy benchmarks. Not math problems. Real questions people actually ask chatbots every single day, organized into 6 categories and 17 subcategories covering creative writing, brainstorming, speculative scenarios, and more. They ran all of these across 70+ open and closed-source models and measured the diversity of what came back.

Two findings hit hard.

First, intra-model repetition. Ask the same model the same open-ended question five times and you get almost the same answer five times. The "creativity" you think you're getting is the same output wearing a slightly different outfit. You ask ChatGPT, Claude, or Gemini to write you a poem about time and you keep getting the same river metaphor, the same hourglass imagery, the same reflection on mortality. Over and over. The model isn't thinking. It's defaulting to whatever scored highest during alignment training.

Second, and this is the one that should really alarm you, inter-model homogeneity. Ask GPT, Claude, Gemini, DeepSeek, Qwen, Llama, and dozens of other models the same creative question, and they all converge on strikingly similar responses. These are models built by completely different companies with different architectures and different training pipelines. They should be producing wildly different outputs. They're not. 70+ models all thinking inside the same invisible box, producing the same safe, consensus-approved content that blends together into one indistinguishable voice.

So why is this happening? The researchers point directly at RLHF and current alignment techniques. The process we use to make AI "helpful and harmless" is also making it generic and boring. When every model gets trained to optimize for human preference scores, and those preference datasets converge on a narrow definition of what "good" looks like, every model learns to produce the same safe, agreeable output. The weird answers get penalized. The original takes get shaved off. The genuinely creative responses get killed during training because they didn't match what the average annotator rated highly.

And it gets even worse. The study found that reward models and LLM-as-judge systems are actively miscalibrated when evaluating diverse outputs. When a response is genuinely different from the mainstream but still high quality, these automated systems rate it LOWER. The very tools we built to evaluate AI quality are punishing originality and rewarding sameness.

Think about what this means if you use AI for brainstorming, content creation, business strategy, or literally any task where you need multiple perspectives. You're getting the illusion of diversity, not the real thing. You ask for 10 startup ideas and you get 10 variations of the same 3 ideas the model learned were "safe" during training.
You ask for creative writing and you get the same therapeutic, perfectly balanced, utterly forgettable tone that every other model gives. The researchers flagged direct implications for AI in science, medicine, education, and decision support, all domains where diverse reasoning is not a nice-to-have but a requirement. Correlated errors across models mean if one AI gets something wrong, they might ALL get it wrong the same way. Shared blind spots at massive scale. And the long-term risk is even scarier. If billions of people interact with AI systems that all think identically, and those interactions shape how people write, brainstorm, and make decisions every day, we risk a slow, invisible homogenization of human thought itself. Not because AI replaced creativity. Because it quietly narrowed what we were exposed to until we all started thinking the same way too.

Here's what you can actually do about it right now:
→ Stop accepting first-draft AI output as creative or diverse. If you need 10 ideas, generate 30 and throw away the obvious ones
→ Use temperature and sampling parameters aggressively to push models out of their comfort zone
→ Cross-reference multiple models AND multiple prompting strategies, because the same model with different prompts often beats different models with the same prompt
→ Add constraints that force novelty, like "give me ideas that a traditional investor would hate" instead of "give me creative ideas"
→ Use structured prompting techniques like Verbalized Sampling to force the model to explore low-probability outputs instead of defaulting to consensus
→ Layer your own taste and judgment on top of everything AI gives you. The model gets you raw material. Your weirdness and experience make it original

This paper puts hard data behind something a lot of us have been feeling for a while. AI is getting more capable and more homogeneous at the same time. The models are smarter, but they're all smart in the exact same way.
The Artificial Hivemind is not a bug in one model. It's a systemic feature of how the entire industry builds, aligns, and evaluates language models right now. The fix requires rethinking alignment itself, moving toward what the researchers call "pluralistic alignment" where models get rewarded for producing diverse distributions of valid answers instead of collapsing to a single consensus mode. Until that happens, your best defense is awareness and better prompting.
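The temperature advice in the thread above is concrete enough to sketch without any model at hand: temperature is just a divisor applied to next-token logits before the softmax, so T < 1 sharpens the distribution toward the consensus token and T > 1 flattens it so tail ideas surface. A minimal stdlib-Python illustration; the tokens and logit values are toy numbers invented for the example, not data from the paper:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Divide logits by temperature before normalizing:
    # T < 1 sharpens the distribution, T > 1 flattens it.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng):
    """Draw one token from the temperature-adjusted distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Toy next-token distribution with one dominant "consensus" continuation
# (the river metaphor everyone keeps getting).
tokens = ["river", "hourglass", "clockwork", "tide", "rust"]
logits = [4.0, 2.0, 1.0, 0.5, 0.1]

rng = random.Random(0)
low_t = {sample_token(tokens, logits, 0.2, rng) for _ in range(200)}
high_t = {sample_token(tokens, logits, 1.5, rng) for _ in range(200)}

# At T=0.2 nearly every draw is the consensus token; at T=1.5 the
# tail tokens start showing up, i.e. more distinct outputs.
print(len(low_t), len(high_t))
```

This only widens the distribution a single model already has; per the study, it cannot restore ideas that alignment training pruned away, which is why the other bullets (constraints, multiple prompting strategies, your own judgment) still matter.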

0
0
0
11
Jane Chamberlain@ChambJane·
For students who already struggle to differentiate themselves from the "hive mind," this is especially dangerous. We will see even bigger shifts toward "different in the same way."
Ihtesham Ali@ihtesham2005

🚨 Stanford researchers just exposed a weird side effect of AI that almost nobody is talking about. The paper is called "Artificial Hivemind." And the core finding is unsettling. As language models get better, they also start sounding more and more the same. Not just within a single model. Across different models.

Researchers built a dataset called INFINITY-CHAT with 26,000 real open-ended questions: things like creative writing, brainstorming, opinions, and advice. Questions where there isn't a single correct answer. In theory, these prompts should produce huge diversity. But the opposite happened.

Two patterns showed up:
1) Intra-model repetition: the same model keeps producing very similar answers across runs.
2) Inter-model homogeneity: completely different models generate strikingly similar responses.

In other words: instead of thousands of unique perspectives, we're getting the same few ideas recycled over and over. The authors call this the "Artificial Hivemind." It happens because most frontier models are trained on similar data, optimized with similar reward models, and aligned using similar human feedback.

So even when you ask something open-ended like:
• "Write a poem about time"
• "Suggest creative startup ideas"
• "Give life advice"
many models converge toward the same phrasing, metaphors, and reasoning patterns.

The scary implication isn't about AI quality. It's about culture. If billions of people rely on the same systems for ideas, writing, brainstorming, and thinking, AI might slowly compress the diversity of human thought. Not because it's trying to. But because the models themselves are drifting toward the same answers.

That's the real risk the paper highlights. Not that AI becomes smarter than humans. But that everyone starts thinking like the same machine.

0
0
1
18
أحمد الردادي@ahmedalradadi·
@pmarca Disconnected from free markets and reality in general. I remember Taleb talked about this in The Black Swan: "A field that is judged by peers becomes self-referential; a field judged by reality becomes truthful."
2
7
145
10.3K
Marc Andreessen 🇺🇸
What do the fields/domains/industries with prizes all have in common?
338
19
818
240.9K
Grand Rapids Businessman@GR_businessman·
NEWS: The Grand Rapids tech firm has received approval to transform the former Rockwell Republic restaurant on South Division into their new headquarters. Big Picture: • The Move: 35–40 employees are relocating to the Heartside neighborhood this spring. • The Upgrade: The office will feature West Michigan’s first accredited SCIF (Sensitive Compartmented Information Facility) for handling classified defense info. • The Changes: It’s exciting growth for the downtown tech hub, but sad for some that a local favorite had to close its doors.
Grand Rapids Businessman tweet media
1
1
9
993
Jane Chamberlain@ChambJane·
“Rubio’s speech appealed to that deeper bond. He called for an unapologetic pride in a common inheritance. Pride here does not mean arrogance. It means gratitude and stewardship. By contrast, a purely materialist understanding of culture risks flattening the transcendent dimension of human life. If culture is only the product of economic forces, then beauty, faith, and tradition lose intrinsic value. They become instruments in a struggle for power.”
Kevin Briggins@KJBrigg

x.com/i/article/2023…

0
0
1
40
Jane Chamberlain@ChambJane·
@eduleadership Yes, when the student-system fit is there, it clearly “works” at producing testable learning outcomes.
0
0
0
41
Jane Chamberlain@ChambJane·
“Ambition redirected from optimizing engagement metrics to building rockets. From scaling users to scaling factories. From virtual products to physical infrastructure. That shift matters more than any vehicle or spacecraft Musk delivered. Products obsolesce. Redirecting an entire generation’s engineering ambition from digital to physical compounds across decades and rebuilds industrial capability at civilizational scale.”
Dustin@r0ck3t23

Katherine Boyle just identified Elon Musk's most important contribution to America, and it has nothing to do with the products he shipped.

Boyle, General Partner at a16z: "I think Elon's most important contribution to this country is training two generations of engineers to work with their hands again."

For ten years, America's sharpest technical minds optimized ad clicks and built messaging apps. Software consumed ambition. The physical world became something you abstracted into APIs, not something you touched or understood. Elon didn't reverse that through inspiration. He reversed it by building companies that required understanding manufacturing or failing completely. SpaceX and Tesla forced engineers to learn how metal fractures, how tolerances cascade through systems, how physical iteration costs months and millions per failure. No debugging. No patches. Just physics that doesn't negotiate.

Boyle: "Training two generations of engineers." The product isn't the cars. It's the people.

Look at who's founding America's critical hard-tech companies now. The common thread isn't Stanford or MIT. It's time on factory floors at SpaceX or Tesla. They learned welding. They learned that "impossible" just means unsolved engineering, not violated physics. They learned failure in the physical domain, where mistakes compound instead of reverting. Elon didn't just build companies. He accidentally rebuilt industrial knowledge that had been decaying for thirty years while America's best minds chased digital scale.

Boyle: "Work with their hands again." Three words that sound quaint but describe a civilizational inflection point. Software dominated because it scaled infinitely at zero marginal cost. Physical manufacturing was slow, expensive, unfashionable. Building real things became what you did if you couldn't code. Elon made atoms matter again. Made manufacturing the hardest problem worth solving. Made physical engineering prestigious in ways it hadn't been since humans walked on the moon.

The evidence is everywhere now. Technical talent that doesn't default to "which app" but asks "which physical thing should exist that currently doesn't." Ambition redirected from optimizing engagement metrics to building rockets. From scaling users to scaling factories. From virtual products to physical infrastructure. That shift matters more than any vehicle or spacecraft Musk delivered. Products obsolesce. Redirecting an entire generation's engineering ambition from digital to physical compounds across decades and rebuilds industrial capability at civilizational scale.

We stopped just coding the future. We started machining it, welding it, breaking it in reality until physics confirms it works. That transformation from virtual to tangible ambition is reconstructing American manufacturing one engineer at a time. And those engineers are now training the next wave. The compounding has started. The School of Elon doesn't need Elon anymore. It's self-sustaining, spreading through an entire generation that learned building real things matters more than building virtual ones.

That's not just a business achievement. That's a civilization remembering how to make things that matter in the physical world again. And it might be the only thing that saves American technological leadership when the competition is just building faster because they never forgot.

1
0
0
25