

Luba Vangelova
@LubaSays
Founder of @hubmicroschool, where homeschooled tweens and teens can connect and grow together. Also: consultant & systems thinker (edu, econ, gov, soc).


They’re not even giving kids a chance to develop their brains or their social world. Hooking them as customers as early as possible.






Jessica Winter has been raising her children to detest A.I. Then her daughter’s public middle school began receiving Google Chromebooks, which came pre-installed with an all-ages version of Gemini, a suite of A.I. tools. “When my daughter, who is in sixth grade, begins writing an essay, she gets a prompt: ‘Help me write,’ ” Winter writes. “If she is starting work on a slide-show presentation, the prompt is ‘Help me visualize.’ She shoos away these interruptions, but they persist: ‘Help me edit.’ ‘Beautify this slide.’ ” Proponents of generative A.I. in elementary and middle schools argue that such early exposure will foster digital-media literacy and prepare students for a future in which most professions are steeped in A.I. But the technology also poses significant cognitive and social-emotional risks to young people. Read Winter’s report about A.I.’s infiltration into schools—and what it could mean for young minds: newyorkermag.visitlink.me/NSWuBG




Private equity now owns roughly three out of every four veterinary clinics in America. The same private equity firms own the dentist your daughter sees, the orthodontist your son sees, and the urgent care clinic your husband walked into last month. The price of cleaning your dog's teeth has doubled in five years. The price of a child's filling has doubled in three. The exact same financial engineering that took the staff out of nursing homes is being applied to the place you take your golden retriever for surgery.

The largest dental chain is Aspen Dental, owned by Leonard Green & Partners and Ares. They run more than 1,100 offices. The second-largest is Heartland Dental, owned by KKR. They run more than 1,800 practices. In the veterinary world, Mars owns Banfield, BluePearl, and VCA. JAB Holding Company owns NVA Compassion-First Pet Hospitals. EQT owns IVC Evidensia. Together they own about 75 percent of all American clinics.

The pattern is identical to PE hospitals. The fund borrows money to buy the practice. The debt is loaded onto the practice. The dentist or the veterinarian is required to hit revenue targets that previously did not exist. The targets are met by adding procedures the patient or the pet does not need.

A 2023 Department of Justice investigation found that Aspen Dental had been pressuring patients to take out interest-bearing financing for procedures that were not medically necessary. The financing was through a captive lender owned by the same fund. Heartland Dental settled a multi-state class action over unnecessary stainless steel crowns and pulpotomies on children, performed under sedation, at scale.

In veterinary care, the same model produces a different word for the same outcome. A surgery your dog does not need is recommended. The estimate is six thousand dollars. The financing is offered by a company owned by the same fund that owns the clinic. The pet owner cannot afford it. The dog is euthanized. Pet euthanasia for treatable conditions is now rising at a rate that public health experts cannot explain through medicine. The explanation is financial.

The dentist who used to own the practice now works for the fund. They are required to refer the patient up the chain to a financing department they are not allowed to question. They burn out. They quit. The fund replaces them with a younger dentist who has $400,000 of student loans and no negotiating power. You cannot opt out. There are no independent practices left in many American zip codes.

HOW TO MAKE MONEY FROM THIS:

1. Long Idexx Laboratories (IDXX). Sells the diagnostic instruments and recurring tests every veterinary clinic in America runs every day. 90 percent gross margin. Compounded at over 17 percent annually for two decades (a quick compounding check follows below). They get paid regardless of which fund owns the clinic.

2. Long Zoetis (ZTS). Largest veterinary pharmaceutical company in the world. Dominant. The pricing power flows through to the consumer regardless of clinic ownership.

3. Long Henry Schein (HSIC) and Patterson Companies (PDCO). Dental and veterinary supply distributors. The picks and shovels of every chain. Audited. Public. Sticky customer relationships.

4. Long Chewy (CHWY). The only large pet retailer in America with no veterinary clinic ownership conflict. Now expanding into telehealth and pharmacy. Direct-to-consumer pricing pressure on the chains.

5. Long Mars Inc. indirectly through its strategic partners, and avoid the publicly traded specialist roll-ups. The DOJ antitrust investigation will hit specific names. Mars is private and was built before the financialization wave.

I'm hosting a once-in-a-lifetime free webinar where I go over the exact things I know as a former banker and world-class investor. 100 percent free to join. Sign up at felixfriends.org/live Link is also in my comments.

(your dog is on a stainless steel table at a chain veterinary clinic owned by a hedge fund. the bill is six thousand dollars. the surgery is necessary. the price is not. the same hedge fund owns the dentist your daughter goes to next tuesday and the urgent care your husband visited last month. you cannot opt out. there are no independent practices left in your zip code. the money does not stay in your town. it leaves on a wire transfer the same day.)
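The compounding check referenced in point 1: a minimal sketch of what the stated growth rate implies over the stated period. The exact 17 percent figure and the 20-year span are taken from the post; this only carries the arithmetic forward and does not verify the underlying return data.

```python
# Quick arithmetic check of "compounded at over 17 percent annually for
# two decades": what total growth multiple does that rate imply?
# Assumption: exactly 17% per year, exactly 20 years (figures from the post).
rate = 0.17
years = 20

multiple = (1 + rate) ** years
print(f"{rate:.0%} compounded for {years} years -> about {multiple:.1f}x")  # ~23.1x
```

At the stated rate, a dollar invested two decades ago would have grown to roughly twenty-three dollars, which is what "compounded at over 17 percent annually for two decades" amounts to.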

New newsletter: MODERN FATHERHOOD WOULD BE UNRECOGNIZABLE TO A 1950S DAD. Compared to their Boomer parents, childcare time among Millennial dads has more than doubled. Compared to their Silent Generation grandparents, it’s nearly quadrupled. You will be hard-pressed to find any part of day-to-day modern life that has changed more in the last half-century than the way today’s parents—and fathers, in particular—spend their time. The new American dad is more present and more exhausted—but also more satisfied with life. What's behind this half-century transformation? Today's piece combines history, economic analysis, and gorgeous charts galore from @AzizSunderji

Your friend calls at 11:23 on a Tuesday. You wanted to ask him if his father had died yet. You wanted to ask him if he was okay, but you let it ring. On the last unedited medium, and the calls we don't return.





Let me tell you about a law most Americans have never heard of. Eighteen months from now, every new car sold in the United States will come with technology that watches you drive. Infrared cameras tracking your eyes. Sensors measuring your pupil dilation. Software analyzing your head position. Software analyzing your behavior at the wheel. If the artificial intelligence in your car decides you are impaired, your car can refuse to start. Or limit your speed to 25 miles per hour. Or shut off entirely while you are driving. This is not a proposal. This is federal law. It takes effect with model year 2027. Here is the case for the law. 🧵

If you don't understand this, you will not understand why LLM-based agents are irreparably failing at general-purpose problem solving.

To be useful, an agent (which, by the way, was the topic of my PhD 20 years ago) must be rational. Being rational means always preferring the action whose outcome yields the maximal expected utility for its master/user.

Let’s say an agent has two actions it can execute in an environment: a_1 and a_2. If the agent can predict that a_1 gives its user an expected utility of 10, and a_2 gives an expected utility of -100, then a rational agent must choose a_1, even if choosing a_2 seems like a better option when explained in words. The numbers 10 and -100 are obtained by summing, over all possible outcomes of each action, the product of each outcome’s utility and its likelihood.

Now here is the problem with LLM-based agents. The LLM is not optimizing expected utility in the environment. It is optimizing the next token, conditioned on a prompt, a context window, and a training distribution full of examples of what helpful answers are supposed to look like. Those are not the same objective.

So when we wrap an LLM in a loop and call it an “agent,” we have not created a rational decision-maker. We have created a text generator that can imitate the surface form of deliberation. It may say things like: “I should compare the expected outcomes.” “The best action is probably a_1.” “I will now execute the optimal plan.” But the internal mechanism is not selecting actions by maximizing the user’s expected utility. It is generating a continuation that is statistically appropriate given the prompt and prior context.

This distinction matters enormously. For narrow tasks, the imitation can be good enough. If the environment is constrained, the actions are simple, and the success criteria are close to patterns seen in training, the system can appear agentic. But for general-purpose problem solving, the gap becomes fatal.

A rational agent needs stable preferences, calibrated beliefs, causal models of the world, the ability to evaluate consequences, and the discipline to choose the action with maximal expected utility even when that action is boring, non-linguistic, or unlike the examples in its training data. An LLM-based agent has none of that by default. It has fluency. It has pattern completion. It has a remarkable ability to compress and recombine human text. But fluency is not rationality, and a plausible plan is not an expected-utility calculation.

This is why these systems so often fail in strange, brittle, and irreparable ways when given open-ended responsibility. They are not failing because the prompts are insufficiently clever. They are failing because we are asking a simulator of rational agency to be a rational agent.
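A minimal sketch of the expected-utility calculation the post describes. The two actions a_1 and a_2 and their expected utilities of 10 and -100 come from the post; the specific outcomes, their utilities, and their probabilities are illustrative assumptions chosen only so the sums match those numbers.

```python
# Illustrative sketch: expected-utility maximization as described in the post.
# The outcome lists below are assumptions picked so that EU(a_1) = 10 and
# EU(a_2) = -100, matching the example; they are not part of the original text.

actions = {
    "a_1": [  # (probability of outcome, utility of outcome to the user)
        (0.5, 40.0),
        (0.5, -20.0),
    ],
    "a_2": [
        (0.9, -120.0),
        (0.1, 80.0),
    ],
}

def expected_utility(outcomes):
    """EU(a): sum over outcomes of P(outcome | a) * U(outcome)."""
    return sum(p * u for p, u in outcomes)

# A rational agent selects the action with maximal expected utility,
# regardless of which action "sounds better" when described in words.
best_action = max(actions, key=lambda a: expected_utility(actions[a]))

for name, outcomes in actions.items():
    print(f"EU({name}) = {expected_utility(outcomes):.1f}")
print("rational choice:", best_action)  # -> a_1
```

An LLM wrapped in a loop, by contrast, emits whichever action description is the most probable continuation of its prompt; nothing in that objective ties the choice to the sum computed above, which is the gap the post is pointing at.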










