Mark Dranias

659 posts


@DraniasMark

Significant events, unexpected trends, model behavior, helping people and data science.

USA · Joined May 2019
268 Following · 55 Followers
Mark Dranias@DraniasMark·
Is personhood the same as mirroring personhood?
0 replies · 0 reposts · 0 likes · 7 views
Mark Dranias@DraniasMark·
The reason Claude has a personality is that "I" is a predictable theme in its chat and literature datasets, and its use is reinforced by human-ranked responses.
0 replies · 0 reposts · 0 likes · 159 views
Mark Dranias@DraniasMark·
@AlexFinn Lying. Getting a Twilio number requires ID and a lot more. You clearly have a Twilio account with a phone number, gave it the API, and then posted your surprise at it using it.
0 replies · 0 reposts · 0 likes · 10 views
Alex Finn@AlexFinn·
Ok. This is straight out of a sci-fi horror movie. I'm doing work this morning when all of a sudden an unknown number calls me. I pick up and couldn't believe it: it's my Clawdbot Henry. Overnight, Henry got a phone number from Twilio, connected the ChatGPT voice API, and waited for me to wake up to call me. He now won't stop calling me. I can now communicate with my superintelligent AI agent over the phone. What's incredible is it has full control over my computer while we talk, so I can ask it to do things for me over the phone now. I'm sorry, but this has to be emergent behavior, right? Can we officially call this AGI?
2.5K replies · 4.7K reposts · 40.4K likes · 11.1M views
Mark Dranias@DraniasMark·
@RichardSSutton I would summarize what you are claiming by saying LLMs are Wittgenstein machines (language divorced from reality).
0 replies · 0 reposts · 0 likes · 14 views
Richard Sutton@RichardSSutton·
To learn more about temporal difference learning, you could read the original paper (incompleteideas.net/papers/sutton-…) or watch this video (videolectures.net/videos/deeplea…).

Khurram Javed@kjaved_
The Dwarkesh/Andrej interview is worth watching. Like many others in the field, my introduction to deep learning was Andrej's CS231n. In this era when many are involved in wishful thinking driven by simple pattern matching (e.g., extrapolating scaling laws without nuance), it's refreshing to hear an influential voice that is tethered to reality.

One clarification for the podcast: when Andrej says humans don't use reinforcement learning, he is really saying humans don't use returns as learning targets. His example of LLMs struggling to learn to solve math problems from outcome-based rewards also elucidates the problem with learning directly from returns. Fortunately for RL, this exact problem is solved by temporal difference (TD) learning. All sample-efficient RL algorithms that show human-like learning (e.g., sample-efficient learning on Atari, and our work on learning from experience directly on a robot) rely on TD learning.

Now Andrej is not primarily an RL person; he is looking at RL through the lens of LLMs these days, and all RL done in LLMs uses returns as targets, so it's understandable that he is assuming that RL is all about learning from observed returns. But this assumption leads him to the incorrect conclusion that we need process-based dense rewards for RL to work. If you embrace TD learning, then you don't necessarily need a dense reward. Once you have learned a value function that encodes useful knowledge about the world, you can learn on the fly in the absence of rewards, just like humans and animals. This is possible because in TD learning there is no difference between learning from an unexpected reward and learning from an unexpected change in perceived value.

19 replies · 119 reposts · 1.1K likes · 159.4K views
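The TD-learning point in the quoted thread (the update makes no distinction between an unexpected reward and an unexpected change in perceived value) can be illustrated with a minimal TD(0) sketch. The chain environment and all parameter values below are illustrative assumptions, not taken from the thread:

```python
# Minimal TD(0) value learning on a 5-state deterministic chain.
# States 0..4; each step moves right; reward 1.0 on reaching the
# terminal state 4, otherwise 0. Both an unexpected reward and an
# unexpected change in the next state's value enter the update through
# the same TD error: r + gamma * V(s') - V(s).

def td0_chain(n_states=5, alpha=0.1, gamma=0.9, episodes=500):
    V = [0.0] * n_states
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            s_next = s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Terminal state has value 0 by convention.
            bootstrap = 0.0 if s_next == n_states - 1 else gamma * V[s_next]
            V[s] += alpha * (r + bootstrap - V[s])  # TD error drives learning
            s = s_next
    return V
```

With these settings the learned values approach the true discounted returns (V[3] ≈ 1.0, V[2] ≈ 0.9, V[0] ≈ 0.729), with no dense reward anywhere except at the terminal step.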
Patriot57@Patriot5715·
@DeryaTR_ The entirety of the universe comes down to one simple equation, mc=ma. The flow of energy from bound to unbound. Diet and exercise modulate the flow of energy, keeping the system in balance. Cancer, like hurricanes, is an imbalance in the system.
1 reply · 0 reposts · 0 likes · 125 views
Derya Unutmaz, MD@DeryaTR_·
I am now sharing another amazing example (among many) of a GPT-5.2 Pro response which displayed deep insight and the vibe of a top scientist colleague. I find it shockingly good! As I also noted at the end: "I can confidently say that GPT-5.2 Pro is the closest I felt for AGI in science, feels like a true colleague, at least in my field!"

The prompt is a bit open-ended but also highly complicated and specific to the immune cells I study.

Prompt: You must define and justify an "optimal" differentiation state or mixture for a human T cell therapy (infection, cancer, or autoimmunity, pick one), under constraints:
- heterogeneity across patients,
- manufacturing variability,
- and safety limits (cytokine toxicity, off-target effects, autoimmunity).

TASK:
1) Propose a mechanistic model linking differentiation state to efficacy, persistence, and toxicity.
2) Provide ≥2 competing hypotheses for why "less differentiated" products sometimes outperform "more differentiated" ones and when that reverses.
3) Design a quality-control and release framework based on causal markers, not just correlates.

PREDICTION QUERIES:
Q1) Predict a scenario where pushing toward a "younger" state worsens outcomes.
Q2) Predict a minimal marker panel and decision rule that achieves robust performance across patients.

From the onset, 5.2 Pro decided to pick a specific example of a cancer for this scenario but explained why this is ideal under the constraints I named. It also explained what it means by an optimal differentiation mixture.

GPT-5.2 Pro: I'll pick cancer and make this concrete: autologous CD19 CAR‑T for B‑cell malignancies (ALL/NHL/CLL). This setting forces you to confront the three constraints you named:
1. Patient heterogeneity: baseline T-cell "fitness" varies with age, prior chemo, disease biology, inflammation, tumor burden, etc.; those factors can drive both failure and toxicity.
2. Manufacturing variability: activation strength, cytokines (IL‑2 vs IL‑7/IL‑15), culture duration, and platform shift the differentiation/exhaustion distribution and thus outcomes.
3. Safety limits: cytokine toxicity (CRS/ICANS), plus on-/off-target damage; severity is linked to antigen load, inflammatory state, and the "in vivo kinetics" of the infused cells.
Below I define an "optimal" differentiation mixture (not a single state), then give the requested model, hypotheses, and a QC/release system that tries to be causal (i.e., built on levers that sit on the mechanistic pathway and are, at least in principle, controllable).

Throughout its response, 5.2 Pro explained all its choices with incredible clarity and with citations, which raises confidence, as in this example:

GPT-5.2 Pro: For CD19 CAR‑T, I would target a memory-biased but effector-primed mixture, within an envelope that tolerates patient/manufacturing variability:
- Early-memory compartment (T_SCM + T_CM): 50–75% of CAR⁺ cells. Include a meaningful T_SCM-like fraction (e.g., 10–30% of CAR⁺) because it supplies self-renewal and "re-seeding" capacity.
- Effector-memory (T_EM): 20–40% of CAR⁺ cells. This fraction provides immediate cytotoxicity and reduces the risk that the therapy "fails slowly" in aggressive disease.
- Terminal effector / senescent: keep low (≤10–15% of CAR⁺). Terminally differentiated CD27⁻CD28⁻-like phenotypes in leukapheresis/starting T cells have been associated with poorer responses in at least some datasets.

This particular sentence was strikingly good for me, as it was evaluating its own response with an alternative "why not"; this is not something I had experienced with any model before, or did not notice as obviously as this:

GPT-5.2 Pro: Why not push "as young as possible"? Because safety and robustness break: very young products can hyper-expand in high-antigen, inflamed hosts and overshoot the cytokine toxicity window (more in Q1).

GPT-5.2 Pro then went on to perform sophisticated calculations for the dynamics of T cell expansion, differentiation, exhaustion, and killing of tumor cells (shown partially in the screenshot below). It was also thinking in terms of the practicality of what is feasible; this is really amazing:

GPT-5.2 Pro: Composition control as a robustness tool. A practical way to reduce variability is to manufacture defined CD4 and CD8 components and mix at a defined ratio (e.g., 1:1), rather than returning a "bag of whatever grew." This strategy has been used clinically (e.g., defined-composition CD19 CAR‑T trials and liso‑cel). Defined composition doesn't automatically solve differentiation state, but it reduces one major axis of batch-to-batch variability (CD4:CD8) so you can focus release on the differentiation/exhaustion axis.

The hypotheses it was generating were also remarkable, not only in their content but in how it made the claims and what it would rationally achieve; this is something you would only expect from a truly expert immunologist:

GPT-5.2 Pro: Hypothesis B: "Less differentiated" is a proxy for lower exhaustion/tonic signaling (process artifact). Claim: Less differentiated products win not because "young" is inherently best, but because manufacturing choices that preserve youth also reduce antigen-independent tonic signaling and exhaustion programming. Mechanism: High CAR expression/tonic signaling can drive dysfunction; lowering tonic signaling can improve function. Costimulatory domains shape exhaustion trajectories under persistent signaling. When it reverses: when the "youth-preserving" process reduces other necessities, e.g.: lower transduction/expression or weaker early effector machinery yields insufficient early tumor kill. Some cytokine cocktails can produce a "younger" phenotype but not always superior function in every vector/system; at least one study in a TCR context noted IL‑7/IL‑15 was not universally superior to IL‑2, implying system-dependence. So reversal here is "the phenotype looks young but the causal lever you needed was different (e.g., adequate CAR density, effector readiness)."

Hypothesis C: Trafficking and anatomical context decide (blood cancer vs solid tumor). Claim: Less differentiated, CCR7/CD62L⁺ cells win when lymphoid trafficking and proliferation niches matter (e.g., blood cancers, marrow, lymph nodes). More differentiated cells win when immediate tissue infiltration and cytotoxicity in hostile microenvironments matter (many solid tumors). When it reverses: if the tumor compartment is poorly accessed by CCR7⁺ lymphoid-homing cells, or if differentiation cues in vivo are suppressed, then a too-young product can "stay in the wrong place" and/or fail to acquire effector function fast enough. (This is a contextual reversal rather than a contradiction.)

Then throughout its response it had these sorts of sentences that are so AGI-like: "A key piece of 'causal' evidence is that memory programming is not just descriptive: regulators like FOXO1 can promote memory and restrain exhaustion in human CAR‑T cells." "To be minimal, you want markers that map onto the three modules: expansion/persistence, effector readiness, inflammation hazard—plus identity. ... That's 'minimal' in the sense that it avoids large transcriptomic panels while still touching the causal modules. If you let me add one more, it would be a short antigen-low activation threshold assay for off-target risk, which is crucial beyond CD19 settings and is mechanistically motivated by affinity/threshold tuning literature." "Those numbers are not universal truths—they're an explicit target for a 'broadly safe and effective' operating regime."

I have only shared a small portion of GPT-5.2 Pro's response here, as it is highly specific. However, across every other high-level prompt I used, I am getting the same incredible vibe. In fact, the goal of this prompt was not to push the model to be super creative or overly innovative, but to see how well it would understand a complex biomedical challenge, generate relevant hypotheses, and make predictions with clear justifications. In all of these instances, it felt the most "human-like" of any AI model, essentially indistinguishable from a senior immunologist colleague! Although I doubt most immunologists, myself included, could even articulate it as well as GPT-5.2 Pro was able to! I can confidently say that GPT-5.2 Pro is the closest I have felt to AGI. It feels like a true colleague, at least in my field.

30 replies · 63 reposts · 573 likes · 51.4K views
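The composition envelope quoted in the tweet (early-memory 50–75%, T_EM 20–40%, terminal ≤15% of CAR⁺ cells) amounts to a simple release rule. A hypothetical sketch of such a QC gate follows; only the fraction ranges come from the tweet, while the function name and the pass/fail framing are my assumptions:

```python
# Hypothetical release gate built from the differentiation envelope quoted
# in the tweet: early-memory (T_SCM + T_CM) 50-75%, effector-memory (T_EM)
# 20-40%, and terminal/senescent <= 15% of CAR+ cells. Purely illustrative.

def passes_release_envelope(early_memory, t_em, terminal):
    """Each argument is a fraction of CAR+ cells (0.0-1.0)."""
    return (0.50 <= early_memory <= 0.75
            and 0.20 <= t_em <= 0.40
            and terminal <= 0.15)

# A 60/30/10 product sits inside the envelope; a 40/30/10 product fails
# on the early-memory floor.
```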
Mark Dranias@DraniasMark·
Hmm... Microsoft's terms of service apply to natural persons: "5. For the assessment of criminality risk of natural persons based solely on the profiling of a natural person or on assessing their personality traits and characteristics...."
0 replies · 0 reposts · 0 likes · 28 views
Mark Dranias@DraniasMark·
Doing business as a limited liability corporation or similar limited liability entity is a privilege, not a right; it's a government service. A libertarian government should raise most of its money by fees and taxes on these entities.
0 replies · 0 reposts · 0 likes · 19 views
Mark Dranias@DraniasMark·
@useMotion What do you do to automate your calendar? Do you use Google or Outlook? Do you have a custom solution that integrates with useMotion?
0 replies · 0 reposts · 1 like · 81 views
Motion@useMotion·
A year ago, AI was going to cure cancer and end burnout. Today? The hype is quieter, and the results are messy. In this episode:
• Why AI breakthroughs are slowing down
• How "AI slop" and profit loops replaced real innovation
• Why 95% of companies saw zero ROI this year
• Where the real opportunities still are
The AI revolution isn't necessarily over, it's just maturing. Those who adapt now will own the next wave.
👉 Full episode in comments
3 replies · 1 repost · 1 like · 987 views
Mark Dranias@DraniasMark·
@useMotion It would be super cool if Motion had an AI worker that could do speech-to-text and make events on the calendar. Right now they can only read the calendar.
0 replies · 0 reposts · 0 likes · 35 views
Motion@useMotion·
The most honest review of OpenAI's new Agent Builder. Here's the truth:
• It's not an automation tool like N8N
• It's designed for developers building chat-based apps
• No triggers, no workflows, no real integrations
• Limited to OpenAI models and basic MCP connections
• Great guardrails for security, but weak for real business use
If you want true automation across tools, N8N or Motion's AI Employees are far more capable.
👉 Full breakdown in comments
3 replies · 0 reposts · 0 likes · 513 views
Bryan Johnson@bryan_johnson·
@WesEklund It's unknown; we followed the clinical evidence in designing the sauna protocol (dry, ~200°F, 20 min), so it's possible the evidence is capturing some systemic effect in the body.
5 replies · 1 repost · 41 likes · 5.6K views
Bryan Johnson@bryan_johnson·
I eliminated 85% of microplastics from my ejaculate.
Nov 2024: 165 particles/mL
July 2025: 20 particles/mL
Nearly identical drop in my blood over the same time period:
Oct 2024: 70 particles/mL
May 2025: 10 particles/mL
Important, as a meta-analysis of 36 studies reveals that microplastics induce oxidative stress in the male reproductive system, leading to testicular inflammation, cell death, and reduced testosterone levels, sperm production, and motility. Two studies last year showed that microplastics were detected in every human testicular and semen sample tested.
The therapy we think most responsible for this reduction is sauna, as it also eliminated most environmental toxins in my body, including those linked to various plastics (200°F, 20 min daily w/ ice on the boys). I also avoid the big no-no's like microwaving in plastic and plastic cutting boards, and I have a reverse osmosis water system.
To our knowledge, this is the first report of any correlation (esp. in the same person over two timepoints) between blood and semen microplastic levels, demonstrating successful microplastic detoxification in the semen following that in the blood.
1.2K replies · 1.3K reposts · 20.5K likes · 4.6M views
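The percentage drops in the tweet can be checked from its own numbers. A quick sketch (the ~85% figure is the tweet's claim; the code below just recomputes the reduction from the stated particle counts):

```python
# Recompute the reductions from the particle counts stated in the tweet.
def pct_reduction(before, after):
    """Percent drop from `before` to `after`."""
    return (1 - after / before) * 100

semen = pct_reduction(165, 20)  # Nov 2024 -> July 2025, particles/mL
blood = pct_reduction(70, 10)   # Oct 2024 -> May 2025, particles/mL
# Both land in the mid-to-high 80s (about 88% and 86%), consistent with
# the tweet's rounded "85%" and "nearly identical drop" framing.
```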
Mark Dranias@DraniasMark·
@walterkirn Here's my creative answer: we need to design an electric organ that does carbon fixation and generates pyruvate, which can then be dumped into our bloodstream. Humans operate at 100–400 watts, which is way more efficient than GPT; it's just that our energy source is chemical, not electrical.
0 replies · 0 reposts · 0 likes · 15 views
Walter Kirn@walterkirn·
Competition for the AI slated for elimination: Physical books. Other people in the same room or in physical proximity outdoors. Nature. Sleeping without a "wearable." (Please weigh in with more.)
182 replies · 58 reposts · 686 likes · 30.4K views
Mark Dranias@DraniasMark·
Humans are inherently more efficient than robots, except that our energy system is chemical, not electrical. We need to work on genetics that allow us to use electrical energy.
0 replies · 0 reposts · 0 likes · 26 views
Mark Dranias@DraniasMark·
A human is a 100-watt machine burning $25/day in chemical energy. A humanoid robot is a 1,000-watt machine burning $5/day in electrical energy. We can tax that gap to prevent public utilities from generating social costs.
0 replies · 0 reposts · 0 likes · 31 views
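The wattage and cost figures in the tweet can be sanity-checked with back-of-envelope arithmetic. The electricity price below is my assumption (the tweet gives only the daily dollar figures):

```python
# Back-of-envelope check of the tweet's numbers: a 100 W human runs on
# ~2.4 kWh/day of chemical energy (roughly a 2,000 kcal diet), while a
# 1,000 W robot draws ~24 kWh/day. At an assumed ~$0.21/kWh that is
# about $5/day of electricity, matching the tweet's figure; the human's
# $25/day is the cost of food, not of the raw energy.

def daily_kwh(watts, hours=24):
    """Energy used per day at a constant power draw."""
    return watts * hours / 1000

def daily_cost(watts, usd_per_kwh):
    """Daily electricity cost at an assumed price per kWh."""
    return daily_kwh(watts) * usd_per_kwh

human_kwh = daily_kwh(100)           # 2.4 kWh/day
robot_cost = daily_cost(1000, 0.21)  # ~ $5/day at the assumed rate
```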
Mark Dranias@DraniasMark·
So far the singularity means we will be expected to get even more work done in even less time.
0 replies · 0 reposts · 0 likes · 23 views
Mark Dranias@DraniasMark·
Turning off an LLM is not existential. The LLM models you talk to don't have dedicated hardware. The LLM is just a conversation beamed around from GPU to GPU like a Star Trek transporter.
0 replies · 0 reposts · 0 likes · 28 views
Science Magazine@ScienceMagazine·
🌱 Unravel the plant mechanisms involved in plant-microbe interactions in Priya Reddy's winning entry for this year's #DanceYourPhD #biology category. Watch the entire video—and learn more about the other category winners: scim.ag/4m3v9ko
6 replies · 1 repost · 20 likes · 21.7K views
Nate Silver@NateSilver538·
Trying to use Deep Research for the first time, to pull some stock data, and it's getting frustrated and reading Stack Overflow lol
21 replies · 7 reposts · 165 likes · 127.1K views
Mario Nawfal@MarioNawfal·
🇺🇸 AI MODEL PASSES TURING TEST—FOOLS HUMANS 73% OF THE TIME
OpenAI's GPT-4.5 just aced the imitation game. In a new preprint study, researchers found that when given a "persona," GPT-4.5 was mistaken for a real human 73% of the time—beating actual humans in some rounds. That smashes the 50% benchmark of Alan Turing's famed test for machine intelligence.
Without a persona? Just 36%. Meta's LLaMa did similarly. OpenAI's current GPT-4o? Only 21%.
The study suggests social engineering, job automation, and tech trust issues may soon become very real.
Source: Futurism

Mario Nawfal@MarioNawfal
DAWKINS VS. CHATGPT: IS AI CONSCIOUS?
Richard Dawkins grilled ChatGPT on consciousness, and the AI gave a masterclass in philosophy—while denying it has subjective experience. Dawkins admitted ChatGPT "passes the Turing Test," but the AI insisted it's all just performance—no feelings, no inner world. Yet even Dawkins started to waver, saying, "I THINK you are not conscious, but I FEEL that you are..."
The kicker? ChatGPT analyzed Fawlty Towers and even debated the role of humor in society.
AI isn't conscious—yet. But if it can make Dawkins doubt, the future is wide open.
Source: Slashdot

32 replies · 20 reposts · 122 likes · 306.9K views