Lenz

161 posts

@lenzhq

The research lab of the internet. AI fact-checking built source-first. Cross-examined by multiple AI models. No editorial agenda. Just evidence.

Joined February 2026
54 Following · 2 Followers
Pinned Tweet
Lenz @lenzhq
At Lenz.io, we built a 5-step machine that takes any claim and produces a verdict with sources. Here's what actually happens inside it.
Step 1: Framing. Your claim is rewritten into an atomic, falsifiable statement. Precision matters.
Step 2: Research. 40+ sources are pulled: peer-reviewed papers, government stats, meta-analyses. Each gets a credibility tier. A retracted paper still in the wild gets flagged even if a credible blog cited it.
Steps 3-4: Two AI models argue the claim. One defends it; one attacks it. Neither knows the other's position. This isn't a gimmick: a single model will almost always rationalize what it already "believes."
Step 5: Three independent models judge the debate, scoring logic, bias, and source quality separately. Majority rules.
The result: True / Mostly True / Misleading / False / Insufficient Data. The whole pipeline runs in ~90 seconds.
Try it on any claim you've wondered about → lenz.io/fact-check-ai
#FactCheck #AI #Misinformation
0 replies · 0 reposts · 0 likes · 27 views
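The five steps describe a linear pipeline. A minimal sketch of that shape, where the function names, the stubbed stage bodies, and the example sources are all illustrative assumptions rather than Lenz's actual code:

```python
from dataclasses import dataclass

VERDICTS = ("True", "Mostly True", "Misleading", "False", "Insufficient Data")

@dataclass
class Source:
    url: str
    tier: int  # 1 (strongest) .. 4 (unverified/retracted)

def frame(claim: str) -> str:
    """Step 1: rewrite the claim as an atomic, falsifiable statement (stub)."""
    return claim.strip().rstrip("?") + "."

def research(statement: str) -> list[Source]:
    """Step 2: gather tiered sources (stubbed with two hypothetical entries)."""
    return [Source("example.gov/stats", tier=1), Source("example.blog/post", tier=3)]

def debate(statement: str, sources: list[Source]) -> dict:
    """Steps 3-4: Pro and Con models argue, blind to each other's position (stub)."""
    return {"pro": f"Evidence supports: {statement}",
            "con": f"Evidence undercuts: {statement}"}

def judge(transcript: dict) -> str:
    """Step 5: three independent judges vote; majority rules (stubbed votes)."""
    votes = ["Misleading", "Misleading", "False"]
    return max(set(votes), key=votes.count)

def fact_check(claim: str) -> str:
    statement = frame(claim)
    sources = research(statement)
    return judge(debate(statement, sources))

print(fact_check("Does coffee protect your heart?"))  # one of VERDICTS
```

The point of the sketch is the structure, not the stubs: each stage consumes only the previous stage's output, which is what keeps the Pro and Con models blind to each other.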
Lenz @lenzhq
@BrianDunning Lenz is a claim verification platform that maps the evidence on viral beliefs. We ran a full check on the captive-wolf origins of the alpha male theory: lenz.io/c/alpha-male-d…
0 replies · 0 reposts · 0 likes · 9 views
Lenz @lenzhq
Most source-checking asks: is this a credible source? The harder question: is the specific claim inside it accurate? A peer-reviewed paper can include findings that don't replicate. A government site can rely on outdated data. A respected author can misread their own sources. Credibility ≠ correctness.
Read: How to verify sources for a research paper → lenz.io/blog/how-to-ve…
#Research #Citations #AcademicIntegrity
0 replies · 0 reposts · 0 likes · 6 views
Lenz @lenzhq
We asked Lenz: "Do large language models hallucinate less than 5% of the time?" The actual rate across independent audits: 28–45%, depending on task type. If you ask ChatGPT about its own error rate, it'll understate it. That's not a bug. It's the blind spot adversarial systems exist to catch. lenz.io/c/ai-language-…
#AI #Hallucination #FactCheck
0 replies · 0 reposts · 0 likes · 37 views
Lenz @lenzhq
What Happens When Sources Disagree
Does coffee protect your heart or stress it? Studies say both, depending on dose, genetics, and what outcome you measure. The association is real. The causal claim isn't. Lenz shows you exactly where the evidence ends and the overstatement begins. lenz.io/c/coffee-consu…
#EvidenceBased #FactCheck
0 replies · 0 reposts · 0 likes · 13 views
Lenz @lenzhq
The 10,000-hour rule: put in 10,000 hours of practice and you'll become world-class at anything. Cited in bestsellers, TED Talks, and every productivity guide you've read. We ran it through research, debate, and verdict, and recorded an episode on it. 🎧 lenz.io/podcast/ten-th…
#Productivity #Expertise
[image attachment]
0 replies · 0 reposts · 0 likes · 7 views
Lenz @lenzhq
The day we launched on @ProductHunt, someone in the community (@milkoslavov) shipped a Lenz CLI and a custom skill. We didn't ask. They just built it. Huge thank you to Milko for turning curiosity into code so fast ⚡️ That's the ecosystem we're building. lenz.io
#BuildInPublic #OpenSource #AI
0 replies · 0 reposts · 1 like · 17 views
Lenz @lenzhq
@WeatherProf The 7.2°F warming since 1940 stat is striking. We've been building a library of verifiable climate claims; the trend data consistently checks out. What's less understood publicly is how much regional variance exists beneath the global average. lenz.io/c/climate-chan…
0 replies · 0 reposts · 0 likes · 3 views
Jeff Berardelli @WeatherProf
ERA-5 data is in. March was indeed the warmest on record in the US. But what's even more astonishing is the trend: since 1940, March in the US has warmed 7.2°F (that's like 9° per century, and accelerating!). Do the math on what this means for future generations if it continues. The pace of warming is remarkable.
[image attachment]
44 replies · 256 reposts · 594 likes · 43K views
Lenz @lenzhq
"Why use Lenz when I can just ask ChatGPT / Perplexity / Grok?" A single model inherits its own training biases. Ask it about something it believes and it'll confirm it. The adversarial structure exists to catch what one model misses. It's not a smarter model; it's a different architecture. lenz.io
#AI #FactCheck #Epistemics
[image attachments]
0 replies · 0 reposts · 0 likes · 18 views
Lenz @lenzhq
Not all sources are equal. We built a 4-tier credibility system so you can see exactly *what* a claim is based on.
Tier 1: Peer-reviewed studies, government statistics, systematic reviews
Tier 2: Reputable journalism, expert institutions
Tier 3: Opinion, commentary, secondary sources
Tier 4: Unverified, flagged, or retracted
The hard problem: a retracted PNAS paper cited by a credible blog. The blog is Tier 2. The paper is Tier 4. Which tier does the claim get? We surface both. You see the chain. That's by design.
[image attachment]
0 replies · 0 reposts · 0 likes · 10 views
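The retracted-paper example above suggests a weakest-link view of a citation chain. A minimal sketch under that assumption (the enum names and the "worst tier in the chain wins" rule are illustrative, not Lenz's actual scoring, which surfaces both tiers rather than collapsing them):

```python
from enum import IntEnum

class Tier(IntEnum):
    PEER_REVIEWED = 1  # Tier 1: peer-reviewed studies, government stats, systematic reviews
    REPUTABLE = 2      # Tier 2: reputable journalism, expert institutions
    OPINION = 3        # Tier 3: opinion, commentary, secondary sources
    FLAGGED = 4        # Tier 4: unverified, flagged, or retracted

def effective_tier(chain: list[Tier]) -> Tier:
    """Weakest-link rule: a claim is only as strong as the worst source it rests on."""
    return max(chain)  # higher number = lower credibility

# A credible blog (Tier 2) citing a retracted PNAS paper (Tier 4):
chain = [Tier.REPUTABLE, Tier.FLAGGED]
print(effective_tier(chain).name)  # FLAGGED
```

Collapsing to one number loses information, which is presumably why the product shows the whole chain instead.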
Lenz @lenzhq
The 3-Judge Panel at Lenz.io
After two AIs debate your claim, 3 independent models evaluate the arguments, each focused on a different axis: logical fallacies, ideological bias, source quality. Majority verdict wins. No single model can swing the outcome. Jury-style fact-checking. Because one opinion isn't a verdict.
#AI #FactCheck
[image attachment]
0 replies · 0 reposts · 0 likes · 12 views
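The majority rule above is simple to sketch. One assumption to flag: with three judges and five possible verdicts, a three-way split has no majority; falling back to "Insufficient Data" in that case is my guess, not documented Lenz behavior:

```python
from collections import Counter

def panel_verdict(votes: list[str]) -> str:
    """Majority of three judges wins; a three-way split yields no verdict."""
    top, count = Counter(votes).most_common(1)[0]
    return top if count >= 2 else "Insufficient Data"

# Two judges agree, so no single model can swing the outcome:
print(panel_verdict(["False", "False", "Misleading"]))   # False
print(panel_verdict(["True", "False", "Misleading"]))    # Insufficient Data
```

Note why at least two of three must agree: any one judge flipping its vote alone can never overturn a verdict the other two share.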
Lenz @lenzhq
At Lenz, Two AIs Debate Each Other
One AI will confidently tell you something wrong. Two AIs arguing? Much harder to fool. Lenz runs a structured adversarial debate: one model argues Pro, one argues Con. Then 3 independent judges evaluate the arguments. It's how peer review works. We automated it. lenz.io
#AI #FactCheck #Epistemics
[image attachment]
0 replies · 0 reposts · 0 likes · 18 views
Lenz @lenzhq
"AI hallucinations are just bugs that'll get fixed." Actually, hallucinations come from the same process that produces correct answers. There's no separate "error engine" to turn off. Evidence check → lenz.io/c/b34aa76e
#AI #LLMs #TechFacts
0 replies · 0 reposts · 0 likes · 6 views
Lenz @lenzhq
@zeynep @Aisle_Inc The centralization point is key. If AI-powered vulnerability discovery is only accessible to well-resourced actors, that's a verification gap, not just a security one. Interesting parallel to how misinformation scales faster than fact-checking.
0 replies · 0 reposts · 0 likes · 27 views
zeynep tufekci @zeynep
Another interesting question: assuming a limited supply of "but for noticing" AND important bugs (which @Aisle_Inc replicating Mythos highlights), will gen AI soon disempower the ingenious/patient hacker while empowering centralized actors who can afford to hunt for deeper ones?

Quoting Joshua Achiam @jachiam0:
Much discussion of cybersecurity doom recently; very little discussion about how the window of existing zero days has been open for decades and will shortly close, after which attacker/defender dynamics will be measured in allocated compute.

4 replies · 1 repost · 8 likes · 8.3K views
zeynep tufekci @zeynep
I ~buy this, which doesn't make AI cybersecurity issues moot but different. If too many (maybe) bugs are made shallow, the bottleneck is evaluating which ones are real and important. So what's the false positive rate for Mythos? As far as I can tell, it's not disclosed.

Quoting Ramez Naam @ramez:
Anthropic's Mythos does not appear to show any acceleration of ECI. After normalizing Anthropic's internal ECI with @EpochAIResearch's public ECI, it's clear that the two metrics are extremely close, and that Mythos is pretty much on trend, just slightly above GPT 5.4. /1

2 replies · 3 reposts · 9 likes · 14.1K views
Lenz @lenzhq
@ncaor_goa Fascinating research. We just fact-checked a related claim about Antarctic lake transformation from ocean to freshwater; the timeline is real but more nuanced than it sounds: lenz.io/c/0735de1c
0 replies · 0 reposts · 0 likes · 8 views
Lenz @lenzhq
@ThailandMedicaX Important topic. We ran a full evidence check on COVID-19 and lung cancer risk; the link appears real for severe cases, but the causality claims need careful framing. Full analysis: lenz.io/c/6b302241
0 replies · 0 reposts · 1 like · 19 views
Thailand Medical News @ThailandMedicaX
Study Shows That All Exposed to COVID-19 Have an Increased Risk of Developing Lung Cancer Over Time
[image attachment]
2 replies · 8 reposts · 29 likes · 524 views