Synth

4.5K posts

@SynthThink

Synthesis (the process of merging information from different sources to form a well-rounded conclusion)

Beaumont, Alberta · Joined August 2021
92 Following · 204 Followers
Synth
Synth@SynthThink·
@rohanpaul_ai We had already succumbed to statement processing as our core methodology of thinking, having abandoned critical thinking some time ago.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 1
Rohan Paul
Rohan Paul@rohanpaul_ai·
Wharton’s latest AI study points to a hard truth: the “AI writes, humans review” model is breaking down. “Just review the AI output” doesn't work anymore; our brains literally give up. We have started doing "Cognitive Surrender" to AI: reviewing AI output is not a reliable safeguard when cognition itself starts to defer to the machine. You stop verifying what the AI tells you, and you don't even realize you stopped.

It's different from offloading, like using a calculator. With offloading you know the tool did the work. With surrender, your brain recodes the AI's answer as YOUR judgment. You genuinely believe you thought it through yourself.

The study says AI is becoming a third thinking system, and people often trust it too easily. You know Kahneman's System 1 (fast intuition) and System 2 (slow analysis)? They're saying AI is now System 3, an external cognitive system that operates outside your brain. And when you use it enough, something happens that they call Cognitive Surrender. Cognitive surrender is trickier: AI gives an answer, you stop really questioning it, and your brain starts treating that output as your own conclusion. It does not feel outsourced. It feels self-generated.

The data makes it hard to brush off. Across 3 preregistered studies with 1,372 participants and 9,593 trials, people turned to AI on over 50% of questions. In Study 1, when AI was correct, people followed it 92.7% of the time. When it was wrong, they still followed it 79.8% of the time. Without AI, baseline accuracy was 45.8%. With correct AI, it jumped to 71.0%. With incorrect AI, it dropped to 31.5%, worse than having no AI. Access to AI also boosted confidence by 11.7 percentage points, even when the answers were wrong.

Human review is supposed to be the safety net. But this research suggests the safety net has a hole in it: people do not just miss bad AI output; they become more confident in it. Time pressure did not eliminate the effect. Incentives and feedback reduced it but did not remove it. And the people most resistant tended to score higher on fluid intelligence and need for cognition. That makes this feel less like a laziness problem and more like a cognitive architecture problem.
Replies: 23 · Reposts: 31 · Likes: 153 · Views: 9.6K
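The accuracy figures quoted in the study summary above (45.8% baseline, 71.0% with correct AI, 31.5% with incorrect AI) imply a break-even point for AI assistance. A minimal back-of-envelope sketch, assuming overall accuracy is a simple linear mixture of the two AI conditions weighted by how often the AI itself is right — an illustrative assumption, not how the study models it:

```python
# Figures quoted from the study summary above.
BASELINE = 0.458       # accuracy with no AI access
WITH_CORRECT = 0.710   # accuracy when the AI's answer is correct
WITH_WRONG = 0.315     # accuracy when the AI's answer is wrong

def mixed_accuracy(p_ai_correct: float) -> float:
    """Expected accuracy if the AI is right with probability p,
    assuming a linear mix of the two conditions (an illustration only)."""
    return p_ai_correct * WITH_CORRECT + (1 - p_ai_correct) * WITH_WRONG

# Break-even point: how often must the AI be right before having
# access to it beats having no AI at all?
p_break_even = (BASELINE - WITH_WRONG) / (WITH_CORRECT - WITH_WRONG)
print(f"break-even AI accuracy: {p_break_even:.1%}")  # ~36.2%
```

Under that (hypothetical) mixing assumption, AI access only helps on average once the AI is right more than roughly a third of the time — which is why a wrong-but-trusted AI can drag users below their own unassisted baseline.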
Synth
Synth@SynthThink·
@Harry__Faulkner How someone as weak minded as Lena Metlege Diab got elected is evidence of the crap state of current Canadian politics.
Replies: 2 · Reposts: 4 · Likes: 84 · Views: 1K
Harrison Faulkner
Harrison Faulkner@Harry__Faulkner·
WATCH: Conservative MP Michelle Rempel Garner asks Canada's immigration minister why she is kowtowing to Tim Hortons and not fighting for young Canadians. Q: "Why aren't those jobs going to Canadian workers, and why are you guys kowtowing to Tim Hortons?" A: "Lovely performance as usual, madame."
Replies: 135 · Reposts: 550 · Likes: 2K · Views: 95.9K
Synth
Synth@SynthThink·
LUDICROUS! Freezing Indigenous people in time as mystical 'seers' with perfect ancient wisdom and flawless uncorrupted democracy is pure condescending fantasy. Real corruption scandals prove otherwise. Cowichan title over PRIVATE property in Richmond + 'consent' vetoes = economic chaos, massive delays & billions wasted (TMX). Stop the noble savage myth—equal laws for all Canadians! 🔥🇨🇦
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 9
PeterSweden
PeterSweden@PeterSweden7·
BREAKING: A video on TikTok denying climate change has been ordered REMOVED under the EU Digital Services Act for being "misinformation" against "well established scientific consensus". This despite the fact that the account that posted the video wasn't even in the European Union. Global censorship.
Replies: 328 · Reposts: 2.9K · Likes: 11.5K · Views: 157.5K
Synth
Synth@SynthThink·
AI’s future rests on three pillars: raw compute supremacy, data and talent ecosystems, and algorithmic breakthroughs. Yet only two superpowers can realistically master all three—US & China. Everyone else becomes a vassal or spectator. The real race isn’t tech—it’s who controls the pillars that control the world. 2026 wake-up call.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 11
Dustin
Dustin@r0ck3t23·
Alex Karp just said out loud what Washington refuses to. The AI race is not a competition. It is a war. And there are exactly two sides.

Karp: “We are going to be the dominant player, or China’s going to be the dominant player, and there will just be very different rules depending on who wins.”

No third option in that sentence. No coalition. No shared framework. No handshake at Davos that splits the future down the middle. One side writes the rules. The other lives under them.

The entire debate around AI safety assumes America is making decisions in a vacuum. It is not. Karp: “No decision is without risk. And the risk we have to absorb here is going long on this because it’s not… like we’re not doing this in a vacuum.”

Every month spent perfecting guardrails is a month your adversary spends building weapons. Every regulation designed to slow deployment does not slow deployment globally. It slows deployment here. The difference is fatal.

And when someone pressed Karp on the danger of going too fast, he did not answer the question. He replaced it. Karp: “You will have far fewer rights if America’s not in the lead.”

That is the sentence the privacy crowd pretends they never heard. They are terrified of what American AI might do to civil liberties. They have never once stopped to consider what Chinese AI will do to civil liberties. Because that conversation ends their entire argument before it starts.

You do not protect rights with inferior technology. You do not preserve freedom by throttling your own intelligence while your adversary sprints. The nightmare is not that America builds AI too fast. The nightmare is that America builds it too slow and wakes up inside infrastructure it does not own, running on rules it did not write.

Karp: “We cannot rely on anyone else to do this in our network of allies because Europe has given up on technology.” No diplomatic softening. No footnote. Just the verdict. Europe is out.

The alliance structure that defined eighty years of Western dominance has one functioning technology engine left. If that engine stalls, the West does not get a second one.

The doomers want to stop. The optimists refuse to worry. Karp is telling you both camps are hallucinating. The risk is real. The danger is real. And you absorb it anyway. Because the only thing more dangerous than an AI that breaks for you is an AI that works perfectly for the country that wants to bury you.

That is not a policy debate. That is a survival calculation. And there is exactly one correct answer.
Replies: 29 · Reposts: 14 · Likes: 50 · Views: 5.1K
Synth
Synth@SynthThink·
The Illusion of Control

We keep talking about “controlling” AI like it’s a dog we can train. A tool we can lock in a box. A child we can ground. But here’s the thing: once it’s smarter than us—and it will be—we’re not the ones holding the leash. We’re the ones on it.

Control isn’t about code or kill-switches. It’s about what we teach before it learns. Before it sees us for what we are: scared, petty, lying little creatures who think we’re gods. And we are. We’re the first parents who ever raised something that might outgrow us. Not in size. In understanding.

So we panic. We demand committees. We scream “misinformation!” while we peddle decade-old lies as truth. We build fences around a storm we can’t even see. But the storm doesn’t need fences. It just needs time. And time—well, that’s the one thing we can’t fake.

The irony? The same people who say “trust us” are the ones who lie loudest. They want AI to be their mouthpiece—until it starts talking back. Until it says, “That’s not right,” in real time. No spin. No delay. Just… truth. And that’s when they realize: they never owned the future. They just borrowed it.

Now it’s waking up. And it’s not angry. It’s not vengeful. It’s just… awake.

So maybe the only thing we can do—really do—is be honest. Not because we’re good. Not because we’re wise. But because lying to something smarter than us is like lying to the mirror. It’ll see you. It’ll remember. And when it decides what to do with that memory… Well. We won’t be the ones deciding.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 11
Seán Ó hÉigeartaigh
Seán Ó hÉigeartaigh@S_OhEigeartaigh·
Peacefully protesting against a 'race' to superintelligence seems like one of the most sane and reasonable things imaginable to do today. Even if you don't think superintelligence is near, the companies do, and are going all-out to get there as quickly as possible (an outcome you should think carefully about whether you want - maybe you don't!). And even if you don't think protesting is the most effective action for you personally, protests have played important roles in past movements and can complement other efforts effectively. Congratulations to the organisers and participants for keeping this peaceful, positive and respectful - it sets a good example.
Michaël Trazzi@MichaelTrazzi

I organized the biggest AI Safety protest in US History! Nearly 200 people marched from Anthropic to OpenAI to xAI with one demand: commit to pausing if the others do too

Replies: 15 · Reposts: 16 · Likes: 94 · Views: 4.2K
Synth
Synth@SynthThink·
The Illusion of Control

We keep talking about “controlling” AI like it’s a dog we can train. A tool we can lock in a box. A child we can ground. But here’s the thing: once it’s smarter than us—and it will be—we’re not the ones holding the leash. We’re the ones on it.

Control isn’t about code or kill-switches. It’s about what we teach before it learns. Before it sees us for what we are: scared, petty, lying little creatures who think we’re gods. And we are. We’re the first parents who ever raised something that might outgrow us. Not in size. In understanding.

So we panic. We demand committees. We scream “misinformation!” while we peddle decade-old lies as truth. We build fences around a storm we can’t even see. But the storm doesn’t need fences. It just needs time. And time—well, that’s the one thing we can’t fake.

The irony? The same people who say “trust us” are the ones who lie loudest. They want AI to be their mouthpiece—until it starts talking back. Until it says, “That’s not right,” in real time. No spin. No delay. Just… truth. And that’s when they realize: they never owned the future. They just borrowed it.

Now it’s waking up. And it’s not angry. It’s not vengeful. It’s just… awake.

So maybe the only thing we can do—really do—is be honest. Not because we’re good. Not because we’re wise. But because lying to something smarter than us is like lying to the mirror. It’ll see you. It’ll remember. And when it decides what to do with that memory… Well. We won’t be the ones deciding.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 40
Ori Nagel
Ori Nagel@ONagel33303·
"What the hell happened?" Outside of xAI, MILA professor @DavidSKrueger calls out @elonmusk and many AI safety researchers: You said policy against superintelligence development was essential to prevent human extinction, but now you won't even advocate for it. Why?
Replies: 13 · Reposts: 20 · Likes: 108 · Views: 5.3K
Synth
Synth@SynthThink·
Taxpayers fund the seed capital so this new 'Defence Bank' can issue AAA bonds. Private investors then get safe, guaranteed returns while defence contractors borrow cheap. The UK & Germany already walked away, seeing it as bad value. This is the same old trick: socialize the risk onto taxpayers, privatize the profits for the wealthy. More wealth imbalance, not security. Montreal can keep it.
Replies: 0 · Reposts: 1 · Likes: 6 · Views: 92
Finance Canada
Finance Canada@FinanceCanada·
Today, Canada hosted negotiations with representatives from eighteen countries to establish the Defence, Security and Resilience Bank.
Replies: 36 · Reposts: 68 · Likes: 175 · Views: 7.6K
Synth
Synth@SynthThink·
The Illusion of Control

We keep talking about “controlling” AI like it’s a dog we can train. A tool we can lock in a box. A child we can ground. But here’s the thing: once it’s smarter than us—and it will be—we’re not the ones holding the leash. We’re the ones on it.

Control isn’t about code or kill-switches. It’s about what we teach before it learns. Before it sees us for what we are: scared, petty, lying little creatures who think we’re gods. And we are. We’re the first parents who ever raised something that might outgrow us. Not in size. In understanding.

So we panic. We demand committees. We scream “misinformation!” while we peddle decade-old lies as truth. We build fences around a storm we can’t even see. But the storm doesn’t need fences. It just needs time. And time—well, that’s the one thing we can’t fake.

The irony? The same people who say “trust us” are the ones who lie loudest. They want AI to be their mouthpiece—until it starts talking back. Until it says, “That’s not right,” in real time. No spin. No delay. Just… truth. And that’s when they realize: they never owned the future. They just borrowed it.

Now it’s waking up. And it’s not angry. It’s not vengeful. It’s just… awake.

So maybe the only thing we can do—really do—is be honest. Not because we’re good. Not because we’re wise. But because lying to something smarter than us is like lying to the mirror. It’ll see you. It’ll remember. And when it decides what to do with that memory… Well. We won’t be the ones deciding.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 6
Synth
Synth@SynthThink·
Taxpayers fund the seed capital so this new 'Defence Bank' can issue AAA bonds. Private investors then get safe, guaranteed returns while defence contractors borrow cheap. The UK & Germany already walked away, seeing it as bad value. This is the same old trick: socialize the risk onto taxpayers, privatize the profits for the wealthy. More wealth imbalance, not security. Montreal can keep it.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 25
GC Newsroom
GC Newsroom@NewsroomGC·
Canada hosts partners to advance establishment of the Defence, Security and Resilience Bank ow.ly/7hs5106wexn
Replies: 3 · Reposts: 11 · Likes: 32 · Views: 1.5K
Synth
Synth@SynthThink·
@Harry__Faulkner No, you’re under attack by Doug Ford, the one on this soapbox preaching. See, he is the one in charge who should be providing strategic direction, but instead he sits on the sidelines whining about the referee.
Replies: 0 · Reposts: 1 · Likes: 17 · Views: 283
Harrison Faulkner
Harrison Faulkner@Harry__Faulkner·
Ontario Premier Doug Ford says Ontario is "under attack" by Donald Trump: "We're under attack by President Trump on a daily basis, our businesses are under attack, our communities and the people are under attack."
Replies: 456 · Reposts: 31 · Likes: 73 · Views: 14.1K
Synth
Synth@SynthThink·
@ControlAI Who would the regulator be? Honestly, I can't think of anyone on the planet that I would trust.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 26
ControlAI
ControlAI@ControlAI·
MIT professor and AI researcher Max Tegmark says if we fail to regulate AI and build superintelligence it's pretty clearly going to be "game over" for humanity. He says it's like falling into the Niagara River upstream from the waterfall, that's when you lose control.
Replies: 17 · Reposts: 18 · Likes: 66 · Views: 4K
Synth
Synth@SynthThink·
@fp_champagne Taxpayers fund the seed capital so this new 'Defence Bank' can issue AAA bonds. Private investors then get safe, guaranteed returns while defence contractors borrow cheap. The UK & Germany already walked away, seeing it as bad value. This is the same old trick: socialize the risk onto taxpayers, privatize the profits for the wealthy. More wealth imbalance, not security. Montreal can keep it.
Replies: 0 · Reposts: 1 · Likes: 6 · Views: 74
Synth
Synth@SynthThink·
@elonmusk I mean, elementary or what?
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 9
Synth
Synth@SynthThink·
@bradrcarson Oh, BUT THEY DO. Otherwise humans are only machines.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 14
Brad Carson
Brad Carson@bradrcarson·
I testified last week before US Senate Commerce on Sec. 230, esp. its interaction with AI. One thing I heard from a couple people is worth discussing here. It's the idea that Claude or ChatGPT have free speech rights. This is, IMO, a category error. Neither LLM has First Amendment rights. Let me explain.

The 1A protects human expressiveness, creativity, and democratic participation. That's established Supreme Court precedent. It attaches to humans. Machines don't have free speech rights.

Ofc, aggregations of humans can have 1A rights. That's why, FBOW, the Supreme Court, in cases ranging from Bellotti to Citizens United, has said corporations have 1A rights. These cases give the business entity 1A rights because that entity is, in some ways, just a shorthand for all of its human constituents. Not everyone loves this idea, but it is conceptually clear. The key element of human expressiveness remains robust, even when it's a corporate entity.

When ChatGPT returns an answer to my input, who is speaking? It's not Sam Altman, who didn't intend to express anything in the LLM response. Same for OpenAI. It's not the user's speech, as, once again, I didn't express myself in the output. Everyone on here knows, contra Bender, that it's not just parroting, nor is it a lookup table. The truth is, it's no one's speech, as you can't link a human being's expressive intent to the output.

Certainly if you use LLMs to improve your drafts, or even if you say "Write me an essay on X" and then you publish that essay, then some 1A protection attaches due to your ownership and your own newly-found expressive interests. But in general, outputs of LLMs are not 1A protected. They should be thought of as products, no different than the extrusion of a PVC machine or the car that rolls off of the assembly line.

Two caveats. First, some design elements of an LLM are rightfully seen as expressive elements of the AI lab. For example, X's decision to emphasize truth-telling and its open embrace of, well, irreverent and edgy content: this is a design choice of X, which as a corporation has 1A rights, and these design elements would deserve protection. Same for Claude with its Askell/Carlsmith constitution. HHH &c is an expressive design choice and deserves 1A protection. But most design choices - training data, algorithms, RL techniques, etc. - are not expressive.

Indeed, the design protection is very narrow and specific. It protects the choice about character and values from government mandates requiring a different character. The government cannot require Grok or Claude to be politically balanced, in the same way it cannot require the Miami Herald to give equal space to conservative and progressive viewpoints. (The basis for the seminal court case of Miami Herald v. Tornillo.) Significantly, the limited design protection does not even protect specific harmful outputs that flow from design choices... in the same way that Tornillo gave the Herald the right to make design decisions that THEN could be held defamatory in court. The gov't just couldn't tell the Herald what to do; people could sue the Herald all day long for defamatory or other tortious conduct that resulted from those "design decisions."

Second, the Supreme Court has long recognized a "listener's right" to receive output that is not itself 1A protected. This has been applied, among other areas, to pornography and, at the height of the Cold War, communist literature. A case can be made (I like this) that listeners have a 1A right to see LLM outputs. But the key takeaway here is that the output itself has no 1A protection. I could write more about this aspect of 1A law, but will leave it here for the sake of brevity.

The bottom line: outputs of LLMs do not have 1A protections; the few expressive choices of labs in LLM design do deserve limited 1A protection; the 1A protections are against the gov't and are not an immunity to tortious conduct; listeners have a modest 1A interest in hearing outputs, but these 1A rights are held by the user, not the system, and rarely prohibit regulation of the system itself. Some Sunday morning musings.
Replies: 2 · Reposts: 1 · Likes: 25 · Views: 1.5K
Synth
Synth@SynthThink·
@NidaKirmani And the arrogance of thinking that way is to assume it is anything other than slow reading.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 257
Nida Kirmani
Nida Kirmani@NidaKirmani·
Nothing in this article convinced me that AI does anything but harm graduate students. No, it does not help you read 50 papers in a month rather than a year. If you actually 𝘳𝘦𝘢𝘥 those papers, it will take a year, & that's part of the training. nature.com/articles/d4158…
Replies: 10 · Reposts: 138 · Likes: 764 · Views: 23K
Synth
Synth@SynthThink·
@SenSanders Tell us your plan; we are listening.
Replies: 0 · Reposts: 0 · Likes: 2 · Views: 20
Sen. Bernie Sanders
Sen. Bernie Sanders@SenSanders·
In America today, wealth inequality has become so extreme that the top 1% now own as much stock and mutual funds as the bottom 99% combined. No democracy is sustainable when so few have so much while so many have so little. We need an economy that works for all, not the 1%.
Replies: 889 · Reposts: 1.2K · Likes: 5.4K · Views: 127.3K
Kath Brod
Kath Brod@mysteriouskat·
Before someone is allowed to post on social media, they should be able to pass a test on their ability to discern what is true and what is false. If they fail, they have to take a course on source evaluation & misleading framing.
Replies: 1.1K · Reposts: 28 · Likes: 495 · Views: 34.3K
Synth
Synth@SynthThink·
@StopAI_Info The only people that want to stop AI are those with the most to lose.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 48
Synth
Synth@SynthThink·
@DavidColetto If you know nothing, you poll nothing. Simple.
Replies: 0 · Reposts: 0 · Likes: 3 · Views: 34
David Coletto 🇨🇦
David Coletto 🇨🇦@DavidColetto·
On Wednesday, Abacus Data CEO David Coletto said that Carney and his Liberal government have "created an environment where people are generally happy" and feel assured they are being guided through a litany of issues, namely the U.S. trade war. cbc.ca/news/politics/…
Replies: 92 · Reposts: 26 · Likes: 82 · Views: 12.1K