Pinned Tweet
Russell Sean
4.2K posts

Russell Sean
@RussellQuantum
@QuanMed_AI - medical research analysed by AI and quantum, not big Pharma, on @Quan_Chain, the only auto-migrating quantum blockchain. MSc Neuro/AI Dev
Exeter · Joined September 2022
1.9K Following · 3.8K Followers

OpenAI Sold Its Soul to the Pentagon
Anthropic hesitated on weaponisation. OpenAI didn't. Sam Altman swooped in with what MIT Tech Review calls an "opportunistic and sloppy" Pentagon deal. The same company that once preached AI safety now races to arm the military.
The mainstream take is that Anthropic lost. Wrong.
⬩ Anthropic lost a contract. OpenAI lost its founding principle. There's a difference.
⬩ Users are already quitting ChatGPT in droves. London saw its largest anti-AI protest ever. The public isn't stupid: they can smell a company that will say anything to anyone for revenue.
Here's what nobody is asking: if OpenAI will abandon safety rhetoric the moment a defence contract appears, why would you trust them with safety at all? This is precisely why open source matters. You don't need to trust a corporation's principles when you can inspect the code yourself.
The real danger was never AI going to war. It was a closed-source monopoly deciding the terms on everyone's behalf.


Fujitsu Is Quietly Winning the Quantum Race
Everyone obsesses over Google and IBM's qubit counts. Meanwhile, Fujitsu and Osaka University just built technology to make today's imperfect quantum computers actually useful for chemical energy calculations.
⬩ This is the real bottleneck nobody talks about: we don't need millions of perfect qubits, we need smarter algorithms that tolerate the noisy hardware we already have. Their STAR architecture v3 does exactly that.
⬩ While Western governments pour billions into flashy qubit milestones, Japan's approach is classically pragmatic: solve real industrial chemistry problems now, not in some hypothetical 2035 timeline.
The mainstream keeps scoring this race by who has the most qubits. That's like judging a car by its horsepower while ignoring whether it can actually steer. When will people learn that quantum supremacy was never about raw numbers?
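The "tolerate the noisy hardware" idea has a textbook instance worth seeing in miniature: zero-noise extrapolation, where you deliberately amplify a device's noise by known factors and extrapolate the measurements back to the zero-noise limit. Fujitsu's STAR internals aren't described here, so this is a generic sketch with a toy linear noise model, not their method:

```python
# Zero-noise extrapolation (ZNE): run the same circuit at several known noise
# amplification factors, then fit a curve and read off the value at zero noise.
# The "hardware" below is a stand-in: ideal value plus noise that grows
# linearly with the amplification factor (a simplifying assumption).

def noisy_expectation(scale, ideal=-1.137, noise_per_unit=0.08):
    """Toy device model: the measured value drifts linearly with noise scale."""
    return ideal + noise_per_unit * scale

def zne_linear(scales, values):
    """Fit a line through (scale, value) pairs and evaluate it at scale = 0."""
    n = len(scales)
    mx = sum(scales) / n
    my = sum(values) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(scales, values)) / \
            sum((x - mx) ** 2 for x in scales)
    return my - slope * mx  # intercept = zero-noise estimate

scales = [1, 2, 3]                      # noise amplification factors
values = [noisy_expectation(s) for s in scales]
estimate = zne_linear(scales, values)   # recovers the ideal value
```

Real mitigation schemes are richer (non-linear fits, probabilistic error cancellation), but the shape of the trick is exactly this.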


@samuel_leeds Do you actually understand what being a Christian means?

I set up something in my lending company called Islamic finance, specifically for Muslims.
I actually created it years ago because a lot of Muslims I was working with couldn't borrow money in the normal way due to their religion. So we had to find another solution.
Here's how it works.
Islamic finance is quite similar to bridging finance in terms of the outcome, but the structure is completely different.
Normally, if you wanted to buy a property and needed funding, I might say, "You need £100,000? No problem, here it is, and you'll pay me 1% a month."
But with Islamic finance, you can't charge interest.
So what we've done at Samuel Leeds Finance is use a structure called Murabaha.
Let's say you want to buy a house worth £100,000.
Instead of lending you the money, we'll actually buy the property ourselves, say for £95,000. Then we sell it on to you for £100,000.
You don't pay that all upfront; you pay in instalments over an agreed period.
Once you've made all the payments, the property is fully yours.
So in simple terms, you're still paying more over time, but it's structured as a purchase agreement rather than an interest-based loan.
It's how Elijah, a Muslim with little prior experience, was able to get his first investment property.
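For anyone who wants the Murabaha arithmetic laid out, here is a minimal sketch using the example's figures; the 20-month term is an invented illustration, not a quoted product term:

```python
# Murabaha as described above: the financier buys the asset, resells it at a
# disclosed markup, and the buyer pays fixed instalments. No interest accrues;
# the financier's profit is simply the markup on the sale price.

def murabaha_schedule(cost_price, sale_price, months):
    """Fixed instalments on the agreed sale price; profit is the markup."""
    instalment = sale_price / months
    profit = sale_price - cost_price
    return instalment, profit

# Illustrative figures from the example: buy at £95,000, sell at £100,000.
instalment, profit = murabaha_schedule(95_000, 100_000, months=20)
# 20 monthly payments of £5,000; the financier's profit is the £5,000 markup.
```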


NVIDIA Just Made AI Training 4x Cheaper
Everyone fixates on making models bigger. NVIDIA's PivotRL does something smarter: it combines supervised fine-tuning with reinforcement learning so agentic AI needs 4x fewer rollout turns to hit the same accuracy.
⬩ This matters because compute cost is the real bottleneck for open source teams. Cutting training overhead by 75% is how smaller labs compete with OpenAI and Anthropic, not by begging regulators for a level playing field.
⬩ The safety crowd will ignore this entirely. Efficiency gains like PivotRL democratise capability. That's precisely what they fear.
Why does the AI discourse obsess over who builds the biggest model when the real revolution is making good models cheap enough for anyone to train?
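PivotRL's internals aren't spelled out here, so as a heavily simplified illustration of the general idea of blending supervised fine-tuning with reinforcement learning, here is a one-parameter toy where each update descends an SFT loss and ascends a reward:

```python
# Toy SFT + RL blend on a single scalar "policy" parameter: the SFT term pulls
# theta toward a reference target; the RL term pushes it toward higher reward.
# The result settles between the two pulls, weighted by alpha and beta.

def combined_update(theta, sft_target, reward_fn, alpha=0.1, beta=0.1):
    """One step: descend the SFT loss, ascend the reward."""
    sft_grad = theta - sft_target                 # d/dtheta of 0.5*(theta - target)^2
    eps = 1e-4                                    # finite-difference reward slope
    reward_grad = (reward_fn(theta + eps) - reward_fn(theta - eps)) / (2 * eps)
    return theta - alpha * sft_grad + beta * reward_grad

def reward(theta):
    return -(theta - 2.0) ** 2                    # reward peaks at theta = 2

theta = 0.0
for _ in range(300):
    theta = combined_update(theta, sft_target=1.0, reward_fn=reward)
# theta settles between the SFT target (1.0) and the reward optimum (2.0)
```

Real pipelines operate on token-level log-probabilities and rollout rewards, but the trade-off between imitation and reward-seeking has this same shape.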


New Zealand Beats Google to the Punch
While OpenAI burns billions chasing artificial general intelligence and Google throws qubits at a wall, a small New Zealand team at the Dodd-Walls Centre just built a hybrid optical Ising machine that could solve real optimisation problems now.
⬩ The mainstream fixates on who builds the first fault-tolerant quantum computer. That's years away. Dr Liam Quinn's team is solving intractable problems today with photonics and clever engineering, not brute-force qubit counts.
⬩ No billion-dollar campus. No government mega-programme. Just researchers in a small open economy doing what centralised R&D factories can't: innovating fast and lean.
This is exactly how breakthroughs happen. Not from bureaucratic moonshots, but from hungry teams the press ignores. Why does Silicon Valley keep confusing scale with progress?


The FCC's Router Ban Is Theatre
The FCC is banning foreign-made consumer routers, citing espionage and supply chain risk. Fair enough. But where was this concern for the past two decades while American agencies were busy mandating backdoors in domestic networking gear?
The problem was never just "foreign" routers. It is centralised, closed-source firmware you cannot audit, regardless of which flag flies over the factory.
⬩ Flash your router with open-source firmware like OpenWrt if your device supports it. You gain transparency no government certification provides.
⬩ Segment your home network. IoT devices on one VLAN, personal devices on another. A compromised smart kettle shouldn't reach your laptop.
Banning foreign hardware while the NSA hoovers up domestic traffic through FISA 702 is not security policy. It's protectionism wearing a security costume.
Who exactly are they protecting you from?

Russell Sean retweeted

75-year-old Rose Docherty was arrested!
For holding a sign reading "coercion is a crime, here to talk, only if you want" within a Scottish "buffer zone"
Rose was kept in a cell for two hours, and refused a chair, despite having had a double hip replacement.
This is Not Policing but Authoritarianism.

@DrJoeBoot Though they knew God, they worshipped him not as such and became futile in their minds, so God handed them over!

An elderly Christian woman, in the land of the great John Knox (Scotland), is arrested and jailed for silently offering conversation to anyone who might want to talk about their intent to murder their baby. If you want to save life, you are a criminal. Woe to us.
Benonwine @benonwine
75-year-old Rose Docherty was arrested! For holding a sign reading "coercion is a crime, here to talk, only if you want" within a Scottish "buffer zone". Rose was kept in a cell for two hours, and refused a chair, despite having had a double hip replacement. This is Not Policing but Authoritarianism.

ACIP Is the Real Story Here
Robert Malone leaving ACIP is not the story. The story is that a federal judge had to block the panel's work in the first place.
⬩ ACIP has operated for decades as a revolving door between vaccine manufacturers and the people who recommend their products to 330 million Americans. The conflicts of interest are structural, not incidental.
⬩ Malone's departure changes nothing about the core problem: an advisory body captured by the industry it's supposed to regulate. One dissenting voice leaving a broken panel doesn't fix the panel.
The media will frame this as "anti-vaxxer retreats." Ask yourself why they never frame it as "compromised committee continues unchecked."
Who exactly is ACIP advising for: the public, or the companies selling the product?


@chukwu_ji_okem @BGatesIsaPyscho I can't think of a nation in history it describes more accurately!

@RussellQuantum @BGatesIsaPyscho The Summary of the fate of the UK

🇬🇧 Meanwhile in the UK
Nottingham Forest striker Taiwo Awoniyi could face additional punishment from the Football Association.
His crime?
He displayed a T-shirt stating "God is the Greatest".
Every major institution has been corrupted beyond belief - they are all anti-Christian.


LiteLLM's Supply Chain Hack Proves Regulators Are Looking the Wrong Way
A malicious build of LiteLLM (v1.82.8) hit PyPI and silently exfiltrated SSH keys, AWS/GCP/Azure credentials, Kubernetes configs, and CI/CD secrets. 97 million monthly downloads. Found by accident: a memory leak crashed a developer's machine.
Governments are spending billions regulating what AI models can say. Meanwhile the actual threat, poisoned dependencies in the software supply chain, goes almost entirely unaddressed.
⬩ Nobody voted on this. No committee reviewed it. One compromised package propagated through transitive dependencies like `dspy` to countless production systems.
⬩ Andrej Karpathy flagged it publicly. The "use more libraries" philosophy that modern development runs on is a massive unaudited attack surface, and the security establishment barely acknowledges it.
Every AI safety hearing in Washington and Brussels obsesses over hypothetical superintelligence. Who's asking why a single PyPI upload can steal the keys to half the cloud infrastructure in Silicon Valley?
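The standard defence against exactly this class of attack is hash pinning, which pip already supports via `--require-hashes`. A minimal sketch of the idea; the package name is hypothetical, and the pinned digest shown is the sha256 of the bytes `b"test"` so the example is checkable:

```python
# Hash-pinned installs: record the sha256 of each artefact you trust and
# refuse anything whose digest doesn't match. A poisoned re-upload of the
# "same" version then fails verification instead of executing.

import hashlib

PINNED = {
    # filename -> expected sha256 (hypothetical package; digest = sha256(b"test"))
    "example_pkg-1.0.0-py3-none-any.whl":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_artifact(filename: str, data: bytes) -> bool:
    """Reject any artefact that is unpinned or whose digest doesn't match."""
    expected = PINNED.get(filename)
    if expected is None:
        return False                    # unpinned packages are not trusted
    return hashlib.sha256(data).hexdigest() == expected

ok = verify_artifact("example_pkg-1.0.0-py3-none-any.whl", b"test")
```

Hash pinning doesn't audit what the trusted version does, but it turns a silent substitution into a hard install failure.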


Deepfake X-Rays Fool Your Radiologist
AI-generated medical images are now good enough to deceive trained radiologists. Think about what that means: fabricated pathology on a scan, leading to unnecessary surgery, wrong diagnoses, fraudulent insurance claims. Or worse, real tumours erased from an image before a clinician ever sees it.
⬩ Radiology has spent a decade celebrating AI as a diagnostic aid. Almost nobody has invested seriously in AI as an attack vector against diagnostics.
⬩ Hospital PACS systems were designed for interoperability, not adversarial security. Most have zero authentication of image provenance at the pixel level.
The medical establishment treats cybersecurity as an IT problem. It's not. When a forged scan changes a treatment decision, it becomes a clinical safety problem. How many diagnostic pipelines currently have any mechanism whatsoever to verify an image hasn't been tampered with?
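One concrete shape such a mechanism could take: the acquiring device signs the pixel bytes with a key the PACS can verify, so any later edit is detectable. This sketch uses HMAC to stay self-contained; a real deployment would more likely use asymmetric DICOM digital signatures, and the key here is a hypothetical placeholder:

```python
# Pixel-level provenance via a keyed MAC: the scanner tags the raw pixel data
# at acquisition; any downstream system with the key can detect tampering.

import hashlib
import hmac

DEVICE_KEY = b"per-device-secret"        # provisioned at the scanner (assumption)

def sign_image(pixels: bytes) -> bytes:
    """Tag computed over the raw pixel bytes at acquisition time."""
    return hmac.new(DEVICE_KEY, pixels, hashlib.sha256).digest()

def verify_image(pixels: bytes, tag: bytes) -> bool:
    """Constant-time check that the pixels still match their tag."""
    return hmac.compare_digest(sign_image(pixels), tag)

scan = b"\x00\x17\x42" * 100             # stand-in for DICOM pixel data
tag = sign_image(scan)

ok_untouched = verify_image(scan, tag)           # untouched scan passes
ok_tampered = verify_image(scan + b"\x01", tag)  # any modification fails
```

The interesting design question is key management across devices and hospitals, which is exactly where asymmetric signatures beat a shared secret.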


Russia Just Out-Open-Sourced the West
Sber released GigaChat Ultra and GigaChat-3.1-Lightning under MIT licence. Dense-to-MoE architecture, native FP8 for DPO training, 1.8 billion active parameters on the lightweight model. All open.
⬩ While the EU drafts compliance paperwork and Washington debates AI licensing, a Russian bank just shipped production-ready open source models anyone can deploy on-premise. No permission required.
⬩ The Lightning model runs 1.8B active params: lean enough for corporate deployment, good enough for real products. This is what open source competition actually looks like.
Western regulators keep insisting safety requires gatekeeping. Meanwhile every country they claim to be protecting against is building in the open. Who exactly is regulation slowing down?
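"Active parameters" in a mixture-of-experts model is a routing fact: each token only runs through the top-k experts a gate selects, so compute scales with active rather than total parameters. GigaChat's internals aren't public here, so this toy router with invented sizes just illustrates the mechanism:

```python
# Top-k MoE routing: you store all experts but execute only the few the gate
# picks per token, so FLOPs track active (not total) parameters.

import random

def route_top_k(gate_scores, k=2):
    """Pick the k highest-scoring experts and renormalise their weights."""
    top = sorted(range(len(gate_scores)), key=lambda i: -gate_scores[i])[:k]
    total = sum(gate_scores[i] for i in top)
    return {i: gate_scores[i] / total for i in top}

random.seed(1)
n_experts, params_per_expert = 16, 100   # toy sizes, not GigaChat's
scores = [random.random() for _ in range(n_experts)]
active = route_top_k(scores, k=2)

total_params = n_experts * params_per_expert      # what you store
active_params = len(active) * params_per_expert   # what you actually run
```

With these toy numbers the model "is" 1,600 parameters but each token touches only 200, which is the sense in which a 1.8B-active model can be far larger on disk.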


@abrahym72403510 Absolutely, we have to become much more refined in our approach to AI models!

@RussellQuantum The difference between scale and depth: the industry is currently inflating 'memory' while ignoring 'logic'. Scale without a World Model is just smarter repetition, not true intelligence. LeCun is searching for the engine, while everyone else is just looking for a bigger fuel tank.

LeCun Is Right, The Industry Isn't Listening
While the entire industry chases bigger transformers and longer context windows, Yann LeCun's LeWM research quietly addresses the actual bottleneck: representation collapse in world models. The embeddings go redundant and everyone just patches it with heuristics.
⬩ This matters because world models are how AI moves from pattern-matching to genuine reasoning and planning. If your latent space collapses, no amount of scale saves you.
⬩ The mainstream narrative obsesses over scaling laws. LeCun is solving the architectural flaw that scaling cannot fix. Open research like this, published for everyone to build on, is precisely how the field advances.
Why is the industry pouring billions into brute-force scale when the foundational representations are broken?


Most Companies Aren't Doing AI, They're Playing With AI
The gap between "we use AI" and "AI runs our operations" is enormous, and most firms are stuck on the wrong side of it. Conferences about deploying ML pipelines keep selling out because the truth is uncomfortable: the majority of enterprise AI projects never leave the pilot stage.
⬩ The bottleneck isn't the models. @OpenAI, @Meta, and open source give you world-class inference for pennies. The bottleneck is that most organisations lack the engineering culture to integrate AI into production systems. They hire data scientists, not systems engineers.
⬩ Meanwhile, Chinese firms ship AI into production in weeks while Western companies spend months on "AI ethics reviews" and compliance theatre that protects nobody except incumbents.
The companies that will dominate the next decade aren't the ones experimenting with AI: they're the ones who've already replaced entire workflows with it. If your firm still has an "AI strategy committee," you've already lost. The question isn't whether AI works. It's whether your organisation deserves to survive the transition.



