

@LauraLoomer she openly says she converted to Catholicism lol.
the only person trying to twist the truth here is you

Candace Owens was living with a man named Ryan who she was dating for 7 years when she came home one day and told him she was engaged to George Farmer.
How is she a devout Catholic?
2 weeks before she was engaged to George Farmer, she was living in Philadelphia with Ryan and her cat. She was planning to marry Ryan. Then she met George and 2 weeks later, she was engaged to him because he had more money than Ryan.
Don’t let her tell you she’s some devout Catholic.
She’s for the streets.
Ryan was her Republican boyfriend who “converted her” from Democrat to Republican by helping her make videos under the username RedpillBlack.
How do I know? Because Candace invited me to dinner with her and Ryan in Connecticut multiple times in 2017.
She is a social climber and will say and do anything for money and fame, including abandoning a man she spent 7 years with for a lady like Brit with a trust fund and rich daddy.
Isn’t that right, @TheLordFarmer?
Please tell us why you don’t follow your daughter-in-law on X.


I can’t believe Candace has $12 to $18 million stashed in a trust she’s hiding from the Macrons, while her gullible cult followers donate to her so she can pay her legal fees.
Meanwhile, half of her viewers probably can’t even afford to take a vacation each year.
This is what moral decay looks like.
A true grifter mentality.
Reposted

Hallelujah … FINALLY someone in Congress stepping forward and acting like the adult in the room!
Rep. Anna Paulina Luna: “Today I referred any and every criminal found guilty of fraud regarding Minnesota fraud scheme to be denaturalized and sent for detainment in El Salvador’s high security prison CECOT.”
“U.S. citizenship is a privilege not a right, and according to legal statute any person found guilty of fraud within the first ten years of being naturalized can be legally denaturalized.”
“I take defrauding the American people VERY seriously. This is a ZERO tolerance policy.”
Reposted

@Holden_Culotta lol i love seeing all the comments based on that new video about how Bob just makes shit up as he goes just to stay ahead of the curve... like they're original in their cleverness and indignation based on propaganda.
people are still too sheep-like; still blind and foolish.

Bob Lazar just dropped a bombshell.
He revealed that he replicated the anomalous force which powered the UFO he worked on at Area 51 in his own experiments.
“There’s another force—and it’s not gravity.”
“There’s gigantic chunks that are missing from physics … ”
Watch this incredible exchange he just had with Jesse Michels:
Michels: “If you look at dark energy … it’s not one of the four fundamental forces, but it’s just the universe is inflating—”
Lazar: “I’m not sure dark energy really exists.”
Michels: “And dark matter.”
Lazar: “I’m not really buying either one of them.”
“It’s a placeholder.”
Michels: “As early as the 50s, there’s all this crazy hardcore anti-gravity research.”
“And then it kind of disappears.”
“The two things that come up for gravity where there’s a lot of smoke, but no fire, is … extremely high electric field differentials creating thrust.”
“The second thing is very fast rotating, spinning, superconductors.”
Lazar: “But is that actually gravity?”
Michels: “So there’s another force.”
Lazar: “30 years has gone by, and I’ve been doing my own research.”
“And I’m just more convinced that I am right … that there’s another force, and it’s not gravity.”
Michels: “What does it look like?”
Lazar: “It’s a repelling force.”
“I think gravity is just an attractive force.”
“I think this other force, you can, to simplify it, push or pull.”
“And I think it also affects the flow of time, exactly like gravity does.”
“I think it affects light.”
Michels: “Have you ever measured this force?”
Lazar: “Next question.”
“Alright … yeah.”
Michels: “You have?”
Lazar: “Yeah.”
Michels: “Is there anything high level that you can say as far as the goal of your research post-[Area 51]?”
Lazar: “To duplicate anything.”
Michels: “You think you can?”
Lazar: “I’m 100% confident.”
Michels: “Have you already gotten some interesting results?”
Lazar: “Yeah, that’s why I’m 100% confident.”
@AlchemyAmerican @AmericanALCHMY

@Joshua_Seal21 @InterstellarUAP oh I forgot to add: **obviously these would be biodegradable semiconductors**

There is a reason I chose the 3 technologies I did. It's impossible that they had circuitry, so I excluded it. You can have amazing machines without circuits. Modern tech is impossible without plastics or ceramic insulation, and we know they didn't have them (those materials don't decay, so they would have survived).
So again, where is the evidence to support a machine more advanced than a pulley and lever combination? No advanced engineering, no advanced society. They accomplished wonders with what they had, but a mythical splinter advanced society is just silly.

🚨 **BREAKING** - Matt Le Croix: "The Great Pyramids & Sphinx could be 38,000 years old"
Dr. Robert Schoch’s research shows unmistakable water erosion on the Sphinx enclosure from prolonged heavy rainfall, the kind the Sahara hasn’t seen in over 10,000 years.
Its original lion body aligns perfectly with the constellation Leo through planetary precession cycles.
He told the Danny Jones Podcast: “The Sphinx is likely 38,000 years old.”
This clip lays out the evidence that could rewrite human history.
What if an advanced lost civilization built it long before the Pharaohs?
How old do you really think the Sphinx is? Drop your honest take below 👇

@Joshua_Seal21 @InterstellarUAP this is just one example, but the fact is silicon isn't the only semiconductor available. also, things don't have to be small to be considered advanced.
interestingengineering.com/innovation/sem…
Reposted


@Ac7ionMann leave it to the jew to immediately think of another dude's dick.

You look like a smoked out mulan movie character with Down syndrome.
I can’t tell if the wind is blowing in your face or if you’re staring at the sun.
And the physiognomy of this picture tells me you have a small dick and are probably around 5 foot 6.

Arthur Kwon Lee @badazn
What does his physiognomy tell you?
Reposted

@TendiesOfWisdom @ArtificialAnlys @Alibaba_Qwen @GoogleDeepMind as base model comparisons go, this is true. what makes cloud models seem so much better is vast compute resources and proprietary frameworks applied on top, which significantly boost the base LLM.

@ArtificialAnlys @Alibaba_Qwen @GoogleDeepMind I wish this were true, but the gap is still much larger.
Local models are good enough for basic tasks, but they can't replace a cloud model yet.

Sub-32B open weights models now offer GPT-5 level intelligence, with Qwen3.5 27B (Reasoning) matching GPT-5 (medium) at 42 and Gemma 4 31B (Reasoning) matching GPT-5 (low) at 39 on the Artificial Analysis Intelligence Index.
@Alibaba_Qwen's Qwen3.5 and @GoogleDeepMind's Gemma 4 are the two recently released open weights families pushing the sub-32B total parameter model class forward. Both are available across multiple sizes with reasoning and non-reasoning variants and offer native multimodal input. Together, they represent the state of the art in open weights intelligence at this parameter count. Qwen3.5 27B reaches higher absolute intelligence on the Artificial Analysis Intelligence Index, while Gemma 4 31B is more token-efficient.
While these sub-32B models now match GPT-5 score tiers, the composition of that intelligence differs. Both open weights models trail significantly on factual knowledge and hallucination avoidance compared to GPT-5 variants: AA-Omniscience scores of -42 (Qwen3.5 27B) and -45 (Gemma 4 31B) vs. -10 for both GPT-5 (medium) and GPT-5 (low). Where the open weights models have made progress is largely in agentic performance and critical reasoning: Qwen3.5 27B substantially outperforms GPT-5 (medium) on the Artificial Analysis Agentic Index (55 vs. 46), and Gemma 4 31B leads GPT-5 (low) on TerminalBench Hard (36% vs. 27%) and HLE (23% vs. 18%).
Both Qwen3.5 27B and Gemma 4 31B fit on a single NVIDIA H100 (80GB) in BF16 precision, and with quantization, can run locally on a MacBook. This is a practical threshold that makes these models accessible beyond the data centre. This is a significant shift from the previous generation: Gemma 3 was released in March 2025 as a non-reasoning model and scored 10 on the Intelligence Index. Qwen3 had two iterations - the original Qwen3 family and the 2507 update, with the flagship Qwen3 235B A22B (Reasoning) scoring 20 on the original and 30 on the 2507 variant.
Key takeaways:
➤ Qwen3.5 27B (Reasoning) scores 42 on the Intelligence Index using 98M output tokens, while Gemma 4 31B (Reasoning) scores 39 using 39M. This 2.5x token efficiency gap is the key tradeoff. Qwen3.5 27B's strength is broad - GPQA (86%) and IFBench (76%) - while Gemma 4 31B leads on SciCode (+3.9 p.p.) and TerminalBench Hard (+3.8 p.p.).
➤ Both families ship native multimodal input across the sub-32B class. Qwen3.5 27B (Reasoning) scores 75% on MMMU-Pro and Gemma 4 31B (Reasoning) scores 73%, making them the two leading open weights options at this parameter count for applications requiring vision understanding.
➤ Despite matching GPT-5 score tiers, both models trail by a wide margin on AA-Omniscience, a weakness that correlates with smaller model size. Qwen3.5 27B (Reasoning) scores -42 and Gemma 4 31B (Reasoning) scores -45. By comparison, GPT-5 (medium) scores -10, and Qwen3.5's own 397B A17B sibling scores -30. Knowledge recall benefits from larger parameter counts, and these sub-32B models cannot close that gap with reasoning effort alone.
➤ The open weights frontier has also advanced significantly beyond 32B parameter size. GLM-5.1 (Reasoning) leads open weights on the Intelligence Index at a score of 51, Kimi K2.5 (Reasoning) at 47, and Qwen3.5 397B A17B (Reasoning) at 45, although these models are much larger than the sub-32B models. The gap between top open weights models and the proprietary frontier (Gemini 3.1 Pro Preview and GPT-5.4 (xhigh) at 57) has narrowed to just 6 points.
➤ Qwen3.5 occupies the Pareto frontier for Intelligence vs. Total Parameters and Intelligence vs. Active Parameters among open weights models under 32B. The dense 27B (Reasoning) at 42 matches the 122B A10B MoE at one-fifth the total weights, and the 35B A3B (Reasoning) scores 37 while activating just 3B parameters. Gemma 4 31B (Reasoning) at 39 is the main challenger on total parameters, and the 26B A4B (Reasoning) at 31 competes at the ~4B active tier.
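The single-GPU and laptop claims in the thread can be sanity-checked with a back-of-envelope, weights-only memory estimate. This is a sketch, not a deployment guide: real inference also needs memory for the KV cache, activations, and runtime overhead, so actual requirements sit above these figures.

```python
def model_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate memory needed to hold the model weights alone, in GB."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

# BF16 (16 bits per parameter): both models fit within an H100's 80 GB
print(model_memory_gb(27, 16))  # Qwen3.5 27B -> 54.0 GB
print(model_memory_gb(31, 16))  # Gemma 4 31B -> 62.0 GB

# 4-bit quantized weights: small enough for a high-memory MacBook
print(model_memory_gb(27, 4))   # Qwen3.5 27B -> 13.5 GB
print(model_memory_gb(31, 4))   # Gemma 4 31B -> 15.5 GB
```

The same arithmetic explains why the 35B A3B MoE variant is attractive for local use: all 35B parameters must be held in memory, but only the ~3B active parameters are read per token, which cuts compute and bandwidth rather than storage.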


@lporiginalg @grok and people fail to understand one fundamental issue. the only reason they're sharing this information now is to get you caught up in the fraud to help bury theirs. the game already changed. you don't really believe they are giving away state secrets out of kindness, do you?