Code and Covenant🇨🇦

872 posts

@codecovenant

Jesus freak Pentecostal. ✝️ AI & LLMs. 🤖 Cyber Security. 🛡️ .22lr. 🎯 Tylenol Canadian. Saved by Grace.

London, Ontario · Joined March 2025
913 Following · 555 Followers
moonchild💫@moonchild23580·
MEN ONLY: who saved you at your lowest???
BURKOV@burkov·
Did anyone try to use Gemini 3.1 Pro with Codex as the harness? Is Antigravity the problem with using Gemini for agentic coding, or is it the LM?
Code and Covenant🇨🇦@codecovenant·
@The_Holy_Sea Technically I'm chairforce, but yes, that's my point. I'm living blue button. Police are another group who put their lives at risk for others. But the altruism ceiling is about 15%. The blue vote is suicide.
The Pope of Fishgard@The_Holy_Sea·
@codecovenant A veteran? So you are saying that you risked your own life to (in theory) protect your fellow countrymen? Sounds pretty blue button to me.
Code and Covenant🇨🇦@codecovenant·
Logistically it's absolutely more likely to get red to 100% as opposed to getting blue above 30%. I'm a war veteran, I put my life on the line. Not many others will. There's 0% chance you get enough blue votes ever.
🔞 Danger N00dle 🔞@DangeN00dle

Here's a different way to approach it. Forget morals and all, just think about logistics. Best case scenario for everyone to survive: Red = 100% of votes, Blue = 50%+1 votes. Which one is easier to achieve from a logistical standpoint?

Code and Covenant🇨🇦@codecovenant·
@KeruboSk My childhood predates the internet. I was a bubble boy twice. A few surgeries and isolations. I'm taught about in med school. I had 4 people try to kill me. I almost died dozens of times. My first friend was hit by a car when I was like 6. Childhood...
Sophia ❣️@KeruboSk·
Do you think your childhood was better before social media took over?
Josh Barzon@JoshuaBarzon·
Which logo design is your favorite from these 4 options that I designed? I am working on a logo + brand identity for a church that wants the main logo mark to have a prominent central cross with a traditional yet modern vibe. Thanks in advance for your vote!
[image attached]
Mustafa@oprydai·
X is a group chat of nerdy autists
David Hendrickson@TeksEdge·
👑 Qwen3.6-27B is the master of personal AI. @ArtificialAnlys has crowned this model the best-performing medium open source LLM currently available. 🏆 I use this model every day, and new models I benchmark rarely outperform it except for very large, expensive models. If you haven't tried it, you're missing out.
[image attached]
Artificial Analysis@ArtificialAnlys

Alibaba's Qwen3.6 27B is the new open weights leader under 150B parameters, scoring 46 on the Artificial Analysis Intelligence Index, but it uses ~3.7x the output tokens and costs ~21x more than Gemma 4 31B (39) to run the full Intelligence Index.

@Alibaba_Qwen has released two open weights models in the Qwen3.6 family: Qwen3.6 27B (Dense, 46 on the Intelligence Index) and Qwen3.6 35B A3B (MoE, 43). The MoE variant has 36B total parameters but only activates 3B per forward pass. Both are Apache 2.0 licensed, support 262K context, include native multimodal input, and use the unified thinking/non-thinking hybrid architecture. Unlike Qwen3.5, Alibaba has not released larger Qwen3.6 models as open weights - Qwen3.6 Plus and Qwen3.6 Max Preview remain proprietary, so the Qwen3.6 open weights family is currently all under 50B models. All scores below are for reasoning mode. The Intelligence Index is our synthesis metric incorporating 10 evaluations covering agentic tasks, coding, and scientific reasoning.

Key takeaways:
➤ Qwen3.6 27B is the most intelligent open weights model under 150B parameters. At 46 on the Intelligence Index, Qwen3.6 27B is ahead of Qwen3.6 35B A3B (43), Qwen3.5 27B (42), and Gemma 4 31B (39). It is also ahead of larger open weights models including NVIDIA Nemotron 3 Super 120B A12B (Reasoning, 36), Qwen3.5 122B A10B (42) and gpt-oss-120b (high, 33). In native BF16 precision, the 27B takes ~56GB to store the weights, fitting on a single H100, and in 4-bit quantization the weights fit on consumer hardware with 16GB+ of RAM.
➤ Qwen3.6 35B A3B is the most intelligent open weights model with ~3B active parameters, 6 points ahead of Qwen3.5 35B A3B (37) and 13 points ahead of GLM-4.7-Flash (30). Other ~3B active peers include Gemma 4 26B A4B (31), Qwen3 Coder Next (80B total, 28), and NVIDIA Nemotron Cascade 2 30B A3B (28).
➤ AA-Omniscience improvement is driven entirely by abstention rather than accuracy. Qwen3.6 27B's hallucination rate falls from 80% to 48% versus Qwen3.5 27B, while accuracy is roughly flat - consistent with our finding that AA-Omniscience accuracy typically correlates with total parameter count, and Qwen3.6 27B retains the same 27B parameter count as its predecessor. The 35B A3B shows the same pattern, with hallucination dropping from 84% to 50% while accuracy remains equivalent.
➤ Token usage is up across both models versus Qwen3.5 and significantly higher than Gemma 4 31B. Qwen3.6 27B used ~144M output tokens to run the Intelligence Index (~1.5x Qwen3.5 27B at 98M, ~3.7x Gemma 4 31B at 39M). Qwen3.6 35B A3B used ~143M (~1.4x Qwen3.5 35B A3B at 100M, ~3.7x Gemma 4 31B).
➤ The 27B got materially more expensive while the 35B A3B is roughly flat versus its predecessor. Per-token pricing on Alibaba Cloud moved differently, with the 27B going from $0.30/$2.40 to $0.60/$3.60 while the 35B A3B (Reasoning) remains nearly flat at $0.248/$1.485 (vs $0.25/$2.00 for Qwen3.5 35B A3B). Qwen3.6 27B costs ~$659 to run the Intelligence Index, ~2.2x Qwen3.5 27B (~$299) and ~21x Gemma 4 31B (~$31 at median third-party pricing of $0.14/$0.40 per 1M input/output tokens). Qwen3.6 35B A3B costs ~$280, roughly tied with Qwen3.5 35B A3B (~$302) and ~9x Gemma 4 31B.
➤ Qwen3.6 27B is competitive with leading models on agentic real-world work tasks despite its size. At 1414 Elo on GDPval-AA, Qwen3.6 27B is ahead of recent open weights peers Qwen3.6 35B A3B (1297), Qwen3.5 27B (1157) and Gemma 4 31B (1115), but trails larger open weights leaders including DeepSeek V4 Pro (Reasoning, Max Effort, 1554) and GLM-5.1 (Reasoning, 1535). It matches DeepSeek V4 Flash (Reasoning, High Effort, 1414) at 284B total parameters, and sits roughly in line with GPT-5.4 mini (xhigh, 1436) and Muse Spark (1421).
➤ Non-reasoning variants remain equivalent versus Qwen3.5. Qwen3.6 27B (Non-reasoning, 37) is effectively tied with Qwen3.5 27B (Non-reasoning, 37); Qwen3.6 35B A3B (Non-reasoning, 32) is equivalent to Qwen3.5 35B A3B (Non-reasoning, 31). The Qwen3.6 generation gains are concentrated in reasoning mode.

Other information:
➤ Context window: 262K tokens (equivalent to Qwen3.5)
➤ License: Apache 2.0
➤ Multimodality: Native vision input (text and image), text output
➤ API pricing (Alibaba Cloud): Qwen3.6 27B: $0.60/$3.60, Qwen3.6 35B A3B (Reasoning): $0.248/$1.485
➤ Availability: Available on Alibaba Cloud first-party API. Qwen3.6 35B A3B is available on several third-party APIs such as @DeepInfra, @parasail_io, @clarifai and @novita_labs
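The headline "~3.7x the output tokens" and "~21x the cost" claims can be sanity-checked against the token counts and dollar figures quoted in the thread itself; a minimal sketch (all inputs are the thread's own numbers, nothing here is independently verified):

```python
# Sanity-check the headline ratios from the Artificial Analysis thread,
# using only the figures quoted in the thread.

qwen_tokens_m = 144    # Qwen3.6 27B output tokens to run the Index (millions)
gemma_tokens_m = 39    # Gemma 4 31B output tokens (millions)

qwen_cost_usd = 659    # quoted cost to run the full Intelligence Index
gemma_cost_usd = 31

token_ratio = qwen_tokens_m / gemma_tokens_m   # matches the quoted ~3.7x
cost_ratio = qwen_cost_usd / gemma_cost_usd    # matches the quoted ~21x

print(f"Output-token ratio: {token_ratio:.1f}x")  # 3.7x
print(f"Cost ratio: {cost_ratio:.1f}x")           # 21.3x
```

Both quoted ratios check out against the thread's own token and cost figures.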

Why women get Ls@ywomendeservles·
What can women do to improve themselves?
Code and Covenant🇨🇦@codecovenant·
@Pregory1 You are absolutely correct. There will probably be about 1 billion people who accidentally choose blue. Nobody wants them to die. But the altruism ceiling is about 15%, so even the best case for blue still ends in death. The least death comes from maximizing red.
Pregory McRonald III née Citadel (in memoriam)
@codecovenant Bullshit. You really aren’t thinking this through. Half the children on earth will pick blue because they like the color. So might your mom or grandma. Maybe a cousin or two, an aunt or uncle. Unless you’re fine with all those people dying, you’re picking blue.
herefor1reason@sonic7ischaos·
@codecovenant Counterpoint: self-sacrifice was never the deciding factor for blue anyway. People take the path of least resistance and read it in the simplest "blue saves everyone" way possible. Most people who read the rules aren't even going to be thinking about death as a possibility.
Code and Covenant🇨🇦@codecovenant·
🔴 Choose Red: 100% survival.
🔵 Choose Blue: You die unless >50% of the world also chooses Blue.
Babies (10% of pop) always vote Blue. History says only 15% of adults will risk their lives for others.
⚫️ Choose Black: You individually die to save all the blue.
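The poster's conclusion follows directly from the numbers in the post itself; a minimal sketch of that arithmetic (the 10% baby share and 15% "altruism ceiling" are the poster's assumptions, not established data):

```python
# Blue-pill survival arithmetic under the post's stated assumptions:
# 10% of the population are babies who always vote Blue, and at most
# 15% of adults will risk their lives for others (the "altruism ceiling").

baby_share = 0.10          # fraction of population that always votes Blue
altruism_ceiling = 0.15    # fraction of adults willing to risk death

adult_share = 1.0 - baby_share
blue_share = baby_share + adult_share * altruism_ceiling  # 0.235

# Blue voters survive only if more than half the world also picks Blue.
blue_survives = blue_share > 0.5

print(f"Projected Blue share: {blue_share:.1%}")  # 23.5%
print(f"Blue voters survive: {blue_survives}")    # False
```

Under these assumptions Blue tops out at 23.5%, well short of the >50% threshold, which is the whole basis of the "blue vote is suicide" argument; the replies dispute the assumptions, not the arithmetic.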
rafaelcaricio@rafaelcaricio·
Give me Qwen3.6-27B at >150 tok/s and that’s all I need right now
Chris@CinemaPrincipia·
@codecovenant You used selfishness to draw the conclusion that blue couldn't win, genius.
Virileth@VirilethDerg·
@codecovenant Current military, red is the only sensible option in a real-stakes situation
Code and Covenant🇨🇦@codecovenant·
@A_Sober_Drunk I knew a sergeant who got captured as a POW at Korengal. He was able to escape on his own. That guy was not alright, PTSD big time.
Zack@Tasnek·
@codecovenant I love how 9/10 polls favor blue but he busts out the one that favors red lol.