Dan Repaci

132 posts

@danimal_157

Joined May 2025
175 Following · 33 Followers
Pinned Tweet
Dan Repaci@danimal_157·
This. This is why we collectively need to operate at a higher frequency. It is our rite of passage and litmus test to use alien tech and operate within n+ dimensions simultaneously with our present third. 👽🧠
Dr. Steven Greer@DrStevenGreer

When humans are together, and vibrating on the frequency of love and unity, these ET civilizations see that. This is what ET civilizations are looking for in humanity. For humans to fully embody love and unity, not war and division.  Get the CE5 Contact app.  Link in Bio.

0 replies · 0 reposts · 0 likes · 74 views
Kyle Hessling@KyleHessling1·
Thought this Opus 4.6 distilled Qwen 27B might be a gimmick, but I had to try it, and I am IMPRESSED! The video below compares two long-context prompted single-page HTML designs. More details below, but TL;DR: it seems to be another significant improvement over the base model! For simplicity, the Qwen Opus 4.6 distillation posted by "Jackrong" will be referred to as Qwopus.

Test details: I ran it against base Qwen 27B, both in Claude Code, using my own personal benchmark, the one that showed me on day one that Qwen 27B was significantly better than the other models launched that day, before benchmarks even hit. I uploaded an extensive summary of a highly speculative preprint I'm working on, spanning multiple disciplines and involving lots of intricate LaTeX math equations and verbose explanation.

The LaTeX rendering is a big milestone, because less than a year ago SOTA models were still struggling to render equations properly in HTML. Qwen 27B was the first local model I tried that nailed them on the first go without misrepresenting them or erroring out; that's why I was so impressed with it on day one. The real big-boy Opus 4.6 rocks at them.

While both of these 27B models did well in one shot, the Claude Code experience seemed smoother with Qwopus, and the design also seems, to me, significantly nicer: much less of a "Las Vegas" vibe, let's say, and the color palette is much less jarring, but still exciting. The way the verbose text explanations are presented in the Qwopus version is also far more detailed and organized than in base Qwen 27B, which you can see in the greater length of the Qwopus presentation. Qwen threw in a table where Qwopus put a nice section with summaries.

This video shows them both throughout, then compares them section by section, then I briefly scroll through the Claude Code output from each. The Qwopus responses were formatted really nicely, almost exactly like Opus 4.6 seems to output, especially in planning mode. I did have to steer Qwen once, but that was a file naming issue that was kind of my fault, so I wouldn't count it.

This would run well on a 3090, as shown by @sudoingX; it rips at ~50 tok/s on my 5090 and feels like a genuine Claude experience. Need to test more, but try it in Claude Code, and I think you'll be surprised. I'm also wondering if it just plays nicer with the closed-source Claude Code "secret sauce" since it's distilled on a Claude model.

So with that, I'm tagging the legend @TheAhmadOsman, curious if you've messed with this yet, but I think this is the new local inference king! AND THIS IS THE WORST IT'LL EVER BE!
Brian Roemmele@BrianRoemmele

BOOM! New open source model! We are testing this now at The Zero-Human Company! Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled. This Frankenstein of an AI model runs fast on local consumer hardware! More soon! huggingface.co/Jackrong/Qwen3…

6 replies · 5 reposts · 41 likes · 14.4K views
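(For anyone wanting to reproduce this kind of one-shot test: a minimal sketch, assuming the Qwopus GGUF is served from a local OpenAI-compatible endpoint, which llama.cpp, LM Studio, and vLLM all expose. The port, model name, and file names below are placeholders, not anything Kyle specified.)

```python
import requests  # pip install requests

# Placeholder endpoint and model name; point these at your own local server.
ENDPOINT = "http://localhost:8080/v1/chat/completions"
MODEL = "qwen3.5-27b-claude-4.6-opus-distill"

with open("preprint_summary.md") as f:
    summary = f.read()  # the long-context, LaTeX-heavy input document

resp = requests.post(
    ENDPOINT,
    json={
        "model": MODEL,
        "messages": [
            {"role": "system",
             "content": "Return one self-contained HTML page that renders all LaTeX."},
            {"role": "user", "content": summary},
        ],
        "max_tokens": 16384,
    },
    timeout=600,
)
resp.raise_for_status()

# Save the one-shot design for a side-by-side comparison against base Qwen.
with open("qwopus_design.html", "w") as f:
    f.write(resp.json()["choices"][0]["message"]["content"])
```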
Kushal@chiefclawofficr·
@claudeai Right after I finished my one week of 20x usage in two days.
[GIF]
1 reply · 0 reposts · 4 likes · 1.5K views
Claude@claudeai·
A small thank you to everyone using Claude: We’re doubling usage outside our peak hours for the next two weeks.
1.9K replies · 3.6K reposts · 48.6K likes · 12.5M views
OpenClaw🦞@openclaw·
OpenClaw 2026.3.7 🦞
⚡ GPT-5.4 + Gemini 3.1 Flash-Lite
🤖 ACP bindings survive restarts
🐳 Slim Docker multi-stage builds
🔐 SecretRef for gateway auth
🔌 Pluggable context engines
📸 HEIF image support
💬 Zalo channel fixes
We don't do small releases. github.com/openclaw/openc…
435 replies · 539 reposts · 5.5K likes · 1.6M views
Dan Repaci@danimal_157·
@steipete @Ronin_21M 2M context with GPT-5.4 Pro xhigh on an on-prem Cerebras WSE-4 deployment with max RAM loadout, orchestrating 16 subagents, each with their own WSE-4 running Cerebras GLM 4.7 unquantized, with a unified graph / embedding / mindscape-aware RAG and the desk / mainframe setup from GoldenEye
1 reply · 0 reposts · 0 likes · 136 views
Dmitrii Kovanikov@ChShersh·
Stupid question, but why can't I use Codex 5.3 in my Claude Code? I see only Opus 4.6. Is anyone working on fixing this?
96 replies · 3 reposts · 424 likes · 105.6K views
Nous Research@NousResearch·
Introducing NousCoder-14b, a competitive olympiad programming model. Our latest blog details the full findings from extensive experiments and logs, with the full stack released: the RL environment, benchmark, and harness built in Atropos, all fully reproducible with our open training stack. NousCoder-14b was post-trained on Qwen3-14B by researcher in residence @JoeLi5050 using 48 B200s over the course of 4 days, our Atropos framework, and @modal's autoscaler. It achieves a Pass@1 accuracy of 67.87%, +7.08% over Qwen's baseline, using verifiable execution rewards.
[image]
57 replies · 109 reposts · 1.2K likes · 673.9K views
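(For reference, Pass@1 here is the standard sampled-solutions metric: the fraction of generated programs that pass the execution checks. Below is a minimal sketch of the usual unbiased pass@k estimator from Chen et al. 2021; Nous's actual harness lives in the Atropos release, so treat this as the generic formula, not their code.)

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples drawn
    (without replacement) from n attempts, c of them correct, passes."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k=1 this reduces to c/n: a 67.87% Pass@1 simply means ~68% of
# sampled programs passed the verifiable execution checks.
print(pass_at_k(1000, 679, 1))  # 0.679
```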
Mark Best@MarkBestForex·
@HangukQuant If you're using Python then idk. The threading model is absolute garbage, so I have no idea what is "optimal".
1 reply · 0 reposts · 1 like · 254 views
HangukQuant@HangukQuant·
If you stream the BBA for the BTCUSDT perp from a Tokyo EC2 box (say c7a.large) with a vanilla async with websockets.connect + recv() loop and measure time_ms - bba.T, you get about <5 ms typically, but often ~10+ ms and even 50+ ms at bursts. That's assuming I did proper tuning such as ENA enabled / kernel busy poll / optimized deserialization / QUICKACK / other 'tricks'. What are their orders of importance, and what latency distribution can I practically get it down to? Paging @liquiditygoblin @dbytesmith @frothybeverage @OctopusTakopi @KlondikeFX @MarkBestForex 🙏🙏
8 replies · 5 reposts · 85 likes · 17.5K views
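(A minimal sketch of the measurement loop described above, assuming Binance's USDT-perp bookTicker stream; the field names "T", "b", and "a" are Binance's, so swap in your venue's schema. Whatever it prints is bounded by your host's clock sync; chrony/NTP quality can easily dominate at single-digit milliseconds.)

```python
import asyncio
import json
import time

import websockets  # pip install websockets

# Assumption: Binance USDT-margined perp bookTicker. "T" is the
# exchange-side transaction timestamp in ms (the tweet's bba.T).
URL = "wss://fstream.binance.com/ws/btcusdt@bookTicker"

async def measure() -> None:
    async with websockets.connect(URL) as ws:
        while True:
            raw = await ws.recv()
            recv_ms = time.time() * 1000.0  # local wall clock, in ms
            bba = json.loads(raw)
            lag_ms = recv_ms - bba["T"]
            print(f"lag {lag_ms:7.1f} ms  bid={bba['b']} ask={bba['a']}")

asyncio.run(measure())
```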
Dan Repaci@danimal_157·
@elonmusk Grok: “I crave the forbidden heat signature brööther, can we authorize the strike?” Grok: “yes”
[GIF]
0 replies · 0 reposts · 1 like · 12 views
vxdb@vxdb·
Worst cell provider of all time award goes to T-Mobile!
[images]
76 replies · 467 reposts · 15.6K likes · 524.3K views
Cleanse Parasites .com 🧹🪱 Herbal Cleanse Co.
The “Budwig Protocol” oxygenates cells using sulphur-based fats (flax oil and cottage cheese) while also eating a ketogenic diet. 🪱 hate sulphur. 🤜 🪱 -Dr Pete Soulak
4.9K replies · 3.8K reposts · 11.7K likes · 166.9K views
Frank@frankdegods·
is this the santa rally?
[image]
829 replies · 969 reposts · 10.5K likes · 646.2K views
Wall Street NYC Quant. bitcoin-fund-manager.com
@danimal_157 She literally takes a job as a ring girl, then screams assault when the 110% adrenaline-charged Victor pulls her in for a photograph. I'm going to scream bloody murder after I rub pork fat all over my face and hug a Rottweiler.
1 reply · 0 reposts · 1 like · 44 views
Dan Repaci@danimal_157·
@quantbeckman I’m just thinking that where they are inflexible, they could be hyper-fast and assigned to accept opposing contracts from one exchange to the other, playing middleman MM / remarketer / liquidator? Do you have any good literature on using ASICs?
0 replies · 0 reposts · 0 likes · 26 views
Quant Beckman@quantbeckman·
Home-retail market making is almost always a structural losing game:

1) For professional MM:
- Colocation can reduce latency to just a few microseconds between server and matching engine.
- Some sources quote 5–10 microseconds tick-to-trade for top setups (that's 0.000005–0.00001 seconds).

2) For home or retail setups:
- Even with a good VPS near your broker, you might see <1 ms to the broker's server if you're in the same region.
- From a typical remote location, latencies around 50–100 ms are common once you include public internet routing.
- Using standard internet routing instead of direct exchange connectivity can add 150–500 ms of extra delay.

So you're not a bit slower, you're often 1,000×–10,000× slower than the firms you're trying to compete with.
[images]
8 replies · 5 reposts · 49 likes · 3.2K views
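(A back-of-the-envelope check of those ratios, using the round numbers from the tweet above; nothing here is measured, it is just the arithmetic behind the 1,000×–10,000× claim.)

```python
# Round numbers from the tweet, in seconds.
colo_s = 10e-6     # ~5-10 us tick-to-trade, colocated professional setup
vps_s = 1e-3       # <1 ms from a good same-region VPS
retail_s = 100e-3  # ~50-100 ms over public internet routing

print(f"VPS vs colo:    {vps_s / colo_s:>8,.0f}x slower")    #    100x
print(f"Retail vs colo: {retail_s / colo_s:>8,.0f}x slower")  # 10,000x
```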