Klay
@Klay0723 · 81 posts
Joined October 2024
58 Following · 2 Followers
Klay @Klay0723
@MackinleyZamora This Baste is really weak! All he ever does is throw out challenges! Ptui
0 replies · 0 reposts · 0 likes · 24 views
Mac Zamora @MackinleyZamora
AGAIN? After challenging a PNP general to a fist fight only to scurry out of the country after, the biggest Barbie of Davao now challenges a PMAer to a duel. Provocation is cheap when the one initiating it refuses to show up when the bluff is called. Time to MAN UP, coward!
[media attached]
143 replies · 164 reposts · 981 likes · 52.2K views
Klay @Klay0723
@imchristan I couldn't agree more. So shameless!!
0 replies · 0 reposts · 1 like · 339 views
Klay @Klay0723
@ABSCBNNews The Dutertes really think so highly of themselves. Who are you, VP? You are all so shameless. You haven't done anything good, and yet you still have ambitions of becoming President! Ptui.
1 reply · 1 repost · 12 likes · 4.2K views
ABS-CBN News @ABSCBNNews
Vice President Sara Duterte called the attention of the Office of the President over what she calls the 'last-minute' issuance of her travel authority, saying that her plans have changed "due to uncertainty whether I will be permitted to depart." "You will be receiving a new request soon. Please ensure that the necessary documents be processed and issued promptly, allowing sufficient time for travel preparations rather than only a few hours before the intended departure," she said. (📷: OVP) | via @HarleneDelgado
[media attached]
280 replies · 26 reposts · 220 likes · 186.7K views
Klay @Klay0723
@KyleHessling1 Of the many versions, I don't know which one is best anymore. 😅 But thanks for this, man. Is this one better than the original versions?
1 reply · 0 reposts · 1 like · 125 views
Kyle Hessling @KyleHessling1
Good morning, everyone! TL;DR: full base weights for the healed 18B merge are LIVE!

I am so blessed and excited by all of the support for my frankenmerge of Jackrong's models. Positive and negative feedback have both been excellent! This community is so incredibly patient and intelligent. You guys all rock!

Big thing: some people are having trouble when using Q4 K or V cache compression, so try to keep it at Q8, or fp16 if you have the space. I haven't tried turboquant from @no_stp_on_snek yet, but I would think it would be far superior to basic Q4 KV if you're trying to fit more context into tight VRAM!

Also, make sure you get the "healed" version of the model weights from my repo. Lots of people were downloading and sharing the unhealed version from Jackrong's repo yesterday and having issues; however, Jackrong was able to re-clone the healed repo last night as well, so either repo will work now!

Also, I have uploaded the full base weights to a new repo so all of you quantization scholars can do your worst! I'm still having some slow upload speed issues today, so I figured you guys could get mlx and tq going in the meantime with the base weights below.

I can't thank you all enough for all of the support; it's been very motivating and incredibly enjoyable to discuss with all of you! Follow me for more insane experiments, and follow my friend Jackrong on Hugging Face for more incredible fine-tunes! I will be posting his stuff here as well while he waits on getting a phone that can get his X account back! Happy Sunday! huggingface.co/KyleHessling1/…

20 replies · 17 reposts · 197 likes · 11.3K views
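Kyle's cache advice maps directly onto llama.cpp's KV-cache quantization flags. A minimal sketch, assuming a llama.cpp build with `--cache-type-k`/`--cache-type-v` support and a hypothetical local GGUF path, that builds a `llama-server` launch command using the recommended q8_0 cache instead of q4_0:

```python
import shlex

def llama_server_cmd(model_path: str, kv_type: str = "q8_0", ctx: int = 8192) -> list[str]:
    """Build a llama-server launch command with a quantized KV cache.

    kv_type: "f16" (full quality), "q8_0" (recommended in the thread),
    or "q4_0" (smallest, but reportedly problematic with this merge).
    """
    return [
        "llama-server",
        "-m", model_path,
        "-c", str(ctx),
        "--cache-type-k", kv_type,  # key-cache quantization
        "--cache-type-v", kv_type,  # value-cache quantization (requires flash attention)
        "-fa",                      # enable flash attention
    ]

# Hypothetical model path, for illustration only.
cmd = llama_server_cmd("models/healed-18b.Q4_K_M.gguf")
print(shlex.join(cmd))
```

The weights themselves can stay at Q4_K_M; it is only the runtime K/V cache that the thread suggests keeping at Q8 or fp16.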
Kyle Hessling @KyleHessling1
Hello again everyone, the official Qwopus-GLM5.1 frankenmerged 18B is LIVE! 12-16GB GPU bros, today is your day!

The healing fine-tune worked beautifully to fix the seam between the two merged models! It was unusable before the healing run; now it's surprisingly excellent! Getting a nice mix of Qwopus and GLM 5.1 intelligence in a tiny model! Of course it would be nothing without my friend Jackrong's two excellent Qwen 9B fine-tunes: Qwopus3.5-9B-v3.5 and Qwen3.5-9B-GLM5.1-Distill-v1.

Please use it and let me know what's awesome and what needs work! Send us examples of what it can do! It one-shotted a really nice snake game, as well as some other great front-end tests; all raw outputs are in the repo! And I'll link the video of them below as well!

Have fun everyone! This is just the start! I have so many potentially great theories to push this method further! I have included full documentation of the merge and finetune-heal method in the repo; feel free to experiment with it! Will link that below in this thread as well! Everything was done locally on my RTX 5090! huggingface.co/KyleHessling1/…

44 replies · 65 reposts · 767 likes · 35.2K views
Klay @Klay0723
@songjunkr @huggingface Prompt: You must follow every instruction exactly. Write exactly 8 lines. Each line must contain exactly 6 words. Topic: why self-discipline builds wealth. Do not use numbers. Do not use bullet points. Do not add any intro or conclusion.
1 reply · 0 reposts · 0 likes · 60 views
Klay @Klay0723
@songjunkr @huggingface I have tried it, both text-only and multimodal. It keeps looping and thinking on an instruction-following prompt. I don't know why that is.
3 replies · 0 reposts · 4 likes · 1.1K views
송준 Jun Song @songjunkr
Supergemma4-26b has hit #1 on @huggingface trending among all gemma4-26b models! It is trending above the Unsloth models, and even above Google's original model. The mlx and gguf versions are both on the same first page. Thank you so much! The 31b is in its final stages, and e4b is next.
[media attached]
42 replies · 22 reposts · 506 likes · 24.3K views
Eric ⚡️ Building... @outsource_
🚨 SUPER GEMMA 4 26B UNCENSORED IS INSANE

LLM WIZARD COOKING AGAIN: @songjunkr dropped SuperGemma4-26B-Uncensored GGUF v2 and it's trending on @huggingface 🤗

This thing SMOKES the regular Gemma-4 26B:
🤯 0/100 refusals (actually uncensored)
🚀 Fixed all the tool-call + tokenizer jank
⚡️ 90% faster prompt processing
🏆 Sharper, smarter, way more capable responses

Perfect local beast for llama.cpp ✅ Runs in ~18-22 GB VRAM (16.8 GB Q4_K_M file) - runs on 16 GB GPUs!

The 31B version is in the works and should be out SOON 🤯 Pull this version on Hugging Face below 👇🏻
[media attached]
100 replies · 220 reposts · 2.4K likes · 274.1K views
Klay @Klay0723
@outsource_ @songjunkr @huggingface Prompt sample (it just keeps thinking & looping): You must follow every instruction exactly. Write exactly 8 lines. Each line must contain exactly 6 words. Topic: why self-discipline builds wealth. Do not use numbers. Do not use bullet points. Do not add any intro or conclusion.
1 reply · 0 reposts · 1 like · 136 views
Klay @Klay0723
@outsource_ @songjunkr @huggingface I ran it and tested it. From my testing, I'm not happy with its performance. It keeps thinking and looping. Very poor performance when it comes to instruction following and writing.
0 replies · 0 reposts · 0 likes · 12 views
송준 Jun Song @songjunkr
Work on Supergemma4 31B is almost done.
- 0/100, perfectly uncensored
- 18% better benchmark performance
- tool-call/tokenizer fixes
- 25% faster

I built it as an entry for the Gemma4 hackathon, but that didn't feel true to the open-source spirit, so I plan to release it to everyone later today. Follow and hang on just a little longer.
[media attached]
57 replies · 47 reposts · 936 likes · 81.9K views
Klay @Klay0723
@lafaiel And why is this giving me sh*t? haha
[media attached]
0 replies · 0 reposts · 0 likes · 8 views
INIYSA @lafaiel
Qwen 3.5 27B
Gemma 4 31B
Qwen 3.5 35B-A3B
Gemma 4 26B-A4B
[media attached]
13 replies · 37 reposts · 326 likes · 43.4K views
Klay @Klay0723
@outsource_ I tried running Hermes with ollama using Gemma4 and qwen3.5, but I can't make it work. Hermes launches and replies with ordinary chat, but when I test it with tools, like creating a txt file, nothing happens. Please help.
1 reply · 0 reposts · 1 like · 73 views
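Failures like this usually mean the model emits a tool call but nothing ever executes it: the agent has to parse the call and dispatch it to a real function, then feed the result back. A stripped-down sketch of that dispatch step, with a hypothetical tool name `create_text_file` (this shows the general agent pattern, not Hermes internals):

```python
import json
import pathlib
import tempfile

def create_text_file(path: str, content: str) -> str:
    """The actual tool implementation the agent runs locally."""
    pathlib.Path(path).write_text(content, encoding="utf-8")
    return f"wrote {len(content)} chars to {path}"

TOOLS = {"create_text_file": create_text_file}

def dispatch(tool_call_json: str) -> str:
    """Execute one model-emitted tool call of the shape
    {"name": ..., "arguments": {...}} and return the tool result
    string that gets fed back to the model."""
    call = json.loads(tool_call_json)
    fn = TOOLS.get(call["name"])
    if fn is None:
        return f"error: unknown tool {call['name']}"
    return fn(**call["arguments"])

# Simulated model output (what a tool-calling reply is expected to contain):
target = tempfile.mktemp(suffix=".txt")
result = dispatch(json.dumps(
    {"name": "create_text_file",
     "arguments": {"path": target, "content": "hello"}}))
print(result)
```

If the model only ever produces ordinary chat, the tool schema may not be reaching it; if it produces calls like the JSON above but no file appears, the dispatch half of this loop is what's missing.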
Eric ⚡️ Building... @outsource_
NEW to the agentic / open-source community? I will set up the HERMES agent or OPENClaw for YOU 🫵🏻 For FREE. No strings; just wanna help out anyone in need. DM ME / comment 👇🏻👇🏻
14 replies · 3 reposts · 39 likes · 3.8K views
Klay @Klay0723
@Shishe3722801 They could've just placed a stone, a piece of steel, or anything equal to his weight, and let him leave the seat. 😅
1 reply · 0 reposts · 3 likes · 3.4K views
Shishe @Shishe3722801
The Negotiator (2025). A must-watch.
41 replies · 460 reposts · 13.4K likes · 1.2M views
Eric ⚡️ Building... @outsource_
🚨 BRAND NEW QWOPUS 27B MODEL > QWEN 3.5 27B

Jackrong just dropped Qwopus3.5-27B-v3-FP8-vLLM - it performs better than standard Qwen 3.5 27B 🤯

🧠 Claude Opus Distilled Reasoning - v3 Edition
- Full v3 upgrade with cleaner, verifiable CoT chains + RL
- Beats the previous Claude-Distilled-v2 by +3.05 points
- First version to beat Qwen3.5-27B (95.73% vs 94.51%)

⚡️ FP8 vLLM/SGLang ready
- Zero metadata issues! Loads instantly
- Quality much closer to BF16 than GGUF quants
- Blazing inference on a single GPU (4090/5090)

🚀 Built for the real use cases
- Competitive programming
- Advanced math & coding
- Multilingual (EN/CN/KR)
- Conversational + structured output

This is currently the strongest 27B reasoning model 👇🏻 huggingface.co/Jackrong/Qwopu…
[media attached]
9 replies · 16 reposts · 162 likes · 9.9K views
Eric ⚡️ Building... @outsource_
🤯 GEMMA 4 + OPUS 4.6 REASONING DROPPED

@kaiostephens' goal: produce a Gemma 4-31B reasoning adapter trained only on Opus reasoning 🧠

What the model is:
🧬 Tiny QLoRA adapter on Gemma 4 31B-it
📊 Fine-tuned on ~1,900 curated Opus examples
⚡ Trained in ~1 hour on a single GH200 GPU
📖 Fully open, Apache 2.0

What it does:
✨ Boosts overall quality, coherence, and personality
🧮 Stronger math, code, and Opus-style problem solving
💬 More refined, thoughtful responses
🏠 Built for local agents, workflows, and heavy daily use

Vs base Gemma 4 31B:
📐 Same efficient base model, no extra size or speed cost
📈 Noticeable step up in real-world depth and quality
💪 The base was already strong; this levels it up!

Grab the adapter here 👇🏻 huggingface.co/kai-os/gemma4-…
[media attached]
18 replies · 122 reposts · 1.2K likes · 83.2K views