orkun

202 posts


@alephtr

low taste tester

Joined August 2022
523 Following · 29 Followers
orkun
orkun@alephtr·
@feelug did that go out to paid subs? I didn't see anything in his public posts
Turkish
1
0
0
114
efe
efe@feelug·
Citrini said long Turkey, short Germany in his latest piece :)
Turkish
2
1
37
3.8K
orkun
orkun@alephtr·
@0xSero bro I thought he actually died for a second 😭 let's use b&w photos more responsibly hahah
English
0
0
0
274
orkun
orkun@alephtr·
@sadi_kaymaz did you get your Claude extra-usage gift, bro? Since they banned third-party services, they handed out extra credit equal to your monthly subscription fee, usable anywhere
Turkish
1
0
0
58
orkun
orkun@alephtr·
@0xSero toyotas are cool. i like toyotas
English
0
0
1
10
0xSero
0xSero@0xSero·
Let me local AI pill you:
1. It sucks compared to SOTA
2. It can’t code so well
3. It can be a good agent
4. It can be great at chat
5. It can be fine as a researcher
6. It can be a great automation engine
7. It can be tuned however you want
8. It teaches you how the sausage is made
9. It works on a plane, or in an outage
10. It costs your electric bill + hardware
11. It is better than the AI we gave up coding for a year or 2 ago.
Local AI is self defence, it is a go kit, it is a rebalancing of power. It’s delusional to think it approaches or will ever approach SOTA, the scale of private labs blows anything you can get for less than 25k USD out the water. Local AI is a bet that prices won’t stay this low, that private corporations with closed source weights can’t be trusted to stay consistent. I am more than happy to rent a Ferrari for dirt cheap, but i should also have a beater Toyota if I can afford it. Local AI is the car I can depend on to be there tomorrow, something that’s mine.
0xSero tweet media
English
126
127
1.4K
46.4K
orkun
orkun@alephtr·
@BLUECOW009 i dont think so. what model are you using? -im not checking if you are a bot or not lol-
English
1
0
1
312
@bluecow 🐮
@bluecow 🐮@BLUECOW009·
Model says it can know who it is, is that normal for agent harness?
@bluecow 🐮 tweet media
English
18
1
28
4K
orkun
orkun@alephtr·
took 600 mg after seeing this and it fucked me up so bad i just left the computer and enjoyed nature
SELO@seloesque

life changing

English
0
0
0
86
orkun
orkun@alephtr·
@sadi_kaymaz bro, if even the terminal is phrasing it like this.. the full expression is "rescued but not safe". They're paving the way to say "we made it back to base but the pilot lost his life" or "while over the gulf, the rescue team's helicopter malfunctioned and went down"
Turkish
0
0
2
257
orkun
orkun@alephtr·
@0xSero none of these would have happened if guts railed griffith just once and i'm not even joking
English
0
0
1
68
0xSero
0xSero@0xSero·
Finishing Berserk this weekend. You’d think I’d be having fun without any responsibilities/kids. I’m losing my mind
0xSero tweet media
English
15
0
124
4.4K
orkun
orkun@alephtr·
@promptprincess some matters are simply too serious for dwerking and chudlie knows this
English
1
0
2
61
✩ em ✩
✩ em ✩@promptprincess·
✩ em ✩ tweet media
✩ em ✩@promptprincess

{ "calico_critters_easter_glitch_prompt": {
"format": [ "Aspect ratio: 1:1", "One single image only (not a grid, not a contact sheet)", "Looks like a real at-home eBay listing photo", "No words anywhere. No text, no numbers, no symbols that resemble text. Zero tolerance." ],
"core_concept": "A collectible toy display photo. Everything is normal consumer toy realism, but one subtle physics glitch appears from cheap plastic optics, static, or phone-camera artifacts. It must read as plausible, not magical.",
"subject": [ "Calico Critter (authentic Epoch Calico Critters toy realism, not Calico-style)", "Calico Critter rabbit figure dressed as an Easter fairy princess in an Easter-coded outfit (no branding, only motifs)", "Flocked body with extreme fluffy fur overlay", "Very dense plush fur: fluffy cheeks, belly, arms, and legs", "Layered fiber lengths: long guard hairs plus soft underfur", "Individual strands visible with clumping and flyaways like real plush", "Micro-glitter sparkle fibers mixed through the fur, popping under harsh flash", "Factory authenticity details: visible seam line at head/body split, tiny foot scuffs, slight paint variance, dust specks" ],
"easter_accessories_and_outfit": [ "Small crown or tiara and a wand or egg charm", "Wings made from sheer organza film with glossy holographic shimmer", "Semi-transparent wings with glitter threads", "Micro fraying on edges and slight wrinkling like heat-cut fabric", "Wings attached via small molded harness with seam lines", "Pastel Easter palette: pinks, yellows, soft greens, baby blues", "Motifs only: bows, eggs, bunny-ear silhouettes, rounded cute shapes", "No branded marks, tags, or packaging text" ],
"hard_rule_clear_display_cube": [ "Entire scene inside a clear acrylic or plastic display cube", "Cube clearly visible with realistic refraction and reflections", "Micro-scratches and dust in corners", "One fingerprint smudge with wiped pattern", "Tiny edge chip or molding imperfection", "Internal reflections and harsh flash glare lines" ],
"set_inside_the_cube": [ "Miniature dreamcore Easter shrine diorama with mass-produced toy realism", "Tiny pastel props: decorated Easter eggs, mini baskets, spring flowers, satin pillow, micro pearls, lace-like decals", "Decor shapes: bows, egg motifs, heart tokens, bunny-ear silhouettes", "Everything injection-molded with seam lines, molded texture, slight misalignment, and small scuffs" ],
"props": [ "wand accessory", "mini crown stand", "decorated egg gem", "micro pearls", "lace ribbon strip", "star tokens", "mini bow clip", "tiny carrot", "pastel egg tokens", "micro chick figure" ],
"lighting": [ "Harsh on-camera flash like a real at-home listing photo", "Strong specular highlights on cube edges", "Flash hotspot reflections", "Fur sparkle fibers must pop", "Organza wings shimmer with holographic highlights", "Optional tiny internal fairy lights at low intensity" ],
"camera_and_composition": [ "Single 3/4 view or slight isometric angle", "Clear focus on Calico Critter with breathing room above head and crown", "Real phone camera vibe with slight perspective distortion", "Minor exposure inconsistency and natural depth of field", "Real tabletop environment outside cube, minimal and uncluttered", "Dark-ish room, no identifiable branding", "No stylization, no cartoon look, no warped face, no extra limbs, no melted props" ],
"physics_glitch": { "instruction": "Pick exactly one option and make it visible but still plausible. Do not explain or label it.", "options": [ "Over-refraction corner bending background lines", "Static glitter arcs gathering near plastic edge", "Tampo ghosting from faint double-print misregistration", "Shadow mismatch from reflective tabletop bounce", "Cheap camera RGB edge split at cube edge", "LED smear through plastic refraction", "Micro-scratch flare creating starburst glint", "Internal reflection suggesting impossible angle", "Dust vortex spiral cluster from static attraction" ] },
"pose": { "instruction": "Pick exactly one option. Do not explain or label it.", "options": [ "Standing victory pose with wand raised and wings fully spread", "Sitting cross-legged holding heart gem, wings folded like cloak", "Mid-jump captured by flash with motion blur", "Leaning forward shocked expression, tiara tilted", "One-knee kneel casting pose with wand outward", "Back turned looking over shoulder mischievously", "Twirling pose with extended arm and flared dress", "Reclining on cushion with smug grin", "Tiptoe sneaking pose with wand hidden behind back" ] },
"absolute_text_ban": [ "No words, letters, numbers, captions, labels, receipts, signage, UI overlays, or watermarks", "No packaging text, stamps, tags, or branded wordmarks", "No readable symbols resembling text anywhere" ],
"final_check": [ "One single square image only", "Clear acrylic cube visible with realistic imperfections", "Calico Critter Easter fairy princess rabbit with plush sparkle fur and organza wings", "Exactly one pose and one physics glitch applied", "Zero text or text-like symbols anywhere" ] },
"usage_note": "To generate variations, copy the full structured prompt into your image generator and randomly select one item from physics_glitch.options and one from pose.options each time." }

ZXX
5
0
42
1.8K
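The usage_note at the end of the prompt above says to sample one entry from physics_glitch.options and one from pose.options per generation. A minimal Python sketch of that sampling step (the option lists are abbreviated copies from the prompt; the function name is mine):

```python
import random

# Abbreviated copies of the option lists from the structured prompt above.
PHYSICS_GLITCH_OPTIONS = [
    "Over-refraction corner bending background lines",
    "Static glitter arcs gathering near plastic edge",
    "Tampo ghosting from faint double-print misregistration",
    "LED smear through plastic refraction",
]
POSE_OPTIONS = [
    "Standing victory pose with wand raised and wings fully spread",
    "Mid-jump captured by flash with motion blur",
    "Twirling pose with extended arm and flared dress",
]

def pick_variation(rng=random):
    """Pick exactly one glitch and one pose for a single render."""
    return rng.choice(PHYSICS_GLITCH_OPTIONS), rng.choice(POSE_OPTIONS)

glitch, pose = pick_variation()
print(f"physics_glitch: {glitch}")
print(f"pose: {pose}")
```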
orkun
orkun@alephtr·
@JasonBotterill was kinda expected after glm 5 was trained entirely on ascend chips
English
0
0
1
73
JB
JB@JasonBotterill·
Lmao
JB tweet media
English
1
0
16
698
Lotto
Lotto@LottoLabs·
@alephtr When he says spend I’m assuming he means token cost per task done, he could mean time but spend makes me feel otherwise
English
1
0
1
78
orkun
orkun@alephtr·
@teortaxesTex how do you think it compares against the goated qwen3.5 27b?
English
0
0
2
615
Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞)
Gemma 4 31B might be the best open dense model on the market. Gemma-MoE is… not close. Surprising.
Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞) tweet media
Artificial Analysis@ArtificialAnlys

Google has released Gemma 4, four open weights models with multimodality support. The flagship 31B model (39 on the Intelligence Index) uses ~2.5x fewer output tokens than Qwen3.5 27B (Reasoning, 42) but trails it by 3 points on intelligence.
@GoogleDeepMind's Gemma 4 includes four sizes: Gemma 4 31B (dense, 39 on the Intelligence Index), Gemma 4 26B A4B (MoE, 4B active, 31), Gemma 4 E4B (8B, 19), and Gemma 4 E2B (5.1B total, 2.3B active, 15). Gemma 3 was instruct-only at 27B, 12B, 4B, 1B, and 270M; Gemma 4 adds reasoning mode, native video and image support across all sizes (with audio input for Gemma 4 E2B and E4B), doubled context windows, and Apache 2.0 licensing. The nearest open weights models by intelligence to the 31B are Qwen3.5 27B (Reasoning, 42), GLM-4.7 (Reasoning, 42), MiniMax-M2.5 (42), and DeepSeek V3.2 (Reasoning, 42). Qwen3.5 also supports images and video natively; DeepSeek V3.2 and MiniMax-M2.5 are text-only.
Key benchmarking results for the reasoning variants:
➤ Gemma 4 represents a large intelligence jump over Gemma 3. Gemma 4 31B (Reasoning, 39) is +29 points over Gemma 3 27B Instruct (10), Gemma 4 E4B (19) is +13 points over Gemma 3n E4B Instruct (6), and Gemma 4 E2B (15) is +10 points over Gemma 3n E2B Instruct (5). Context windows also doubled from 128K to 256K for the larger models, and increased 4x from 32K to 128K for E2B and E4B
➤ Gemma 4 31B (Reasoning, 39) trails Qwen3.5 27B (Reasoning, 42) by 3 points, primarily due to weaker agentic performance. On non-agentic evaluations, the models are more competitive: Gemma 4 31B leads on SciCode (43% vs 40%) and TerminalBench Hard (36% vs 33%), while scoring similarly on GPQA Diamond (86% vs 86%), IFBench (76% vs 76%), and HLE (23% vs 22%)
➤ Gemma 4 31B is notably token efficient, using 39M output tokens to run the Intelligence Index vs 98M for Qwen3.5 27B (Reasoning). This is ~2.5x fewer output tokens for a model scoring 3 points lower. For context, the other models at the 42-point intelligence level also use significantly more tokens: MiniMax-M2.5 (56M), DeepSeek V3.2 (Reasoning, 61M), and GLM-4.7 (Reasoning, 167M)
➤ Gemma 4 26B A4B (Reasoning, 31) activates just 4B of its 27B total parameters and is ahead of select peers in the ~3-4B active parameter range. Qwen3.5 35B A3B (Reasoning, 37) leads models with ~3B active parameters and is 6 points ahead of Gemma 4 26B A4B, with notably stronger agentic capabilities (Agentic Index 44 vs 32). GLM-4.7-Flash (Reasoning, 30) scores slightly lower than Gemma 4 26B A4B with 3B active parameters
➤ The smaller Gemma 4 E4B and E2B models perform better on AA-Omniscience than the larger Gemma 4 variants. Gemma 4 E4B scores -20 on AA-Omniscience and Gemma 4 E2B scores -24, both substantially better than Gemma 4 31B (-45) and comparable to or better than much larger models like DeepSeek V3.2 (Reasoning, -21). The larger Gemma 4 models' AA-Omniscience scores are in line with Qwen3.5 27B (-42) and Gemma 4 26B A4B (-48)
➤ Gemma 4 E2B has 2.3B active parameters and 5.1B total, designed for on-device deployment. In 4-bit quantization, the model weights fit in under 3GB of RAM, making it suitable for background tasks, basic function calling, and multimodal understanding on mobile and edge hardware
Key model details:
➤ Context window: 256K tokens (31B, 26B A4B), 128K tokens (E4B, E2B)
➤ Multimodality: All models support text, images, and video input. E2B and E4B also support native audio input
➤ License: Apache 2.0. Gemma 3 models are available under a "Gemma Terms of Use" license
➤ Size/Parameters: 31B dense, 27B total/4B active (26B A4B MoE), 8B (E4B), 5.1B total/2.3B active (E2B)
➤ API availability: The two larger models are available for free on Google AI Studio. There are several third-party providers hosting the larger Gemma 4 variants such as @novita_labs, @LightningAI, and @parasailnetwork

English
13
6
196
17.8K
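The ~2.5x token-efficiency claim and the under-3GB figure for E2B in the quoted post above are quick arithmetic to verify. A back-of-the-envelope check in Python (assuming ~0.5 bytes per parameter at 4-bit, ignoring KV cache and runtime overhead):

```python
# Output tokens used to run the Intelligence Index, per the post above.
gemma4_31b_tokens = 39e6   # Gemma 4 31B
qwen35_27b_tokens = 98e6   # Qwen3.5 27B (Reasoning)
ratio = qwen35_27b_tokens / gemma4_31b_tokens
print(f"Qwen3.5 27B uses {ratio:.1f}x the output tokens")  # ~2.5x

# Gemma 4 E2B: 5.1B total parameters, 4-bit quantization (~0.5 bytes/param).
e2b_weights_gb = 5.1e9 * 0.5 / 1e9
print(f"E2B 4-bit weights: ~{e2b_weights_gb:.2f} GB")  # ~2.55 GB, under 3 GB
```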
orkun
orkun@alephtr·
@LottoLabs or the context window problem
English
1
0
1
71
orkun
orkun@alephtr·
@LottoLabs he might be referring to speed. since most of the tokens are used for reasoning you would be essentially waiting more
orkun tweet media
English
2
0
1
1.1K
orkun
orkun@alephtr·
impressive. let's see m5 max metrics. maybe we could even sneak in a MoE model performance there as well. or maybe a power draw comparison. heck, maybe add MLX performance as well while we're at it
orkun tweet media
NVIDIA AI PC@NVIDIA_AI_PC

.@GoogleGemma 4 31B is up to 2.7X faster on RTX using llama.cpp. Thanks to @ggerganov for working with us to make this model fast.

English
0
0
0
76
orkun
orkun@alephtr·
@loktar00 interesting. the chuky bois? not expecting dead silent obviously but quieter than the fans of the chassis?
English
1
0
0
42
Loktar 🇺🇸
Loktar 🇺🇸@loktar00·
Alright got all 6x3090s hooked up.. but I might go back down to 4 on this machine for now, the secondary PSU backfeeds even though I'm using an add2psu.. I guess it makes sense but bleh. With 4 it was all good. Not too bad honestly, RPC is what I was using forever anyway. haha not worth risking it.
Loktar 🇺🇸 tweet media
English
23
2
74
6K
orkun
orkun@alephtr·
thanks for the detailed answer 🙏 beyond prefill and generation speed, do you think 3x4090 would have any other advantage? because setting aside the motherboard (a separate headache in Turkey :)) and the PSU, looking purely at card cost, I can get 4x used 3090s for the price of a single used 4090. and even without 4th-gen tensor cores I still end up with the same 96 GB..
Turkish
0
0
0
96
Alican Kiraz
Alican Kiraz@AlicanKiraz0·
@alephtr Greetings, I would humbly prefer an Nvidia CUDA-based GPU, because the best toolset and the most room for tinkering are on Nvidia cards right now. The MLX side is still very young but has enormous potential, so I would build a 3090 x4 or 4090 x3 stack. 🙏🏻
Turkish
1
0
4
195
Alican Kiraz
Alican Kiraz@AlicanKiraz0·
I figured I'd try the M3 Ultra with Gemma 4-31b in GGUF (since full mlx support isn't out yet); generation is very poor, hopefully it improves as the mlx-cuda conversion library matures. 10.52 tok/sec, 711 tokens
Alican Kiraz tweet media
Turkish
2
1
14
3.6K
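For scale, the throughput figure in the tweet above implies about a minute of wall time for that run; a trivial check:

```python
tokens = 711    # generated tokens, from the tweet above
rate = 10.52    # tok/sec reported for Gemma 4-31b GGUF on M3 Ultra
seconds = tokens / rate
print(f"~{seconds:.0f} s of generation time")  # ~68 s
```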