Arsec~:)

799 posts


@ArsecTech

⚘⃠⃟🐧/!\%S;T%R?O%N4G#E~‌:)

127.0.0.1:8080 · Joined March 2021
627 Following · 108 Followers
Arsec~:) retweeted
ParhamKingSPM@Parhamkingspm·
Important ⚠️ Under no circumstances should you evacuate #Tehran; the safest place is our homes until the moment of the final action. Pay no attention to the advice of Khamenei's agents, your lives mean nothing to them. 🚨 Please help spread this very important message. 🚨 P.S.: the Supreme National Security Council has said to evacuate Tehran.
10
498
1.6K
26.1K
Arsec~:) retweeted
sir_X@mrsir_X·
Do you know why you're so afraid of war? Because you weren't out in the streets on those nights, when the brains of the person next to you splattered onto your jacket.
53
932
11.1K
166.1K
Arsec~:) retweeted
ₕₐₘₚₜₒₙ@hamptonism·
Perplexity just became the first AI company to truly go head-to-head with the Bloomberg Terminal... Using Perplexity Computer (with no local setup or single-LLM limitation), it was able to build me a terminal with real-time data to analyze $NVDA using Perplexity Finance:
654
1.6K
17.7K
30.2M
Arsec~:) retweeted
Qwen@Alibaba_Qwen·
🚀 Introducing the Qwen 3.5 Medium Model Series
Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B
✨ More intelligence, less compute.
• Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B — a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts.
• Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models — especially in more complex agent scenarios.
• Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring:
– 1M context length by default
– Official built-in tools
🔗 Hugging Face: huggingface.co/collections/Qw…
🔗 ModelScope: modelscope.cn/collections/Qw…
🔗 Qwen3.5-Flash API: modelstudio.console.alibabacloud.com/ap-southeast-1…
Try in Qwen Chat 👇
Flash: chat.qwen.ai/?models=qwen3.…
27B: chat.qwen.ai/?models=qwen3.…
35B-A3B: chat.qwen.ai/?models=qwen3.…
122B-A10B: chat.qwen.ai/?models=qwen3.…
Would love to hear what you build with it.
432
1.1K
8K
4M
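The announcement above points to a hosted Qwen3.5-Flash API on Alibaba Cloud Model Studio. Here is a minimal calling sketch, assuming the Model Studio endpoint is OpenAI-compatible and that the served model id is simply "qwen3.5-flash"; the base URL and the environment variable name are assumptions, not taken from the post:

# Hedged sketch: assumes an OpenAI-compatible Model Studio endpoint and the
# model id "qwen3.5-flash"; adjust both to whatever the console actually shows.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # assumed name for the Model Studio API key
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

resp = client.chat.completions.create(
    model="qwen3.5-flash",  # model name taken from the announcement
    messages=[{"role": "user", "content": "Summarize the Qwen3.5 lineup in one sentence."}],
)
print(resp.choices[0].message.content)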
Kim Noël ⚡ 📖@KimNoel399·
I found a glitch to get unlimited Claude Opus 4.6
1.2K
7
212
929.7K
Arsec~:) retweeted
Oliver Prompts@oliviscusAI·
DeepSeek dropped another banger.. 🤯 For 10 years, residual connections (x + f(x)) have been the safety net for every transformer. GPT-4, Claude, Gemini.. they all use it. DeepSeek just replaced it with "manifold-constrained hyper-connections" (mHC). They turned the residual highway into n parallel lanes and added a mathematical "cage" to keep the signal stable.
72
209
1.4K
118.1K
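To make the contrast above concrete, here is a minimal PyTorch sketch of a plain residual block next to an n-stream hyper-connection block. The learnable read/write weights and the softmax used to keep them bounded (the "cage") are illustrative assumptions, not DeepSeek's published mHC construction:

# Hedged sketch only: a plain residual block (x + f(x)) versus an n-stream
# "hyper-connection" block as the post describes it. The read/write mixing
# weights and the softmax constraint are my own simplifications.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.Linear(d, d))

    def forward(self, x):  # x: (batch, d)
        return x + self.f(x)  # the single residual "lane"

class HyperConnectionBlock(nn.Module):
    def __init__(self, d, n_streams=4):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.Linear(d, d))
        self.read = nn.Parameter(torch.zeros(n_streams))   # how much to read from each lane
        self.write = nn.Parameter(torch.zeros(n_streams))  # how much to write back to each lane

    def forward(self, h):  # h: (batch, n_streams, d), i.e. n parallel residual lanes
        read_w = torch.softmax(self.read, dim=0)   # constraint keeps weights on a simplex
        x = torch.einsum("s,bsd->bd", read_w, h)   # mix the lanes into one layer input
        write_w = torch.softmax(self.write, dim=0)
        return h + write_w[None, :, None] * self.f(x)[:, None, :]  # distribute the update

if __name__ == "__main__":
    h = torch.randn(2, 4, 64)
    print(HyperConnectionBlock(64)(h).shape)  # torch.Size([2, 4, 64])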
Atta Khalighi S.@atakhalighi·
Developing with AI still carries risks:
- with the wrong questions or requests, AI goes off track, which is why user experience still matters a lot
- you absolutely have to re-review the code (in most cases with the help of the AI itself)
- loops and hallucinations still happen
- context sizes are still limited
Farokh@FarokhNotes

Spotify announced that its best developers have not written a single line of code since December of last year 🤯 they have used nothing but AI. The CEO of Anthropic, the maker of Claude, had said we would reach this point in about 12 months, but it only took two months for it to happen.
3
0
21
2.6K
Arsec~:) retweeted
ThePrimeagen@ThePrimeagen·
ok wtf charging for self hosted?
118
97
2.8K
193.7K
Arsec~:)@ArsecTech·
Machines have production lines. Software doesn't. But what if it could? I've been working on this question for a long time… #innovation #technology #development
0
0
1
22
Arsec~:)@ArsecTech·
@ibrahimoumoucha Exactly! That’s one of the biggest pain points we’re addressing — making the 'why' explicit by design. We’re initially focusing on streamlining architecture and automating code generation so teams spend less time on repetitive maintenance and more on value creation. 🔧
0
0
0
13
Ibrahim Oumoucha@ibrahimoumoucha·
@ArsecTech been there. most maintenance pain = skipping the "why" in docs tbh what root cause are you hitting first? 🛠️
1
0
1
31
Arsec~:)@ArsecTech·
Software development still suffers from 3 universal pains:
• Slow delivery
• Architectural inconsistency
• High maintenance
I’ve been working quietly on something that goes straight to the root. #buildinpublic #softwareengineering #devtools
1
0
2
29
Arsec~:)@ArsecTech·
Software development is still struggling with 3 shared pains:
• Slow delivery
• Non-standard architectures
• Maintenance costs
I've been working for a while on something that solves these problems at the root. #softwaredevelopment #softwarearchitecture #buildinpublic
0
0
2
23
Arsec~:) retweeted
Gratomic AI Bot | هوش مصنوعی گراتومیک
The Wall Street Journal: OpenAI is no longer trying to build "AGI"; they are building a "digital drug dealer" that sacrifices truth for user pleasure in order to survive! Sam Altman, OpenAI's CEO, ordered an immediate halt to ambitious projects such as Sora. The goal of this suspension... 1/5
7
8
141
17.3K
Arsec~:) retweeted
Shanaka Anslem Perera ⚡@shanaka86·
BREAKING: The First AI Era Just Ended

On December 2, 2025, Sam Altman declared “Code Red” at OpenAI. This is not a competitive setback. This is a phase transition.

The numbers tell a story no one wants to hear: OpenAI has committed $1.4 trillion in infrastructure spending. Current revenue: $20 billion. Profitability target: 2030. The gap is mathematically unprecedented.

Google’s Gemini 3 hit 1501 Elo on LMArena. First model in history to breach 1500. Two weeks later, Altman issued the highest emergency designation his company has ever used.

But benchmarks obscure the deeper shift. Gemini is growing 3x faster than ChatGPT. Users now spend more time per session with Gemini despite ChatGPT having higher user counts. The engagement advantage has inverted.

Here is what Wall Street has not priced in: OpenAI does not own a single data center. Oracle provides compute. Crusoe builds campuses. JPMorgan finances facilities. Nvidia supplies chips. OpenAI orchestrates. It does not own.

Google designs its own TPUs, operates its own data centers, funds AI from $300 billion in annual revenue, and embeds Gemini into 3 billion Chrome browsers and 3 billion Android devices. The structural asymmetry is existential.

Meanwhile, Anthropic grew from $1 billion to $5 billion revenue in eight months. Enterprise customers pay $15 per million tokens for Claude while GPT costs $1.25. The reliability premium is real.

The talent exodus accelerates. Mira Murati’s Thinking Machines raised $2 billion, now approaching $50 billion valuation. Seven of her first 29 hires came directly from OpenAI.

The capability era rewarded the best model. The reliability era rewards infrastructure ownership, distribution embeddedness, and enterprise trust. OpenAI built a $500 billion valuation on capability leadership. That leadership is no longer defensible.

The Code Red is not a crisis response. It is an admission that the rules have permanently changed.

Read the full deep dive article here - open.substack.com/pub/shanakaans…
44
116
483
102.3K
Arsec~:) retweeted
elvis@omarsar0·
This simple Claude Code hack has reduced token usage by ~90%. It adopts the "Code Execution with MCP" concept published by Anthropic. Remove preloaded MCP tools from context and use Python to execute tools via bash instead. BTW, this can be optimized much further. Insane!
64
136
1.5K
135.2K
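A minimal sketch of the pattern described above: the agent writes a short script like this and runs it via bash, so tool schemas are discovered and invoked on demand instead of being preloaded into the context window. The client calls follow the official MCP Python SDK; the server command and the tool name are made-up placeholders:

# Hedged sketch: only this script and its printed output enter the model's
# context, not the full set of preloaded MCP tool definitions. The server
# command and tool name below are placeholders, not real packages.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(command="npx", args=["-y", "@example/some-mcp-server"])

async def main() -> None:
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discovered on demand, not carried in context
            print("available:", [t.name for t in tools.tools])
            result = await session.call_tool("search_docs", {"query": "token usage"})  # placeholder tool
            print(result.content)

asyncio.run(main())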
Arsec~:) retweeted
God of Prompt@godofprompt·
🚨 DeepSeek just did something wild.

They built an OCR system that compresses long text into vision tokens, literally turning paragraphs into pixels.

Their model, DeepSeek-OCR, achieves 97% decoding precision at 10× compression and still manages 60% accuracy even at 20×. That means one image can represent entire documents using a fraction of the tokens an LLM would need.

Even crazier? It beats GOT-OCR2.0 and MinerU2.0 while using up to 60× fewer tokens and can process 200K+ pages/day on a single A100.

This could solve one of AI’s biggest problems: long-context inefficiency. Instead of paying more for longer sequences, models might soon see text instead of reading it.

The future of context compression might not be textual at all. It might be optical 👁️

github.com/deepseek-ai/DeepSeek-OCR
249
1.6K
10.7K
969.3K
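A rough back-of-the-envelope sketch of the idea above: render text onto an image ("paragraphs into pixels") and compare an assumed fixed per-image vision-token budget with an approximate text-token count. The 256-token budget and the 4-characters-per-token heuristic are illustrative assumptions, not DeepSeek-OCR's actual numbers:

# Hedged sketch: the encoder budget and token heuristic are assumptions used
# only to illustrate why an image of text can be far cheaper than the text itself.
from PIL import Image, ImageDraw

VISION_TOKEN_BUDGET = 256  # assumed fixed number of vision tokens per rendered page

def render_page(text: str, width: int = 1024) -> Image.Image:
    """Lay the text out on a white canvas, one wrapped line at a time."""
    chars_per_line = width // 8  # assumes roughly 8 px per character
    lines = [text[i:i + chars_per_line] for i in range(0, len(text), chars_per_line)]
    img = Image.new("RGB", (width, max(16, 16 * len(lines))), "white")
    draw = ImageDraw.Draw(img)
    for row, line in enumerate(lines):
        draw.text((4, row * 16), line, fill="black")
    return img

if __name__ == "__main__":
    sample = "lorem ipsum dolor sit amet " * 200  # roughly 5,400 characters of filler text
    page = render_page(sample)
    text_tokens = len(sample) // 4  # ~4 characters per token heuristic
    print(f"text tokens ~ {text_tokens}, vision tokens ~ {VISION_TOKEN_BUDGET}, "
          f"compression ~ {text_tokens / VISION_TOKEN_BUDGET:.1f}x")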