Murat Aslan
@iammurataslan
AI Operator
277 posts
Joined October 2020
845 Following · 160 Followers
Murat Aslan reposted
Mert Koseoglu — Context Mode
context-mode just made @nateherk's top 6 Claude Code skills out of 100+ tested. Watch skill #5 at 8:06 → youtu.be/eRS3CmvrOvA?t=…

Every install, every issue, every share got us to:
→ 125,400+ users
→ 103.9k+ npm installs
→ 21.4k+ Claude Code marketplace installs
→ 14 agents supported

Thank you, Nate. Thank you to every person who put context-mode here. Genuinely grateful.
Murat Aslan reposted
Mert Koseoglu — Context Mode
context-mode just crossed 120,000 users. Solo built. $0 funding.

14 AI coding agents: Claude Code, Cursor, Codex CLI, Gemini CLI, VS Code Copilot, JetBrains Copilot, OpenCode, OpenClaw, KiloCode, Qwen Code, Antigravity, Kiro, Zed, Pi.

98% context saved per session. 56 KB → 299 bytes. 30-minute sessions → 3 hours.

I did not get here alone. Every issue, every PR, every DM, every retweet, every quiet install: thank you. Genuinely. You got us here. I will never stop being grateful for that.

Now I need 3 partners to take this from 120k to a million:
→ a DevRel who opens the camera in English
→ a growth hacker who builds AI listening pipelines (no spam, no sock puppets)
→ a community lead who knows the dev creator graph cold

What you get now:
→ contributor credit pinned in the README and every CHANGELOG
→ a shoutout from the project account on every release
→ public proof of work for your next interview

What you get if we raise:
→ first offer at market rate
→ founding equity, real numbers on paper

What you do not get:
→ a salary today
→ vague equity with no math
→ a promise this works out

How to apply: ship a public deliverable in the issue.
→ DevRel: a 60-second demo, post the link
→ Growth: 3 contextual YouTube comments + your heuristic
→ Community: 5 creators + personalized DM drafts

The first good one in each track wins. AI-assisted is fine. AI-pasted is obvious and disqualifies.

If you have ever wanted to ride a small OSS project from 120,000 to a million users before anyone else has heard of it, this is that ride. Ship something: github.com/mksglu/context…

Mert
Murat Aslan reposted
BuilderMare @buildermare
Istanbul, are you ready? 🇹🇷 ClawCon is just 2 days away. Come, meet, talk, and connect.

We kick off with Onur Solmaz's opening talk on @openclaw. Then we continue with Mert Koseoglu, Murat Aslan, Ibrahim Okan Sariirmak, Alp Onaran, and a lineup of surprise speakers.

But the real value isn't just on stage: through dedicated networking sessions, you'll get the chance to connect face-to-face with both speakers and fellow builders. This isn't just an event, it's where like-minded people meet.

Special thanks to the Tech Istanbul team for hosting us 🙏
📍 May 6, Istanbul
🎟️ Register via Luma (link in the comments 👇)
Take your place, be a part of this story.
Murat Aslan reposted
Mert Koseoglu — Context Mode
I'll be giving a short talk at @clawcon Istanbul on Wednesday, May 6: "The Other Half of the Context Problem." Five minutes on why your AI coding agent keeps re-sending megabytes of stale tool output every turn, and what to actually do about it.

I ran the numbers on 80 hours of Opus pair programming: $487 saved, 22 hours of re-explaining I got back, and 6.2 MB of raw output that became 124 KB in context. That's a 98% reduction.

Three things I'll cover:
→ Intercept. Five lifecycle hooks pull tool output out of the conversation before it ever lands. 59 KB drops to 1.1 KB.
→ Think in Code. Send code to the data instead of pulling data into the model. 700 KB drops to 3.6 KB.
→ Session Persistence. 26 event categories carry over through compaction, so you stop re-teaching the agent your codebase every twenty minutes.

12.5K stars. 104K npm. 14 platforms. Open source, no telemetry.

Organized by @BuilderMare (@0xVeliUysal, @SolBridgeNW) through @clawcon and @openclaw. Special thanks to @iammurataslan. If you're in Istanbul, come.
Mert Koseoglu — Context Mode @mksglu

80 hours of AI pair programming. Here's what context-mode saved me.

→ $487.20 in API costs. Opus pricing. Real money, not estimates.
→ 22.3 hours of re-explaining context after compaction. Time I got back to ship.
→ 268 sessions resumed from memory. The agent never asked "what were we doing?"
→ 47 preferences auto-learned. "Use TS strict" once, remembered forever.
→ 14,847 events indexed. Searchable across every session, every project.

Without context-mode |████████████████████████████████████████| 6.2 MB
With context-mode    |█░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░| 124 KB

98% of raw data never entered my conversation. That's a 50× longer session. Same context window.

Multiply across a 13-engineer team:
$487 × 13 = $6,331/month saved
22 hours × 13 = 286 hours/month recovered

Open source. Local-first. No telemetry. No account. No SaaS lock-in. github.com/mksglu/context…
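The compression and team math in this thread can be checked directly from the figures it quotes; a quick sketch (numbers taken from the post, rounded the way the post rounds them):

```python
# Figures quoted in the post above.
raw_kb = 6.2 * 1000   # 6.2 MB of raw tool output, expressed in KB
kept_kb = 124         # 124 KB actually entering the context

reduction = 1 - kept_kb / raw_kb
print(f"{reduction:.0%} of raw data kept out of the conversation")  # 98%
print(f"{raw_kb / kept_kb:.0f}x longer session, same window")       # 50x

# Per-engineer savings multiplied over a 13-engineer team.
print(f"${487 * 13}/month saved, {22 * 13} hours/month recovered")  # $6331, 286 hours
```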

Murat Aslan reposted
Veli UYSAL 🦀 @0xVeliUysal
We've wrapped up the Ankara @solana 🇹🇷 @SuperteamTR buildstation! Only for builders 🤙 Are you ready for the next one?
BuilderMare @buildermare

We just wrapped up our 2nd @Solana Buildstation in Ankara with @SuperteamTR 🇹🇷. Across two sessions (Apr 25 & May 2), we brought together local Solana builders to dive into smart contracts, AI integrations, and security, plus hands-on mentoring.

Special thanks to @WhiteMoonDev, who showcased the tool he built for Solana developers during his session. While walking through Solana fundamentals, he also showed how to quickly build Solana programs using AI and how to connect and actually use them with @orquestradev. What a chad. Solid mentor, and an impressive project to back it up.

Special thanks to @iammurataslan for showcasing what context-mode actually does and why it matters. Context-mode keeps AI tools from getting flooded by raw data, retaining only what's relevant in the context and reducing usage by ~98% (built by @mksglu). Seeing it in action across different agents like Claude and OpenClaw made it even clearer how powerful this approach is for building faster, more efficient workflows.

Special thanks to Emrah Urhan (@raxetul) for joining us today and sharing his expertise in embedded systems, low-level hardware/software integration, and modern software development practices.

All part of getting ready for the Solana @colosseum Frontier Hackathon. See you next week. 🤘

Erdem Demirci @_erdemdemirci
@mksglu We'd like 100% compatibility with the Codex Windows app :)
Mert Koseoglu — Context Mode
context-mode v1.0.103 just shipped. 📊 105K+ users · 10.8K+ GitHub stars · 745 forks · 14 platforms

Your AI coding sessions now have analytics. `ctx insight` is a personal dashboard that reads your session history and tells you what's working, what's not, and what to fix.

What it tracks:
→ 23 event categories. Every file edit, git commit, error, subagent delegation, plan approval, blocker, and CLAUDE.md load, captured automatically. No config needed.
→ 37 insight patterns. Not vanity metrics. "Your session ended with 7 errors and zero commits: all effort was lost." "CLAUDE.md loaded → error resolution rate jumped to 85%." "Same file edited 12 times: write a spec first."
→ 4 composite scores (0–100): Productivity, Quality, Delegation, Context Health. One number per dimension. Track weekly.
→ Error Intelligence. Resolution rate, retry-storm detection, P95 latency by tool, top error sources. Not just "you had errors" but "here's which tool fails most and how long it takes to recover."
→ Delegation analytics. Agent completion rate, parallel-burst detection, time saved. "You launched 45 agents, 38 completed, 12 parallel bursts: saved ~76 minutes."
→ CLAUDE.md correlation. The killer insight: sessions with rules loaded show measurably higher error resolution; sessions without let errors pile up. This single metric convinced 3 teams to adopt CLAUDE.md this week.

The numbers from 29 sessions:
Files tracked        391 ██████████████████████████████
Git operations        98 ████████░░░░░░░░░░░░░░░░░░░░░░
Rejected approaches   83 ██████░░░░░░░░░░░░░░░░░░░░░░░░
Errors caught         66 █████░░░░░░░░░░░░░░░░░░░░░░░░░
Latency events        52 ████░░░░░░░░░░░░░░░░░░░░░░░░░░
Decisions tracked     21 ██░░░░░░░░░░░░░░░░░░░░░░░░░░░░
Skills used           15 █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░

1,100 events remembered across 29 sessions, searchable after compact & restart.

Also in this release:
→ Windows hook fixes. `process.execPath` replaces bare `node`: no more MSYS path mangling on D: drives, no more Git Bash PATH failures. All 8 adapters updated.
→ Node v24 compatibility. `ensure-deps` no longer skips the `better-sqlite3` install on modern runtimes. The SIGSEGV-prone probe is skipped; the install isn't.

Every insight runs locally. Your data never leaves your machine. Open source. Free tier. No account required. github.com/mksglu/context…
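The release doesn't show how the metrics are computed. As a rough illustration, an error-resolution rate and a composite score over an event log might look like this; the category names and weights are hypothetical, not context-mode's actual schema:

```python
from collections import Counter

# Hypothetical event log; category names are illustrative only.
events = ["file_edit", "error", "error_resolved", "git_commit",
          "error", "error_resolved", "error", "file_edit", "git_commit"]

counts = Counter(events)
errors = counts["error"]
resolved = counts["error_resolved"]

# Error Intelligence: the fraction of errors that were eventually resolved.
resolution_rate = resolved / errors if errors else 1.0
print(f"error resolution rate: {resolution_rate:.0%}")  # 67%

# A toy 0-100 "Quality" composite blending resolution rate and commit
# activity, with made-up weights, just to show the shape of such a score.
quality = round(100 * (0.7 * resolution_rate + 0.3 * min(counts["git_commit"] / 3, 1.0)))
print(f"quality score: {quality}/100")  # 67/100
```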
Murat Aslan @iammurataslan
@Holydogy @mksglu It also works inside the VS Code Codex extension, set up as shown.
Batuhan @Holydogy
@mksglu Does it help with quota usage when used via the Codex plugin in VS Code?
Mert Koseoglu — Context Mode
225 sessions, 8,337 tool calls. I ran /ctx-insight on my own data and the numbers surprised me.

I read 5.2× more than I write: 1,992 files read, 386 written. I thought I was mostly writing code; it turns out I spend most of my AI time understanding code. Review mode 45% of the time, implementation only 34%.

My context window overflows in just 4% of sessions, which apparently puts me well below the 60%+ most developers hit.

The part I didn't expect: 19 tasks running in parallel across 6 bursts saved me roughly 26 minutes. And my error rate is 2.7%, meaning almost everything lands on the first try.

143 commits in 225 sessions, but most sessions are pure research. The commits come in focused bursts.

All of this was already sitting in a local SQLite database on my machine. Every session writes tool calls, errors, file edits, context overflows. I just never had a way to see it until now.

/ctx-insight to see yours. Nothing leaves your machine. github.com/mksglu/context…
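The ratios in the post follow directly from the raw counts it gives:

```python
# Counts quoted in the post above.
files_read, files_written = 1992, 386
print(f"read/write ratio: {files_read / files_written:.1f}x")  # 5.2x

sessions, tool_calls = 225, 8337
# 2.7% error rate and 4% overflow rate, translated back into counts.
print(f"~{0.027 * tool_calls:.0f} errored calls out of {tool_calls}")           # ~225
print(f"~{0.04 * sessions:.0f} overflowing sessions out of {sessions}")         # ~9
```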
Murat Aslan reposted
Mert Koseoglu — Context Mode
context-mode v1.0.104 just shipped. ⚡ 103K+ users · 12K+ stars · 829 forks · 14 platforms

The performance release. Every session reclaims 1.8s on macOS and 7.5–12.5s on Windows. Plus parallel I/O, a live statusline, and a security layer that stops the model from leaking what it touches.

What's new:
→ Per-tool-call latency cut across all 14 adapters. Biggest win: a 17,000× speedup on a memoized git worktree call that was forking on every ctx_* invocation. Bulk SQLite inserts replaced N-transaction loops.
→ Opt-in concurrency. ctx_batch_execute and ctx_fetch_and_index now accept concurrency: 1–8. Multi-URL research and gh API fan-outs finish 3–5× faster. The default stays 1, so existing callers are unchanged.
→ Statusline. One line in ~/.claude/settings.json and Claude Code shows your savings live: "context-mode ● $21.92 saved this session · 83% efficient · 7h1m."
→ Lifetime stats. ctx_stats now reads across every past session: "5.8K events · 173 sessions · ~$22.45 saved lifetime." No telemetry. All local.
→ Security. Bearer tokens and api_keys in mcp__* tool_input are masked before they reach SQLite. ctx_fetch_and_index blocks AWS/GCP/Azure IMDS endpoints and file:// schemes. The SHELL env override is gated behind a basename allowlist.
→ Concurrency observability. ctx_stats now shows median and max concurrency per tool, telling you whether you're actually using the new feature, not just whether it's available.
→ Platform detection audit. Same 14 adapters, but every entry in PLATFORM_ENV_VARS was verified against each platform's runtime source code. Bare KILO, IDEA_HOME, and JETBRAINS_CLIENT_ID were dropped (no source evidence). Antigravity, Zed, and Pi were promoted.
→ brew upgrade node no longer breaks context-mode. Cache-heal hooks self-heal stale Cellar paths. Windows hooks.json placeholders are normalized on every boot. Cross-session bleed in 6 SessionStart adapters is fixed.

The numbers: 137 files changed, 11,541 lines added, 855 removed, 2,184 tests pass. Your agent should be faster, more visible, and never leak credentials it touches.

Open source. Local-first. No telemetry. github.com/mksglu/context…
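The opt-in `concurrency: 1–8` behavior described above is a bounded fan-out. A generic sketch of that pattern, not context-mode's actual API (`fetch` and `fan_out` are invented for this example):

```python
import asyncio

async def fetch(url: str) -> str:
    # Stand-in for a real network call.
    await asyncio.sleep(0.01)
    return f"indexed {url}"

async def fan_out(urls: list[str], concurrency: int = 1) -> list[str]:
    # A semaphore caps how many fetches run at once, so concurrency=1
    # behaves like the old sequential default and 8 is the ceiling.
    sem = asyncio.Semaphore(max(1, min(concurrency, 8)))

    async def bounded(url: str) -> str:
        async with sem:
            return await fetch(url)

    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(bounded(u) for u in urls))

results = asyncio.run(
    fan_out([f"https://example.com/{i}" for i in range(6)], concurrency=4)
)
print(results[0])  # indexed https://example.com/0
```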
Murat Aslan reposted
BuilderMare @buildermare
BuilderMare @buildermare

Frontier Buildstation Ankara! May 2. Come and build with @SuperteamTR, @turkiyerustcom, and @buildermare this Saturday.
⭐️ Surprise guests
⭐️ Private mentoring sessions to get ready for @colosseum
🤘 Register via Luma 👇

Murat Aslan reposted
GitHubDaily @GitHub_Daily
Recently came across an open-source project, Context Mode, which effectively solves the problem of AI coding tools overrunning their context windows. The core idea is to keep raw data in a sandbox and send only processed results into the context window. Reportedly it can compress 315 KB of raw output down to 5.4 KB, a savings of up to 98%. It also records session state in a local database, so conversations resume seamlessly after compaction. GitHub: github.com/mksglu/context… It supports 14 platforms, including Claude Code, Cursor, Gemini CLI, VS Code Copilot, and other mainstream AI coding tools. It ships 11 built-in language runtimes, plus knowledge-base indexing and smart search, so you can retrieve on demand instead of stuffing everything into the context. All data is processed locally, with no network access or uploads. If your AI coding sessions often start hallucinating halfway through, give this tool a try.
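The core idea described here, keeping raw output in local storage and passing only a compact digest into the context window, can be sketched minimally; `digest` and its summary format are invented for this illustration, not context-mode's actual pipeline:

```python
def digest(raw: str, max_chars: int = 200) -> str:
    # The raw tool output stays outside the conversation; only this
    # short summary would enter the model's context window.
    lines = raw.splitlines()
    head = "\n".join(lines[:3])
    summary = f"{len(raw)} chars, {len(lines)} lines; first lines:\n{head}"
    return summary[:max_chars]

# A large simulated tool result.
raw_output = "\n".join(f"row {i}: ..." for i in range(10_000))
context_payload = digest(raw_output)

print(len(raw_output), "->", len(context_payload))  # big number -> at most 200
```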
Murat Aslan reposted
Mert Koseoglu — Context Mode
116.5K+ users · 11.7K+ GitHub stars · 793 forks · 14 platforms

Thanks a lot for sharing, really appreciate it! 🫡

To friends in the Chinese developer community: I look forward to your support and feedback. I genuinely value your input and am grateful for your support so far. There is currently a PR for proxy support, but I'm not sure whether it's really necessary or whether the implementation is correct. If this is a real need in Chinese network environments, please weigh in. Thanks for everyone's help. github.com/mksglu/context…
GitHubDaily @GitHub_Daily