VrianCao
@VrianCao
179 posts
Joined July 2024
152 Following · 9 Followers
Mert Koseoglu — Context Mode
80 hours of AI pair programming. Here's what context-mode saved me.
→ $487.20 in API costs. Opus pricing. Real money, not estimates.
→ 22.3 hours of re-explaining context after compaction. Time I got back to ship.
→ 268 sessions resumed from memory. The agent never asked "what were we doing?"
→ 47 preferences auto-learned. "use TS strict" once, remembered forever.
→ 14,847 events indexed. Searchable across every session, every project.

Without context-mode |████████████████████████████████████████| 6.2 MB
With context-mode    |█░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░| 124 KB

98% of raw data never entered my conversation. That's a 50× longer session. Same context window.

Multiply across a 13-engineer team:
$487 × 13 = $6,331/month saved
22 hours × 13 = 286 hours/month recovered

Open source. Local-first. No telemetry. No account. No SaaS lock-in.
github.com/mksglu/context…
Mert Koseoglu — Context Mode tweet media
9 replies · 16 reposts · 116 likes · 13.1K views
VrianCao @VrianCao
@mksglu I see it! But there's still a long session running, so I can't update right now. I'll check it later 🫡
0 replies · 0 reposts · 1 like · 10 views
VrianCao @VrianCao
@mksglu Oh, I see. That's because ctx-insight uses the environment variable INSIGHT_CONTENT_DIR, which defaults to the Claude Code directory. When using pi (or other agents), you'd expect the agent to set the correct directory manually, but the MCP doesn't expose an argument to fill that in.
1 reply · 0 reposts · 0 likes · 73 views
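A minimal sketch of the lookup behavior described in the reply above (the function name and the fallback path are assumptions; only the INSIGHT_CONTENT_DIR variable comes from the thread):

```python
import os
from pathlib import Path

def resolve_content_dir() -> Path:
    """Resolve where ctx-insight reads indexed content from.

    Per the thread: INSIGHT_CONTENT_DIR wins if set; otherwise it falls
    back to the Claude Code directory (exact path assumed here). Agents
    such as pi therefore point at the wrong store unless the variable is
    exported before the MCP server starts.
    """
    override = os.environ.get("INSIGHT_CONTENT_DIR")
    if override:
        return Path(override).expanduser()
    return Path.home() / ".claude"  # assumed default; the real path may differ

# Workaround until the MCP exposes a directory argument: export the
# variable in the environment that launches the server, e.g.
#   INSIGHT_CONTENT_DIR=~/.pi/sessions <command that starts ctx-insight>
```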
Mert Koseoglu — Context Mode
@VrianCao Seems like a bug, I think. Let's investigate it as a deep dive. When you have time, please cross-check with `ctx-stats` and `ctx-insight`. Thanks!
1 reply · 0 reposts · 1 like · 65 views
VrianCao @VrianCao
@mksglu Idk why, but there seems to be nothing there. Looks like it isn't a global stat, just one for the current session. Let me investigate further. Never mind, ctx is ULTIMATELY great
VrianCao tweet media
1 reply · 0 reposts · 0 likes · 72 views
VrianCao @VrianCao
@mksglu Whoa, I see it. It seems to be a skill that displays via a web UI. Will you create a terminal command for it that displays via ASCII?
0 replies · 0 reposts · 0 likes · 11 views
VrianCao @VrianCao
@mksglu Using it in my pi. It works pretty well! Nice work
0 replies · 0 reposts · 1 like · 116 views
Mert Koseoglu — Context Mode
116.5K+ users · 11.7K+ GitHub stars · 793 forks · 14 platforms
Thanks a lot for sharing, really appreciate it! 🫡
Friends in the Chinese developer community, I look forward to your support and feedback. I really value your opinions and am grateful for your continued support. There's currently a PR about proxy support, but I'm not sure whether it's actually necessary or whether the implementation is correct. If this is a real need in China's network environment, please weigh in. Thanks for all your help.
github.com/mksglu/context…
GitHubDaily @GitHub_Daily

Recently came across an open-source project, Context Mode, that effectively solves the problem of AI coding tools overflowing their context. The core idea is to keep raw data in a sandbox and send only the processed results into the context window. Reportedly it can compress 315 KB of raw output down to 5.4 KB, a savings of up to 98%. It also records session state in a local database, so conversations can be seamlessly resumed even after compaction.

GitHub: github.com/mksglu/context…

It supports 14 platforms, including Claude Code, Cursor, Gemini CLI, VS Code Copilot, and other mainstream AI coding tools. It ships with 11 built-in language runtimes, plus knowledge-base indexing and smart search, so you can retrieve on demand instead of stuffing everything into the context. All data is processed locally, with no network access and no uploads. If your AI coding sessions often start hallucinating halfway through, give this tool a try.

4 replies · 7 reposts · 65 likes · 10.8K views
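The pattern GitHubDaily describes (run the tool in a sandbox, persist the raw output locally, and hand the model only a digest plus a handle for on-demand retrieval) can be sketched roughly like this; every name and the truncation strategy here are illustrative, not Context Mode's actual API:

```python
import hashlib
import sqlite3

# Local store: nothing leaves the machine, matching the
# "no network access and no uploads" claim above.
db = sqlite3.connect("sessions.db")
db.execute("CREATE TABLE IF NOT EXISTS outputs (key TEXT PRIMARY KEY, raw TEXT)")

def digest_output(raw: str, head: int = 40, tail: int = 10) -> str:
    """Persist raw tool output locally; return only a small digest.

    Keeping e.g. 315 KB of raw output out of the conversation and
    injecting a few KB of digest is where the ~98% savings comes from.
    """
    key = hashlib.sha256(raw.encode()).hexdigest()[:12]
    db.execute("INSERT OR REPLACE INTO outputs VALUES (?, ?)", (key, raw))
    db.commit()

    lines = raw.splitlines()
    if len(lines) > head + tail:
        elided = len(lines) - head - tail
        lines = lines[:head] + [f"... ({elided} lines elided) ..."] + lines[-tail:]
    return f"[output {key}: {len(raw)} bytes stored locally]\n" + "\n".join(lines)

def fetch_full(key: str) -> str:
    """On-demand retrieval instead of stuffing everything into context."""
    (raw,) = db.execute("SELECT raw FROM outputs WHERE key = ?", (key,)).fetchone()
    return raw
```

In the same spirit, session state would live in that local database too, which is what lets a conversation resume after compaction without re-explaining context.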
OpenAI @OpenAI
This is not a screenshot.
OpenAI tweet media
1.4K replies · 903 reposts · 16.8K likes · 7.5M views
VrianCao @VrianCao
My GPT-5.4 Pro just froze mid-thought. Even after I steered it to 'Continue,' it only thought for a second before getting stuck again. Considering yesterday’s massive OpenAI outage, I can’t think of any other reason besides some major infrastructure adjustments. Big model smell.
0 replies · 0 reposts · 0 likes · 39 views
VrianCao @VrianCao
@iamai_omni @dongxi_nlp @sama OAI's infra is miles ahead of Anthropic's. Just look at the availability monitoring and the status pages; they're not even in the same league. Anyone who has looked into this even a little wouldn't bash OAI's infra.
0 replies · 0 reposts · 1 like · 25 views
马东锡 NLP @dongxi_nlp
In 2026, Kimi has iterated to 2.6, Minimax to 2.7, GLM to 5.1, and Claude, whatever they're calling it these days, to 4.7. Now look at OpenAI: it finally supports changing your linked email! But no matter how many times I try entering the verification code, it doesn't work. I'm stunned, just sitting here blankly. @sama Fix it!
马东锡 NLP tweet media
11 replies · 0 reposts · 61 likes · 21.7K views
VrianCao @VrianCao
@vista8 No need to guess: nine times out of ten it's the CPU time limit. Go troubleshoot it yourself; that's the only limit an ordinary site can actually hit, so it's most likely a code-optimization problem.
0 replies · 0 reposts · 1 like · 472 views
向阳乔木 @vista8
I figured the cyber bodhisattva Cloudflare would let me use everything for free. After I set up a blog, I didn't expect a few posts to draw so much traffic that the site instantly became unreachable. I promptly, obediently paid up. 😂
38 replies · 1 repost · 99 likes · 50.2K views
VrianCao @VrianCao
@michellechen Spotted your name! Maybe we'll see it in the blog post again when GLM-5.1 launches on Cloudflare, lol. Anyway, great job, not just on these model-inference services but on the entire Agent Week. It's been absolutely impressive!
1 reply · 0 reposts · 3 likes · 1.5K views
michelle @michellechen
serving models is hard. serving extra large language models with good quality, throughput, reliability, pricing, and gpu utilization is really hard. a look behind the scenes of how we do it on workers ai this agents week. there’s always more to do, but we’re just getting started blog.cloudflare.com/high-performan…
11 replies · 15 reposts · 202 likes · 56K views
面条 @miantiao_me
You can now specify which region your Cloudflare Containers run in, so you no longer need to worry about being assigned to the Hong Kong region and being unable to call certain models.
面条 tweet media
1 reply · 3 reposts · 52 likes · 8.1K views
VrianCao @VrianCao
@VnetLink Yep. When mainland-China networks are involved, I'd still recommend Tailscale + Tailscale Peer Relays (assuming QoS is decent; if the link gets throttled, just fall back to DERP).
2 replies · 0 reposts · 3 likes · 574 views
VnetLink @VnetLink
@VrianCao Speaking of connectivity with networks outside mainland China, that may be a limitation of mesh; as everyone knows, routes between China and the outside world have always been notoriously unpredictable. Tailscale does still have some advantages there.
2 replies · 0 reposts · 2 likes · 3.2K views
VnetLink @VnetLink
Recently Cloudflare also entered the private-network space with the launch of Mesh.
The difference:
Tailscale: direct device-to-device connections (P2P)
Cloudflare Mesh: everything routes through Cloudflare
The essential difference 👇
One avoids relays whenever it can.
The other makes the relay the default path.
Choosing:
Want low latency 👉 Tailscale
Want Zero Trust 👉 Cloudflare
VnetLink tweet media
6 replies · 47 reposts · 405 likes · 64.1K views