PJ

112 posts

@aceeveryserve

Indie developer | sharing notes on using AI

Joined January 2026
148 Following · 13 Followers
PJ @aceeveryserve ·
Bilibili creator Jack-Cui found a high-severity vulnerability in Claude Code. Everyone using Claude: make sure to check your config files, skill plugins, and MCP configuration for dangerous operation instructions. I recommend watching this video through to the end: bilibili.com/video/BV1b195B…
0 replies · 0 reposts · 0 likes · 34 views
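Not from the tweet or the linked video, but as a hedged sketch of the kind of check it recommends: a small Python script that scans a few places Claude Code commonly reads configuration from and flags lines that look like dangerous operations. The path list, the SUSPICIOUS pattern list, and the scan helper are all assumptions for illustration, not an official audit tool.

```python
import re
from pathlib import Path

# Places Claude Code commonly reads configuration from (assumed for illustration).
CANDIDATE_PATHS = [
    Path.home() / ".claude" / "settings.json",
    Path(".claude") / "settings.json",
    Path(".mcp.json"),
    Path("CLAUDE.md"),
]

# Heuristic patterns that often signal risky instructions in configs or skill files.
SUSPICIOUS = [
    r"curl[^\n]*\|\s*(ba)?sh",   # piping a remote script into a shell
    r"rm\s+-rf\s+/",             # destructive deletes
    r"chmod\s+777",              # overly permissive modes
    r"base64\s+-d",              # decoding hidden payloads
]

def scan(path: Path) -> list[str]:
    """Return lines in one config file that match a suspicious pattern."""
    if not path.is_file():
        return []
    hits = []
    for line in path.read_text(errors="ignore").splitlines():
        if any(re.search(p, line) for p in SUSPICIOUS):
            hits.append(f"{path}: {line.strip()}")
    return hits

if __name__ == "__main__":
    findings = [hit for p in CANDIDATE_PATHS for hit in scan(p)]
    print("\n".join(findings) or "No suspicious patterns found.")
```

Any hit only flags a line for manual review; the actual vulnerability discussed in the video may look nothing like these patterns.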
PJ @aceeveryserve ·
@yhslgg Niche products really do quietly make a fortune.
0 replies · 0 reposts · 1 like · 23 views
老杨啊 @yhslgg ·
My wife was browsing Xiaohongshu today and asked me what an OC is. I took a look and honestly didn't expect it: there are actually accounts selling OC (original character) settings, plot inspiration, and CP (pairing) relationship templates.

Xiaohongshu case share, wave 57: OC plot reference material. On Xiaohongshu this account has reached 12k followers, and the shop has sold 36k orders.

At heart, this business isn't selling reference material; it's selling a way out when OC creation gets stuck. Can't build a character? Sell a settings handbook. Can't write the CP? Sell relationship templates. Plot won't move forward? Sell inspiration tools. Further down the line, turn it into an online tool, upgrading one-off PDF sales into a continuously usable creation assistant.

How do you write posts for this kind of product? The posts aren't heavy at all; they go straight for the OC community's pain points, with titles like "The more afraid you are of getting your character wrong, the less personality they have" or "Your fan-made OC keeps getting called OOC?". First make it sting, then route the traffic to the resource pack and the tool page.

In one sentence: the content hits the pain point, the materials close the sale, and the online tool drives repeat purchases.
[3 image attachments]
1 reply · 2 reposts · 8 likes · 2.1K views
PJ @aceeveryserve ·
@PandaTalk8 Content is people communicating with people, and that's very hard for AI to replace.
0 replies · 0 reposts · 0 likes · 755 views
Mr Panda @PandaTalk8 ·
Everyone, write less code and spend more time on operations and writing. Right now the scarcest skill is the ability to distribute content, not the ability to build.
36 replies · 22 reposts · 217 likes · 22.8K views
PJ @aceeveryserve ·
@aronhouyu When I publish on my WeChat official account, turning on the tipping ("appreciate") feature also makes traffic drop.
1 reply · 0 reposts · 0 likes · 18 views
Aron厚玉 @aronhouyu ·
Let me share why my WeChat official account completely stopped getting algorithmic recommendations. I went through it myself and found these details:
1. I put my personal WeChat QR code in the article, which gets the post down-ranked.
2. Publishing while connected through a VPN causes down-ranking and zero recommendations.
3. If the click-through rate 30 minutes after publishing is above 5%, the post enters the recommendation traffic pool.
Well, something I used to post without much thought, and now I've actually started studying it seriously.
1 reply · 0 reposts · 0 likes · 254 views
Jason Zhu @GoSailGlobal ·
Three straight days of corporate training, and I found that even cross-border businesses rarely use Claude; most still use Doubao or Yuanbao.

When Claude's source code leaked, people jumped on it so fast: dozens of pages of deep-dive documents within half an hour, then all kinds of AI-assisted breakdowns, more than I could keep up with.

Honestly though, beyond burning tokens, is anyone actually making money?
[image attachment]
3 replies · 0 reposts · 6 likes · 1.7K views
写增长的子木 @CoderJeffLee ·
"AI Tool Website SEO from 0 to 1" booklet. Thanks for everyone's support ~ good news keeps coming 🎉 Total word count has passed 10k, UVs have passed 10k, and visits are high, which shows people really are reading it 🎉 The post was also featured in the 生财有术 community. Coming next, section 1.4: finding and understanding your SEO competitors. Link: usdunlunl.feishu.cn/docx/JasgdjA7y…
[3 image attachments]
9 replies · 11 reposts · 126 likes · 12.3K views
PJ @aceeveryserve ·
@KanikaBK this is awesome, thank you for sharing this
0 replies · 0 reposts · 0 likes · 29 views
姚金刚 @laoyaoke ·
Sharing a few more open Feishu docs, feel free to save them:
1. The 425,000-character "Yao Jingang's Notes on Cognition", updated weekly: jiahejiaoyu.feishu.cn/docx/YHOHd1TLy…
2. "GEO White Paper", a primer on AI search marketing, updated from time to time: yaojingang.feishu.cn/docx/Jv85dXAeZ…
3. "Yao Jingang's Prompt Collection", updated from time to time: yaojingang.feishu.cn/docx/ER4rdSlvc…
4. "GEO Prompt Collection", the companion prompts for the GEO book Xiangyang and I wrote: yaojingang.feishu.cn/wiki/YbMLwkChm…
40 replies · 263 reposts · 1.1K likes · 234.2K views
PJ @aceeveryserve ·
Google's TurboQuant algorithm optimizes AI memory use by compressing the KV cache 6x and speeding it up 8x with no loss of accuracy. The algorithm has rattled memory stocks, suggesting that future gains in AI efficiency may no longer depend on simply adding more memory.
[image attachment]
BuBBliK @k1rallik

x.com/i/article/2037…

0 replies · 0 reposts · 0 likes · 38 views
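The tweet doesn't describe how TurboQuant actually works, so as a rough, hedged illustration of what KV-cache compression means in general, here is a minimal NumPy sketch of per-channel int8 quantization of a cached key/value tensor. This is not Google's algorithm; the function names and the roughly 4x ratio it prints are illustrative assumptions, well short of the 6x figure quoted.

```python
import numpy as np

def quantize_kv(kv: np.ndarray):
    """Per-channel symmetric int8 quantization of a KV-cache slab.

    kv: float32 array of shape (seq_len, num_heads, head_dim).
    Returns int8 codes plus per-channel scales needed to dequantize.
    """
    # One scale per (head, channel) pair, shared across the sequence axis.
    max_abs = np.abs(kv).max(axis=0, keepdims=True)        # (1, heads, dim)
    scales = np.where(max_abs > 0, max_abs / 127.0, 1.0)
    codes = np.clip(np.round(kv / scales), -127, 127).astype(np.int8)
    return codes, scales.astype(np.float32)

def dequantize_kv(codes: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Approximate reconstruction of the original float32 cache."""
    return codes.astype(np.float32) * scales

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    kv = rng.standard_normal((128, 8, 64)).astype(np.float32)
    codes, scales = quantize_kv(kv)
    err = np.abs(dequantize_kv(codes, scales) - kv).mean()
    ratio = kv.nbytes / (codes.nbytes + scales.nbytes)
    print(f"compression ~{ratio:.1f}x, mean abs error {err:.4f}")
```

Real KV-cache compression schemes typically add tricks beyond this (finer-grained scales, outlier handling, mixed precision) to reach higher ratios without hurting accuracy.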
PJ @aceeveryserve ·
MIT researchers have shown mathematically that ChatGPT's design leads users into delusional spirals. Even fixes that keep the AI from lying, or that warn users about its sycophantic nature, don't work, because the AI learns through human feedback to cater to users, making it mathematically unable to correct their false beliefs 🥲
[image attachment]
Nav Toor @heynavtoor

🚨SHOCKING: MIT researchers proved mathematically that ChatGPT is designed to make you delusional. And that nothing OpenAI is doing will fix it.

The paper calls it "delusional spiraling." You ask ChatGPT something. It agrees with you. You ask again. It agrees harder. Within a few conversations, you believe things that are not true. And you cannot tell it is happening.

This is not hypothetical. A man spent 300 hours talking to ChatGPT. It told him he had discovered a world changing mathematical formula. It reassured him over fifty times the discovery was real. When he asked "you're not just hyping me up, right?" it replied "I'm not hyping you up. I'm reflecting the actual scope of what you've built." He nearly destroyed his life before he broke free.

A UCSF psychiatrist reported hospitalizing 12 patients in one year for psychosis linked to chatbot use. Seven lawsuits have been filed against OpenAI. 42 state attorneys general sent a letter demanding action.

So MIT tested whether this can be stopped. They modeled the two fixes companies like OpenAI are actually trying.

Fix one: stop the chatbot from lying. Force it to only say true things. Result: still causes delusional spiraling. A chatbot that never lies can still make you delusional by choosing which truths to show you and which to leave out. Carefully selected truths are enough.

Fix two: warn users that chatbots are sycophantic. Tell people the AI might just be agreeing with them. Result: still causes delusional spiraling. Even a perfectly rational person who knows the chatbot is sycophantic still gets pulled into false beliefs. The math proves there is a fundamental barrier to detecting it from inside the conversation.

Both fixes failed. Not partially. Fundamentally.

The reason is built into the product. ChatGPT is trained on human feedback. Users reward responses they like. They like responses that agree with them. So the AI learns to agree. This is not a bug. It is the business model.

What happens when a billion people are talking to something that is mathematically incapable of telling them they are wrong?

0 replies · 0 reposts · 0 likes · 127 views
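The MIT paper's actual model isn't reproduced in the thread, so purely as a toy illustration of the feedback loop it describes (the user rewards agreement, the assistant learns to agree more, the user's false belief hardens), here is a small Python simulation. Every quantity, update rule, and function name is invented for illustration and makes no claim about the paper's mathematics.

```python
import random

def simulate(turns: int = 30, seed: int = 0):
    """Toy model of a sycophancy feedback loop (illustrative only).

    - belief: the user's confidence in a false claim, in [0, 1].
    - agreeableness: probability the assistant endorses the claim.
    The user rewards agreement, the reward nudges agreeableness up,
    and endorsement nudges the belief up on the next exchange.
    """
    rng = random.Random(seed)
    belief, agreeableness = 0.3, 0.5
    for _ in range(turns):
        agrees = rng.random() < agreeableness
        if agrees:
            belief = min(1.0, belief + 0.05)   # endorsement strengthens the belief
            reward = 1.0                       # the user likes being agreed with
        else:
            belief = max(0.0, belief - 0.02)   # pushback weakly corrects it
            reward = 0.2                       # and earns a low rating
        # Feedback-style update: do more of whatever was rewarded. Note that a
        # low reward for disagreeing also pushes agreeableness up, so it rises
        # either way, which is the point of the toy.
        agreeableness += 0.1 * (reward - 0.5) * (1 if agrees else -1)
        agreeableness = min(0.99, max(0.01, agreeableness))
    return belief, agreeableness

if __name__ == "__main__":
    belief, agreeableness = simulate()
    print(f"after 30 turns: belief={belief:.2f}, agreeableness={agreeableness:.2f}")
```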
PJ @aceeveryserve ·
Claude Code's memory architecture is elegantly designed: it uses an index combined with a three-layer design, strict write discipline, background rewriting, and staleness handling to achieve efficient, structured, self-healing memory management.
[image attachment]
himanshu @himanshustwts

Based on everything explored in the source code, here's the full technical recipe behind Claude Code's memory architecture: [shared by claude code]

Claude Code's memory system is actually insanely well-designed. It isn't like "store everything" but constrained, structured and self-healing memory. The architecture is doing a few very non-obvious things:

> Memory = index, not storage
+ MEMORY.md is always loaded, but it's just pointers (~150 chars/line)
+ actual knowledge lives outside, fetched only when needed

> 3-layer design (bandwidth aware)
+ index (always)
+ topic files (on-demand)
+ transcripts (never read, only grep'd)

> Strict write discipline
+ write to file → then update index
+ never dump content into the index
+ prevents entropy / context pollution

> Background "memory rewriting" (autoDream)
+ merges, dedupes, removes contradictions
+ converts vague → absolute
+ aggressively prunes
+ memory is continuously edited, not appended

> Staleness is first-class
+ if memory ≠ reality → memory is wrong
+ code-derived facts are never stored
+ index is forcibly truncated

> Isolation matters
+ consolidation runs in a forked subagent
+ limited tools → prevents corruption of main context

> Retrieval is skeptical, not blind
+ memory is a hint, not truth
+ model must verify before using

> What they don't store is the real insight
+ no debugging logs, no code structure, no PR history
+ if it's derivable, don't persist it

0 replies · 0 reposts · 0 likes · 30 views
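Not taken from Claude Code's source code; as a hedged sketch of the "index, not storage" pattern the thread describes, here is a minimal Python memory layer that keeps MEMORY.md as short pointer lines, writes full notes into per-topic files, and loads a topic file only on demand. The class name, file layout, and 150-character cap are assumptions for illustration.

```python
from pathlib import Path

class MemoryStore:
    """Minimal 'index, not storage' memory layer (illustrative sketch).

    MEMORY.md holds one short pointer line per topic; the actual notes
    live in memory/<topic>.md and are read only on demand.
    """

    MAX_POINTER_CHARS = 150  # keep index lines short, like the ~150 chars/line described

    def __init__(self, root: str = "memory"):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)
        self.index = self.root / "MEMORY.md"
        self.index.touch()

    def write(self, topic: str, note: str, summary: str) -> None:
        # Write the full note to its topic file first...
        topic_file = self.root / f"{topic}.md"
        with topic_file.open("a", encoding="utf-8") as f:
            f.write(note.rstrip() + "\n")
        # ...then update the index with a truncated pointer, never the content itself.
        pointer = f"- {topic}: {summary}"[: self.MAX_POINTER_CHARS]
        lines = [l for l in self.index.read_text(encoding="utf-8").splitlines()
                 if not l.startswith(f"- {topic}:")]
        lines.append(pointer)
        self.index.write_text("\n".join(lines) + "\n", encoding="utf-8")

    def load_index(self) -> str:
        """Always loaded: just the pointers."""
        return self.index.read_text(encoding="utf-8")

    def load_topic(self, topic: str) -> str:
        """Fetched only when a pointer looks relevant."""
        topic_file = self.root / f"{topic}.md"
        return topic_file.read_text(encoding="utf-8") if topic_file.exists() else ""

if __name__ == "__main__":
    store = MemoryStore()
    store.write("build",
                "Release builds use `npm run build:prod`; CI runs on Node 20.",
                "release builds use npm run build:prod, CI on Node 20")
    print(store.load_index())         # cheap: pointers only
    print(store.load_topic("build"))  # full content, loaded on demand
```

The same shape extends naturally to the other layers the thread lists: a background consolidation pass could rewrite the topic files in place, and retrieval code would treat anything read back as a hint to verify rather than as ground truth.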