Gameisforever

7.8K posts


@xboxvn31

Joined February 2016
1.3K Following · 297 Followers
Gameisforever retweeted
沐阳@yyyole·
Which AI for which scenario? I've roughly mapped out my stack! I'd also like to see yours, or maybe get some better recommendations?
- Daily high-frequency: ChatGPT 5.5, Claude, Doubao
- Desktop apps: Codex, Claude
- Light coding: Codex, Claude Code
- Local models: none for now
- Copywriting: Claude
- Video: Seedance2.0, Kling3, Veo
- Images: GPT image-2, Gemini (Nano Banana Pro/2)
- Image editing: GPT image-2
- Video analysis: Gemini 3.1
- TTS: Gemini 3.1 Flash TTS
- Workflows: Coze
- Agent: OpenClaw
What I pay for:
- Annual subscription (originally because of Banana Pro): Gemini Pro
- Monthly renewals (must-haves I can't drop for now): GPT Plus, Claude Pro
- Pay-as-you-go: Coze, Claude Max, Seedance2.0 (third party)
Gameisforever retweeted
Aurora Martel@AuroraMar1eL·
BREAKING: Anthropic just released a study showing which jobs its own AI is already replacing—right now. And the workers most at risk aren’t who anyone expected: they’re older, more educated, and higher paid. They earn 47% more than average—and they’re nearly four times more likely to hold a graduate degree than workers AI isn’t touching.

The case is simple. Anthropic built a new metric called “observed exposure”—not what AI might do in theory, but what it’s actually doing today on the job—measured across millions of real Claude conversations from enterprise users. For computer and math workers, AI could theoretically handle 94% of their tasks. Today, it’s doing 33%. In office and administrative roles, the ceiling is 90%, and current usage is 40%. The gap between what AI can do and what it’s actually doing is massive—and the researchers don’t mince words about what happens next: as capability improves and adoption spreads, the red area expands until it swallows the blue.

What makes the paper unsettling is the demographic twist. The most AI-exposed workers earn 47% more, on average, than the least exposed. They’re more likely to be women. More likely to be college educated. This isn’t a story about warehouse floors or long-haul routes. It’s about lawyers, financial analysts, market researchers, and software developers—the very people who were told their education would protect them.

Computer programmers show the highest measured AI exposure: 74.5%. Customer service reps: 70.1%. Data entry: 67.1%. Medical records: 66.7%. Marketing and market research: 64.8%. These aren’t forecasts. They’re measurements of work already being done on AI platforms today.

Then there’s the pipeline problem—still not getting nearly enough attention. Anthropic researchers found a 14% drop in the job‑finding rate for 22–25‑year‑olds in highly exposed occupations since ChatGPT launched. No comparable effect for workers over 25. Entry-level roles were never “just jobs.” They were the apprenticeship layer: where junior analysts became senior analysts, where junior lawyers learned how arguments actually

PDF: anthropic.com/research/labor…
Gameisforever retweeted
New Tech@Tech5353·
120 AI Tools powering the next generation of work.
1. Ideas: YOU, Claude, ChatGPT, Perplexity, Bing Chat
2. Presentation: Prezi, Pitch, PopAi, Slides AI, Slidebean
3. Website: Dora, Wegic, 10Web, Framer, Durable
4. Writing: Rytr, Jasper, Copy AI, Textblaze, Writesonic
5. AI Models: RenderNet, Glambase App, Luma AI, Sora (OpenAI), Leonardo AI
6. Meeting: Tldv, Krisp, Otter, Avoma, Fireflies
7. Chatbots: Poe, Claude, Gemini, ChatGPT, HuggingChat
8. Automation: ClickUp, Drift, Outreach, Emplifi, Phrasee
9. UI/UX: Uizard, Visily, Khroma, Galileo AI, VisualEyes
10. Image: Stylar, Freepik, Phygital+, StockIMG, Bing Create
11. Video: Pictory, HeyGen, Nullface, Decohere, Synthesia
12. Design: Looka, Clipdrop, Autodraw, Vance AI, Designs AI
13. Marketing: AdCopy, Predis AI, Howler AI, Bardeen AI, AdCreative
14. Twitter: Typefully, Postwise, Metricool, Tribescaler, TweetHunter
Follow @Tech5353 for curated AI content you’ll actually use.
Gameisforever retweeted
Dr. Moyu 摸鱼局长🕵️@Jason23818126·
A must-read for Codex beginners! A hand-holding Chinese tutorial that gets you started in 5 minutes. Lots of people want to use Codex, but installation and configuration, the English docs, and AGENTS.md are often a headache. Today I'm sharing a detailed Chinese-language resource put together specifically for beginners. It covers:
- A one-click install and configuration guide for networks inside mainland China
- Step-by-step illustrated tutorials for every form of Codex: the app, the CLI, the Desktop App, and the VS Code extension
- How to write AGENTS.md templates, plus hands-on MCP Server configuration
- Skills use cases and efficient-workflow tips
- Common troubleshooting and optimization advice
The material is organized around clear steps and screenshots, so everyone from complete beginners to advanced users can follow along quickly. It's especially suited to newcomers getting started, and handy for experienced users filling in gaps.
Link: github.com/xianyu110/gpt-…
If you're learning or using Codex, bookmark it for easy reference later. Feel free to share your experience in the comments so we can swap notes~
Dr. Moyu 摸鱼局长🕵️@Jason23818126

The most approachable Codex crash course on the internet is here! In just 38 minutes it takes Codex from beginner to advanced, clearly and simply: installation and configuration, voice interaction, pulling code from GitHub, plugin injection, MCP connections, automated testing, all the way to building a complete workflow, demonstrated entirely in Chinese. Complete beginners can basically get going on their own after watching. Whether you're starting from zero or just want to work faster, it will help you ramp up quickly and avoid detours. Worth bookmarking; it works even better if you practice along as you watch~

Gameisforever retweeted
د.محمد مشبب القحطاني
During the PhD 🎓, one of the hardest and most stressful things a student can go through is starting the writing-up year with a lot of lab work still unfinished. Make sure you avoid that, because writing needs its own atmosphere of thinking and focus to produce a strong thesis. Write as you go; postponing it turns the PhD journey into an exhausting race at the end. Best of luck to everyone.
Gameisforever retweeted
vmiss@vmiss33·
100% human generated. Includes what I use Hermes agent for (since I've seen a lot of people wondering what to do with this thing), and what models/providers I use to keep things cheap. @NousResearch
vmiss@vmiss33

x.com/i/article/2050…

Gameisforever retweeted
Wefaq Ahmad@WefaqAhmad1·
I turned Claude Opus 4.7 into my personal writing partner. The speed. The clarity. The output. Unreal. Here are 10 powerful prompts to help you write anything faster — and better 👇
Gameisforever@xboxvn31·
@0x_zenya Any guy who lands one like this and she's up for a threesome, that'd be something else, wouldn't it.
Hai Lúa@0x_zenya·
A wife let slip that she'd been cheating with her husband's cousin; can this still be saved, guys 🤣🤣🤣 If she's telling it this candidly it probably happens all the time; who knows whether the husband will ever hear this confession.
Gameisforever retweeted
Scarlett claira@AItechscarlett·
🚨BREAKING: You can now run Claude Code for FREE. No API costs. No rate limits. 100% local on your machine. Here's how to run Claude Code locally (100% free & fully private):
Gameisforever retweeted
The Whizz AI@TheWhizzAI·
🚨BREAKING: The book you have been postponing for 3 years can be finished in 48 hours. The only thing that was stopping you was not knowing these 9 Claude prompts: (Bookmark before they realize)
Gameisforever retweeted
د.مها@res_pian3·
#النشر_العلمي Journals indexed in Scopus that publish research with no publication fees (free publication journals indexed in Scopus). Journal list: journalsearches.com/free-publishin… When you open the link and the page loads, you'll find the journals on the left-hand side of the page, arranged by subject. scopus.com/sources
Gameisforever retweeted
Meshal Alzakari | مشعل الزكري
A valuable article showing that constructing research questions is a creative process that combines intellectual curiosity with self-criticism.
Gameisforever retweeted
爱丽丝呀!@BTCqzy1·
Close the AI information gap. 99% of people scroll their feeds every day thinking they're getting informed, when really they're just being fed junk by the algorithm. The people who actually know how to use AI started using tools to monitor trends across the whole web automatically a long time ago.
Sharing a tool I've been using recently: TrendRadar (50K stars on GitHub 🌟). It's essentially your personal AI intelligence system:
- Automatically scans trending topics across the web (Zhihu, Douyin, Bilibili, Weibo, Xiaohongshu, RSS, and more)
- Describe your area of interest in one sentence and the AI filters out the high-value content
- Automatically generates translated, in-depth analysis briefs and pushes them straight to WeChat / Telegram / Feishu / DingTalk / Bark, etc.
- Supports MCP, so you can plug it straight into an AI for sentiment analysis and trend prediction
- Deploys with Docker in 30 seconds, keeps data local, privacy maxed out
GitHub: github.com/sansan0/TrendR…
In the information-gap era, the scariest thing isn't lacking ability; it's other people knowing the trend 24 hours before you do. Good for anyone doing self-media / investing / e-commerce / AI / indie development. Worth a look if you're interested~ dyor
Gameisforever retweeted
Mushtaq Bilal, PhD@MushtaqBilalPhD·
10 simple tips to help you build an academic habit: 1. Read academic prose every day even if it's for 10-15 min. Read slowly. Pay attention to how an argument gets constructed through prose. Don't ignore footnotes.
Gameisforever retweeted
Ahmad Awais@MrAhmadAwais·
how did we make deepseek outperform opus 4.7?

i've been thinking about why "open model bad at tool calling" is almost always a harness problem, not a model problem.

context: spent two days looking at billions of tokens in @CommandCodeAI (tb open source ai cli) using deepseek. i ended up writing a tool-input repair layer. the trigger was watching deepseek-flash fail on the simplest /review run, every shellCommand and readFile call bouncing back with a raw zod issues blob, the model unable to recover because the error wasn't in a form it could read. by the end deepseek v4 pro was beating opus 4.7 6/10 times on our internal evals.

a few things i learned that feel general:

1/ the failure modes aren't random, they're a small finite compositional set.

across deepseek-flash, deepseek v4 pro, glm, qwen, the same four mistakes repeat almost exactly:
- sending `null` for an optional field instead of omitting it
- emitting `["a","b"]` as a json *string* instead of an actual array
- wrapping a single arg in `{}` where the schema expected an array (an "empty placeholder")
- passing a bare string where an array was expected (`"foo"` instead of `["foo"]`)

four repairs, ~30-100 lines each, ordered carefully (json-array-parse must run before bare-string-wrap or `'["a","b"]'` becomes `['["a","b"]']`). that is the whole catalogue. when i hear "this open source model can't do tool calls" i now assume one of those four, and so far that's been right ~90% of the time.

2/ the funniest failure mode is also the most revealing.

deepseek-flash, when asked to edit or write a file, sometimes emits the path as a *markdown auto-link*: filePath: "/Users/x/proj/[notes.md](http://notes.md)". our writeFile tool obediently tried creating files literally named `[notes.md](http://notes.md)` until we caught it.

this is not a hallucination. it's the post-training chat distribution leaking through the tool boundary: the model has been rewarded for auto-linking in conversational output and is applying that prior in a context where it makes no sense. the fix is two regex lines that unwrap only the degenerate case where the link text equals the url without its protocol; real markdown like `[click](https://x.com)` passes through untouched. this is also conditioning on their own tools during RL, which were different from all the other tools we write and of course can't predict.

"tool confusion" is a more useful frame than "capability gap." the model knows how to format a path. it just hasn't been told clearly enough that this path is going to fopen, not into a chat bubble. so we encode that hint at the schema level, `pathString()` instead of `z.string()`, and the leak is plugged for every path field at once.

3/ the design choice that mattered was inverting preprocess-then-validate to validate-then-repair.

my first attempt was the obvious one: a preprocessing pass that normalized inputs (strip nulls, parse stringified arrays, etc.) before zod ever saw them. it broke immediately: writeFile content that *happened* to be json-shaped got rewritten before it hit disk. silent corruption, easy to miss in a smoke test.

then i made it less greedy:
- parse the input as-is. if it succeeds, ship it. valid inputs are never touched.
- on failure, walk the validator's own issue list. for each issue path, try the four repairs in order until one applies.
- parse again. on success, log `tool_input_repaired:${toolName}`. on failure, log `tool_input_invalid:${toolName}` and return a model-readable retry message.
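to make the shape of that loop concrete, here's a minimal sketch in typescript, assuming zod as the validator. the names (`repairToolInput`, the repair list, `pathString`) and the exact repair bodies are illustrative guesses at what the thread describes, not the actual CommandCode implementation:

```typescript
// a minimal sketch of the validate-then-repair loop described above, assuming
// zod as the validator. repairToolInput, the repair list, and the pathString
// helper are hypothetical names, not the real CommandCode code.
import { z } from "zod";

type Repair = (value: unknown, expected: string) => { ok: boolean; value?: unknown };

// the four shape repairs, in order. json-array-parse must run before
// bare-string-wrap, or '["a","b"]' would become ['["a","b"]'].
const repairs: Repair[] = [
  // 1. null sent for an optional field: drop it (undefined satisfies .optional()).
  (v) => (v === null ? { ok: true, value: undefined } : { ok: false }),
  // 2. an array emitted as a json string: parse it.
  (v, expected) => {
    if (expected === "array" && typeof v === "string") {
      try {
        const parsed = JSON.parse(v);
        if (Array.isArray(parsed)) return { ok: true, value: parsed };
      } catch { /* not json, fall through */ }
    }
    return { ok: false };
  },
  // 3. an empty `{}` placeholder where an array was expected: replace with [].
  (v, expected) =>
    expected === "array" && typeof v === "object" && v !== null &&
    !Array.isArray(v) && Object.keys(v).length === 0
      ? { ok: true, value: [] }
      : { ok: false },
  // 4. a bare string where an array was expected: wrap it.
  (v, expected) =>
    expected === "array" && typeof v === "string" ? { ok: true, value: [v] } : { ok: false },
];

const getAtPath = (obj: any, path: (string | number)[]) =>
  path.reduce((cur, key) => (cur == null ? cur : cur[key]), obj);

const setAtPath = (obj: any, path: (string | number)[], value: unknown) => {
  if (path.length === 0) return;
  let cur = obj;
  for (const key of path.slice(0, -1)) cur = cur[key];
  cur[path[path.length - 1]] = value;
};

// parse as-is first; only on failure, spend repair budget at the exact paths
// the schema complained about, then parse again.
function repairToolInput<T>(schema: z.ZodType<T>, rawInput: unknown, toolName: string) {
  const first = schema.safeParse(rawInput);
  if (first.success) return { data: first.data, repaired: false }; // valid inputs are never touched

  const candidate: any = structuredClone(rawInput);
  for (const issue of first.error.issues) {
    const expected = String((issue as any).expected ?? ""); // present on invalid_type issues
    const value = getAtPath(candidate, issue.path as (string | number)[]);
    for (const repair of repairs) {
      const attempt = repair(value, expected);
      if (attempt.ok) {
        setAtPath(candidate, issue.path as (string | number)[], attempt.value);
        break;
      }
    }
  }

  const second = schema.safeParse(candidate);
  if (second.success) {
    console.log(`tool_input_repaired:${toolName}`); // per-(model, tool) telemetry hook
    return { data: second.data, repaired: true };
  }
  console.log(`tool_input_invalid:${toolName}`);
  return {
    retryMessage: second.error.issues
      .map((i) => `${i.path.join(".")}: ${i.message}`)
      .join("; "),
  };
}

// the markdown auto-link unwrap, encoded at the schema level: only the
// degenerate case where the link text equals the url minus its protocol is
// rewritten; real links like [click](https://x.com) pass through untouched.
const pathString = () =>
  z.string().transform((s) => s.replace(/\[([^\]]+)\]\((?:https?:\/\/)?\1\)/g, "$1"));
```

the point of the sketch is the ordering: the fast path never rewrites valid input, and repairs only run at the paths the validator already flagged.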
the structural insight here is: when you preprocess, you encode a prior about what's broken. when you let the validator complain first, the schema is the prior, and you only spend repair budget at the exact paths the schema actually disagreed at. the validator is doing the work of localizing the bug for you. it's the same shape as cheap-then-careful everywhere else: try the fast path, fall back on evidence.

(this also gives you per-tool telemetry for free. you can watch repair rates per (model, tool) and notice when a model regresses on a specific contract before users do.)

4/ shape invariants and relational invariants need different fixes.

the four repairs above all handle shape problems: wrong type, missing key, wrong container. but read_file had a *relational* invariant: "if you provide offset, you must also provide limit, and vice versa." deepseek kept calling `readFile({ absolutePath, limit: 30 })` and getting an `ERROR:` back. you can't fix this with input repair, because each field is independently valid; the bug is in the relationship between them.

so i taught the function the model's intent instead. `limit` alone → `offset = 0`. `offset` alone → `limit = 2000` (matches the common read-tool default). then surfaced the decision back to the model in the result: "Note: limit was not provided; defaulted to 2000 lines. To read more or fewer lines, retry with both offset and limit." no `Error:` prefix, so the tui doesn't paint it red. the model sees what we picked and can self-correct on the next turn if our guess was wrong. transparency over silent magic wins big. repair where you can. extend semantics where you can't. surface the choice either way.

zoom out: a lot of what looks like model capability is actually contract design. a strict schema is a choice with a cost: it filters out noise, but it also filters out recoverable noise from any model that hasn't memorized the exact json contract you happened to pick. the largest commercial models eat that cost invisibly and are lenient on tool calling because they've seen enough of every contract during pretraining; open models pay it loudly and get dismissed for it. the harness is where you mediate between distributions.

four small repairs (i'm sure more will follow, we have three more merging today), two regex lines for auto-links, one relational default, one prefix change. the model didn't change. the contract got more forgiving in exactly the places it needed to be. deepseek v4 pro now beats opus 4.7 6/10 times on our internal evals.

imo "skill issue" applies to the harness more often than the model.
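and, circling back to point 4, a minimal sketch of the relational-default idea for read_file. the field names and the 2000-line default just mirror the numbers quoted above; the whole function is illustrative, not the real implementation:

```typescript
// hypothetical readFile argument handling: offset and limit only make sense
// together, so instead of returning an error we infer the model's intent,
// apply a default, and surface the choice back without an "Error:" prefix.
interface ReadFileArgs {
  absolutePath: string;
  offset?: number; // first line to read
  limit?: number;  // number of lines to read
}

interface ResolvedReadFileArgs {
  absolutePath: string;
  offset: number;
  limit: number;
}

function applyReadDefaults(args: ReadFileArgs): { resolved: ResolvedReadFileArgs; note?: string } {
  let { absolutePath, offset, limit } = args;
  let note: string | undefined;

  // relational invariant: each field is independently valid, the bug is in the
  // relationship, so say what we picked instead of erroring.
  if (offset === undefined && limit !== undefined) {
    offset = 0;
    note = "Note: offset was not provided; defaulted to 0. To read a different range, retry with both offset and limit.";
  } else if (limit === undefined && offset !== undefined) {
    limit = 2000;
    note = "Note: limit was not provided; defaulted to 2000 lines. To read more or fewer lines, retry with both offset and limit.";
  }
  if (offset === undefined) offset = 0;   // neither provided: read from the top
  if (limit === undefined) limit = 2000;  // with the default window

  // no "Error:" prefix: the note is informational, so the TUI doesn't paint it
  // red and the model can self-correct on its next turn if the guess was wrong.
  return { resolved: { absolutePath, offset, limit }, note };
}
```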
Ahmad Awais@MrAhmadAwais

Wow I just made DeepSeek V4 Pro beat Opus 4.7 6/10 times in our internal evals by auto repairing many of its quirks in tool calling. It’s performing super solid for such a cheap model.

Gameisforever retweeted
Mohammed Ashour@Dr_Ashour93·
🎓 PhD students: identifying the research gap is one of the toughest challenges… but there's a clear method:
1️⃣ Define the field
2️⃣ Read 5–10 literature reviews
3️⃣ Focus on the future challenges they raise
4️⃣ Make sure the research isn't a duplicate
5️⃣ Discuss with your supervisor
6️⃣ Review the literature in depth
7️⃣ Extract the gap precisely
8️⃣ Make sure the resources are available
Start right 👇
Gameisforever retweeted
فهد احمد امين@Fameenofficial·
Dear colleagues, AI-content detection tools are inaccurate according to the latest research from MIT: widely used tools such as Turnitin reported very high AI scores when tested on writing from the Bible, from Shakespeare's works, and elsewhere, because these tools look for a particular pattern in the writing. Most universities in the US and the UK have cancelled their subscriptions to the AI detector and now rely only on checking quotation overlap and plagiarism.
🌼 Just Aljouri@AljouriAlanezi

I did my research paper myself from scratch: fixing spelling mistakes, commas and punctuation, sources, everything, I did it all myself. They ran the paper through one of the sites our professors use to detect AI-generated content, and it shows a score of over 80%. Is it fair that I get wronged because of a program? And this isn't the first time it's happened to me; most professors, when they see high scores across most of the class, stop relying on the program. It's impossible. How did a tool that was supposed to make our lives easier end up making them harder even when we're not using it? And by what right does the professor withhold my grade because the program decided I used AI?
