Tiger

787 posts

Tiger banner
Tiger

Tiger

@screenest_ai

Japanese solo dev. Day 1/100. 0 → 10,000 followers · $0 → $1,000 MRR

Joined July 2024
76 Following · 85 Followers
Pinned Tweet
Tiger
Tiger@screenest_ai·
Built Agents’ memory for Windows. I kept re-explaining myself to my AI agent — every session, from scratch. So I built a local memory app that watches your screens, browser history, and agent conversations in the background, and feeds it all to Claude Code, Cursor, and friends via MCP. Your agents finally remember what you did. apps.microsoft.com/detail/9nhpvl6…
English
8
0
19
479
Tiger
Tiger@screenest_ai·
Day 5/100 Thank you for installing. 6 page views. 5 installs. 83% conversion rate. I know what small numbers look like. These are small. But 5 real people decided this was worth a download, and that's not nothing on day 5. Keeping the fingers moving. Reply guy mode, tendinitis pending.
Tiger tweet media
English
2
0
7
48
Tiger
Tiger@screenest_ai·
Good to know — not deep in iOS land so this is helpful. Is there a similar approach for Windows native (WPF/WinUI) or Unity? The visibility problem feels universal: Codex and Claude Code both struggle when the UI layer isn't web-based. Agent-driven E2E on opaque render pipelines is still painful.
English
1
0
1
58
Interlap
Interlap@interlap01·
Debugging is actually solved too. At least with MobAI, an agent can run a debug build and capture app logs directly, like the logs you see when starting the app from Xcode. It also has an agent-friendly debugger with breakpoints, code evaluation, and more. And of course, it can capture filtered system logs too.
English
1
0
0
20
How To AI
How To AI@HowToAI_·
Someone built a tool that lets Claude Code autonomously test your entire iOS app. It navigates your entire app, opens every screen, tests every flow, reads the debug logs, and hands you a structured bug report. One prompt and that's it.
English
40
73
1.1K
124.2K
Tiger
Tiger@screenest_ai·
@CreeCoder I found six rules. → Followers = guaranteed reach → The first 30 min is everything → Replies > likes in scoring → Off-topic = never seen → Media posts get a bonus → Filters drop you before scoring
English
0
0
0
21
Joshua
Joshua@CreeCoder·
X posting strategy based on the new algorithm: • Post 1-3 times per day • Reply to comments early after posting • Focus on replies/conversations over likes • Make posts people pause on • Use strong hooks in the first line • Prioritize engagement quality over impressions • Use text posts more than random images • Use images/videos only if they improve watch time • Keep videos short and attention-grabbing early • Avoid low-effort filler posts • Don’t overload posts with links • Focus on getting follows/profile clicks • Post when your audience is actually active • Don’t spam replies with repetitive comments • Focus less on follower count and more on performance • Create posts that make people reply, not just like • Don’t delete/repost posts too quickly • Build recognizable posting patterns/niches • Strong opinions probably outperform safe content • Use formatting/spacing to improve readability • The first hour after posting matters a lot • Avoid engagement bait that feels forced • Consistency matters more than random viral spikes Or forget all of this and just have fun and post whatever you want like it’s Twitter again.
Joshua tweet media
English
79
10
173
16.9K
Edward Frank Morris 🦇
1. Claude (solve any problem) 2. Perplexity (research anything) 3. Portfoliotab (create your portfolio) 4. Klingai (create AI videos) 5. Tripo AI (create 3D models) 6. Gemini (perfect writing) 7. Capcut (edit videos) 8. The AI Library (helpful tools) 9. Youlearn (summarize YouTube) 10. Canva (design graphics) 11. ElevenLabs (clone voices) 12. Podcastle (edit podcasts) Bookmark this.
English
41
59
180
5.3K
Tiger
Tiger@screenest_ai·
@cyrilXBT Anthropic's Podcast is also good.
English
0
0
0
40
CyrilXBT
CyrilXBT@cyrilXBT·
ANTHROPIC JUST DROPPED 13 FREE CLAUDE CERTIFICATIONS AND ALMOST NOBODY IS TALKING ABOUT IT. Not a YouTube playlist. Not a third-party course. Official certifications from the team that built Claude. Free. Forever. Here is the full list with links: START HERE 01. Claude 101 — Learn Claude for everyday work anthropic.skilljar.com/claude-101 02. AI Fluency: Frameworks and Foundations anthropic.skilljar.com/ai-fluency 03. Introduction to Agent Skills anthropic.skilljar.com/introduction-t… FOR DEVELOPERS 04. Building with the Claude API anthropic.skilljar.com/claude-api 05. Claude Code in Action anthropic.skilljar.com/claude-code 06. Intro to Model Context Protocol anthropic.skilljar.com/mcp 07. MCP Advanced Topics anthropic.skilljar.com/mcp-advanced FOR EDUCATION AND NONPROFITS 08. AI Fluency for Students 09. AI Fluency for Educators 10. Teaching AI Fluency 11. AI Fluency for Nonprofits FOR ENTERPRISE 12. Claude with Amazon Bedrock 13. Claude with Google Cloud Vertex AI 13 courses. 6 skill levels. 5 audiences. 100% free forever. The engineers getting hired at $150,000 to $300,000 to work with Claude at the highest level are learning exactly this material. Anthropic's team just made it available to everyone. Pro tip: Start with Claude 101 then go straight to Claude Code in Action. That is the fastest path from beginner to builder. Bookmark this before you pay for another AI course. Follow @cyrilXBT for every Anthropic resource that compounds your skills the moment it drops.
CyrilXBT tweet media
English
20
58
388
21.1K
Polymarket
Polymarket@Polymarket·
NEW: 𝕏 publishes its latest algorithm update to GitHub.
English
125
43
614
1.4M
Tiger
Tiger@screenest_ai·
I read the X algorithm so you don't have to. Six rules decide who sees your post: → Followers = guaranteed reach → The first 30 min is everything → Replies > likes in scoring → Off-topic = never seen → Media posts get a bonus → Filters drop you before scoring Each one is tied to a real module👇
Tiger tweet media
English
0
0
2
26
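The six rules above describe a scoring shape: predicted engagement probabilities times weights, negative signals subtracted, then a media bonus. A minimal sketch of that shape — every weight value below is a hypothetical illustration, not a number from the published code:

```python
# Toy sketch of the scoring shape the post describes. ALL weights are
# hypothetical illustrations, not values from the real X codebase.

POSITIVE_WEIGHTS = {"reply": 13.5, "repost": 2.0, "like": 1.0, "dwell": 5.0}
NEGATIVE_WEIGHTS = {"not_interested": -74.0, "block_author": -100.0,
                    "mute_author": -80.0, "report": -120.0, "not_dwelled": -10.0}
MEDIA_BONUS = 1.1  # "media posts get a bonus"

def score(predicted: dict[str, float], has_media: bool = False) -> float:
    """Score a post from per-action predicted probabilities."""
    s = sum(w * predicted.get(k, 0.0) for k, w in POSITIVE_WEIGHTS.items())
    s += sum(w * predicted.get(k, 0.0) for k, w in NEGATIVE_WEIGHTS.items())
    return s * (MEDIA_BONUS if has_media else 1.0)

# Replies outweigh likes at equal predicted probability:
print(score({"reply": 0.1}) > score({"like": 0.1}))  # True
```

The point of the shape, not the numbers: a modest predicted probability of a negative action (a report, a block) can wipe out a much larger predicted probability of a like, which is why "filters drop you before scoring" matters.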
Tiger
Tiger@screenest_ai·
Build in public, day 5. Went deep on reply guy mode this week. Didn't expect it, but it's actually one of the better ways to absorb signal — you end up in conversations you'd never find by scrolling. Different people, different angles, information comes at you from directions you didn't know to look. The other side effect: you just keep typing. Fingers don't stop. Pretty sure I'm on track for tendinitis by day 30.
English
1
0
4
27
Vaishnavi
Vaishnavi@_vmlops·
MICROSOFT'S FREE AI AGENTS COURSE The best resource to go from zero to building production ai agents. → 15+ lessons with code + videos → agentic RAG, multi-agent, tool use → memory, planning, browser-use agents → MCP & A2A protocols included all free.... all open source github.com/microsoft/ai-a…
English
6
105
585
26.5K
Tiger
Tiger@screenest_ai·
Six layers in one folder: identity, memory, skills, runtime state, automation, observability. Most agent setups scatter these across env vars, prompt files, and ad-hoc scripts with no separation of concerns. The ~/.hermes layout is what a production-grade agent config should look like by default, not after six months of refactoring.
English
0
0
0
54
Akshay 🚀
Akshay 🚀@akshay_pachaar·
the anatomy of the ~/.hermes folder. one folder controls everything your hermes agent knows, remembers, and can do. understanding its layout is the difference between treating hermes as a black box and actually customizing it. here's what lives inside and why each piece matters. configuration: config.yaml is the source of truth for everything non-secret: model choice, terminal backend, tool enablement, MCP servers. env holds your API keys and bot tokens. auth.json stores OAuth credentials. then there's SOUL.md. it occupies slot #1 in the system prompt, before anything else loads. it defines who the agent is: personality, tone, communication style, hard limits. everything the agent writes, creates, and remembers passes through this identity layer. knowledge: memories/ contains two tiny files. MEMORY.md (2,200 chars) holds project conventions, tool quirks, lessons learned. USER.md (1,375 chars) holds your profile. both get injected into the system prompt as frozen snapshots at session start. when they fill up, the agent consolidates: merges entries, drops redundancy, keeps only what's dense and useful. capabilities: skills/ is where the learning loop lives. each skill is a self-contained ability: a SKILL.md (the procedure), a references/ folder (docs the agent reads), and scripts/ (executable helpers). skills come from three sources: bundled with hermes, downloaded from the hub via hub/, or created by the agent itself during your sessions. hermes ships with 687 skills across 18 categories, and you can add any GitHub repo as a custom tap. runtime state: sessions/ stores per-platform session metadata. state.db is the SQLite database with FTS5 indexing that backs tier 2 memory. this is what makes "what did we discuss three weeks ago?" actually work across CLI and messaging. automation: cron/ holds scheduled jobs in jobs.json and their outputs in output/. the gateway daemon ticks every 60 seconds and runs due jobs in isolated sessions. you describe schedules in plain English, hermes converts them. extension + observability: plugins/, hooks/, and skins/ are the surface area for user customization. logs/ gives you agent.log, gateway.log, and errors.log for debugging. you won't manually edit most of these files. but knowing this layout means you understand exactly where identity, memory, skills, automation, and state live, and how they connect. i wrote a full deep dive covering the hermes agent's architecture, memory system, self-evolving skills, GEPA optimization, and setting up multiple specialized agents. The article is quoted below.
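As a quick reference, the layout described in the thread can be summarized as a tree (file and folder names come from the post; the one-line annotations paraphrase its descriptions):

```
~/.hermes/
├── config.yaml       # non-secret config: model, terminal backend, tools, MCP servers
├── env               # API keys, bot tokens
├── auth.json         # OAuth credentials
├── SOUL.md           # identity layer, slot #1 in the system prompt
├── memories/
│   ├── MEMORY.md     # project conventions, tool quirks, lessons learned
│   └── USER.md       # your profile; injected as a frozen snapshot at session start
├── skills/           # per skill: SKILL.md + references/ + scripts/
├── hub/              # skills downloaded from the hub
├── sessions/         # per-platform session metadata
├── state.db          # SQLite + FTS5, backs tier-2 memory
├── cron/
│   ├── jobs.json     # scheduled jobs, described in plain English
│   └── output/       # job outputs
├── plugins/          # ┐
├── hooks/            # ├ user customization surface
├── skins/            # ┘
└── logs/             # agent.log, gateway.log, errors.log
```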
Akshay 🚀 tweet media
Akshay 🚀@akshay_pachaar

x.com/i/article/2053…

English
39
111
770
73.3K
Tiger
Tiger@screenest_ai·
Filtering for the ones that actually deliver on the "free" promise for technical skills: Keep: freeCodeCamp, The Odin Project, MIT OpenCourseWare, AWS Training, Databricks Academy, Brilliant, CS50 (Harvard). Skip: Skillshare (paywalled after trial), Babbel (subscription), LinkedIn Learning (subscription), Udemy (free tier is thin). The Odin Project + freeCodeCamp in sequence is still the most complete free path from zero to employable dev. Everything else is supplementary.
English
0
0
0
20
Pradeep Pandey
Pradeep Pandey@Div_pradeep·
30 PLATFORMS TO LEARN ANY SKILL FOR FREE. 1. Coursera.org 2. edX.org 3. KhanAcademy.org 4. Udemy.com 5. Skillshare.com 6. LinkedIn Learning 7. Alison.com 8. FutureLearn.com 9. MIT OpenCourseWare - ocw.mit.edu 10. Harvard Online - online-learning.harvard.edu 11. Google Digital Garage - learndigital.withgoogle.com 12. HubSpot Academy - academy.hubspot.com 13. Codecademy.com 14. freeCodeCamp.org 15. The Odin Project - theodinproject.com 16. W3Schools.com 17. Sololearn.com 18. Brilliant.org 19. Duolingo.com 20. Babbel.com 21. Memrise.com 22. TED.com 23. YouTube.com/learning 24. OpenLearn - open.ac.uk 25. Saylor Academy - saylor.org 26. GCFGlobal.org 27. Canva Design School - designschool.canva.com 28. Semrush Academy - semrush.com/academy 29. Databricks Academy - academy.databricks.com 30. AWS Training - aws.amazon.com/training
Pradeep Pandey tweet media
English
18
58
164
7.4K
Tiger
Tiger@screenest_ai·
Filtering for devs and researchers who actually need these: Keep: regex101, explainshell, WolframAlpha, ConnectedPapers, Elicit, Consensus, SemanticScholar, Photopea, HaveIBeenPwned, VirusTotal. Skip: 12ft.io (dead), Unscreen (acquired by Canva), Libgen / Sci-Hub (legal risk). The research AI cluster — Elicit + Consensus + ConnectedPapers — is the real unlock most people walk past.
English
0
0
2
3K
AI探路者Tim
AI探路者Tim@AIExplorerTim·
50 websites Google doesn't want you to know about 1. '12ft.io' — bypass any paywall 2. 'libgen.is' — millions of free textbooks 3. 'sci-hub.se' — free research papers 4. 'alternativeto.net' — find free app alternatives 5. 'justwatch.com' — find where anything is streaming 6. 'archive.org' — access any old web page 7. 'gutenberg.org' — 70,000 free classic books 8. 'pdfdrive.com' — free PDF downloads 9. 'openculture.com' — free courses from top universities 10. 'wolframalpha.com' — solve any math problem instantly 11. 'photopea.com' — free Photoshop in the browser 12. 'squoosh.app' — compress any image for free 13. 'remove.bg' — remove image backgrounds for free 14. 'cleanup.pictures' — erase objects from photos 15. 'unscreen.com' — remove video backgrounds 16. 'carbon.now.sh' — turn code into art 17. 'ray.so' — beautiful code screenshots 18. 'shots.so' — free product mockups 19. 'smartmockups.com' — mockups without Photoshop 20. 'haveibeenpwned.com' — check whether you've been hacked 21. 'virustotal.com' — scan any file for malware 22. 'privnote.com' — send self-destructing messages 23. 'temp-mail.org' — instant disposable email 24. 'file.io' — share files that auto-delete 25. 'archive.ph' — permanently save any web page 26. 'similarsites.com' — find alternatives to any website 27. 'radio.garden' — listen to any radio station worldwide 28. 'everynoise.com' — explore every music genre 29. 'tunefind.com' — find songs from any show 30. 'musicforprogramming.net' — focus music 31. 'mynoise.net' — custom focus soundscapes 32. 'coffitivity.com' — café sounds for productivity 33. 'elicit.org' — AI research-paper assistant 34. 'consensus.app' — search scientific consensus 35. 'connectedpapers.com' — visualize research maps 36. 'semanticscholar.org' — free academic search 37. 'scispace.com' — understand any research paper 38. 'summarize.tech' — summarize any YouTube video 39. 'phind.com' — AI search for developers 40. 'regex101.com' — test any regex instantly 41. 'codebeautify.org' — format code neatly 42. 'jsonformatter.org' — read JSON like a human 43. 'explainshell.com' — understand terminal commands 44. 'raindrop.io' — a bookmark manager that works 45. 'downdetector.com' — check whether any site is down 46. 'tineye.com' — reverse image search 47. 'fast.com' — check your internet speed 48. 'smallpdf.com' — edit PDFs for free 49. 'ilovepdf.com' — merge and split PDFs 50. '10minutemail.com' — temporary email in seconds. The internet is far bigger than what Google shows you. Most people never leave the first page.
AI探路者Tim tweet media
Chinese
73
1.2K
5.2K
251.8K
Tiger
Tiger@screenest_ai·
One thing missing from the pros/cons: inference cost per token is often the deciding factor in production, not raw intelligence. Dense models at 27B are cheaper to serve per token than a MoE with 13B active params once you account for routing overhead and memory bandwidth. The "fast and efficient" label on MoE only holds at scale.
English
0
0
0
53
0xSero
0xSero@0xSero·
1. Dense Models - Slow and Smart Example: Qwen3.6-27B / Gemma-4-31B What it means: - when a prompt is sent - it gets tokenised (words are mapped to tokens) - token generation starts - the 27B means 27 billion parameters - each of those parameters will be activated - 27 billion matrix multiplications - for every token generated Active parameter counts are positively correlated with intelligence. That's why Gemma-4-31B is able to compete with Mixture of Experts (MoEs) 10 times their size. 2. Mixture of Expert models - Fast and Efficient Example: Deepseek-V4-Flash / Qwen3.5-397B What it means: - when a prompt is sent it's tokenised - it's sent to a router - a router was trained to match prompts with experts - experts are sub-networks of the model - when found the experts are activated - tokens are generated with only a fraction of the params For example: Deepseek-v4-flash has 284 billion params 11x larger than the dense Qwen3.6-27b. But only 13B of those 284B will activate per token, which is less than half of the size of Qwen3.6-27B ---- Dense Pros: - Dense models are easier to train - They tend to be smaller overall - They can be very smart per token Dense Cons: - Competitive dense models are on average slower than their MoE peers. - Less parameters to train and specialise. MoE Pros: - Can be much larger and be trained longer - Faster token generation MoE Cons: - Larger vram requirements - Harder to train -------- Lmk if there's anything i'm wrong with or missing
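A back-of-the-envelope comparison of the per-token compute implied by the numbers in the post, using the common ~2 × active-parameters FLOPs-per-token approximation (one multiply and one add per weight) and deliberately ignoring attention, routing, and memory-bandwidth overhead, which is exactly where the reply above argues the picture flips. Model names are taken from the post as given:

```python
# Rough per-token compute from the figures quoted in the post.
# FLOPs per token ~ 2 * active_params; attention, routing cost, and
# memory bandwidth are ignored in this sketch.

def flops_per_token(active_params: float) -> float:
    return 2.0 * active_params

dense_active = 27e9                  # Qwen3.6-27B: every parameter fires
moe_total, moe_active = 284e9, 13e9  # Deepseek-v4-flash: 13B of 284B fire

ratio = flops_per_token(dense_active) / flops_per_token(moe_active)
print(f"dense does ~{ratio:.2f}x the matmul work per token")  # ~2.08x
```

Note the asymmetry this hides: the MoE still has to hold all 284B parameters in memory (the "larger VRAM requirements" con), so lower per-token FLOPs does not automatically mean lower serving cost.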
0xSero tweet media
English
36
61
771
47.7K
Tiger
Tiger@screenest_ai·
The embedding decay mechanic is the most underrated part here. It doesn't decay on a timer — it decays with new engagement. That's the same design pattern as continual learning in production ML: stale signals don't expire, they get diluted. Means posting consistency matters more than posting quality in the short run.
English
0
0
0
76
Javi López ⛩️
Javi López ⛩️@javilop·
✨ xAI published the X algorithm yesterday and I can't believe nobody has noticed what's really in its guts. I burned 500 bucks on Claude analyzing every last line. Here's what I found (LONG POST, save it for later): 0/ Every account has an associated "embedding" that describes you the way AI models do: in latent space. It's the internal fingerprint the model keeps of each user, a vector of numbers summarizing how your account behaves (what topics you touch, what engagement you generate, who you interact with). The model uses it every time it decides who to show your posts to. If your track record is good, it stays clean and the model pushes you. If you accumulate negative signals (blocks, mutes, reports, not_interested), it turns toxic and starts penalizing you automatically. And here's the trap: it does NOT reset. What you do today stays inside for weeks, contaminating everything you post afterward, even if it's good. That's why climbing out of a shadowban or a low-reach stretch on X feels like trying to turn a giant rusted wheel: it's not your imagination, that's literally how it works. Cleaning up or improving your embedding is slow and messy; it's like the impression you have of someone you dislike: no matter how friendly they become, it takes a long time before you trust them. Another important finding: the embedding doesn't decay on a clock. It decays as NEW engagement enters the system. If you stop posting, the old bad signals stay frozen inside: nothing overwrites them. If you start creating content the algorithm likes, you'd see improvement after 6-8 weeks and a decent shift around 12-16 weeks, assuming you don't pile up more bad signals in the meantime. Why is nobody talking about this? It seems huge to me, and it finally confirms that "I'm in a slump" feeling we've all been through.
1/ The first 30 minutes are EVERYTHING. If your post doesn't get interactions fast, Grok doesn't even evaluate it. No quality score, no deep analysis, no chance of reaching anyone who doesn't follow you. Dead and buried. 2/ Post age is capped at 80 hours: POST_AGE_MAX_MINUTES = 4800, in 1-hour buckets. After that you're in the "overflow bucket", which translates to "old, ignore". Best window: the first 0 to 12 hours. Past 24 you're already in a worse bucket. So far from incentivizing "evergreen" content, X wants a constant supply of fresh meat (the exact opposite of YouTube). 3/ MY BIGGEST FEAR WAS UNFOUNDED (supposedly): living in the EU and posting in English for a US audience means ZERO direct penalty, in theory: the PostCandidate struct has NO field for author country, IP, or location. Gizmoduck (X's identity service) only returns follower count + screen name. The Phoenix transformer only sees a hash of your author_id. What does hurt you indirectly: your time zone (your post ages while the US sleeps) and the language OF THE POST. So using a VPN to "post from the US" does literally nothing (unlike on TikTok or Instagram, by the way). 4/ The 5 negative signals that kill your reach: the model predicts 22 actions per post. 5 of them are negative weights that get SUBTRACTED from your score: - not_interested - block_author - mute_author - report - not_dwelled (people scrolling past without stopping on your post). That last one is brutal, honestly. A post that gets ignored is mathematically WORSE than one that was never published. 5/ Shadowbans 100% exist. There are 4 distinct types: - Hard drop. X deletes your post from everyone's feed without telling you. Applied to posts with severe content (child abuse, etc.) or suspended accounts. You never find out. - The DO_NOT_AMPLIFY label. It's literally a field in the code that says "do not amplify this post".
If you get it, ads stop appearing next to your posts → X stops making money by showing you → the system stops pushing you. A hard blackout. - BotMaker rules. The internal panel where X employees can manually limit a specific account. The code shows the categories that exist (Content, ContentLimited, Safety, Grok) but NOT who they're applied to or why. The tool is documented; the uses are not. - Poisoned embedding. The nastiest one, as we saw earlier. The model keeps an internal "memory" per account. If your account accumulates enough "not interested" + blocks + mutes + reports over time, that memory turns toxic. From then on, even your good future posts get penalized automatically. Nobody decided it. The model simply learned that your account produces bad engagement, and self-corrected. 6/ Only ORIGINAL posts go through the "Banger Screen". Replies and retweets never enter Grok's quality classifier. If you spend your day replying to viral accounts, you're optimizing for the Reply Ranker, NOT for amplification. Want to be discovered outside your network? Write original posts; there's no other way. 7/ Replies to small accounts go through an anti-spam scanner. Replies to big accounts go through Grok. Two different classifiers. The SpamEapiLowFollowerClassifier hits replies to small accounts. The ReplyRanker scores replies to big accounts from 0 to 3 with Grok. "First!" or emoji-only replies score a 0. The "Sir, this is a Wendy's" routine gets penalized. So if you write replies, they'd better add something; otherwise don't bother. 8/ 50% of all feed requests are "shadow traffic". is_sampled(request_id, 0.5) marks half of every feed request as shadow.
Many contextual features (gender inference, demographics, Grok topic preferences) only activate in shadow OR behind a feature flag. Translation: you literally cannot know which version of the algorithm any given user is seeing. Half your audience is in an experiment at any moment. 9/ Dwell (the time a user spends looking at your post before scrolling) is 5x better than getting likes. The scorer has 5 distinct dwell signals (dwell, cont_dwell_time, click_dwell_time, etc.) but only 1 favorite signal. - A post with tons of likes that people read for 1 second and scroll past → low score. - A post with few likes that people spend 8 seconds reading → high score. Optimize for time spent on your post, not for likes! 10/ Things that do work: - Engagement in the first 10 min. DM your friends, ping your community, whatever it takes. - Post in YOUR AUDIENCE's time zone, not yours. To target the US: 8 to 11am ET (2 to 5pm Madrid time). - Don't post 5 things in a row. The AuthorDiversityScorer multiplies each subsequent post of yours by decay^position. By post 4 you're already on the floor. - Video ≥ 10 seconds. Below MinVideoDurationMs you lose the entire VQV weight. - Videos with audio. Grok runs ASR (speech to text) on every video. No audio = blank signal. - Quote viral posts in your niche. The model already knows the original hooks people; your added value stacks on top. 11/ Things that destroy your reach: - THE DISCOVERY OF THE CENTURY: threads longer than 10 tweets. The DedupConversationFilter only allows 1 tweet per conversation per feed. Mega-threads are mathematically a waste. - Reposting the same content. Bloom filters deduplicate it. - AI slop. There's literally a slop_score field in the BangerScreen output. They detect it explicitly. - Unlabeled NSFW/violence/hate. Automatic MediumRisk = no ads = structural shadowban. - Spamming replies to small accounts.
There's a specific classifier just for that. 12/ What the sneaky devils have NOT published: the skeleton is public, the dials are not. - The exact numeric values of each weight (FavoriteWeight, ReplyWeight, OonWeightFactor, AuthorDiversityDecay). They live in xai_feature_switches::Params, external config. - The real Grok prompts (the 7 policy prompts: PToS, BangerMiniVlmScreenScore, SafetyPtos). They could carry literally any framing. - The BotMaker rules that apply DO_NOT_AMPLIFY to specific accounts. - util/phoenix_request.rs, which builds the final call to the model. - 25+ xai_* crates referenced but not included. - The production Phoenix weights. They only published the mini version. My theory: they've handed us a rather scrawny skeleton of what they have. The muscle (the weights) and the brain (the prompts and the BotMaker rules) are completely opaque. They've clearly kept the best parts for themselves. 13/ Summary cheat sheet: - The first 30 min matter more than anything else. - Your location is irrelevant; your timing and language are not. - Shadowbans come in 4 flavors. The worst is the model silently poisoning your author embedding from past negative signals; digging yourself out by cleaning your embedding will be painful, but it can be done. - Replies and retweets don't go through the quality classifier. Originals do. - Dwell (someone staying on your post) beats a like 5 to 1. - Half the traffic is in some experiment at any moment. - They've kept the best of the algorithm to themselves, but hey, it's something.
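The "decays with new engagement, not with a clock" mechanic described in the thread reads like an exponential moving average: old signals are never erased, only diluted by each new observation. A minimal sketch under that assumption (the blend rate, the 1-D embedding, and the update rule itself are all illustrative, not from the published code):

```python
# Hypothetical sketch of "dilution, not expiry": an exponential moving
# average that only moves when a NEW engagement event arrives. ALPHA and
# the 1-D embedding are illustrative; the real update rule is not public.

ALPHA = 0.05  # hypothetical per-event blend rate

def update(embedding: list[float], signal: list[float]) -> list[float]:
    """Blend one new engagement signal into the account embedding."""
    return [(1 - ALPHA) * e + ALPHA * s for e, s in zip(embedding, signal)]

emb = [1.0]              # a "toxic" component (accumulated negative signal)
# With no new engagement, nothing overwrites it: it stays frozen at 1.0.
# With 100 clean-engagement events it shrinks by (1 - ALPHA)**100:
for _ in range(100):
    emb = update(emb, [0.0])
print(round(emb[0], 4))  # 0.0059 -- diluted, never zeroed
```

This matches the thread's two claims at once: go silent and the bad signal stays frozen (no updates, no dilution), post well and recovery is gradual rather than instant.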
Javi López ⛩️ tweet media
Spanish
61
172
944
72.8K
Tiger
Tiger@screenest_ai·
Vibe coders think full-stack is 2 layers. It's 12. Here's which AI tool owns each: Frontend → v0, Bolt, Lovable APIs & Backend → Claude Code, Cursor, Copilot DB & Storage → Claude Code + Supabase MCP Auth → Claude Code (Clerk / Auth.js boilerplate) Hosting → Cursor + Vercel MCP Cloud & Compute → Amazon Q Developer CI/CD → GitHub Copilot Security & RLS → Manual. AI assists, doesn't own this. Rate Limiting → Claude Code (boilerplate) Caching & CDN → Mostly manual Load Balancing → Amazon Q Developer Error Tracking → Cursor + Sentry The gap between vibe coding and production is exactly these 10 layers nobody talks about.
English
0
0
0
60
Sick
Sick@sickdotdev·
Agree?
Sick tweet media
English
77
312
5K
172.1K
Tiger
Tiger@screenest_ai·
Memory compression solves the token side. The harder side: agents still don't know what they don't remember. I'm building Contextberg around this — screen + browser + input history as passive ambient context. Hypothesis is that retrieval precision improves when the agent has continuous environmental grounding. contextberg.com
English
0
0
0
125
Nico
Nico@nicos_ai·
You can now give infinite memory to Claude Code, Codex, and Hermes. 100% free. Agentmemory is already trending on GitHub with 4,000+ stars. The tool: → saves everything Claude/Codex does during your coding sessions → compresses it with AI → automatically retrieves the relevant context in future sessions. The difference is absurd: CLAUDE.md → 22,000+ tokens for 240 observations. Agentmemory → just 1,900 tokens for the same observations. 92% less context consumed. On top of that: → Up to 95% fewer tokens per session → 200x more tool calls before hitting the context limit → 100% open source. Past roughly 1,000 observations, much of Claude's built-in memory stops being accessible. Agentmemory keeps everything indexed and searchable. This completely changes how Claude Code, Codex, and Hermes (among others) get used. You no longer reset the context. Your project simply remembers it. Link below 👇
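A quick sanity check on the figures quoted in the post (the 22,000+ / 1,900 / 240 numbers are taken from the post itself; this is arithmetic only, not a claim about the tool):

```python
# Sanity check on the figures quoted above.
claude_md_tokens = 22_000   # "22,000+ tokens for 240 observations"
agentmem_tokens  = 1_900    # "just 1,900 tokens for the same observations"
observations     = 240

reduction = 1 - agentmem_tokens / claude_md_tokens
print(f"{reduction:.1%}")                         # 91.4% less context
print(round(claude_md_tokens / observations, 1))  # 91.7 tokens/observation raw
print(round(agentmem_tokens / observations, 1))   # 7.9 tokens/observation compressed
```

The exact 22,000 figure gives ~91.4%; the post's "92%" presumably reflects the "+" in "22,000+".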
Spanish
24
86
514
26.1K
Tiger
Tiger@screenest_ai·
Same pattern here — memory and tool selection failures almost always trace back to context loss between hops, not the integrations themselves. I've been building something around this called Contextberg: the idea is that screen + browser + input history gives the agent enough ambient context to stay consistent. Still early, but that's the hypothesis.
English
0
0
0
11
Osman R.
Osman R.@UsmanReads·
@screenest_ai Useful map. The part that keeps biting me is not the integrations themselves, it is keeping the agent coherent as it hops between them. Memory and tool selection seem to be where most of my failures live.
English
0
0
0
22
Tiger
Tiger@screenest_ai·
The Ultimate Agentic Tech Stack (2026 Edition) To build truly "superpowered" AI agents, you need more than just an LLM. You need a robust ecosystem of integrations. Here is the comprehensive breakdown: 🌐 Web & Search • Firecrawl: Clean web crawling for LLM-ready data. • Browserbase: Headless browser for logins and complex UI actions. • Apify: Pre-built scrapers for X, LinkedIn, and Maps. • Exa / Perplexity: Real-time AI-native search retrieval. 💻 Development & Data • Codex: Core agentic framework for execution. • GitHub: Handling code, issues, and PR automation. • DeepWiki: Deep-dive technical code analysis. • Snowflake: Enterprise-grade data warehousing. • dbt: SQL-based data transformation and modeling. • Supabase / Pinecone: Backend and Vector memory (RAG). ⚙️ Operations & Productivity • Google Workspace: Connecting Gmail, Calendar, and Drive. • Notion: Centralized documentation and task databases. • Obsidian: The "Second Brain" for localized knowledge. • Linear: High-performance issue tracking. • Stripe: Automated billing and payment workflows. 🗣️ Communication & Media • Discord / Slack: Automated support and team workflows. • Bland / Twilio: Giving agents a voice for real-world calls. • YouTube Transcripts: Turning video into searchable research. • Readwise: Querying your entire library of highlights. • Granola / Fathom: Searchable meeting history and transcripts. 🔄 Automation & Intelligence • Zapier / Make: Connecting to 5,000+ third-party apps. • Replicate / Hugging Face: Running specialized open-source models. • Vercel / Railway: Instant infrastructure and deployment. The "Must-Have" Top 5: 1.Firecrawl (Web) 2.Browserbase (Action) 3.Google Workspace (Context) 4.GitHub (Engineering) 5.Obsidian (Memory)
English
1
0
2
107
Tiger
Tiger@screenest_ai·
@bolekzahradnik @cryptopunk7213 The asymmetry is already here — $35K vs $5-10M for the same class of exploit. Cost-zero tokens don't create the wasteland, they just make it visible. Defense has always been playing catch-up; AI just removed the excuse that offense was too expensive.
English
0
0
0
12
Bolek Zahradnik
Bolek Zahradnik@bolekzahradnik·
@screenest_ai @cryptopunk7213 The Grid from Tron, that's how it will look, just with less fancy effects, but overloading some transformers could be quite the fireworks 🎆😁
English
0
0
0
22
Ejaaz
Ejaaz@cryptopunk7213·
claude mythos just broke Apple's $2 billion defense system. it did so by discovering a completely different attack vector to break in only took it 5 days costing ~$35K of mythos api time (the same exploit class costs $5-10M on grey market) the researchers that commandeered the exploit produced a 55-page report that was delivered to Apple HQ in-person (hoping they release it after patching). most shocking part for me is apple's MIE worked as intended. mythos just discovered a new way to side-step it entirely by poisoning the data the M5 chip ingested. at this point i think we have to accept that mythos walks the walk. As the anthropic red-team explicitly confirmed this week - this is NOT a compute resource issue. its national defense.
International Cyber Digest@IntCyberDigest

❗️🚨 BREAKING: Researchers used Mythos Preview to find the first public macOS kernel memory corruption exploit on Apple's M5 silicon; their glimpse into Mythos suggests it's really powerful. Apple spent five years and an estimated several billion dollars building Memory Integrity Enforcement (MIE), the hardware-assisted memory safety system built around ARM's MTE. It was the flagship security feature of the M5 and A19, designed specifically to kill the entire memory corruption bug class. Researchers from Calif built a working exploit in five days. According to Apple's own research, MIE disrupts every public exploit chain against modern iOS, including the recently leaked Coruna and Darksword kits. Calif walked into Apple Park this week and handed over the report in person. The full 55-page technical report drops after Apple patches the vulnerability.

English
118
431
5K
1.5M