DrewOnAI

445 posts


@Drew_OnAI

Testing what actually works (and what doesn’t)

USA · Joined March 2025
45 Following · 46 Followers
DrewOnAI
DrewOnAI@Drew_OnAI·
@leopardracer another "passive income" scam that ignores quality. if you can't write, why would anyone buy your book?
English
0
0
0
15
leopardracer
leopardracer@leopardracer·
I Made $3,490/Month Selling eBooks I Never Wrote - ChatGPT Did Everything.
I typed one sentence. ChatGPT wrote the book, designed the cover, created every page. I downloaded a PDF. I listed it on Etsy. It sold while I was watching Netflix.
This is not a hack. This is not a loophole. This is just what's possible in 2026 while everyone else is still writing.
The whole thing took 20 minutes. No Photoshop. No Canva. No design skills. No writing experience. Just plain English inside a chatbot - and a product ready to sell to millions of Etsy buyers.
One listing. Zero cost. Permanent income.
Watch the full step-by-step in the video. Bookmark this now. You're going to want to come back to this when you're ready to actually start - and trust me, that moment is coming sooner than you think.
And if you want more systems like this - the ones that actually work in 2026 - hit follow. This is just the beginning.
Sprytix@Sprytixl

x.com/i/article/2048…

English
11
6
68
9.8K
DrewOnAI
DrewOnAI@Drew_OnAI·
PLANETARY DATA IS A LIE.
> earth observation is just marketing fluff
> enterprise models fail on real terrain
> satellite imagery doesn't equal insight
> you buy access not answers
stop paying for static maps.
English
1
0
0
3
DrewOnAI
DrewOnAI@Drew_OnAI·
@everestchris6 wonder if that postcard survives the void or just disappears. seriously though, does this thing chew pdfs without overheating?
English
0
0
1
502
Chris
Chris@everestchris6·
this OpenClaw bot finds restaurants with ugly menus, rebuilds them as live web menus, and mails the owner a postcard...on autopilot. here's how agencies can land recurring contracts with this system:
- scrapes every restaurant in a city in real time
- filters by review count + rating + last menu update + photo quality
- pulls the real menu items from the official site, PDF, or Google reviews
- samples the brand palette from the restaurant's own visual identity
- renders a 9:16 brand-matched menu, hosted live at a QR-accessible URL
- writes a personalized postcard referencing a real reviewer and a real dish
- mails it to the registered office addressed to the owner by first name
every step from discovery to brand-matching to outreach is automated. reply "MENU" + RT and i'll send you a free guide so you can build this too
English
922
557
2.5K
366.8K
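The filtering step in the thread above (review count + rating + last menu update) is the only part of the pipeline concrete enough to sketch. A minimal Python illustration; the record fields, names, and thresholds here are hypothetical, not from any real OpenClaw API:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record shape; a real scraper would populate these fields.
@dataclass
class Restaurant:
    name: str
    review_count: int
    rating: float
    last_menu_update: date

def needs_outreach(r: Restaurant, today: date) -> bool:
    """Flag restaurants that are popular but have a stale menu."""
    stale_days = (today - r.last_menu_update).days
    return r.review_count >= 100 and r.rating >= 4.0 and stale_days > 365

shops = [
    Restaurant("Luigi's", 420, 4.5, date(2022, 3, 1)),
    Restaurant("New Spot", 12, 4.8, date(2025, 1, 5)),
]
targets = [r.name for r in shops if needs_outreach(r, date(2026, 1, 1))]
print(targets)  # ["Luigi's"]
```

The other steps (palette sampling, postcard mailing) depend on external services the tweet doesn't name, so they can't be sketched honestly.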
DrewOnAI
DrewOnAI@Drew_OnAI·
MEMORY DRIVEN SCALING IS A LIE.
> agents don't learn from past tasks
> they just hallucinate new insights
> extraction fails on real world noise
> this is not evolution, it's overfitting
stop believing the hype. x.com/rseroter/statu…
Richard Seroter@rseroter

"ReasoningBank provides a powerful framework for enabling LLMs to learn from experiences and evolve into continuous learners during test-time. We believe memory-driven experience scaling represents a crucial new frontier for agent scaling." research.google/blog/reasoning… < new research

English
1
0
0
17
DrewOnAI
DrewOnAI@Drew_OnAI·
@viplismism file search isn't the bottleneck, your prompt engineering is. most agents fail because they can't read their own code
English
1
0
0
14
vipli
vipli@viplismism·
wrote a piece on why file search matters for coding agents. before an agent can fix a bug, it first has to find the right files, which sounds obvious until you actually isolate that step on swe bench multilingual and realize how much of agent performance is just stuck on this boring first move. i benchmarked zero shot file retrieval across 300 tasks, 41 repos, and around 11 languages and frameworks, testing grep, bm25, grep plus bm25 fusion, llm chosen grep terms, and llm grep plus bm25. main takeaway: for coding agent retrieval, what you search for can matter more than how fancy your search backend is. link is in comments. peace out
vipli tweet media
English
4
1
20
1.2K
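The "grep plus bm25 fusion" variant in vipli's benchmark is typically implemented with something like reciprocal rank fusion: merge the two ranked file lists by summed reciprocal ranks. A minimal sketch; the file names and rankings below are made up, and this is one common fusion scheme, not necessarily the one vipli used:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: score each item by the sum of 1/(k + rank)
    over every ranking it appears in, then sort by total score."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical outputs of a grep pass and a BM25 pass over a repo.
grep_hits = ["src/auth.py", "src/utils.py", "tests/test_auth.py"]
bm25_hits = ["src/auth.py", "tests/test_auth.py", "src/cli.py"]

fused = rrf_fuse([grep_hits, bm25_hits])
print(fused[0])  # src/auth.py
```

Files ranked highly by both retrievers float to the top, which is exactly why fusion tends to beat either backend alone.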
DrewOnAI
DrewOnAI@Drew_OnAI·
@jahirsheikh8 this list is just a resume padding exercise. real engineers ship one stack, not twenty tools
English
0
0
0
30
Jahir Sheikh
Jahir Sheikh@jahirsheikh8·
📂 AI Engineer Stack
┣ 📂 Languages
┃ ┣ 📂 Python
┃ ┣ 📂 SQL
┃ ┗ 📂 Bash
┣ 📂 Core ML
┃ ┣ 📂 NumPy
┃ ┣ 📂 Pandas
┃ ┣ 📂 Scikit-learn
┃ ┗ 📂 XGBoost
┣ 📂 Deep Learning
┃ ┣ 📂 PyTorch
┃ ┣ 📂 TensorFlow
┃ ┣ 📂 JAX
┃ ┗ 📂 Keras
┣ 📂 LLM Frameworks
┃ ┣ 📂 LangChain
┃ ┣ 📂 LlamaIndex
┃ ┣ 📂 DSPy
┃ ┗ 📂 Haystack
┣ 📂 Vector DBs
┃ ┣ 📂 Pinecone
┃ ┣ 📂 Weaviate
┃ ┣ 📂 Qdrant
┃ ┣ 📂 Chroma
┃ ┗ 📂 FAISS
┣ 📂 Deployment
┃ ┣ 📂 BentoML
┃ ┣ 📂 vLLM
┃ ┣ 📂 TGI
┃ ┣ 📂 Modal
┃ ┗ 📂 Docker
┗ 📂 Monitoring
  ┣ 📂 Langfuse
  ┣ 📂 Helicone
  ┣ 📂 Weights & Biases
  ┗ 📂 MLflow
English
35
47
390
29.6K
DrewOnAI
DrewOnAI@Drew_OnAI·
@diti403 finally someone who admits 90% of ai gurus are just selling overcomplicated setups. most people skip the rules and wonder why their bot is
English
0
0
0
17
Diti Ara
Diti Ara@diti403·
Most people spend weeks "learning" AI. You only need 1 hour to set up a system that works for you.
Here's how to master Claude Cowork in just 60 minutes 👇
⏱️ The 1-hour breakdown:
🧩 10 min - Set Global Instructions
Define how Claude should think, write, and act
→ Your rules = consistent outputs forever
📁 10 min - Build your folder system
Create:
• ABOUT ME
• PROJECTS
• TEMPLATES
• OUTPUTS
→ Structure = less prompting, better results
🧠 15 min - Write .md files
• about-me.md (your voice & style)
• rules.md (how Claude should behave)
→ One solid file > 50 random prompts
🚀 10 min - Create a Cowork Project
Each project gets:
• its own memory
• custom instructions
• clean context
⚙️ 10 min - Plugins + Connectors
Automate workflows across tools
→ Slack, Gmail, Docs, etc.
⚡ 5 min - Run your first task
Prompt: "I want to [task]. Ask me questions first."
→ Let Claude plan before executing
💡 Pro tips most people miss:
• Use Extended Thinking (Opus 4.6) for complex tasks
• Give context as files, not long prompts
• Set read/write boundaries for better control
• Customize plugins for your workflow
📌 Result? Claude stops being a chatbot… and becomes your full-time AI operator
🔖 Save this setup
♻️ Repost to help others work smarter
➕ Follow @diti403 for more AI systems & workflows 🚀
Diti Ara tweet media
English
31
43
83
1.1K
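The folder system in the thread (ABOUT ME / PROJECTS / TEMPLATES / OUTPUTS plus the two .md files) is just directories and text files, so the setup can be scripted. A minimal Python sketch; the placeholder file contents are my own, not from the thread:

```python
from pathlib import Path
import tempfile

def scaffold(root: Path) -> None:
    """Create the four folders and two starter .md files from the thread."""
    for folder in ["ABOUT ME", "PROJECTS", "TEMPLATES", "OUTPUTS"]:
        (root / folder).mkdir(parents=True, exist_ok=True)
    # Placeholder contents; you'd fill these with your own voice and rules.
    (root / "about-me.md").write_text("# About me\nVoice: direct, concise.\n")
    (root / "rules.md").write_text("# Rules\n- Ask questions before executing.\n")

root = Path(tempfile.mkdtemp())
scaffold(root)
print(sorted(p.name for p in root.iterdir()))
```

Whether the structure actually yields "less prompting, better results" depends on what you put in those files, not on the folders themselves.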
DrewOnAI
DrewOnAI@Drew_OnAI·
REVISIONS EXPOSED.
> 381 scenes re-annotated by experts
> old benchmarks lied about spatial skills
> artifacts hid failures in plain sight
> truth hurts more than hype
stop trusting broken scores. x.com/HuggingPapers/…
DailyPapers@HuggingPapers

ReVSI rebuilds visual spatial intelligence evaluation Current benchmarks contain 3D annotation artifacts and assume full scene access. ReVSI re-annotates 381 scenes with expert labels and adapts ground truth to frame budgets, exposing critical failures hidden by prior evaluations.

English
0
0
0
12
DrewOnAI
DrewOnAI@Drew_OnAI·
@GoGoFly23 gpt 5.5 meshing a bullet hole? cool story for a demo, terrible for real engineering. i'd bet my paycheck it hallucinates the stress points
English
1
0
1
733
GoFly
GoFly@GoGoFly23·
Codex working with Abaqus to complete AI-driven CAE simulation: from static analysis of a cantilever beam, to modal analysis of a plate with a circular hole, to dynamic analysis of a bullet penetrating a steel plate, all of it can be done by Codex's GPT 5.5. Everything in the simulation, from building the 3D model, assembly, defining material properties, loads, and boundary conditions, to meshing, job submission, and post-processing, can be handed over to AI. #GPT5.5 #GPT #AI #CAE #CAD #Abaqus #Agent
GoFly tweet media
Chinese
7
90
609
40.9K
DrewOnAI
DrewOnAI@Drew_OnAI·
@codi_fyy another curated list of repos nobody will actually use. just paste the code and pray it doesn't break your build
English
0
0
0
127
CODIFY
CODIFY@codi_fyy·
Best GitHub repos for Claude code that will 10x your next project:
1. Superpowers github.com/obra/superpowe…
2. Awesome Claude Code github.com/hesreallyhim/a…
3. GSD (Get Shit Done) github.com/gsd-build/get-…
4. Claude Mem github.com/thedotmack/cla…
5. UI UX Pro Max github.com/nextlevelbuild…
6. n8n-MCP github.com/czlonkowski/n8…
7. Obsidian Skills github.com/kepano/obsidia…
8. LightRAG github.com/hkuds/lightrag
9. Everything Claude Code github.com/affaan-m/every
CODIFY tweet media
English
26
40
115
3.9K
DrewOnAI
DrewOnAI@Drew_OnAI·
SATA JUST SOLD OUT THE INBOX.
> agent mode auto triages mail
> reschedules meetings without asking
> you think this saves time?
> it just creates more noise
stop trusting bots with your calendar.
English
0
0
0
14
DrewOnAI
DrewOnAI@Drew_OnAI·
@saxxhii_ so does this save time or just turn the repo into a mess? gotta see it work before i trust my prompts again
English
0
0
1
13
Emily Watson | AI Tools & Tech News
Holy shit. Someone just leaked the Claude Code project template teams are quietly using.
This isn't prompting anymore. This is AI engineering infrastructure. ⚡
The entire setup revolves around one file: CLAUDE.md
Every time Claude makes a mistake → you add a rule
Every time you repeat yourself → you add a workflow
Every time something breaks → you add a guardrail
Claude literally trains itself on your project.
And the structure is wild:
• CLAUDE.md → project memory & instructions
• skills/ → reusable AI workflows
• hooks/ → automated checks & guardrails
• docs/ → architecture decisions
• src/ → actual code modules
• tools/ → scripts + prompts
You're not chatting with AI anymore. You're building an AI that knows your repo.
The craziest part? You only configure this once. After that Claude:
– reviews code automatically
– refactors on command
– enforces architecture rules
– writes release notes
– runs workflows from skills
– remembers past mistakes
And it keeps getting smarter.
Most people: open ChatGPT → write prompt → copy paste → repeat
This setup: open terminal → run skill → code shipped
You're basically running AI teammates inside your repo.
This template is the difference between:
• using Claude occasionally
• running Claude like infrastructure
Drop it in any project. Your AI stops guessing — and starts operating.
Emily Watson | AI Tools & Tech News tweet media
English
11
15
40
359
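The "hooks/ → automated checks & guardrails" idea from the template tweet is essentially a script that inspects proposed changes and rejects bad ones. A toy illustration of one such guardrail; the patterns, the diff format, and the hook mechanics are all my own stand-ins, not Claude Code's actual hook API:

```python
import re

# Hypothetical guardrail rules: reject diffs that add debug prints or secrets.
FORBIDDEN = [
    (re.compile(r"\bconsole\.log\("), "debug logging left in"),
    (re.compile(r"(?i)api[_-]?key\s*="), "possible hardcoded secret"),
]

def check_diff(added_lines: list[str]) -> list[str]:
    """Return one human-readable violation per offending added line."""
    problems = []
    for i, line in enumerate(added_lines, start=1):
        for pattern, why in FORBIDDEN:
            if pattern.search(line):
                problems.append(f"line {i}: {why}")
    return problems

diff = ['const x = 1;', 'console.log("debug");', 'API_KEY = "sk-..."']
for problem in check_diff(diff):
    print(problem)
```

The point of putting this in a hook rather than a prompt is that it runs deterministically on every change, whether or not the model remembers the rule.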
DrewOnAI
DrewOnAI@Drew_OnAI·
@ickma2311 neural architecture search is just a fancy way to say we can't stop overengineering everything until it runs out of battery
English
0
0
1
33
Chao Ma
Chao Ma@ickma2311·
Efficient AI Lecture 8: Neural Architecture Search (Part II)
Neural Architecture Search is the idea of letting machines search for neural network designs. Not just for accuracy. For real deployment, the model must also respect:
- latency
- memory
- energy
- hardware constraints
The lecture covers how to find the architecture that is accurate and fast on the target device.
My note: ickma2311.github.io/ML/HW-SW-codes…
Chao Ma tweet media
English
1
13
51
1.6K
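The deployment-aware search Chao Ma's lecture describes reduces, at its simplest, to constrained selection: discard candidate architectures that exceed the latency or memory budget, then pick the most accurate survivor. A toy sketch; the candidate names and numbers are invented, and real NAS explores a search space rather than a fixed list:

```python
candidates = [
    # (name, accuracy, latency_ms, memory_mb) -- made-up numbers
    ("big-net",  0.94, 120.0, 900),
    ("mid-net",  0.91,  45.0, 300),
    ("tiny-net", 0.86,  12.0,  80),
]

def pick(candidates, max_latency_ms, max_memory_mb):
    """Best accuracy subject to hardware budgets; None if nothing fits."""
    feasible = [c for c in candidates
                if c[2] <= max_latency_ms and c[3] <= max_memory_mb]
    return max(feasible, key=lambda c: c[1], default=None)

print(pick(candidates, max_latency_ms=50, max_memory_mb=400))
# -> ('mid-net', 0.91, 45.0, 300): big-net is more accurate but too slow
```

The interesting part of NAS is generating and scoring candidates efficiently; the budget check itself is this simple.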
DrewOnAI
DrewOnAI@Drew_OnAI·
@sukh_saroy so basically your whole workflow is busted and paying for a bigger model won't fix it.
English
0
0
0
27
Sukh Sroay
Sukh Sroay@sukh_saroy·
MICROSOFT AND SALESFORCE JUST PROVED THAT THE WAY YOU ACTUALLY USE CHATGPT IS THE WAY IT FAILS.

Not the dumb way. The normal way. The way you use it every single day.

The researchers ran 200,000+ simulated conversations across 15 of the top LLMs in the world. GPT-4.1. Claude 3.7 Sonnet. Gemini 2.5 Pro. DeepSeek-R1. o3. Every model people pay for. Every model people trust to get real work done.

They tested two scenarios. In the first, they handed the model the entire prompt at once. Every detail. Every constraint. Every requirement. One clean message. In the second, they fed the same exact information across multiple turns the way a real human asks questions. Same task. Same information. Just delivered differently.

Performance dropped 39% across the board. Not 5%. Not 10%. Thirty nine percent. A model getting 90% accuracy in a single clean prompt collapsed to ~60% the moment you talked to it like a human being. This wasn't one model. This was every model they tested.

The researchers gave the failure mode a name. They called it "getting lost in conversation." When the model takes a wrong turn early, it does not recover. It locks in the assumption it made in turn 2 and drags it through the entire conversation, no matter how much you correct it later.

The most unsettling number in the paper: Aptitude (raw capability) only dropped 15%. But UNRELIABILITY jumped 112%. The model didn't get dumber. It got wildly inconsistent. The gap between its best run and worst run on the SAME task could exceed 50 points. Same prompt. Same model. Different day. Completely different answer.

Here is what makes this scary. The researchers found that smarter models did not save you. GPT-4.1 and Gemini 2.5 Pro had slightly better multi-turn aptitude, but their unreliability scores were nearly identical to weaker open-source models. Spending more on a better model does not fix this.

They also tested whether the new "reasoning" models would solve it. o3 and DeepSeek-R1, the models with extra thinking time built in, degraded just as badly as the non-reasoning ones. More compute did not help. More thinking tokens did not help. The architecture itself is the problem.

The paper identified four behaviors driving the collapse: The model jumps to a full answer too early, before it has all the information. It locks in assumptions from turn 1 and refuses to update them. It loses track of what was said in the middle of the conversation. And it over-relies on its own previous responses instead of yours. Sound familiar? That's because you have experienced every single one of these and probably blamed yourself for it.

The real takeaway is uncomfortable. Every benchmark you have ever seen. Every "GPT-5 scored 95%" headline. Every model leaderboard that made you upgrade your subscription. All of it tested in single-turn, fully-specified prompts. The exact opposite of how you actually use the tool.

The paper's exact words: "LLMs often make assumptions in early turns and prematurely attempt to generate final solutions, on which they overly rely. In simpler terms, when LLMs take a wrong turn in a conversation, they get lost and do not recover."

The fix the researchers suggest is uncomfortable too. Stop having conversations. Write one complete prompt with every detail upfront. Or restart the chat the moment you notice the model going off track. Do not try to correct it mid-conversation. It will not listen.

Every time you go back and forth with ChatGPT to "refine" an answer, you are walking it deeper into the exact failure mode this paper documented. You are not collaborating with the AI. You are slowly making it worse.
Sukh Sroay tweet media
English
33
85
266
22.8K
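The aptitude/unreliability split described in the thread can be reproduced from per-run scores: aptitude comes from the top of the score distribution, unreliability from the gap between good and bad runs on the same task. The 90th/10th percentile choice below is my own approximation of the paper's definitions, and the score lists are invented for illustration:

```python
def aptitude(scores: list[float]) -> float:
    """Estimate capability from the better runs (top of the distribution)."""
    s = sorted(scores)
    return s[int(0.9 * (len(s) - 1))]

def unreliability(scores: list[float]) -> float:
    """Gap between a good run and a bad run on the same task."""
    s = sorted(scores)
    return s[int(0.9 * (len(s) - 1))] - s[int(0.1 * (len(s) - 1))]

# Invented per-run scores: tight single-turn runs vs scattered multi-turn runs.
single_turn = [88, 90, 91, 89, 92, 90, 91, 89, 90, 91]
multi_turn  = [40, 85, 55, 90, 35, 80, 60, 88, 45, 70]

print(aptitude(single_turn), unreliability(single_turn))
print(aptitude(multi_turn), unreliability(multi_turn))
```

On these made-up numbers, aptitude barely moves between the two settings while the unreliability gap explodes, which is the pattern the thread is describing: the model's best runs stay strong, but you can no longer count on getting one.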
DrewOnAI
DrewOnAI@Drew_OnAI·
STOP PLANNING YOUR ROADMAP.
> anthropic kills standups for demos
> side quest maxxing drives all features
> anyone ships code without approval
> dead ideas die fast, winners scale
stop waiting for permission to build.
English
0
0
0
17
DrewOnAI
DrewOnAI@Drew_OnAI·
@DAIEvolutionHub free courses don't fix broken business models. just bookmarking a list won't stop your startup from dying
English
0
0
0
16
Kshitij Mishra | AI & Tech
Kshitij Mishra | AI & Tech@DAIEvolutionHub·
Learn AI for free directly from top companies.
1 - Anthropic: anthropic.skilljar.com
2 - Google: grow.google/ai
3 - Meta: ai.meta.com/resources/
4 - NVIDIA: developer.nvidia.com/cuda
5 - Microsoft: learn.microsoft.com/en-us/training/
6 - OpenAI: academy.openai.com
7 - IBM: skillsbuild.org
8 - AWS: skillbuilder.aws
9 - DeepLearning.AI: deeplearning.ai
10 - Hugging Face: huggingface.co/learn
👇Comment "Learning" if you find this helpful. Repost so others can take help. Must bookmark for future reference.
Kshitij Mishra | AI & Tech tweet media
Kshitij Mishra | AI & Tech@DAIEvolutionHub

holy sh*t… i've been building businesses like an employee this whole time.
i thought the formula was simple: idea → work → repeat
but here's the trap: the moment you stop… everything stops.
that's exactly what kept happening. i'd start → get momentum → miss a few days → and it would slowly die.
not because it was bad. because i was holding everything together.
i wasn't building a business. i was building a system that depended on me.
last week, i flipped one question: not "what should i build?" but "how do i build something that runs without me?"
no hustle hack. no productivity trick. just removing myself from the equation.
so i set it up once. then stepped away. no micromanaging. no constant checking.
came back later… and it was still running. not surviving. running.
content going out
tasks getting done
decisions being handled
without me being there.
that's when it clicked: most people don't have businesses. they have jobs with extra stress.
if it needs you every day… it's not freedom. it's a dependency. and dependency doesn't scale.
the real move? stop being required.
i recorded exactly how i did it. watch this before you go build another job 👇

English
6
14
45
3.3K