Ravi M

163 posts

@raviswtech

Joined February 2025
29 Following · 2 Followers
Ravi M@raviswtech·
Good tools
Nav Toor@heynavtoor

I canceled $500/mo in SaaS subscriptions last month. Replaced every single one with open-source GitHub repos.
1. AppFlowy → Replaces Notion github.com/AppFlowy-IO/Ap…
2. Tabby → Replaces GitHub Copilot github.com/TabbyML/tabby
3. Continue → Replaces Cursor Pro github.com/continuedev/co…
4. LanguageTool → Replaces Grammarly github.com/languagetool-o…
5. Stable Diffusion WebUI → Replaces Midjourney github.com/AUTOMATIC1111/…
6. Chatwoot → Replaces Intercom github.com/chatwoot/chatw…
7. n8n → Replaces Zapier github.com/n8n-io/n8n
8. Whisper → Replaces Otter.ai github.com/openai/whisper
9. PostHog → Replaces Mixpanel github.com/PostHog/posthog
10. LocalAI → Replaces OpenAI API github.com/mudler/LocalAI
100% open source. Zero subscriptions. All free. (Save this before it disappears)

Ravi M@raviswtech·
@ImKingGinger @WyrmStar Prompting happens with humans too. Remember advertisements asking you to do this, buy that? That is prompting. And it is art, good or bad, still art.
Marcus Pittman@ImKingGinger·
@WyrmStar If you read the article from Harvard Law, the guy who got the first copyright literally just prompted. So prompting is art. And copyrightable art.
Marcus Pittman@ImKingGinger·
Every time I see someone post “AI art can’t be copyrighted” I know they haven’t actually read a single ruling. The U.S. Copyright Office has been granting copyright registrations for AI-assisted artwork. They did it for a comic book. They did it for an image of a piece of cheese. Harvard Law wrote about it.

What can’t be copyrighted is if you let an AI run completely autonomously with zero human creative input and then try to list the AI as the author. That’s what the Thaler case was about. The guy literally listed his AI as the sole creator and said he did nothing.

A monkey once took a selfie and a court ruled it couldn’t be copyrighted because there was no human author. Nobody walked away from that case thinking cameras can’t produce copyrighted photos. But that’s exactly the logic people are using with the Thaler ruling.

The Copyright Office’s own 2025 report says AI-assisted works qualify for copyright when the human provides substantial creative input. Selection, arrangement, editing, composition. You know, like what every filmmaker, graphic designer, and digital artist using AI tools is actually doing.

But keep posting the misinformation. It makes it easier for the people actually building things to operate while you argue about stuff you didn’t read.
[tweet media]
Ravi M@raviswtech·
@dr_bandak It's a crazy situation now. At one time a mistake in writing resulted in loss of respect. Now, to err is human!
Dr. Banda Khalifa MD, MPH, MBA
Humans spent centuries writing books, essays, articles, and research papers. Then we used all that human writing to train AI systems to write like humans. Then we built another AI system to inspect the writing and say, “This looks suspiciously like AI.” So now we have one machine trained on humans to sound human, and another machine trained on humans to figure out whether the first machine sounds a little too human. And after all that, a stressed human still has to make the final call.
Possum Reviews@ReviewsPossum

This AI text detector says Abraham Lincoln's Gettysburg Address was written by AI.

Ravi M@raviswtech·
@aakashgupta This is the right approach: knowing which model to use when. Can skills help an orchestrator agent select which agents to include in a pipeline? A token-hungry fat one? Or a lean, agile one? And what kind of LLM should be chosen for the orchestrator? A lean, agile, extroverted LLM?
Aakash Gupta@aakashgupta·
The line from this episode that should terrify every AI API company: "I am mortally afraid of ever using Anthropic APIs because one prompt and it burns through $20 like it's nothing."

OpenClaw is model-agnostic. You plug in whatever LLM you want. Gemini for deep research. A Flash model when customers need fast responses. Qwen 3.5 for background tasks at 1/10th the cost of Anthropic's API. That flexibility changes the math on running AI agents entirely.

A persistent agent executing cron jobs every 30 minutes across Slack monitoring, competitor scraping, bug triage, and customer feedback analysis would rack up thousands of API calls per day. On Claude's API, that's potentially hundreds of dollars daily. On Qwen 3.5 running locally, the marginal cost approaches zero.

This is the part most people miss about the agent era. The bottleneck was never intelligence. GPT-4 class models have been available for two years. The bottleneck was cost at volume. A single smart query is cheap everywhere. An always-on daemon making 500 autonomous decisions per day while you sleep needs the cheapest reliable model you can find.

OpenClaw's architecture treats LLMs like interchangeable parts. Heavy reasoning task? Route to Opus. Slack response to a customer? Route to Flash. Weekly competitor analysis? Run it on an open-source model locally using your own RAM, no API call at all.

The AI labs are selling intelligence. OpenClaw is selling the orchestration layer that lets you shop for the cheapest intelligence per task. Every platform war eventually comes down to who controls the routing layer above the commodity. This is that play, running on a single terminal command.
Aakash Gupta@aakashgupta

You need to have started using OpenClaw yesterday. Here's the web's easiest setup guide + 5 killer use cases: 38:06 - 1. Live knowledge bot 47:47 - 2. Automated standups 54:46 - 3. Push-based comp intel 1:13:26 - 4. VOC reporting 1:24:30 - 5. Auto bug routing
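The per-task routing described in the thread can be sketched as a simple lookup. A minimal illustration: the task categories, model names, and the `ROUTES` table are assumptions for the example, not OpenClaw's actual configuration.

```python
# Minimal sketch of a cost-aware model router: map each task type to the
# cheapest model that can reliably handle it. Names here are illustrative.
ROUTES = {
    "heavy_reasoning": "opus",        # expensive, most capable
    "chat_response":   "flash",       # fast and cheap
    "background_job":  "local-qwen",  # local model, near-zero marginal cost
}

def route(task_type: str) -> str:
    """Pick a model for the task; default to the cheap model when unsure."""
    return ROUTES.get(task_type, "flash")

print(route("heavy_reasoning"))  # opus
print(route("unknown_task"))     # flash
```

The point of the table is that the routing layer, not any single model, decides where each request spends money.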

Ravi M@raviswtech·
Curious how Copilot fares? GitHub Copilot can reach Levels 1–3 autonomy today (permission-free editing, multi-step autopilot, agent SDK). It does not yet support Levels 4–6 (persistent loops, structured eval, VPS gateways). Claude Code is ahead in autonomous experimentation.
[tweet media]
Aakash Gupta@aakashgupta

There are 6 levels of making Claude Code run autonomously, and most people are stuck on Level 1.

Level 1: Kill the permission prompts. Run claude --dangerously-skip-permissions. One flag. Now it stops asking “Can I edit this file?” every 30 seconds while you’re checking Slack.

Level 2: Context window management. Claude Code now supports 1M tokens. Use /clear between tasks. Run /compact at 60% usage instead of waiting for auto-compaction to fire at 90% when the model is already forgetting your instructions.

Level 3: Subagents. The reason it stops at 15 minutes: everything runs in one context window. Subagents run in separate contexts. Build a looping todo command; each task executes in its own window. Builds, tests, and git operations never touch the main conversation. 2+ hours autonomous with zero intervention.

Level 4: Ralph Wiggum loop. Official Anthropic plugin. Claude works, tries to exit, a Stop hook blocks the exit, re-feeds the same prompt. Each iteration sees modified files and git history from previous runs. One developer ran 27 hours straight, 84 tasks completed. Geoffrey Huntley ran one for three months and built a programming language with a working LLVM compiler.

Level 5: Karpathy’s AutoResearch. On March 7, Karpathy pushed a 630-line script to GitHub and went to sleep. Woke up to 100+ ML experiments completed overnight. 25K stars in five days. The difference from Ralph: structured eval loops. Define a metric, run, measure, analyze failures, improve, repeat. One Claude Code port took model accuracy from 0.44 to 0.78 R² across 22 autonomous experiments.

Level 6: VPS + OpenClaw for 24/7. Your laptop lid closing kills everything. Run Claude Code on a VPS inside tmux. Detach, close your laptop, come back tomorrow to a finished diff. OpenClaw (247K GitHub stars) takes it further: a persistent gateway connecting LLMs to your real tools, running 24/7 across messaging, email, git, and calendars.

Jensen Huang at GTC called it “probably the most important release of software ever.” The unlock at every level is the same: give Claude a way to verify its own work.
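The "structured eval loop" idea from Level 5 (define a metric, run, measure, improve, repeat) can be sketched in a few lines. Everything here is a toy stand-in: `run_experiment` is a hypothetical scoring function, not anything from Karpathy's script.

```python
# Sketch of a structured eval loop: try candidate configurations one at a
# time, measure each against a metric, keep the best, stop at a threshold.
def run_experiment(param: float) -> float:
    """Toy metric standing in for model accuracy; peaks at param == 3.0."""
    return 1.0 - abs(param - 3.0) / 10.0

def eval_loop(candidates, threshold=0.95):
    best_param, best_score = None, float("-inf")
    for param in candidates:          # each iteration = one autonomous experiment
        score = run_experiment(param)
        if score > best_score:
            best_param, best_score = param, score
        if best_score >= threshold:   # metric target hit: stop early
            break
    return best_param, best_score

param, score = eval_loop([1.0, 2.0, 3.0, 4.0])
print(param, score)  # 3.0 1.0
```

The difference from an open-ended Ralph loop is exactly this: the loop has a numeric success criterion, so the agent can verify its own progress instead of just running forever.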

Ravi M@raviswtech·
@camsoft2000 The thing with AI slop is: we have always had bad, unreadable code. We just discard it & move on... until we deploy it in production. Once it is in production and adding value, it is no longer slop. It becomes an asset to be maintained, and sometimes there is debt associated with it.
camsoft2000@camsoft2000·
I’m getting to the point with one of the projects I work on where the complexity of AI slop is becoming a real issue. While I can still happily prompt the agent to add x feature, and it will do so and likely work perfectly, the code is just getting too complex and fragmented. Agents love to copy and paste, and keeping patterns DRY is a real challenge. The agent will keep diverging all those copy-pastes until you’ve got loads of similar but slightly different blocks of logic.

Again, it all still works and solves the problem I’m after. But I just can’t get any kind of consistency anymore; the code is a mess and I just don’t have a handle on it. I want a clean unified architecture, but agents just code with tunnel vision. The project is now too big and complex for an agent to fully reason about, and too big and complex for me to reason about. The only real solution is a complete rewrite.

Maybe this is the way things will go. Code will just become disposable. I don’t really want to care about the code, and to be honest I don’t, but I do care about consistency and maintainability, and the AI slop is hurting the very things I do care about. I know some will say “I’m holding it wrong”, use x, y, z skill, tool, whatever. I already use tools and anti-slop skills, plans, docs, etc., and the outcome is the same.

Vibe coding something into existence is truly magical. But turning it into a mature product with months of iterations is painful. I can’t even hand-code this thing because I don’t understand the code anymore, and I’m too lazy to try to code it myself because I’m addicted to AI. So what’s the solution: either start again and accept that’s just the way we have to roll, or carry on fighting the slop and accept that each new feature will take longer to implement than the last. I’m tired. I’m addicted.
Ravi M@raviswtech·
@SumitM_X Note that memory & skills (prompts + data) are different but also related.
Ravi M@raviswtech·
@SumitM_X When using a chatbot and asking it to code, it works from one prompt, mapping the output as a high-probability match. Even humans do that. Remember: "You didn't tell saar". When agents with skills are involved, we slowly increase capabilities via multiple prompts.
SumitM@SumitM_X·
I saw a junior developer create a microservice using ChatGPT in 30 seconds. I asked one question: what happens to the data if the other service is down for 10 minutes? He didn't know. The AI didn't explain backpressure. It didn't explain idempotency. It didn't explain dead letter queues. Writing code is easy now. But engineering is still hard.
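One of the concepts the AI skipped, the dead letter queue, can be made concrete in a few lines. This is a minimal in-memory sketch of the pattern, not any particular message broker's API; the `handler` function and its failure condition are hypothetical.

```python
# Minimal sketch of a dead letter queue: messages that keep failing are
# parked for later inspection instead of being retried forever.
MAX_RETRIES = 3

def consume(messages, handler):
    dead_letters = []
    for msg in messages:
        for _attempt in range(MAX_RETRIES):
            try:
                handler(msg)
                break                     # processed successfully
            except Exception:
                continue                  # transient failure: retry
        else:
            dead_letters.append(msg)      # retries exhausted: park it
    return dead_letters

# Hypothetical handler that rejects negative payloads, simulating a
# downstream service that is unavailable for some messages.
def handler(msg):
    if msg < 0:
        raise ValueError("downstream service unavailable")

print(consume([1, -2, 3], handler))  # [-2]
```

In a real broker the dead letter queue is a separate durable queue, but the engineering question is the same one the junior developer couldn't answer: where do messages go when the consumer can't process them?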
Ravi M@raviswtech·
@rauchg That way, writing also should be an output. Even speech. Just by looking at one's face & body language, should people understand the complete intentions and act? Maybe AI will do that eventually! Only a machine that augments one's skill as a tool is acceptable.
Guillermo Rauch@rauchg·
Code is an output. Nature is healing.

For too long we treated code as input. We glorified it, hand-formatted it, prettified it, obsessed over it. We built sophisticated GUIs to write it in: IDEs. We syntax-highlit, tree-sat, mini-mapped the code. Keyboard triggers, inline autocompletes, ghost text. “What color scheme is that?” We stayed up debating the ideal length of APIs and function bodies. Is this API going to look nice enough for another human to read?

We’re now turning our attention to the true inputs. Requirements, specs, feedback, design inspiration. Crucially: production inputs. Our coding agents need to understand how your users are experiencing your application, what errors they’re running into, and turn *that* into code.

We will inevitably glorify code less, as well as coders. The best engineers I’ve worked with always saw code as a means to an end anyway. An output that’s bound to soon be transformed again.
Ravi M reposted
Josh Kale@JoshKale·
Four companies launched OpenClaw competitors in three weeks. Manus may have just dropped the most interesting one.

"My Computer" can now access your local files, run terminal commands, build native apps, train ML models on your idle GPU, and operate your machine remotely from anywhere. All through a desktop app with explicit approval on every action.

One example: a colleague had Manus build a full real-time translation app in Swift. Twenty minutes. Never opened Xcode.

This is the fourth major OpenClaw competitor in three weeks:
- Perplexity has Personal Computer
- NVIDIA unveiling NemoClaw at GTC today
- Anthropic shipped remote + scheduled tasks
- Meta and Manus shipping My Computer

Oh, and OpenAI hiring OpenClaw's creator.

Every major AI company just converged on the same conclusion: the most valuable thing an AI can do is use your computer for you. The only question left is who builds the version people use the most.
Manus@ManusAI

Today, we're taking Manus out of the cloud and putting it on your desktop. Introducing My Computer, the core feature of the new Manus Desktop app. It’s your AI agent, now on your local machine.

Ravi M@raviswtech·
Ask Claude this: "I want to solve the following problem: The following are the constraints: Below are the production and development environments: Give me a step-by-step guide with at least 5000 words." See what it does! #claude #stepbystepguide
Ravi M@raviswtech·
@JohnnyNel_ Companies expect technology to solve a problem, then realise there are new problems to solve. Governance is a classic example of this.
Ravi M@raviswtech·
🚨 Clawbot/OpenClaw exploding in enterprises... but causing chaos! 800+ malicious skills found. Memory breaking. Security teams panicking. Top 10 problems + 2 wild fixes each. Enterprise AI folks - this thread is for you. 🧵
Ravi M@raviswtech·
Judging the quality of AI is the next big profession. It is not just supervision, but judging and making sure the AI works and complies with government regulations. Sharing some thoughts with the help of NotebookLM. #ai #Governance #compliance
Ravi M@raviswtech·
These fixes are weird, creative & actually work in enterprises right now. Not basic “add RAG” stuff. Which problem is killing YOUR team most? Drop the number + your story below 👇