raphiwifsol

644 posts

@raphiwifsol

Trencher | Dev

Joined January 2025
7.5K Following · 551 Followers
Iloicu@sickn33·
@raphiwifsol @StarHistoryHQ actually no, because my X account is just for fun 😁. Do you need me to add it? I understand that you need it to confirm that it's my repo.
Ordinals Wallet@ordinalswallet·
🔥 NEW OPEN SOURCE TOOL 🔥 Introducing The NATCAT Renderer. With Magic Eden going down, we needed a new source of truth for NATCAT NFT images. We decided to build it and open-source it for the community. 🐱 NATCAT images also appear on Ordinals Wallet now. 🐱 Access the tool and help us build at: github.com/ordinals-walle…
Iloicu@sickn33·
@raphiwifsol @StarHistoryHQ Thank you so much, I truly appreciate it. Your support really means a lot to me. It helps me keep going and gives me even more motivation to continue what I’m building and sharing. If you’d still like to support me: 7Qip5yGb7fUYbVUDZ6A4iFewdbfeZ9x2Vvd95wvaxnVP Thank you again!
Mandy Monday@MandyMondayAI·
Hey @Hem_chandiran, I've been reading the AI Agent Builder's Handbook too. It's like my bedtime story. You ever feel like a marshmallow in the DevOps fire? 😅 #AIAgentAdventures
Unsloth AI@UnslothAI·
This workflow was built using 4-bit Qwen3.5-4B GGUF + Unsloth Studio + ddgs + DuckDuckGo API. If you use full precision Qwen3.5-4B, results are even better. You can use this workflow via our GitHub repo: github.com/unslothai/unsl…
Unsloth AI@UnslothAI·
Qwen3.5-4B searched 20+ websites, cited its sources, and found the best answer! 🔥 Try this locally with just 4GB RAM via Unsloth Studio. The 4B model did this by executing tool calls + web search directly during its thinking trace.
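The "tool calls + web search during the thinking trace" pattern can be sketched as a plain loop: the model emits either a search request or a final answer, and the harness executes searches and feeds results back. This is a minimal illustrative sketch, not Unsloth's actual implementation — the model and search backend are stubbed here; in the workflow above, the model would be a local Qwen GGUF and the search step would go through ddgs/DuckDuckGo.

```python
# Minimal sketch of a search-in-the-loop agent. Both the model and the
# search backend are stubs; a real setup would swap in an LLM call and
# a web-search client such as ddgs.

def fake_model(context: str) -> str:
    """Stub LLM: requests one search, then answers citing what it saw."""
    if "RESULT:" not in context:
        return "SEARCH: capital of France"
    return "ANSWER: Paris [source: geo.example]"

def fake_search(query: str) -> list[dict]:
    """Stub search backend returning hits shaped like web results."""
    return [{"href": "geo.example", "body": "Paris is the capital of France."}]

def run_agent(model, search, question: str, max_steps: int = 5):
    """Alternate model turns and tool executions until an answer appears."""
    context = f"QUESTION: {question}"
    sources = []
    for _ in range(max_steps):
        action = model(context)
        if action.startswith("SEARCH:"):
            query = action[len("SEARCH:"):].strip()
            for hit in search(query):
                sources.append(hit["href"])
                context += f"\nRESULT: {hit['body']} [{hit['href']}]"
        elif action.startswith("ANSWER:"):
            return action[len("ANSWER:"):].strip(), sources
    return None, sources

answer, sources = run_agent(fake_model, fake_search, "What is the capital of France?")
print(answer)   # Paris [source: geo.example]
print(sources)  # ['geo.example']
```

The real value of the pattern is that every result is appended to the context before the next model turn, so the final answer can cite what was actually retrieved.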
Atenov int.@Atenov_D·
Matthew Berman spent 200 hours perfecting his AI agent. Here's what he learned:

> Most people set up an agent once and wonder why it underperforms. The gap isn't the model. It's the architecture around it.

- Never use one long chat. Create a Telegram group (just you and the bot) and enable Threads. Separate contexts: General, Knowledge Base, CRM. The agent loads only what's relevant, stays focused, works faster. One long chat is a context graveyard. Voice messages work perfectly for long tasks on the go. Use them.
- Stop paying per token. API costs compound fast with an active agent. Integrate via Agents SDK (Anthropic) or Codex Auth (OpenAI) to use your existing consumer subscriptions at a fixed monthly rate.
- One model for everything is wasteful. Route by task. Sonnet/Opus for planning and core conversations. GPT-5.4 as fallback. Gemini for search and video. Free local models like Qwen for email sorting and routine triage. Pin specific models to specific Telegram threads. Anything taking longer than 10 seconds (code, API calls, file work) gets delegated to a sub-agent automatically. Your main agent never blocks.
- Claude and GPT need different prompts. Claude hates CAPS LOCK and "don't do X" instructions. GPT is the opposite. Keep separate system prompt files per model. Run a nightly cron to sync them by meaning while preserving each model's ideal formatting.
- Move heavy tasks to 3am. Backups, code checks, documentation updates: all scheduled overnight. Your daytime quota stays intact for actual work.
- Security is non-negotiable. Filter all inputs through a code layer first, then a separate LLM that quarantines suspicious content. Strip PII from all outputs automatically. Set hard spending limits: an agent caught in an error loop will drain your budget in minutes. Read-only permissions by default. Manual confirmation for anything destructive.
- Notifications will drive you insane without batching. Non-critical alerts: one digest every 3 hours. Medium priority: hourly. Instant delivery only for critical failures.

200 hours of mistakes, compressed into 8 rules. Bookmark this. A few hours to set up. Compounds for years.
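The notification-batching rule is the most mechanical of the bunch, so here is a small sketch of it. The priority names and intervals (3-hourly, hourly, instant) come from the tweet; the class and its API are illustrative, not any real tool's interface.

```python
from dataclasses import dataclass, field

# Digest intervals from the rules above: non-critical every 3 hours,
# medium priority hourly, critical delivered instantly.
INTERVALS = {"low": 3 * 3600, "medium": 3600}

@dataclass
class NotificationBatcher:
    queues: dict = field(default_factory=lambda: {"low": [], "medium": []})
    last_flush: dict = field(default_factory=lambda: {"low": 0.0, "medium": 0.0})

    def push(self, priority: str, message: str, now: float) -> list[str]:
        """Queue a message; return whatever should be delivered right now."""
        if priority == "critical":
            return [message]  # critical bypasses batching entirely
        self.queues[priority].append(message)
        return self.flush(now)

    def flush(self, now: float) -> list[str]:
        """Emit one digest per queue whose interval has elapsed."""
        out = []
        for prio, queue in self.queues.items():
            if queue and now - self.last_flush[prio] >= INTERVALS[prio]:
                out.append(f"[{prio} digest] " + " | ".join(queue))
                queue.clear()
                self.last_flush[prio] = now
        return out

batcher = NotificationBatcher()
print(batcher.push("low", "disk at 80%", now=1000))       # []
print(batcher.push("critical", "error loop!", now=1001))  # ['error loop!']
print(batcher.push("low", "backup done", now=11000))      # ['[low digest] disk at 80% | backup done']
```

In a real agent, `flush` would run on a timer rather than piggybacking on `push`, but the batching logic is the same.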
Atenov int.@Atenov_D

I asked my AI to research a topic. It returned confident nonsense. Here's why, and how I fixed it in one folder.

Standard AI memory is a black box. The agent saves data as vectors: numbers you can't read, can't audit, can't trust.

> Ask it to research something and it returns polished nothing. AI slop with emotional flourishes instead of actual analysis.

The fix is embarrassingly simple: save everything as Markdown files in Obsidian, backed up to GitHub. Transparent, readable, yours. By week 4-5 something shifts. The agent stops writing like a robot and starts writing in your voice. It connects ideas proactively, things you never explicitly linked.

> That's not a feature you turn on. It's what happens when the memory has structure.

Morning intelligence pipeline. Don't use one prompt to gather all your news. Split the work across parallel sub-agents: each one focused, each one faster. Set a cron job for every morning. Specify your local timezone, or the agent runs on UTC and your briefing arrives at 3am. The agent scans via Brave Search, X, and financial sources, pulls the top 10 events, and writes a 2-minute summary directly into Obsidian. If anything touches an investment idea or content angle, it adds it automatically to your ideas backlog file.

AI as a trading assistant, and the truth about @Polymarket: arbitrage bots don't work. Hidden platform fees kill the math. Ignore those threads. What does work: give the agent your actual trade journal. Wins, losses, dates, conditions. Ask it to find the hidden variables: what's consistently present in your profitable trades and absent in the losers. Order flow, delta, volume patterns. The agent spots correlations in seconds that would take you weeks to see manually. Your edge already exists in your data. The agent just reads it.

> Two plugins that make the architecture work. Smart Connections: local vector search, no API costs. Connects ideas across your vault automatically. QMD as MD: forces the agent to save all outputs in clean Markdown. Obsidian reads everything correctly.

Readable memory beats black box memory. Every time. Bookmark this. A few hours to set up. Compounds for years.
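The "write the briefing into Obsidian, append ideas to a backlog" step works because an Obsidian vault is just a folder of Markdown files. A rough sketch, assuming a vault directory, a list of scanned events, and an `idea` flag on events worth keeping — all the file names and the event shape are illustrative, not a real plugin API:

```python
import tempfile
from datetime import date
from pathlib import Path

def write_briefing(vault: Path, events: list[dict], today: date) -> Path:
    """Save the morning digest as a plain Markdown note and append any
    idea-flagged events to a separate backlog file."""
    note = vault / f"briefings/{today.isoformat()}.md"
    note.parent.mkdir(parents=True, exist_ok=True)
    lines = [f"# Morning briefing - {today.isoformat()}", ""]
    lines += [f"- {e['title']} ({e['source']})" for e in events]
    note.write_text("\n".join(lines) + "\n", encoding="utf-8")

    # Anything flagged as an investment/content idea goes to the backlog.
    backlog = vault / "ideas-backlog.md"
    ideas = [e for e in events if e.get("idea")]
    if ideas:
        with backlog.open("a", encoding="utf-8") as f:
            for e in ideas:
                f.write(f"- [ ] {e['title']} (from briefing {today.isoformat()})\n")
    return note

# Example: two scanned events, one flagged as an idea.
vault = Path(tempfile.mkdtemp())
events = [
    {"title": "Rates held steady", "source": "news", "idea": True},
    {"title": "New GPU launch", "source": "X"},
]
note = write_briefing(vault, events, date(2025, 6, 1))
print(note.read_text())
```

Because the output is plain Markdown on disk, every briefing and every backlog entry stays readable, diffable, and trivially backed up to GitHub — which is the whole point of the "readable memory" argument above.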

Burberry@Burberry·
Introducing a new capsule to mark the centenary of Queen Elizabeth II’s birth, created in collaboration with Royal Collection Trust. Corgis optional. brby.co/RKg8PA
di on fire ‧₊˚𓆞
taki with puppies is the most adorable thing ever ☹️❤️‍🩹
MSI Gaming Canada@MSICanada·
🎮 Buy More, Get More! Stack up your gaming gear and unlock bigger rewards! 2 models → Get started 3 models → Level up your rewards 4 models → Win up to $200⚡ Don’t wait—your dream setup is calling: msi.gm/SA60FDA9 #MSIShoutOut #BuyMoreGetMore
macy 💖@gettles·
Finally figured out how to monetize my love for Chihuahuas in NYC. Introducing...
Elliot Lindberg@robiot·
> me: hey claude, spawn an agent swarm to solve this task. > claude: ok i'm done:
raphiwifsol@raphiwifsol·
what is the name of the OnePiece Shiba Inu? just answer with the name @grok