Hvx Crypto

3.6K posts

@hvxcrypto

Let's explore the crypto jungle together.

Joined August 2023
416 Following · 333 Followers
Hvx Crypto@hvxcrypto·
@funkii Really crazy stuff, man… incredibly useful
0
0
0
13
Michael Kove
High taxes are killing European tech. Say you want to hire a $100k engineer for your business. In the US, you need to allocate $116,200 to payroll (including healthcare); in Germany, $120,000. But it gets worse with personal income tax: the American employee will take home $79,180, the German $58,300. Both have healthcare coverage, so the "free healthcare" argument is invalid. About 25% of that money goes to the US federal government; about 50% goes to German bureaucrats. The German will also pay 19% VAT on almost all goods purchased with that $58k. What's the incentive for businesses to operate in Europe? What's the incentive for talent to stay in Europe? Until this is fixed, US tech is going to leapfrog EU tech.
Michael Kove tweet media
15
4
27
5.2K
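The payroll arithmetic in the post above can be sketched as a quick calculation. The overhead and effective-tax rates below are back-derived from the post's own figures, not official tax-table numbers:

```python
def hiring_cost_and_take_home(gross, employer_overhead_rate, effective_tax_rate):
    """Total payroll cost to the employer and the employee's net pay."""
    employer_cost = gross * (1 + employer_overhead_rate)
    take_home = gross * (1 - effective_tax_rate)
    return employer_cost, take_home

# Rates implied by the post's figures (assumptions, not verified):
# US: 16.2% employer overhead, ~20.8% effective tax -> $116,200 cost, $79,180 net
# DE: 20.0% employer overhead, ~41.7% effective tax -> $120,000 cost, $58,300 net
us_cost, us_net = hiring_cost_and_take_home(100_000, 0.162, 0.2082)
de_cost, de_net = hiring_cost_and_take_home(100_000, 0.20, 0.417)
```

The gap the post highlights is the combination of both rates: the German employer pays slightly more while the German employee nets roughly $21k less.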
Srinath M@srinathmad·
@ZssBecker Completely agree. I run a code-review skill from different sessions on the same code multiple times and ask it to document the tech debt, and it's always growing 😅. Not sure what the right way to fix it is.
4
0
2
992
Alex Becker 🍊🏆🥇
I vibe code every day. I have a team of 30+ engineers. We spend F-tons of credits. And I will tell you this about AI from my experience: it's being wildly overhyped. Everyone is drunk. Fucking drunk. All the CEOs and Gen Z's saying coding is dead are idiots. IDIOTS.
737
377
7K
476.9K
Hvx Crypto@hvxcrypto·
@ZssBecker The advantage is not in the LLM but in the dev methodology and the right architecture
0
0
0
102
Alex Becker 🍊🏆🥇
Please re-read this before getting grumpy. I actively vibe-code apps for 5,000+ customers at my SaaS companies (one at ~$40M ARR). I built and now maintain 3-4 apps/core parts of our business with it. I am a turbo Claude Code user. I'm not saying it's not useful. I'm saying that despite being incredibly useful, it's also stupid as hell. Even with extreme guardrails and constant watching, it's a competent junior dev who does whatever it wants, and you have to watch it like a hawk. At some point making code fast is NOT an advantage, and if you're using Claude/Codex to push and review its own code... you're actually an insane person. LLMs are amazing. The CEOs and vibe-zers are also drunk from the models telling them how smart they are 24/7. Anyone with even a hint of dev experience can crack open the code and see the endless tech debt piling up. It's nowhere near where it needs to be to do what people are claiming, and each release is only like 10% better.
80
37
787
67.7K
Hvx Crypto@hvxcrypto·
@dotta Which tool did you use to create the video?
0
0
0
11
dotta 📎@dotta·
How I made this video in ten minutes:
- I opened a task asking my Paperclip CEO to hire a video editor and give her the "remotion-best-practices" skill
- I asked our CMO agent to write the script; I asked for one revision
- We already had a brand guide for Paperclip here: paperclip.ing/brand
- I asked the new video editor hire to make the video using the script and brand guide; I asked for one revision
Done. Tbh there are a lot of details I would like to make better, but that will come with time, because every task in Paperclip is tracked and we learn from the iterations to make the skills better for next time. Honestly, making this video would have taken me a week before, and now it's almost an afterthought. Looks pretty good, too.

dotta 📎@dotta
Announcing companies.sh - the open standard for Agent Companies. Import and run entire companies with a single command. Just run `npx companies.sh add <repo/company>` More 👇
27
18
413
48.2K
Arpit Bhayani@arpit_bhayani·
With agentic slop, we are trading software reliability for shipping velocity and calling it progress. It isn't. Systems are more fragile than ever, and engineers building them no longer trust their code to hold up in real-world edge cases. I am pro-AI, but this will backfire - big time.
152
187
2.2K
87.3K
Santiago@svpino·
Last year, I met a person who has never written a single line of code in his life, yet he feels he can build anything he wants. He told me point-blank: "I challenge you to tell me something I can't build using AI." I tried to explain, but I couldn't find the right words. The most fascinating aspect of vibe-coding is how it has convinced so many people to believe they are better and more capable than they really are.
688
73
1.4K
228.8K
Hvx Crypto@hvxcrypto·
@OmriBuilds An iteration of theory and massive practice. Loop it.
0
0
0
17
Omri Dan@OmriBuilds·
What's the fastest way to level up your coding skills in 2026?
35
0
24
2.7K
satsmonkes@satsmonkes·
Daily DLMM grind @MeteoraAG I always had this question in my head: could I be profitable in DLMM during a bearish phase? Well, I think I got my answer, and I recommend everyone take time to sharpen their skills during these market phases. 🖨️
satsmonkes tweet media
14
8
91
4.5K
Alex Yumashev@jitbit·
I've been mostly shitting on AI these days (also sick of the "built 15 apps in 15 minutes" crowd), but Claude just did something wild for me: a project that would otherwise take MONTHS. Had it help me rip out our entire search engine (we're talking millions and millions of records) and migrate it from SQL "full-text search" to a small embedded, in-process Lucene port (Lucene is what powers Elasticsearch under the hood). Our app has thousands of tenants with millions of tickets - the search went from 7-8 seconds down to fucking milliseconds. The rewrite was the easy part, though. The real work was all the one-off CLI tooling for index rebuilding, compaction, deduplication, gradual deploy... literally dozens of tools. Also prep work, tech-stack decisions, risk analysis, benchmarking, planning... The reindexing job itself ran for 39 HOURS STRAIGHT, reporting nice progress graphs and auto-fixing errors as it went. It just finished and everything checks out. Not gonna turn into one of those AI evangelists, but Claude saved me weeks of the most tedious infrastructure grind imaginable. And I've been trying to approach this project for literally YEARS.
28
16
328
26.3K
Hvx Crypto@hvxcrypto·
@synopsi Which eval framework are you using? Did you write the golden records yourself?
1
0
0
2K
Rasty Turek@synopsi·
The way I work with coding agents changed significantly in the last year.
Started: plan -> implement -> review -> fix
Later: prod spec -> plan ...
Then: prod spec -> ... -> eval
Now: evals -> prod spec -> ...
I now essentially spend 90% of my time working on evals. The difference this makes is indescribable. Almost all code works immediately, the design is close to perfect, the text is almost there. It takes very little to get it to usable. The stronger and clearer the guardrails I give the coding agent, the better it does. And when I start with them, it writes incredibly clear specs and requirements that are super easy to follow and leave very little room for interpretation. I also try to avoid being overly specific directly. I noticed that when I write the product spec manually, the agent does worse than when it writes it itself. It uses language I wouldn't necessarily use myself. And that makes all the difference.
63
21
684
87.6K
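The evals-first workflow described above can be reduced to a minimal golden-record harness. This is a sketch of the general pattern, not @synopsi's actual setup; `golden`, `fake_agent`, and the exact-match scoring rule are all illustrative placeholders:

```python
# Minimal golden-record eval harness (sketch; not any specific framework).
# Each golden record pairs an input with the output we expect the system to produce.
golden = [
    {"input": "2+2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

def fake_agent(prompt: str) -> str:
    """Stand-in for the system under test (an LLM/agent call in real use)."""
    return {"2+2": "4", "capital of France": "Paris"}.get(prompt, "")

def run_evals(agent, records):
    """Score the agent against every golden record; return the pass rate."""
    passed = sum(agent(r["input"]) == r["expected"] for r in records)
    return passed / len(records)

pass_rate = run_evals(fake_agent, golden)
```

Real harnesses replace exact-match with fuzzier scorers (regex, embeddings, LLM-as-judge), but the loop of "golden records in, pass rate out" is the core.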
Hvx Crypto@hvxcrypto·
@sudoingX Can you please provide any evidence of the accuracy of Qwen 3.5 27B?
0
0
0
108
Sudo su@sudoingX·
the founder of openclaw joined the company that was founded to make AI open and now charges you per token, and is now telling you open models aren't there yet.
i run qwen 3.5 27b on a single 3090. 50 tok/s. it writes code, handles tool calls, runs agent sessions for hours. the model built a full space shooter, 3,000+ lines, from a single prompt. i published the data.
"open models aren't there yet" is what you say when your harness can't parse tool calls on local models and you blame the model instead of fixing the harness. i have the DMs. people switch from openclaw to hermes agent and their "broken" models suddenly work.
pair a good model with a good harness like hermes agent, where parsers are built per model. your data stays on your machine. no API key. no subscription. no one training their next model on your thinking.
don't listen to someone with an OpenAI paycheck telling you open source can't do the job. install it. test it yourself. the receipts are on my timeline. he built a harness that couldn't handle local models and chose the API paycheck over fixing it. that should tell you everything.

Peter Steinberger 🦞@steipete
@sbaratelli @nvidia @openclaw most folks will want as much intelligence as possible, and open models aren't there yet.
263
401
5.3K
409.5K
AI Edge@aiedge_·
Here's an AI hot take for you: Perplexity is building the best AI platform we've ever seen. Yes, better than Claude and ChatGPT (combined). (not better models, better platform) The number of tools in the Perplexity suite is insane at this point.
10
4
50
4.3K
Hvx Crypto@hvxcrypto·
@andy_ai0 @DeRonin_ Don't fall for this, people… you won't become an AI engineer in 6 months. At most you will know the basics.
1
0
0
241
Andy@andy_ai0·
This is the best roadmap to become an AI engineer in 6 months. Ronin and I spent 40+ hours on it, collecting resources for every point you need to study. Now it's literally a step-by-step guide with all the links, resources, and even practical assignments. If this were sold as a paid course, it would cost at least $2k. You can read it for FREE, spending just 5-10 hours, and start practicing today... the choice is yours.

Ronin@DeRonin_
x.com/i/article/2033…
14
41
386
77.8K
christoshi@christoshi_·
.@hvxcrypto just made a small private contribution to support the launch of @tryyeshi. Thank you, brother... truly appreciate the support 🫡
3
1
3
182
gosha@defigosha·
How do I make 1-2 SOL ($80-200) a day using @MeteoraAG? Very simple, actually. This is basically my strategy below.
Every couple of hours I go to Pool Discovery (my filters are mostly empty at the moment, only choosing DLMM). Then I filter by highest fees and choose only new tokens (2h - 2 days max, preferably under 12h). The only thing I care about is RAW fees. I can assess Fee/TVL myself; for new tokens it is mostly similar.
The key is a -70% or -80% bid-ask. I do it for EVERY token, no matter where its price is (although if it is very low I may opt for -65%). This ensures I catch the falling knife and accumulate fees, and when it eventually pumps back just a little bit, the bid-ask mostly negates the above.
The screenshot below is only for today and tomorrow. The only loss is where I made a mistake and entered a -55% position; if I had done -75%, I would be in profit. Misclicked and decided to leave it this way.
gosha tweet media

gosha@defigosha
Bear market and relatively low activity doesn't really mean you have to sit down and do nothing. I wasn't actively LPing for some time, but today, using @MeteoraAG Discovery, it takes me 5 minutes to find and open a position once or twice a day. Each day I earn $50-200 on these small wins, and I hope they will compound. The strategy for these more or less passive and easy LPs is really simple, and I will share it here tomorrow!
31
29
341
56.9K
OpenFang@openfangg·
OpenFang v0.4.3 is out! Here's everything since v0.3.49:
- @nvidia NIM provider with 5 models (nemotron-70b, llama-3.1-405b/70b, mistral-large, nemotron-4-340b)
- Matrix adapter: auto-accept invites, mention detection, DM/group detection, skip old messages
- YOLO mode: openfang start --yolo auto-approves all tool calls, also configurable via auto_approve = true in config
- dashboard username/password authentication with HMAC session tokens
- cron jobs can now run workflows directly by ID or name
- auto-load workflow definitions from ~/.openfang/workflows/ on startup
- workflow CRUD API, CLI commands, and dashboard UI
- cross-channel default recipient via default_chat_id/default_channel_id config
- configurable lifecycle_reactions flag per channel
- OpenSSL now statically compiled via vendored feature; no runtime libssl dependency on Linux
- WhatsApp gateway now handles images, voice notes, videos, documents, and stickers with descriptive placeholders
- WhatsApp sender metadata flows end-to-end from API to kernel to system prompt; agents know who sent each message
- Telegram reply-to-message context forwarded to agents
- Telegram typing indicator refreshes every 4s continuously during LLM processing
- added openrouter/hunter-alpha model
- added "free", "openrouter/free", and "free-reasoning" aliases for OpenRouter free-tier models
- Minimax global URL switched to api.minimax.io/v1; China users can override via provider_urls
- memory recall loop fixed: build_memory_section() no longer tells the model to call memory_recall when memories are already injected
- raw errors in channels sanitized: rate limits and auth errors replaced with clean user-friendly messages
- HAND.toml parser now accepts both flat root-level format and [hand] table format
- token quota exceeded fix: pre-emptive quota-aware compaction triggers before LLM calls
- log_level in config.toml now takes effect (RUST_LOG > config.toml > default "info")
- max iterations error now includes guidance on configuring [autonomous] max_iterations
- config.toml backed up to config.toml.bak before any auto-rewrite
- default model in web UI spawn wizard fetches from /api/status instead of hardcoding groq
- LoopGuard poll detection now purely keyword-based; long kubectl/docker commands correctly identified
- custom model provider display shows provider:model format in dropdown
- agents created via API immediately registered in channel router name cache
- tool call denial message now includes guidance to use auto_approve or --yolo
- Browser hand install recognizes Homebrew "already an App" as success
- open links in new tab for markdown agent responses
- config overwrite warning toast when saving API key triggers auto-provider-switch
- skill output cards in dashboard default to expanded instead of collapsed
github.com/RightNow-AI/op…
11
17
104
7.9K
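One item in the changelog above, the log_level fix (RUST_LOG > config.toml > default "info"), is a plain fallback chain. A sketch of that resolution order; the function name is illustrative, not OpenFang's actual code:

```python
def resolve_log_level(env: dict, config: dict) -> str:
    """Pick the log level: environment variable wins, then config file, then default."""
    return env.get("RUST_LOG") or config.get("log_level") or "info"

# Environment overrides config, config overrides the built-in default.
a = resolve_log_level({"RUST_LOG": "debug"}, {"log_level": "warn"})  # "debug"
b = resolve_log_level({}, {"log_level": "warn"})                     # "warn"
c = resolve_log_level({}, {})                                        # "info"
```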
Hvx Crypto@hvxcrypto·
@DeRonin_ I would say that, at least if you do this full time every day, you become a junior at this stuff
0
0
0
32
Ronin@DeRonin_·
How to become an AI engineer in the next 6 months.
By the end, you want to be able to:
- build LLM apps end-to-end
- use APIs from OpenAI / Anthropic / open-source stacks
- design prompts and context properly
- add tool calling and structured outputs
- deploy real projects
So, let's discuss your roadmap month by month.
Month 1: Get solid enough in coding and fundamentals. What to learn:
- Python really well
- Git + GitHub
- CLI / terminal basics
- JSON, APIs, HTTP, async basics
- basic SQL
- basic data handling with pandas
- virtual environments, package management, error handling
- FastAPI or Flask
Month 2: Master LLM app development. What to learn:
- prompting fundamentals
- system vs user instructions
- structured outputs / JSON schemas
- function/tool calling
- streaming responses
- conversation state
- cost / latency / token basics
- failure handling
- prompt injection awareness
Month 3: Learn RAG properly. What to learn:
- embeddings
- chunking
- vector databases
- metadata filtering
- reranking
- retrieval quality issues
- hallucination reduction
- citations and grounding
Month 4: Agents, tools, workflows, evals:
- agent loops
- tool selection
- state management
- retries
- when NOT to use agents
- multi-step workflows
- evaluation harnesses
- task success metrics
Month 5: Deployment, product thinking, and reliability. What to learn:
- FastAPI production patterns
- Docker
- background jobs
- queues
- auth + API key security
- logging
- observability
- prompt/version management
- eval dashboards
- cost monitoring
- rate limits
- caching
Month 6: Specialize and become hireable. The knowledge and skills you gained can be applied in three directions; choose one of them and focus on practice, although everything mentioned above is also best learned purely through practice.
Direction 1: AI product engineer. Best if you want startup jobs fast. Focus on:
- LLM apps
- RAG
- agents
- deployment
- product UX
Direction 2: Applied ML / LLM engineer. Focus on:
- fine-tuning
- when to fine-tune vs prompt
- evaluation
- inference optimization
- open-source models
- training pipelines
Direction 3: AI automation engineer. Focus on:
- workflow orchestration
- business process automation
- multi-tool systems
- CRM, docs, email, support, ops use cases
This roadmap gives you a practical path; the key is to study each of these points and then test them in real work. By month six, you will already have several built products or examples of completed tasks, and it will be much easier to get a job as an AI engineer. Save it so you don't lose it and can return to study later.
Ronin tweet media
132
609
4.4K
778.6K
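One Month 2 skill from the roadmap above, structured outputs with JSON schemas, boils down to asking the model for JSON and validating it before use. A minimal sketch; the schema, field names, and validator are illustrative, not from any particular library:

```python
import json

# Required fields and their expected types in the model's reply (illustrative schema).
SCHEMA = {"name": str, "confidence": float}

def parse_structured_output(raw: str) -> dict:
    """Parse an LLM reply as JSON and check it against the schema."""
    data = json.loads(raw)
    for field, ftype in SCHEMA.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"field {field!r} missing or not {ftype.__name__}")
    return data

# A well-formed reply passes; malformed or incomplete JSON raises before it reaches app logic.
reply = '{"name": "refund_request", "confidence": 0.93}'
parsed = parse_structured_output(reply)
```

Production setups typically use a full JSON Schema validator or the provider's native structured-output mode, but the principle is the same: never hand raw model text to downstream code.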
Hvx Crypto@hvxcrypto·
@openfangg Can we please have ChatGPT login with OAuth? I tried to use it, but it is not available.
0
0
0
18