Cartyisme
@cartyisme · 975 posts
artist, idiot, shillosopher · @OpenSea employee · @ART101NFT founder · @w0wn3r0 progenitor · all my views are my own
California, USA · Joined November 2017
1.3K Following · 3.2K Followers

@TheAhmadOsman tbf I was having issues with tool calls; it seems they were being wrapped under vLLM, and llama.cpp solved them. 5090 user here.

DGX Spark uses unified memory
> 273 GB/s
RTX PRO 6000 delivers
> 1.8 TB/s (1792 GB/s)
If someone told you they’re comparable, they’re wrong
And this is exactly why llama.cpp isn’t the right tool here
Try vLLM or SGLang on a GPU and you’ll see very different results
Max Weinbach@mweinbach
@TheAhmadOsman I have on DGX Spark and then was having insane tool calling issues and was told by Nvidia to use llama cpp
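The bandwidth gap above translates almost directly into decode speed. A minimal back-of-envelope sketch, assuming a hypothetical ~5 GB quantized model (the model size is a placeholder, not a measured figure; batch-1 decode is typically memory-bandwidth bound):

```python
# Back-of-envelope: batch-1 decode reads roughly all model weights per
# token, so tokens/s is capped near bandwidth / model size.
MODEL_GB = 5.0  # ASSUMED quantized model size, for illustration only

def decode_ceiling_tok_s(bandwidth_gb_s, model_gb=MODEL_GB):
    """Upper bound on decode speed if each token reads all weights once."""
    return bandwidth_gb_s / model_gb

dgx_spark = decode_ceiling_tok_s(273)   # DGX Spark unified memory, 273 GB/s
rtx_pro = decode_ceiling_tok_s(1792)    # RTX PRO 6000, 1792 GB/s

print(f"DGX Spark ceiling:    ~{dgx_spark:.0f} tok/s")
print(f"RTX PRO 6000 ceiling: ~{rtx_pro:.0f} tok/s")
print(f"bandwidth ratio:      {1792 / 273:.1f}x")
```

Whatever the real model size, the ratio holds: the RTX PRO 6000 has roughly 6.6x the memory bandwidth, so its bandwidth-bound decode ceiling is roughly 6.6x higher.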

@cartyisme 100%. Luckily we still have time to fill the gap and learn how to set them up. Open-source progress is wild right now!

so @NadavAHollander and I have been cooking on a new project
if you’re regularly trawling your feed on the hunt for fresh information but are drowning in a firehose of unrelated slop…
then you should reach out!
we still have room for a few more curious minds in the first wave

@sudoingX @mamboussa this guy is awesome, love the insight, having gone through all of this myself, ollama is not the way. even gave u q4 quantization command for ur kvcache. tho 132k context seems ambitious, but im not running a 3090.... try Q3_K_M from @UnslothAI good trade off here
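For context on why 132k feels ambitious on a 24 GB card, a rough KV-cache sizing sketch; the layer count, KV-head count, and head dimension below are assumed placeholders, not confirmed specs for any particular model:

```python
# Rough KV-cache sizing. All model dimensions are ASSUMED examples.
N_LAYERS = 36     # assumed transformer layer count
N_KV_HEADS = 8    # assumed GQA key/value heads
HEAD_DIM = 128    # assumed per-head dimension
CTX = 131072      # the -c 131072 context length

# elements cached per token: K and V, per layer, per KV head
elems_per_token = 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM

BYTES_F16 = 2.0       # f16: 2 bytes per element
BYTES_Q4_0 = 18 / 32  # q4_0: 18-byte block per 32 elements

def kv_cache_gib(ctx, bytes_per_elem):
    """KV-cache size in GiB for a given context length and cache dtype."""
    return ctx * elems_per_token * bytes_per_elem / 2**30

print(f"f16  KV cache @ {CTX}: {kv_cache_gib(CTX, BYTES_F16):.1f} GiB")
print(f"q4_0 KV cache @ {CTX}: {kv_cache_gib(CTX, BYTES_Q4_0):.1f} GiB")
```

Under these assumed dimensions the f16 cache alone would eat ~18 GiB, while q4_0 brings it near ~5 GiB, which is why the `--cache-type-k q4_0 --cache-type-v q4_0` flags matter before the weights even load.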

llama.cpp is the way. grab the Qwen3.5-9B Q4_K_M.gguf from huggingface, compile llama.cpp with CUDA, and launch with:
./llama-server -m model.gguf -ngl 99 -c 131072 -np 1 -fa on --cache-type-k q4_0 --cache-type-v q4_0 --host 0.0.0.0
then install hermes agent and point it at localhost:8080. dm me if you get stuck.
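For anyone pointing a client at that server: llama-server exposes an OpenAI-compatible /v1/chat/completions endpoint on the host/port it was launched with (port 8080 by default). A minimal sketch of the request body; the prompt text is just an example, and actually sending it requires the server above to be running:

```python
import json

# llama-server speaks the OpenAI chat-completions wire format.
BASE_URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "messages": [
        {"role": "user", "content": "Say hello in one word."}  # example prompt
    ],
    "max_tokens": 16,
}

body = json.dumps(payload)
print(BASE_URL)
print(body)
```

Any OpenAI-style client should work the same way: set its base URL to the local server and send standard chat messages.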

Hi @sudoingX I'm going to install Agent Hermes on my desktop with a 3060 12GB, can you help me configure it with llama.cpp or Ollama?

@cartyisme Haha this works if you want to preserve the details with up and downscaling the pixels yeah!
Cartyisme retweeted

gm✨
Hi @opensea, you've been showing the same collections on the Base homepage for a month now.
The module says "This Week curated collections", yet the same collections sit there week after week, some without any real volume in months!
Is someone even paying attention?

Cartyisme retweeted

Getting your opensea verified was the biggest flex back then. Shoutout @cartyisme for all the help over the years
x.com/coinbilly/stat…
coinbilly@coinbilly
☑️

hoping to do the same, but can't find the power cord D:
Miles Deutscher@milesdeutscher
Converting my old Mac Mini into a dedicated ClawdBot AI assistant today. Will report back.

Thank you so much!!! It's been hard to get in touch; I have an open thread about a token that was sold, but on OpenSea it's still listed and I still appear as the owner. I know you're extremely busy, but it was amazing back when there was a support channel and things got fixed as we reported bugs.

@AspynPalatnick @netprotocolapp geeks > mops > sociopaths
2016 > 2021 > 2025
👀

A group chat I’m in was talking about how NFTs and memecoins are no longer punk
Makes me think @netprotocolapp may be the most punk thing in crypto today
Apps with centralized servers/DBs? Mainstream
Censorship-resistant decentralized apps with all data stored onchain? Punk

@badte_eth @SteveKBark @HollanderAdam Mind shooting me a DM? Happy to try and help get this figured out and troubleshoot w/ you.

gm @SteveKBark @HollanderAdam I'm having persistent issues updating the collection overview images for opensea.io/collection/ska…
If you could point me in the right direction to get it resolved I would greatly appreciate it
