Cokédex


@RoundtableSpace What do you think about Pokémon cards?
Cokédex retweeted

@PumpDotFunGuy All I see is a series of people posting letters after one another

As promised, a bit of jojo-lore!
TL;DR: I built, ran and sold a reputation management business, and I've done deep reputation-related engineering and consultancy as a freelance developer.
The longer version is more useful:
I dropped out of school at 16. With money from delivering pizzas I bought the cheapest webshop I could find - a growshop (selling lamps, fertilizer, soil, etc. for growing weed). The instant I bought it, it broke. Owner vanished. I had no money to hire a developer and barely knew what one was. So you start Googling, end up on StackOverflow, and teach yourself PHP and MySQL.
Once the shop was back up, I'd gone from 'lost' kid to someone with a passion for problem-solving through code. But having a shop means nothing without traffic. Google Ads didn't accept anything weed-related, so I had to figure out organic traffic. That led to SEO, which led to a corner of the internet called BlackHatWorld.
If you know BHW, you know. If you don't: it's a forum where people share (mostly) 'blackhat' tactics to make money online. That's where I discovered the 7878 ORM method - a system for managing reviews for local businesses.
The idea: find businesses with bad online ratings and sell them a 'review funnel'.
Here's how it worked: after a purchase, customers get a card thanking them and asking for feedback. They scan a QR code, land on a simple webpage with a happy and a sad smiley, asking how their experience was. If positive, they're prompted to share it publicly - Google Maps, Yelp, TripAdvisor, whatever matters for that business, or where they had the most negative reviews. If negative, they're routed to a private form so the complaint goes directly to the owner instead of becoming another public one-star review.
The business gets more positive reviews surfaced, the customer feels heard either way, and negative feedback becomes something actionable instead of permanent damage. Depending on how bad a business's reputation was, this could turn things around fast - and business owners paid serious money for it.
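The funnel logic above is simple enough to sketch in a few lines. This is a hypothetical reconstruction, not the original code - the function name and the shape of the ratings dict are invented for illustration:

```python
def route_feedback(is_positive, platform_ratings):
    """Route a customer based on which smiley they tapped.

    platform_ratings maps a public review platform to the business's
    current average rating there, e.g. {"google_maps": 4.2, "yelp": 3.1}.
    (Schema invented for this sketch.)
    """
    if not is_positive:
        # Unhappy customers go to a private form, straight to the owner,
        # instead of becoming another public one-star review.
        return "private_form"
    # Happy customers are pointed at the platform where the business
    # is currently rated worst, so the new review helps the most.
    return min(platform_ratings, key=platform_ratings.get)
```

The whole business fit in that one branch: private routing for complaints, targeted public routing for praise.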
In essence, I was delivering those cards and the funnel - that's it.
I knew how popular this method was from the love it got on BHW, so I went all-in, scaled across Europe, and hired my first freelancers. To manage everything, I put my still-developing webdev skills to the test and built a CRM in a more serious stack (MEAN): client setup, funnel management, support tickets, billing - the whole operational spine. I turned that internal tooling into a white-label SaaS so others could rent the solution and run their own version out of the box - no manual work setting up funnels, designing cards, any of it. I went from 'kid rebuilding a broken webshop' to running two somewhat serious companies at once.
Then, my sites started ranking for reputation-related terms. Leads shifted from 'local businesses with review issues' to 'individuals with search problems'.
This was before Google offered any official way to request the removal of personal information. If someone Googled your name and the first page was dominated by something damaging - an old news article, forum posts, an angry ex's blog, whatever - your only option was to push it down by getting other content to rank higher. Official bios, LinkedIn, publications, directories. Fill page one with legitimate stuff, and the negative content drops to page two.
I did this for professionals whose search results didn't reflect who they were anymore. Some had genuinely been treated unfairly. Others just had outdated information following them around. It also forced me to develop an ethical filter fast. When you're 'repairing' reputation, you constantly decide whether you're helping someone recover from something unfair... or helping them launder a history they shouldn't outrun. I turned down work where the intent was obviously to enable ongoing harm. I took work where the ask was closer to 'make the public record proportional'.
That experience is why I'm sensitive to the difference between memory and weaponized memory - and why the 'just slap a score on it' approach... frustrates me.
Eventually, the EU's Right to Be Forgotten ruling forced Google to offer an official removal-request process - basically what I'd been doing manually, now available as a form. At the same time, competition in the SaaS market grew aggressive, so I had a choice: invest heavily or sell. I was young and didn't want to lock myself in. So I sold. Mid-twenties, a first and (then) life-changing exit.
Especially through the individual reputation work, I'd ended up in rooms and networks you don't stumble into quickly otherwise. And because I didn't see the Google changes coming at all - I was still knee-deep in this stuff, obsessed with reputation, data, and systems - I reached out to ex-clients and orgs where I thought I could help. That pulled me into work closer to 'trust system design' than 'SEO clean-up'.
Over the years I ended up doing all kinds of reputation-adjacent work - designing reputation systems from scratch or consulting on existing ones. Some examples:
• One of Europe's largest online secondhand marketplaces wanted one trust score for sellers. I had to explain why 'great products' and 'ships on time' aren't the same thing, and collapsing them hides the exact signal buyers actually need - so we split it into separate dimensions (quality vs reliability), instead of pretending trust is one number.
• A home services platform kept getting hit by what looked like organic growth in certain regions. It wasn't. I found a ring of contractors who all reviewed each other's profiles within 48 hours of signing up - so we added velocity limits, mutual-review detection, and delayed visibility / manual review triggers, which stopped fake demand from looking like real regional traction.
• A community forum was using 1-5 star ratings on posts, but actual user behavior was binary: people either loved something or hated it. The stars just added noise. Switching to upvote/downvote gave cleaner signal because it matched how people actually felt - and then you can actually weight, rank, and moderate based on the shape of real reactions, not invented nuance.
• A gig platform hired me to 'be the bad guy' - assume a motivated attacker and try to break their verification system. Within a week I'd found three ways to fake verified reviews using their own referral program - so we closed the referral exploit paths, added verification constraints, and removed the 'free legitimacy' loops that attackers were farming.
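The mutual-review detection from the home-services example can be sketched in a few lines. The tuple schema and the 48-hour window are assumptions for illustration; a real system would also look at account age, IP/device overlap, and review velocity:

```python
def find_mutual_pairs(reviews, window_hours=48):
    """Flag account pairs that reviewed each other within `window_hours`.

    `reviews` is a list of (reviewer, reviewee, hours_since_signup)
    tuples - a schema invented for this sketch.
    """
    latest = {}    # (reviewer, reviewee) -> most recent review time
    flagged = set()
    for reviewer, reviewee, t in reviews:
        latest[(reviewer, reviewee)] = t
        # Did the reviewee already review this reviewer back?
        back = latest.get((reviewee, reviewer))
        if back is not None and abs(t - back) <= window_hours:
            flagged.add(frozenset((reviewer, reviewee)))
    return flagged
```

Flagged pairs then feed the delayed-visibility and manual-review triggers; the point is that a ring of contractors reviewing each other inside a tight window is a shape you can detect cheaply.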
And then the governance stuff - who can dispute, how disputes get resolved, what gets remembered, what gets forgotten. A freelance platform once asked me: how long should a negative review affect someone's score? Forever is unfair. Instantly forgotten is dangerous. There's no clean answer, just trade-offs you have to own and defend.
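One way to own that trade-off explicitly is time decay with a floor: a review's influence halves on a fixed schedule but never fully reaches zero. The half-life and floor values below are arbitrary illustrations, not recommendations:

```python
def review_weight(age_days, half_life_days=180, floor=0.05):
    """Weight of a review in the score, decaying with age.

    Halves every `half_life_days`, but never drops below `floor`,
    so old behavior fades without being erased entirely.
    (Both parameters are illustrative, not tuned values.)
    """
    decayed = 0.5 ** (age_days / half_life_days)
    return max(decayed, floor)
```

The debate then becomes concrete: arguing about the half-life and the floor is a defensible policy discussion, instead of an unstated assumption buried in a score.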
None of this is hard in theory. It's hard when you have incentives, edge cases, and motivated attackers - and you still need a system that normal users can understand and tolerate.
That's the piece most people miss: reputation isn't a vibe. It's mechanism design under adversarial conditions. The UI is part of the mechanism. The defaults are part of the mechanism. The ability to delete, rewrite, or go quiet is part of the mechanism. And the incentives you create will be exploited - especially where trust is the scarce commodity.
Anyhow, a lot of yapping to land on this: I'm not claiming some 'reputation guru' title. I am, however, saying I've spent a decade building and breaking trust systems, so I'm probably allowed to have takes here that aren't just vibes.
As promised, in the next post I'll try to define what 'reputation' could mean in a CT-native way, so we can move forward from there.

Get comfy, we're reputationposting!
Over the next few weeks, I'll be thinking in public about reputation on CT: what it is, what it isn't, why the current reputation meta is so easy to abuse, and how to push the conversation in a more useful direction.
I'm doing this because CT-native reputation products are starting to frame 'reputation' like a feature: something you can badge, vouch for, and score. People are outsourcing trust decisions to these systems, and the downstream damage is already visible.
The part I can't watch quietly is what this does to incentives:
• Social inclusion becomes a reputational shield (If you're in the right group, you must be legit, right?)
• Group membership turns into a laundering surface (Bad actors join good groups to borrow their credibility)
• People with cleaner track records but fewer connections get excluded (Quality loses to popularity)
• Everyone gets trained to outsource judgment to a UI instead of thinking (The app says they're trusted, so I don't have to check)
A lot of what gets read as 'good reputation' here is just social geometry: be in the right groups, befriend the right mutuals, and/or collect the right endorsements. The irony is that those are some of the least reliable signals on CT, proven over and over by the scammers and grifters who optimized for exactly that.
Here's the core problem: what actually makes someone trustworthy is hard to measure.
Think about it. Things that matter are, for example:
• Did they hold their position when the market turned against them, or did they flip and pretend they never said it?
• Did they call out a friend who was wrong, or go quiet to protect the relationship?
• Did they keep (proxy-)endorsing someone they knew was shady, because speaking up was too costly?
• Do they admit mistakes, or bury them?
That stuff takes months or years to observe. Even if you could log it, how do you store context like 'he went quiet when his friend rugged'? You can't query that from an API (yet). It certainly doesn't collapse into a score, not least because it's not supposed to.
So, reputation systems measure what they can measure: who you follow, who follows you, how active you are, how long you've been around, who vouched for you. Easy to build. Easy to display. Also easy to game, and the people most motivated to game these systems are exactly the ones you shouldn't trust.
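To make that concrete, here's a deliberately naive strawman of such a score. Every input is public and cheap to farm (bought follows, aged accounts, traded vouches), and the weights are invented - which is exactly the point:

```python
def naive_trust_score(followers, following, account_age_days, vouches):
    """A strawman 'trust score' built only from measurable proxies.

    Every term here is trivially gameable; weights and caps are
    invented for illustration. Returns a value in [0, 1].
    """
    ratio = followers / max(following, 1)
    return (
        0.4 * min(ratio, 10) / 10                    # follower ratio, capped
        + 0.3 * min(account_age_days, 1825) / 1825   # age, capped at 5 years
        + 0.3 * min(vouches, 50) / 50                # vouch count, capped
    )
```

Ten minutes of budget maxes out every term, and nothing in the formula distinguishes a careful operator from a motivated scammer - which is the gap the rest of this thread is about.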
Then there's another issue, maybe the most important one: reputation is always for something, with someone.
What does that mean? Simple: someone can be a great trade caller, but terrible to do deals with. Banger tweets, nightmare collaborator. Everyone vouches for him, nobody can say what he's actually done. Reputation in one context tells you nothing about another.
So, 'good reputation' without context is meaningless, but that's exactly what most systems try to produce. One score. One badge. As if trust is universal.
It's great that reputation is top of mind now. What's not great is that the people building these products, or those out there rep-yapping, don't seem to understand what reputation actually is. Bad definitions get shipped, users absorb and echo them, and suddenly you've got an ecosystem full of people optimizing for the wrong thing, confident they're doing it right.
I believe that if we get more honest about what 'reputation' on CT actually measures, we'll make fewer dumb trust calls. Right now, social proof is being sold as trust, and the wrong people benefit most from that confusion.
Tomorrow I'll share a bit of jojo-lore, so you understand why I'm interested in this topic and why I think I'm fit to talk about it. Then I'll try to define 'reputation' in a CT-native way, so we can get into the interesting parts from the same baseline.