Joachim Haraldsen
@Noobwork
8.3K posts

From Norway 🇳🇴 → Now in Tokyo 🇯🇵 Advisor & Investor | Gaming, Sports & Tech | Built @Heroicgg → sold for multi-million exit

Tokyo-to, Japan · Joined December 2012
1.9K Following · 10.9K Followers
Joachim Haraldsen @Noobwork ·
So interesting running Hermes Agent and OpenClaw next to each other. I recently tried a fresh install of both, giving each agent the same prompts after setup. Out of the box, the Hermes agent is so much more intelligent! @NousResearch @Teknium are really crushing it! I'm not saying @openclaw is bad, it isn't at all! I love it, but it requires a lot more from the user to become usable. As an example, it keeps misspelling my nick Noobwork as «Nobwork» 😂
Rich Lira @soyrichlira ·
Claude Chat, Cowork, and Code don't share state. You add a task in Chat. Open Cowork. It has no idea. Open Code. Same thing — blank slate.

So I built Compass MCP — an open source MCP server that bridges all three surfaces through shared markdown files.

6 tools. 2 files. Zero database.

→ add_task in Chat → get_tasks in Code — same task is there
→ complete_task in Cowork — updated everywhere

The missing operational layer for Claude power users. 🧭 github.com/richlira/compa…

#MCP #Claude #OpenSource #AI #DevTools #Anthropic
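The shared-markdown idea behind Compass MCP can be sketched minimally. This is not the actual Compass code, just an illustration assuming tasks live as markdown checkbox lines in a single file (the filename and checkbox format here are assumptions):

```python
from pathlib import Path

TASKS_FILE = Path("tasks.md")   # illustrative shared file, not Compass's real layout
TASKS_FILE.write_text("")       # start fresh for the demo

def add_task(title: str) -> None:
    """Append a task as a markdown checkbox line; any client can read it."""
    with TASKS_FILE.open("a") as f:
        f.write(f"- [ ] {title}\n")

def get_tasks() -> list:
    """Return open task titles by parsing the shared markdown file."""
    if not TASKS_FILE.exists():
        return []
    return [line[6:].strip()
            for line in TASKS_FILE.read_text().splitlines()
            if line.startswith("- [ ] ")]

def complete_task(title: str) -> None:
    """Flip the checkbox for a task; the change is visible to every client."""
    text = TASKS_FILE.read_text()
    TASKS_FILE.write_text(text.replace(f"- [ ] {title}", f"- [x] {title}"))

add_task("ship Compass demo")
print(get_tasks())        # ['ship Compass demo']
complete_task("ship Compass demo")
print(get_tasks())        # []
```

Because state is plain markdown on disk rather than a database, any process that can read the file (Chat, Cowork, Code) sees the same tasks.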
Boxmining @boxmining ·
anyone running this? Just started the installation.
Joachim Haraldsen @Noobwork ·
In 2013, I was a 20-year-old making gaming videos in Norway. In 2023, I sold one of the biggest esports organizations in the world. In 2026, I'm writing this from a café in Tokyo. Three chapters. One common thread.

Chapter one started with a webcam and zero expectations. Built a YouTube channel to 200K subscribers when "gaming content" wasn't yet a career path. Just a Norwegian kid who loved games and figured out how to talk about them.

Chapter two got serious. Founded Heroic Group, scaled it into a globally recognized esports organization, landed in Forbes, and sold my way out. The kind of decade that looks linear in hindsight but felt like controlled chaos while living it.

Eight years since I was active on social media. Eight years of building behind the scenes, advising companies, moving to Tokyo, and figuring out what comes next when you've already done the thing you set out to do.

Chapter three is Noobwork's return. A brand built on everything I learned through gaming, business, and personal reinvention. Tokyo lifestyle meets creator strategy meets the lessons you only get by starting over.

This isn't a pivot. It's a continuation. If you're building something that doesn't fit neatly into one category, you're probably onto something. 🎮
Santiago @svpino ·
@cubesol_greg Here is the issue: Before, you needed a bad programmer to write and deploy crappy code. Now, you can generate crappy code at scale without boundaries. It's not that the quality is worse today than it was yesterday; the issue is the quantity.
Santiago @svpino ·
Vibe-coding feels like magic. Until you're the one cleaning up the magic later.
Joachim Haraldsen @Noobwork ·
@garrytan Any thoughts on integrating Google Stitch into the design process? Or is there an opportunity for it?
Garry Tan @garrytan ·
Sometimes instead of talking to users you can just implement the things they ask for the same night they tell you they want it.

Coming tonight:
Design mockups and HTML finals in /plan-design-review
Automatic parallelization with worktrees in /plan-eng-review

#GStackFam
Joachim Haraldsen @Noobwork ·
Boiling my body in the bathtub again 😂🥵
Joachim Haraldsen @Noobwork ·
@Jensen2k Fun to see that we have some forward-leaning larger companies in Norway going all in on AI!
Martin Jensen @Jensen2k ·
SSB has tons of valuable data, but it's hard to navigate. We've built an MCP server + skills for Claude and ChatGPT. Used internally at TRY for a while, now we're sharing it 👇 🔗 tools.try.no/ssb-mcp
@levelsio@levelsio·
I wish I could edit my @WHOOP sleep data because today I took it off in my sleep at 4am so now it shows 33% sleep score and 2.5h of sleep but I actually slept till 11am and it was like perfect sleep. Thank you for your attention to this matter
Joachim Haraldsen @Noobwork ·
This is wild

NIK @ns123abc:
🚨NEWS: Cursor's $50B "in-house model" is literally Kimi K2.5 with RL on top. Got caught in 24 hours

>be Moonshot AI
>spend hundreds of millions training Kimi K2.5
>1 trillion parameters, 15 trillion tokens, agent swarm architecture
>beat GPT-5.2 and Opus 4.5 on real benchmarks
>open-source it because you believe in the ecosystem
>one condition: display "Kimi K2.5" if you make over $20M/month from it
>Cursor takes the model
>runs RL on coding tasks
>ships it March 19 as "Composer 2"
>blog post: "continued pretraining + scaled reinforcement learning"
>zero mention of Kimi K2.5
>"our in-house models generate more code than almost any other LLMs in the world"
>publishes benchmark chart
>Composer 2 against Opus 4.6 and GPT-5.4
>uses the chart to justify raising at $50 billion!
>less than 24 hours later
>kimi dev intercepts the API response
>model ID: kimi-k2p5-rl-0317-s515-fast
>they didn't even rename it
>Moonshot head of pretraining runs tokenizer test
>confirms: identical to Kimi's tokenizer
>publicly tags Cursor's co-founder: "why aren't you respecting our license?"
>two more Moonshot employees post confirmations
>all three posts deleted within hours
>legal is now involved
>but it gets worse
>Cursor had Kimi K2.5 listed as a FREE model in their UI just weeks ago
>users were openly using it
>Feb 9: "K2.5 was in my model list. I updated and it vanished"
>it vanished because Cursor pulled it from the picker, and relaunched it as their own model
>Moonshot valuation: $4.3B
>Cursor valuation: $50B

Absolute state of Cursor.

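The "tokenizer test" mentioned in the thread is a known fingerprinting trick: two models built on the same base will tokenize tricky probe strings identically. A toy sketch of the idea, using stand-in tokenizer functions rather than any real Kimi or Cursor tokenizer:

```python
def same_tokenizer(tok_a, tok_b, probes):
    """Fingerprint check: if two tokenizers split a set of tricky probe
    strings identically, they are very likely the same tokenizer."""
    return all(tok_a(p) == tok_b(p) for p in probes)

# Probe strings chosen to expose tokenizer differences: non-Latin script,
# leading whitespace, code, emoji. These are illustrative, not the real test.
probes = ["ナルト", "  leading spaces", "def f(x):\n    return x", "🚨NEWS"]

# Stand-in "tokenizers" for the demo: whitespace split vs. character split.
ws_tokenize = str.split
char_tokenize = list

print(same_tokenizer(ws_tokenize, ws_tokenize, probes))    # True
print(same_tokenizer(ws_tokenize, char_tokenize, probes))  # False
```

With real models you would compare actual vocabulary files or token ID sequences from the two APIs, but the logic is the same: matching behavior on adversarial probes is strong evidence of a shared base.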
Joachim Haraldsen @Noobwork ·
@damsgaard_olaf @BergensInvestor Then we're in full agreement! Yes, you should definitely build an agent structure where you don't use expensive models for simpler tasks. I only used Opus as the orchestrator/CEO agent; all the other bots run other models. Right now I'm playing with Hunter Alpha
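The orchestrator/CEO pattern described above can be sketched as a cost-aware router: one expensive model handles complex reasoning, cheaper workers handle the rest. The model names and the complexity heuristic below are illustrative assumptions, not anyone's actual setup:

```python
CHEAP_MODEL = "small-fast-model"            # hypothetical cheap worker
ORCHESTRATOR_MODEL = "expensive-reasoning-model"  # hypothetical Opus-style CEO

def classify(task: str) -> str:
    """Naive heuristic: long or planning-style prompts count as complex."""
    if "plan" in task.lower() or len(task.split()) > 20:
        return "complex"
    return "simple"

def route(task: str) -> str:
    """Pick which model should handle the task, keeping expensive calls rare."""
    return ORCHESTRATOR_MODEL if classify(task) == "complex" else CHEAP_MODEL

print(route("summarize this paragraph"))            # small-fast-model
print(route("plan a multi-step release strategy"))  # expensive-reasoning-model
```

In a real agent framework the router would sit in front of the API clients; the point is simply that only tasks classified as complex ever hit the expensive model.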
Olaf Damsgaard @damsgaard_olaf ·
Agreed, that's right. That said, my point about a local node (or VPS) is mostly aimed at high-volume tasks where the API costs of Opus can go through the roof. But I agree that OpenRouter with e.g. Hunter Alpha is a very smooth middle ground for getting good performance without having to manage your own hardware. Very interested in OpenRouter, but haven't used it enough yet.
Bergens Investoren @BergensInvestor ·
AI agents, local LLMs and hardware: is the Mac Mini M4 Pro 64GB the one to go for 🤗?

I'm planning to explore using AI agents in practice (OpenClaw etc.) and initially want to run them on locally hosted LLMs. I've realized I need new hardware for that. After testing 8B models on my older PC (RTX 2080), I honestly have to say I'm not particularly impressed with the reasoning ability compared to the big online models.

To get sensible answers, at least well-formulated text, it seems you need to go up to the 30B-70B class. I'm not sure what agents need, but I assume the same? Since I want noise and heat at reasonable levels at home, I've been thinking the Mac Mini is a better choice than a PC (I've also looked at the Spark / Veriton GN100).

From what I've read, the Mac Mini Pro 64GB can run quantized 70B models at acceptable speed thanks to Apple's unified memory.

I'm wondering:
- Does anyone have experience with OpenClaw or similar and want to share how it's been in use?
- Should I go for 64GB to run 70B, or is 24GB/32GB enough for 13B/30B models in practice?
- Can anyone tip me off on where to buy a Mac Mini Pro 64GB in Norway today?

I'd really appreciate experiences and input!
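The 64GB-vs-70B question above comes down to simple arithmetic: a model's weight footprint is roughly parameter count times bits per weight, plus runtime overhead for the KV cache. A back-of-the-envelope sketch (the 20% overhead factor is an assumption; real usage varies by runtime and context length):

```python
def model_memory_gb(params_billion: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Rough RAM estimate: weights plus ~20% for KV cache and runtime."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight * overhead / 1e9

# 70B at 4-bit quantization: roughly 42 GB, so it fits in 64GB unified memory
print(round(model_memory_gb(70, 4)))   # 42
# 70B at 8-bit: roughly 84 GB, which would NOT fit in 64GB
print(round(model_memory_gb(70, 8)))   # 84
# 30B at 4-bit: roughly 18 GB, comfortable on a 32GB machine
print(round(model_memory_gb(30, 4)))   # 18
```

This is why 64GB is the sensible floor for 70B-class models at 4-bit, while 24GB/32GB machines are limited to roughly the 13B-30B range the tweet mentions.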
Joachim Haraldsen @Noobwork ·
@damsgaard_olaf @BergensInvestor That's not quite right. It's a ToS breach if you use a subscription, but the API is completely fine. OpenAI allows using a subscription. These will give much better results. Another option is OpenRouter with Hunter Alpha or m2.7, also good alternatives
Olaf Damsgaard @damsgaard_olaf ·
But Anthropic says they ban people who try to run Opus with Claw software. I think the best approach is to invest in a good AI brain, either hardware or a VPS, with a solid local model like Minimax or Qwen 3 that you can use for 90% of what OpenClaw needs. Then use Claude Code with Opus/Sonnet when you really need good reasoning
Joachim Haraldsen @Noobwork ·
@BergensInvestor It would also be worthwhile to buy yourself some time until the new Mac Mini / Mac Studio launches
Joachim Haraldsen @Noobwork ·
@BergensInvestor I would skip running a local model; it will perform far worse than OpenAI's or Anthropic's models. And if you drop the local model, an ancient laptop is enough to run OpenClaw. You might get value out of it, but I'm not convinced running locally would be best