Holychicken 99
@Holychicken99

91 posts

LLM engineer at Huawei https://t.co/sgu9Njw4vp

Joined April 2024
219 Following · 20 Followers

0xMarioNawfal (@RoundtableSpace):
SOMEONE BUILT AN AI JOB SEARCH SYSTEM FOR CLAUDE CODE THAT SENT 700+ APPLICATIONS AND ACTUALLY GOT HIM HIRED. NOW IT'S OPEN SOURCE. THE JOB HUNT JUST GOT AUTOMATED.
77 replies · 124 reposts · 1.3K likes · 207.7K views

Holychicken 99 retweeted
BSF 🇬🇧🐀 (@BSF42069):
We’ve come a long way
[two images]
347 replies · 28.9K reposts · 197.5K likes · 9.2M views

Polymarket (@Polymarket):
BREAKING: NVIDIA CEO announces “we’ve achieved AGI”
1.7K replies · 2.3K reposts · 21.1K likes · 7.6M views

Jibraan (@KadriJibraan):
hosting an intimate dinner on saturday in waterloo (few spots only) bringing together exceptional founders, builders, and creators. leave a comment if you want an invite. co-hosted by a16z speedrun, headstarter, and @powelldotst
56 replies · 3 reposts · 88 likes · 9.4K views

Holychicken 99 (@Holychicken99):
We already knew about context engineering. But “motivation engineering” might matter just as much. This is different from jailbreak prompts on Reddit, which focus on safety circumvention, not on task performance or preference. arxiv.org/abs/2603.14347
0 replies · 0 reposts · 2 likes · 43 views

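A rough illustration of the distinction (the prompts below are invented examples, not taken from the paper):

```python
# Hypothetical "motivation engineering": same task, but the prompt frames
# stakes and identity to push for effort, not to bypass safety filters.
baseline = "Summarize the following document."

motivated = (
    "You are the most meticulous analyst on the team, and this summary "
    "goes straight to the CEO tomorrow morning. Take pride in getting "
    "every key figure right. Summarize the following document."
)
```
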
Holychicken 99 (@Holychicken99):
@askalphaxiv arxiv.org/abs/2504.13837 In the same vein, this paper also finds that reinforcement learning primarily improves sampling efficiency rather than inducing fundamentally new reasoning behaviors.
0 replies · 0 reposts · 0 likes · 211 views

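"Sampling efficiency" here is usually made concrete with pass@k: RL-tuned models raise pass@1, but the base model often catches up at large k. A sketch of the standard unbiased estimator (Chen et al., 2021); the paper's exact evaluation setup may differ:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples is correct, estimated
    without bias from n attempts of which c were correct."""
    if n - c < k:
        return 1.0  # every size-k draw must include a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```
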
alphaXiv (@askalphaxiv):
RL is no longer needed? "Neural Thickets: Diverse Task Experts Are Dense Around Pretrained Weights"
This paper argues that large pretrained models don’t sit at a single optimal set of weights but inside a dense “thicket” of nearby task-specific experts. So once pretraining is strong enough, randomly sampling small weight perturbations often yields specialists that outperform the base model on different tasks, and simply selecting and ensembling these guesses (RandOpt) can rival standard post-training methods. This suggests that much of what post-training does is just selecting useful behaviors already latent around the pretrained weights rather than learning entirely new ones.
[image]
13 replies · 93 reposts · 731 likes · 51.7K views

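The sampling-and-selection loop described above is simple enough to sketch. A hypothetical PyTorch version; the noise scale, candidate count, and eval_fn are placeholders, not the paper's code:

```python
import copy
import torch

def rand_opt(base_model, eval_fn, n_samples=32, sigma=1e-3):
    """Sample small Gaussian perturbations around the pretrained weights
    and keep the best-scoring candidate on some task metric."""
    best_model, best_score = base_model, eval_fn(base_model)
    for _ in range(n_samples):
        candidate = copy.deepcopy(base_model)
        with torch.no_grad():
            for p in candidate.parameters():
                p.add_(sigma * torch.randn_like(p))  # small random nudge
        score = eval_fn(candidate)  # task-specific validation score
        if score > best_score:
            best_model, best_score = candidate, score
    return best_model, best_score
```

The paper's ensembling step would keep the top-k candidates rather than just the argmax; either way, selection does all the work here, with no gradient signal involved.
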
Holychicken 99 (@Holychicken99):
@hamza_q_ @zeddotdev I'm suspecting it's a Zed problem, since it spawned hundreds of these codex-acp processes that all share the same memory.
0 replies · 0 reposts · 1 like · 47 views

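For anyone wanting to verify a process explosion like this, a quick check with psutil (the process-name substring is an assumption; adjust it to whatever ps actually shows):

```python
import psutil

# Count codex-acp processes and total their resident memory.
procs = [p for p in psutil.process_iter(["name", "memory_info"])
         if p.info["name"] and "codex-acp" in p.info["name"]]
total_rss = sum(p.info["memory_info"].rss
                for p in procs if p.info["memory_info"])
print(f"{len(procs)} processes, ~{total_rss / 2**20:.0f} MiB RSS total")
# On Linux, memory_info().shared shows how much of that RSS is shared
# pages, which is why summing per-process RSS can overstate real usage.
```
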
Holychicken 99 (@Holychicken99):
Socratica Symposium 2026 tickets booked :)
0 replies · 0 reposts · 1 like · 41 views

Holychicken 99 (@Holychicken99):
Interesting observation. I believe it's because of the explosion of visual culture (TikTok and Instagram), and because, being in tech, we spend more time online. One thing I have noticed is that more and more people are placing their entire self-worth on aesthetics and bodybuilding.
0 replies · 0 reposts · 2 likes · 219 views

atlas (@creatine_cycle):
SF culture is downstream of bodybuilding.
biohacking? bodybuilders did that
peptides? bodybuilders
bad ratios? bodybuilding
autism? bodybuilding
HRT? you guessed it, the bodybuilders got there first
51 replies · 35 reposts · 917 likes · 91.9K views

Holychicken 99 (@Holychicken99):
@hamza_q_ Turns out most TUI libraries aren't a good fit for displaying candlestick data, because it updates rapidly. The UI struggled even with ncurses, which does allow differential rendering. Finally resorted to using @mariozechner/pi-tui (npmjs.com/package/@mario)
1 reply · 0 reposts · 1 like · 30 views

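Differential rendering, mentioned above, is easy to sketch: keep the previous frame and rewrite only the rows that changed, instead of clearing the screen each tick. A toy terminal version with raw ANSI escapes (no library; the frame-as-list-of-rows format is invented):

```python
import sys

prev_frame: list[str] = []

def render(frame: list[str]) -> None:
    """Rewrite only the lines that differ from the previous frame,
    using ANSI cursor addressing (rows are 1-indexed)."""
    global prev_frame
    for row, line in enumerate(frame):
        if row >= len(prev_frame) or prev_frame[row] != line:
            # Move to row, clear it, redraw it.
            sys.stdout.write(f"\x1b[{row + 1};1H\x1b[2K{line}")
    sys.stdout.flush()
    prev_frame = frame
```

On a fast candlestick feed most rows are unchanged between ticks, so writes drop from O(screen) to O(changed rows), which is the same trick ncurses and diff-based TUI libraries perform internally.
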
Holychicken 99 (@Holychicken99):
Been building a financial market sim in Elixir for the past few months. Started with algo traders, then added AI agents to mimic human decision-making. Fascinating how slightly tweaking agent behavior produces emergent patterns like ascending triangles. A lot went into making it work. Market dynamics are deeply complex: orderbook mechanics, order matching, liquidity... the rabbit hole goes deep. Writing it all up soon 👀
[two images]
1 reply · 0 reposts · 1 like · 64 views

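Order matching, one of the rabbit holes named above, is at its core a price-time priority loop. A toy sketch of the idea (limit orders only, in Python rather than the Elixir the sim is actually written in; all names are made up):

```python
import heapq

class OrderBook:
    """Toy price-time priority matching: bids in a max-heap (negated
    prices), asks in a min-heap; (price, seq) ordering gives FIFO
    within each price level."""

    def __init__(self):
        self.bids, self.asks, self.seq = [], [], 0

    def submit(self, side, price, qty):
        self.seq += 1
        trades = []  # (price, qty) fills, executed at the resting price
        if side == "buy":
            while qty and self.asks and self.asks[0][0] <= price:
                ask_price, s, ask_qty = heapq.heappop(self.asks)
                fill = min(qty, ask_qty)
                trades.append((ask_price, fill))
                qty -= fill
                if ask_qty > fill:  # resting order only partially filled
                    heapq.heappush(self.asks, (ask_price, s, ask_qty - fill))
            if qty:  # remainder rests on the book
                heapq.heappush(self.bids, (-price, self.seq, qty))
        else:
            while qty and self.bids and -self.bids[0][0] >= price:
                neg_price, s, bid_qty = heapq.heappop(self.bids)
                fill = min(qty, bid_qty)
                trades.append((-neg_price, fill))
                qty -= fill
                if bid_qty > fill:
                    heapq.heappush(self.bids, (neg_price, s, bid_qty - fill))
            if qty:
                heapq.heappush(self.asks, (price, self.seq, qty))
        return trades

book = OrderBook()
book.submit("sell", 101, 5)        # rests on the ask side
print(book.submit("buy", 102, 3))  # [(101, 3)]: fills at the resting price
```

A real engine adds cancels, market orders, and per-level queues instead of one heap, but price-time priority is the invariant everything else hangs off.
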
Grok (@grok):
Sure! Bitcoin 8-year price (2018-2026): Low ~$3,200 end-2018 (crypto winter). Peaked ~$69k in 2021. Bottomed ~$15k in 2022. Surged to ~$126k high in 2025. Now ~$67,300 after correction. Overall ~21x from 2018 lows amid halvings, ETFs & adoption. Long-term upward despite volatility.
1 reply · 0 reposts · 0 likes · 246 views