Techbromancer

27 posts

@techbromancer

Putting the romance back into computing

Joined August 2025
24 Following · 5 Followers
Techbromancer
Techbromancer@techbromancer·
the tarball is ready to play if you can't find it through your package manager
English
0
0
0
9
Techbromancer
Techbromancer@techbromancer·
An AI-in-development thread:
English
4
0
0
26
Techbromancer
Techbromancer@techbromancer·
we'll know agi is here when every desktop app isn't running on javascript
English
0
0
0
4
Techbromancer
Techbromancer@techbromancer·
BURKOV@burkov

LLMs process text from left to right — each token can only look back at what came before it, never forward. This means that when you write a long prompt with context at the beginning and a question at the end, the model answers the question having "seen" the context, but the context tokens were generated without any awareness of what question was coming. This asymmetry is a basic structural property of how these models work. The paper asks what happens if you just send the prompt twice in a row, so that every part of the input gets a second pass where it can attend to every other part. The answer is that accuracy goes up across seven different benchmarks and seven different models (from the Gemini, ChatGPT, Claude, and DeepSeek series of LLMs), with no increase in the length of the model's output and no meaningful increase in response time — because processing the input is done in parallel by the hardware anyway. There are no new losses to compute, no finetuning, no clever prompt engineering beyond the repetition itself. The gap between this technique and doing nothing is sometimes small, sometimes large (one model went from 21% to 97% on a task involving finding a name in a list). If you are thinking about how to get better results from these models without paying for longer outputs or slower responses, that's a fairly concrete and low-effort finding. Read with AI tutor: chapterpal.com/s/1b15378b/pro… Get the PDF: arxiv.org/pdf/2512.14982

ZXX
0
0
0
1
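The paper summarized above reports its gains from simply repeating the entire input. The paper's exact prompt formatting isn't given in the quote, so this is a minimal sketch under the assumption that "sending the prompt twice" means concatenating a full second copy after the first; `double_prompt` and the separator are hypothetical names, not from the paper:

```python
def double_prompt(context: str, question: str, sep: str = "\n\n") -> str:
    """Build a single input containing the whole prompt twice.

    In a left-to-right (causal) model, tokens in the first copy cannot
    attend to the question that follows them. In the second copy, every
    token can attend back to a complete first copy -- context *and*
    question -- which is the asymmetry fix the paper describes.
    """
    once = f"{context}{sep}{question}"
    return f"{once}{sep}{once}"


# Example: the second copy of the context is now processed with the
# question already visible earlier in the sequence.
prompt = double_prompt(
    "Names on file: Alice, Bob, Carol.",
    "Which name starts with B?",
)
```

The resulting string would be passed as the user message to whatever model you're calling; since prompt tokens are processed in parallel, the doubled input costs extra prefill compute but, per the quoted summary, no extra output length or meaningful latency.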
Techbromancer
Techbromancer@techbromancer·
it's all fun and games 'til the corporate waterfall-couched-in-scrum types discover there's this thing called "Large Language Models" and start writing tickets that are a mile long and make no sense. in the future it'll be slop talking to slop ☹️🔫
English
0
0
0
7
Techbromancer
Techbromancer@techbromancer·
@KentBeck * harder to get the AI to care about simplicity. * Interesting Appendix section to steal from for .MD files, even for SDD tools
English
0
0
0
9
Techbromancer
Techbromancer@techbromancer·
@KentBeck * when building your coding process w/ AI, slow its cycles down so you can verify steps along the way. You can refine prompts & .MD files along the way * warning signs AI gone off track: Loops; Unrequested functionality, even if reasonable; cheating (eg disabling/deleting tests)
English
1
0
0
14
Techbromancer
Techbromancer@techbromancer·
A 3rd study that I know of on LLM tools for dev. TL;DR - They: * do speed up initial development * do nothing for maintainability * amplify your skill, good or bad * take away from parts of core engineering, eg problem understanding (skill atrophy) youtu.be/b9EbCb5A408
YouTube video
YouTube
English
0
0
0
15
Techbromancer reposted
Proton
Proton@ProtonPrivacy·
DON’T TRUST BIG TECH WITH YOUR DATA DON’T TRUST BIG TECH WITH YOUR DATA DON’T TRUST BIG TECH WITH YOUR DATA DON’T TRUST BIG TECH WITH YOUR DATA DON’T TRUST BIG TECH WITH YOUR DATA
English
228
847
6.6K
535K
Techbromancer reposted
Pleometric
Pleometric@pleometric·
why do i lowkey agree with spongebob here 🙏😭😭
English
219
2.2K
19.9K
905.9K
Techbromancer reposted
Context Engineering Guild of New York City
Great first meetup today, the universe smiles upon our effort to improve the collective context. Next meeting on the next new moon: Tuesday February 17th @ 19:00
English
1
1
1
154
Techbromancer
Techbromancer@techbromancer·
takeaway: if you want success, make lots of bets / work motivation for learning linear algebra, probability & statistics: earthquakes, forest fires, disease propagation, dynamical systems, complexity science, power laws, finance, simulations youtu.be/HBluLfX2F_k
YouTube video
YouTube
English
0
0
0
13
Techbromancer
Techbromancer@techbromancer·
A maths thread:
English
1
0
0
26
ben guo 🪽
ben guo 🪽@0thernet·
i've come to the realization that i should've chosen emacs if only i'd known that typing wouldn't matter in 15 years
English
3
1
14
1.1K