9.8K posts

@transpiracy

i love my girlfriend and coding.

Joined January 2009
1.7K Following · 710 Followers
Pinned Tweet
@transpiracy retweeted
noah glass @noah ·
@transpiracy ·
@pvncher @0xblacklight @aisdk assuming single step. if you have so much as even 1 more step the caching already pays for itself. no idea who is doing a single query without any tool loops or follow ups. also applies if you reuse the same system prompt between differing queries
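(A quick check of the break-even claim above. The multipliers are the ones cited in this thread — cache writes at 1.25x the base input price, cache reads at 0.1x — and the prefix size is a hypothetical example, not real usage data.)

```python
# Back-of-envelope: when does prompt caching pay for itself?
# Multipliers as cited in this thread: cache writes cost 1.25x the
# base input price, cache reads 0.1x. Token counts are made up.
BASE = 1.0    # relative price per uncached input token
WRITE = 1.25  # cache-write multiplier
READ = 0.10   # cache-read multiplier

def cost(prefix_tokens: int, steps: int, cached: bool) -> float:
    """Total input cost for a shared prefix resubmitted on every step."""
    if not cached:
        return prefix_tokens * BASE * steps
    # first step writes the cache, every later step reads it
    return prefix_tokens * (WRITE + READ * (steps - 1))

prefix = 10_000  # hypothetical system prompt + tool definitions

print(cost(prefix, 1, cached=True))   # 12500.0 — single step: 25% upcharge
print(cost(prefix, 1, cached=False))  # 10000.0
print(cost(prefix, 2, cached=True))   # 13500.0 — one more step: already cheaper
print(cost(prefix, 2, cached=False))  # 20000.0
```

With a single step caching costs 25% more; by the second step (one tool loop or follow-up) the cached path is already cheaper, which is the point being made above.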
eric provencher @pvncher ·
@0xblacklight @aisdk For agentic coding - sure, but if you want to just do simple queries, the 25% upcharge for cache writes can be a real pain.
Kyle Mistele 🏴‍☠️
dang, have y'all ever tried using opus for agentic coding without cache-control? it's like lighting money on fire. found the @aisdk option for it for anthropic — seems like it should be enabled by default since it costs u (almost?) nothing
@transpiracy ·
they should invent fun hobbies that can take your mind off of work im going crazy over here
@transpiracy retweeted
e @etc__fc ·
“oracle-zoned” is such apt phrasing for the realization I had last year that most people actively resent anything resembling clairvoyance, and at best view you as little more than a human hot-take aggregator kept around solely as some sort of spectacle
Vivian@suchnerve

Autistic people who specced into manually-learned social skills get oracle-zoned rather than passing for allistic. You generally get seen as a useful and interesting font of mysterious wisdom, not as a fellow normal person who should be socially included.

@transpiracy retweeted
Trevor Levin @trevposts ·
I did the math a couple weeks ago and it turns out a vegan prompting a frontier LLM *every second, 24/7* consumes less water than the average omnivore who never uses AI.
mary@theoceanblooms

people are trying to argue with this and it’s literally the truth. a kilogram of beef requires over 15,000 litres of water to produce. a vegan who uses chatgpt every day is living a more sustainable lifestyle than someone who regularly eats beef while boycotting AI.

@transpiracy ·
@KentonVarda except it compacts way sooner than 1M on the subscription plan
Kenton Varda @KentonVarda ·
People pointed out that under the subscription plan, input token cache reads are *free* (don't count against quota), as opposed to API pricing where they cost 1/10 of uncached tokens. I guess that means if you use Claude Code you can actually use 1M context without getting screwed over on cost. Us poor Opencode users paying API pricing, though, basically can't. Seems like if you stay under 100k tokens, API pricing is basically negligible and I'd rather pay it than lock myself in, but a single 1M conversation would be expected to be 100x the cost of a 100k conversation, which is quite substantial -- hundreds, maybe thousands of dollars for one convo. Yikes.
Kenton Varda @KentonVarda ·
Claude has a 1M context window now, but realistically costs skyrocket around the 200k mark since input token costs are O(n^2) (due to resubmitting the chat history on every turn/step -- even cache hits can get pricey).
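(A sketch of the quadratic-cost argument above: each turn resubmits the entire chat history, so total input tokens grow with the square of conversation length. The per-turn growth figure here is illustrative, not measured.)

```python
# Why conversation input cost grows ~quadratically: every turn
# resubmits the whole history, so total input is the sum of all
# intermediate context sizes. Turn size is an illustrative guess.
def total_input_tokens(final_context: int, tokens_per_turn: int) -> int:
    """Sum of context sizes submitted across a conversation that grows
    by tokens_per_turn each turn until it reaches final_context."""
    turns = final_context // tokens_per_turn
    return sum(t * tokens_per_turn for t in range(1, turns + 1))

small = total_input_tokens(100_000, 1_000)    # conversation ending at 100k context
big = total_input_tokens(1_000_000, 1_000)    # conversation ending at 1M context
print(big / small)  # ~99x: 10x the context, roughly 100x the input cost
```

This matches the back-of-envelope in the tweets above: a 1M-token conversation costs on the order of 100x a 100k one, even though the context is only 10x larger.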
@transpiracy ·
@zebassembly your colleagues should read my application so that i can join in on the vagueposting
@transpiracy ·
minimax m2.7 is truly a contender for worst transformer ever trained dawg
@transpiracy retweeted
😈 @turtlekiosk ·
we should stop using react and use a more efficient desktop app stack BUT we should take up the same amount of resources by including a physics engine in the UI layer of every program so you can do stuff like detonate a text field if you feel like it
@transpiracy retweeted
Conner O'Malley @conner_omalley ·
Irish Zionism
@transpiracy ·
@noahzweben how is it even this slow? no other harness has startup times this egregious, and it behaves like this even spinning one up from the desktop app
Noah Zweben @noahzweben ·
Remote Control - Session Spawning: Run claude remote-control and then spawn a NEW local session in the mobile app.
* Out to Max, Team, and Enterprise (>=2.1.74)
* Have GH set up on mobile (relaxing soon)
* Working on speeding up session start-time
@transpiracy ·
@0xTejpal @matanSF @droid much worse product? worse than something you didn’t make at all and instead fed copious amounts of steering to for gaming the benchmark?
@transpiracy ·
@thdxr like there is clearly interest from people in creating/discussing issues, and even a few PRs proposing fixes but they seem to just go stale :(
@transpiracy ·
@thdxr given the difficulty of filtering PRs, do yall have some system for determining when issues should be tended to / addressed? rn amazon bedrock support is very neglected and not sure how to drum up support for getting PRs okayed or assigning ppl to the issues
Theo - t3.gg @theo ·
Maria is speaking straight facts here. Please don’t spam us with giant PRs adding new features. Just tell us what you want and we’ll triage accordingly. Appreciate the hype around T3 Code but please chill a LITTLE bit guys
maria@maria_rcks

to the people contributing to t3.codes could you please... stop?
- do not file a 9k loc pr that adds 7 features on top of what you advertised in the title.
- ANY ui change should have before / after screenshots, idk how this isn't common knowledge.
- and for the ones adding providers, i'd just wait for @jullerino / @theo to add them.
if you want to make things right:
- first open an issue with something you'd like changed / added with clear examples of how it'll work / look, also attach examples from other open source apps that do that specific thing well
  - something as simple as changing the wording "open pr" to "view pr"
  - removing a stray dot in the ui for the working animation
and most important of all... make it easy to review

@transpiracy ·
@SeriesInfinityy because, ultimately, a human can always review the resulting code, modify it, or prompt further iteration. the contribution guidelines are drawing a fake line in the sand.
@transpiracy ·
@SeriesInfinityy using AI for “codebase understanding” is effectively the same thing as asking it to perform the implementation for competent developers. the only difference is imposing on yourself the additional step of manually altering the individual files based on the guidance the LLM spat out
@transpiracy ·
the minecraft LCE forks are so funny because why do they consistently have anti-LLM statements in the contribution guidelines and yet every PR is completely written by AI "codebase understanding" my ass 😭
[two images attached]
xXx_CANINEFAGGOT_xXx @caninefaggot ·
@transpiracy im gonna have to run my own version that just fixes the build system without any of this silliness going on, i wanna keep it as close to the original code as possible
@transpiracy retweeted
Ryan Florence @ryanflorence ·
I remember writing prompts as comments inside an empty function body to get those sweet vscode co-pilot completions a hundred years ago