nanomader

705 posts

@nanomader

building @getgoalcoder

Joined August 2020
230 Following · 30 Followers
nanomader
nanomader@nanomader·
@kskrygan Well, finally, JetBrains is waking up. Just do it
0
0
0
7
Kirill Skrygan
Kirill Skrygan@kskrygan·
Would you be interested if JetBrains released a totally local AI agent, working 100% on your laptop, using our code insight engine and deeply integrated into the IDE? Yes, it will probably be 1 month behind the very recent frontier models, but no token bloodbath anymore. WDYT?
809
236
7.2K
484.8K
Sam Altman
Sam Altman@sama·
wow y'all love 5.5 we should think of something nice to do to celebrate!
2.6K
289
11.2K
767.4K
nanomader
nanomader@nanomader·
@haider1 This speed is unbelievable. Just a few days ago 5.5-codex was spotted; usually they stay very quiet about new models until the actual release
0
0
0
1.1K
Haider.
Haider.@haider1·
openai is already teasing gpt-5.6. so if you checked the internal codex logs to verify which model was being used, and you found the rollout mapping, most calls were routed to gpt-5.5, but one entry appears to show gpt-5.6, which means the codex environment may have had access to a rollout entry labeled
Haider. tweet media
67
68
1.3K
233.5K
nanomader
nanomader@nanomader·
AI has made software dramatically easier to create but it has not made software easier to trust. Sooner or later, we need public, verifiable proof that certain AI review prompts were actually run, against specific artifacts, with specific outputs. I mean real receipts, not "trust me"
0
0
2
25
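The "real receipts" idea above can be sketched minimally: bind the review prompt, the reviewed artifact, and the model's output together by hash, so anyone holding the same three inputs can re-check the claim later. Everything here (`make_receipt`, `verify`, the field names) is a hypothetical illustration, not an existing tool; a production scheme would also need a trusted signature and timestamping over the receipt.

```python
# Hypothetical sketch: a verifiable "receipt" that a specific AI review prompt
# was run against a specific artifact and produced a specific output.
import hashlib
import time


def sha256(data: bytes) -> str:
    """Hex digest of the SHA-256 hash of raw bytes."""
    return hashlib.sha256(data).hexdigest()


def make_receipt(prompt: str, artifact: bytes, output: str) -> dict:
    """Bind prompt, artifact, and output together by their hashes."""
    return {
        "prompt_sha256": sha256(prompt.encode()),
        "artifact_sha256": sha256(artifact),
        "output_sha256": sha256(output.encode()),
        "timestamp": int(time.time()),  # unsigned here; a real scheme would sign this
    }


def verify(receipt: dict, prompt: str, artifact: bytes, output: str) -> bool:
    """Re-hash the claimed inputs and compare against the receipt."""
    return (
        receipt["prompt_sha256"] == sha256(prompt.encode())
        and receipt["artifact_sha256"] == sha256(artifact)
        and receipt["output_sha256"] == sha256(output.encode())
    )


receipt = make_receipt("Review this diff for bugs", b"diff --git ...", "No issues found")
print(verify(receipt, "Review this diff for bugs", b"diff --git ...", "No issues found"))  # True
print(verify(receipt, "Review this diff for bugs", b"diff --git ...", "tampered"))  # False
```

Hashes alone only prove consistency of the triple; proving the model was *actually run* would additionally require attestation from the execution environment.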
nanomader
nanomader@nanomader·
@PavelSnajdr @elliotarledge rolling out a powerful model doesn't mean people will automagically be out of jobs. Adoption rate is a thing
1
0
0
63
Pavel Snajdr
Pavel Snajdr@PavelSnajdr·
@elliotarledge open your eyes and stop expecting what obviously isn't coming. they could have rolled out GPT Pro for 20x cheaper than they did; people could have been out of jobs already. did they? instead they raised the price
1
0
1
954
Elliot Arledge
Elliot Arledge@elliotarledge·
if ~gpt-5.7-spark has vision and is roughly at gpt-5.5 medium levels, i think a lot of white collar jobs could be replaced. it's up to openai to make the computer-use RL environments robust enough, and to get the training hparams just right. very impressive, and it can do a lot for me right now; it just needs speed and polishing.
10
6
216
16.3K
nanomader
nanomader@nanomader·
Yes, it works, but it's not recommended for many reasons. Please avoid doing it unless you do not care about output quality. To make it clear: switching reasoning models within a discussion is fine; switching LLM models within a discussion is not recommended, and it's better to start a new thread
0
0
0
23
Tibo
Tibo@thsottiaux·
Looking at the traffic dashboard for Codex just now, it would be scary if we didn't have a lot more compute coming online in the coming weeks. All according to plan fortunately.
251
101
4.9K
194.7K
Jake
Jake@JakeKAllDay·
@thsottiaux how about an auto mode à la Cursor so I don’t have to keep toggling between 5.4 mini high, 5.5 low, 5.5 med, etc? Save yourself some compute with efficient routing…
1
4
14
4.1K
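The "auto mode" ask above can be sketched as a toy heuristic router: pick a model tier from cheap signals about the request instead of toggling by hand. The tier names, keywords, and thresholds here are all hypothetical illustrations, not how Codex or Cursor actually route.

```python
# Toy sketch of auto-mode routing: map a coding request to a model tier
# using a cheap heuristic. Tier names and thresholds are illustrative only.

def route(prompt: str, files_touched: int) -> str:
    """Return a (hypothetical) model tier for a coding request."""
    # Broad or structurally risky work gets the strongest tier.
    hard = files_touched > 3 or any(
        word in prompt.lower() for word in ("refactor", "architecture", "debug")
    )
    if hard:
        return "5.5-high"
    # Medium-sized tasks get the middle tier.
    if len(prompt) > 400 or files_touched > 1:
        return "5.5-medium"
    # Small, local edits go to the cheap tier.
    return "5.4-mini-high"


print(route("rename this variable", files_touched=1))      # 5.4-mini-high
print(route("refactor the auth module", files_touched=5))  # 5.5-high
```

A real router would likely use a learned classifier and feedback from retry rates rather than keyword matching, but the compute-saving shape is the same: cheap calls stay cheap by default.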
nanomader
nanomader@nanomader·
this + "Codex Review: Didn't find any major issues. 👍" is the best feeling. Local code review works great!
nanomader tweet media
0
0
2
20
nanomader
nanomader@nanomader·
@mikeysee Yes, good prompt. For bugs I use the following one: While we are here with the current context, let's review the nearby surrounding code and steps to see if there is anything of value we could improve and/or fix.
0
0
0
69
Mikeysee
Mikeysee@mikeysee·
This has been a big unlock for me lately: "Is there anything else we should consider?" Use this magical phrase frequently. It really helps with sycophancy and over-adherence to the initial plan once implementation starts. It works really well after task implementation and before PR submission, to make sure you have considered all the edges.
4
2
34
2.3K
nanomader
nanomader@nanomader·
@dosco I have seen both sides of this in my 9-5, and sadly I do not have a recommendation as to how to help people make this transition, because it requires having an open mind and the ability to ask a lot of questions
1
0
1
35
spacy
spacy@dosco·
if your whole job is hand-writing code, even code that you think is fairly complex, i highly suggest taking a week off to get into LLMs. this message might not find the right audience on here since we're all pretty AI-pilled, but people out there need to hear this.
1
0
18
712
nanomader
nanomader@nanomader·
@de_zolaa @RaminNasibov I fully agree, because I am one such person, even though I spend ±15 hours daily in front of the computer working with AI. I do not have the source of this image; I saved it around 10 years ago from Reddit (it's not mine)
0
0
0
18
Ramin Nasibov
Ramin Nasibov@RaminNasibov·
I saw a guy at coffeeshop today. No iPhone. No laptop. No tablet. Just sitting there. Drinking his coffee.
2.4K
3.3K
54.4K
1.8M
nanomader
nanomader@nanomader·
We are getting closer to the detail level of Bosch's "The Garden of Earthly Delights" with chatgpt images v2
nanomader tweet media
0
0
0
29
V
V@VictorInFocus·
@gdb Building a character consistency tool: same character, different scenes. GPT-5.5 got the app together fast, and GPT Image 2.0 is making the actual swaps work
V tweet media (×4)
5
0
41
1.6K
Gail Weiner
Gail Weiner@gailcweiner·
Is it a coincidence that some heavy weights at OpenAI left and the models suddenly improve?
17
3
128
10K
nanomader
nanomader@nanomader·
I don't think people fully realize what hearing "your codex limit resets tomorrow" does to them! It makes people go a little crazy in the best way: prompting everything they can think of, building, exploring, staying curious, trying new things. I really believe it's much more than just free tokens. It feels like absorbing the message "hey, don't worry, tomorrow is another day! You can start fresh"
0
0
1
37
nanomader
nanomader@nanomader·
I agree! I am running anywhere between 3-7 Codex tabs, and I am running a full CI/CD flow including two reviews: one using a custom codex skill for local review, the second via the Codex GitHub review. If it weren't for tests, review, and bug fixing I would have 10x more commits, but I am for quality over quantity.
nanomader tweet media
2
0
1
176
cygaar
cygaar@0xCygaar·
I don't believe a single one of those "I'm running 20 Claude instances in parallel" posts. A single Claude Code instance requires a lot of back and forth to get right; 20 means you're churning out complete slop that doesn't work.
194
53
1.2K
69.7K