LawrenceDCodes
@LawrenceDCodes
29.4K posts

Dev 🥑 • @CodeConnector_ Advisor • #AI Explorer • @msstate alum • @X Community Notes Contributor • Building @Lawbot226

United States · Joined February 2014
2.5K Following · 15K Followers
LawrenceDCodes @LawrenceDCodes ·
@nnennahacks @opencode wonder if anyone's working on a sub-agent that runs continuously and switches model per a schema in a local .md. Gruntwork like this - use free tier, etc
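The idea in this post can be sketched in a few lines. This is a hypothetical illustration, not an existing opencode feature: the schema format, model names, and function names below are all invented for the sake of the example. A background sub-agent would re-read a local .md file and route each task to the model the schema assigns it, sending gruntwork to a free tier.

```python
# Hypothetical sketch: a sub-agent picks its model from a schema
# kept in a local .md file. Schema format and model IDs are invented.

def parse_model_schema(md_text: str) -> dict:
    """Parse lines like '- gruntwork: free-tier-model' into a task->model map."""
    schema = {}
    for line in md_text.splitlines():
        line = line.strip()
        if line.startswith("- ") and ":" in line:
            task, model = line[2:].split(":", 1)
            schema[task.strip()] = model.strip()
    return schema

def pick_model(schema: dict, task: str, default: str = "free-tier-model") -> str:
    """Route known tasks per the schema; fall back to the cheap default."""
    return schema.get(task, default)

SCHEMA_MD = """\
# Sub-agent model schema (hypothetical)
- gruntwork: free-tier-model
- refactor: mid-tier-model
- design: frontier-model
"""

schema = parse_model_schema(SCHEMA_MD)
print(pick_model(schema, "gruntwork"))     # free-tier-model
print(pick_model(schema, "unknown-task"))  # free-tier-model (default)
```

A real continuously running agent would wrap `pick_model` in a loop that re-parses the file when it changes, so editing the .md reroutes future tasks without a restart.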
LawrenceDCodes @LawrenceDCodes ·
Taste makes the difference 👌🏾
Kanika Tolver @KanikaTolver ·
Reading about agent design principles tonite lol 😂
LawrenceDCodes @LawrenceDCodes ·
@builtbyamiee I'm currently following far too many. I follow people who consistently say things I want to read. The ratio is completely irrelevant.
Data by Kaka 👩‍💻 @builtbyamiee ·
@LawrenceDCodes You are a programmer, and I believe you have good knowledge of mathematics and statistics. Over 15K people are following you, and you aren't following even 1/3 of a percent of your followers. That is what I mean by it not being a cool statistic.
Logan Kilpatrick @OfficialLoganK ·
Our AI Studio vibe coding roadmap for the next few weeks:
- Design mode
- Figma integration
- Google Workspace integration
- Better GitHub support
- Planning mode
- Immersive UI
- Agents
- Multiple chats per app
- Simplified deploys
- G1 support
And more, should be fun : )
LawrenceDCodes @LawrenceDCodes ·
@builtbyamiee I'm asking you to explain what's not cool about the statistics. I'm asking again.
Burke Holland @burkeholland ·
I keep trying to tell people this. The context window is not a memory. It’s a room. The more stuff you put in there, the more cluttered it gets until eventually the model just stays confused. Don’t listen to me. Listen to Matt.
Matt Pocock @mattpocockuk

Doing some experiments today with Opus 4.6's 1M context window. Trying to push coding sessions deep into what I would consider the 'dumb zone' of SOTA models: >100K tokens. The drop-off in quality is really noticeable. Dumber decisions, worse code, worse instruction-following. Don't treat 1M context window any differently. It's still 100K of smart, and 900K of dumb.

LawrenceDCodes @LawrenceDCodes ·
@PromptLLM @GeminiApp @PromptLLM Maybe, maybe not. If there's a flight of stairs consisting of 15 steps I'm the type of person who appreciates the 1st step. And the 4th. And the 12th.