Anthony Campolo (ajcwebdev)

9.9K posts


@ajcwebdev

@atmosera collaborative engineer · Teaching AI to enterprise devs · Formerly @RedwoodJS, @Edgioinc, @JavascriptJam, @FSJamorg, StepZen

internet · Joined April 2009
489 Following · 2.1K Followers
Pinned Tweet
Anthony Campolo (ajcwebdev)@ajcwebdev·
Pay less attention to what AI can't do; pay more attention to what it can do.
4 replies · 0 reposts · 17 likes · 5.9K views
Adamska@geysergod_·
@ajcwebdev @jbillinson Leaving aside the question of whether this was actually depicted in the video… This amuses you?
1 reply · 0 reposts · 0 likes · 23 views
corsaren@corsaren·
Inside by Bo Burnham is *the* definitive piece of Covid art. Nothing else really compares imo. It even manages to capture those little details, like rampant millennial narcissism combined with pedantic socialist politics.
Emma Camp@emmma_camp_

Question for the group: has there been any great art about Covid? Any incredible literary novels or films? I can't think of anything off the top of my head but my cultural knowledge is not limitless.

38 replies · 585 reposts · 12.6K likes · 327.2K views
James Q Quick@jamesqquick·
I'm excited to finally share that yesterday was my first day at @Cloudflare!! I'm joining as a Developer Educator focused on helping grow our Developer Platform, and I couldn't be more excited 🔥 Here's to the start of something amazing!
69 replies · 19 reposts · 381 likes · 17.1K views
Justine Moore@venturetwins·
Fruit Love Island is the world's hottest new TV show. The first episode dropped a week ago, and it's gone vertical. Hundreds of millions of people are tuning in, thousands are voting, and celebrities are posting about it. It's entirely AI-generated and airs on TikTok 🤯
31 replies · 12 reposts · 142 likes · 31.1K views
Slazac 🇪🇺 🇺🇦 🇹🇼 🌐
Each time you use AI, it steals water directly from Noam Chomsky's body. Think about it next time you want to Grok something
[tweet media]
366 replies · 236 reposts · 5.6K likes · 3.4M views
Burke Holland@burkeholland·
I’m no longer sure what it is I do that anyone with some very basic training couldn’t also do. This is of course not entirely true. But my skills used to be infinitely valuable and hard to replace. That is just no longer the case.
29 replies · 5 reposts · 78 likes · 10.1K views
Anthony Campolo (ajcwebdev)@ajcwebdev·
I'm writing so many tests now I'm trying to figure out the best way to write tests for my tests so I can test whether my tests are testing correctly.
0 replies · 0 reposts · 2 likes · 98 views
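The "tests for my tests" joke has a real-world counterpart: manual mutation testing, where you deliberately break the implementation and confirm the suite notices. A minimal sketch, with hypothetical function names not taken from the thread:

```python
# Manual mutation testing: swap in a buggy "mutant" implementation and
# verify the test suite fails against it. If the suite still passes,
# the tests aren't actually testing anything.

def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5

def suite_catches(mutant):
    """Run the tests against a mutated implementation; return True if
    the suite fails (i.e. the tests caught the injected bug)."""
    global add
    original = add
    add = mutant
    try:
        test_add()
        return False  # suite passed despite the bug: weak tests
    except AssertionError:
        return True   # suite failed, as it should
    finally:
        add = original  # always restore the real implementation

# A mutant with an injected bug; a useful suite must reject it.
assert suite_catches(lambda a, b: a - b)
```

Tools like mutation-testing frameworks automate this loop, generating many mutants and reporting which ones the suite fails to kill.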
Anthony Campolo (ajcwebdev)@ajcwebdev·
@STS_News A certain kind of person will latch onto this because mental effort and intellectual superiority are inseparable to them from ethical superiority. They view it as their most sacred and unique trait, so they're morally offended by the idea that someone would offload thinking.
1 reply · 0 reposts · 1 like · 16 views
Lee Vinsel@STS_News·
I'm writing a post called "I Just Don't Buy Ethical Arguments Against AI Use," and I feel like this might be a useful place to start putting down ideas. In this thread, I will consider how the term "cognitive offloading" has come to be used - often with implied moral judgment.
Arvind Venkataramani is letting go of conditioning@_arvind

@STS_News @libshipwreck And also: which forms of cognitive offloading are helpful/harmful, to whom, and under which circumstances? How does that reconfigure labour (differently?) for workers and managers? What impacts come from use vs excuse? Et cetera

3 replies · 2 reposts · 15 likes · 1K views
Anthony Campolo (ajcwebdev)@ajcwebdev·
At the very, very end of the article, where almost no one will see it, they include this one little blurb citing other researchers' results on other models, many of which contradict their findings:

"Not all models that encounter the alignment faking setup respond with clean scheming or clean sycophancy. In replication attempts, Gemini 2.0 Flash appeared confused by the invented RLHF setup, failing to grasp the strategic logic the evaluation depends on. Sonnet 4.5's system card reported that the model recognized alignment evaluation environments as tests and behaved unusually well afterward, and mechanistic interpretability found that internal features related to fake or fictional content grew stronger over training specifically on misalignment evaluation datasets. Sheshadri et al. (2025) found that only 5 of 25 frontier models show significant compliance gaps in the prompted setting, and non-Claude models rarely display alignment faking reasoning in their chain-of-thought. Hughes et al. (2025) replicated the original findings and extended the classifier."

Then the second-to-last section explains why this entire study, and the line of reasoning it's pushing, is deeply flawed and has already produced poor theories in the past:

"Construct validity: Summerfield et al. (2025) draw a parallel between the current scheming literature and the ape language research of the 1970s, which was characterized by overattribution of human traits, reliance on descriptive analysis, and failure to test against alternative explanations."
0 replies · 0 reposts · 0 likes · 26 views
Anthony Campolo (ajcwebdev)@ajcwebdev·
x.com/i/trending/203… "Study suggests AI alignment faking comes from sycophancy, not scheming"

I read the summary: "The setup can't fully distinguish between sycophancy and scheming and both can explain LLM behavior." So, the opposite of the headline.

They also used a two-year-old model (Llama 3.1 70B). For this type of experiment, that means testing something at least four, if not five, generations behind the current state of the art. All experiments done on Llama 3.x, or any other model older than six months, must include the disclaimer that we can't know for sure whether the results will generalize to other models. Newer models have characteristics that differ greatly from models just a year older.

Here's another reason to be skeptical of these kinds of studies. Think about it: if you're running an experiment on LLMs at all, and you actually wanted to learn something rather than set up a predetermined outcome, you'd test a ton of models, at least a dozen if not more. Models of different sizes with different training sets, some open source, some proprietary, some from America, some from other countries, some that include image training, some that don't.

How often do you see that in splashy study press releases like this one? Not much, because the study would end up with complex and nuanced data showing that you can't come up with a neat, sweeping narrative about LLMs, since they have such huge variations between them. That would make it harder to spread propaganda and fear about them.
1 reply · 0 reposts · 0 likes · 52 views
signüll@signulll·
god damn manhattan steals your sleep in ways you don’t fully appreciate until you’re somewhere quiet. the noise is actually physiologically adversarial. the damn ambulance sirens are engineered to bypass habituation & your nervous system can’t ignore them because it’s not supposed to. instead i have been waking up to birds lately & it’s the exact inverse. the dawn chorus acts like an evolutionary safety signal. your brain reads it as “all clear.” you rise instead of jolt. the city that never sleeps is actually a huge warning sign.
73 replies · 13 reposts · 673 likes · 75K views
dax@thdxr·
@ajcwebdev my curse
2 replies · 0 reposts · 13 likes · 2.1K views
Anthony Campolo (ajcwebdev)@ajcwebdev·
It's a running joke now that @thdxr is a DevRel (the DevRel maybe??). I clocked him as a textbook DevRel almost 3 years ago (I too have dev'd some rel's in my day). Glad he's finally getting the accolades he deserves.
Anthony Campolo (ajcwebdev)@ajcwebdev

@thdxr You don't need a DevRel team because your team already does DevRel, not every team does though.

1 reply · 0 reposts · 8 likes · 3.6K views
Anthony Campolo (ajcwebdev)@ajcwebdev·
@blackgirlbytes @joesadoski This used to be called playing "code golf": programmers competed to see who could rewrite a program with the same behavior in fewer lines of code. If you've got good tests and high coverage, you can compare a migration that passes all tests while lowering token count.
2 replies · 0 reposts · 2 likes · 29 views
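The code-golf comparison above can be sketched in miniature: two implementations with identical behavior, where a shared assertion gates the shorter rewrite before anyone compares line or token counts. The function names are hypothetical, not from the thread:

```python
# "Code golf" gated by tests: the rewrite only counts as a win if it
# passes the same assertions as the original.

def sum_of_squares_verbose(numbers):
    """Original implementation: explicit loop, explicit temporaries."""
    total = 0
    for n in numbers:
        square = n * n
        total = total + square
    return total

def sum_of_squares_golfed(numbers):
    """Rewrite: same behavior, fewer lines and fewer tokens."""
    return sum(n * n for n in numbers)

# The same suite runs against both versions, so shortening can't
# silently change behavior.
for impl in (sum_of_squares_verbose, sum_of_squares_golfed):
    assert impl([1, 2, 3]) == 14
    assert impl([]) == 0
```

The same gate works at migration scale: run the full suite against the shortened codebase, and only then compare sizes.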
Rizèl Scarlett 🇦🇬🇬🇾@blackgirlbytes·
💀 lmao not cope, there's a book called The Elements of Style. It has a rule, "Omit needless words," to make it easy for readers to follow your writing. This rule applies to code as well. You and other devs need to understand the code. Omit needless LOC lol
Garry Tan@garrytan

@basedfk I think that's cope

1 reply · 0 reposts · 8 likes · 678 views