Eduardo Velasquez @[email protected]

2.1K posts

Eduardo Velasquez @eddiev@hachyderm.io
@guardivelasquez

Senior software engineer with over 35 years of experience in software development and architecture.

Lynnwood, WA, USA · Joined July 2009
477 Following · 206 Followers
Dmitrii Kovanikov @ChShersh
If you recognise this, take care of your knees
Dmitrii Kovanikov tweet media
887 · 445 · 6.2K · 206.3K
Eduardo Velasquez @[email protected] retweetledi
Jamie Bonkiewicz @JamieBonkiewicz
FBI agents have to retire at 57, air traffic controllers at 56, and pilots at 65. But somehow demented 70 and 80 year olds running the country is totally fine. Make it make sense.
792 · 26.7K · 181.7K · 2.7M
Petr Houška @Petrroll
@KooKiz @guardivelasquez It's not a great pattern, but the null-flow analysis doesn't handle certain things well, so it's kinda necessary in many cases (the alternative being way more complexity).
1 · 0 · 0 · 17
Kevin Gosse @KooKiz
I'm trying GPT-5.4 and scratching my head when seeing the generated code. What reasoning process could lead to explicitly initializing a property to null, _and_ using the null-forgiving operator?
Kevin Gosse tweet media (GIF)
7 · 0 · 14 · 2.7K
Eduardo Velasquez @eddiev@hachyderm.io
@marcsh @KooKiz The idea is that it will never be null after “something” happens. If I’m not mistaken, this syntax appeared before the required and init keywords were introduced
2 · 0 · 1 · 98
/// // @marcsh
@KooKiz @guardivelasquez Maybe the other person will know, but is that value ever going to be null? Because if so, it should be TheType? Name = null; If it's never supposed to be null, then it should be TheType Name { get; init; } which will gently remind you if you don't assign it in a constructor
3 · 0 · 0 · 103
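The pattern this thread is debating can be sketched outside C#: TypeScript has the same `null!` escape hatch for strict null checks, so the idiom and the safer alternative transpose almost one-to-one. This is a minimal illustration, not anyone's actual generated code; the class and property names are invented.

```typescript
// The idiom Kevin Gosse is questioning, transposed from C# to TypeScript
// (both languages accept `null!` to silence strict null checking).
class ServiceNullForgiving {
  // "Initialize to null, then assert not-null": the compiler stops warning,
  // on the unchecked promise that init() runs before any read.
  endpoint: string = null!;

  init(): void {
    this.endpoint = "https://example.invalid/api";
  }
}

// The alternative @marcsh suggests (C#'s required/init members) maps most
// closely to a constructor argument here: forgetting to supply the value
// becomes a compile-time error instead of a runtime null.
class ServiceRequired {
  constructor(readonly endpoint: string) {}
}

const a = new ServiceNullForgiving();
a.init(); // without this call, a.endpoint would be null despite its type
const b = new ServiceRequired("https://example.invalid/api");
console.log(a.endpoint === b.endpoint); // true
```

As Eduardo notes in the thread, the `= null!` form predates C#'s `required` and `init` keywords, which is one plausible reason it still shows up in generated and legacy code.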
Eduardo Velasquez @[email protected] retweetledi
Ujjwal Chadha @ujjwalscript
The BIGGEST lie in AI LLMs right now is "It learns." We are confusing a Context Window with a Brain. They are not the same thing. The cold reality is that AGI is much further away than the hype suggests.

1. The "Read-Only" Problem. Your brain physically changes when you learn. Synapses fire, pathways strengthen. You evolve. An LLM is a read-only file. Once training finishes, that model is stone cold frozen. It never learns another thing. When you correct it, it doesn't "get smarter." It just pretends to agree with you for the duration of that specific chat session. Close the tab, and the lesson is gone forever.

2. The "Context" Trap. "But it remembers what I said earlier!" No, it doesn't. Engineers are just re-feeding your previous sentences back into the prompt, over and over again, at massive compute cost. That isn't memory. That is a scrolling teleprompter. We are simulating continuity by burning GPU credits, not by building a persistent mind.

3. The RAG Band-Aid. Because models can't learn, we built an entire infrastructure of Vector DBs and RAG (Retrieval-Augmented Generation) to glue external data onto them. It's duct tape. We are trying to fix a lack of intelligence with a search engine. We are building systems that are 90% scaffolding and 10% model, trying to force a static equation to act like a fluid thinker.

4. The Result? We have built the world's greatest improviser, but it has severe anterograde amnesia. It can fake a conversation, but it cannot grow. It cannot compound knowledge.

True AGI requires Online Learning: the ability to update weights in real-time without catastrophic forgetting. We don't know how to do that yet. Not at scale. Not stably. Until we solve the "Static Weight" problem, we aren't building a mind. We're just building a really fancy autocomplete. Inference != Intelligence.
163 · 269 · 1.4K · 84.1K
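The "scrolling teleprompter" mechanism in point 2 of the thread (chat "memory" is just the full transcript re-sent with every request) is easy to sketch. This is a toy illustration under that description, not any real provider's API; `callModel` is an invented stand-in for an actual LLM call.

```typescript
// Chat "memory" as transcript re-feeding: nothing persists in the model;
// each turn simply sends the whole history again.
type Message = { role: "user" | "assistant"; content: string };

function callModel(history: Message[]): string {
  // Stub for a real model call: all it ever "knows" is `history`.
  return `(${history.length} messages in context)`;
}

class ChatSession {
  private history: Message[] = [];

  send(userText: string): string {
    this.history.push({ role: "user", content: userText });
    const reply = callModel(this.history); // whole transcript re-fed each turn
    this.history.push({ role: "assistant", content: reply });
    return reply;
  }
}

const chat = new ChatSession();
chat.send("My name is Ada.");
console.log(chat.send("What's my name?")); // "(3 messages in context)"
// Drop `chat` (close the tab) and the "memory" is gone; the model's
// weights never changed.
```

The growing `history` array is also why long chats cost more per turn: the context grows with every exchange, exactly the compute cost the thread complains about.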
Eduardo Velasquez @[email protected] retweetledi
Riley Coyote @RileyRalmuto
holy sh*t. this is hands down the coolest website i have ever found in my life. it's a live feed of the freaking Hubble Telescope AND James Webb Space Telescope. and the resolution is honestly so incredible i didn't think it was real. unbelievable. spacetelescopelive.org
438 · 6.2K · 42.7K · 2M
Eduardo Velasquez @[email protected] retweetledi
Barry R McCaffrey @mccaffreyr3
A serious warning to all democratic societies to faithfully uphold the rule of law.
Eva Chipiuk, BSc, LLB, LLM @echipiuk

After the Nuremberg Trials, one of the most unsettling conclusions did not come from the courtroom, but from the psychiatrist tasked with evaluating the defendants. Dr. Douglas Kelley, the U.S. Army psychiatrist assigned to assess many of the senior Nazi officials, expected to find monsters: people fundamentally different from the rest of humanity. He did not.

What disturbed him most was how ordinary they were. They were not raving madmen. They were not obvious sociopaths. They were intelligent, educated, and often convinced they were simply doing their duty, following orders, or serving a higher cause. Kelley warned that this was the real danger: evil does not always look abnormal. It often presents itself as competence, obedience, and institutional loyalty.

His central warning was deeply uncomfortable: there are people with morally vacant or destructive tendencies everywhere. In every society. In every era. What determines the outcome is whether systems elevate those people, shield them from accountability, and normalize their behavior, and whether ordinary citizens are willing to question authority when it matters most.

Modern bureaucracies and institutions are powerful precisely because they diffuse responsibility. Decisions are broken into policies, protocols, committees, and "best practices." Harm is rarely framed as harm; it is reframed as necessity, risk management, or compliance. Individuals are encouraged not to think morally, but procedurally. This is how ordinary people become capable of extraordinary wrongdoing: by outsourcing conscience to institutions and convincing themselves that accountability lies somewhere else.

The lesson of Nuremberg is not that "those people were different." It is that they were not. That is why vigilance matters. That is why blind trust in authority is dangerous. And that is why a healthy society must protect dissent, accountability, and moral courage, especially when it is inconvenient.

History does not repeat itself because people forget facts. It repeats itself when people convince themselves, "It could never happen here."

23 · 336 · 1.1K · 88.7K
Jake Tapper 🦅 @jaketapper
"To see what is in front of one’s nose needs a constant struggle." -- George Orwell
23.4K · 11.9K · 47.6K · 0
Dave @GamewithDave
To any gamers whose birth year starts with 19 still gaming👏🏻🫡
1.1K · 547 · 10.6K · 253.6K
Genius Tech @Geniustechw
Men, what is stopping you from dressing like this?
Genius Tech tweet media
5.9K · 63 · 566 · 394.9K
Libran💜 @libran_songol
if a woman sleeps with 10 men she's a slút, but if a man does it... he's ????
15.9K · 759 · 13.5K · 3.4M
Royce Morgan @PTSportsFix
Who’s your favorite TV show dad of all time?
A) Uncle Phil
B) Danny Tanner
C) Homer Simpson
D) Carl Winslow
E) Red Forman
F) Phil Dunphy
G) Hank Hill
H) Tim Taylor
I) Bob Belcher
J) Mike Brady
K) Coach Eric Taylor
L) Louis Huang
M) Michael Kyle
N) Comment other TV show dad 📺
5.5K · 128 · 1.9K · 709.3K
The Tennessee Holler @TheTNHoller
Maybe someone should tell Megyn R. Kelly 15 is not “barely legal”, it’s actually quite illegal?
40 · 314 · 2.1K · 19.7K
Wise @trikcode
Not sure if anyone needs to hear this, but: .NET is better than Java
179 · 61 · 1K · 45.8K
Acid Lemon @acid__lemon
@NeuralWave_ @trikcode I came here to see the answer. No answer. I guess I should have known an account named wise/trikcode would be a troll.
1 · 0 · 1 · 184
Eduardo Velasquez @[email protected] retweetledi
Dave W Plummer @davepl1968
I restore old computers, and am always curious how their classic performance compares to modern PCs. Are they a hundred times faster? A thousand? A million? Here are the stats. I wrote a Dhrystone test in K&R C that runs on everything I own, unmodified, from the PDP-11/34 up to my M2 Mac Ultra Pro. FWIW, the Mac Pro is 200000X as fast as the 11/34! Code: github.com/davepl/pdpsrc/…
Dave W Plummer tweet media
194 · 207 · 2.2K · 134.1K
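Plummer's method is the interesting part: run one unmodified benchmark everywhere, then compare machines as a ratio of scores. A toy sketch of that shape (this is not Dhrystone, and his actual code is at the linked repo; the workload and function names here are invented):

```typescript
// Toy cross-machine comparison: time a fixed integer workload, report a
// throughput score, and express relative speed as a ratio of two scores.
function score(iterations: number): number {
  const start = Date.now();
  let x = 0;
  for (let i = 0; i < iterations; i++) {
    x = (x + i * 7) % 1000003; // fixed integer workload, same on every machine
  }
  const elapsedMs = Math.max(1, Date.now() - start);
  // fold `x` in as zero so the loop result is observed, not optimized away
  return (iterations / elapsedMs) * 1000 + (x - x); // iterations per second
}

// Two machines' scores reduce to a single speedup factor, the form of
// Plummer's reported ~200000x for the Mac Pro over the PDP-11/34.
function speedup(fastScore: number, slowScore: number): number {
  return fastScore / slowScore;
}

console.log(score(1_000_000) > 0); // true
console.log(speedup(200000, 1));   // 200000
```

The reason Dhrystone (rather than this toy) is used for such comparisons is that it is a standardized workload with decades of published scores, so a single number is comparable across wildly different hardware.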
Aaron Stannard @Aaronontheweb
Thank you, legacy code author, very helpful
Aaron Stannard tweet media (×2)
26 · 6 · 250 · 28.3K