
Tyler Johnson
59 posts

Tyler Johnson
@tjohnson640
Code author at @livefront. ❤️ iOS, swift, and design.





We’re Measuring AI on the Wrong Ruler | John Nosta, Psychology Today

The relentless urge to compare AI and human intelligence may be a mistake.

Key points:
- We assume artificial intelligence (AI) and humans share the same scale of intelligence.
- Human thought carries lived consequence while AI computation does not.
- One ruler may not be able to measure two different kinds of thinking.

Every debate about artificial intelligence (AI) seems to revolve around the same question: Is it smarter than we are? The subtleties of the question might change, and the endpoints might be argued, but behind the cacophony of authoritative brilliance is a shared assumption: that intelligence lives on a single line. More of it on one end, less on the other. Humans are somewhere along that spectrum, and machines are moving toward us. But with all the discussion and debate, we rarely stop to examine the ruler itself. And the moment we ask whether AI is ahead of us, we have already accepted that we are measuring the same thing.

The Illusion of a Shared Scale

It’s understandable why we default to this handy ruler. Large language models create the very stuff of our humanity, from words to images. Their output clearly looks like thinking, and it is often better than what we humans produce. But let’s be careful not to get our hand slapped by that ruler in the process. Here’s what we need to consider: when surface outputs converge, we assume structure also does. Thought for thought and concept for concept seem to arrive along a continuum where a “cognitive assessment” can be placed alongside them.

But human cognition is not just output quality; it’s consequence-bearing. When you make a decision, you carry the aftermath forward. And when you change your mind, that revision becomes part of your biographical narrative. It unfolds through time and alters who you are. AI computation does none of this. It generates responses without any biography.
It doesn’t carry yesterday into tomorrow in any lived sense. Its fluency is extraordinary, but it’s reversible, consequence-free, and precariously fragile in its understanding. To measure both along a single axis of “smart” flattens the difference and misses key opportunities.

Optimization Is Not Superiority

So, let’s start with some basic assumptions. A calculator outperforms you at arithmetic. A navigation system like Waze outperforms you at route planning. Yet we certainly don’t conclude that either possesses deeper intelligence. What we recognize is optimization for a specific task. The confusion (and trouble) begins when AI’s optimization extends into domains that are traditionally human, such as writing and creativity. And because that terrain feels familiar, we assume we are witnessing a better version of ourselves. But resemblance is not equivalence.

If we insist on placing human thought and machine computation on the same ruler, we will misread both. The machine appears superhuman because it excels at measurable outputs. The human appears inefficient because we hesitate, revise, doubt, and sometimes contradict ourselves. Those very “inefficiencies” are inseparable from what makes human cognition distinct.

A Different Kind of Comparison

What if the real mistake is not overestimating AI or underestimating ourselves, but misclassifying what we are comparing? Human thought is embodied and autobiographical. It’s shaped by lived experience and future consequence. AI, by contrast, operates through statistical inference across vast datasets. It identifies patterns with astonishing scale and speed. Both generate language, and both can solve problems. But the architectures that define the “thinking” are not interchangeable. And when we collapse them into a single metric of intelligence, we distort the conversation. We fuel hype on one side and anxiety on the other. If we step off that single “axis of smart,” the debate shifts.
The question is no longer whether AI is ahead of us. It becomes more precise: What kind of cognitive system is this, and how does it intersect with ours? That shift does not minimize AI’s power; it helps clarify it. It also preserves space for a more honest account of the multifaceted complexity of human thinking, from fear to flow.

Rethinking the Frame

The language we use shapes the future we imagine. If we continue to treat intelligence as a single measurable quantity along a single axis, we’ll keep asking whether machines are catching up or surpassing us. If, instead, we recognize that we may be dealing with different dimensions of cognition, we open a different and more nuanced path. To this point, the age of AI may not hinge on who is smarter but on whether we can abandon a model of intelligence that was too narrow to begin with.

The first step is simple: question the ruler.

Read more: psychologytoday.com/us/blog/the-di…





Donald Trump is president in every one of these states. Gas prices are much higher in the blue states. The "Affordability Crisis" is caused by DEMOCRAT POLICY. As governor of California I will end Democrat 'climate' insanity so we can have $3.00 gas.


Scott Adams and Cognitive Dissonance | Richard Cocks, The Orthosphere

Scott Adams wrote a book called “Loserthink” which was designed to help people be more rational. One of the habits he says to avoid is “mind reading”: imagining that we can know someone’s inner motivations when they have not stated them. So, for instance, critics will say Trump only cares about himself. How could they possibly know that? Did they ask him? And if they asked him, can they be sure that he is right? For someone who cares only about himself, he certainly seems to care about his family a lot. Also, as Adams points out, if Trump is indeed a narcissist and wants to polish his image, the best way of doing that is to be a highly effective president.

Trump says exaggerated things about himself, but then he does that with everything. Dana Carvey satirizes him as saying things like, “My wife is the best wife. She’s a very, very good wife. No one has a better wife than me.” But then he says the same thing about America on any number of fronts. Adams’ characterization of what Trump says is that it is mostly “directionally correct.” If his rallies are well attended, are they the most well attended ever? I don’t know. But the hyperbole is in line, at least, with the good attendance. When Biden’s inner circle said that he was not cognitively impaired, was that directionally correct? No. It was the opposite of the truth. Conflating the directionally correct with a blatant falsehood is not an example of clear thinking.

But then this paragon of rationality, Adams, is a determinist, which is immediately self-defeating. If you can’t figure this out for yourself, then you are, frankly, an idiot. And yet the majority of scientists and philosophers believe in it. It doesn’t say much about human cognitive abilities that this is the case.
Like the emperor’s new clothes, looking around and finding so many others engaging in a suspension of disbelief on the topic must provide emotional cover for this egregiously irrational notion. Materialism seems to imply determinism, so believing in the former means committing to the latter.

Adams is to be complimented on seeing through the pretensions of AI to be truly intelligent or capable of consciousness. He has tried using any number of AIs to help him in his work as a cartoonist, to make videos, and to streamline his work process, sometimes paying hundreds of dollars for the privilege, only to find that the claims were empty promises. At one point he wanted to feed a book into an AI, only to find that the upper limit was a few hundred words. This fact had not been explained before he started. If you make an AI video, each video is sui generis (unique unto itself). What you cannot do is get it to make the same video with alterations. You cannot say, “Create the scene again, but with an altered ‘camera’ angle,” or, “Do it again, but remove items that make it clear this is not 1930s New York.”

AI companies are offering “agents” that are supposed to be able to, for instance, book plane flights for you. You will have to give them your calendar details and your credit card information. Can you trust them? No. Companies are laying off tens of thousands of workers with the idea that their work will be replaced with AI. This is alarming because it can’t be. Is our economy about to implode as a result? LLMs can pass the bar exam for lawyers, but they cannot do the work of lawyers. They also hallucinate and make up references complete with fake citations. Hallucinations cannot be eliminated; they are part of the very fabric of LLMs. Adams suspects that companies know all this but use the excuse of AI for the redundancies they wanted to make anyway. It would be mind reading to claim to know for sure.
A side note that gives me pause is that Alan Turing wrote an article saying that if a machine were ever to be actually intelligent, it would have to be capable of making mistakes. Algorithms are solutions to known problems. They do not make mistakes, by definition: if a procedure does not lead consistently to the solution of a problem, then it is not an algorithm. Real intelligence means dealing with the unknown, not reading off an answer someone else has provided. It is, presumably, a coincidence that LLMs make mistakes. Still, it is a little unnerving.

LLMs are not algorithms. Algorithms are deterministic: an algorithm is a step-by-step set of instructions that, if followed, is guaranteed to produce the correct result. LLMs use statistical methods to predict the next word based on millions of pages of human writing. There is nothing guaranteed or predictable about this process. Every iteration is different.

Adams notes that LLMs are just pattern-recognition devices and not intelligent. True. But then he is as likely to say that the same thing is true of human beings; that we are not intelligent, either. However, according to this line of thought, that observation must itself be merely pattern recognition and not actually true. He will slide between these positions without realizing that he has hoisted himself with his own petard.

It was Ken Wilber and Rupert Sheldrake who introduced me to the idea of a performative contradiction and the reflexive implications of what people claim. To say “There is no truth” is a performative contradiction: the assertion is itself a contention of truth, so if it is true, then truth does exist. But, for some reason, materialists, nominalists, positivists, and analytic philosophers (same thing) have never heard of performative contradictions and engage in them with abandon.
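The deterministic/stochastic contrast drawn here can be sketched in a few lines of Python. This is a toy illustration, not anything from the article: the word list and probabilities are invented stand-ins for a learned next-word distribution, set against a classic algorithm (Euclid's gcd) that always returns the same provably correct answer.

```python
import random

# A classic algorithm: Euclid's gcd. Deterministic by nature --
# the same inputs always yield the same, provably correct answer.
def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

# A toy stand-in for LLM generation: sample the "next word" from a
# probability distribution. The words and weights are invented for
# illustration; a real model derives them from learned parameters.
def next_word(rng: random.Random) -> str:
    words = ["the", "cat", "sat", "mat"]
    weights = [0.5, 0.2, 0.2, 0.1]
    return rng.choices(words, weights=weights, k=1)[0]

print(gcd(48, 18))  # always 6, on every run
print([next_word(random.Random(seed)) for seed in range(5)])  # varies with the seed
```

The gcd call is reproducible without any extra machinery, while the sampler only becomes repeatable if you pin the random seed, which is roughly the distinction the paragraph is pointing at.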
So much of what they write involves these issues that analytic philosophy would probably have to close up shop if they ever acknowledged them. I’m sure some readers would like me to shut up about them. However, as soon as they are encountered, it is possible to validly dismiss whole trains of thought.

Adams likes to remind his listeners that he is a trained hypnotist. And the first thing one learns as a hypnotist, he says, is that people are not particularly rational but are more driven by emotion. So far, so good – except he then drops the qualifier “particularly.” He likes to point out some instance of irrationality as though it were proof that humans are never rational. The fact that a hypnotist, trained in persuasion, can sometimes manipulate people is somehow supposed to be evidence that we are not rational at all. Likewise, he thinks the fact that events in the brain can interfere with “free” decision making proves we never have free will. This is certainly an instance of extreme cognitive dissonance and word salad. His conclusions do not follow from his premises at all.

As Iain McGilchrist comments in The Matter With Things, optical illusions do not prove that our eyes can never be trusted. They prove, instead, that vision is not infallible. Nobody thought eyesight was unerring in the first place. We automatically adjust our estimations of color according to the perceived lighting and its interplay with shadows. When we see one object and part of it is in shadow, we don’t automatically think that the object is multi-colored. We see this kind of lighting effect all the time, and we make allowances for it. This means that there can be a carefully contrived illusion in which the color grey looks light in one place on a checkerboard and dark in another when, in fact, it is technically the same shade of grey. This illusion is only an illusion when seen on a screen or on the page of a book. In real life, we would be making the adjustment correctly.
To get rid of this illusion would mean that our estimation of color would, in everyday contexts, be hopelessly wrong. The “grey” would not be the same in the concrete world. So, choose your poison: be deceived by a drawing and continue to function well in real life, or refuse to be fooled by a drawing and get it all wrong in reality. We are better off as we are, and the illusion is no cause for alarm.

Listening to Adams on these topics is like listening to a record skipping. The sudden break in logic, with the several necessary intervening premises missing, can make one think, “Hang on. Did I miss something? How did he get from here to there?”

Determinism is true; therefore, rationality is impossible. How did you arrive at this conclusion? Using rationality (hence, determinism is false).

We are merely pattern-recognition devices. Is recognizing a pattern the same thing as identifying the truth? No. Is it therefore true that we are merely pattern-recognition devices? Whether that statement is true or not lies outside the purview of human capabilities.

I, a hypnotist, can sometimes fool you and manipulate you; therefore, you are never rational. This is saying that being wrong sometimes means you are never right. That does not follow.

Sometimes organic events in your brain, or an electric probe in your brain, can make you think or do something. That means you have no free will. How could someone who thinks this badly dream of writing a book telling other people how to be more rational?

Optical illusions prove that our senses cannot be trusted. Then how come it is so hard to contrive these optical illusions? How is it that there is a very limited number of them, to the extent that it would be quite possible to recite them all? Isn’t what makes them interesting their very deviation from normal perception? There would be no gee-whiz element if they were ubiquitous. This can be compared to “the news.” Something is only “the news” if it is not routine and commonplace.
No headline ever says “Man Dies From Heart Disease.”

Having written all that, I listen to him every day and find his insights interesting enough to keep tuning in. I appreciate the fact that he correctly posits that AI as we know it will never reach AGI (artificial general intelligence) and is supremely skeptical of “models” used to make factual claims about reality. They do not work in finance, and they do not work for climate change. New vital factors that had not been included in the models, like the role of plankton in the sea, are constantly being found, invalidating all previous predictions. And yet no alarm is shown by the climate scientists. If you knew nothing about human nature, you would think they would all hang their heads in shame, quit their jobs, and find a better use of their time. This suggests that the goal is to promote climate change and that they are just using whatever “evidence” seems to confirm it, rather than the evidence leading them to the conclusion.

Adams was once employed to make models for a corporation. He learned to ask, “What would you like it to prove?” His boss made it clear that he had no faith in them and merely pointed at them when they happened to agree with what he was intending to do anyway. Models are determined by the assumptions the modeler makes. The same is true of philosophical arguments. Unprovable metaphysical claims push us in inevitable directions. Find out someone’s core beliefs and his main conclusions can be predicted.

Adams is currently dying of late-stage bone cancer brought on by metastasized prostate cancer. You would not know it from his demeanor. He has largely lost the use of his hands, among other things. I presume Adams could provide insight into my own forms of cognitive dissonance. It would be interesting to find out what they are. Unfortunately, apparently we all just start spouting word salad when they are pointed out, to protect our minds from accepting the criticism.
orthosphere.wordpress.com/2025/11/18/sco…






#Packers announce process for CEO search. 📰: pckrs.com/93p6d9c5


The End of Programming is near... Here is exactly why I believe this. Agree? Disagree? Let me know.






