Tyler Johnson

59 posts

Tyler Johnson

@tjohnson640

Code author at @livefront. ❤️ iOS, Swift, and design.

Joined March 2013
870 Following · 134 Followers
Pinned Tweet
Tyler Johnson@tjohnson640·
Although the icon suggests it, shipping an App Clip isn’t as easy as clipping a coupon. Imagine trying to fit three toys from a toddler into a shoebox - it’s more like that. Here’s a checklist along with some tips and tricks to help navigate the journey. link.medium.com/QgRZdWrmLeb
"Doc" Hypnosis 🧠 | BowTied Brain-Hacking
Persuasion 101: The High Ground

It shows up in the funniest places, really. Doesn't even have to be a political conversation. Sports is the context for this story, but sports is incidental to the story. This is just how our minds work, whether it's sports, politics, work-related stuff, doesn't matter.

I was talking with "the boys" yesterday after Michigan's basketball team nearly got knocked out of the tournament -- again. Second time in two games! Against much weaker opponents! Both games came down to the wire.

So you know, everyone's trotting out the well-worn sports clichés: Michigan's checked out and more focused on next week's National Tournament (mind-reading), this or that team benefitted from "revenge mindset" (more mind-reading), Michigan doesn't match up well with certain teams (old patterns vs. new), and so on. It's a fun set of arguments, it's what you're supposed to do when you're round-tabling with "the boys."

But then somebody went full-blown High Ground: "You only think Michigan was supposed to blow these teams out of the water because of the ranking system -- and that ranking system is bullsh*t."

Just like that, everyone agreed on that one thing. The "Authority Figures" who decide each team's rank are wrong, again, because they're always wrong, forever. It was fascinating to watch. High Ground enters the conversation, and just like that, the entire conversation shifted. Ah ... persuasion. 😁
Tyler Johnson@tjohnson640·
@BowTiedTrance “And the moment we ask whether AI is ahead of us, we have already accepted that we are measuring the same thing.” They’ve got us thinking past the sale.
"Doc" Hypnosis 🧠 | BowTied Brain-Hacking
"when you change your mind, that revision becomes part of your biographical narrative" That might be the most concise summary of how AI "thinking" differs from true human thinking. AI does not have long-term memory or the ability to change its mind. AI "operates through statistical inference across vast datasets."
Owen Gregorian@OwenGregorian

We’re Measuring AI on the Wrong Ruler | John Nosta, Psychology Today

The relentless urge to compare AI and human intelligence may be a mistake.

Key points:
- We assume artificial intelligence (AI) and humans share the same scale of intelligence.
- Human thought carries lived consequence while AI computation does not.
- One ruler may not be able to measure two different kinds of thinking.

Every debate about artificial intelligence (AI) seems to revolve around the same question: Is it smarter than we are? The subtitles of the questions might change, and the endpoints might be argued, but behind the cacophony of authoritative brilliance is a shared assumption—that intelligence lives on a single line. More of it on one end, less on the other. Humans are somewhere along that spectrum, and machines are moving toward us. But with all the discussion and debate, we rarely stop to examine the ruler itself. And the moment we ask whether AI is ahead of us, we have already accepted that we are measuring the same thing.

The Illusion of a Shared Scale

It’s understandable why we default to this handy ruler. Large language models create the very stuff of our humanity, from words to images. And this output clearly looks like thinking, and it is commonly better than what we humans produce. But let's be careful not to get our hand slapped by that ruler in the process. Here's what we need to consider: When surface outputs converge, we assume structure also does. Thought for thought and concept for concept seem to arrive along a continuum where a "cognitive assessment" can be placed alongside them.

But human cognition is not just output quality; it's consequence-bearing. When you make a decision, you carry the aftermath forward. And when you change your mind, that revision becomes part of your biographical narrative. It unfolds through time and alters who you are. AI computation does none of this. It generates responses without any biography. It doesn't carry yesterday into tomorrow in any lived sense. Its fluency is extraordinary, but it's reversible, consequence-free, and precariously fragile in its understanding. To measure both along a single axis of “smart” flattens the difference and misses key opportunities.

Optimization Is Not Superiority

So, let's start with some basic assumptions. A calculator outperforms you at arithmetic. A navigation system like Waze outperforms you at route planning. Yet we certainly don't conclude that either possesses deeper intelligence. What we recognize is the optimization for a specific task. The confusion (and trouble) begins when AI’s optimization extends into domains that are traditionally human, such as writing and creativity. And because that terrain feels familiar, we assume we are witnessing a better version of ourselves. But resemblance is not equivalence.

If we insist on placing human thought and machine computation on the same ruler, we will misread both. The machine appears superhuman because it excels at measurable outputs. The human appears inefficient because we hesitate, revise, doubt, and sometimes contradict ourselves. Those very “inefficiencies” are inseparable from what makes human cognition distinct.

A Different Kind of Comparison

What if the real mistake is not overestimating AI or underestimating ourselves, but misclassifying what we are comparing? Human thought is embodied and autobiographical. It's shaped by lived experience and future consequence. AI, to the contrary, operates through statistical inference across vast datasets. It identifies patterns with astonishing scale and speed. Both generate language, and both can solve problems. But the architectures that define the "thinking" are not interchangeable. And when we collapse them into a single metric of intelligence, we distort the conversation. We fuel hype on one side and anxiety on the other.

If we step off that single “axis of smart,” the debate shifts. The question is no longer whether AI is ahead of us. It becomes more precise: What kind of cognitive system is this, and how does it intersect with ours? That shift does not minimize AI’s power; it helps clarify it. It also preserves space for a more honest account of the multifaceted complexity of human thinking—from fear to flow.

Rethinking the Frame

The language we use shapes the future we imagine. If we continue to treat intelligence as a single measurable quantity along a single axis, we'll keep asking whether machines are catching up or surpassing us. If instead, we recognize that we may be dealing with different dimensions of cognition, we open a different and more nuanced path. And to this point, the age of AI may not hinge on who is smarter but on whether we can abandon a model of intelligence that was too narrow to begin with. The first step is simple. Question the ruler.

Read more: psychologytoday.com/us/blog/the-di…

Tyler Johnson@tjohnson640·
@rolypolyistaken Excellent post. Thank you for putting this into words. First time I recognized the sticky confirmation bias trap was with “Republicans will be hunted”. It was fascinating watching him work.
MAZE@mazemoore·
This is my first time and only time ever doing this. If you are from the land of the free and the home of the brave and if you love this beautiful country, drop a comment and if I don't follow you, I give you my word I will. God bless America.
Tyler Johnson@tjohnson640·
@BowTiedTrance Watching this with the labeling actually makes me laugh. Seems like there’s an effective “mocking” dimension here.
"Doc" Hypnosis 🧠 | BowTied Brain-Hacking
The line "we can have $3.00 gas" is very strong persuasion because it's specific, visual, and appeals to everyone. I would be hammering that line in every ad, email, post, etc., 15 times a day.
Steve Hilton@SteveHiltonx

Donald Trump is president in every one of these states. Gas prices are much higher in the blue states. The "Affordability Crisis" is caused by DEMOCRAT POLICY. As governor of California I will end Democrat 'climate' insanity so we can have $3.00 gas.

Tyler Johnson@tjohnson640·
@BowTiedTrance I read the whole second paragraph before reading the second part of your post - only because you were highlighting it. Took me about 6 tries and I still don’t understand it. I think you nailed the 97% figure.
"Doc" Hypnosis 🧠 | BowTied Brain-Hacking
Second paragraph: "this paragon of rationality, Adams, is a determinist, which is immediately self-defeating. If you can’t figure this out for yourself, then you are, frankly, an idiot." And that's when 97% of the audience quit reading.
Owen Gregorian@OwenGregorian

Scott Adams and Cognitive Dissonance | Richard Cocks, The Orthosphere

Scott Adams wrote a book called “Loser Think” which was designed to help people be more rational. One of his items to avoid is called “mind reading.” This is to imagine that we can know someone’s inner motivations when they have not stated them. So, for instance, critics will say Trump only cares about himself. How could they possibly know that? Did they ask him? And if they asked him, can they be sure that he is right? For someone who cares only about himself, he certainly seems to care about his family a lot. Also, as Adams points out, if Trump is indeed a narcissist and wants to polish his image, the best way of doing that is to be a highly effective president.

Trump says exaggerated things about himself, but then he does that with everything. Dana Carvey satirizes him as saying things like, “My wife is the best wife. She’s a very very good wife. No one has a better wife than me.” But, then he says the same thing about America on any number of fronts. Adams’ characterization of what Trump says is that it is mostly “directionally correct.” If his rallies are well attended, are they the most well attended ever? I don’t know. But, the hyperbole is in line, at least, with the good attendance. When Biden’s inner circle said that he was not cognitively impaired, was that directionally correct? No. It was the opposite of the truth. Conflating the directionally correct with a blatant falsehood is not an example of clear thinking.

But then this paragon of rationality, Adams, is a determinist, which is immediately self-defeating. If you can’t figure this out for yourself, then you are, frankly, an idiot. And yet, the majority of scientists and philosophers believe in it. It doesn’t say much about human cognitive abilities that this is the case.
Like the emperor’s new clothes, looking around and finding so many others engaging in a suspension of disbelief on the topic must provide emotional cover for this egregiously irrational notion. Materialism seems to imply determinism, so believing in the former means committing to the latter.

Adams is to be complimented on seeing through the pretensions of AI to be truly intelligent or capable of consciousness. He has tried using any number of AIs to help him in his work as a cartoonist, to make videos, to streamline his work process, sometimes paying hundreds of dollars for the privilege, only to find that the claims were empty promises. At one point, he wanted to feed a book into an AI only to find that the upper limit was a few hundred words. This fact had not been explained before he started. If you make an AI video, each video is sui generis (unique unto itself). What you cannot do is to get it to make the same video with alterations. You cannot say, “Create the scene again, but with an altered ‘camera’ angle.” Or, “Do it again, but remove items that make it clear this is not 1930s New York.”

AI companies are offering “agents” that are supposed to be able to, for instance, book plane flights for you. You will have to give it your calendar details and your credit card information. Can you trust them? No. Companies are laying off tens of thousands of workers with the idea that their work will be replaced with AI. This is alarming because they can’t be. Is our economy about to implode as a result? LLMs can pass the bar exam for lawyers, but they cannot do the work of lawyers. They also hallucinate and make up references complete with fake citations. Hallucinations cannot be eliminated. They are part of the very fabric of LLMs. Adams suspects that companies know all this but use the excuse of AI for the redundancies they wanted to engage in anyway. It would be mind-reading to claim to know for sure.
A side note that gives me pause is that Alan Turing wrote an article saying that if a machine were ever to be actually intelligent, it would have to be capable of making mistakes. Algorithms are solutions to known problems. They do not make mistakes, by definition. If an algorithm does not lead consistently to the solution of a problem, then it is not an algorithm. Real intelligence means dealing with the unknown, not reading off an answer someone else has provided. It is, presumably, a coincidence that LLMs make mistakes. Still, it is a little unnerving.

LLMs are not algorithms. Algorithms are deterministic. An algorithm is a step-by-step set of instructions that, if followed, are guaranteed to produce the correct result. LLMs use statistical methods to predict the next word based on millions of pages of human writing. There is nothing guaranteed or predictable about this process. Every iteration is different.

Adams notes that LLMs are just pattern recognition devices and not intelligent. True. But then he is as likely to say that the same thing is true of human beings; that we are not intelligent, either. However, according to this line of thought, that observation must itself be merely pattern recognition and not actually true. He will slide between these things without realizing that he has hoist himself by his own petard.

It was Ken Wilber and Rupert Sheldrake who introduced me to the idea of a performative contradiction and the reflexive implications of what people claim. To say, “There is no truth” is a performative contradiction. The assertion is itself a contention of truth. If it is true, then truth does exist. But, for some reason, materialists, nominalists, positivists, and analytic philosophers (same thing), have never heard of performative contradictions and engage in them with abandon.
So much of what they write involves these issues that probably analytic philosophy would have to close up shop if they ever acknowledged them. I’m sure some readers would like me to shut up about them. However, as soon as they are encountered, it is possible to validly dismiss whole trains of thought.

Adams likes to remind his listeners that he is a trained hypnotist. And the first thing one learns as a hypnotist, he says, is that people are not particularly rational but are more driven by emotion. So far, so good – except he does not add the qualifier “particularly.” He likes to point out some instance of irrationality as though it were proof that humans are never rational. The fact that a hypnotist, trained in persuasion, can sometimes manipulate people is somehow supposed to be evidence that we are not rational at all. Likewise, the fact that events in the brain can interfere with “free” decision making proves we do not have free will ever, he thinks. This is certainly an instance of extreme cognitive dissonance and word salad. His conclusions do not follow from his premises at all.

As Iain McGilchrist comments in The Matter With Things, optical illusions do not prove that our eyes can never be trusted. They prove, instead, that vision is not infallible. Nobody thought eyesight was unerring in the first place. We automatically adjust our estimations of color according to the perceived lighting and its interplay with shadows. When we see one object and part of it is in shadow, we don’t automatically think that the object is multi-colored. We see this kind of lighting effect all the time and we make allowances for it. This means that there can be a carefully contrived illusion that has the color “grey” looking light in one place of a checkerboard and dark at another, when in fact, it is technically the same shade of grey. This illusion is only an illusion when seen on a screen or on the page of a book. In real life, we would be making the adjustment correctly.
To get rid of this illusion would mean that our estimation of color would, in everyday contexts, be hopelessly wrong. The “grey” color would not be the same in the concrete world. So, choose your poison. Be deceived by a drawing and continue to function well in real life, or not be fooled by a drawing and get it all wrong in reality. We are better off as we are and the illusion is no cause for alarm.

Listening to Adams on these topics is like listening to a record skipping. The sudden break in logic with the several necessary intervening premises missing can make one think, “Hang on. Did I miss something? How did he get from here to there?”

Determinism is true. Therefore, rationality is impossible. How did you arrive at this conclusion? Using rationality (hence, determinism is false).

We are merely pattern recognition devices. Is recognizing a pattern the same thing as identifying the truth? No. Is it therefore true that we are merely pattern recognition devices? Whether that statement is true or not lies outside the purview of human capabilities.

I, a hypnotist, can sometimes fool you and manipulate you. Therefore, you are never rational. This is saying that being wrong sometimes means you are never right. That does not follow.

Sometimes, organic events in your brain, or an electric probe in your brain, can make you think or do something. That means you have no free will. How could someone who thinks this badly dream of writing a book telling other people how to be more rational?

Optical illusions prove that our senses cannot be trusted. How come it is so hard to contrive these optical illusions? How is it that there is a very limited number of them, to the extent that it would be quite possible to recite them all? Isn’t what makes them interesting their very deviation from normal perceptions? There would be no gee-whiz element if they were ubiquitous. This can be compared to “the news.” Something is only “the news” if it is not routine and commonplace.
No headline ever says “Man Dies From Heart Disease.”

Having written all that, I listen to him every day and find his insights interesting enough to keep tuning in. I appreciate the fact that he correctly posits that AI as we know it will never reach AGI (artificial general intelligence) and is supremely skeptical of “models” used to make factual claims about reality. They do not work in finance. And they do not work for climate change. New vital factors that have not been included in the models, like the role of plankton in the sea, are constantly being found, invalidating all previous predictions. And yet, no alarm is shown by the climate scientists. If you knew nothing about human nature, you would think they would all hang their heads in shame, quit their jobs and find a better use of their time. This indicates that the goal is to promote climate change and that they are just using whatever “evidence” seems to confirm it, rather than the evidence leading them to the conclusion.

Adams was employed to make models for a corporation. He learned to ask, “What would you like it to prove?” His boss made it clear that he had no faith in them and merely pointed at them when they happened to agree with what he was intending to do anyway. Models are determined by the assumptions the scientist makes. The same is true of philosophical arguments. Unprovable metaphysical claims push us in inevitable directions. Find out someone’s core beliefs and his main conclusions can be predicted.

Adams is currently dying from late-stage bone cancer brought on by metastasized prostate cancer. You would not know it from his demeanor. He has largely lost the use of his hands, among other things. I presume Adams could provide insight into my own forms of cognitive dissonance. It would be interesting to find out what they are. Unfortunately, apparently we all just start spouting word salad when they are pointed out, to protect our minds from accepting the criticism.
orthosphere.wordpress.com/2025/11/18/sco…

Tyler Johnson@tjohnson640·
@Nicolascole77 I’m having a hard time understanding this from first principles. Did you get the concept of open/closed loops from a particular field of study?
Nicolas Cole 🚢👻@Nicolascole77·
Life is a game of Opening & Closing Loops

Open Loops =
• Busy mind
• Lots of action items
• High switching cost
• Worry, anxiety, fear, uncertainty of outcome

Closed Loops =
• Clarity
• Decisions made
• How you go to bed calm
• Knowledge based on experience
Tyler Johnson retweeted
Livefront@livefront·
#Android engineers, rejoice! We have an incredibly helpful how-to video - on customizing your API docs with Dokka. In this video you'll find examples, code snippets, etc. Improved API documentation is something we can all get excited about! youtube.com/watch?v=HAj1LE…
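The tweet links out to the video rather than showing the configuration itself. As a rough illustration of the kind of Dokka customization being described (not taken from the video — the module name, `Module.md` file, and package regex below are placeholder assumptions), a Gradle Kotlin DSL setup with the Dokka 1.x plugin might look like:

```kotlin
// build.gradle.kts — minimal Dokka customization sketch (Dokka 1.x Gradle plugin).
// "MyLibrary", "Module.md", and the internal-package regex are illustrative placeholders.
plugins {
    id("org.jetbrains.dokka") version "1.9.20"
}

tasks.withType<org.jetbrains.dokka.gradle.DokkaTask>().configureEach {
    moduleName.set("MyLibrary")              // display name in the generated docs

    dokkaSourceSets.configureEach {
        includes.from("Module.md")           // module- and package-level documentation
        skipEmptyPackages.set(true)          // hide packages with no documented members
        reportUndocumented.set(true)         // warn on public members missing KDoc

        perPackageOption {
            matchingRegex.set(""".*\.internal.*""")
            suppress.set(true)               // keep internal packages out of the docs
        }
    }
}
```

Running `./gradlew dokkaHtml` with a configuration like this produces the customized HTML API docs; the video presumably goes deeper than this sketch.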
Tyler Johnson@tjohnson640·
@rossiadam Blows my mind to hear this from multiple sources. Seems that affirmations are real. Also, maybe we live in a simulation. 🤯
Adam Rossi@rossiadam·
I am a sucker for someone with a dream. With an idea so clear they can see it when they close their eyes…somewhere out there in the future. Someone willing to go hungry to make the dream real. The amount of crazy dreams I have seen become real in my life is unbelievable.
Tyler Johnson@tjohnson640·
I’ve thought about this a lot, and at the moment I disagree. I can’t tell if it’s only because I have a financial interest in not having my job disappear. Here’s why I disagree:

When a new feature is added to a programming language (e.g. Swift concurrency), how will AI be trained to use the APIs if programmers don’t write the training code? Ok, new language features, like concurrency, are irrelevant because that feature is for humans. Would we start creating programming languages that only AIs understand?

Will AI programs contain hallucinations or be completely flawless? If they’re not flawless, who will debug the program that’s written in a language that only AI understands? Ok, debugging is useless because you just regenerate the entire program from scratch every time. Will you not discover the AI has chosen different tradeoffs that you find unacceptable in each new generation of source code? Size of the program, speed, maximizing CPU efficiency, power consumption, etc.

How many tokens would be required to create/maintain the Facebook website/mobile app? There must be an upper bound on how many tokens a future AI will be able to handle. Will it be enough to maintain a large codebase?

What happens after the initial launch of a startup that decides to pivot, but wants to maintain 1/4 of the working features? Will AI be able to reliably regenerate a whole new program while maintaining the 1/4 that was working flawlessly?

I’m open to the possibility that we can answer all these questions, but at the moment, I’m skeptical. By the way, the folks who made Copilot have explicitly said there’s a reason it’s called Copilot and not “pilot.” I think the most likely future is one where AI is a tool used by programmers to be more efficient.
Adam Rossi@rossiadam·
Rainy day. Family went to the mall, so me and the dog are banished to our fortress of solitude.
Tyler Johnson@tjohnson640·
@nick__krantz I’m not a scrum master, but I did stay at a Holiday Inn Express last night. I assume you’re talking about agile, and in that context, I think the goal is consistency. Estimates are not _supposed_ to be backed by time.
Nick Krantz@nick__krantz·
Does anyone ever retrospective on their ticket estimating? I don't think I have, but it feels like something I should be doing, otherwise it's never becoming more accurate
Tyler Johnson@tjohnson640·
We had this game at our high school in the Midwest in the mid-2000s. A “kill” was when someone sprayed you with water. A syringe was the weapon of choice for hiding the water up your sleeve and squirting the person from a distance. Kids started getting suspended for playing during school hours because the sprints through the hallways were disrupting learning.

I’ve always wanted to recreate this game in an app, allowing people to join tournaments in the cities where they live/work.
Adam Rossi@rossiadam·
When I was in high school, my friends and I played a long-running assassin game. We just called it the Game. I am wondering if this was just a Blue Ridge mountains thing, or if this was widespread?

The game worked like this: You pay $20 and get an ID card and a code name. All money goes to the last man standing. The commissioner then gives you a contract to “kill” another player. You had to shoot the guy with an airsoft gun. All sorts of rules about where/when/how this could be done. For example, not at school. You would then stalk the guy and if you shot him, he was out. You took his card and gave it to the judges. You were not told who had your contract.

We took the Game very, very seriously. For example: I was tipped off that a guy in my neighborhood had my contract. I asked him on the bus if he wanted an alliance. He said no. Like an idiot I did not have my gun on me. Usually we brought them everywhere. I knew I needed to move quickly. So I got off the bus at a different stop, ran through the woods, got my gun at home, and then ran through the woods like a rabid animal to his house. Just in time to see him going into his basement door. So I RAN INTO HIS HOUSE after him, chased him through the basement and shot him. This was allowable. I took his card, got the next assignment.

I can’t tell you the level of paranoia we lived with playing this game. So how about it? Was this a Blue Ridge thing, or was this in other places in the late 80s?
Adam Rossi tweet media
Reston, VA 🇺🇸