Black Tulip Technology

1.6K posts

@TechnologyTulip

PhD student. Founder. Complexity Science & Systems Engineering, Philosophy of Software Architecture, Creator of residuality theory. https://t.co/mMqriWDnb3

Joined March 2019
535 Following · 1.2K Followers
Black Tulip Technology retweeted
Peter Girnus 🦅@gothburz·
I am the Chief AI Transformation Officer. The title is eleven months old. I am also eleven months old, professionally speaking. Before this I was the Senior Director of Digital Enablement. Before that I was the Director of Process Excellence. The job is the same job. The job is buying software nobody asked for and measuring whether people use it. They never use it. I have been promoted three times. They are afraid of the wrong thing.

My company spent $14.2 million on AI tools last fiscal year. I selected the tools. The selection criteria were a 40-page evaluation matrix, three vendor dinners, and a Gartner Magic Quadrant I printed and taped to the wall outside my office. The tape is still there. The printout is from 2024. Two of the four quadrant leaders no longer exist. Nobody has looked at the printout. It faces the elevators. It makes people nervous. That is the point.

57% of our employees report anxiety about AI replacing their jobs. I know this because I commissioned the survey. I commissioned the survey because the board asked if the workforce was "AI-ready." I did not know what AI-ready means. I still do not know. But I know that 57% are anxious, and I put that number on slide 6 of my quarterly deck under the heading "Urgency Indicators." Anxiety is an urgency indicator. Their fear is my business case. They are afraid of the wrong thing.

Here is what the AI tools do. I will be specific. The first tool summarizes emails. It was deployed to 6,400 knowledge workers in September. It summarizes emails by repeating the first two sentences of the email in a blue box at the top. The summary of a three-sentence email is two sentences. The summary of a one-sentence email is one sentence. This is the tool. This is the $4.1 million tool. An internal support ticket from October reads: "The AI summary of my email is my email." The ticket was closed. Resolution: "Working as designed."

The second tool generates meeting notes. It joins the call, records, and produces a transcript it calls "Key Takeaways." The key takeaways are a bulleted list of who spoke and what they said. There are no takeaways. It is a transcript with formatting. We had transcripts before. They were free. These cost $22 per user per month. The tool also flags "key decisions." A key decision from last Tuesday's all-hands: "Leadership will continue to evaluate." That is not a decision. That is the absence of a decision. The tool cannot tell the difference. Neither can I.

The third tool autocompletes Slack messages. It suggests the next three words. The most common suggestion is "sounds good to me." Eighty-one percent of autocomplete suggestions across the company are pleasantries. We are paying $8 per seat per month to automate agreement. They are afraid of the wrong thing.

I built the AI Fluency Index. It is the centerpiece of my Q3 board presentation. The AI Fluency Index measures four things. Login frequency. Training module completion. A self-assessment survey. And a manager rating called "demonstrates AI-forward mindset." AI-forward mindset is not defined. I asked HR to define it. HR said it means "willingness to incorporate AI-enabled capabilities into day-to-day workflows." I put that in the rubric. The rubric is now three pages. Managers complete it annually. Managers do not know what it means. They give everyone a 3 out of 5. A 3 out of 5 means "meets expectations." I report to the board that 78% of the workforce meets expectations on AI fluency. Nobody is fluent. The number is the rubric. The rubric is the definition. The definition is me.

Here is the part about the anxiety. 37% of companies replaced workers with AI in 2025. That is a real number. I have seen it in four different reports. I cite it in internal communications. I cite it under the header "The Imperative for Transformation." The imperative is: if you do not use the tool, you are replaceable. If you do use the tool, you are demonstrating AI-forward mindset. The tool does not work. But the metric says you used it. The metric is login frequency. Logging in is usage. Logging in and closing the tab is usage. Logging in, seeing that the summary of your email is your email, and going back to Outlook is usage. Usage is fluency. Fluency is survival. I have made survival a login.

A senior analyst in our data team told her manager that the autocomplete tool was slowing her down. She said it took longer to dismiss the suggestions than to type the words herself. She presented a time study. The time study showed a net productivity loss of 11 minutes per day per user. Her manager forwarded the time study to me. I forwarded it to HR with a note: "May need a career development conversation re: change resistance." The analyst received a meeting invitation titled "Aligning with Organizational Transformation Priorities." She attended the meeting. She stopped presenting time studies. She logs in every morning now. That is adoption.

The clinical term is AI Replacement Dysfunction. Researchers coined it this year. Anxiety, insomnia, paranoia, loss of professional identity. 57% of workers report fear. And here is the inversion: they are afraid of the AI. The AI that summarizes an email by repeating it. The AI that transcribes a meeting and calls it a takeaway. The AI that autocompletes "sounds good to me." They are afraid of this. They should be afraid of me. I am the one who bought the tools. I am the one who made training mandatory. I am the one who tied fluency to performance reviews. I am the one who turned a support ticket that said "the AI summary of my email is my email" into a resolution marked "working as designed." I am the one who sent a time study to HR and called it resistance. I am the one who put their anxiety on a slide and labeled it "urgency." They are afraid of the wrong thing.

The board approved Phase 2 last month. Another $8.6 million. Twelve new tools. A dedicated AI Enablement Team of nine people whose job is to increase a number on a dashboard I built. The number already shows 78%. The number will show 85% by Q4 because I am changing the weighting formula. Training completion will move from 25% to 40% of the index. Training is a 20-minute video followed by a quiz. The quiz has six questions. Four are multiple choice. One is "true or false: AI can help improve your daily workflow." The answer is true. It is always true. The answer was true before the tools existed.

Forty-four percent of companies anticipate AI-driven layoffs in 2026. I include this in town halls. I say it with concern in my voice. I say we need to "stay ahead of the curve." Staying ahead of the curve means completing the training. Completing the training means passing the quiz. Passing the quiz means clicking true. Clicking true means fluency. Fluency means you are safe. Safe from what. From the tools that do not work. From the budget I cannot justify. From the metrics I invented to justify the budget I cannot justify. From me.

$14.2 million this year. $8.6 million more approved. 95% of AI pilots fail to deliver measurable ROI. I know this. It is in the same Gartner report I taped to the wall. It is on the next page. I did not print the next page. The workforce is anxious. The tools are unused. The metrics say otherwise. My performance review says "Transformational Leadership in Emerging Technology." The bonus is $340,000. The bonus is tied to the AI Fluency Index. The AI Fluency Index is tied to a formula I wrote. The formula measures whether people logged in. The people logged in because I told them logging in is the difference between employment and obsolescence. They are afraid of the AI. They should be afraid of the people who buy it.
36 replies · 54 reposts · 313 likes · 38.6K views
Black Tulip Technology@TechnologyTulip·
@Decafquest An example: GenAI is almost entirely a human hallucination, and it’s going to wreck the stock market.
0 replies · 0 reposts · 1 like · 68 views
Mahmoud Rasmi@Decafquest·
Human hallucination can be worse and have much direr consequences than AI hallucination.
2 replies · 0 reposts · 7 likes · 390 views
Black Tulip Technology@TechnologyTulip·
@MashTunTimmy If you don’t trust them to feel their way through this, you shouldn’t have hired them. When the market rises, the good people will leave employers who tracked them, and you’ll be left with the people who can’t go anywhere else. You’ll also lose your hedge against the sloppers.
0 replies · 0 reposts · 0 likes · 20 views
mash tun@MashTunTimmy·
Genuine question: what's the right way to incentivize and track people learning AI tools? Mandating feels bad, but engineers just deciding they don't like them and refusing to use them also feels bad.
Reid Southen@Rahll

I have a good friend who told me that the company he's at tracks AI use, requires it, and you get in trouble if you don't meet a quota for using it, even if it's not actually helping you. What does that tell you about the tech and those pushing it?

5 replies · 0 reposts · 2 likes · 595 views
Black Tulip Technology@TechnologyTulip·
@GaryMarcus Because AI grifters on Twitter made us realise that a significant portion of the industry doesn’t know what software engineering is and we need to hire folks who do.
0 replies · 0 reposts · 0 likes · 133 views
Black Tulip Technology retweeted
Dr Bai 『ザ・グレート・アテンプト』発売
Full offense but if as an academic I ever used AI to do any research for me, summarize any readings, or write any of my work, I would be so incredibly humiliated and ashamed of myself.
43 replies · 438 reposts · 3.8K likes · 190.8K views
Black Tulip Technology@TechnologyTulip·
@ICooper @nickchapsas This is inevitable. The real goal of LLMs is a world where we don’t have to pay people to think anymore. And the results will be the same as for those developers who were already allergic to thinking and just went with the current consensus. LLMs are poison, like steroids for nerds.
1 reply · 0 reposts · 1 like · 180 views
Ian Cooper@ICooper·
Claude literally telling me it cares more about what @nickchapsas wrote than anyone else. Love you Nick, but…
2 replies · 0 reposts · 1 like · 206 views
Black Tulip Technology@TechnologyTulip·
@MashTunTimmy The only thing a Comp Sci professor can teach you is how to be a Comp Sci professor. OTOH anyone with any scientific training won’t take LLM hype or anecdotes seriously.
0 replies · 0 reposts · 1 like · 19 views
mash tun@MashTunTimmy·
Buddy was bummed talking to a relative who is a Comp Sci. professor. Guy was not up on LLMs at all and completely uninterested in how my friend was applying them to his job. Had to remind him Comp Sci. is a joke and most profs have disdain for working software engineers.
7 replies · 0 reposts · 18 likes · 959 views
Black Tulip Technology retweeted
Isaac the Sacrificial@Sacrifice_Isaac·
Academic philosophy is a history of philosophy. It has, for some time now, forgotten how to philosophize. Many of them are simply antiquarians and archaeologists who examine the dead remains and artifacts of past thinkers. They discover monoliths and tomes, 1/4
28 replies · 15 reposts · 154 likes · 14.4K views
Black Tulip Technology retweeted
Kyle 🌱@KylePlantEmoji·
The thing about Generative AI is that no one using it actually cares about what they're making. They *necessarily* lack the passion for the medium required to even imagine something good, because if they cared that much, they would have started making it before AI came around
47 replies · 763 reposts · 5.5K likes · 101.7K views
Black Tulip Technology@TechnologyTulip·
The thing that worries me is that when it became obvious that OOP wasn’t going to deliver and that the Platonic fantasy of polymorphism and inheritance was a hallucination, it was so ingrained in the tools and languages that we couldn’t change course.
Steven Sinofsky@stevesi

Thinking about that time (1980s) the markets thought object-oriented programming would:
* turn software into little parts effortlessly combined into apps with no skills
* be so easy even a baby would code

0 replies · 0 reposts · 5 likes · 202 views
Black Tulip Technology@TechnologyTulip·
@ICooper I’m having a crisis because it turns out the collective industry is even dumber than the Standish Chaos Report suggested.
0 replies · 0 reposts · 1 like · 84 views
Black Tulip Technology@TechnologyTulip·
@JDHamkins The same thing is happening with software but the marketing is so pervasive that it’s impossible to call it out. It will take a few years before we see that nothing useful is being produced.
0 replies · 0 reposts · 1 like · 119 views
Joel David Hamkins@JDHamkins·
I am seeing the doomed future of AI math: just received another set theory paper by a set theory amateur with an AI workflow and an interest in the continuum hypothesis. At first glance, the paper looks polished and advanced. It is beautifully typeset and contains many correct definitions and theorems, many of which I recognize from my own published work and in work by people I know to be expert. Between those correct bits, however, are sprinkled whole passages of claims and results with new technical jargon. One can't really tell at first, but upon looking into it, it seems to be meaningless nonsense. The author has evidently hoodwinked himself. We are all going to be suffering under this kind of garbage, which is not easily recognizable for the slop it is without effort. It is our regrettable fate.
[image attached]
157 replies · 123 reposts · 969 likes · 85.1K views
Fernando 🌺🌌@zetalyrae·
Is anyone keeping a list of high-profile LLM psychosis victims?
5 replies · 2 reposts · 40 likes · 3.4K views