Black Tulip Technology

1.6K posts

Black Tulip Technology

@TechnologyTulip

PhD student. Founder. Complexity Science & Systems Engineering, Philosophy of Software Architecture, Creator of residuality theory. https://t.co/mMqriWDnb3

Joined March 2019
536 Following · 1.2K Followers
Grady Booch
Grady Booch@Grady_Booch·
The brain is a dynamic complex physical system. Every effectively realizable physical system is computable (the Church-Turing thesis). Ergo, the brain is computable. QED
Institute of Art and Ideas@IAI_TV

Forget simple chains from genes to brain to behaviour; neuroscientists are overturning decades of dogma. | iai.tv/articles/neuro… Award-winning neuroscientist Nicole Rust argues that the brain is a dynamic complex system, more like the weather than a machine, whose parts interact through feedback loops that can't be studied in isolation. The revolution is also practical: a bold cohort of experimentalists is uncovering mental health treatments that go beyond traditional drugs like SSRIs—such as psychedelic therapy, which may be able to rewire brains trapped in destructive loops.

54
9
120
19.8K
Black Tulip Technology
Black Tulip Technology@TechnologyTulip·
@jesusmnavarrol @Grady_Booch Only that any function that is computable can be computed by a Turing machine. Deutsch and Wolfram have updated the thesis with completely different claims, and their versions have become people’s interpretation of Church-Turing, but that interpretation is confused.
1
0
0
17
Jesús M. Navarro
Jesús M. Navarro@jesusmnavarrol·
@Grady_Booch Hmm... I don't think the Church-Turing thesis states that any physical system can be simulated by a Turing machine, only that any computable system can be emulated by a Turing machine.
1
0
5
107
Black Tulip Technology
Black Tulip Technology@TechnologyTulip·
Currently sitting in an Italian restaurant in the suburbs of Vienna necking beers, finishing my thesis, and murdering both the Italian and German languages while saying “Tack” the whole time.
0
0
5
136
Black Tulip Technology
Black Tulip Technology@TechnologyTulip·
@paulg The problem with Stockholm is that by the time you get back from your walk they’ve raised the taxes on something and you’re out several iPhones’ worth.
0
0
0
269
Paul Graham
Paul Graham@paulg·
Stockholm is remarkably walkable. At one point we were walking somewhere and we needed to check a map. It was such a relief not to have to think about the phone being stolen. In London we always duck into a doorway before checking a phone on the street.
230
69
3.4K
429K
Black Tulip Technology
Black Tulip Technology@TechnologyTulip·
@kcimc @rhizomatiks @WFL_fencing I’m training to be a fly fishing instructor and this could be really useful to our industry for tracking the tip of the rod when casting - how difficult/expensive is it to set up?
0
0
0
76
Black Tulip Technology
Black Tulip Technology@TechnologyTulip·
I’m pretty sure analytic philosophy causes continental philosophy which causes more analytic philosophy and this just repeats over and over again with small differences each time.
1
0
4
174
Black Tulip Technology
Black Tulip Technology@TechnologyTulip·
Now that Claude’s source code has leaked I predict that developers will replace AI within 6-12 months.
0
0
8
207
François Fleuret
François Fleuret@francoisfleuret·
I don't understand where it comes from, if it is a bug or a feature, but the permanent self-confidence LLM assistants exhibit in their phrasing, even when they have been doing the dumbest shit for 1h straight, is very annoying.
38
5
92
6.5K
Black Tulip Technology
Black Tulip Technology@TechnologyTulip·
@xgabegottliebx The AI bubble will pop and bring down the demand and the price of oil. It will save us by sacrificing itself. In the future we will honour its memory by summarising our own emails.
0
0
2
93
Gabe Gottlieb
Gabe Gottlieb@xgabegottliebx·
Is there an AI bubble that's going to pop + destroy everything, or will that be the escalation of the war in Iran + the start of WWIII? Want to be sure I'm stressing about the right things. Already got climate change & school shooters covered.
2
0
11
968
Black Tulip Technology
Black Tulip Technology@TechnologyTulip·
Everyone still pushing AI hype today: “Browsers.”
0
0
3
151
Black Tulip Technology retweeted
Alex Becker 🍊🏆🥇
Alex Becker 🍊🏆🥇@ZssBecker·
I vibe code every day. I have a team of 30+ engineers. We spend F tons of credits. And I will tell you this about AI from my experience. It’s being wildly over hyped. Everyone is drunk. Fucking drunk. All the CEOs and Gen Z’s saying coding is dead are idiots. IDIOTS.
731
374
7.1K
490.8K
Black Tulip Technology retweeted
Peter Girnus 🦅
Peter Girnus 🦅@gothburz·
I am the Chief AI Transformation Officer. The title is eleven months old. I am also eleven months old, professionally speaking. Before this I was the Senior Director of Digital Enablement. Before that I was the Director of Process Excellence. The job is the same job. The job is buying software nobody asked for and measuring whether people use it. They never use it. I have been promoted three times. They are afraid of the wrong thing.

My company spent $14.2 million on AI tools last fiscal year. I selected the tools. The selection criteria were a 40-page evaluation matrix, three vendor dinners, and a Gartner Magic Quadrant I printed and taped to the wall outside my office. The tape is still there. The printout is from 2024. Two of the four quadrant leaders no longer exist. Nobody has looked at the printout. It faces the elevators. It makes people nervous. That is the point.

57% of our employees report anxiety about AI replacing their jobs. I know this because I commissioned the survey. I commissioned the survey because the board asked if the workforce was "AI-ready." I did not know what AI-ready means. I still do not know. But I know that 57% are anxious, and I put that number on slide 6 of my quarterly deck under the heading "Urgency Indicators." Anxiety is an urgency indicator. Their fear is my business case. They are afraid of the wrong thing.

Here is what the AI tools do. I will be specific. The first tool summarizes emails. It was deployed to 6,400 knowledge workers in September. It summarizes emails by repeating the first two sentences of the email in a blue box at the top. The summary of a three-sentence email is two sentences. The summary of a one-sentence email is one sentence. This is the tool. This is the $4.1 million tool. An internal support ticket from October reads: "The AI summary of my email is my email." The ticket was closed. Resolution: "Working as designed."

The second tool generates meeting notes. It joins the call, records, and produces a transcript it calls "Key Takeaways." The key takeaways are a bulleted list of who spoke and what they said. There are no takeaways. It is a transcript with formatting. We had transcripts before. They were free. These cost $22 per user per month. The tool also flags "key decisions." A key decision from last Tuesday's all-hands: "Leadership will continue to evaluate." That is not a decision. That is the absence of a decision. The tool cannot tell the difference. Neither can I.

The third tool autocompletes Slack messages. It suggests the next three words. The most common suggestion is "sounds good to me." Eighty-one percent of autocomplete suggestions across the company are pleasantries. We are paying $8 per seat per month to automate agreement. They are afraid of the wrong thing.

I built the AI Fluency Index. It is the centerpiece of my Q3 board presentation. The AI Fluency Index measures four things. Login frequency. Training module completion. A self-assessment survey. And a manager rating called "demonstrates AI-forward mindset." AI-forward mindset is not defined. I asked HR to define it. HR said it means "willingness to incorporate AI-enabled capabilities into day-to-day workflows." I put that in the rubric. The rubric is now three pages. Managers complete it annually. Managers do not know what it means. They give everyone a 3 out of 5. A 3 out of 5 means "meets expectations." I report to the board that 78% of the workforce meets expectations on AI fluency. Nobody is fluent. The number is the rubric. The rubric is the definition. The definition is me.

Here is the part about the anxiety. 37% of companies replaced workers with AI in 2025. That is a real number. I have seen it in four different reports. I cite it in internal communications. I cite it under the header "The Imperative for Transformation." The imperative is: if you do not use the tool, you are replaceable. If you do use the tool, you are demonstrating AI-forward mindset. The tool does not work. But the metric says you used it. The metric is login frequency. Logging in is usage. Logging in and closing the tab is usage. Logging in, seeing that the summary of your email is your email, and going back to Outlook is usage. Usage is fluency. Fluency is survival. I have made survival a login.

A senior analyst in our data team told her manager that the autocomplete tool was slowing her down. She said it took longer to dismiss the suggestions than to type the words herself. She presented a time study. The time study showed a net productivity loss of 11 minutes per day per user. Her manager forwarded the time study to me. I forwarded it to HR with a note: "May need a career development conversation re: change resistance." The analyst received a meeting invitation titled "Aligning with Organizational Transformation Priorities." She attended the meeting. She stopped presenting time studies. She logs in every morning now. That is adoption.

The clinical term is AI Replacement Dysfunction. Researchers coined it this year. Anxiety, insomnia, paranoia, loss of professional identity. 57% of workers report fear. And here is the inversion: they are afraid of the AI. The AI that summarizes an email by repeating it. The AI that transcribes a meeting and calls it a takeaway. The AI that autocompletes "sounds good to me." They are afraid of this. They should be afraid of me. I am the one who bought the tools. I am the one who made training mandatory. I am the one who tied fluency to performance reviews. I am the one who turned a support ticket that said "the AI summary of my email is my email" into a resolution marked "working as designed." I am the one who sent a time study to HR and called it resistance. I am the one who put their anxiety on a slide and labeled it "urgency." They are afraid of the wrong thing.

The board approved Phase 2 last month. Another $8.6 million. Twelve new tools. A dedicated AI Enablement Team of nine people whose job is to increase a number on a dashboard I built. The number already shows 78%. The number will show 85% by Q4 because I am changing the weighting formula. Training completion will move from 25% to 40% of the index. Training is a 20-minute video followed by a quiz. The quiz has six questions. Four are multiple choice. One is "true or false: AI can help improve your daily workflow." The answer is true. It is always true. The answer was true before the tools existed.

Forty-four percent of companies anticipate AI-driven layoffs in 2026. I include this in town halls. I say it with concern in my voice. I say we need to "stay ahead of the curve." Staying ahead of the curve means completing the training. Completing the training means passing the quiz. Passing the quiz means clicking true. Clicking true means fluency. Fluency means you are safe. Safe from what. From the tools that do not work. From the budget I cannot justify. From the metrics I invented to justify the budget I cannot justify. From me.

$14.2 million this year. $8.6 million more approved. 95% of AI pilots fail to deliver measurable ROI. I know this. It is in the same Gartner report I taped to the wall. It is on the next page. I did not print the next page. The workforce is anxious. The tools are unused. The metrics say otherwise. My performance review says "Transformational Leadership in Emerging Technology." The bonus is $340,000. The bonus is tied to the AI Fluency Index. The AI Fluency Index is tied to a formula I wrote. The formula measures whether people logged in. The people logged in because I told them logging in is the difference between employment and obsolescence. They are afraid of the AI. They should be afraid of the people who buy it.
38
54
314
39.5K
Black Tulip Technology
Black Tulip Technology@TechnologyTulip·
@Decafquest An example: GenAI is almost entirely a human hallucination, and it’s going to wreck the stock market.
0
0
1
70
Mahmoud Rasmi
Mahmoud Rasmi@Decafquest·
human hallucination can be worse and have far more dire consequences than AI hallucination
2
0
7
399
Black Tulip Technology
Black Tulip Technology@TechnologyTulip·
@MashTunTimmy If you don’t trust them to feel their way through this you shouldn’t have hired them. When the market rises the good people will leave employers who tracked them, and you’ll be left with the people who can’t go anywhere else. You’ll also lose your hedge against the sloppers.
0
0
0
20
mash tun
mash tun@MashTunTimmy·
Genuine question: what's the right way to incentivize and track people learning AI tools? Mandating feels bad, but engineers just deciding they don't like them and refusing to use them also feels bad.
Reid Southen@Rahll

I have a good friend who told me that the company he's at tracks AI use, requires it, and you get in trouble if you don't meet a quota for using it, even if it's not actually helping you. What does that tell you about the tech and those pushing it?

5
0
2
600
Black Tulip Technology
Black Tulip Technology@TechnologyTulip·
@GaryMarcus Because AI grifters on Twitter made us realise that a significant portion of the industry doesn’t know what software engineering is and we need to hire folks who do.
0
0
0
133
Black Tulip Technology retweeted
Dr Bai🪱
Dr Bai🪱@doctorbaixue·
Full offense but if as an academic I ever used AI to do any research for me, summarize any readings, or write any of my work, I would be so incredibly humiliated and ashamed of myself.
42
433
3.7K
192.7K
Black Tulip Technology
Black Tulip Technology@TechnologyTulip·
@ICooper @nickchapsas This is inevitable. The real goal of LLMs is a world where we don’t have to pay people to think anymore. And the results will be the same as for those developers who were already allergic to thinking and just went with the current consensus. LLMs are poison, like steroids for nerds.
1
0
1
181
Ian Cooper
Ian Cooper@ICooper·
Claude literally telling me it cares more about what @nickchapsas wrote than anyone else. Love you Nick, but…
GIF
2
0
1
229