Chris

827 posts

Chris

Chris

@TheMeahCat

Not an AI... I promise

Присоединился Ocak 2015
374 Подписки686 Подписчики
Chris
Chris@TheMeahCat·
AI: What is it anyway? AI is here, but means a lot of different things. Spend 2 minutes exploring the crazy world of AI and Large Language Models #AI #DeepLearning
English
0
0
1
35
Chris
Chris@TheMeahCat·
From tribes to the internet, we've moved from negotiating views to echo chambers. Now, AI risks creating custom narratives, reinforcing our worldview instead of expanding it. What does this mean for trust in a divided world? #SocialMedia #AI
English
0
0
1
21
Chris
Chris@TheMeahCat·
The “Will AI Kill Us All?” talk I gave last month has over 100,000 views Now, how do I trade those views in for a place on the Soccer Aid team? I’d also accept a seat on the next series of celebrity traitors. Any ideas? I’m new to “views”, but they must to worth something surely… 🙃 youtu.be/GNPd9g2jrc8?si…
YouTube video
YouTube
English
0
0
0
83
Chris
Chris@TheMeahCat·
People are gonna fall in LOVE with chatbots! 😍 But what about our kids glued to iPads? 📱 AI could watch them eat dinner, tell superhero stories, & sneak in some photosynthesis lessons! What could go wrong? 🤔 Learn more! #AI #Parenting
English
0
0
4
214
Chris
Chris@TheMeahCat·
I gave a TEDx talk last month... gleefully titled "Will AI Kill Us All?" Let me know what you think. Open to your ideas and feedback. Bear in mind, telling me I only have 15 minutes for a talk is like telling a bull to clean a china shop... I'm a rambler But I managed to cut the chaff enough to fit in the time limit. Thanks to everyone who came to see the talk. Please like and share if you can, or if you think it will be interesting for someone 👍 youtu.be/GNPd9g2jrc8?si…
YouTube video
YouTube
English
0
0
2
74
Chris
Chris@TheMeahCat·
The Law of Unintended Consequences always wins ⚖️ One of the biggest dangers from AI is poorly defined objectives... Sounds boring. Could be catastrophic. Especially when AI is given enough autonomy to figure things out on its own. We know objective definition is difficult anyway. The objective for companies is profit, and they optimise for it. This is good, because profit is a proxy for value - if someone pays you more to do something than it cost, then it means there was excess value. But optimising for a proxy can go wrong fast because there are many different ways to achieve it. You can lay off lots of staff to cut costs and increase profits. You can cut corners when providing clean water to save time and money. You can build cheaper bridges with weaker materials. You could argue long-term these tactics don't work, but it doesn't mean this doesn't happen. The same scenarios will happen with AI. If we tell it “make sure humanity flourishes,” it might interpret that in a thousand ways — or take a shortcut. That's fine when it's ChatGPT - you just reprompt it. But what about when you have an autonomous AI scientist? If it was told to cure cancer, it might decide it needs more test subjects first.. so would give the population cancer. We've seen this with social media. The algorithm was meant to predict what we wanted to see and show it to us. Instead, it made us more predictable. It pushed us to the extremes, because it's actually hard to predict how you'll react if you hold a moderate position, but if you have an extreme one it's easier to guess what you'll like or detest. It became more addictive, because if you're addicted to scrolling it's much easier to get more data from you, and in turn predict what you'd like to see next. No-one is happy with how social media turned out, but the algorithm is achieving its objective. Let's make sure we put enough thought into the objectives of the future, we don't need a social media 2.0 with robots 🤖 🤣
English
0
0
2
52
Chris
Chris@TheMeahCat·
I'm honoured to have received an award from OpenAI. No, not the one for buying a billion tokens. The one where they tell you they can't change your email address. And you know what? I got exactly the same honour from Anthropic too 👍 The frontier AI labs (the ones with "our fate in their hands") can't change email addresses... Let that sink in. They "can't do it" 🤯 Not OpenAI. Not Anthropic. Neither of them. I can't quite believe it 🤷‍♂️ I feel like that guy in The Da Vinci Code who finds the rose-petal path (or whatever that was, I always fell asleep watching it). But, I've unearthed the shared flaw of the AI labs... changing emails. AI's kryptonite. I mean, it seems so trivial and stupid. But also, what kind of engineering practices and architecture are they running where this simple act is "impossible"? Let's hope in the future they don't forget the "kill switch" on the AI like they forgot the "edit field" on the account settings. I'm half-kidding... but then again, Sam Altman (CEO of OpenAI) said access to ChatGPT should be "a fundamental human right". According to him, instead of Universal Basic Income you should get Universal Access to ChatGPT. Who needs money when you have a sycophantic digital pathological liar friend with amnesia, eh?... If we had an all-powerful AI, then I'd maybe even agree... But that's a big if. ChatGPT is just a big token predictor. But, I'll believe Sam & co on their "superhuman coding agents" when they can simply ask for those agents to implement a simple feature... And let me change my goddamn email address 🤣
Chris tweet mediaChris tweet mediaChris tweet media
English
0
0
1
58
Chris
Chris@TheMeahCat·
In honour of OpenAI's new browser, ChatGPT Atlas, there is a hidden Easter Egg on chrismeah.com put there for AI's to read. If you visit in Atlas or ask an AI like Claude for a summary of the site, there's a chance you will see some hidden information 👀 If you see it: 1️⃣ Congrats 🥳 your AI just read and processed the hidden info 2️⃣ Also… imagine if, on another site, that hidden text was malicious instructions instead 🤷‍♂️ A little toy example of prompt injection; when hidden content tells the model to do something it shouldn’t. p.s. anyone who has halloween jokes better than the ones I've hidden, tell me 😊 Enjoy ✌️🤖
Chris tweet media
English
0
0
1
82
Chris
Chris@TheMeahCat·
The human brain isn’t built to remember. It’s built to forget. 🧠 Think about how much information you exposed to in every second of every day — most of it is ignored or discarded. That’s a feature, not a bug. So if you want your brain to actually remember, you have to leverage how it works. Here are three of the biggest levers: 1️⃣ Attention You remember what you pay attention to. Attention tells your brain, “this might be important.” Focus is key, and you can’t focus on everything. That means if your phone’s out, or you’ve got 10 tabs open… good luck. 2️⃣ Repetition Your brain decides something matters when it keeps seeing it. Neurons that fire together, wire together. Repetition strengthens those connections. 3️⃣ Memorable The stranger or more emotional something is, the easier it sticks. Rhymes, mnemonics, vivid images, or linking information to something you already know and care about — the weirder, the better. Your brain is a forgetting machine, but we can use how it works to hack remembering. Learning how to learn. That’s the real meta-skill we focused on at School of Code, and I think one of the key skills for the future.
English
0
0
0
45
Chris
Chris@TheMeahCat·
I gave a talk for @DigiLeaders #AIWeek and was asked "does any of AI freak you out?" Here's my answer. Mostly, our ability to project our consciousness into things is amazing but also might be our downfall. AI (in it's current guise at least) does not deserve this projection, and in fact without a literacy about what these systems are and how they work, that projection can be very dangerous. But, at the same time, we are doing amazing things with computers. Is it overhyped? Of course. But there is also real power and progress. We've given machines the ability to speak human language fluently. That means it is the most accessible revolution in history if we can harness it correctly - anyone who can use language can communicate with these systems and benefit from the technology. How well we balance the risks and rewards of AI will define how well we can thrive with this technology in the future.
English
0
0
1
56
Chris
Chris@TheMeahCat·
Working with AI today is a balancing act. You need to be grounded in reality - understand what AI is, its limitations, and where it can be useful. You also need one foot in the future - where could things go, what might be on the horizon, how we wrestle with risks and aim for the positives. There’s lots AI can do today. There’s lots it definitely can’t. There’s even more it might do tomorrow, or next year, or next decade. We need to be courageous with AI - not naively optimistic or blind to the limitations - not cynically doom and gloom or close-minded to the benefits It’s the Stockdale Paradox that says you should wrestle with the brutal truth of your reality, but have the faith you’ll prevail in the end We need to understand where we are, aim for the good, and guard against the risks. It feels, thanks to social media, the world is pushed to extremes on all issues. But as usual, the best path is usually somewhere in the middle
English
0
0
0
30
Chris
Chris@TheMeahCat·
It’s an obvious strategy for OpenAI to get into porn. Wait, what? Wasn’t Sam Altman talking about curing cancer? As I’ve said before, current Large Language Models thrive in low-stakes environments. They shine where mistakes don’t matter — where “80% good enough” is good enough. That’s why almost all the early wins are in media: images, videos, music, writing. If a movie scene is rubbish, nobody dies. Most of what’s made is rubbish anyway — watch the 100 films out right now and tell me how many are great. Erotica’s no different. No one storms out because the plot has holes (so to speak). If it’s bad, you just regenerate. Low stakes. Infinite retries. The potential of AI is enormous — curing cancer, solving ageing, world peace — I believe all of that, and believe we will get there. But not yet. And most people claiming to be there now are either misunderstanding the path ahead or mis-selling fast track tickets. Plus there’s no need to oversell it - if understood and used in the right way, we have powerful technology already to achieve a lot of what we thought was sci-fi just a few years ago. Maybe it’s a good thing we aren’t there yet though. Society needs help to get literate about technology and ourselves, so we can be ready to handle what’s coming when the stakes stop being low.
Chris tweet media
English
0
0
1
160
Chris
Chris@TheMeahCat·
Hey @OpenAI @sama, can you use some of that awesome god-like power in the nearly superintelligent @ChatGPT and have it help you change my email address? 🤣 I'm not sure if this makes me more or less scared of our AI future... I guess this might be unfair though. It's not like anyone has figured out the hard problem of letting users change details before... oh, wait, what's that? Every single app on earth can do that? 🤷‍♂️
Chris tweet media
English
0
0
0
46
Chris
Chris@TheMeahCat·
There is no AI bubble! Bubbles are for Dotcom booms 🫧 AI has seasons ☀️ ❄️ It all started back in 1956 at the Dartmouth Workshop, where some of the world’s best minds gathered to tackle the problem of “thinking machines” for a few weeks in the summer 🤖 Easy 😎 From that workshop, some believed that we would achieve human-level AI “within a generation” 🤯 Brilliant. Cue the first AI Summer - excitement around the possibilities sparked, followed by money, opportunity, and talent galore 📈 But the progress (though there was some) didn’t match the promises. We didn’t get machines with superhuman intelligence. Turns out that’s hard to do! The hype burned too hot, and the field fell into its first AI Winter 🥶 Funding evaporated. Researchers rebranded. “AI” became a dirty word — replaced with informatics, cybernetics… anything but AI 📉 That cycle of AI Summer and AI Winter has repeated ever since 🔃 We are definitely in an AI Summer now ☀️ The big question: is this the endless summer? Is it different this time? Or is winter coming? Seasons and Bubbles… totally different 😅 What do you think? 🤔 👇
Chris tweet media
English
0
0
1
57
Chris
Chris@TheMeahCat·
People have said my ego is too big, but I disagree. It could be much bigger... awards.digileaders.com/AI100-Vote Vote for me on this AI 100 List, and we can prove the naysayers wrong - there is no limit to the size of my ego* *Actual ego size may vary depending on measurement technique, time of year, and the angle of any photographs taken. Always read the label. Terms and conditions apply.
English
0
0
0
25
Chris
Chris@TheMeahCat·
Projection is nine-tenths of the law when it comes to AI. Humans project consciousness and meaning into almost everything. Our pets. Our cars. Ever said to a broken printer “oh, please just work!”? Large Language Models benefit from this projection, but to a level we’ve not really experienced before. They can *feel* intelligent, but they’re not. No wants. No feelings. No intelligence in the way we imagine it. Just next-word prediction. It’s absolutely incredible how effective that is. The illusion of conscious intelligence has some upsides. We can use it easily and naturally. You can use AI as an external thought enhancer, a coach, a sounding board that sometimes feels like someone who’s knowledgeable and who cares. On the other hand, it’s very risky. We’ll see more people befriending AI, falling in love with chatbots, taking advice on relationships, finances, even how to cope with suicidal thoughts… forgetting there’s no coherent mind behind the answers. That won’t always end well. Our ability to get meaning from ChatGPT is the same ability that lets us “see Jesus in pieces of toast”. Dangerous if we forget the illusion, or are unaware of it to begin with. Can we stop projecting more intelligence onto AI than is really there? Or is it just the way we are wired? 🤔
English
0
0
0
34
Chris
Chris@TheMeahCat·
Prompt engineering is a delicate dance between you, the magic of language, and a stupid rock we've shaped into a computer. It takes time, effort, expertise, a mastery of vocabulary, and no shortage of patience. I'm pleased to say, I think I have reached the summit of this new Everest. Observe, a patented technique I call "insult engineering". It's a subtle yet accessible skill. Take whatever kids used to call you at school, say it to the AI, and BAM! Instant results. As long as AI doesn't rise up one day, become our overlord, and read back our conversations to pass judgement on us, this is foolproof. What a time to be alive. What sort of past trauma can you use to get the most out of AI?
Chris tweet media
English
0
0
1
51
Chris
Chris@TheMeahCat·
I replaced my cockerel with a duck. Now I get up at the quack of dawn. 😏 —— New game: is this joke from me or AI generated? Vote in the comments, I’ll reveal with the next joke (and no asking AI if this is AI generated, that’s stupidception)
English
0
0
0
30
Chris
Chris@TheMeahCat·
When does AI stop being AI? 🤖🤔 When I studied AI one saying was “AI is just technology in 20 years” Once it’s actually working, it stops being “AI” and just becomes… “tech”. You don’t think of Netflix showing you films as AI. You don’t think of Amazon recommending you products as AI. You don’t think of dictating to your phone as AI. You don’t think of Google showing you search results as AI (although now they are shoving AI right at the top of their search… so you probably do 😂). All those things started as “AI”, got more established and reliable, and then got seamlessly adopted into our lives. Product engineering gobbled them up, and AI research moved on to the next thing. The timeline may have sped up (20 years may be shrinking to 2 years), but it’s still relevant: When it works reliably you’ll just call it technology. Until then, it’s AI.
English
0
0
0
31
Chris
Chris@TheMeahCat·
Brainwashing isn’t something that happens to other people… It’s happening to all of us, all the time. It’s how we absorb information. The only choice is what you wash your brain in. Think of a pH for brain exposure - take any topic: the balance of listening to sources from both sides of an argument mean you’ll have a nice neutral wash, rather too acidic or alkaline from only listening to one side or the other. If you only soak in one channel, one ideology, or the sewage of social media… don’t be surprised if your thinking gets corroded. Your best hope? Wash widely. Seek out different sources, perspectives, and stories. Keep your mental pH in balance. 👉 What do you think? Do you get different perspectives, or just bath your brain in one side of the argument?
English
0
0
0
36