Darryll Colthrust

5.4K posts

@dcolthrust

I mainly talk about AI and the Future of Work. @chaptr_xyz

London · Joined March 2011
2.4K Following · 1.3K Followers
Darryll Colthrust reposted
Overworld
Overworld@overworld_ai·
Today, we’re releasing a research preview of our real-time, local-first world model built for interactive, playable AI worlds: 60fps, locally run, all on consumer-grade hardware. Come take a look ⬇️
[GIF]
Darryll Colthrust reposted
Anthropic
Anthropic@AnthropicAI·
New from the Anthropic Economic Index: the first comprehensive analysis of how AI is used in every US state and country we serve. We've produced a detailed report, and you can explore our data yourself on our new interactive website.
Darryll Colthrust
Darryll Colthrust@dcolthrust·
⚡ Google has released a comprehensive methodology for measuring the energy, water, and carbon emissions of its AI models in a live production environment, across the full infrastructure. A single median Gemini text prompt consumes:
⚡ Energy: 0.24 Wh = 9 secs of TV
🌍 Carbon: 0.03 gCO2e (for comparison, boiling one cup of water emits ~7 g of CO₂e)
💧 Water: 0.26 mL = 5 drops of water
It's great that Google has provided this level of detail, and these numbers are tiny on an individual level. It really does help put our normal carbon footprint into perspective.
However, Google doesn't only serve one person. Gemini now has ~400 million monthly active users, with typical engagement patterns of about 4 visits per month and around 3 pages viewed per visit. This gives us a rough estimate of over 64 billion queries annually!
But the story doesn't end there, because Google has also been increasing its efficiency. Over a 12-month period (May 2024-May 2025), Google achieved significant reductions in the environmental impact of its Gemini Apps:
- 33x reduction in median energy consumption per text prompt
- 44x reduction in carbon footprint per text prompt
I expect this reduction to continue, which tends to be the usual pattern with tech. More efficient image generation and editing is likely the next frontier, if nano banana is anything to go by. 🍌
I'm looking forward to seeing all the other model providers follow suit.
What are your thoughts on Google's calculations and per-query consumption? #AI #carbonfootprint #Google youtu.be/aarDw3sooYE
[YouTube video]
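Scaling the per-prompt figures above to the post's rough estimate of ~64 billion annual queries is simple arithmetic. A quick sketch; the per-prompt inputs are the quoted Google figures, the query count is the post's own estimate, and the annual totals are my multiplication:

```python
# Scale Google's published per-prompt Gemini figures (0.24 Wh, 0.03 gCO2e,
# 0.26 mL) to the post's ~64 billion annual query estimate.
QUERIES_PER_YEAR = 64e9          # post's estimate, not an official figure
ENERGY_WH_PER_PROMPT = 0.24      # median text prompt
CARBON_G_PER_PROMPT = 0.03
WATER_ML_PER_PROMPT = 0.26

energy_gwh = QUERIES_PER_YEAR * ENERGY_WH_PER_PROMPT / 1e9       # Wh -> GWh
carbon_tonnes = QUERIES_PER_YEAR * CARBON_G_PER_PROMPT / 1e6     # g -> tonnes
water_megalitres = QUERIES_PER_YEAR * WATER_ML_PER_PROMPT / 1e9  # mL -> megalitres

print(f"~{energy_gwh:.1f} GWh, ~{carbon_tonnes:,.0f} t CO2e, "
      f"~{water_megalitres:.1f} ML of water per year")
```

Tiny per query, but the aggregate lands in the gigawatt-hour and kilotonne range, which is why the efficiency curve matters.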
Cas.Fyn
Cas.Fyn@FynCas·
I Made €153K With My VEO AI UGC Agent — in Just 24 Days
No team. No agency. Just 4 AI creators and one repeatable system.
- €153K collected
- 100+ video ads/day
- Posted across TikTok & Meta
Zero filming, zero editing, zero freelancers.
These creators aren’t real — but they sell like pros. Product demos, CTAs, routines — fully AI-generated.
The stack:
- MakeUGC: AI creators + voices
- VEO 3: AI hooks
- Daily testing, instant optimization
I packaged the whole thing: tools, strategy, ads, and my €150K/month system.
Comment “UGC” & I’ll send it over. (Must be following)
[image]
Darryll Colthrust
Darryll Colthrust@dcolthrust·
Last week was a socially hectic week, leading to a depleted social battery by the end of it! However, I can summarise it all with... "I'm proud to be working at Holtzbrinck Publishing Group."
I love the variety of professional activities, the people, the collective wisdom shared, the problems to be solved, plus I get the freedom to manage my duties and responsibilities as a father and husband.
- London Book Fair
- AI Board Prep with Katharina Neubert and Filmon Zerai
- CEO retreat
- Dinners
- All my usual meetings
- Coding in the evenings
- Managing CHAPTR with my friend Andy Ländle
- Secondary school inductions
- 11yr old birthday party with VR (easily the hardest activity!)
- 70yr old birthday party weekend
@digitalsci @MacmillanUSA @MacmillanLearn @panmacmillan @SpringerNature @DIEZEIT Holtzbrinck Buchverlage GmbH #father #husband #CEO
[4 images]
Matt Shumer
Matt Shumer@mattshumer_·
If you struggle to take notes during meetings, comment below or DM me. Opening up an alpha for a new product, today. First come first serve, so if you want access, let me know.
Darryll Colthrust
Darryll Colthrust@dcolthrust·
💯 agree. That's why I like using these tools: they help me learn how to unleash some of my own creative thinking. We should also remember that creativity comes in many forms, not just art and writing. I'm 💯 certain that if I look at the code behind these tools, it will be creative and I will be inspired. 😉
Elona Mars
Elona Mars@elonamars·
@dcolthrust The technology is incredible, but humanity remains the true inspiration behind creativity.
Darryll Colthrust
Darryll Colthrust@dcolthrust·
⁉️ What is truth? Is community-driven fact-checking the future of online "Truth"?
In an era when AI-driven deepfakes and manipulated media swirl across our feeds, traditional fact-checkers face an uphill battle against sheer volume. In 2019, Pew Research found that 64% of U.S. adults say fabricated news fuels major confusion about current events. Fast-forward to 2024, and this concern has now reached 86% in the UK!
Platforms like X, and soon Meta, believe the answer lies in community-driven oversight. Community Notes uses a very complex open-source algorithm that works as follows:
1️⃣ User Contributions: Users can apply to become contributors. Once approved, they can write notes to provide additional information or correct inaccuracies in tweets.
2️⃣ Diverse Agreement: For a note to be published and visible to all users, it must be rated as helpful by contributors with differing viewpoints. This ensures that the added context is balanced and not biased towards a particular perspective.
3️⃣ Algorithmic Evaluation: The system uses an algorithm that considers the diversity of ratings to determine a note's helpfulness. Notes that receive approval from a broad spectrum of contributors are more likely to be displayed.
It crowdsources additional context, quickly exposing misleading posts. On one hand, community input can be fast and transparent; on the other, critics warn that bias might creep in if a platform’s users are polarized. Yet these shifts signal a broader movement, one that recognizes the power of collective vigilance, especially when supported by professional expertise for more complex issues like health or science.
The accelerating rate of misinformation and the ease of content generation lead me to believe that both the community AND fact-checkers are required. However, to be truly effective, this requires a scale of involvement that is currently beyond us and will not dispel the current and continuous tidal wave of false content online.
👉🏾 How do we incentivise the truth? Mark's address: about.fb.com/news/2025/01/m… #society #ai #truth #communitynotes
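The "diverse agreement" step described above can be illustrated with a toy gate. To be clear, this is not X's actual algorithm (which infers viewpoints via matrix factorisation over the rating matrix); the cluster labels, thresholds, and sample ratings here are invented for illustration only:

```python
# Toy sketch of diverse-agreement gating: a note only surfaces if raters
# from at least two *different* viewpoint clusters each find it helpful.
def note_is_helpful(ratings, min_per_cluster=2, threshold=0.7):
    """ratings: list of (viewpoint_cluster, helpful: bool) pairs."""
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)
    # Require enough ratings from at least two distinct clusters...
    qualified = {c: rs for c, rs in by_cluster.items() if len(rs) >= min_per_cluster}
    if len(qualified) < 2:
        return False
    # ...and a high helpfulness rate within *every* qualifying cluster.
    return all(sum(rs) / len(rs) >= threshold for rs in qualified.values())

# Cross-cluster consensus -> note shows; one-sided praise -> it doesn't.
mixed = [("left", True), ("left", True), ("right", True),
         ("right", False), ("right", True), ("right", True)]
one_sided = [("left", True)] * 5
print(note_is_helpful(mixed), note_is_helpful(one_sided))  # True False
```

The design point is that raw vote counts are never enough on their own; agreement must bridge the viewpoint divide.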
Darryll Colthrust
Darryll Colthrust@dcolthrust·
After listening to Jensen's CES 2025 opening keynote, you quickly understand that this diagram will look very different by the end of the year. #ces2025 #ai
[image]
Darryll Colthrust
Darryll Colthrust@dcolthrust·
Socrates famously declared, "I know that I know nothing", and this is exactly the mental state I want to retain in 2025. 🙇🏾
I spent most of the Christmas and New Year break immersing myself in articles, podcasts, essays and academic papers. The more I read, the more I realised there is so much more to learn! 🤯
Navigating the unknown isn't always comfortable; however, at this juncture, pushing past that uncomfortable feeling and fanning that spark of curiosity is crucially important. Within our lifetime, we will see the birth of a world where knowledge, reasoning and physical execution will be codified. Many human skills, crafted over years of blood, sweat and tears, will be reassessed as machines complete many tasks faster, better and cheaper.
This progress only fuels my curiosity and keeps me from settling into complacency. By continuously challenging my assumptions, I try to apply structured Socratic inquiry to open myself to new possibilities. This helps me recognise that gaps aren't a weakness; they're the crucial first step toward true discovery, ONLY IF I attempt to plug them.
Before the busyness of 2025 starts, take a moment to challenge your current assumptions. A single question could push you to think differently and pave the way to your next big insight.
Blooper video generated with Pika 2.0. It was supposed to be Socrates and I deep in discussion. 🤣
#Innovation #AI #DigitalTransformation #CriticalThinking #SocraticQuestioning
Darryll Colthrust
Darryll Colthrust@dcolthrust·
Currently testing @OpenAI's new Sora app, especially the storyboard feature, and it's not too bad! Here is my first test. I decided to take some inspiration from Pan Macmillan's Dracula book. Sound effects to come. 🎶
Immediate thoughts:
👉🏾 Clean interface, simple onboarding intro and fairly self-explanatory.
👉🏾 Prevents the generation of content where a face is used as input, even if it's AI-generated. I tried with some Midjourney images.
👉🏾 Adding styles is important because the model's interpretation of the prompt sometimes varies wildly, and with limited credits I'd want to be more precise with an input reference to steer the generation.
👉🏾 Setting the mood for the scene makes a big difference to the feel.
👉🏾 The storyboard feature works. It'll be picked up in other editing suites.
👉🏾 It won't be long before the storyboard prompts are automatically generated. I used 4o to generate the prompts and scripts.
👉🏾 It has no problem creating faces or people when prompted.
👉🏾 Text generation is freakishly good. I've tested text for transitions.
👉🏾 The credits situation is a blocker because they run out very quickly.
👉🏾 For v1, this is good, and great storytelling is definitely still necessary.
#sora #ai #books #storytelling #technology
Darryll Colthrust
Darryll Colthrust@dcolthrust·
Tencent just launched HunyuanVideo, an open-source AI model designed for text-to-video generation with 13 billion parameters. More planned.
Text-to-Video Model
✅ Inference
✅ Checkpoints
📆 Penguin Video Benchmark
📆 Web Demo (Gradio)
📆 ComfyUI
📆 Diffusers
Image-to-Video Model
📆 Inference
📆 Checkpoints
Project Page: aivideo.hunyuan.tencent.com
Code: github.com/Tencent/Hunyua…
#AI #video #technology
Darryll Colthrust
Darryll Colthrust@dcolthrust·
I thought I'd give ChatGPT-4o a try with some of the examples in OpenAI's blog post. Prompt: "Create a caricature of this photo." The input is a picture of me and the output is the generated image. Getting some early Gemini blooper vibes. 😂 #OpenAI #AI #technology #ChatGPT #ChatGPT4o
[4 images]
Darryll Colthrust reposted
Rohan Paul
Rohan Paul@rohanpaul_ai·
Llama 3 degrades much more than Llama 2 when quantized. 🤔
📌 The most likely reason: Llama 3, trained on a record 15T tokens, captures extremely nuanced data relationships, fully utilizing even the minutest decimals of BF16 precision, which makes it more sensitive to quantization degradation.
📌 So sensitive that even the smallest decimal points of each parameter offered by BF16 precision were filled and had a purpose. Other LLMs were trained on far less data (around 2T tokens), and thus did not have time to saturate the smaller precision ranges of their parameters the way Llama did, so they are not affected by quantization as much.
📌 It's pretty well established that bigger models are more data efficient and absorb more new information per exposure: they have more unused information-storing capacity. But a model that has been trained to within an inch of its life does not have that capacity, and because it is jam-packed, that lack of capacity also results in a lack of redundancy within the network.
So at a fixed, small parameter count, a model that has been pretrained for far longer, like Llama 3 8B in comparison to earlier similarly-sized models, might have begun to saturate the capacity of its trainable params, and thus not absorb much from continued training. Therefore, modifying the network is more damaging to overall performance with a smaller model (Llama 3 8B) that has been trained to such freakishly high levels of saturation, i.e. altering one node causes a cascade of brokenness because of that lack of redundancy.
[image]
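The saturation argument above can be made concrete with a toy round-trip quantizer: weights that encode information in tiny decimal differences collapse onto the same int8 grid point, while coarsely spaced weights survive. A pure-Python sketch with invented weight values, not a real LLM quantization pipeline:

```python
# Toy symmetric int8 round-trip: float -> 8-bit grid -> float.
# The grid step is max|w| / 127, so any distinction between weights
# smaller than that step is destroyed by quantization.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) * scale for w in weights]

# Coarsely spaced weights: every value lands on its own grid point.
coarse = [0.5, -1.0, 0.25, 1.0]
# "Saturated" weights packing information into a 0.0001 difference,
# far below the int8 step (~0.0079 here).
fine = [0.5001, 0.5002, -1.0, 1.0]

q = quantize_int8(fine)
print(len(set(quantize_int8(coarse))))  # 4: all coarse weights stay distinct
print(q[0] == q[1])                     # True: the fine distinction is erased
```

A model that genuinely relies on those fine distinctions (the thread's claim about Llama 3) therefore has more to lose from the same grid than one that does not.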
Darryll Colthrust
Darryll Colthrust@dcolthrust·
@Boardwavers and FPE hosted a great breakfast session yesterday on revolutionising B2B software sales for scalability. We got the opportunity to listen to the journeys of Pippa Begg at Board Intelligence and Abakar Saidov at @BeameryHQ. Some interesting takeaways for me as @chaptr_xyz gears up to launch our products into the market, backed by the Holtzbrinck Publishing Group.
👉🏾 Following the SaaS playbook with CxOs doesn't always work (well).
👉🏾 Use product managers alongside your sales team for the first 5 clients. A new sales team won't necessarily fully believe in the product in the same way until there are enough clients.
👉🏾 For a new product on the market, create content that answers existing questions people are asking, but also guides them to your new product, which may answer different questions they didn't know to ask.
Thanks again to Kath Easthope and Henry Sallitt for hosting.
#boardwave #sales #B2B #europe
[4 images]