Steve Jackman

8.4K posts

Steve Jackman

@sjeeves

Digital Education Specialist, Musician, #MSCDE @EdinburghUni, #ADE, #APLS. From the UK now in Bangkok. @mufuinternat

Bangkok, Thailand · Joined November 2008
2.6K Following · 2.4K Followers
Steve Jackman retweeted
Robert Wright
Robert Wright@robertwrighter·
Anthropic's Claude helped select hundreds of targets for the opening wave of Iran strikes. There's a good chance that one of them was the elementary school where more than 100 girls died. My latest @NonzeroNews piece. nonzero.org/p/iran-and-the…
676
3K
8.6K
1.8M
Steve Jackman retweeted
Google for Education
Google for Education@GoogleForEdu·
Talk about a real gem!! 💎 Shout out to @EricCurts for building this incredible collection of over 100 Gems for schools. We’re loving the creative ways teachers can put them to work in their classrooms with the @GeminiApp! Check it out! edugems.ai
Eric Curts@ericcurts

💎 100 of the Best Gemini Gems for Schools controlaltachieve.com/2026/02/100gem… 🧭 Curriculum 💡 Instructional Materials 🙋 Engagement 🔑 Support 💯 Assessment & Data 📖 Literacy 🧪 Math & Science 🧒 Student Tools 💼 Professional Tasks #edtech @GoogleForEdu @GeminiApp

7
48
185
23.3K
Steve Jackman retweeted
Carl Hendrick
Carl Hendrick@C_Hendrick·
Why haven't we had a great AI novel yet? Because LLMs are compression engines trained on the statistical regularities of human text. When generating stories, they produce a kind of average of all stories in the training distribution. This is why LLM fiction tends to be so generic: competent but undistinguished prose, predictable character arcs etc.

This kind of "average of averages" explains not just why LLM fiction is bad but why it's bad in such a specific and recognisable way. It's never incoherent or offensive. It's never wrong exactly. It's just… beige.

And scaling doesn't work either. More parameters and more data push the model toward a higher-resolution average, not away from averaging itself. You get more polished averages, not more distinctive points. Better beige is still beige.

This is the "ill-defined domain" problem. AI is currently very good in certain domains: code compiles or it doesn't, a proof holds or it fails etc. But literature doesn't work like this at all. There's no compiler for narrative, and no readily formalisable criteria for a truly great work of fiction.
Ethan Mollick@emollick

So far “telling a satisfying and well-written medium-length story” has proved far harder for LLMs than mathematical proofs, music generation, research reports, code, and many other forms of work. The technical reasons are pretty clear, but they are supposed to be language models

118
171
1.2K
96.6K
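The scaling point in the thread above — that more data yields a sharper average rather than an escape from averaging — can be illustrated numerically. This is a toy sketch under invented assumptions (Gaussian "originality scores" standing in for stories), not a claim about how any real model is trained: averaging ever-larger samples converges on the population mean, never on the distinctive tails.

```python
import random
import statistics

random.seed(42)

# Toy "stories" as scores on an originality axis: most are middling,
# a few extreme outliers are the distinctive works.
population = [random.gauss(0, 1) for _ in range(100_000)]

for n in (10, 1_000, 100_000):
    sample_mean = statistics.fmean(population[:n])
    print(f"n={n:>7}: average = {sample_mean:+.3f}")

# The average tightens toward 0 as n grows ("better beige"); it never
# approaches the tails of the distribution, where distinctive work lives.
```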
Nia
Nia@nia_thinks·
You can only use 3 tools to build your entire SaaS. What are they?
81
4
92
7.8K
Ajey Gore
Ajey Gore@AjeyGore·
What’s the work stack people are settling on? Notion, NotebookLM, Obsidian? Plus something else? I am now scattered all over….
26
1
18
6.9K
Steve Jackman retweeted
Aditya Agarwal
Aditya Agarwal@adityaag·
It's a weird time. I am filled with wonder and also a profound sadness.

I spent a lot of time over the weekend writing code with Claude. And it was very clear that we will never ever write code by hand again. It doesn't make any sense to do so. Something I was very good at is now free and abundant. I am happy...but disoriented.

At the same time, something I spent my early career building (social networks) was being created by lobster-agents. It's all a bit silly...but if you zoom out, it's kind of indistinguishable from humans on the larger internet.

So both the form and function of my early career are now produced by AI. I am happy but also sad and confused. If anything, this whole period is showing me what it is like to be human again.
467
1.8K
15.8K
3.3M
Brian Roemmele
Brian Roemmele@BrianRoemmele·
I may need to sport the 500 year timeless look of the chap on the left. But I think my socks already have that smell?
13
16
274
59.3K
Thor House
Thor House@morphman777·
This is the collection of photographs I chose to reflect the life of my dad, Simon House. The music is Psychestra which is a collaboration that we did back in 2011. I hope you enjoy youtube.com/watch?v=AA8jQH…
1
0
0
310
Steve Jackman retweeted
PoliticsJOE
PoliticsJOE@PoliticsJOE_UK·
Tony Benn would have been 100 today. Here's the Labour titan's stunning anti-war speech, as powerful now as it was in 1998.
48
801
3.2K
134K
Jacob Klug
Jacob Klug@Jacobsklug·
After generating $250K (last 2 months) I built a playbook for @lovable apps—and I’m giving it away. In just two months, we cracked the code to building apps with AI. I’ve distilled everything we learned into this single document. Comment "Build" and drop a follow. I’ll DM it to you. P.S. This will likely blow up, so give me some time to reply.
6.3K
192
3.2K
731.7K
Steve Jackman
Steve Jackman@sjeeves·
@PamBurnard Any links to full presentation, slides, research would be awesome to be able to explore!
0
0
0
19
Pam Burnard
Pam Burnard@PamBurnard·
Brilliant as ever, Georgina Born theoretically framing and critiquing Music and AI at the Faculty of Music, University of Cambridge, last week. One of my all-time inspiring academic-heroes.
1
1
2
329
Steve Jackman retweeted
Sandford Police Commentary
Sandford Police Commentary@Sandford_Police·
If Trump is looking to expand the United States of America, can we propose giving him Luton and Slough..
340
198
2.8K
98.5K
Steve Jackman
Steve Jackman@sjeeves·
Love this super simple explanation of AI bias…
Brian Roemmele@BrianRoemmele

Why is the time 10:10 in AI land? The 10:10 bias, and how understanding it applies to countless subjects in AI.

When artificial intelligence (AI) is tasked with generating images of analog watches, one fascinating and predictable outcome is that the watch hands often default to the 10:10 position. This phenomenon is not a mere coincidence but a direct result of the training data AI models are exposed to. Most AI models, particularly those used for image generation or recognition, are trained on vast datasets of images sourced from the internet, advertisements, and other visual media. Because watch companies have historically standardized on the 10:10 time in their marketing materials, this bias is naturally inherited by AI systems.

Why Watch Companies Use 10:10

The 10:10 time is a nearly universal standard for analog watch advertisements. There are several reasons for its adoption:

1. Aesthetic Balance
- When set to 10:10, the watch hands create a symmetrical "V" or "smile" shape on the watch face. This symmetry is visually appealing and draws attention to the design of the watch.
- The placement avoids overlap with other key elements on the watch face, such as the manufacturer’s logo (usually at 12 o’clock) and complications like date windows or subdials (often at 3, 6, or 9 o’clock).

2. Positive Connotations
- The upward-pointing hands resemble a smile, which subconsciously conveys positivity and happiness to viewers. Marketing psychology often leverages subtle cues like this to create an emotional connection with potential buyers.

3. Brand and Logo Visibility
- Many watchmakers place their logos or brand names at the 12 o’clock position. The 10:10 hand placement frames the logo without obscuring it.
- Additional features like subdials or text at 6 o’clock are also unobstructed.

4. Historical Tradition
- The 10:10 convention has been used for decades, creating an industry-wide norm.

While smaller deviations exist (e.g., 10:08 or 10:09 in some ads), they are rare. The tradition has become so entrenched that deviating from it might make a watch ad look unusual or less professional. Interestingly, some brands experimented with other hand positions in the past. For example, Bulova once used the 8:20 position in its advertisements. However, the downward-facing hands were seen as creating a "frown" and evoking a negative emotional response, leading most brands to settle on 10:10.

How AI Inherits the 10:10 Bias

AI models trained on large datasets learn patterns and associations from the images they process. When it comes to watches, these datasets overwhelmingly consist of marketing materials and product images from the internet. Because the vast majority of these images feature watches set to 10:10, AI systems internalize this as a default state for analog watch faces.

The 10:10 Bias in AI

When prompted to generate images of watches, image AI defaults to 10:10 hand positions. If explicitly instructed otherwise, it will try to comply, but the concept of an analog watch face degrades. Additionally, AI models used for image recognition often incorrectly classify a watch with a non-10:10 hand position as "unusual" or "anomalous" because it deviates from the pattern seen in the training data.

The 10:10 Bias and You

The 10:10 bias shows we must be aware of biases inherent in training data. While the 10:10 bias in watch images is relatively harmless, it highlights a broader issue: AI models will replicate and amplify the patterns they are exposed to, even if those patterns are culturally or industry-specific. One example: AI systems have been trained on the biases of human interactions as seen on Reddit and Facebook. This bias builds an inaccurate picture of what human communications are really about. It is one reason I use training data that is not from these sources, and usually from before the internet era.

Knowing these issues also affords a power over AI few can imagine.

0
0
0
198
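The quoted thread's core mechanism — a model trained on skewed data inheriting the data's mode as its default — can be sketched with a toy frequency model. The counts below are hypothetical and this is an illustration of the bias mechanism, not how image generators actually work:

```python
from collections import Counter

# Toy "training set": hand positions scraped from watch ads.
# Marketing images overwhelmingly show 10:10 (hypothetical counts).
training_times = ["10:10"] * 95 + ["3:45", "6:30", "8:20", "12:00", "2:15"]

model = Counter(training_times)

def generate_watch_time() -> str:
    """Return the most likely hand position under the learned frequencies."""
    return model.most_common(1)[0][0]

print(generate_watch_time())  # → 10:10, the bias inherited from the data
```

No matter which rare position a prompt asks for, a purely frequency-driven default keeps pulling generation back toward the overrepresented mode — which is the behaviour the post describes.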
Vicky Heng
Vicky Heng@vickyyhengg·
Excited to be at the Apple Store KL, cheering on my amazing ADE friends as they led inspiring Today at Apple sessions! 💻🎶 Learned some cool coding tricks and music creation skills along the way—always something new to discover! #TodayAtApple #AppleEduChat
3
3
20
934