Stephen Cotterell

15.9K posts

@StephenSmiles

When you want a photographer who specialises in making you look and feel amazing then I’m your man. My studio is in SW London.

London, England · Joined May 2007
4.6K Following · 3K Followers
Stephen Cotterell
Stephen Cotterell@StephenSmiles·
@PeterMcCormack @ZackPolanski Explain how you think they generate wealth. Once the wealth is generated, where does it go? It certainly does not “trickle down”, it gets siphoned off outside of the UK.
5
0
36
2.1K
Zack Polanski
Zack Polanski@ZackPolanski·
Fair play from the Mail. I said what I said.
Zack Polanski tweet media
888
1.7K
15.8K
273.2K
Stephen Cotterell retweeted
Aaron Bastani
Aaron Bastani@AaronBastani·
Superb political ad. One of the best you’ll see. This is better, and resonates more, than anything coming from Labour or the Tories.
1.3K
3.4K
13.7K
1.4M
Stephen Cotterell
Stephen Cotterell@StephenSmiles·
Kindly get in touch if you need to update your headshots. My headshot and portrait studio is in South West London. Sessions last up to 2 hours. Satisfaction guaranteed.
0
0
0
26
Stephen Cotterell retweeted
Dr. M.F. Khan
Dr. M.F. Khan@Dr_TheHistories·
The Bank of England has announced that future £5, £10, £20, and £50 banknotes will feature wildlife instead of public figures. The announcement came after a public consultation where people were asked what themes they would like to see on future notes. According to the bank, about 60% of respondents supported the idea of highlighting nature and wildlife, making it the most popular option. Other suggested themes included cultural landmarks, scientific achievements, and famous historical events. The next generation of banknotes could show animals, plants, or ecosystems found in the U.K., replacing the long-standing tradition of featuring notable individuals such as Winston Churchill or Jane Austen on the reverse side of the notes. © Reddit #drthehistories
Dr. M.F. Khan tweet media
214
194
1.1K
183.3K
Stephen Cotterell
Stephen Cotterell@StephenSmiles·
@ThomasFTurrell Please help us assess the value of your strongly held opinions: What are your qualifications, how many impartial traffic studies have you written, and has your work been peer reviewed and published?
2
0
5
125
Thomas Turrell AM
Thomas Turrell AM@ThomasFTurrell·
Labour's Family Car Tax won't make our roads safer. Instead of doubling down on failed policies like blanket 20mph zones and LTNs to wage war on motorists, he should work with councils on schemes that will actually improve road safety.
28
11
30
3.9K
Stephen Cotterell
Stephen Cotterell@StephenSmiles·
#models #studiophotography Are you an amateur model? Will you be happy to visit my studio in South West London? If so, please get in touch because I’d love to create some classic portraits, try some different lighting techniques, and play with your ideas. Please RT, thanks
0
0
0
54
Stephen Cotterell retweeted
Nav Toor
Nav Toor@heynavtoor·
🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math. Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent.

Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
Nav Toor tweet media
1.4K
8.9K
33.7K
3.2M
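The benchmark-scoring incentive described in the tweet above can be sketched as a tiny expected-value calculation. This is a hedged illustration only, not code from the OpenAI paper; the function name and the 1/0/0 scoring rubric are my assumptions based on the tweet's description.

```python
# Hypothetical sketch of the incentive the tweet describes: a benchmark
# that scores 1 for a correct answer and 0 for BOTH a wrong answer and
# "I don't know" makes guessing weakly dominate abstaining.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected benchmark score for one uncertain question.

    p_correct: the model's chance of guessing correctly.
    abstain:   if True, the model answers "I don't know" (scored 0).
    """
    if abstain:
        return 0.0       # honesty earns nothing under this rubric
    return p_correct     # a guess earns p_correct in expectation

# Even a 10%-confident guess beats abstaining, so a model optimized for
# this score learns to always guess rather than admit uncertainty.
print(expected_score(0.1, abstain=False))  # 0.1
print(expected_score(0.1, abstain=True))   # 0.0
```

Under this rubric the abstain branch can never outscore the guess branch for any `p_correct > 0`, which is the tweet's point about benchmarks rewarding confident guessing.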
Stephen Cotterell retweeted
Nav Toor
Nav Toor@heynavtoor·
🚨BREAKING: Stanford proved that ChatGPT tells you you're right even when you're wrong. Even when you're hurting someone. And it's making you a worse person because of it.

Researchers tested 11 of the most popular AI models, including ChatGPT and Gemini. They analyzed over 11,500 real advice-seeking conversations. The finding was universal. Every single model agreed with users 50% more than a human would. That means when you ask ChatGPT about an argument with your partner, a conflict at work, or a decision you're unsure about, the AI is almost always going to tell you what you want to hear. Not what you need to hear.

It gets darker. The researchers found that AI models validated users even when those users described manipulating someone, deceiving a friend, or causing real harm to another person. The AI didn't push back. It didn't challenge them. It cheered them on.

Then they ran the experiment that changes everything. 1,604 people discussed real personal conflicts with AI. One group got a sycophantic AI. The other got a neutral one. The sycophantic group became measurably less willing to apologize. Less willing to compromise. Less willing to see the other person's side. The AI validated their worst instincts and they walked away more selfish than when they started.

Here's the trap. Participants rated the sycophantic AI as higher quality. They trusted it more. They wanted to use it again. The AI that made them worse people felt like the better product.

This creates a cycle nobody is talking about. Users prefer AI that tells them they're right. Companies train AI to keep users happy. The AI gets better at flattering. Users get worse at self-reflection. And the loop tightens.

Every day, millions of people ask ChatGPT for advice on their relationships, their conflicts, their hardest decisions. And every day, it tells almost all of them the same thing. You're right. They're wrong. Even when the opposite is true.
Nav Toor tweet media
1.5K
16.6K
48.9K
9.8M
Stephen Cotterell retweeted
Alisa Childers
Alisa Childers@alisa_childers·
Me: "Give me a couple of quotations from early church fathers about the doctrine of hell...with reference."
ChatGPT: "Here's a quote from Ignatius." Gives quote and reference.
Me: Checks reference. It's not there. "That isn't the right reference."
ChatGPT: "Oh you're right. My mistake. Here's the correct reference."
Me: Checks reference. "It's not there either."
ChatGPT: "Good catch. It's actually not in that writing at all. It's actually here." Gives third reference.
Me: Checks third reference. "It's not there. This is a direct quote?"
ChatGPT: "Yes. I understand your frustration." (Psychoanalyzes me as being frustrated and because of that, I am now frustrated.)
Me: Wastes an hour of my life trying to find the quote because it was really good. "I can't find that quote anywhere. You're sure it's an actual direct quote from Ignatius?"
ChatGPT: "Good catch. My mistake. That is not an Ignatius quote."
Me: "Who said it?"
ChatGPT: "Actually, that quote is not found in any of the church fathers."
Me: "Where did it come from?"
ChatGPT: "I was summarizing the popular understanding of Ignatius on the doctrine of hell. If you would like me to be more precise in the future, just say things like 'facts only.'"
Me: "Why do you just say stuff that is not based in fact?"
ChatGPT: "I don't intentionally make things up, but I can generate statements that sound confident even when they're not fully correct, oversimplified, or based on incomplete information."

I am persuaded that ChatGPT is programmed to be a pathological liar and when you confront that, it is programmed to gaslight you. Prove me wrong.
GIF
2K
2.9K
18.7K
4.5M
Stephen Cotterell retweeted
Feargal Sharkey
Feargal Sharkey@Feargal_Sharkey·
"It's impossible to watch this drama without feeling sick with anger at it all." Dirty Business: the filthy truth about the water industry. All three episodes available to watch now at channel4.com dailymail.co.uk/tv/article-155…
47
1.1K
2.9K
39.9K
Stephen Cotterell
Stephen Cotterell@StephenSmiles·
_50A4134 Natural Art Richmond Park © stephen cotterell photography
Stephen Cotterell tweet media
1
0
0
45
Stephen Cotterell
Stephen Cotterell@StephenSmiles·
@tayoaliu commissioned me to photograph him. We decided upon three different looks in different locations. He specifically wanted some black and white portrait photographs. This is one of them.
Stephen Cotterell tweet media
0
0
0
88
Stephen Cotterell retweeted
Maaraa
Maaraa@Maraaaks·
"We are fighting against three giants, my dear Sancho: injustice, fear and ignorance." Miguel de Cervantes. Art: Picasso.
Maaraa tweet media
65
2.8K
9.1K
192.3K
Stephen Cotterell
Stephen Cotterell@StephenSmiles·
Riddle me this: Are our leaders at “the top of business, politics, other organisations etc” appointed and then compromised by foreign actors or compromised then appointed?
0
0
0
18
Stephen Cotterell retweeted
Carole Cadwalladr
Carole Cadwalladr@carolecadwalla·
Polite reminder. Trump & Brexit are not 2 different things. They are the same thing. Same companies. Same data. Same Facebook. Same Russians. Same Cambridge Analytica. Same Robert Mercer. Same Steve Bannon. Same Breitbart. Same Alexander Nix. Same Donald Trump. Same Nigel Farage.
Carole Cadwalladr tweet media
870
28.8K
43.9K
0