Robert Matheson

1.7K posts

@Robert_Matheson

Community engaged artist. Founder of the NFT Museum in Newberry, SC. Organizer of Newberry Made. #GiveAlways #KeepCreating

Newberry, SC · Joined November 2011
608 Following · 1.1K Followers
Pinned Tweet
Robert Matheson @Robert_Matheson
@ChatGPTapp image functionality is incredible but the #censorship is ridiculous. Prompt: "Research the latest political news and create a satirical image of a top story." REJECTED by GPT 😡 @grok No prob ✨ AI TOOLS SHOULD ALLOW FOR FREEDOM OF EXPRESSION!
[attached image]
Robert Matheson retweeted
beeple @beeple
OOF.
Robert Matheson @Robert_Matheson
@elonmusk @therabbithole Why would we care? Focused on race much? You must know it's a made-up concept. It means nothing except to those who perpetuate it.
Nevglov @nevilleglover
The Ones That Went Before Us Editions: 5 Listed: 1.75 #tezos Link Below: 👇👇👇
Robert Matheson @Robert_Matheson
@jconorgrogan @joannejang This exactly. I asked GPT to research the top news stories and create a satirical image - DENIED. This is done every day by humans. Free speech might hurt someone's feelings but it's a right we've enshrined in Western culture for 250 years. Ridiculous.
Conor @jconorgrogan
@joannejang What does safety have to do with image censorship? Why don't we use precise language about what this is, and how it is distinct from other forms of AI safety efforts? Anyone can already draw anything today, or use Photoshop to make anything. Is the world "unsafe" as it is?
Joanne Jang @joannejang
// i lead model behavior at openai, and wanted to share some thoughts & nuance that went into setting policy for 4o image generation. features capital letters (!) bc i published it as a blog post:

--

This week, we launched native image generation in ChatGPT through 4o. It was a special launch for many reasons — one of which our CEO Sam highlighted as "a new high-water mark for us in allowing creative freedom." I wanted to unpack that a bit, as it could be easily missed by those not deep in AI or closely following our evolving thoughts on model behavior (wh… what do you mean you haven't read the sixty-page Model Spec in your free time??).

tl;dr we're shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm. The goal is to embrace humility: recognizing how much we don't know, and positioning ourselves to adapt as we learn.

Images are visceral

There's something uniquely powerful and visceral about images; they can deliver unmatched delight and shock. Unlike text, images transcend language barriers and evoke varied emotional responses. They can clarify complex ideas instantly. Precisely because images carry so much impact, we felt even more heft — relative to other launches — in shaping policy and behavior.

Evolving perspectives on launching what feels like a new capability

When it comes to launching (what feels like) a new capability, our perspective has evolved across multiple launches:

1. Trusting user creativity over our own assumptions. AI lab employees should not be the arbiters of what people should and shouldn't be allowed to create. We're always humbled after launch, discovering use cases we never imagined — or even ones that seem so obvious in hindsight but didn't occur to us from our limited perspectives.

2. Seeing risks clearly, but not losing sight of everyday value to users. It's easy to fixate on potential harms, and broad restrictions always feel safest (and easiest!). We often catch ourselves questioning, "do we really need better meme capabilities when the same memes could be used to offend or hurt people?". But I think that framing itself is flawed. It implies that subtle, everyday benefits must justify themselves against hypothetical worst-case scenarios, which undervalues how these small moments of delight, humor, and connection genuinely improve people's lives.

3. Valuing unknown, unimaginable possibilities. Maybe due to our cognitive bias against loss aversion, we rarely consider the negative impacts of inaction; some people refer to it as "invisible graveyards" although that's a bit too morbid and extreme. There are second order or indirect impacts unlocked by a new capability: all the positive interactions, innovations, and ideas from people that never materialize simply because we feared the worst-case scenario.

How we thought about policy decisions for Day 1

Navigating these challenges is hard, but we aimed to maximize creative freedom while preventing real harm. Some examples from our launch decisions:

- Public figures: We know it can be tricky with public figures — especially when the lines blur between news, satire, and the interests of the person being depicted. We want our policies to apply fairly and equally to everyone, regardless of their "status". But rather than be the arbiters of who is "important enough", we decided to create an opt-out list to allow anyone who can be depicted by our models to decide for themselves.

- "Offensive" content: When it comes to "offensive" content, we pushed ourselves to reflect on whether any discomfort was stemming from our personal opinions or preferences vs. potential for real-world harm. Without clear guidelines, the model previously refused requests like "make this person's eyes look more Asian" or "make this person heavier," unintentionally implying these attributes were inherently offensive.

- Hate symbols: We recognize symbols like swastikas carry deep and painful history. At the same time, we understand they can also appear in genuinely educational or cultural contexts. Completely banning them could erase meaningful conversations and intellectual exploration. Instead, we're iterating on technical methods to better identify and refuse harmful misuse.

- Minors: Whenever a policy decision involved younger users, we decided to play it safe: choosing stronger protections and tighter guardrails for people under 18 across research and product.

Ultimately, these considerations — coupled with our progress toward more precise technical levers — led us toward more permissive policies. We recognize this might be misinterpreted as "OpenAI lowering its safety standards," but personally, I don't think that does justice to the team's extensive research, thoughtful debates, and genuine love & care for users and society.

My colleague Jason Kwon once passed onto me: "Ships are safest in the harbor; the safest model is the one that refuses everything. But that's not what ships or models are for." The future is built with imagination and adventure.

As we continue our research and learn from society, we believe we can continue to find ways to responsibly increase user freedom. When (not if!) our policies evolve, updating them based on real-world feedback isn't failure; that's the point of iterative deployment. Please keep sharing your feedback and creations — they genuinely help us improve!
Robert Matheson @Robert_Matheson
@StudioYorktown To be clear, I'm not trying to dismiss your perspective. Just stepping out of the human bias and looking at what's happening from an evolutionary standpoint. I see no reason why AI cannot evoke the same or even more powerful emotion than humans alone.
Bruce | Studio Yorktown @StudioYorktown
@Robert_Matheson While I agree that we too are bio-machines in a way, I would really need to see more conclusive evidence that the totality of creativity or the human experience is reducible to a computational process. Until such time, I prefer to hold to my current viewpoint!
Bruce | Studio Yorktown @StudioYorktown
I'm writing a longer essay about this because I think it's a very interesting topic, but an AI cannot 'steal' a style because style is much more than a final recognizable aesthetic. An AI does not arrive at a style as a result of painstaking decision-making, deliberation, cultural and contextual sensitivity, and layers and layers of revisions. It merely looks for the similarities between the source material it has been provided (pattern recognition) in order to break it down into a mathematical formula (of sorts) which then can be used to try and fulfill the prompts it is given.

So if someone were to say, 'Look how easily I can make a Ghibli movie now', as if having the technology to do so with a fraction of the effort would automatically put it on par with a true Ghibli movie, that will never happen. The true essence of a style is only available to its originator, and anything that follows from it may be in a similar category, but never actually the same. Steve Jobs' design challenge was to create a new type of device in the iPhone. Subsequent attempts by other manufacturers were to copy the iPhone, thus the goals were never the same.

I'm blown away by the Ghibli filter and love the reinterpretations into a different and recognizable style, but I'm also aware that it only emulates Ghibli on an aesthetic level. It can't emulate the experiences and ways of seeing the world that have given rise to the creative choices that form that aesthetic, and I think that is the key difference that has to be noted.

I think it's great because many people may become aware and curious to watch some of Miyazaki's incredible films. At the same time, I don't think Studio Ghibli is under any kind of threat, because it may look like a Ghibli, it may even sound like a Ghibli, but AI representations will never have the heart of Ghibli.

But it leaves me with one thought and question: it seems more and more people associate 'style' with the visual and less with the process that facilitates that end result. Are we in danger of losing sensitivity to the difference between things by becoming desensitized to their essence rather than their facade?
Robert Matheson @Robert_Matheson
@StudioYorktown I understand your perspective, but humans are not independent from the computational process of creation as a whole. We are a small subset or subroutine of the overall computational process of the universe. AI is an extension, consolidating our collective human experience.