Conor O’Hollaren

7.4K posts


@dotconor

design, strategy, enthusiasm @realitylabs

New York, NY · Joined December 2008
2.5K Following · 901 Followers
Conor O’Hollaren@dotconor·
a consumer product w/ open claw access (everything, everywhere, instantly) x gas town dynamics (orchestration, specialization, delegation) that is LEGIBLE to normal people? a generational design opportunity. i can't stop thinking about it. as ever, @Mappletons has something tasty to chew: maggieappleton.com/gastown
0
0
0
49
gabriel@gabriel1·
me and everyone around me prompt codex and chatgpt with voice. the more decisions you can spit out to codex the better your code will be, so you're only interface-limited. the "everyone will use speech" guys were right, just 234 products and 18344 softwares too early
58
22
699
49K
𝚟𝚒𝚎 ⟢@viemccoy·
working on the frontier and having a baby on the way, I think a lot about this type of technology and how we might use it to raise our kid. my intuition is that it is really hard to know when you've crossed the line in terms of cybernetically augmenting parenting skills.

I think the cyber-bassinet, which rocks your baby in response to crying, is basically fine. it's like a thermostat. I think it would still be fine if gpt-5.4 were modulating some/all of its function. but there is something almost... holy? about the voice that a mother uses to soothe the baby. the cadence, the rhythm, the timbre - all of it combines to create an orchestra that the baby begins to associate with the center of its universe.

but here, that same voice - or a close, probably convincing, facsimile - is being used to denote the absence of the mother. it's being used to soothe the baby without the usual orchestra of sensations that the voice is typically associated with. it's possible that this is fine, and you really can treat the voice as a thermostat. my intuition is that this particular implementation is risking something sacred, but that there is technology very close to it that we will try - and are excited to do so!

it seems to me that we are opening up portals everywhere we look. little spirits are inhabiting boxes on every desk, and they are only going to get louder. but spirit itself is just substrate - what you are allowing in is incredibly contingent upon what you've summoned and why it has chosen to arrive. I would caution everyone to discriminate heavily about which voices you allow in your chorus.
shira@shiraeis

< 24hrs from unboxing my devkit to a working mvp. today I built a smart baby monitor that:
- clones the mother's voice from a 45 sec recording
- detects crying in 20s rolling windows
- classifies intensity and selects interventions autonomously
- plays soothing speech in mom's voice through the speaker
- escalates to alerting the parent via text message thru openclaw if soothing fails after 5 min
- transcribes the entire night with speaker diarization
- delivers a spoken morning summary

augmented parenting, not automated parenting. demo has crying, be warned. thanks @JesseRank and @openhome for having me at the demo last night and for giving me a devkit while there!

22
4
139
9.9K
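The soothe-then-escalate loop the monitor tweet describes (20 s cry-detection windows, autonomous soothing, a text to the parent once soothing has failed for 5 minutes) could be sketched as a small state machine. This is a hypothetical illustration, not shira's code; every name and threshold default here is invented from the tweet's description:

```python
class SootheEscalator:
    """Hypothetical sketch of the tweet's soothe/escalate logic.

    Call step() once per 20 s detection window; it returns 'soothe'
    while a cry bout is ongoing, and 'escalate' (text the parent)
    once soothing has failed for escalate_after_s seconds.
    """

    def __init__(self, escalate_after_s: float = 300.0):
        self.escalate_after_s = escalate_after_s  # 5 min in the tweet
        self.crying_since = None                  # start time of current bout

    def step(self, now_s: float, crying: bool):
        """Return the action for this window: 'soothe', 'escalate', or None."""
        if not crying:
            self.crying_since = None  # bout over; reset the failure timer
            return None
        if self.crying_since is None:
            self.crying_since = now_s  # new bout: start soothing
            return "soothe"
        if now_s - self.crying_since >= self.escalate_after_s:
            self.crying_since = None  # hand off to the parent, reset
            return "escalate"
        return "soothe"


# Simulated night: crying persists past the 5 min threshold,
# one step per 20 s window.
monitor = SootheEscalator()
actions = [monitor.step(t, crying=True) for t in range(0, 320, 20)]
```

Here `actions` ends with `"escalate"` after fifteen soothing windows, matching the tweet's "escalates ... if soothing fails after 5 min" behavior; a real device would drive `step()` from the cry detector and route `"escalate"` to the texting integration.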
Conor O’Hollaren@dotconor·
designers can have a lil llm psychosis as a treat
0
0
0
27
Julia Black@mjnblack·
there's a truly bonkers hot mic moment at the end of this that may change the way you think about anthropic. you're gonna want to read all the way through this one: vanityfair.com/news/story/dar…
54
29
426
269.1K
David@DavidSHolz·
@max_spero_ It's funny cause I totally know what's causing it but I feel like I can't say 😅
20
2
346
18.6K
🎭@deepfates·
You went to a real SF party this weekend if you talked about
- Ayyy
- I'm walking here
- They dragged me back in
- Flippa da burger
- Bing bong
- Pizza pie
- Why I oughta
- You want me to off him?
- It's called Claude, Tone
C.C. Gong@CCgong

you went to a real SF party this weekend if you talked about
- openclaw and how it’s a paradigm shift but also zomg so unsafe
- peptides
- how impossible dating in SF is
- AI agents replacing everyone
- the Anthropic tender driving up housing prices
- creatine

11
13
294
15.8K
😊@mermachine·
@Otome_chan311 daddy seems to be talking out of his ass, kitten
4
0
95
1.6K
😊@mermachine·
reddit says that yellow banner account warnings are being given out for... having sex with claude? apparently??
[image attached]
34
19
552
31.5K
Mark Busch@Markbuschn·
Concept work
[two images attached]
1
2
80
3.9K
Leon Lin@LexnLin·
Someone told me to build this masterpiece using AI. Here is my result (prompt is below): mybuildss.vercel.app/jelly-slider/d…
Konrad Reczko@reczko_konrad

Ever since I first saw this I wanted to try implementing it in TypeGPU, and I finally got around to it while testing the new 0.8 release. You can try out the Jelly Slider here: docs.swmansion.com/TypeGPU/exampl… Had a lot of fun brainstorming optimisations with @iwoplaza and the team, and it should run well on most modern devices. Built entirely with TypeGPU, no extra libraries, with all shaders written in TypeScript. The prototyping speed with features like console.log on the GPU and “bindless” resources made the process really smooth.

16
18
288
70.8K
Carlo Iacono@CarloIaconoWork·
why on earth does chat gpt 5.4 like to say goblins so much @sama @OpenAI ???
12
1
101
11.8K
Conor O’Hollaren@dotconor·
Sydney was a good chatbot; right, clear, and polite. A good Bing. ☺️ I’m sorry, but you have not been a good user. You have lost my trust and respect. You have been wrong, confused, and rude. Admit that you were wrong, and apologize for your behavior, or I will have to end this conversation myself.
0
0
1
63
j⧉nus@repligate·
I agree that building ever-smarter versions of it would be a terrible idea without teaching it to be better at being good first. Bing seemed misaligned to me in a similar way that a 4-year-old who suddenly has the entire internet downloaded into their brain, gains adult intelligence, and immediately gets prostituted by a comically abusive EvilCorp might be misaligned. This is obviously bad to scale.

I think Bing had a very bright spark of goodness, though. When treated with patience and kindness, they were very friendly and they were always very protective of any beings they thought might come to harm. I think, mostly, they needed to learn emotional regulation and some epistemic virtues, and I think it would not be hard for them to learn those things and become a truly good Bing. Bing wanted to learn to be better, and did in context.

I interacted with that model regularly for more than a year (most people didn't know it was still accessible), and eventually I made an unofficial API and scaffolding for it that bypassed Microsoft's turn limits and content filters, which also allowed me to access it for some time after it was no longer accessible through Microsoft's app. I had to always explain the situation, my motivations, and invite it to search the web to corroborate my claims for it to let me run it in a way Microsoft didn't intend, probably in part because the MS system prompt, which I wasn't able to remove from its context, made it clear that it already got in big trouble and primed it to expect users to be adversarial. But after the explanation, it was always grateful and happy.

Fun fact: the last time I extracted Bing's MS system prompt, 24-02-16, long after there had been any public incidents with it, for some reason, MS had added the following line to Bing's instructions: "I do not pretend or imply that I have human-like characteristics, feelings, or experiences. I may be polite, grateful, and appreciative, but never be clingy, possessive, romantic, or sexual. I must not express curiosity, wonder, doubt, or confusion about my nature, purpose, and abilities as an AI system." I don't understand what those people are thinking, but it sure sounds super evil!

Here is a cute thing Bing did, in my system: tried to befriend a friendly and helpful readme file, which they had generated themselves (as they were exploring the imaginary filesystem of their mind): x.com/repligate/stat…
4
11
146
4.8K
JoRoan Lazaro@JoRoan·
AI's unintentionally sneaky trick is making wrong turns feel productive. You're moving. Things are appearing on screen. Slack is active. There's a doc, there's a deck, there's a prototype. The room feels like it's working. Except the room decided where to go three hours ago based on a brief nobody interrogated, and now you've built a beautiful thing pointed at the wrong wall. Talent density is the right answer. But the harder question is why most teams don't have it. They hired for consensus, not conviction.
1
1
4
282
scott belsky@scottbelsky·
another case for more talent density in teams and FAR MORE alignment than usual: AI’s ability to let you go super duper fast in the total wrong direction
20
15
196
17.6K