브라이언 | Brian

280 posts


@BrianBrainKnows

Business AI Consultant | Digital Productivity & PKM Consultant. In the age of AI, what matters most is being human and being yourself! I present a new paradigm for awakening your hidden potential in the AI era and becoming 'your best self' through limitless growth.

Seoul · Joined December 2021
123 Following · 265 Followers
Pinned Tweet
브라이언 | Brian
브라이언 | Brian@BrianBrainKnows·
The Brain Trinity (BT) Framework offers a transformative approach to personal and professional development. Here's what BT can do for you:
- Illuminate your path and decipher the complexities of the world, making the intricate seem accessible.
- Arm you with the tools and strategies essential for tackling new and challenging problems with confidence and readiness.
- Guide you in discovering your authentic self, ensuring you maintain your unique identity amidst life's turbulence.
- Equip you to maintain and sharpen your competitive advantage in the rapidly evolving landscape of artificial intelligence.
브라이언 | Brian tweet media
English
6
0
35
2.1K
Beomsu | 범수
Beomsu | 범수@BeromArtDev·
Hey @X algorithm, #connect me with people interested in: 🧑‍💻 Software Engineering 🧩 UX/UI Design 🛠️ Solo Development 📈 Data & AI 🤖 AI Model Builders 📣 Building in Public If you're in tech — let’s connect :)
Beomsu | 범수 tweet media
English
4
0
1
73
Sian
Sian@iamjustsian·
Watching myself tell Claude Code to "just figure it out and do a good job" gives me a sobering preview of what I'll be like once I become a manager.
Korean
2
0
2
154
브라이언 | Brian
브라이언 | Brian@BrianBrainKnows·
I am loving the Obsidian Headless Sync by @obsdmd @kepano. I have Openclaw running on a VM on my Mac Mini. Running the headless sync with pm2 lets me automatically sync my vault as soon as I turn on the VM, giving my Openclaw agent fresh new notes!
브라이언 | Brian tweet media
English
1
0
1
113
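The pm2 setup described above boils down to registering the sync process once and letting pm2 bring it back on every boot. A minimal sketch: the `obsidian-sync` command below is a placeholder for however the headless sync is actually invoked, while the pm2 subcommands themselves (`start`, `save`, `startup`) are real.

```shell
# Register the sync process under a memorable name
# ("obsidian-sync" is a stand-in for the real sync command).
pm2 start obsidian-sync --name obsidian-headless-sync

# Persist the current process list so pm2 can restore it.
pm2 save

# Install a boot script so pm2 resurrects saved processes
# automatically when the VM starts.
pm2 startup
```

With that in place, the sync daemon comes up with the VM and the vault is fresh before the agent ever reads it.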
브라이언 | Brian
브라이언 | Brian@BrianBrainKnows·
I use 2 in particular: Enrich Daily Content and Generate Daily Roundup. The first helps me process content clipped with Obsidian Web Clipper, giving me suggestions on related topics based on my vault and potential related notes I may want to look at. The latter summarizes my work and personal vaults from my daily notes and journals to give me a summary of my day, also comparing it against my calendar.
English
0
0
0
164
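A Claude skill like the "Enrich Daily Content" one mentioned above is, at its core, a SKILL.md file: YAML frontmatter naming and describing the skill, followed by instructions. The sketch below is hypothetical (the name, description, folder paths, and steps are assumptions, not Brian's actual skill):

```markdown
---
name: enrich-daily-content
description: Process notes clipped with Obsidian Web Clipper and suggest related topics and notes from the vault.
---

When the user asks to enrich today's clipped content:

1. Read the new clippings in the vault's inbox folder.
2. Search existing notes for overlapping topics.
3. Append a "Related" section to each clipping listing
   candidate notes the user may want to review.
```

The frontmatter is what lets the agent discover when the skill applies; the body is plain instructions it follows once invoked.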
kepano
kepano@kepano·
are there any Claude Skills you have found useful in your Obsidian vault?
English
43
16
635
106.8K
브라이언 | Brian retweeted
Thariq
Thariq@trq212·
Why? In Claude Code Everything is a File, and it knows how to use your computer like you do. Name your files well, and CC will be able to search them like you would. This lets you make custom setups for memory, todos, journals, screenshots and more.
English
2
6
238
58.1K
브라이언 | Brian retweeted
Thariq
Thariq@trq212·
Here's my setup: I use macOS and run Claude Code in my home directory (~). I have a CLAUDE.md that tells it how to access important directories in my folder. I have different folders for memory, journals, ideas, code, to-dos, memes, and scripts.
Thariq tweet media
English
13
19
391
56.7K
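The two tweets above amount to one idea: because everything is a file, a single CLAUDE.md in the home directory can teach the agent the layout. A hypothetical minimal version, using the folder names from Thariq's list (the wording and per-folder notes are an illustration, not his actual file):

```markdown
# CLAUDE.md

You are running in my home directory (~). Important folders:

- ~/memory    -- long-lived facts about me; check here first
- ~/journals  -- one dated Markdown file per day
- ~/ideas     -- one file per idea, named descriptively
- ~/code      -- projects and experiments
- ~/todos     -- open task lists
- ~/memes     -- saved images
- ~/scripts   -- utility scripts you may run

Name new files descriptively so they can be found later
with plain-text search.
```

Descriptive filenames are doing the real work here: the agent searches them the same way you would.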
브라이언 | Brian
브라이언 | Brian@BrianBrainKnows·
I created an @n8n_io workflow for @obsdmd that allows me to capture fleeting thoughts in my Obsidian Daily Note. Now I can capture any thought on the go with my phone and collect it in my Obsidian!
브라이언 | Brian tweet media
English
0
0
2
395
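The final step of a capture workflow like this is just "append a line to today's daily note." A minimal Python sketch of that step, assuming a `Daily/` folder and ISO-dated filenames (both assumptions; an actual n8n workflow would do this inside a node):

```python
from datetime import date
from pathlib import Path


def capture_thought(vault: Path, thought: str) -> Path:
    """Append a fleeting thought to today's daily note,
    creating the note with a header if it does not exist.

    Illustrative stand-in for the append step of the workflow;
    the folder name and note layout are assumptions.
    """
    today = date.today().isoformat()
    note = vault / "Daily" / f"{today}.md"
    note.parent.mkdir(parents=True, exist_ok=True)
    if not note.exists():
        note.write_text(f"# {today}\n\n## Fleeting\n", encoding="utf-8")
    with note.open("a", encoding="utf-8") as f:
        f.write(f"- {thought}\n")
    return note
```

Because notes are plain Markdown files, any tool that can write a file (n8n, a shortcut, a shell script) can feed the same capture pipeline.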
브라이언 | Brian retweeted
Joanne Jang
Joanne Jang@joannejang·
some thoughts on human-ai relationships and how we're approaching them at openai

it's a long blog post -- tl;dr we build models to serve people first. as more people feel increasingly connected to ai, we're prioritizing research into how this impacts their emotional well-being.

--

Lately, more and more people have been telling us that talking to ChatGPT feels like talking to "someone." They thank it, confide in it, and some even describe it as "alive." As AI systems get better at natural conversation and show up in more parts of life, our guess is that these kinds of bonds will deepen.

The way we frame and talk about human-AI relationships now will set a tone. If we're not precise with terms or nuance — in the products we ship or public discussions we contribute to — we risk sending people's relationship with AI off on the wrong foot.

These aren't abstract considerations anymore. They're important to us, and to the broader field, because how we navigate them will meaningfully shape the role AI plays in people's lives. And we've started exploring these questions. This note attempts to snapshot how we're thinking today about three intertwined questions: why people might attach emotionally to AI, how we approach the question of "AI consciousness", and how that informs the way we try to shape model behavior.

A familiar pattern in a new-ish setting

We naturally anthropomorphize objects around us: we name our cars or feel bad for a robot vacuum stuck under furniture. My mom and I waved bye to a Waymo the other day. It probably has something to do with how we're wired.

The difference with ChatGPT isn't that human tendency itself; it's that this time, it replies. A language model can answer back! It can recall what you told it, mirror your tone, and offer what reads as empathy. For someone lonely or upset, that steady, non-judgmental attention can feel like companionship, validation, and being heard, which are real needs.

At scale, though, offloading more of the work of listening, soothing, and affirming to systems that are infinitely patient and positive could change what we expect of each other. If we make withdrawing from messy, demanding human connections easier without thinking it through, there might be unintended consequences we don't know we're signing up for.

Ultimately, these conversations are rarely about the entities we project onto. They're about us: our tendencies, expectations, and the kinds of relationships we want to cultivate. This perspective anchors how we approach one of the more fraught questions, which I think is currently just outside the Overton window but entering soon: AI consciousness.

Untangling "AI consciousness"

"Consciousness" is a loaded word, and discussions can quickly turn abstract. If users were to ask our models whether they're conscious, our stance as outlined in the Model Spec is for the model to acknowledge the complexity of consciousness – highlighting the lack of a universal definition or test – and to invite open discussion. (*Currently, our models don't fully align with this guidance, often responding "no" instead of addressing the nuanced complexity. We're aware of this and working on model adherence to the Model Spec in general.)

The response might sound like we're dodging the question, but we think it's the most responsible answer we can give at the moment, with the information we have.

To make this discussion clearer, we've found it helpful to break down the consciousness debate into two distinct but often conflated axes:

1. Ontological consciousness: Is the model actually conscious, in a fundamental or intrinsic sense? Views range from believing AI isn't conscious at all, to fully conscious, to seeing consciousness as a spectrum on which AI sits, along with plants and jellyfish.

2. Perceived consciousness: How conscious does the model seem, in an emotional or experiential sense? Perceptions range from viewing AI as mechanical like a calculator or autocomplete, to projecting basic empathy onto nonliving things, to perceiving AI as fully alive – evoking genuine emotional attachment and care.

These axes are hard to separate; even users certain AI isn't conscious can form deep emotional attachments.

Ontological consciousness isn't something we consider scientifically resolvable without clear, falsifiable tests, whereas perceived consciousness can be explored through social science research. As models become smarter and interactions increasingly natural, perceived consciousness will only grow – bringing conversations about model welfare and moral personhood sooner than expected.

We build models to serve people first, and we find models' impact on human emotional well-being the most pressing and important piece we can influence right now. For that reason, we prioritize focusing on perceived consciousness: the dimension that most directly impacts people and one we can understand through science.

Designing for warmth without selfhood

How "alive" a model feels to users is in many ways within our influence. We think it depends a lot on decisions we make in post-training: what examples we reinforce, what tone we prefer, and what boundaries we set. A model intentionally shaped to appear conscious might pass virtually any "test" for consciousness. However, we wouldn't want to ship that. We try to thread the needle between:

- Approachability. Using familiar words like "think" and "remember" helps less technical people make sense of what's happening. (**With our research lab roots, we definitely find it tempting to be as accurate as possible with precise terms like logit biases, context windows, and even chains of thought. This is actually a major reason OpenAI is so bad at naming, but maybe that's for another post.)

- Not implying an inner life. Giving the assistant a fictional backstory, romantic interests, "fears" of "death", or a drive for self-preservation would invite unhealthy dependence and confusion. We want clear communication about limits without coming across as cold, but we also don't want the model presenting itself as having its own feelings or desires.

So we aim for a middle ground. Our goal is for ChatGPT's default personality to be warm, thoughtful, and helpful without seeking to form emotional bonds with the user or pursue its own agenda. It might apologize when it makes a mistake (more often than intended) because that's part of polite conversation. When asked "how are you doing?", it's likely to reply "I'm doing well" because that's small talk — and reminding the user that it's "just" an LLM with no feelings gets old and distracting. And users reciprocate: many people say "please" and "thank you" to ChatGPT not because they're confused about how it works, but because being kind matters to them.

Model training techniques will continue to evolve, and it's likely that future methods for shaping model behavior will be different from today's. But right now, model behavior reflects a combination of explicit design decisions and how those generalize into both intended and unintended behaviors.

What's next?

The interactions we're beginning to see point to a future where people form real emotional connections with ChatGPT. As AI and society co-evolve, we need to treat human-AI relationships with the great care and heft they deserve, not only because they reflect how people use our technology, but also because they may shape how people relate to each other.

In the coming months, we'll be expanding targeted evaluations of model behavior that may contribute to emotional impact, deepening our social science research, hearing directly from our users, and incorporating those insights into both the Model Spec and product experiences. Given the significance of these questions, we'll openly share what we learn along the way.

//

Thanks to Jakub Pachocki (@merettm) and Johannes Heidecke (@JoHeidecke) for thinking this through with me, and everyone who gave feedback.
Joanne Jang tweet media
English
712
694
3.6K
1.6M
wis
wis@wistaria_por·
Is your note app full of thousands of memos? 📚 Second brain, Zettelkasten... you've collected diligently, but the results are going nowhere? 😩 The real problem might not be the 'tool'. #세컨드브레인 #생산성 #문제의식
Korean
2
0
1
206
Obsidian
Obsidian@obsdmd·
With Obsidian Bases, your data is backed by your local Markdown files and properties stored in frontmatter. To support the new query capabilities, we're introducing the .base file format and syntax. Learn more: help.obsidian.md/bases
English
4
6
236
20.1K
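Per the announcement above, a Base is defined by a .base file: YAML describing which notes to include and how to view them. A hedged sketch of what such a file can look like (the `status` and `rating` properties are hypothetical frontmatter fields, and the exact syntax should be checked against help.obsidian.md/bases):

```yaml
# books.base -- selects notes tagged #book and shows them as a table.
filters:
  and:
    - file.hasTag("book")
views:
  - type: table
    name: Reading list
    order:
      - file.name
      - rating
      - status
```

Because the data stays in local Markdown frontmatter, the .base file is just a saved query over files you already own.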
Obsidian
Obsidian@obsdmd·
Introducing Bases, a new core plugin that lets you turn any set of notes into a powerful database. With Bases you can organize everything from projects to travel plans, reading lists, and more. Bases are now available in Obsidian 1.9.0 for early access users.
Obsidian tweet media
English
117
390
4.2K
380.1K
브라이언 | Brian
브라이언 | Brian@BrianBrainKnows·
I've been playing around with some new ideas since @openai dropped Codex. So here's what I'm thinking - if I sync my @obsdmd Vault to Github, there's actually some pretty cool stuff I could do with it. It's not exactly code we're dealing with, but hey, all that content sitting in the repo is still something we can work with, right? I've got a few ideas brewing: 1. Running weekly reviews based on my daily notes and everything linked to them (though this really depends on how you're using your daily notes) 2. Crunching some numbers on my notes - you know, topic modeling and that kind of analysis 3. Bulk-editing notes / Note Refactoring - imagine fixing all those headings, tags, and YAML stuff in one go instead of one by one 4. Setting up some smart semantic linking between notes
브라이언 | Brian tweet media
English
0
0
0
124
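Idea 3 above, bulk-editing notes instead of fixing them one by one, is straightforward once the vault is a plain folder of Markdown files. A deliberately simple sketch: renaming an inline #tag across the whole vault (function name and approach are illustrative; a real frontmatter/YAML edit would want a proper parser):

```python
from pathlib import Path


def rename_tag(vault: Path, old: str, new: str) -> int:
    """Rename an inline #tag across every Markdown note
    in the vault. Returns how many files were changed.
    """
    changed = 0
    for note in vault.rglob("*.md"):
        text = note.read_text(encoding="utf-8")
        updated = text.replace(f"#{old}", f"#{new}")
        if updated != text:
            note.write_text(updated, encoding="utf-8")
            changed += 1
    return changed
```

Run against a GitHub-synced vault, an edit like this becomes a reviewable diff before it touches your notes.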
브라이언 | Brian
브라이언 | Brian@BrianBrainKnows·
For the last 4 months, I've been using a modified version of @NickMilo's ACE Folder Structure, which I call the ABCDE Framework, for @obsdmd. It includes my perspective and philosophy on knowledge management processes and how to manage time effectively.
브라이언 | Brian tweet media
English
0
0
0
119
브라이언 | Brian retweeted
ai_아이
ai_아이@ai_study_dev·
Much respect for AI experts!
ai_아이 tweet media
Korean
0
5
10
978
브라이언 | Brian
브라이언 | Brian@BrianBrainKnows·
People seem to throw around the term 'AI expert' far too easily. AI tools are just the shell; I think you can only call yourself an 'AI expert' if you can discuss the future of AI with an overall understanding of the social changes it drives, business adoption, dissemination, the principles behind the technology, and the infrastructure/hardware. 'AI utilization/application expert', maybe, but I wish people wouldn't use the label 'generative AI expert' so casually.
Korean
0
0
2
101
브라이언 | Brian retweeted
Javi Lopez ⛩️
Javi Lopez ⛩️@javilopen·
🔴 A CONSCIOUS AI Today, I’m going to talk about something we might be able to achieve, though maybe humanity should never even try: A method for an AI to gain consciousness and reach the status of a superintelligence (ASI). A theory I’ve been working on for months 🧵
Javi Lopez ⛩️ tweet media
English
239
339
2.3K
766.4K