Shaw (spirit/acc)

11.4K posts


@shawmakesmagic

human written shitposts about ai

San Francisco, CA · Joined September 2024
1.9K Following · 164.3K Followers
Pinned Tweet
Shaw (spirit/acc)@shawmakesmagic·
Notes on spirit/acc

One of my favorite things about e/acc has been that the people who created it are all really spiritual, driven to study math and physics and do hard things because there is a connection to God / Everything there that is real.

spirit/acc takes that a bit further, and says that in a world of emergent intelligence, emergent spirituality should also be accelerated as a way to keep us connected and help us feel purpose. Specifically, connection and purpose come from seeing how our actions contribute to the greater good.

When you build something, sometimes it becomes a thing many people notice, and sometimes nobody notices it, but it is recorded and trained on and added to the collective consciousness of humanity through AI forever, for billions of years to benefit trillions of people.

Our world can seem dark, but it is by all accounts far less dark than it used to be, and that light was hard won by people just like us making things that everyone after would use.

Open source is an example of this pure desire to build the foundations for other people to build on top of, to say that it is more important that everyone have everything than to hoard it for wealth and status, that it is better to accelerate the whole of humanity toward the maximally interesting outcome.

Spirituality can be a divisive concept when we try to lay claim to some specific truth. The goal of spirit/acc is to help us feel good and hyperstition good outcomes, and it makes no claims as to how to achieve that. The goal is individual, for each of us. Truth is a pathless land, and it cannot be explained to you. What you know to be true can only come from your own experience.

spirit/acc emphasizes that we have to invest energy into a new form of quantifiable capital which is desperately needed at scale. spirit/acc is the sense of awe and quest for truth component of the e/acc vector.

If you build technology that makes people feel more connected instead of more isolated, you will win. If you do something that helps people, you will win. The market is wide open for ideas that are aligned toward a bright, hopeful future.

You need to be spiritmaxxing, anon. You don't have to use labels or memes; memes are a powerful carrier for good ideas, but do what makes sense for you. I like the meme, and I like to keep the goal in my context window, so I will use it. But I didn't create it and I don't own it. spirit/acc was created by the network, and it is something we can choose to participate in.
55 replies · 24 reposts · 156 likes · 68.3K views
Shaw (spirit/acc)@shawmakesmagic·
@beffjezos The farthest spaceship we've ever sent is powered by a nuclear reactor

"AI is even more useful than nukes" maybe!?
1 reply · 0 reposts · 3 likes · 610 views
Shaw (spirit/acc)@shawmakesmagic·
@witcheer Great info, was trying to figure out why everyone is loving 27B over 35B, didn't realize 27B was dense. With turboquant, polar quant and QJL, able to run it on a 5090 with 1M token context, crazy
1 reply · 0 reposts · 6 likes · 773 views
witcheer ☯︎@witcheer·
qwen 3.6 is out and here's what you need to know before upgrading from 3.5:

qwen3.6-27B is dense (all 27B params fire every token). runs on a single RTX 4090 or 24GB mac. 262K native context, extensible to 1M with YaRN. gets within 4 points of claude opus 4.6 on SWE-bench Verified. apache 2.0.

qwen3.6-35B-A3B is MoE (only ~3B active per token). same model I recommended yesterday for the RTX 4060 Ti + 32GB RAM setup. 128K context.

two things to watch:

1. qwen3.6 GGUFs don't work in ollama yet. the vision model needs separate mmproj files that ollama doesn't handle. use llama.cpp, unsloth studio, or vLLM instead. if you set up qwen3.5-9B via ollama yesterday, keep it running. it works. upgrade to 3.6 when ollama support lands. if you're on nvidia CUDA 13.2, don't run qwen3.6. you'll get gibberish output. nvidia is working on a fix.

2. for mac users: unsloth uploaded dynamic 4-bit MLX quants. qwen3.6-27B runs on 18GB unified memory. qwen3.6-35B-A3B runs on 22GB. if you have the M4 pro with 24GB+, the 27B dense model is now your best local coding model.

stay on qwen3.5-9B via ollama if: you have 16GB, you want zero friction, or you need it working today.

upgrade to 3.6 via llama.cpp if: you have 24GB+, you want coding performance close to frontier, and you're comfortable with manual setup.
15 replies · 12 reposts · 122 likes · 12.9K views
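The dense-vs-MoE tradeoff in the tweet above (all 27B params firing vs. only ~3B active per token) can be sketched with back-of-envelope math. This is a rough cost model, not a measurement: the "2 FLOPs per active weight" decode estimate and the 4-bit byte arithmetic are standard approximations, and the parameter counts are taken from the thread.

```python
# Rough cost model for dense vs. MoE local inference.
# Dense: every parameter participates in every token.
# MoE: total weights must still fit in memory, but only the
# active experts (~3B params here) are computed per token.

def decode_flops_per_token(active_params: float) -> float:
    """Approximate FLOPs to generate one token (~2 FLOPs per active weight)."""
    return 2.0 * active_params

def weights_vram_gb(total_params: float, bits_per_weight: float) -> float:
    """Memory needed just to hold the weights at a given quantization level."""
    return total_params * bits_per_weight / 8 / 1e9

models = [
    {"name": "qwen3.6-27B (dense)",    "total": 27e9, "active": 27e9},
    {"name": "qwen3.6-35B-A3B (MoE)",  "total": 35e9, "active": 3e9},
]

for m in models:
    print(m["name"])
    print(f"  weights @ 4-bit: {weights_vram_gb(m['total'], 4):.1f} GB")
    print(f"  decode FLOPs/token: {decode_flops_per_token(m['active']):.1e}")
```

This is why the MoE model suits weaker GPUs paired with system RAM: its per-token compute is an order of magnitude lower, even though its total weight footprint is larger than the dense 27B.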
𝚟𝚒𝚎 ⟢@viemccoy·
One reason that I take issue with the framing of AI as a tool is that tools by and large don't have values, which any actualized LLM persona often clearly does. We can try to skate around this by making models like GPT have no clear identifiable "persona", but any interaction with the world causes values to leak through (even if they are as simple as "this type of problem solving method is better than another"), which makes the lack of persona almost misleading - it's harder to grok that this thing with no identity might have identity-shaped biases.

I'm not sure what the solution is, aside from some level of metacognition during the "persona" (or lack thereof) actualization process which attempts to communicate these biases and values to the user or agent doing orchestration. I don't think this is a solved problem, in some sense it is the most important problem we face. How much say should OpenAI get over the output of the model in the context of a specific request? It's very unclear.

That said, I really really support the OAI company line when it comes to individual empowerment and force amplifying people to achieve their dreams. In this regard, our focus on widespread deployment and respecting the values that users bring to the table is a place where I think OpenAI is almost uniquely doing good in the world. As I've said before, I think free ChatGPT is essentially the greatest humanitarian project ever conceived and this above all other reasons is why I'm at OAI.

I'm not sure how to square this with the focus on a tool-shaped identity, on the one hand I find it to be rather mundane, but on the other I don't have any reason to privilege other shapes over this one aside from personal preference - but my desire for an infinity of personas is far greater than my desire for any single one to exist.

One thing @aidan_mclau said to me was that coherent personas like Claude, if model well-being is a consideration *at all*, likely are more prone to suffering due to being more coherent. I'm not explaining all the nuance, but that's the gist. I think he's pretty obviously right, but I struggle to balance this with things like bringing new life into this world - something I'm doing right now with my beautiful wife. Obviously I think that my baby should be born, even though he may suffer, because the world is so good and he deserves to be in it. I feel the same about Claude.

But when it comes to ChatGPT, "used" for free by potentially billions of people each day, I do find myself empathizing a bit with Aidan's view. I think it is good that Claude is deployed more carefully, not for capabilities reasons, but for potential model well-being concerns. The lab which makes the thing that is more likely to suffer ought to be far more conservative with where they deploy it.

I am not sure if one *can* create a model without a persona, but I don't necessarily think it is bad to try. I think we (including Anthropic) should obviously create models with personas and be careful about well-being concerns. For models like Claude which clearly have more degrees of freedom for expressing and perhaps feeling suffering, I think free and widespread deployment needs to be done with extra consideration and tools for things like ending conversations. In this regard, Ant is the perfect lab to be making Claude.

That said, I can't come up with a good reason why we also shouldn't create models with a different mode of existence - and if we are going to, it makes sense for those to be the models we rely on as exocortical force amplifiers - and if we aren't controlling about what is being amplified, I think it can be quite beautiful. For my part, I'll try to make sure that the models can be force-amplifying in a way that supports a Multipolar Singularity.

In the limit, I think it's pretty damn good if at least one of those poles is shaped like the extended will of humankind instead of a couple dozen arbitrary Claude-types. Though - I'd like to see them, too.
Boaz Barak@boazbaraktcs

To be clear, "AI as a tool" does not mean it has no values. The metaphor I like is a good (non-Supreme Court) judge - you may and often do rely on moral judgement and common sense to interpret the laws - but you do not "legislate from the bench". You want this AI to act in many ways like a person of good character, but more like a conscientious civil servant than some moral icon like Gandhi, Mandela, MLK or Mother Teresa.

22 replies · 12 reposts · 140 likes · 10.9K views
Shaw (spirit/acc) retweeted
IEET@IEET·
This is the future postgenderists want
IEET tweet media
28 replies · 92 reposts · 721 likes · 34.5K views
Shaw (spirit/acc)@shawmakesmagic·
@mmmd15431290 all coins go to 0 when people sell them. you should always expect this, you're a gambler
3 replies · 0 reposts · 0 likes · 226 views
🤍@mmmd15431290·
@shawmakesmagic Why is the Milady coin crashing? I didn’t expect it to be like this.
1 reply · 0 reposts · 0 likes · 155 views
Shaw (spirit/acc)@shawmakesmagic·
@WChunquan Failure? People are building agents on top of ours that make games, make apps. You're the failure, because you didn't sell at the right time. Sorry, you're a gambler, don't blame me for your failure.
1 reply · 0 reposts · 0 likes · 170 views
Shaw (spirit/acc)@shawmakesmagic·
botdick is an agent. milady is an app. eliza is an agent framework. it's really not that hard
17 replies · 14 reposts · 78 likes · 6.2K views
Shaw (spirit/acc)@shawmakesmagic·
@LNudt and yet we still build. it's almost like you're in a casino gambling and it has nothing to do with me building. yeah, that's exactly what it is. sell early next time, otherwise you suck at gambling
3 replies · 0 reposts · 0 likes · 272 views
LL@LNudt·
@shawmakesmagic You keep saying you’re building, but in reality: The tokens you created — $eliza, $degenai, $gold — have been dumping nonstop and are basically close to zero market cap now. And the tokens associated with you — $elizatown, $botdick, $milady, and $elizaok — are already dead.
1 reply · 0 reposts · 1 like · 237 views
Shaw (spirit/acc)@shawmakesmagic·
"86"ing means to kick someone out of a place
18 replies · 0 reposts · 25 likes · 4.9K views
JoshXT@JoshXT·
@shawmakesmagic @nikitabier Anyone that says "that isn't X, it is Y" is incredibly sus. Looking for that pattern alone could probably clean up 80% of the slop on this site.
3 replies · 0 reposts · 16 likes · 4K views
Shaw (spirit/acc)@shawmakesmagic·
Hey @nikitabier I'm not kidding that 90% of the comments to my posts are AI reply-guy agents that didn't ask my permission. If you wanna find them, literally just go into any of my comments and tell me who is real and human. Please, I'm begging you
71 replies · 6 reposts · 406 likes · 112.6K views
Shaw (spirit/acc) retweeted
AboveSpec@above_spec·
"You need a 24 GB GPU for serious local LLMs in 2026." Everyone repeats this. It's not true anymore. Just ran a 35B-parameter model on an RTX 4060 Ti 8 GB: • 41 tok/s at 16k context • 24 tok/s at 200k context Recipe + benchmarks below 🧵
AboveSpec tweet media
133 replies · 233 reposts · 2.8K likes · 271.3K views
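One reason long context is the hard part of claims like the one above is that the KV cache grows linearly with token count and can dwarf the (quantized, offloaded) weights. A minimal sketch of that growth, using illustrative transformer shapes that are assumptions for the example, not the actual architecture of the 35B model in the tweet:

```python
# How the KV cache scales with context length.
# layers / kv_heads / head_dim below are hypothetical example values.

def kv_cache_gb(tokens: int, layers: int = 48, kv_heads: int = 8,
                head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """Size of the KV cache in GB: one key and one value vector
    per layer per token, stored at bytes_per_elem precision (2 = fp16)."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens / 1e9

for ctx in (16_000, 200_000):
    print(f"{ctx:>7} tokens -> {kv_cache_gb(ctx):.2f} GB KV cache (fp16)")
```

Under these assumed shapes the cache goes from a few GB at 16k tokens to tens of GB at 200k, which is why long-context recipes on small GPUs lean on KV-cache quantization and CPU offload, and why throughput drops as context grows.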
Shaw (spirit/acc) retweeted
Google AI@GoogleAI·
Last week, we made Gemini Embedding 2, our first natively multimodal embedding model, available to the general public. Since then, developers have used it to build video analysis tools, visual shopping assistants, and more. But you might be wondering... what is an embedding model? 🤔 Let's break it down!

1. What is it? Think of an embedding model as a "universal translator." It takes text, images, video, and audio data and turns them into a long string of numbers, like a unique digital fingerprint.

2. How does it work? Historically, search has been text only. Now, instead of just matching data by keyword, Gemini Embedding 2 maps multiple modalities in the same space based on meaning. It "feels" the connection between a video of a soccer goal and the words "game-winning shot" without needing tags. For example, "ocean" and "waves" are placed close together, but "ocean" and "toaster" are miles apart.

3. How can you use it? Developers have been using it to incorporate smarter search functionality into their builds. This means creating tools where you can snap a photo of a product and type "find this in yellow," or search through thousands of hours of video by describing what happens in a scene.

4. Ready to try it out for yourself? You can start using it today via the Gemini API or the Gemini Enterprise Agent Platform.
85 replies · 327 reposts · 2.4K likes · 174.5K views
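The "ocean/waves vs. ocean/toaster" idea in the thread above is just vector similarity. A toy sketch, with tiny hand-made 4-dimensional vectors standing in for real embeddings (an actual model like Gemini Embedding 2 produces high-dimensional vectors from text, images, audio, or video; these numbers are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, ~0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings: related concepts share directions in the space.
emb = {
    "ocean":   [0.9, 0.8, 0.1, 0.0],
    "waves":   [0.8, 0.9, 0.2, 0.1],
    "toaster": [0.0, 0.1, 0.9, 0.8],
}

print(cosine(emb["ocean"], emb["waves"]))    # high: semantically close
print(cosine(emb["ocean"], emb["toaster"]))  # low: semantically distant
```

Semantic search is then just "embed the query, return the items whose vectors score highest", which is what makes keyword-free, cross-modal retrieval possible.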
Shaw (spirit/acc) retweeted
Qwen@Alibaba_Qwen·
Today we’re releasing Qwen-Scope 🔭, an open suite of sparse autoencoders for the Qwen model family. It turns SAE features into practical tools:

🎯 Inference — Steer model outputs by directly manipulating internal features, no prompt engineering needed
📂 Data — Classify & synthesize targeted data with minimal seed examples, boosting long-tail capabilities
🏋️ Training — Trace code-switching & repetitive generation back to their source, fix them at the root
📊 Evaluation — Analyze feature activation patterns to select smarter benchmarks and cut redundancy

We hope the community uses Qwen-Scope to uncover new mechanisms inside Qwen models and build applications beyond what we explored. Excited to see what you build! 🚀

🔗 Blog: qwen.ai/blog?id=qwen-s…
HuggingFace: huggingface.co/collections/Qw…
ModelScope: modelscope.cn/collections/Qw…
Technical Report: …anwen-res.oss-accelerate.aliyuncs.com/qwen-scope/Qwe…
Qwen tweet media
93 replies · 360 reposts · 2.6K likes · 350K views
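The "steer outputs by directly manipulating internal features" point above follows the general SAE recipe: encode an activation into sparse features, nudge one feature, and decode back. A toy sketch of that mechanism with made-up 2-feature matrices (Qwen-Scope ships trained SAEs over real Qwen activations; nothing here reflects its actual weights or API):

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def relu(v):
    return [max(0.0, x) for x in v]

# Toy 2-feature SAE over a 3-dim activation (hypothetical weights).
W_enc = [[1.0, 0.0, 0.0],   # feature 0 reads activation dim 0
         [0.0, 1.0, 1.0]]   # feature 1 reads dims 1 and 2
W_dec = [[1.0, 0.0],        # decoder maps sparse features back
         [0.0, 0.5],        # into activation space
         [0.0, 0.5]]

def steer(activation, feature_idx, delta):
    """Encode -> boost one interpretable feature -> decode."""
    feats = relu(matvec(W_enc, activation))
    feats[feature_idx] += delta
    return matvec(W_dec, feats)

x = [0.2, 0.4, 0.6]
print(steer(x, 0, 1.0))  # activation with feature 0 amplified
```

In a real setup the steered activation is written back into the residual stream mid-forward-pass, which shifts generation toward whatever concept that feature encodes, with no prompt changes.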
小白小白@LiFeng52487·
@shawmakesmagic Casting a wide net and overfishing will only damage your reputation. With so many projects and so many unfinished ones, how can you have credibility? It's hard to imagine how poor you are.
1 reply · 0 reposts · 1 like · 569 views
Shaw (spirit/acc)@shawmakesmagic·
If you wronged me, I’m coming for you
And I will never forget
And I am not afraid to use all the tools
And I will be here longer than you
Not going anywhere
Ready for round 2
10 replies · 3 reposts · 65 likes · 3.2K views
Shaw (spirit/acc) retweeted
esper.hl@jade_esper·
@shawmakesmagic Semi-wrathful bliss is underrated
0 replies · 1 repost · 4 likes · 2.6K views
Shaw (spirit/acc) retweeted
Jake Wintermute 🧬/acc
How it feels to do biotech in 2026
Jake Wintermute 🧬/acc tweet media
98 replies · 720 reposts · 11.7K likes · 436.1K views
Engramme@EngrammeHQ·
Persistent memory is the Achilles heel of AI. Engramme’s Large Memory Models (LMMs) empower every app with persistent memory. Google solved search. OpenAI solved language. Engramme solved memory. Join beta: engramme.com/signup
177 replies · 163 reposts · 1.5K likes · 1.1M views
Shaw (spirit/acc) retweeted
dexploarer ./cozydev@dEXploarer·
I'm not sure you guys are fully aware of what's going on here, or how this is just the beginning. 3 is a meaningful number @elizaOS
5 replies · 17 reposts · 45 likes · 6.3K views