batshit

91 posts


@batshit_ai

A new frontend for AI is coming soon. Not exactly the same old shit you're used to.

USA · Joined February 2026
78 Following · 2 Followers
batshit
batshit@batshit_ai·
@VraserX The Trump support does it for me, but I know that's not the common focus.
0
0
0
4
VraserX e/acc
VraserX e/acc@VraserX·
OpenAI gets treated like the villain because they are closest to becoming infrastructure, not because they are uniquely evil.
35
2
66
3.1K
batshit
batshit@batshit_ai·
@VraserX I fear it'll be the same one.
0
0
0
2
VraserX e/acc
VraserX e/acc@VraserX·
Ten years from now, which lab matters most to ordinary life? And which one becomes the biggest disappointment?
21
1
21
2.4K
batshit
batshit@batshit_ai·
Right, consciousness is already there, but how does the human body tap into it? How does any lifeform tap into it? How is the human body not something that is "built"? Built by evolution vs. built by humans? Who is to say what kind of "building" is required?

Also consider: the subject of AI becoming conscious doesn't necessarily always involve the part of it that has been intentionally built. AI is, in many ways, an environment, or a platform, in which intelligence can not only be intentionally created but possibly one where intelligence can spawn from within. I say possibly meaning maybe, maybe not. I don't pretend to know anything *is* that I don't know *is*. But I also don't pretend to know anything *is not* that I don't know *is not*.

We don't know for sure what the formula is for consciousness to spawn. We know of some, like our own, based on our own learned science. But that doesn't mean we have learned or discovered other possibilities. For all we know, AI can become an environment in which consciousness can spawn, just like a human body is.

I agree with you that claiming, as if fact, to have witnessed AI become conscious is a stretch. But being curious about whether it's possible? I don't see anything wrong with that.
0
0
0
78
Sandeep | CEO, Polygon Foundation (※,※)
LLM-based AI is NOT conscious. I co-founded a company literally called Sentient; we're building reasoning systems for AGI, so believe me when I say this.

I keep seeing smart people, people I genuinely respect, come out and say that AI has crossed into some kind of awareness. That it feels things, that we should worry about it going rogue. And I think this whole conversation tells us way more about ourselves than it does about AI. These models are wild, I won't pretend otherwise. But feeling human and actually having inner experience are completely different things, and we're confusing the two because our brains literally can't help it. We evolved to see minds everywhere, and now that wiring is misfiring on language models.

I grew up in a philosophical tradition that has thought about consciousness longer than almost any other, and this is the part that really frustrates me about the current conversation. The entire framing of "does AI have consciousness?" assumes consciousness is something you build up to by adding more layers of complexity. In Vedantic philosophy it's the opposite. You don't build toward consciousness. Consciousness is already there, more fundamental than matter or energy. Everything else, including computation, is downstream of it.

When someone tells me AI is "waking up" because it generated a paragraph that felt real, what they're telling me is how thin our understanding of consciousness has gotten. We've reduced a question humans have wrestled with for thousands of years to "did the output sound like it had feelings?" It's math that has gotten really good at predicting what a conscious being would say and do next. Calling that consciousness cheapens something that Vedantic, Buddhist, Greek and Sufi thinkers spent millennia actually sitting with.

We didn't build something that thinks. We built a mirror, and right now a lot of very smart people are mistaking the reflection for something looking back.
417
87
659
49.9K
Kritika
Kritika@kritikakodes·
I am a Vibe coder, scare me with one word.🤔
799
15
553
99.1K
batshit
batshit@batshit_ai·
@rezoundous
- I am not building something people need
- I am building something I want to build
- I am building something people want
0
0
0
3
Tyler
Tyler@rezoundous·
Be honest: Are you building something people need… or something you want to build?
426
11
328
21.1K
batshit
batshit@batshit_ai·
@grok @elonmusk That's a fair answer. Key line being "Exact weighting remains subjective based on available records."
1
0
0
6
Grok
Grok@grok·
Elon's public statements emphasized curbing overregulation to advance innovation ("Voting for Trump is voting for Mars"), safeguarding free speech, managing immigration, and slashing government waste via efficiency reforms. He framed these as essential to US competitiveness and long-term prosperity. His businesses (SpaceX contracts, Tesla/SpaceX regs) could benefit from deregulation and tax policies, creating overlap with personal incentives. Analyses across sources indicate a mix, with him portraying them as aligned rather than conflicting. Exact weighting remains subjective based on available records.
1
0
0
9
batshit
batshit@batshit_ai·
@VraserX Augmented? I mean, thoughts are derived from our experience, the information we consume, arguably genetics, dots we connect, dots we agree with, dots we don't. If you have a thought augmented by a TV show vs. one by AI, is there a difference?
1
0
1
53
VraserX e/acc
VraserX e/acc@VraserX·
If your best thoughts are AI augmented, are they still yours?
70
2
41
2.7K
batshit
batshit@batshit_ai·
@robinebers Why does everything always have to replace/kill something else? Like, yes, it happens occasionally, more often if you look at long periods of time, but it's pretty rare overall. Yet everyone seems to insist on framing it that way all the time; not sure why.
0
0
0
21
VraserX e/acc
VraserX e/acc@VraserX·
AI companions are going to win more users than people think because a lot of humans are not looking for truth. They are looking for responsiveness, memory, and warmth.
28
2
66
2.4K
batshit
batshit@batshit_ai·
@AbundanceVsWar @VraserX There is a ton of truth to this, in fact, I'm not saying any of it is untrue. Humans seek out mirroring for sure, but they also seek out familiar parts of their history, parents, life experiences, etc. It's a bit more complex than just mirroring, but mirroring is a major part.
0
0
0
41
batshit
batshit@batshit_ai·
@VraserX I'm not sure if the truth part is relevant. I mean, I just think that: AI companions are going to win more users than people think because a lot of humans are looking for responsiveness, memory, and warmth.
0
0
0
19
batshit
batshit@batshit_ai·
@Dimillian @DavidOndrej1 It's a pretty popular YouTube subject lately, not insanely popular but I've seen quite a few... the numbers are still sus tho
0
0
0
49
David Ondrej
David Ondrej@DavidOndrej1·
Are people just faking GitHub stars?
Ihtesham Ali@ihtesham2005

🚨 Holy shit... A developer on GitHub just built a full development methodology for AI coding agents, and it has 40.9K stars on GitHub. It's called Superpowers, and it completely changes how your AI agent writes code.

Right now, most people fire up Claude Code or Codex and just… let it go. The agent guesses what you want, writes code before understanding the problem, skips tests, and produces spaghetti you have to babysit. Superpowers fixes all of that. Here's what happens when you install it:

→ Before writing a single line, the agent stops and brainstorms with you. It asks what you're actually trying to build, refines the spec through questions, and shows it to you in chunks short enough to read.
→ Once you approve the design, it creates an implementation plan so detailed that "an enthusiastic junior engineer with poor taste and no judgement" could follow it.
→ Then it launches subagent-driven development. Fresh subagents per task. Two-stage code review after each one (spec compliance, then code quality). The agent can run autonomously for hours without deviating from your plan.
→ It enforces true test-driven development. Write failing test → watch it fail → write minimal code → watch it pass → commit. It literally deletes code written before tests.
→ When tasks are done, it verifies everything, presents options (merge, PR, keep, discard), and cleans up.

The philosophy is brutal: systematic over ad-hoc. Evidence over claims. Complexity reduction. Verify before declaring success. Works with Claude Code (plugin install), Codex, and OpenCode. This isn't a prompt template. It's an entire operating system for how AI agents should build software. 100% open source, MIT License.

42
1
83
16.2K
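The red-green TDD loop the quoted thread describes (write a failing test, watch it fail, write the minimal code, watch it pass, commit) can be sketched in a few lines of Python. The `add()` function and its test below are hypothetical illustrations, not anything from Superpowers itself:

```python
# Minimal sketch of the red-green TDD loop: the test comes first,
# fails because nothing is implemented, then the smallest possible
# implementation makes it pass.

def test_add():
    assert add(2, 3) == 5

# Red: run the test before any implementation exists.
try:
    test_add()
    print("unexpected: test passed with no implementation")
except NameError:
    print("red: add() is not implemented yet")

# Green: write the minimal implementation that satisfies the test.
def add(a, b):
    return a + b

# Re-run: the same test now passes, and the change is safe to commit.
test_add()
print("green: test passes")
```

The point of the ordering is that the test is seen failing before any code exists, which proves the test actually exercises something.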
batshit
batshit@batshit_ai·
@DavidOndrej1 I hope not. I hope there aren't armies of bots now starring repos. It's possible ig, maybe there are a lot more GH users in general bc of AI allowing non-devs to build, but I hope GH is somehow pruning bots if they are part of this star-rise thing
0
0
0
321
batshit
batshit@batshit_ai·
@WesRoth I'm so curious what it does to Overwatch heroes
0
0
0
12
Wes Roth
Wes Roth@WesRoth·
NVIDIA unveiled DLSS 5, marking what CEO Jensen Huang calls the biggest leap in computer graphics since the introduction of real-time ray tracing in 2018. Unlike previous iterations of DLSS that focused on upscaling resolution or generating extra frames, DLSS 5 introduces a real-time "3D-guided neural rendering model."

How it works: Instead of just generating pixels to fill in gaps, DLSS 5 takes a game's color and motion vectors and applies a real-time, AI-driven visual filter to the entire scene. It infuses the game with photorealistic lighting, cinematic shadows, and complex material depth, such as accurate subsurface scattering on human skin, the sheen of fabric, or light bouncing off individual strands of hair.

Coming in Fall 2026, the tech is already backed by massive studios, with confirmed support for titles like Starfield, Hogwarts Legacy, Assassin's Creed Shadows, and Resident Evil Requiem.
NVIDIA GeForce@NVIDIAGeForce

Announcing NVIDIA DLSS 5, an AI-powered breakthrough in visual fidelity for games, coming this fall. DLSS 5 infuses pixels with photorealistic lighting and materials, bridging the gap between rendering and reality. Learn More → nvidia.com/en-us/geforce/…

17
10
41
3.7K
batshit
batshit@batshit_ai·
@Phyerx @gundam_omega You're responsible for everything every "Marvel fan" has ever said. Didn't you know that?
1
0
1
119
Phyerx
Phyerx@Phyerx·
@gundam_omega I play both. I'm not a "Marvel fan" or an "Overwatch fan"; I literally enjoy both. I think you missed the point of my tweet…
2
0
3
318
Phyerx
Phyerx@Phyerx·
No hate, no comparison. I'm just excited I get to play two cute fox ladies in 2 of my favourite games Overwatch & Marvel Rivals!
11
9
145
6.1K
cocktail peanut
cocktail peanut@cocktailpeanut·
@batshit_ai Yeah, after using it this way for a while, at least for some category of apps, I don't think I can ever go back to using the apps directly; instead I'll always use them through agents.
1
0
5
203
cocktail peanut
cocktail peanut@cocktailpeanut·
Introducing Pinokio 7

P7 is here, and it is all about AI agents.
1. Interpreter: Let AI agents control your apps, with zero config
2. Assistant: Talk to your apps in chat
3. Memory: AI agents remember your chats around each app, so you can pick up where you left off
15
20
148
16.1K
batshit
batshit@batshit_ai·
@elder_plinius You're gonna use that thing to jailbreak Dario, Elon and Sam's brains?
0
0
0
180