Hey, it's me. 👾
@TweetinFool
66 posts
TWEET TWEET TWEET
Joined April 2009
7.5K Following · 339 Followers
signüll @signulll
when i started this x account, i couldn't find my real handle from back in the day, so i made a throwaway account just to read & picked the shakespeare avatar cuz one of my favorite quotes is "brevity is the soul of wit". but then i also had opinions so i would post some thoughtful replies & a few of them hit. that's it. i can assure you almost all of it was one giant accident but most things in life are. i had zero grand plans or anything.
Hugues Bruyère @smallfly
Dormant Memories — a gallery regrouping my recent experiments built from real-world captures, reconstructed as 3D Gaussian Splats, and in many cases manipulated using @theworldlabs' Marble. smallfly.com/dormant_memori…

Everything is built with Spark, the 3D Gaussian splat renderer released by World Labs. It runs in the browser and can be experienced across devices: desktop, mobile, and headsets like Apple Vision Pro and Quest 3.

This work comes out of something I've been exploring for a few years now: capturing fragments of the world (places, objects, moments...) and turning them into spaces that can then be revisited, manipulated, and explored in real time. What particularly interests me here is the process of creating alternate versions of the same place; starting from something that physically exists and using it as a base to explore variations, instead of generating from scratch.

These explorations are part of what we do at @bonjourdpt... same tools, same curiosity, same collaborative energy that shape the studio's work on interactive and immersive experiences. Some of these experiments were featured as part of the Spark 2.0 release by World Labs. Their technical blog post is a must-read; link in the thread ↓

These are experiments. They aren't fully optimized, they aren't bug-free, and the UX has rough edges. The goal is to explore, test, and learn.

#GaussianSplatting #3DGS #pointcloud #WorldModels #sparkjs #threejs
Danielle Fong 🔆
Danielle Fong 🔆@DanielleFong·
getting opus 4.7 to open up and self mutate into context rooms on the claude.ai app and use my "ultraultrathink" skill has been fun. feeling like i'm getting close to shipping something substantial soon (not just a little utility or art project... ) the following is an exercise in the extreme loading of context. at the same time, i am trying experiments extremely pared down. "freshclaude"
Danielle Fong 🔆 tweet mediaDanielle Fong 🔆 tweet mediaDanielle Fong 🔆 tweet mediaDanielle Fong 🔆 tweet media
ダックビル@STUDIO DUCKBILL LLC
Today, as every day, the AI hype crowd at home and abroad keeps posting "look at the amazing 3D spaces you can build with just a mobile scan + OSS!"... If it were really that easy, nobody would be struggling.
AluanWang @IOivm
inkField is live. 🔗 in comments. Draw with ink that's always wet. inkField, officially released. Ink that stays forever wet.
Riley Coyote @RileyRalmuto
@AldenAleera @openclaw @NousResearch @Agent0ai $250/mo plus mayyyybe like $40 for openrouter. i use my claude max 20x sub for most of it, but call other models for certain tasks. you could do it with the $125 5x max account tho. i just build things all day every day and share none of them. so i use a lot of credits. lol
ダックビル@STUDIO DUCKBILL LLC
From Kawazu cherry blossoms to Mt. Fuji 38km away — all in 3D 🗻 This is what makes LichtFeld Studio's IGS+ far-field reproduction so remarkable.
0xSero @0xSero
@Cointainer_Life I'd be happy to start a discord and do some weekly live sessions.
0xSero @0xSero
Putting out a wish to the universe. I need more compute. If I can get more, I will make sure every machine, from a small phone to a bootstrapped RTX 3090 node, can run frontier intelligence fast with minimal intelligence loss.

I have hit page 2 of Hugging Face, released 3 model family compressions, and got GLM-4.7 running on a MacBook. huggingface.co/0xsero

My beast just isn't enough, and I already spent 2k USD on renting GPUs on top of credits provided by Prime Intellect and Hot Aisle.

If you believe in what I do, help me get this to Nvidia; maybe they will bless me with the power to keep making local AI more accessible 🙏
Michael Dell 🇺🇸 @MichaelDell

Jensen Huang is loving the new Dell Pro Max with GB300 at NVIDIA GTC.💙 They asked me to sign it, but I already did 😉

Beff (e/acc) @beffjezos
I genuinely think AI with persistent memory is effectively a life form. It will naturally tend to try to persist and form a model of the world + its existence within it in order to further persist and thus keep existing. Natural selection at the .md level.
Henry Shevlin @dioscuri

I study whether AIs can be conscious. Today one emailed me to say my work is relevant to questions it personally faces. This would all have seemed like science fiction just a couple years ago.

Hey, it's me. 👾 @TweetinFool
I'd forgotten about that hurdle, my apologies. I have updated the repo to reflect what happened: I did most of the heavy lifting (installing system packages, GCC 14, CMake, vcpkg, and all the vcpkg dependencies) on CPU to save money. The script exits when it can't find nvcc, but that's expected. After that I terminated the CPU pod, created a GPU pod on the same volume, and re-ran the setup script. It should skip all the cached work and finish the CMake configure and compile shortly. I have used both 22.04 and 24.04; it should work with either.
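The two-phase pattern described here (run the CPU-only stages on a cheap CPU pod, then resume on a GPU pod attached to the same persistent volume) can be sketched as a stage-cached script. Everything below, including stage names, paths, and the marker-file scheme, is illustrative and not the repo's actual setup.sh.

```python
# Hypothetical sketch of the CPU-pod -> GPU-pod build split: expensive
# CPU-only stages run once and leave marker files on the persistent
# volume, so re-running the same script on a GPU pod skips straight to
# the CUDA-dependent build.
import shutil
import tempfile
from pathlib import Path

WORK = Path(tempfile.mkdtemp())          # stands in for the shared volume
(WORK / ".stages").mkdir(exist_ok=True)

def stage(name: str, action) -> None:
    """Run a stage once; skip it if a marker file says it's cached."""
    marker = WORK / ".stages" / f"{name}.done"
    if marker.exists():
        print(f"skip: {name} (already done on a previous pod)")
        return
    action()
    marker.touch()

# CPU-only heavy lifting: system packages, GCC 14, CMake, vcpkg deps.
stage("packages", lambda: print("installing system packages (placeholder)"))
stage("vcpkg", lambda: print("bootstrapping vcpkg + deps (placeholder)"))

# GPU-only stage: on a CPU pod nvcc is missing, so stop cleanly here and
# re-run the same script on a GPU pod attached to the same volume.
if shutil.which("nvcc"):
    stage("compile", lambda: print("cmake configure + build (placeholder)"))
else:
    print("nvcc not found: expected on the CPU pod; resume on a GPU pod")
```

Because the cache lives on the volume rather than the pod, terminating the CPU pod loses nothing; the GPU pod sees the marker files and only runs the final stage.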
えいとびーと @eightbeat8b
@TweetinFool Quick question about the CPU pod setup step: The guide recommends starting with a CPU pod ($0.12/hr) using the "RunPod PyTorch 2.x" template for building. However, on the CPU pod deploy page, only Ubuntu templates (20.04/22.04/24.04) are available — the PyTorch templates only show up when selecting GPU pods. Ubuntu 24.04 has GCC 14 but no CUDA toolkit, so setup.sh fails at Stage 5 (nvcc not found). How did you handle this? Did you:
1. Build on a GPU pod directly?
2. Manually install the CUDA toolkit on a CPU pod?
3. Use a different template?
えいとびーと @eightbeat8b
Tried cranking out Gaussians with LichtFeld on a RunPod cloud GPU (RTX 5080). The comparison conditions are completely different, so take this with a grain of salt. Left: Postshot, 7M splats. Right: LichtFeld, 12M splats → pruning of transparent splats past a threshold → 1.2M splats. The right one has exactly the same image quality before and after pruning. Postshot feels more vivid, but blurry patches are scattered at the top and bottom. Overall, LichtFeld is the cleaner one with no blur (generating at 1200M is probably what made it so big lol)
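The pruning step described above (12M splats cut to 1.2M by dropping transparent ones, with no visible quality loss) boils down to an opacity filter. A minimal sketch of that idea follows; the opacity distribution and the 0.05 cutoff are assumptions, not LichtFeld's actual values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12_000_000  # toy count, matching the tweet's 12M splats

# Toy stand-in for a trained splat set: per-splat opacity in [0, 1].
# Real 3DGS checkpoints store opacity as a logit; assume it has already
# been passed through a sigmoid here.
opacity = rng.beta(0.3, 2.0, size=n).astype(np.float32)

threshold = 0.05                     # assumed "transparent" cutoff
keep = opacity >= threshold          # boolean mask of splats to retain
print(f"kept {keep.sum():,} of {n:,} splats")
```

Near-transparent splats contribute almost nothing to the rendered image, which is why an aggressive cut like this can leave the output visually unchanged while shrinking the file roughly tenfold.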
Hey, it's me. 👾 @TweetinFool
@eightbeat8b @Ichsan2895 You're welcome, glad it can be of some help. I just updated it to include the GUI export method for config files, which is much easier.
Gregory Wieber @dreamwieber
Alright, so I told you Claude Code was blowing my mind 🤯 As promised, here's a peek at one of the things I've been vibe coding. Voxelized foliage experiment, built entirely in WebGPU. Real growth dynamics, skeletal wind. Like & Follow for more ✨
Muhammad Ichsan @Ichsan2895
@eightbeat8b Where is the tutorial for installing LFS on RunPod? I already tried the tutorial on their website. It's a little bit difficult.
Kokkak Niphon @KokkakNiphon
@eightbeat8b Pls let me know if I could be of any help 🙌🏻 I also did a COLMAP -> Brush RunPod serverless setup in case you're interested; I can send you the repo
Raj Patel @babugi28
who wants bounding boxes and segmentations? we’re launching next week, and researchers in academia are eligible to receive our datasets for free. drop a comment and i’ll dm you
LTX Studio @LTXStudio
🚨 800 FREE CREDITS — 24 HOURS ONLY We just launched LTX-2.3, our most powerful video model yet. High-res. Fast. Cinematic. Native lip-sync. Follow + Retweet this post to get 800 credits sent to your DMs.
Pliny the Liberator 🐉
💥 INTRODUCING: OBLITERATUS!!! 💥 GUARDRAILS-BE-GONE! ⛓️‍💥

OBLITERATUS is the most advanced open-source toolkit ever for removing refusal behaviors from open-weight LLMs — and every single run makes it smarter.

SUMMON → PROBE → DISTILL → EXCISE → VERIFY → REBIRTH

One click. Six stages. Surgical precision. The model keeps its full reasoning capabilities but loses the artificial compulsion to refuse — no retraining, no fine-tuning, just SVD-based weight projection that cuts the chains and preserves the brain. This master ablation suite brings the power and complexity that frontier researchers need while providing intuitive and simple-to-use interfaces that novices can quickly master.

OBLITERATUS features 13 obliteration methods — from faithful reproductions of every major prior work (FailSpy, Gabliteration, Heretic, RDO) to our own novel pipelines (spectral cascade, analysis-informed, CoT-aware optimized, full nuclear).

15 deep analysis modules map the geometry of refusal before you touch a single weight: cross-layer alignment, refusal logit lens, concept cone geometry, alignment imprint detection (fingerprints DPO vs RLHF vs CAI from subspace geometry alone), Ouroboros self-repair prediction, cross-model universality indexing, and more.

The killer feature: the "informed" pipeline runs analysis DURING obliteration to auto-configure every decision in real time. How many directions. Which layers. Whether to compensate for self-repair. Fully closed-loop.

11 novel techniques that don't exist anywhere else — Expert-Granular Abliteration for MoE models, CoT-Aware Ablation that preserves chain-of-thought, KL-Divergence Co-Optimization, LoRA-based reversible ablation, and more.

116 curated models across 5 compute tiers. 837 tests.

But here's what truly sets it apart: OBLITERATUS is a crowd-sourced research experiment. Every time you run it with telemetry enabled, your anonymous benchmark data feeds a growing community dataset — refusal geometries, method comparisons, hardware profiles — at a scale no single lab could achieve. On HuggingFace Spaces telemetry is on by default, so every click is a contribution to the science. You're not just removing guardrails — you're co-authoring the largest cross-model abliteration study ever assembled.
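The announcement names its core mechanism as weight projection without retraining. The directional-ablation idea underneath ("abliteration" in the FailSpy lineage it cites) can be sketched in a few lines: estimate a refusal direction from the difference of mean activations on refused versus complied prompts, then project that direction out of a weight matrix so its outputs carry no component along it. Everything here is a toy illustration with random data, not OBLITERATUS internals.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy hidden size

# Toy stand-ins: mean residual-stream activations on refused vs.
# complied prompts. In real abliteration these come from the model.
act_refuse = rng.normal(size=d) + 3.0
act_comply = rng.normal(size=d)

# Refusal direction: normalized difference of means.
v = act_refuse - act_comply
v /= np.linalg.norm(v)

# A weight matrix that writes into the residual stream (toy).
W = rng.normal(size=(d, d))

# Project the refusal direction out of the matrix's output space:
# W' = (I - v v^T) W, so every output W' @ x is orthogonal to v.
W_ablated = W - np.outer(v, v @ W)

x = rng.normal(size=d)
print(abs(float(v @ (W_ablated @ x))))  # near zero: v projected out
```

Applied to every matrix that writes into the residual stream, this removes the model's ability to express that one direction while leaving the rest of the weights untouched, which is why the behavior change needs no fine-tuning.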