Nevv🗿

1.5K posts

@NevvDevv

Slavic code wizard. Building products in stealth.

Joined May 2019
5.1K Following · 6.1K Followers
Kilo @kilocode·
🚨 Virtual Hackathon Alert: Kilo x @XiaomiMiMo
Last month, we asked developers to build the worst website ever. More than 500 of you showed up.
Now we're back with a new, free hackathon: Build the WORST captcha experience ever.
Prove you're not a robot by screaming into your mic. Calling your mom. Solving a mystery. Whatever it takes, get creative.
$1,250 in prizes is on the line, starting now!
Nevv🗿 retweeted
Deedy @deedydas·
So many startups think their engineers are "cracked" but have no idea what that really means. This team of 5 19yr olds built a 30 petabyte storage cluster in SF for ~$500k to get a 40x cheaper AWS S3 as a side quest to store 90M hours of video. Now, that's cracked.
Nevv🗿 retweeted
Nevv🗿 @NevvDevv·
@Stefan_3D_AI Rigging for custom models that are not characters still sucks btw
Grace Zhang @GZinMetaverse·
Everyone knows world models are the future. That’s why we’re bringing another build weekend (Mar 14-15) to @fdotinc, where physical AI builders ship world models across XR, gaming, and WebSpatial. Call it Episode One. The physical world will come next. If you want to be part of the series — start here👇 luma.com/worldsinaction…
Radiants ☀️ @RadiantsDAO·
The last comment gets the $10,000 Hackathon Prize 👇
Martin Erlić @SeloSlav·
Built a procedural open world with @spacetime_db as the backend. Every terrain chunk, tree, and stone is server-authoritative. The client subscribes and the world just appears, consistent across every connected player, no REST endpoints, no polling, no manual sync logic. The database is the game server. Now to fill it with agents!
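The "client subscribes and the world just appears" pattern above can be sketched as a plain pub/sub stand-in. This is NOT the SpacetimeDB API, just the shape of the server-authoritative model it describes: the server owns all chunk state, clients only subscribe and render, and late joiners get a full initial sync instead of polling.

```javascript
// Hedged sketch of a server-authoritative world store (class and method
// names are illustrative, not SpacetimeDB's API).
class WorldServer {
  constructor() {
    this.chunks = new Map();   // chunkId -> authoritative chunk state
    this.subscribers = new Set();
  }
  subscribe(onChunk) {
    this.subscribers.add(onChunk);
    // Initial sync: a new client immediately receives the whole world.
    for (const [id, state] of this.chunks) onChunk(id, state);
    return () => this.subscribers.delete(onChunk);
  }
  setChunk(id, state) {
    // Only the server mutates state; every client is pushed the change.
    this.chunks.set(id, state);
    for (const fn of this.subscribers) fn(id, state);
  }
}

const server = new WorldServer();
const clientA = new Map(), clientB = new Map();
server.subscribe((id, s) => clientA.set(id, s));
server.setChunk("0,0", { trees: 12, stones: 3 });
server.subscribe((id, s) => clientB.set(id, s)); // late joiner, still consistent
```

With the real thing, the "server" is the database itself and the subscription is a query, but the consistency guarantee clients observe is the same.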
Taylor @taylor_sntx·
hologram map working with live heightmap data from the earth... here I'm exploring the Himalayas, Grand Canyon, and Andes mountains. it accepts any longitude/latitude and then you can pan around. works in the browser powered by three.js
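The "accepts any longitude/latitude" part is typically standard Web Mercator slippy-map tile math, since live heightmap tile services are addressed by zoom/x/y. A minimal sketch (my assumption about the approach, not Taylor's code; the elevation decode is the Mapbox Terrain-RGB encoding, other providers differ):

```javascript
// Convert a lon/lat pair to slippy-map tile coordinates at a given zoom.
function lonLatToTile(lon, lat, zoom) {
  const n = 2 ** zoom; // tiles per axis at this zoom
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y };
}

// Decode one Terrain-RGB pixel into metres of elevation.
function terrainRgbToMeters(r, g, b) {
  return -10000 + (r * 65536 + g * 256 + b) * 0.1;
}
```

The decoded heights then drive vertex displacement on a plane geometry in three.js, and panning just re-fetches neighbouring tiles.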
Nevv🗿 retweeted
sam @samdape·
you basically need to be unemployed rn to keep up
Pixel Vault® @pixelvaultst·
Free PNG pack, over 200 assets! ✨
Nevv🗿 @NevvDevv·
@AlexFinn "Can run on almost any modern computer" "If you have 32gb of RAM" 😐
Alex Finn @AlexFinn·
Do you even understand what this means? An open source model just released that is:
• Just as smart as Sonnet 4.5
• Incredible at coding
• Can run on almost any modern computer

If you have 32gb of RAM (most Mac Minis do) you can have unlimited super intelligence on your desk. For free.

Sonnet 4.5 was released 5 months ago. In 5 months that level of intelligence went from frontier to free on your desk. And not only that, it can run on any laptop with 32gb of RAM.

If you have the memory, do the following immediately:
1. Download LM Studio
2. Go to your OpenClaw and ask which of these new Qwen models is best for your hardware
3. Have it walk you through downloading and loading it
4. Build apps with it knowing you are using your own personal, private super intelligence on your desk

The people denying this is the future are so beyond lost.
Qwen @Alibaba_Qwen

🚀 Introducing the Qwen 3.5 Medium Model Series
Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B

✨ More intelligence, less compute.
• Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B — a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts.
• Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models — especially in more complex agent scenarios.
• Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring:
– 1M context length by default
– Official built-in tools

🔗 Hugging Face: huggingface.co/collections/Qw…
🔗 ModelScope: modelscope.cn/collections/Qw…
🔗 Qwen3.5-Flash API: modelstudio.console.alibabacloud.com/ap-southeast-1…

Try in Qwen Chat 👇
Flash: chat.qwen.ai/?models=qwen3.…
27B: chat.qwen.ai/?models=qwen3.…
35B-A3B: chat.qwen.ai/?models=qwen3.…
122B-A10B: chat.qwen.ai/?models=qwen3.…

Would love to hear what you build with it.

Nevv🗿 retweeted
Joseph Suarez 🐡 @jsuarez·
Reinforcement Learning 10,000x Faster - an invited talk for the University of Warwick's AI summit. I share the story of Neural MMO and how a crazy out-there MMO AI project became the basis for PufferLib, the fastest RL library available today. Star puffer on github to support!
RikiLuxen @Iuvnriki·
@magicblock how many of these projects are actually playable?
MagicBlock ✨ @magicblock·
30 projects submitted to the Solana Blitz v0 Hackathon.
Games, DeFi, privacy, infra, and more - all built with MagicBlock.
Winners dropping soon 👀
Nevv🗿 retweeted
HIMARS @himars·
Fully onchain gaming is the future
tk @heettike·
we're doing an internal hackathon later this week & might just tokenise all 5 projects with noice
age of abundance for software, agents & tokens
Gregor Zunic @gregpr07·
The three biggest names in AI just signed on to sponsor one hackathon 👀 Google Deepmind, Anthropic, and OpenAI are now on board for the world's largest web agents hackathon. $150K+ in prizes. Don't miss it. 🌁 Register below ⬇️
Taylor @taylor_sntx·
i want visualizations to feel more organic, less sharp and perfect. like a well-worn hologram. this three.js visualization uses a few tricks - particles arranged in rings instead of a grid, variable density that decreases with height, and falloff opacity
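The three tricks listed above — rings instead of a grid, density decreasing with height, and falloff opacity — are just position math before anything touches a renderer. A minimal sketch (function name and parameters are mine, not from the original visualization):

```javascript
// Generate particles in stacked concentric rings: fewer particles per ring
// and lower opacity as height increases, with the radius tapering upward.
function buildRingParticles({ rings = 20, baseCount = 64, maxHeight = 1 } = {}) {
  const particles = [];
  for (let r = 0; r < rings; r++) {
    const h = (r / (rings - 1)) * maxHeight;      // ring height
    const density = 1 - 0.8 * (h / maxHeight);    // density decreases with height
    const count = Math.max(4, Math.round(baseCount * density));
    const radius = 1 - 0.5 * (h / maxHeight);     // taper toward the top
    for (let i = 0; i < count; i++) {
      const angle = (i / count) * 2 * Math.PI;
      particles.push({
        x: radius * Math.cos(angle),
        y: h,
        z: radius * Math.sin(angle),
        opacity: Math.max(0, 1 - h / maxHeight),  // falloff opacity
      });
    }
  }
  return particles;
}
```

In three.js these positions and opacities would be packed into BufferGeometry attributes and drawn as a Points object; small per-particle jitter on radius and angle would push it further toward the "well-worn hologram" look.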
Nevv🗿 retweeted
Jay Alto @theJayAlto·
out of necessity, the internet will soon be made up of geofenced, human-verified, invite-only micro-communities
Nevv🗿 @NevvDevv·
@askOkara Honestly tho, opus still generates higher quality code in my experience
Nevv🗿 @NevvDevv·
@LiorOnAI Wanted to get started with BCI, any idea what the cheapest hardware options currently are?
Lior Alexander @LiorOnAI·
You can now turn cheap EEG headsets into lab-grade brain scanners. And it's open-source.

ZUNA is a 380M-parameter foundation model that reconstructs missing brain signals from partial EEG data. It works across any electrode setup, from consumer headsets to 256-channel research systems, without retraining.

It lets you:
- Reconstruct missing EEG channels from sparse data
- Denoise corrupted signals
- Predict new channels from just electrode coordinates
- Handle arbitrary electrode layouts

The model uses a diffusion autoencoder with a transformer backbone. It was trained on 2 million channel-hours across 208 datasets using masked diffusion training and 4D spatial embeddings. This lets the model understand the physical geometry of electrode placement. Each channel signal gets compressed into tokens, then the model encodes x, y, z positions plus time into separate attention components.

EEG data has been stuck in a pre-foundation model era. Datasets are small, fragmented across institutions, collected under different protocols. The standard fix for missing channels is spherical spline interpolation, basically spatial smoothing. It works okay when a few channels drop out but falls apart when you lose more than 75% of your data.

ZUNA beats this baseline by learning actual patterns in brain activity instead of just smoothing between points. The gap widens dramatically at high dropout rates, exactly where you need it most.

Thought-to-text is positioning itself as the next major AI modality after language, vision, and audio. But you can't build that future on data that gets thrown away because a few electrodes failed.

The model is fully open source under Apache 2.0, runs on consumer GPUs, and works on CPU for many tasks.
Zyphra @ZyphraAI

Introducing ZUNA, a 380M-parameter BCI foundation model for EEG data, a significant milestone in the development of noninvasive thought-to-text. Fully open source, Apache 2.0.
