Dolphin
@dphnAI
84 posts

AI Lab developing uncensored models & distributed inference ⟠ Over 4m monthly downloads on Hugging Face ⟠

$POD · Joined January 2025
326 Following · 1.8K Followers

Pinned Tweet
Dolphin @dphnAI
Dolphin Mistral 24B Venice Edition is released.
Dolphin Mistral 24B Venice Edition is a collaborative project we undertook with @AskVenice with the goal of creating the most uncensored version of Mistral 24B for use within the Venice ecosystem.
Dolphin Mistral 24B Venice Edition is now live as “Venice Uncensored” - the new default model for all Venice users.
Dolphin retweeted
0xJeff @0xJeff
If you care about Privacy, you'll care about PRIVATE AI
> Growing concerns over data privacy & AI security - 71% of users regretted sharing their data with an AI tool (Cisco)
> DeCompute players see a surge in demand after adding TEEs - players like @TargonCompute saw ARR ramp up to millions post-TEE
> DeInference/private inference aggregates top models & charges $18-20/month. Cheaper than an individual lab (e.g. @AskVenice, @chutes_ai)
> Trend of models getting alignment/censorship removed while preserving capability (e.g. @dphnAI serving top uncensored models to Venice)
> Growing adoption of uncensored models for (i) companion AI/roleplay use cases (ii) penetration testing (simulating cyber attacks)
> DePIN for crowdsourcing compute = PROVEN to work in bootstrapping supply (GPUs) + cheap compute/inference bootstraps early demand
"Private AI = The Fastest Growing AI Stack in 2026"
0xJeff @0xJeff

Alpha-packed episode in time for the Privacy x AI szn
The Privacy Landscape (Part 1) is live, covering:
> The Privacy Ecosystem Map - March 2026 Edition
> Why the SEC’s recent rule change is a massive $21.6B catalyst for @Dusk, @Aleo, & @Canton
> Privacy L1s / Privacy BTC narrative and the hurdles they face in 2026
> How @AskVenice and @dphnAI are disrupting the market with "uncensored/private" AI
9 infra verticals, 3 covered in this week's report
Part 2 to come next week
[Link in bio]

Dolphin @dphnAI
@mudler_it Will be sharing more on this with the release
Ettore Di Giacinto @mudler_it
@dphnAI I wonder what stops someone from editing the node code or attaching as a fake node to the network, with a fake GPU, and earning tokens?
Dolphin @dphnAI
Preview of our distributed inference software.
Community beta live in a few days.
Idle GPUs will be able to run models & earn $POD tokens for their contributions.
Dolphin @dphnAI
Added remote monitoring of node software via our web app this week
Dolphin @dphnAI
@VitalikButerin @Alibaba_Qwen Once you get it running in Q4 with 32k context, try gradually increasing the context window until you run out of vRAM to work out what fits. Feel free to DM.
Dolphin @dphnAI
@VitalikButerin @Alibaba_Qwen 35B in a Q4 GGUF quant + context at 32k should fit on your 5090.
By default the Qwen3.5 context window is set to 262k, which drastically increases vRAM usage & is probably what brought it into system memory.
You can also try FP8 kv_cache quantization to reduce the context footprint.
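A rough back-of-envelope sketch of the sizing reasoning in the tweet above. The layer/head counts and weight size below are illustrative assumptions, not the published Qwen3.5-35B-A3B configuration, and activation/runtime overhead is ignored:

```python
# Back-of-envelope VRAM estimate: quantized weights + KV cache.
# All architecture numbers are illustrative assumptions, not the real
# Qwen3.5-35B-A3B config; activation/overhead memory is ignored.

GIB = 1024 ** 3

def kv_cache_bytes(context_len, n_layers=48, n_kv_heads=4, head_dim=128,
                   bytes_per_elem=2):
    # Two cached tensors (K and V) per layer, per token.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * context_len

weights_q4 = 20 * GIB      # ballpark for a ~35B model at ~4.5 bits/param
gpu_vram = 32 * GIB        # RTX 5090

for ctx in (32_768, 262_144):
    for label, nbytes in (("fp16 kv", 2), ("fp8 kv", 1)):
        total = weights_q4 + kv_cache_bytes(ctx, bytes_per_elem=nbytes)
        verdict = "fits" if total <= gpu_vram else "spills into system RAM"
        print(f"ctx={ctx:>7}, {label}: ~{total / GIB:5.1f} GiB -> {verdict}")
```

Under these assumptions, 32k context leaves comfortable headroom while the 262k default pushes the KV cache well past 32 GB, which matches the "brought it into system memory" behaviour described above; halving the KV element size (FP8) roughly halves the context footprint.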
Qwen @Alibaba_Qwen
🚀 Introducing the Qwen 3.5 Small Model Series: Qwen3.5-0.8B · Qwen3.5-2B · Qwen3.5-4B · Qwen3.5-9B
✨ More intelligence, less compute. These small models are built on the same Qwen3.5 foundation - native multimodal, improved architecture, scaled RL:
• 0.8B / 2B → tiny, fast, great for edge devices
• 4B → a surprisingly strong multimodal base for lightweight agents
• 9B → compact, but already closing the gap with much larger models
And yes - we're also releasing the Base models. We hope this better supports research, experimentation, and real-world industrial innovation.
Hugging Face: huggingface.co/collections/Qw…
ModelScope: modelscope.cn/collections/Qw…
Dolphin @dphnAI
@VitalikButerin @Alibaba_Qwen Same model - A3B is just appended to the HF name to represent how many parameters are active at inference time
Dolphin @dphnAI
@VitalikButerin @Alibaba_Qwen Qwen 3.5 27B is a dense model - all parameters are active at once.
GLM-4.7-flash is a 30B mixture-of-experts model which only activates 3B parameters at once.
Try Qwen-3.5-35B-A3B, which is a mixture of experts with 3B active parameters - same as GLM flash.
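A toy sketch of the dense vs mixture-of-experts distinction described above. The sizes and routing here are illustrative only, not the actual Qwen or GLM architectures: many experts exist, but each token is routed to just a few of them, so the parameters that actually run per token ("active", the A3B in the name) are a small fraction of the total.

```python
import numpy as np

# Toy mixture-of-experts layer (illustrative; not a real model config).
rng = np.random.default_rng(0)

d_model, d_ff = 64, 256       # toy hidden sizes
n_experts, top_k = 16, 2      # 16 experts in total, only 2 active per token

# Each expert is a small two-matrix feed-forward block.
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.02,
     rng.standard_normal((d_ff, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route one token vector through its top-k experts only."""
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]      # indices of selected experts
    gate = np.exp(logits[chosen])
    gate = gate / gate.sum()                  # softmax over the chosen experts
    out = np.zeros_like(x)
    for g, idx in zip(gate, chosen):
        w_in, w_out = experts[idx]            # only these weights are touched
        out += g * (np.maximum(x @ w_in, 0.0) @ w_out)
    return out

y = moe_forward(rng.standard_normal(d_model))

total_params = n_experts * 2 * d_model * d_ff
active_params = top_k * 2 * d_model * d_ff    # analogous to "active" vs total size
print(f"total expert params : {total_params:,}")
print(f"active per token    : {active_params:,} ({active_params / total_params:.0%})")
```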
Dolphin retweeted
Venice @AskVenice
Introducing Dolphin Mistral 24B Venice Edition 🐬🌅
Venice and @dphnAI are co-releasing Dolphin Mistral 24B Venice Edition - the most uncensored AI model yet.
Let’s break down this release 🧵
Dolphin @dphnAI
Dolphin token has finished a 1:1 migration to a new CA on Base:
0xeD664536023d8E4b1640C394777D34aBAFF1dF8F
No action is required - all holders have automatically received tokens to their wallet.
Dolphin @dphnAI
If your wallet is missing any tokens, DM us on here or ask in our TG chat.
This would only be possible if you purchased the old token in the 10-minute migration window.
Dolphin @dphnAI
1. DPHN was deployed directly onto Base. The new contract exists on Base but has backwards compatibility with Ethereum mainnet, so it can be bridged there in the future if mainnet scales.
2. Better ticker (the majority of community members preferred POD > DPHN).
3. Upgrading our staking contract ahead of network launch.
egrk🚩 @venzeg
@dphnAI why would you need to migrate?
Dolphin @dphnAI
Dropping an experimental checkpoint in our journey to train decensored models via online RL.
This model, based on @Salesforce’s xGen Small 9B, somehow reward-hacked its way into massive improvements in our testing.
Thanks to @lium_io for B200 GPU access.
blog.dphn.ai/xgen-rl
wsg @astrometalsky
@dphnAI @lium_io Have you made another model this uncensored that isn't as resource intensive? Not including the Venice AI collab? I'm willing to test if it's in beta lol 🤞
Dolphin retweeted
lium.io @lium_io
Appreciate being a part of @dphnAI's training run. These are exactly the type of missions that we love to support. Great work DolphinAI!
Dolphin @dphnAI

Dolphin X1 405B is now live on @huggingface.
This model is a result of our efforts to decensor @allen_ai Tulu-3 405B efficiently, using just a single B200 node to create the largest Dolphin model ever.
Thank you to @lium_io for the generous 8xB200 sponsorship.

Dolphin @dphnAI
@astrometalsky @lium_io Yes, but it requires 8xA100 to run it in FP8.
For a model you can run locally, try our latest 24B or 8B depending on your specs.
wsg @astrometalsky
@lium_io @dphnAI Would you say this is probably the most capable, most uncensored publicly available model at the moment?
Dolphin @dphnAI
Weights are available here - we also uploaded a calibrated FP8 quant which will fit in any 8xA100 / 8xH100 node: huggingface.co/dphn/Dolphin-X…
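For reference, a minimal sketch of one common way to serve an FP8 quant of this size across an 8-GPU node, using vLLM tensor parallelism. The repo id below is a placeholder (the tweet's link is truncated), and this is not the team's documented serving setup:

```python
from vllm import LLM, SamplingParams

# Hypothetical sketch: shard an FP8-quantized 405B checkpoint across all
# 8 GPUs of an A100/H100 node via tensor parallelism.
llm = LLM(
    model="dphn/placeholder-405b-fp8",  # placeholder repo id, not the real upload name
    tensor_parallel_size=8,             # one weight shard per GPU on an 8xA100 / 8xH100 node
)

outputs = llm.generate(
    ["Write a haiku about dolphins."],
    SamplingParams(max_tokens=64, temperature=0.7),
)
print(outputs[0].outputs[0].text)
```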
Dolphin @dphnAI
Read more about how we managed to train 405B with a single node in our first blog post: blog.dphn.ai/405b/
Dolphin @dphnAI
Dolphin X1 405B is now live on @huggingface.
This model is a result of our efforts to decensor @allen_ai Tulu-3 405B efficiently, using just a single B200 node to create the largest Dolphin model ever.
Thank you to @lium_io for the generous 8xB200 sponsorship.