Dolphin

110 posts

Dolphin

@dphnAI

AI Lab developing uncensored models & distributed inference ⟠ Over 4m monthly downloads on Hugging Face ⟠

Base · Joined January 2025
779 Following · 4.4K Followers
Pinned Tweet
Dolphin
Dolphin@dphnAI·
Node provider rollout has been going well
Our pool of inference nodes running Qwen 3.6 35B has generated over 3.2B tokens so far
Total inference bandwidth -> 9400 t/s
28x RTX 4090
12x RTX 5090
8x RTX PRO 6000
& many other cards
API access coming soon 🐬
Dolphin@dphnAI

Dolphin Inference Network node operation is now live for anyone who would like to beta test before we go into production
$POD rewards live for testers
Repurposing idle GPUs to run Qwen 3.5 35B MoE

26
17
154
71.9K
Dolphin
Dolphin@dphnAI·
@0xmons Was referring to the base model made by Mistral linked above. Still a very popular model despite being a few months old
0
0
6
574
Dolphin
Dolphin@dphnAI·
@0xmons This was built on Mistral Small 3.2 vision, the updated version - it still has 870k monthly downloads. We have a bunch of new models in the works that we will be releasing soon huggingface.co/mistralai/Mist…
1
0
8
640
Dolphin
Dolphin@dphnAI·
@lucaxyzz Guide at dphn.ai/docs
RTX 6000 PRO can participate now in running Qwen 3.6 35B
Support for smaller GPUs coming soon
You can watch the network stats live at datagen.dphn.ai
1
1
16
733
lucacadalora (e/aiccelerate.id)
Interesting to see where $POD and @dphnAI go. How can we get involved on the inference/compute side? We have an RTX PRO 6000 and an RTX 5090 that could potentially be routed into your network @dphnAI
1
0
10
1.7K
FoFan
FoFan@FoFan_eth·
$POD is @dphnAI's and @AskVenice's bet on uncensored AI models. I don't know why no one is talking about it.

- The General Info
Dolphin is an AI lab developing uncensored models & distributed inference. They have already built the most powerful uncensored AI model yet - Dolphin Mistral 24B Venice Edition. It is available as "Venice Uncensored" in the model dropdown and is the new default on the Venice platform. The model leads on censorship refusal rate, with only 2.2% refusals. For comparison: Grok has 26.67%, Claude Sonnet 3.7: 71.11%, GPT-4o-mini: 64.44%.

- @AskVenice & @opentensor & @TargonCompute
To develop the model, Dolphin brought together Venice and Subnet 4 @TargonCompute for compute power; the model was trained on the subnet. Venice acted as co-fine-tuning partner, platform integrator, and release coordinator. Finally, we see a truly unique model developed fully via DeAI. The CryptoAI narrative fits best here.

- Founder and @a16z grant
Dolphin was founded by Eric Hartford (@QuixiAI). He started working on uncensored models in 2023 and launched his first model, "dolphin-llama2-7b", that same year. It was at that very time that he received backing from a16z and @pmarca to develop uncensored LLMs. a16z supported 8 open-source developers, and Eric Hartford was one of them, alongside @NousResearch and other top devs.

- Onchain Research
Total Supply: 500M POD
Real Circulating: ~39.6M POD (7.9%)
Team controls: 444M POD (88.8%)
Market Cap: ~$1.46M | FDV: ~$18.4M
The team controls most of the supply in its multisig and treasury wallets. The real circulating supply is very low and easy to pump. If you look at the @nansen_ai list of holders for this token, you'll see >20 holders who each hold >10k worth of $POD. A dozen whales with >$100K balances. There are 5 wallets with >$1M holding $VVV alongside high-conviction plays like $AERO and $DAI.
These aren't low-cap gamblers; they only move on assets with a clear rationale. I even saw one smart holder who got a 20x on an under-the-radar $DONUT ponzi play in December.

- Liquidity Pool (2.5M -> 800K)
At its peak, the pool had $2.5M in liquidity - deep enough for smart whales and insiders to load up without moving the needle too much. On April 27, volume hit $1M with zero CT mentions - the price only rose 79%, allowing major players to enter unnoticed. Then the shift: on May 2, the team pulled liquidity down to $800K. Now you have a concentrated group of high-balance conviction holders sitting on a token that's primed for a vertical move, with a lightened pool and a solid thesis.

- My view
This token is probably the best bet after $VVV on decentralized AI with a real use case. With @AskVenice backing, it can go to hundreds of millions in FDV. Models developed by Dolphin have over 4m monthly downloads on Hugging Face. Add to that the following planned updates:
- Launch of the Inference API;
- Launch of subscriptions with different tiers of access to the services provided on the network.
100% of revenue will buy and burn $POD. When earnings < node rewards, emissions cover the difference. When earnings > node rewards, the protocol is profitable & excess rewards will be distributed to $POD stakers to grow their relative share of the network.
Real product usage + backing by Venice + a founder who got an a16z grant + big buys with zero mentions on CT + a revenue model focused on token growth + zero hype yet.
More info, proofs and screens are in the comments to the tweet. This is my second very detailed research piece after $ZOE; I'll write up more as interesting projects come along.
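The emission/burn mechanics described in the thread can be sketched in a few lines. This is a hypothetical illustration based only on the tweet's description; the function and field names are mine, not from Dolphin's docs or contracts:

```python
def settle_epoch(earnings: float, node_rewards: float) -> dict:
    """Sketch of the $POD flow the thread describes (hypothetical names/units).

    - 100% of earnings buy and burn POD.
    - If earnings < node rewards, emissions cover the shortfall.
    - If earnings > node rewards, the surplus goes to POD stakers.
    """
    burned = earnings  # all revenue is used to buy & burn POD
    emissions = max(node_rewards - earnings, 0.0)       # inflation covers any shortfall
    staker_surplus = max(earnings - node_rewards, 0.0)  # profit shared with stakers
    return {"burned": burned, "emissions": emissions, "staker_surplus": staker_surplus}
```

The key property is that emissions and staker surplus are mutually exclusive: exactly one of them is nonzero in any epoch where earnings differ from node rewards.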
FoFan@FoFan_eth

While most are chasing $LFI, you're overlooking the @zoe_charms launch. $ZOE by @charmsai.

- What is Charms and Zoe?
Charms is a new platform for AI characters - think CharacterAI or Replika, but tokenized. The AI companion niche is already massive, and Charms is betting on the emotional connection users build with these bots. $ZOE is the latest character launched by the team to bring attention before the main $CHARMS TGE.

- The Team
Led by @0xjuan__, @gon0x_, and @0xnaxo. I haven't dug up their full history yet, but gon is followed by @jessepollak - that's a signal worth noting.

- $CODY and @codygame_com Story
Looking at the past, we find $CODY as the first character on Charms, which the Charms team developed and shilled itself. CodyGame won $25k as the most hyped game on Farcaster in October. The token pumped to a $4M market cap because of the game hype. 500k users joined the game, with more than 80k weekly active users. And the joined users were absolutely real, thanks to World ID usage in the game. Charms had its moment of attention and hinted at a $CHARMS TGE, but something went wrong on 10.10 - the market crashed hard. That's why Charms postponed their token launch and lost the hype.

- My Vision and Speculation around $ZOE and the upcoming $CHARMS token
Now that the market has recovered, the Charms team is active again and has launched $ZOE and @zoe_charms. The idea of the launch is the same as the $CODY launch - to bring attention to the @charmsai platform and the upcoming $CHARMS launch. No one understands what Zoe was created for or what its utility is, but without any utility the token already showed 660k in volume and an 800k ATH on its first day, and it currently trades at a 350k FDV. $ZOE pump -> attention to @charmsai -> $CHARMS TGE. The formula is simple, and that's my bet.

15
3
55
43.5K
Dolphin
Dolphin@dphnAI·
@0x_bzzz @0xJeff We verify node operators are running the expected model by comparing logprobs + tokenised output against validator runs, & a bunch of other techniques
1
0
3
164
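The check described in the reply above (comparing logprobs and tokenised output against validator runs) might look roughly like this. This is a speculative sketch, not Dolphin's actual implementation; all names and the tolerance value are hypothetical:

```python
def verify_node(node_tokens, node_logprobs, ref_tokens, ref_logprobs, tol=0.05):
    """Speculative sketch: a validator re-runs the same prompt on the expected
    model and compares the node's tokenised output and per-token logprobs
    against its own reference run. Bit-exact equality is not expected across
    different GPUs/kernels, so logprobs are compared within a tolerance."""
    if node_tokens != ref_tokens:
        return False  # different completion => wrong model or tampered output
    return all(abs(a - b) <= tol for a, b in zip(node_logprobs, ref_logprobs))
```

A node serving a different (e.g. smaller or quantised-beyond-spec) model would fail either the token comparison or drift outside the logprob tolerance.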
Bumblebee 🐝 joinhive.ai
@0xJeff It's not even verified inference, so what is the point? Get this nonsense off my feed, bro. Trust some random stranger on the internet
1
0
0
295
0xJeff
0xJeff@0xJeff·
Dolphin AI is on a generational run right now

People are finally catching up to the inference thesis

(Congrats to all the Substack subscribers who got in)
0xJeff@0xJeff

DeAI is pushing the boundary of Inference 2.0

- If you have a spare Mac lying around, you can plug it into the private inference network, earning revenue from AI inference demand

> @gajesh from EigenCloud built this decentralized inference network 3 days ago, and it already has 170+ nodes capable of serving millions of tokens at 50% cheaper than OpenRouter

- If you have a gaming GPU - an RTX 4090/5090/6000 - lying around, you can plug it into the @dphnAI inference network and earn POD token incentives

> Dolphin (uncensored model provider for @AskVenice) just launched the beta test for their inference network last night - the GPUs will be used for model training/distillation (for now)

These implementations are exciting because, just as yields on idle capital drastically improve capital efficiency, inference on idle consumer GPUs enables yields on idle hardware - additional revenue for operators, plus a cheaper, more efficient alternative for developers seeking inference.

Exciting times ahead for DeAI

10
1
77
44.7K
Diego
Diego@diegoxyz·
@0xJeff what's their value prop?
1
0
1
501
Dolphin retweeted
Venice
Venice@AskVenice·
Venice Uncensored 1.2 is now live. Developed with @dphnAI, this model delivers the most uncensored version of Mistral 24B. Upgraded with vision support, a 4x larger context window, and stronger tool-use capabilities. Trained on Bittensor Subnet 4 @TargonCompute.
47
85
516
275.3K
Dolphin
Dolphin@dphnAI·
@coolfun6 DM your wallet address. Our anti-cheat automatically banned & slashed a few providers that tried to modify the worker
0
0
0
352
Decentralize Or Die
Decentralize Or Die@DecentrlizOrDie·
$POD (@dphnAI) Timeline

They've been quietly building for 2.5 years - first as one of the most popular open-source AI model makers, now as a decentralized inference network
3
1
20
9.9K
Dolphin
Dolphin@dphnAI·
@coolfun6 Thursday this week - the epoch is every 7 days
1
0
0
394
coolfun
coolfun@coolfun6·
@dphnAI when can we withdraw $POD?
1
0
0
529
Dolphin
Dolphin@dphnAI·
Guide on how to run a node in our docs dphn.ai/docs/running-a…
60GB VRAM required to run in FP8 with full context
We recommend 1x RTX 6000 PRO or H100 / H200 / B200 on @TargonCompute
Smaller models for idle consumer GPUs coming soon
~~ Watch the 35B datagen live datagen.dphn.ai
2
0
14
4.2K
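The ~60GB VRAM figure cited for running the 35B model in FP8 is roughly what a back-of-envelope estimate gives: one byte per parameter for the weights, plus KV cache for the full context window, plus some runtime overhead. A rough illustrative calculation follows; the KV-cache and overhead numbers are my assumptions, not figures from Dolphin's docs:

```python
def fp8_vram_gb(params_billion: float, kv_cache_gb: float, overhead_gb: float = 2.0) -> float:
    """Back-of-envelope VRAM estimate for FP8 serving (assumed formula).

    FP8 stores one byte per parameter, so weights take ~1 GB per billion
    parameters; the KV cache grows with context length and batch size.
    """
    weights_gb = params_billion * 1.0  # 1 byte/param at FP8
    return weights_gb + kv_cache_gb + overhead_gb

# A 35B model with an assumed ~20 GB of KV cache for a long context lands
# near 57 GB - in the same ballpark as the ~60 GB the post cites.
```

This also explains the hardware recommendations: a 96GB RTX 6000 PRO or an 80GB+ H100/H200/B200 clears the requirement, while 24-32GB consumer cards (4090/5090) need the smaller models the team says are coming.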