Together AI

2.6K posts

Together AI @togethercompute

Accelerate inference, model shaping, and pre-training on a research-optimized platform.

San Francisco, CA · Joined November 2022
396 Following · 54.9K Followers
Pinned Tweet
Together AI @togethercompute
Introducing DeepSeek V4 Pro, a long-context model with hybrid attention, three reasoning modes, and SOTA coding performance. AI natives can now use DeepSeek V4 Pro on Together AI and benefit from reliable inference for long-horizon coding and agentic workflows.
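The launch tweet above pitches DeepSeek V4 Pro as ready to call on Together AI. As a rough sketch of what a chat-completions request to a Together-hosted model could look like (the endpoint path and the model ID string are assumptions for illustration; check Together's API reference and model catalog for the real identifiers):

```python
# Sketch: assembling a chat-completions request for a model hosted on
# Together AI's OpenAI-compatible API. The MODEL_ID below is an assumed,
# illustrative value, not a confirmed catalog entry.
import json
import urllib.request

API_URL = "https://api.together.xyz/v1/chat/completions"  # assumed endpoint
MODEL_ID = "deepseek-ai/DeepSeek-V4-Pro"  # illustrative model ID

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build the HTTP request without sending it."""
    body = json.dumps({
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Summarize this diff.", api_key="YOUR_KEY")
# To actually send: urllib.request.urlopen(req) returns the JSON completion.
```

Separating payload construction from the network call makes the request easy to inspect before spending tokens on a long-context model.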
Together AI @togethercompute
The AI industry is racing toward superintelligence. Dan and Scaled Cognition built something more: Super-Reliable Intelligence. "The smartest model in the world is useless if it can’t get the answer right every single time." Read the full story here: together.ai/customers/scal…
Together AI @togethercompute
Switching to Together AI flipped it:
⚡️ Zero training-blocking failures
⚡️ ~50% cost savings vs. AWS
⚡️ Issues resolved within hours via shared Slack
They could finally focus on building the model.
Together AI @togethercompute
@profdanklein spent 20 years studying how language forms intelligence. When LLMs exploded, he saw something everyone missed. These systems were fluent, confident, and wrong. And no one could tell the difference. 🧵
Together AI retweeted
Vipul Ved Prakash @vipulved
On 5/5, @realDanFu and team will discuss DSV4’s hybrid attention and KV cache efficiency; it should be a great session!
Together AI @togethercompute

Join us Tue 5/5: #DeepSeek-V4's hybrid attention + sparse MoE reduces KV cache up to 90%, enabling 1M-token context. We'll cover why that makes it great for agentic workflows, what it took to serve at scale, and how to build with it. Hear from @realDanFu @JueWANG26088228 @ZainHasan6 and @zhyncs42 togetherai.link/ds-v4-x

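To get a feel for why a "up to 90%" KV-cache reduction matters at 1M-token context, a back-of-the-envelope sizing sketch (the layer, head, and dimension numbers below are made-up illustrative values, not DeepSeek-V4's actual architecture):

```python
# Back-of-envelope KV-cache sizing. All architecture numbers passed in
# below are illustrative placeholders, NOT DeepSeek-V4's real config.
def kv_cache_bytes(seq_len, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    # 2x for keys and values; bf16/fp16 -> 2 bytes per element.
    return 2 * seq_len * n_layers * n_kv_heads * head_dim * bytes_per_elem

full = kv_cache_bytes(seq_len=1_000_000, n_layers=60, n_kv_heads=8, head_dim=128)
reduced = full * 0.10  # the "up to 90%" reduction claimed in the tweet
print(full / 2**30, "GiB ->", reduced / 2**30, "GiB")
```

Even with modest placeholder dimensions, a full-attention cache at 1M tokens runs to hundreds of GiB, which is why cache-shrinking attention designs are what make million-token serving practical.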
Together AI retweeted
Sara Hooker @sarahookr
Most model trainings outside of frontier labs fail. 📈
Because of bad or insufficient data. 🚮
Or just data for what you want + not general capabilities. 🎯
Most builders give up + become elevated prompt engineers.
Today, @adaption_ai + @togethercompute start fixing that.
adaption @adaption_ai

We believe that intelligence should not arrive preconfigured. @togethercompute is now available directly inside the Adaption platform, connecting Adaptive Data with large-scale training in a single workflow. One platform, end to end. Stop inheriting intelligence. Shape it.

Together AI @togethercompute
With this integration, teams can optimize data, launch fine-tuning, evaluate results, and deploy on Together AI inference through a tighter workflow. Learn more: together.ai/blog/announcin…
Together AI @togethercompute
Fine-tuning quality starts before the training run. Adaptive Data helps teams analyze, adapt, and improve datasets; Together Fine-Tuning turns those shaped datasets into specialized open models.
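The shaped-data-to-fine-tune handoff described above can be sketched in code. Everything here is a hypothetical illustration: the JSONL record shape, the job-payload fields, and the file/model IDs are assumptions, not the actual Adaptive Data or Together Fine-Tuning API; consult their docs for the real formats.

```python
# Hypothetical sketch of the dataset -> fine-tuning handoff. Record
# format and job fields are assumed for illustration only.
import json

def to_training_record(prompt: str, completion: str) -> str:
    # One JSONL line in a common chat fine-tuning format (assumed).
    return json.dumps({"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": completion},
    ]})

def build_finetune_job(training_file_id: str, base_model: str) -> dict:
    # Payload for a hypothetical job-creation call.
    return {"training_file": training_file_id,
            "model": base_model,
            "n_epochs": 3}

record = to_training_record("What is a KV cache?", "Stored keys and values...")
job = build_finetune_job("file-abc123", "example-org/example-8b")  # illustrative IDs
```

The point of the integration, per the thread, is that the dataset shaping and the job launch live in one workflow instead of two tools glued together by hand.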
Together AI @togethercompute
We’re excited to partner with @adaption_ai to make Together Fine-Tuning natively available in Adaptive Data. AI natives can now move from optimized training data in Adaptive Data to fine-tuned open models on Together AI.
Together AI retweeted
adaption @adaption_ai
We believe that intelligence should not arrive preconfigured. @togethercompute is now available directly inside the Adaption platform, connecting Adaptive Data with large-scale training in a single workflow. One platform, end to end. Stop inheriting intelligence. Shape it.
Together AI retweeted
Matthew Berman @MatthewBerman
$3/million output tokens. Qwen3.6-Plus is basically a frontier model. Let that sink in.
Together AI @togethercompute

Introducing Qwen3.6-Plus from @Alibaba_Qwen, a 1M-context model built for real-world agents, agentic coding, and multimodal reasoning. AI natives can now use Qwen3.6-Plus on Together AI and benefit from reliable inference for production-scale agent workflows.

Together AI @togethercompute
Highlights:
👉 1M context for long-horizon agentic workflows
👉 Stronger agentic coding across frontend, repo-level, and terminal-based tasks
👉 Multimodal reasoning across text, image, and video inputs
👉 Production-ready on the AI Native Cloud: serverless inference at $0.50 input / $3.00 output per 1M tokens
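At the serverless rates quoted in the highlights ($0.50 input / $3.00 output per 1M tokens), a quick cost estimate for a long-context call is simple arithmetic; the token counts in the example are illustrative:

```python
# Cost estimate at the serverless rates quoted above:
# $0.50 per 1M input tokens, $3.00 per 1M output tokens.
def estimate_cost_usd(input_tokens, output_tokens,
                      input_rate=0.50, output_rate=3.00):
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A full 1M-token context plus a 10K-token response:
print(round(estimate_cost_usd(1_000_000, 10_000), 2))  # 0.53
```

So even maxing out the context window, a single call at these rates stays around half a dollar, which is the economics Berman's quoted reaction is pointing at.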
Together AI @togethercompute
Introducing Qwen3.6-Plus from @Alibaba_Qwen, a 1M-context model built for real-world agents, agentic coding, and multimodal reasoning. AI natives can now use Qwen3.6-Plus on Together AI and benefit from reliable inference for production-scale agent workflows.
Together AI @togethercompute
Last week we announced DeepSeek-V4. Today we’re sharing a closer look at DeepSeek-V4 Pro on Together AI: 512K context, controllable reasoning modes, and cached-input pricing for long-context workloads. Read more: together.ai/blog/deepseek-…