
Microsoft's latest image-generation AI, "MAI-Image-2": excels at photorealistic image generation, and works well even with simple prompts in Japanese. Even compared with Google's Nano Banana Pro, it holds its own 👀 ←MAI-Image-2 Nano Banana Pro→
Microsoft AI

@MicrosoftAI
Building a new class of safer, more capable AI systems we call Humanist Superintelligence: AI that is always aligned, controllable, and in service of humanity.


Our new image generator MAI-Image-2 is out! Available now on MAI Playground for everything from lifelike realism to detailed infographics. Our team has been pushing immensely hard for this release, and we are now among the top models out there: #3 family on @arena. Check out the details in our blog: microsoft.ai/news/introduci… It's shipping soon in Copilot and Bing Image Creator, as well as Microsoft Foundry. Really proud of our progress on models and products - stay tuned for new releases and come join us on our Superintelligence mission!

Our newest AI accelerator Maia 200 is now online in Azure. Designed for industry-leading inference efficiency, it delivers 30% better performance per dollar than current systems. And with 10+ PFLOPS FP4 throughput, ~5 PFLOPS FP8, and 216GB HBM3e with 7TB/s of memory bandwidth it's optimized for large-scale AI workloads. It joins our broader portfolio of CPUs, GPUs, and custom accelerators, giving customers more options to run advanced AI workloads faster and more cost-effectively on Azure.
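The throughput and bandwidth figures quoted above can be turned into a quick roofline-style estimate. The sketch below is my own back-of-the-envelope derivation from the numbers in the post (10 PFLOPS FP4, ~5 PFLOPS FP8, 7 TB/s HBM3e bandwidth), not anything Microsoft has published: it computes the break-even arithmetic intensity, i.e. how many FLOPs a kernel must perform per byte moved before it stops being memory-bound.

```python
# Back-of-the-envelope roofline estimate from the Maia 200 figures quoted
# above. The break-even arithmetic intensity (FLOPs per byte of memory
# traffic) is the point where peak compute and peak bandwidth balance;
# kernels below it are limited by bandwidth, not FLOPS.

def breakeven_intensity(peak_flops: float, mem_bw_bytes_per_s: float) -> float:
    """FLOPs per byte at which compute and memory limits balance."""
    return peak_flops / mem_bw_bytes_per_s

fp4_peak = 10e15   # 10 PFLOPS (FP4), as quoted
fp8_peak = 5e15    # ~5 PFLOPS (FP8), as quoted
hbm_bw = 7e12      # 7 TB/s HBM3e bandwidth, as quoted

print(f"FP4 break-even: {breakeven_intensity(fp4_peak, hbm_bw):.0f} FLOPs/byte")
print(f"FP8 break-even: {breakeven_intensity(fp8_peak, hbm_bw):.0f} FLOPs/byte")
# Low-intensity workloads such as token-by-token LLM decode stream the full
# weight set per token, so the 7 TB/s figure is the binding limit there.
```

The high break-even numbers (on the order of a thousand FLOPs per byte) illustrate why inference accelerators emphasize memory bandwidth: most serving workloads sit far below that intensity.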

Today we announced our new Fairwater datacenter in Atlanta, connected with our first Fairwater site in Wisconsin and our broader Azure footprint to create the world's first AI superfactory. Fairwater exemplifies our vision for a fungible fleet: infra that can serve any workload, anywhere, on fit-for-purpose accelerators and network paths, with maximum performance and efficiency.

AI workloads have evolved beyond large-scale pre-training. Today, they encompass fine-tuning, reinforcement learning (RL), synthetic data generation, evaluation pipelines, and more. Fairwater is built to support this full lifecycle:

Max density: Fairwater's two-story design and liquid cooling system let us place racks in three dimensions and pack them with GPUs as densely as possible, minimizing cable runs and improving latency and effective bandwidth.

Fleet: Each Fairwater DC can integrate hundreds of thousands of the latest NVIDIA GPUs into a single coherent cluster. This provides flexible infra that can support the full spectrum of workloads and ensures no GPU is left unnecessarily idle. And that's on top of the more than 100,000 GB300s coming online this quarter alone for inference across the rest of our fleet. For us, it's all about turning every gigawatt into the maximum number of useful tokens. Not every GW is created equal!

Planet-scale: Every Fairwater DC will connect through our continent-spanning AI WAN to prior generations of AI supercomputers, forming a truly fungible pool of compute. This enables developers to scale beyond the capacity of a single site and dynamically land workloads on the right infra for their needs.
Together, these innovations let us bring different generations of silicon and AI systems across DCs and geos into a single elastic system that scales seamlessly across training and inference workloads. And this elastic AI capacity is available alongside all the other cloud services (compute, storage, databases, app services) that AI agents and workloads need. This is what we mean when we talk about building a fungible fleet: a single, unified platform that pushes the limits of performance per watt and per dollar. Read more: blogs.microsoft.com/blog/2025/11/1…
