DeepSeek V4 Pro starts at $1.74 per million input tokens — roughly half what you'd pay for comparable closed-source models. As reported by Bloomberg, the Chinese AI company released both V4 Pro and V4 Flash models in preview this April, making them fully open-source with a 1 million token context window.
Key Takeaways
- DeepSeek V4 Pro costs $1.74 per million input tokens and $3.48 per million output tokens.
- Both V4 models feature 1 million token context windows and are fully open-sourced.
- V4 Pro has 1.6 trillion total parameters but only activates 49 billion during inference.
- Huawei's Ascend 950 AI chips will provide full hardware support for DeepSeek V4.
- V4 Flash offers budget-friendly AI at $0.14 input and $0.28 output per million tokens.
What makes DeepSeek V4 Pro different?
According to DeepSeek's technical specifications, V4 Pro packs 1.6 trillion total parameters but only activates 49 billion during inference — a mixture-of-experts approach that keeps costs down whilst maintaining performance. This puts it roughly 3-6 months behind state-of-the-art closed-source models in capability, but at a fraction of the cost.
V4 Flash takes efficiency further with 284 billion total parameters and 13 billion active ones. Both models handle the same 1 million token context window, letting them process roughly 750,000 words of text in a single conversation — enough for entire books or massive codebases.
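The 750,000-word figure follows from a common rule of thumb for English text, roughly 0.75 words per token (an approximation; actual ratios vary by tokenizer and content):

```python
# Rough English-text heuristic: ~0.75 words per token. This is an
# assumption for illustration, not a published DeepSeek figure.
WORDS_PER_TOKEN = 0.75

context_window_tokens = 1_000_000
approx_words = int(context_window_tokens * WORDS_PER_TOKEN)
print(f"~{approx_words:,} words fit in a {context_window_tokens:,}-token window")
```

Code-heavy or non-English inputs tokenize less efficiently, so real-world capacity will often be lower.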
The thing is, this isn't just about parameter counts. DeepSeek's pricing strategy targets the enterprise market directly. Where OpenAI charges around $3-5 per million tokens for similar capability, DeepSeek undercuts by 50-70%.
How much does DeepSeek V4 cost?
DeepSeek V4 Pro costs $1.74 per million input tokens and $3.48 per million output tokens. V4 Flash drops to $0.14 input and $0.28 output per million tokens — making it the cheapest option in its performance class.
For context, processing a 50-page business document (roughly 25,000 tokens) would cost about 4 cents with V4 Pro's input pricing. The same task on GPT-4 would run closer to 8-10 cents.
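Those estimates are easy to sanity-check. The per-token rates below are the article's published preview prices; the document size and the assumption of zero output tokens are illustrative:

```python
def cost_usd(input_tokens: int, output_tokens: int,
             in_rate: float, out_rate: float) -> float:
    """Cost in USD, given rates quoted per million tokens."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Published preview rates (USD per million input/output tokens).
V4_PRO = (1.74, 3.48)
V4_FLASH = (0.14, 0.28)

# A ~50-page business document is roughly 25,000 tokens (assumption).
doc_tokens = 25_000

print(f"V4 Pro, input only:   ${cost_usd(doc_tokens, 0, *V4_PRO):.4f}")
print(f"V4 Flash, input only: ${cost_usd(doc_tokens, 0, *V4_FLASH):.4f}")
```

At V4 Pro rates, the input side alone works out to about 4.4 cents; the same document through V4 Flash costs well under a cent, which is where the high-volume use cases discussed below become attractive.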
What DeepSeek hasn't said is whether these preview prices will hold after general availability. The company also hasn't confirmed UAE dirham pricing or local payment methods yet.
What does Huawei support mean for UAE adoption?
Huawei announced that its Ascend Supernode, powered by Ascend 950 AI chips, will fully support DeepSeek V4 models. This matters for UAE businesses already invested in Huawei's infrastructure ecosystem.
The UAE's AI adoption challenges include both cost and technical complexity. Open-source models like V4 Pro could address the first issue — budget constraints remain a common barrier to enterprise AI adoption in the region.
Huawei's hardware support means enterprises won't need to rebuild their AI infrastructure from scratch. The Ascend 950 chips can run DeepSeek models locally, addressing data sovereignty concerns that some UAE companies have with cloud-based AI services.
How does DeepSeek V4 fit UAE deployment scenarios?
DeepSeek positions V4 Pro for enterprises that need GPT-4 level capability at substantially lower pricing. The open-source release removes vendor lock-in.
The catch? You're trading convenience for savings. Unlike plug-and-play solutions from Microsoft or Google, implementing open-source models requires in-house technical expertise. These models demand ML engineering skills that many companies will need to source through systems integrators or new hires.
V4 Flash targets a different use case — high-volume, lower-stakes applications like content moderation, data extraction, or customer service automation. At 28 cents per million output tokens, it's cheap enough for experimental projects.
Based on what we've seen from previous DeepSeek releases, expect solid performance but occasional quirks. Open-source models often require more prompt engineering than their commercial counterparts.
DeepSeek V4 availability and pricing
Both DeepSeek V4 Pro and Flash are available in preview mode through DeepSeek's API platform. UAE businesses can access the models immediately, though payment processing may require USD transactions initially.
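For teams scoping integration work, a request sketch may help. DeepSeek's existing API follows the OpenAI chat-completions schema; the `deepseek-v4-pro` model name below is an assumption, since the preview's exact model identifiers haven't been confirmed:

```python
import json

# Hypothetical model name; check DeepSeek's API docs for the preview's
# actual identifier before use.
payload = {
    "model": "deepseek-v4-pro",
    "messages": [
        {"role": "system", "content": "You are a contract-review assistant."},
        {"role": "user", "content": "Summarise the key obligations in this lease."},
    ],
    "max_tokens": 1024,
}

body = json.dumps(payload)
# POST this body to the chat-completions endpoint with an Authorization
# bearer token; the network call is omitted to keep the sketch self-contained.
print(body)
```

Because the schema is OpenAI-compatible, existing client libraries can typically be pointed at DeepSeek's base URL with minimal changes, which keeps switching costs low.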
Official launch timing hasn't been announced, but DeepSeek typically moves from preview to general availability within 2-3 months. The company hasn't confirmed local UAE partnerships or dirham pricing yet.
For comparison with local alternatives, the UAE's K2 Think model offers similar efficiency but different capabilities, focusing on reasoning over raw parameter count.
Frequently Asked Questions
What is DeepSeek V4 Pro?
DeepSeek V4 Pro is DeepSeek's new flagship open-source AI model with 1.6 trillion total parameters and 49 billion active parameters. Its performance approaches that of state-of-the-art closed-source models like GPT-4, typically trailing them by 3-6 months in capability.
How much does DeepSeek V4 Pro cost?
DeepSeek V4 Pro is priced at $1.74 per million input tokens and $3.48 per million output tokens. This makes it roughly 50-70% cheaper than comparable closed-source alternatives from OpenAI or Anthropic.
What is the context window of DeepSeek V4 models?
Both DeepSeek V4 Pro and V4 Flash feature a 1 million token context window. This allows them to process approximately 750,000 words of text in a single conversation, enough for entire books or large codebases.
Are DeepSeek V4 models open-source?
Yes, both DeepSeek V4 Pro and V4 Flash are fully open-sourced. This means businesses can download, modify, and run the models on their own infrastructure without vendor lock-in or ongoing licensing fees.
Can UAE businesses use DeepSeek V4 models locally?
Yes, especially with Huawei's Ascend Supernode support. UAE companies using Huawei infrastructure can run DeepSeek V4 models locally on Ascend 950 AI chips, addressing data sovereignty and latency concerns while maintaining cost benefits.