Qwen: Qwen3 VL 235B A22B Thinking
Qwen3-VL-235B-A22B Thinking is a multimodal model that unifies strong text generation with visual understanding across images and video. The Thinking model is optimized for multimodal reasoning in STEM and math. The series emphasizes robust perception (recognition of diverse real-world and synthetic categories), spatial understanding (2D/3D grounding), and long-form visual comprehension, with competitive results on public multimodal benchmarks for both perception and reasoning. Beyond analysis, Qwen3-VL supports agentic interaction and tool use: it can follow complex instructions over multi-image, multi-turn dialogues; align text to video timelines for precise temporal queries; and operate GUI elements for automation tasks. The models also enable visual coding workflows, turning sketches or mockups into code and assisting with UI debugging, while maintaining strong text-only performance comparable to the flagship Qwen3 language models. This makes Qwen3-VL suitable for production scenarios spanning document AI, multilingual OCR, software/UI assistance, spatial/embodied tasks, and research on vision-language agents.
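For orientation, below is a minimal sketch of how a text-plus-image request to this model might look through an OpenAI-compatible chat completions client. The base URL, model slug, and environment variable are assumptions for illustration, not details taken from this page.

```python
# Minimal sketch: send a text+image prompt to the model via an
# OpenAI-compatible chat completions endpoint.
# The base URL, model slug, and env var below are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",    # assumed endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],   # assumed env var
)

response = client.chat.completions.create(
    model="qwen/qwen3-vl-235b-a22b-thinking",   # assumed model slug
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this chart and extract its key figures."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```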
Common Use Cases
Multimodal Tasks
- Image analysis and description
- Visual question answering
- Document understanding
- Content moderation
Visual Agents and Coding
- GUI operation and automation
- Turning sketches or mockups into code
- UI debugging assistance
- Video understanding with temporal queries
General Applications
- Chatbots and virtual assistants
- Educational content creation
- Research and analysis
- Automation and workflow
Frequently Asked Questions
What is the context length of this model?
This model has a context length of 66K tokens, which means it can process up to 66K tokens in a single conversation or request, including the tokens produced from image input.
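As a rough guide to staying within that window, the sketch below estimates prompt size with a common ~4-characters-per-token rule of thumb. The ratio and the reserved completion budget are assumptions, and image inputs consume additional tokens not modeled here.

```python
# Rough sketch: check whether a text prompt is likely to fit in the
# 66K-token context window. The chars-per-token ratio is a heuristic,
# not the model's actual tokenizer.
CONTEXT_WINDOW = 66_000

def estimated_tokens(text: str) -> int:
    return len(text) // 4  # crude approximation

def fits_in_context(prompt: str, reserved_for_completion: int = 4_000) -> bool:
    return estimated_tokens(prompt) + reserved_for_completion <= CONTEXT_WINDOW

print(fits_in_context("Summarize the attached contract."))  # True
```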
How much does it cost to use this model?
Prompt tokens cost $0.50 per 1M tokens and completion tokens cost $3.50 per 1M tokens.
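As a worked example of that pricing, the snippet below computes the cost of a single request; the token counts used are illustrative, not figures from this page.

```python
# Worked example: cost of one request at $0.50 / 1M prompt tokens and
# $3.50 / 1M completion tokens. Token counts are illustrative.
PROMPT_PRICE_PER_M = 0.50      # USD per 1M prompt tokens
COMPLETION_PRICE_PER_M = 3.50  # USD per 1M completion tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    return (prompt_tokens / 1_000_000) * PROMPT_PRICE_PER_M + \
           (completion_tokens / 1_000_000) * COMPLETION_PRICE_PER_M

print(f"${request_cost(10_000, 2_000):.4f}")  # $0.0120
```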
What modalities does this model support?
This model supports the text+image->text modality, accepting text and images as input and producing text as output.
When was this model created?
This model was created on September 23, 2025.