NVIDIA: Nemotron Nano 12B 2 VL (free)
NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, pairing transformer-level accuracy with Mamba's memory-efficient sequence modeling for significantly higher throughput and lower latency. The model accepts text and multi-image document inputs and produces natural-language text outputs. It is trained on high-quality NVIDIA-curated synthetic datasets optimized for optical character recognition, chart reasoning, and multimodal comprehension. Nemotron Nano 2 VL achieves leading results on OCRBench v2 and an average score of roughly 74 across MMMU, MathVista, AI2D, OCRBench, OCR-Reasoning, ChartQA, DocVQA, and Video-MME, surpassing prior open vision-language baselines. With Efficient Video Sampling (EVS), it handles long-form videos while reducing inference cost. Open weights, training data, and fine-tuning recipes are released under a permissive NVIDIA open license, with deployment supported across NeMo, NIM, and major inference runtimes.
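Since the model takes text plus images in and returns text, a request to it typically follows the OpenAI-style chat-completions message format used by most inference runtimes. The sketch below only builds such a payload; the model slug and the exact content-part schema accepted by a given deployment are assumptions, not confirmed by this page.

```python
# Minimal sketch of a text+image -> text chat request payload for a
# vision-language model behind an OpenAI-compatible endpoint.
# The model slug below is hypothetical.
import json

MODEL = "nvidia/nemotron-nano-12b-v2-vl:free"  # assumed slug, check your provider

def build_request(prompt: str, image_url: str) -> dict:
    """Build a chat-completion payload pairing a text prompt with one image."""
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_request("Summarize this chart.", "https://example.com/chart.png")
print(json.dumps(payload, indent=2))
```

Multi-image document inputs work the same way: append additional `image_url` parts to the same `content` list.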
Common Use Cases
Multimodal Tasks
- Image analysis and description
- Visual question answering
- Document understanding
- Content moderation
General Applications
- Chatbots and virtual assistants
- Educational content creation
- Research and analysis
- Automation and workflow
Frequently Asked Questions
What is the context length of this model?
This model has a context length of 128K tokens, which means it can process and remember up to 128K tokens of text in a single conversation or request.
How much does it cost to use this model?
This model is free to use with no cost per token.
What modalities does this model support?
This model supports the text+image->text modality, accepting text and images as input and producing text as output.
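When the image lives on disk rather than at a public URL, one common pattern is to inline it as a base64 data URL in the `image_url` field. Whether a particular runtime accepts data URLs (versus requiring hosted URLs) is an assumption; the helper below is just a sketch of the encoding step.

```python
# Sketch: encode raw image bytes as a base64 data URL suitable for the
# image part of a multimodal chat request (runtime support for data URLs
# varies; this only shows the encoding).
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Wrap raw image bytes in a data: URL with base64 payload."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"

# Tiny fake PNG header used purely for illustration, not a real image.
url = to_data_url(b"\x89PNG\r\n\x1a\n")
print(url)
```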
When was this model created?
This model was created on October 28, 2025.