Jan v1: 4B open model for web search with 91% SimpleQA, slightly outperforms Perplexity Pro

https://arxiv.org/abs/2508.00360

Jan v1 delivers 91% SimpleQA accuracy, slightly outperforming Perplexity Pro while running fully locally. It's built on Qwen's new Qwen3-4B-Thinking (up to 256k context length), fine-tuned for reasoning and tool use in Jan.

The model runs in llama.cpp and vLLM, and uses serper-mcp (https://github.com/marcopesani/mcp-server-serper) to access the web.
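
As a rough sketch of local usage: both backends expose OpenAI-compatible HTTP endpoints, so a standard OpenAI client can query the model once it is served. The serve commands, model ID (janhq/Jan-v1-4B), GGUF file name, and port below are illustrative assumptions, not taken from the announcement.

    # Minimal sketch: query a locally served Jan v1 over an OpenAI-compatible endpoint.
    # Assumed serve commands (illustrative only):
    #   vllm serve janhq/Jan-v1-4B
    #   llama-server -m Jan-v1-4B-Q4_K_M.gguf --port 8000
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

    resp = client.chat.completions.create(
        model="janhq/Jan-v1-4B",  # placeholder; use the name your server reports
        messages=[{"role": "user", "content": "Which country hosted the 2023 Rugby World Cup?"}],
    )
    print(resp.choices[0].message.content)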

Model links:

Recommended parameters:

    temperature: 0.6
    top_p: 0.95
    top_k: 20
    min_p: 0.0
    max_tokens: 2048
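
These parameters map onto a request like the sketch below. top_k and min_p are not part of the standard OpenAI request schema, so they are passed via extra_body, which vLLM (and llama.cpp's llama-server) accept; the base URL and model ID are again assumptions.

    # Sketch: apply the recommended sampling parameters through a local
    # OpenAI-compatible server. Base URL and model ID are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
    resp = client.chat.completions.create(
        model="janhq/Jan-v1-4B",  # placeholder model ID
        messages=[{"role": "user", "content": "What is the capital of Australia?"}],
        temperature=0.6,
        top_p=0.95,
        max_tokens=2048,
        extra_body={"top_k": 20, "min_p": 0.0},  # non-standard fields go in extra_body
    )
    print(resp.choices[0].message.content)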