yogthos in technology
Jan v1: 4B open model for web search with 91% SimpleQA, slightly outperforms Perplexity Pro
https://arxiv.org/abs/2508.00360

Jan v1 delivers 91% SimpleQA accuracy, slightly outperforming Perplexity Pro while running fully locally. It's built on the new version of Qwen's Qwen3-4B-Thinking (up to 256k context length), fine-tuned for reasoning and tool use in Jan.
The model runs in llama.cpp and vLLM, and uses serper-mcp to access the web: https://github.com/marcopesani/mcp-server-serper
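If you want to poke at it outside the Jan app, here's a rough sketch of loading it through vLLM's offline Python API. The model id comes from the Hugging Face links below; the prompt and the keyword usage are just my own illustration, not anything from the Jan team:

```python
# Rough sketch: run Jan-v1-4B locally via vLLM's offline Python API.
# Model id is from the Hugging Face links below; prompt is illustrative.
from vllm import LLM, SamplingParams

llm = LLM(model="janhq/Jan-v1-4B")

sampling = SamplingParams(
    temperature=0.6,   # matches the recommended parameters below
    top_p=0.95,
    top_k=20,
    min_p=0.0,
    max_tokens=2048,
)

messages = [
    {"role": "user", "content": "Summarize what SimpleQA measures."},
]

# llm.chat() applies the model's chat template before generating.
outputs = llm.chat(messages, sampling_params=sampling)
print(outputs[0].outputs[0].text)
```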
Model links:
Jan-v1-4B: https://huggingface.co/janhq/Jan-v1-4B
Jan-v1-4B-GGUF: https://huggingface.co/janhq/Jan-v1-4B-GGUF
Recommended parameters:
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
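For reference, this is roughly how those settings map onto a request against a locally served instance (both `vllm serve` and llama.cpp's llama-server expose an OpenAI-compatible endpoint). The port, placeholder API key, and passing top_k/min_p through extra_body are my assumptions, since those two aren't part of the standard OpenAI schema:

```python
# Sketch: calling a locally served Jan-v1-4B through an OpenAI-compatible
# endpoint with the recommended parameters. base_url/port are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="local")

resp = client.chat.completions.create(
    model="janhq/Jan-v1-4B",
    messages=[{"role": "user", "content": "Who maintains llama.cpp?"}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=2048,
    # top_k and min_p aren't in the OpenAI schema; vLLM (and llama.cpp's
    # server) accept them as extra request fields.
    extra_body={"top_k": 20, "min_p": 0.0},
)
print(resp.choices[0].message.content)
```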