<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Ollama models</title>
  <id>https://ollama.com/search?o=newest</id>
  <author>
    <name>Model Library</name>
  </author>
  <link href="https://ollama.com/search?o=newest" rel="self"/>
  <updated>2026-03-15T06:42:22.877281+00:00</updated>
  <entry>
    <title>nemotron-3-super</title>
    <id>https://ollama.com/library/nemotron-3-super</id>
    <link href="https://ollama.com/library/nemotron-3-super"/>
    <summary>NVIDIA Nemotron 3 Super is a 120B open MoE model activating just 12B parameters to deliver maximum compute efficiency and accuracy for complex multi-agent applications.</summary>
    <updated>2026-03-11T16:00:00+00:00</updated>
    <category term="120b"/>
    <category term="tools"/>
    <category term="thinking"/>
    <content type="html">&lt;p&gt;NVIDIA Nemotron 3 Super is a 120B open MoE model activating just 12B parameters to deliver maximum compute efficiency and accuracy for complex multi-agent applications.&lt;/p&gt;&lt;p&gt;Pulls: 29.5K&lt;/p&gt;&lt;p&gt;Tags: 7&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>qwen3.5</title>
    <id>https://ollama.com/library/qwen3.5</id>
    <link href="https://ollama.com/library/qwen3.5"/>
    <summary>Qwen 3.5 is a family of open-source multimodal models that delivers exceptional utility and performance.</summary>
    <updated>2026-03-02T21:55:00+00:00</updated>
    <category term="0.8b"/>
    <category term="2b"/>
    <category term="4b"/>
    <category term="9b"/>
    <category term="27b"/>
    <category term="35b"/>
    <category term="122b"/>
    <category term="vision"/>
    <category term="tools"/>
    <category term="thinking"/>
    <content type="html">&lt;p&gt;Qwen 3.5 is a family of open-source multimodal models that delivers exceptional utility and performance.&lt;/p&gt;&lt;p&gt;Pulls: 1.8M&lt;/p&gt;&lt;p&gt;Tags: 30&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>lfm2</title>
    <id>https://ollama.com/library/lfm2</id>
    <link href="https://ollama.com/library/lfm2"/>
    <summary>LFM2 is a family of hybrid models designed for on-device deployment. LFM2-24B-A2B is the largest model in the family, scaling the architecture to 24 billion parameters while keeping inference efficient.</summary>
    <updated>2026-02-24T01:17:00+00:00</updated>
    <category term="24b"/>
    <category term="tools"/>
    <content type="html">&lt;p&gt;LFM2 is a family of hybrid models designed for on-device deployment. LFM2-24B-A2B is the largest model in the family, scaling the architecture to 24 billion parameters while keeping inference efficient.&lt;/p&gt;&lt;p&gt;Pulls: 949.7K&lt;/p&gt;&lt;p&gt;Tags: 6&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>minimax-m2.5</title>
    <id>https://ollama.com/library/minimax-m2.5</id>
    <link href="https://ollama.com/library/minimax-m2.5"/>
    <summary>MiniMax-M2.5 is a state-of-the-art large language model designed for real-world productivity and coding tasks.</summary>
    <updated>2026-02-12T09:22:00+00:00</updated>
    <content type="html">&lt;p&gt;MiniMax-M2.5 is a state-of-the-art large language model designed for real-world productivity and coding tasks.&lt;/p&gt;&lt;p&gt;Pulls: 127.8K&lt;/p&gt;&lt;p&gt;Tags: 1&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>glm-5</title>
    <id>https://ollama.com/library/glm-5</id>
    <link href="https://ollama.com/library/glm-5"/>
    <summary>A strong reasoning and agentic model from Z.ai with 744B total parameters (40B active), built for complex systems engineering and long-horizon tasks.</summary>
    <updated>2026-02-11T18:43:00+00:00</updated>
    <content type="html">&lt;p&gt;A strong reasoning and agentic model from Z.ai with 744B total parameters (40B active), built for complex systems engineering and long-horizon tasks.&lt;/p&gt;&lt;p&gt;Pulls: 109.7K&lt;/p&gt;&lt;p&gt;Tags: 1&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>qwen3-coder-next</title>
    <id>https://ollama.com/library/qwen3-coder-next</id>
    <link href="https://ollama.com/library/qwen3-coder-next"/>
    <summary>Qwen3-Coder-Next is a coding-focused language model from Alibaba's Qwen team, optimized for agentic coding workflows and local development.</summary>
    <updated>2026-02-06T05:23:00+00:00</updated>
    <category term="tools"/>
    <content type="html">&lt;p&gt;Qwen3-Coder-Next is a coding-focused language model from Alibaba's Qwen team, optimized for agentic coding workflows and local development.&lt;/p&gt;&lt;p&gt;Pulls: 790.2K&lt;/p&gt;&lt;p&gt;Tags: 4&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>glm-ocr</title>
    <id>https://ollama.com/library/glm-ocr</id>
    <link href="https://ollama.com/library/glm-ocr"/>
    <summary>GLM-OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture.</summary>
    <updated>2026-02-02T23:29:00+00:00</updated>
    <category term="vision"/>
    <category term="tools"/>
    <content type="html">&lt;p&gt;GLM-OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture.&lt;/p&gt;&lt;p&gt;Pulls: 125.4K&lt;/p&gt;&lt;p&gt;Tags: 3&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>kimi-k2.5</title>
    <id>https://ollama.com/library/kimi-k2.5</id>
    <link href="https://ollama.com/library/kimi-k2.5"/>
    <summary>Kimi K2.5 is an open-source, native multimodal agentic model that seamlessly integrates vision and language understanding with advanced agentic capabilities, offering instant and thinking modes as well as conversational and agentic paradigms.</summary>
    <updated>2026-01-27T07:29:00+00:00</updated>
    <content type="html">&lt;p&gt;Kimi K2.5 is an open-source, native multimodal agentic model that seamlessly integrates vision and language understanding with advanced agentic capabilities, instant and thinking modes, as well as conversational and agentic paradigms.&lt;/p&gt;&lt;p&gt;Pulls: 151.8K&lt;/p&gt;&lt;p&gt;Tags: 1&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>glm-4.7-flash</title>
    <id>https://ollama.com/library/glm-4.7-flash</id>
    <link href="https://ollama.com/library/glm-4.7-flash"/>
    <summary>As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.</summary>
    <updated>2026-01-24T23:40:00+00:00</updated>
    <category term="tools"/>
    <category term="thinking"/>
    <content type="html">&lt;p&gt;As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.&lt;/p&gt;&lt;p&gt;Pulls: 524.5K&lt;/p&gt;&lt;p&gt;Tags: 4&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>lfm2.5-thinking</title>
    <id>https://ollama.com/library/lfm2.5-thinking</id>
    <link href="https://ollama.com/library/lfm2.5-thinking"/>
    <summary>LFM2.5 is a new family of hybrid models designed for on-device deployment.</summary>
    <updated>2026-01-20T12:41:00+00:00</updated>
    <category term="1.2b"/>
    <category term="tools"/>
    <content type="html">&lt;p&gt;LFM2.5 is a new family of hybrid models designed for on-device deployment.&lt;/p&gt;&lt;p&gt;Pulls: 971.7K&lt;/p&gt;&lt;p&gt;Tags: 5&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>translategemma</title>
    <id>https://ollama.com/library/translategemma</id>
    <link href="https://ollama.com/library/translategemma"/>
    <summary>A new collection of open translation models built on Gemma 3, helping people communicate across 55 languages.</summary>
    <updated>2026-01-16T20:57:00+00:00</updated>
    <category term="4b"/>
    <category term="12b"/>
    <category term="27b"/>
    <category term="vision"/>
    <content type="html">&lt;p&gt;A new collection of open translation models built on Gemma 3, helping people communicate across 55 languages.&lt;/p&gt;&lt;p&gt;Pulls: 618.9K&lt;/p&gt;&lt;p&gt;Tags: 13&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>glm-4.7</title>
    <id>https://ollama.com/library/glm-4.7</id>
    <link href="https://ollama.com/library/glm-4.7"/>
    <summary>GLM-4.7 advances coding capability.</summary>
    <updated>2025-12-23T17:56:00+00:00</updated>
    <content type="html">&lt;p&gt;Advancing the Coding Capability&lt;/p&gt;&lt;p&gt;Pulls: 65.7K&lt;/p&gt;&lt;p&gt;Tags: 1&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>minimax-m2.1</title>
    <id>https://ollama.com/library/minimax-m2.1</id>
    <link href="https://ollama.com/library/minimax-m2.1"/>
    <summary>MiniMax-M2.1 offers exceptional multilingual capabilities to elevate code engineering.</summary>
    <updated>2025-12-23T03:19:00+00:00</updated>
    <content type="html">&lt;p&gt;Exceptional multilingual capabilities to elevate code engineering&lt;/p&gt;&lt;p&gt;Pulls: 24.5K&lt;/p&gt;&lt;p&gt;Tags: 1&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>gemini-3-flash-preview</title>
    <id>https://ollama.com/library/gemini-3-flash-preview</id>
    <link href="https://ollama.com/library/gemini-3-flash-preview"/>
    <summary>Gemini 3 Flash offers frontier intelligence built for speed at a fraction of the cost.</summary>
    <updated>2025-12-20T20:44:00+00:00</updated>
    <content type="html">&lt;p&gt;Gemini 3 Flash offers frontier intelligence built for speed at a fraction of the cost.&lt;/p&gt;&lt;p&gt;Pulls: 82.4K&lt;/p&gt;&lt;p&gt;Tags: 2&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>functiongemma</title>
    <id>https://ollama.com/library/functiongemma</id>
    <link href="https://ollama.com/library/functiongemma"/>
    <summary>FunctionGemma is a specialized version of Google's Gemma 3 270M model fine-tuned explicitly for function calling.</summary>
    <updated>2025-12-18T07:03:00+00:00</updated>
    <category term="270m"/>
    <category term="tools"/>
    <content type="html">&lt;p&gt;FunctionGemma is a specialized version of Google's Gemma 3 270M model fine-tuned explicitly for function calling.&lt;/p&gt;&lt;p&gt;Pulls: 89.8K&lt;/p&gt;&lt;p&gt;Tags: 4&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>nemotron-3-nano</title>
    <id>https://ollama.com/library/nemotron-3-nano</id>
    <link href="https://ollama.com/library/nemotron-3-nano"/>
    <summary>Nemotron 3 Nano: a new standard for efficient, open, and intelligent agentic models.</summary>
    <updated>2025-12-16T06:27:00+00:00</updated>
    <category term="30b"/>
    <category term="tools"/>
    <category term="thinking"/>
    <content type="html">&lt;p&gt;Nemotron 3 Nano - A new Standard for Efficient, Open, and Intelligent Agentic Models&lt;/p&gt;&lt;p&gt;Pulls: 216.4K&lt;/p&gt;&lt;p&gt;Tags: 6&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>olmo-3.1</title>
    <id>https://ollama.com/library/olmo-3.1</id>
    <link href="https://ollama.com/library/olmo-3.1"/>
    <summary>Olmo is a series of open language models designed to enable the science of language models. These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets.</summary>
    <updated>2025-12-16T05:56:00+00:00</updated>
    <category term="32b"/>
    <category term="tools"/>
    <content type="html">&lt;p&gt;Olmo is a series of Open language models designed to enable the science of language models. These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets.&lt;/p&gt;&lt;p&gt;Pulls: 131.8K&lt;/p&gt;&lt;p&gt;Tags: 10&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>olmo-3</title>
    <id>https://ollama.com/library/olmo-3</id>
    <link href="https://ollama.com/library/olmo-3"/>
    <summary>Olmo is a series of open language models designed to enable the science of language models. These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets.</summary>
    <updated>2025-12-16T05:55:00+00:00</updated>
    <category term="7b"/>
    <category term="32b"/>
    <content type="html">&lt;p&gt;Olmo is a series of Open language models designed to enable the science of language models. These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets.&lt;/p&gt;&lt;p&gt;Pulls: 214K&lt;/p&gt;&lt;p&gt;Tags: 15&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>devstral-small-2</title>
    <id>https://ollama.com/library/devstral-small-2</id>
    <link href="https://ollama.com/library/devstral-small-2"/>
    <summary>A 24B model that excels at using tools to explore codebases, edit multiple files, and power software engineering agents.</summary>
    <updated>2025-12-13T06:47:00+00:00</updated>
    <category term="24b"/>
    <category term="vision"/>
    <category term="tools"/>
    <content type="html">&lt;p&gt;24B model that excels at using tools to explore codebases, editing multiple files and power software engineering agents.&lt;/p&gt;&lt;p&gt;Pulls: 642.8K&lt;/p&gt;&lt;p&gt;Tags: 6&lt;/p&gt;</content>
  </entry>
  <entry>
    <title>nomic-embed-text-v2-moe</title>
    <id>https://ollama.com/library/nomic-embed-text-v2-moe</id>
    <link href="https://ollama.com/library/nomic-embed-text-v2-moe"/>
    <summary>nomic-embed-text-v2-moe is a multilingual MoE text embedding model that excels at multilingual retrieval.</summary>
    <updated>2025-12-10T22:09:00+00:00</updated>
    <category term="embedding"/>
    <content type="html">&lt;p&gt;nomic-embed-text-v2-moe is a multilingual MoE text embedding model that excels at multilingual retrieval.&lt;/p&gt;&lt;p&gt;Pulls: 103K&lt;/p&gt;&lt;p&gt;Tags: 1&lt;/p&gt;</content>
  </entry>
</feed>
