JustinTong committed on
Commit 1d1ebdf · verified · 1 Parent(s): 6dffa3d

Update README.md to include SGLang instruction

Files changed (1): README.md +236 -125
README.md CHANGED
@@ -1,16 +1,18 @@
1
  ---
2
  library_name: vllm
3
  inference: false
 
 
4
  extra_gated_description: >-
5
  If you want to learn more about how we process your personal data, please read
6
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
7
  tags:
8
  - mistral-common
9
- license: other
10
  ---
11
 
12
- # Devstral 2 123B Instruct 2512
13
- Devstral is an agentic LLM for software engineering tasks. **Devstral 2** excels at using tools to explore codebases, editing multiple files, and powering software engineering agents.
14
  The model achieves remarkable performance on SWE-bench.
15
 
16
  This model is an Instruct model in **FP8**, fine-tuned to follow instructions, making it ideal for chat, agentic, and instruction-based tasks for SWE use cases.
@@ -18,12 +20,18 @@ This model is an Instruct model in **FP8**, fine-tuned to follow instructions, m
18
  For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we invite companies to [reach out to us](https://mistral.ai/contact).
19
 
20
  ## Key Features
21
- The Devstral 2 Instruct model offers the following capabilities:
22
  - **Agentic Coding**: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
23
- - **Improved Performance**: Devstral 2 is a step-up compared to its predecessors.
24
- - **Better Generalization**: Generalises better to diverse prompts and coding environments.
25
  - **Context Window**: A 256k context window.
26
27
  ### Use Cases
28
 
29
  AI code assistants, agentic coding, and software engineering tasks, leveraging advanced AI capabilities for complex tool integration and deep codebase understanding in coding environments.
@@ -94,15 +102,16 @@ If this is your first time running Vibe, it will:
94
 
95
  ### Local Deployment
96
 
97
- The model can also be deployed with the following libraries; if local serving proves subpar, we advise using the Mistral AI API instead:
98
  - [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
99
  - [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
100
 
101
  We're thankful to the llama.cpp team and community, as well as the LM Studio and Ollama teams, who worked hard to make these models available in their frameworks.
102
 
103
  You can now also run Devstral using these (alphabetically ordered) frameworks:
104
- - [`llama.cpp`](https://github.com/ggml-org/llama.cpp): To use community quantizations such as [Unsloth's](https://huggingface.co/unsloth/Devstral-2-123B-Instruct-2512-GGUF) or [Bartowski's](https://huggingface.co/bartowski/mistralai_Devstral-2-123B-Instruct-2512-GGUF), make sure to include the changes from this [PR](https://github.com/ggml-org/llama.cpp/pull/17945).
105
- - [`Ollama`](https://ollama.com/): https://ollama.com/library/devstral-2
 
106
 
107
  If you notice subpar performance with local serving, please submit issues to the relevant framework so that it can be fixed; in the meantime, we advise using the Mistral AI API.
108
 
@@ -131,6 +140,8 @@ docker pull vllm/vllm-openai:nightly
131
  docker run -it vllm/vllm-openai:nightly
132
  ```
133
 
 
 
134
  > [!Warning]
135
  > Make sure that your vllm installation includes [this commit](https://github.com/vllm-project/vllm/commit/5c213d2899f5a2d439c8d771a0abc156a5412a2b).
136
  > If you do not have this commit included, you will get incorrectly parsed tool calls.
@@ -148,7 +159,7 @@ We recommend that you use Devstral in a server/client setting.
148
  1. Spin up a server:
149
 
150
  ```
151
- vllm serve mistralai/Devstral-2-123B-Instruct-2512 --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 8
152
  ```
153
 
154
 
@@ -163,7 +174,7 @@ from huggingface_hub import hf_hub_download
163
  url = "http://<your-server-url>:8000/v1/chat/completions"
164
  headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
165
 
166
- model = "mistralai/Devstral-2-123B-Instruct-2512"
167
 
168
  def load_system_prompt(repo_id: str, filename: str) -> str:
169
  file_path = hf_hub_download(repo_id=repo_id, filename=filename)
@@ -188,7 +199,7 @@ messages = [
188
 
189
  data = {"model": model, "messages": messages, "temperature": 0.15}
190
 
191
- # Devstral 2 supports tool calling. If you want to use tools, follow this:
192
  # tools = [ # Define tools for vLLM
193
  # {
194
  # "type": "function",
@@ -215,10 +226,105 @@ print(response.json()["choices"][0]["message"]["content"])
215
  ```
216
  </details>
217
218
  #### Transformers
219
 
220
  <details>
221
  <summary>Expand</summary>
 
222
 
223
  Make sure to install from main:
224
 
@@ -229,15 +335,16 @@ uv pip install git+https://github.com/huggingface/transformers
229
  And run the following code snippet:
230
 
231
  ```python
 
232
  from transformers import (
233
- MistralForCausalLM,
234
  MistralCommonBackend,
235
  )
236
 
237
- model_id = "mistralai/Devstral-2-123B-Instruct-2512"
238
 
239
  tokenizer = MistralCommonBackend.from_pretrained(model_id)
240
- model = MistralForCausalLM.from_pretrained(model_id, device_map="auto")
241
 
242
  SP = """You are operating as and within Mistral Vibe, a CLI coding-agent built by Mistral AI and powered by default by the Devstral family of models. It wraps Mistral's Devstral models to enable natural language interaction with a local codebase. Use the available tools when helpful.
243
 
@@ -400,7 +507,6 @@ print(decoded_output)
400
  </details>
401
 
402
  ## Tests
403
-
404
  To help test our model via vLLM or test that other frameworks' implementations are correct, here is a set of prompts you can try with the expected outputs.
405
 
406
  1. Call one tool
@@ -587,8 +693,14 @@ messages = [
587
  "content": [
588
  {
589
  "type": "text",
590
- "text": "\n1. 6 + 2 x 3\n2. 3a + 7b - 2a\n19 - (8+2) + 1\n4. 5y (6x - 7).\n\nCompute the results steps by steps for the equations that involve only numbers ? You have to call tools to perform the operations and can do one operation at a time per equation."
591
  },
 
 
 
 
 
 
592
  ]
593
  }
594
  ]
@@ -716,111 +828,111 @@ tools = [
716
 
717
  ```json
718
  [
719
- {
720
- "role": "assistant",
721
- "tool_calls": [
722
- {
723
- "id": "chatcmpl-tool-84aa136ef835f886",
724
- "function": {
725
- "arguments": "{\"a\": \"2\", \"b\": \"3\"}",
726
- "name": "multiply_number"
727
- },
728
- "type": "function"
729
- }
730
- ],
731
- "content": "Let's compute the results step by step for the equations that involve only numbers.\n\n### 1. \\( 6 + 2 \\times 3 \\)\n\nFirst, we perform the multiplication \\( 2 \\times 3 \\):\n"
732
- },
733
- {
734
- "role": "tool",
735
- "tool_call_id": "chatcmpl-tool-84aa136ef835f886",
736
- "name": "multiply_number",
737
- "content": "6.0"
738
- },
739
- {
740
- "role": "assistant",
741
- "tool_calls": [
742
- {
743
- "id": "chatcmpl-tool-bedc049b0a248952",
744
- "function": {
745
- "arguments": "{\"a\": \"6\", \"b\": \"6\"}",
746
- "name": "add_number"
747
- },
748
- "type": "function"
749
- }
750
- ],
751
- "content": "Now, we add the result to 6:\n"
752
- },
753
- {
754
- "role": "tool",
755
- "tool_call_id": "chatcmpl-tool-bedc049b0a248952",
756
- "name": "add_number",
757
- "content": "12.0"
758
- },
759
- {
760
- "role": "assistant",
761
- "tool_calls": [
762
- {
763
- "id": "chatcmpl-tool-ae4558bc0558f4e7",
764
- "function": {
765
- "arguments": "{\"a\": \"8\", \"b\": \"2\"}",
766
- "name": "add_number"
767
- },
768
- "type": "function"
769
- }
770
- ],
771
- "content": "The result of \\( 6 + 2 \\times 3 \\) is **12**.\n\n---\n\n### 3. \\( 19 - (8 + 2) + 1 \\)\n\nFirst, we compute the expression inside the parentheses \\( 8 + 2 \\):\n"
772
- },
773
- {
774
- "role": "tool",
775
- "tool_call_id": "chatcmpl-tool-ae4558bc0558f4e7",
776
- "name": "add_number",
777
- "content": "10.0"
778
- },
779
- {
780
- "role": "assistant",
781
- "tool_calls": [
782
- {
783
- "id": "chatcmpl-tool-b21f003d298e65df",
784
- "function": {
785
- "arguments": "{\"a\": \"19\", \"b\": \"10\"}",
786
- "name": "substract_number"
787
- },
788
- "type": "function"
789
- }
790
- ],
791
- "content": "Next, we subtract the result from 19:\n"
792
- },
793
- {
794
- "role": "tool",
795
- "tool_call_id": "chatcmpl-tool-b21f003d298e65df",
796
- "name": "substract_number",
797
- "content": "9.0"
798
- },
799
- {
800
- "role": "assistant",
801
- "tool_calls": [
802
- {
803
- "id": "chatcmpl-tool-bb7628cded899f04",
804
- "function": {
805
- "arguments": "{\"a\": \"9\", \"b\": \"1\"}",
806
- "name": "add_number"
807
- },
808
- "type": "function"
809
- }
810
- ],
811
- "content": "Finally, we add 1 to the result:\n"
812
- },
813
- {
814
- "role": "tool",
815
- "tool_call_id": "chatcmpl-tool-bb7628cded899f04",
816
- "name": "add_number",
817
- "content": "10.0"
818
- },
819
- {
820
- "role": "assistant",
821
- "tool_calls": [],
822
- "content": "The result of \\( 19 - (8 + 2) + 1 \\) is **10**.\n\n---\n\n### Summary of Results:\n1. \\( 6 + 2 \\times 3 = 12 \\)\n3. \\( 19 - (8 + 2) + 1 = 10 \\)"
823
- }
824
  ]
825
  ```
826
 
@@ -1334,7 +1446,7 @@ tools = [
1334
  {
1335
  "role": "assistant",
1336
  "tool_calls": [],
1337
- "content": "I am Devstral-2-123B-Instruct-2512, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris. I power an AI assistant called Le Chat.\n\nToday's date is 2025-12-09."
1338
  }
1339
  ```
1340
 
@@ -1584,9 +1696,8 @@ print(json.dumps(messages[origin_messages_len:], indent=2))
1584
 
1585
  </details>
1586
 
1587
-
1588
  ## License
1589
 
1590
- This model is licensed under a [Modified MIT License](https://huggingface.co/mistralai/Devstral-2-123B-Instruct-2512/blob/main/LICENSE).
1591
 
1592
  *You must not use this model in a manner that infringes, misappropriates, or otherwise violates any third party’s rights, including intellectual property rights.*
 
1
  ---
2
  library_name: vllm
3
  inference: false
4
+ base_model:
5
+ - mistralai/Mistral-Small-3.1-24B-Base-2503
6
  extra_gated_description: >-
7
  If you want to learn more about how we process your personal data, please read
8
  our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
9
  tags:
10
  - mistral-common
11
+ license: apache-2.0
12
  ---
13
 
14
+ # Devstral Small 2 24B Instruct 2512
15
+ Devstral is an agentic LLM for software engineering tasks. **Devstral Small 2** excels at using tools to explore codebases, editing multiple files, and powering software engineering agents.
16
  The model achieves remarkable performance on SWE-bench.
17
 
18
  This model is an Instruct model in **FP8**, fine-tuned to follow instructions, making it ideal for chat, agentic, and instruction-based tasks for SWE use cases.
 
20
  For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we invite companies to [reach out to us](https://mistral.ai/contact).
21
 
22
  ## Key Features
23
+ The Devstral Small 2 Instruct model offers the following capabilities:
24
  - **Agentic Coding**: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
25
+ - **Lightweight**: With a compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32 GB of RAM, making it a good fit for local deployment and on-device use.
26
+ - **Apache 2.0 License**: Open-source license allowing usage and modification for both commercial and non-commercial purposes.
27
  - **Context Window**: A 256k context window.
28
 
29
+ Updates compared to [`Devstral Small 1.1`](https://huggingface.co/mistralai/Devstral-Small-2507):
30
+ - **Vision Capabilities**: Enables the model to analyze images and provide insights based on visual content, in addition to text.
31
+ - **Improved Performance**: Devstral Small 2 is a step-up compared to its predecessors.
32
+ - **Attention Softmax Temperature**: Devstral Small 2 uses the same architecture as Ministral 3, with RoPE scaling as introduced by Llama 4 and the scalable softmax from [Scalable-Softmax Is Superior for Attention](https://arxiv.org/abs/2501.19399).
33
+ - **Better Generalization**: Generalises better to diverse prompts and coding environments.
34
+
35
  ### Use Cases
36
 
37
  AI code assistants, agentic coding, and software engineering tasks, leveraging advanced AI capabilities for complex tool integration and deep codebase understanding in coding environments.
 
102
 
103
  ### Local Deployment
104
 
105
+ The model can also be deployed with the following libraries:
106
  - [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm-recommended)
107
  - [`transformers`](https://github.com/huggingface/transformers): See [here](#transformers)
108
 
109
  We're thankful to the llama.cpp team and community, as well as the LM Studio and Ollama teams, who worked hard to make these models available in their frameworks.
110
 
111
  You can now also run Devstral using these (alphabetically ordered) frameworks:
112
+ - [`llama.cpp`](https://github.com/ggml-org/llama.cpp): To use community quantizations such as [Unsloth's](https://huggingface.co/unsloth/Devstral-Small-2-24B-Instruct-2512-GGUF) or [Bartowski's](https://huggingface.co/bartowski/mistralai_Devstral-Small-2-24B-Instruct-2512-GGUF), make sure to include the changes from this [PR](https://github.com/ggml-org/llama.cpp/pull/17945).
113
+ - [`LM Studio`](https://lmstudio.ai/): https://lmstudio.ai/models/devstral-2
114
+ - [`Ollama`](https://ollama.com/): https://ollama.com/library/devstral-small-2
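As a rough sketch, assuming a llama.cpp build that already includes the PR mentioned above, a community GGUF can be served through `llama-server`'s OpenAI-compatible endpoint using its Hugging Face shorthand (the context size and port below are illustrative choices, not requirements):

```shell
# Illustrative: serve a community GGUF with llama.cpp's OpenAI-compatible server.
# -hf downloads the model from the Hugging Face repo on first run.
llama-server -hf unsloth/Devstral-Small-2-24B-Instruct-2512-GGUF \
  --ctx-size 32768 \
  --port 8080
```

Once running, the server accepts requests at `http://localhost:8080/v1/chat/completions`.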
115
 
116
  If you notice subpar performance with local serving, please submit issues to the relevant framework so that it can be fixed; in the meantime, we advise using the Mistral AI API.
117
 
 
140
  docker run -it vllm/vllm-openai:nightly
141
  ```
142
 
143
+ Alternatively, you can install `vllm` from the latest `main` by following the instructions [here](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/#python-only-build).
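For reference, the Python-only source build sketched below follows those instructions (reusing precompiled kernels so no CUDA compilation is needed); treat the linked page as authoritative:

```shell
# Sketch of vLLM's documented Python-only build from source
git clone https://github.com/vllm-project/vllm.git
cd vllm
# VLLM_USE_PRECOMPILED reuses prebuilt kernels; only Python sources are installed editable
VLLM_USE_PRECOMPILED=1 uv pip install -e .
```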
144
+
145
  > [!Warning]
146
  > Make sure that your vllm installation includes [this commit](https://github.com/vllm-project/vllm/commit/5c213d2899f5a2d439c8d771a0abc156a5412a2b).
147
  > If you do not have this commit included, you will get incorrectly parsed tool calls.
 
159
  1. Spin up a server:
160
 
161
  ```
162
+ vllm serve mistralai/Devstral-Small-2-24B-Instruct-2512 --tool-call-parser mistral --enable-auto-tool-choice --tensor-parallel-size 2
163
  ```
164
 
165
 
 
174
  url = "http://<your-server-url>:8000/v1/chat/completions"
175
  headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
176
 
177
+ model = "mistralai/Devstral-Small-2-24B-Instruct-2512"
178
 
179
  def load_system_prompt(repo_id: str, filename: str) -> str:
180
  file_path = hf_hub_download(repo_id=repo_id, filename=filename)
 
199
 
200
  data = {"model": model, "messages": messages, "temperature": 0.15}
201
 
202
+ # Devstral Small 2 supports tool calling. If you want to use tools, follow this:
203
  # tools = [ # Define tools for vLLM
204
  # {
205
  # "type": "function",
 
226
  ```
227
  </details>
228
 
229
+ #### SGLang (recommended)
230
+
231
+ <details>
232
+ <summary>Expand</summary>
233
+
234
+ We recommend using this model with [SGLang](https://github.com/sgl-project/sglang)
235
+ to implement production-ready inference pipelines (OpenAI-compatible API server).
236
+
237
+ **_Installation_**
238
+
239
+ Install SGLang from source (track latest `main` locally):
240
+
241
+ ```
242
+ git clone https://github.com/sgl-project/sglang.git
243
+ cd sglang
244
+ uv pip install -e python
245
+ uv pip install transformers==5.0.0rc # required
246
+ uv pip install nvidia-cudnn-cu12==9.16.0.29 # required for VLM
247
+ ```
248
+
249
+ **_Launch server_**
250
+
251
+ We recommend that you use Devstral Small 2 in a server/client setting.
252
+
253
+ 1. Spin up a server:
254
+
255
+ ```
256
+ python -m sglang.launch_server --model-path mistralai/Devstral-Small-2-24B-Instruct-2512 --host 0.0.0.0 --port 30000 --tp 2 --tool-call-parser mistral
257
+ ```
258
+
259
+
260
+ 2. To query the server, you can use a simple Python snippet.
261
+
262
+ ```py
263
+ import requests
264
+ import json
265
+ from huggingface_hub import hf_hub_download
266
+
267
+
268
+ url = "http://<your-server-url>:30000/v1/chat/completions"
269
+ headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
270
+
271
+ model = "mistralai/Devstral-Small-2-24B-Instruct-2512"
272
+
273
+ def load_system_prompt(repo_id: str, filename: str) -> str:
274
+ file_path = hf_hub_download(repo_id=repo_id, filename=filename)
275
+ with open(file_path, "r") as file:
276
+ system_prompt = file.read()
277
+ return system_prompt
278
+
279
+ SYSTEM_PROMPT = load_system_prompt(model, "CHAT_SYSTEM_PROMPT.txt")
280
+
281
+ messages = [
282
+ {"role": "system", "content": SYSTEM_PROMPT},
283
+ {
284
+ "role": "user",
285
+ "content": [
286
+ {
287
+ "type": "text",
288
+ "text": "<your-command>",
289
+ },
290
+ ],
291
+ },
292
+ ]
293
+
294
+ data = {"model": model, "messages": messages, "temperature": 0.15}
295
+
296
+ # Devstral Small 2 supports tool calling. If you want to use tools, follow this:
297
+ # tools = [ # Define tools (OpenAI-compatible)
298
+ # {
299
+ # "type": "function",
300
+ # "function": {
301
+ # "name": "git_clone",
302
+ # "description": "Clone a git repository",
303
+ # "parameters": {
304
+ # "type": "object",
305
+ # "properties": {
306
+ # "url": {
307
+ # "type": "string",
308
+ # "description": "The url of the git repository",
309
+ # },
310
+ # },
311
+ # "required": ["url"],
312
+ # },
313
+ # },
314
+ # }
315
+ # ]
316
+ # data = {"model": model, "messages": messages, "temperature": 0.15, "tools": tools} # Pass tools to payload.
317
+
318
+ response = requests.post(url, headers=headers, data=json.dumps(data))
319
+ print(response.json()["choices"][0]["message"]["content"])
320
+ ```
321
+ </details>
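If you enable the commented-out `tools` payload above, the assistant's reply may carry `tool_calls` instead of plain text content. A small hypothetical helper (names are illustrative) that handles both cases of the OpenAI-compatible response shape:

```python
import json

def extract_reply(choice: dict):
    """Return ('tool_calls', parsed calls) or ('text', content) from an
    OpenAI-compatible chat completion choice."""
    msg = choice["message"]
    calls = msg.get("tool_calls") or []
    if calls:
        # Each call's arguments field is a JSON-encoded string; decode it.
        parsed = [
            (c["function"]["name"], json.loads(c["function"]["arguments"]))
            for c in calls
        ]
        return ("tool_calls", parsed)
    return ("text", msg.get("content", ""))

# Example with a synthetic response shape
choice = {"message": {"tool_calls": [
    {"function": {"name": "git_clone",
                  "arguments": '{"url": "https://example.com/repo.git"}'}}]}}
print(extract_reply(choice))
# → ('tool_calls', [('git_clone', {'url': 'https://example.com/repo.git'})])
```

In a real client you would pass `response.json()["choices"][0]` to this helper.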
322
+
323
  #### Transformers
324
 
325
  <details>
326
  <summary>Expand</summary>
327
+
328
 
329
  Make sure to install from main:
330
 
 
335
  And run the following code snippet:
336
 
337
  ```python
338
+ import torch
339
  from transformers import (
340
+ Mistral3ForConditionalGeneration,
341
  MistralCommonBackend,
342
  )
343
 
344
+ model_id = "mistralai/Devstral-Small-2-24B-Instruct-2512"
345
 
346
  tokenizer = MistralCommonBackend.from_pretrained(model_id)
347
+ model = Mistral3ForConditionalGeneration.from_pretrained(model_id, device_map="auto")
348
 
349
  SP = """You are operating as and within Mistral Vibe, a CLI coding-agent built by Mistral AI and powered by default by the Devstral family of models. It wraps Mistral's Devstral models to enable natural language interaction with a local codebase. Use the available tools when helpful.
350
 
 
507
  </details>
508
 
509
  ## Tests
 
510
  To help test our model via vLLM or test that other frameworks' implementations are correct, here is a set of prompts you can try with the expected outputs.
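Note that tool-call ids and JSON argument formatting will differ from run to run, so comparing your outputs against the transcripts below with exact string matching is brittle. A small hypothetical helper (names are illustrative) that ignores ids and argument key order when comparing tool calls:

```python
import json

def normalize_tool_call(call: dict) -> tuple:
    """Reduce a tool call to (name, parsed arguments) so that volatile ids,
    JSON key order, and whitespace differences are ignored."""
    fn = call["function"]
    return (fn["name"], json.loads(fn["arguments"]))

def same_tool_calls(expected: list, actual: list) -> bool:
    """Compare two lists of tool calls after normalization."""
    return ([normalize_tool_call(c) for c in expected]
            == [normalize_tool_call(c) for c in actual])

expected = [{"id": "chatcmpl-tool-84aa136ef835f886",
             "function": {"name": "multiply_number",
                          "arguments": '{"a": "2", "b": "3"}'},
             "type": "function"}]
actual = [{"id": "some-other-id",
           "function": {"name": "multiply_number",
                        "arguments": '{"b": "3", "a": "2"}'},
           "type": "function"}]
print(same_tool_calls(expected, actual))  # → True
```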
511
 
512
  1. Call one tool
 
693
  "content": [
694
  {
695
  "type": "text",
696
+ "text": "Compute the results steps by steps for the equations that involve only numbers displayed in the image. You have to call tools to perform the operations and can do one operation at a time per equation."
697
  },
698
+ {
699
+ "type": "image_url",
700
+ "image_url": {
701
+ "url": "https://math-coaching.com/img/fiche/46/expressions-mathematiques.jpg"
702
+ }
703
+ }
704
  ]
705
  }
706
  ]
 
828
 
829
  ```json
830
  [
831
+ {
832
+ "role": "assistant",
833
+ "tool_calls": [
834
+ {
835
+ "id": "chatcmpl-tool-ab0c849bd20aaebb",
836
+ "function": {
837
+ "arguments": "{\"a\": \"2\", \"b\": \"3\"}",
838
+ "name": "multiply_number"
839
+ },
840
+ "type": "function"
841
+ }
842
+ ],
843
+ "content": "Let's solve the equations step by step. I will start with the first equation: \\(6 + 2 \\times 3\\).\n\n### Equation 1: \\(6 + 2 \\times 3\\)\n\n1. **Multiplication Step**:\n - Multiply \\(2\\) by \\(3\\).\n - I will use the `multiply_number` tool for this operation."
844
+ },
845
+ {
846
+ "role": "tool",
847
+ "tool_call_id": "chatcmpl-tool-ab0c849bd20aaebb",
848
+ "name": "multiply_number",
849
+ "content": "6.0"
850
+ },
851
+ {
852
+ "role": "assistant",
853
+ "tool_calls": [
854
+ {
855
+ "id": "chatcmpl-tool-8270f0471244e880",
856
+ "function": {
857
+ "arguments": "{\"a\": \"6\", \"b\": \"6\"}",
858
+ "name": "add_number"
859
+ },
860
+ "type": "function"
861
+ }
862
+ ],
863
+ "content": " - The result of \\(2 \\times 3\\) is \\(6\\).\n\n2. **Addition Step**:\n - Add \\(6\\) to the result from the previous step (\\(6\\)).\n - I will use the `add_number` tool for this operation."
864
+ },
865
+ {
866
+ "role": "tool",
867
+ "tool_call_id": "chatcmpl-tool-8270f0471244e880",
868
+ "name": "add_number",
869
+ "content": "12.0"
870
+ },
871
+ {
872
+ "role": "assistant",
873
+ "tool_calls": [
874
+ {
875
+ "id": "chatcmpl-tool-858ecb90f0ec568b",
876
+ "function": {
877
+ "arguments": "{\"a\": \"8\", \"b\": \"2\"}",
878
+ "name": "add_number"
879
+ },
880
+ "type": "function"
881
+ }
882
+ ],
883
+ "content": " - The result of \\(6 + 6\\) is \\(12\\).\n\nSo, the final result for the first equation is \\(12\\).\n\n### Equation 3: \\(19 - (8 + 2) + 1\\)\n\n1. **Parentheses Step**:\n - Add \\(8\\) and \\(2\\) inside the parentheses.\n - I will use the `add_number` tool for this operation."
884
+ },
885
+ {
886
+ "role": "tool",
887
+ "tool_call_id": "chatcmpl-tool-858ecb90f0ec568b",
888
+ "name": "add_number",
889
+ "content": "10.0"
890
+ },
891
+ {
892
+ "role": "assistant",
893
+ "tool_calls": [
894
+ {
895
+ "id": "chatcmpl-tool-971da6e6d0c56f0a",
896
+ "function": {
897
+ "arguments": "{\"a\": \"19\", \"b\": \"10\"}",
898
+ "name": "substract_number"
899
+ },
900
+ "type": "function"
901
+ }
902
+ ],
903
+ "content": " - The result of \\(8 + 2\\) is \\(10\\).\n\n2. **Subtraction Step**:\n - Subtract the result from the previous step (\\(10\\)) from \\(19\\).\n - I will use the `substract_number` tool for this operation."
904
+ },
905
+ {
906
+ "role": "tool",
907
+ "tool_call_id": "chatcmpl-tool-971da6e6d0c56f0a",
908
+ "name": "substract_number",
909
+ "content": "9.0"
910
+ },
911
+ {
912
+ "role": "assistant",
913
+ "tool_calls": [
914
+ {
915
+ "id": "chatcmpl-tool-9a8fecc42530d395",
916
+ "function": {
917
+ "arguments": "{\"a\": \"9\", \"b\": \"1\"}",
918
+ "name": "add_number"
919
+ },
920
+ "type": "function"
921
+ }
922
+ ],
923
+ "content": " - The result of \\(19 - 10\\) is \\(9\\).\n\n3. **Addition Step**:\n - Add \\(1\\) to the result from the previous step (\\(9\\)).\n - I will use the `add_number` tool for this operation."
924
+ },
925
+ {
926
+ "role": "tool",
927
+ "tool_call_id": "chatcmpl-tool-9a8fecc42530d395",
928
+ "name": "add_number",
929
+ "content": "10.0"
930
+ },
931
+ {
932
+ "role": "assistant",
933
+ "tool_calls": [],
934
+ "content": " - The result of \\(9 + 1\\) is \\(10\\).\n\nSo, the final result for the third equation is \\(10\\).\n\n### Summary of Results:\n- **Equation 1**: \\(6 + 2 \\times 3 = 12\\)\n- **Equation 3**: \\(19 - (8 + 2) + 1 = 10\\)"
935
+ }
936
  ]
937
  ```
938
 
 
1446
  {
1447
  "role": "assistant",
1448
  "tool_calls": [],
1449
+ "content": "I am Devstral-Small-2-24B-Instruct-2512, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris. I power an AI assistant called Le Chat.\n\nToday's date is 2025-12-09."
1450
  }
1451
  ```
1452
 
 
1696
 
1697
  </details>
1698
 
 
1699
  ## License
1700
 
1701
+ This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.txt).
1702
 
1703
  *You must not use this model in a manner that infringes, misappropriates, or otherwise violates any third party’s rights, including intellectual property rights.*