---
model_size: 494034560
required_memory: 1.84
metrics:
- GLUE_MRPC
license: apache-2.0
datasets:
- jtatman/python-code-dataset-500k
- Vezora/Tested-143k-Python-Alpaca
language:
- en
- es
base_model: Qwen/Qwen2-0.5B
library_name: adapter-transformers
tags:
- code
- python
- tiny
- open
- mini
- minitron
- tinytron
---

# Uploaded model

[<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" width="100"/><img src="https://github.githubassets.com/assets/GitHub-Logo-ee398b662d42.png" width="100"/>](https://github.com/Agnuxo1)

- **Developed by:** [Agnuxo](https://github.com/Agnuxo1)
- **License:** apache-2.0
- **Finetuned from model:** Agnuxo/Tinytron-codegemma-2b

This model was fine-tuned using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's [TRL](https://github.com/huggingface/trl) library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## Benchmark Results

This model was fine-tuned and evaluated on the following benchmark:

### GLUE_MRPC

- **Accuracy:** 0.3382
- **F1:** 0.0753
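The two scores above follow the standard GLUE MRPC convention: accuracy is the fraction of correct predictions, and F1 is reported for the positive (paraphrase) class. As a plain illustration of what the two numbers mean, here is a self-contained sketch; the label lists are toy data, not the actual MRPC predictions:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the reference labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    """F1 score (harmonic mean of precision and recall) for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy labels for illustration only.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))        # 0.5
print(round(f1(y_true, y_pred), 4))    # 0.5714
```

In practice the same numbers can be obtained with the `evaluate` library's `glue`/`mrpc` metric.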

- **Model size:** 494,034,560 parameters
- **Required memory:** 1.84 GB
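The memory figure is consistent with the parameter count at 32-bit precision: 494,034,560 parameters × 4 bytes ≈ 1.84 GiB for the weights alone. A quick check, assuming FP32 weights (activations and KV cache need extra memory on top):

```python
# Raw weight storage for this model at different precisions,
# assuming the 1.84 GB figure refers to FP32 weights measured in GiB.
PARAMS = 494_034_560

def weight_memory_gib(params, bytes_per_param):
    """Weight storage only, in GiB; excludes activations and KV cache."""
    return params * bytes_per_param / 2**30

print(round(weight_memory_gib(PARAMS, 4), 2))  # FP32  -> 1.84
print(round(weight_memory_gib(PARAMS, 2), 2))  # FP16/BF16 -> 0.92
```

At half precision the weights fit in under 1 GiB, which is why a model this size is practical on modest hardware.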

For more details, visit my [GitHub](https://github.com/Agnuxo1).

Thanks for your interest in this model!