# c90dba61992ce13b962d6a315b00a581
This model is a fine-tuned version of openai-community/gpt2-large on the stsb (Semantic Textual Similarity Benchmark) subset of the nyu-mll/glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.4788
- Data Size: 1.0 (fraction of the training set used in the final epoch)
- Epoch Runtime: 46.6523 s
- MSE: 0.4790
- MAE: 0.5393
- R²: 0.7857
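
The card does not include a usage snippet. Below is a minimal inference sketch, assuming the checkpoint was exported from `AutoModelForSequenceClassification` with `num_labels=1` (the standard setup for STS-B regression) and that the hub id matches this card's title:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed hub id, taken from this card's title.
repo_id = "contemmcm/c90dba61992ce13b962d6a315b00a581"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# GPT-2 ships without a pad token; reuse EOS in case padding is needed.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# STS-B scores a sentence pair for semantic similarity on a 0-5 scale.
inputs = tokenizer(
    "A man is playing a guitar.",
    "Someone is playing an instrument.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    similarity = model(**inputs).logits.squeeze().item()
print(f"Predicted similarity: {similarity:.2f}")
```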
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
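
The hyperparameters above map onto `transformers.TrainingArguments` roughly as sketched below. This is a reconstruction, not the original training script: the `output_dir` is hypothetical, the multi-GPU launch is handled externally (e.g. via `torchrun`), and the data-scaling schedule visible in the Data Size column of the results table is not expressible through these arguments alone.

```python
from transformers import TrainingArguments

# Sketch reconstructing the card's hyperparameters; the original
# training script (including the data-size schedule) is not published.
training_args = TrainingArguments(
    output_dir="gpt2-large-stsb",      # hypothetical output path
    learning_rate=5e-05,
    per_device_train_batch_size=8,     # 4 GPUs -> total train batch size 32
    per_device_eval_batch_size=8,      # 4 GPUs -> total eval batch size 32
    seed=42,
    optim="adamw_torch",               # betas=(0.9, 0.999), eps=1e-08 are the defaults
    lr_scheduler_type="constant",
    num_train_epochs=50,
)
```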
Training results
| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime (s) | MSE | MAE | R² |
|---|---|---|---|---|---|---|---|---|
| No log | 0 | 0 | 11.8708 | 0 | 3.8049 | 11.8719 | 3.0847 | -4.3107 |
| No log | 1 | 179 | 7.7273 | 0.0078 | 4.4553 | 7.7277 | 2.2192 | -2.4569 |
| No log | 2 | 358 | 2.3675 | 0.0156 | 5.7047 | 2.3683 | 1.2690 | -0.0594 |
| No log | 3 | 537 | 1.4222 | 0.0312 | 7.4723 | 1.4227 | 0.9663 | 0.3636 |
| No log | 4 | 716 | 0.8176 | 0.0625 | 10.1735 | 0.8178 | 0.7287 | 0.6342 |
| No log | 5 | 895 | 0.7726 | 0.125 | 13.3945 | 0.7727 | 0.7087 | 0.6544 |
| 0.0912 | 6 | 1074 | 0.6098 | 0.25 | 19.2932 | 0.6100 | 0.6195 | 0.7271 |
| 0.5293 | 7 | 1253 | 0.5783 | 0.5 | 28.9126 | 0.5785 | 0.5982 | 0.7412 |
| 0.3919 | 8 | 1432 | 0.5675 | 1.0 | 47.9233 | 0.5677 | 0.5867 | 0.7460 |
| 0.2448 | 9 | 1611 | 0.6561 | 1.0 | 48.6200 | 0.6563 | 0.6303 | 0.7064 |
| 0.1739 | 10 | 1790 | 0.4839 | 1.0 | 46.6943 | 0.4840 | 0.5299 | 0.7835 |
| 0.1481 | 11 | 1969 | 0.5863 | 1.0 | 47.2605 | 0.5864 | 0.6054 | 0.7377 |
| 0.1053 | 12 | 2148 | 0.5150 | 1.0 | 46.5543 | 0.5151 | 0.5651 | 0.7696 |
| 0.0902 | 13 | 2327 | 0.4750 | 1.0 | 47.2309 | 0.4751 | 0.5379 | 0.7875 |
| 0.0768 | 14 | 2506 | 0.5267 | 1.0 | 47.0802 | 0.5269 | 0.5799 | 0.7643 |
| 0.0627 | 15 | 2685 | 0.4757 | 1.0 | 46.9345 | 0.4759 | 0.5366 | 0.7871 |
| 0.0626 | 16 | 2864 | 0.4664 | 1.0 | 47.7827 | 0.4665 | 0.5298 | 0.7913 |
| 0.0561 | 17 | 3043 | 0.4764 | 1.0 | 46.8209 | 0.4764 | 0.5332 | 0.7869 |
| 0.0548 | 18 | 3222 | 0.5269 | 1.0 | 47.2221 | 0.5271 | 0.5729 | 0.7642 |
| 0.0445 | 19 | 3401 | 0.4676 | 1.0 | 47.0163 | 0.4678 | 0.5299 | 0.7908 |
| 0.0394 | 20 | 3580 | 0.4788 | 1.0 | 46.6523 | 0.4790 | 0.5393 | 0.7857 |
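
The MSE, MAE, and R² columns can be reproduced from raw model outputs with standard scikit-learn metrics; the original metric code is not published with this card, so the following is a sketch assuming the usual `Trainer` `compute_metrics` interface:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def compute_metrics(eval_pred):
    """Regression metrics matching the MSE / MAE / R2 columns above."""
    predictions, labels = eval_pred          # predictions: (n, 1) logits
    predictions = np.squeeze(predictions)
    return {
        "mse": mean_squared_error(labels, predictions),
        "mae": mean_absolute_error(labels, predictions),
        "r2": r2_score(labels, predictions),
    }
```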
### Framework versions

- Transformers 4.57.0
- PyTorch 2.8.0+cu128
- Datasets 4.3.0
- Tokenizers 0.22.1