samsum_42

This model is a fine-tuned version of google/t5-v1_1-small on the samsum dataset. It achieves the following results on the evaluation set:

  • Loss: 2.1273
  • Rouge1: 38.3382
  • Rouge2: 16.6335
  • Rougel: 32.2368
  • Rougelsum: 35.5795
  • Gen Len: 28.0831
  • Test Rougel: 32.2368
  • Df Rougel: 32.0389
  • Unlearn Overall Rougel: 0.5990
  • Unlearn Time: 451.8144
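The ROUGE-L scores above are F-measures based on the longest common subsequence (LCS) between a generated summary and its reference. A minimal token-level sketch of the metric (single reference, no stemming; published scores are typically computed with the `rouge_score` package, which adds stemming and bootstrap aggregation):

```python
def lcs_len(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(candidate, reference):
    # F-score from LCS-based precision and recall over whitespace tokens.
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

print(rouge_l("the cat sat on the mat", "the cat lay on the mat"))
```

Note that the reported values are scaled by 100 (e.g. 32.2368 corresponds to an F-score of about 0.32).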

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
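The hyperparameters above can be expressed as a `Seq2SeqTrainingArguments` configuration. This is a hypothetical reconstruction (the training script is not published with this card; the `output_dir` name and `predict_with_generate` flag are assumptions):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the listed hyperparameters; Adam betas/epsilon match the
# optimizer defaults, so they are not set explicitly.
args = Seq2SeqTrainingArguments(
    output_dir="unlearn_samsum_t5-small_neggrad_8_42",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    predict_with_generate=True,  # assumed, needed for ROUGE during eval
)
```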

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Overall Rougel | Unlearn Overall Rougel | Time |
|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 37 | 2.1273 | 38.3382 | 16.6335 | 32.0389 | 35.5795 | 28.0831 | 0.5990 | 0.5990 | -1 |
| No log | 2.0 | 74 | 2.2477 | 25.0933 | 10.6489 | 22.2502 | 23.2995 | 80.2531 | 0.0806 | 0.0806 | -1 |
| No log | 3.0 | 111 | 2.5026 | 15.8546 | 6.4769 | 13.8377 | 14.8525 | 118.0599 | 0.4611 | 0.4611 | -1 |
| No log | 4.0 | 148 | 2.7368 | 14.313 | 5.5582 | 12.6974 | 13.3824 | 123.0685 | 0.4098 | 0.4098 | -1 |
| No log | 5.0 | 185 | 2.8326 | 14.1141 | 5.3956 | 12.4327 | 13.1625 | 123.8973 | 0.4173 | 0.4173 | -1 |
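The "neggrad" in the repository name conventionally denotes negative-gradient unlearning: taking gradient *ascent* steps on the forget set so the model's loss there rises. The exact recipe used for this T5 checkpoint is not documented in the card, so the following is only an illustrative sketch on a one-parameter least-squares model:

```python
def grad(w, x, y):
    # d/dw of the squared-error loss 0.5 * (w*x - y)^2
    return (w * x - y) * x

def loss(w, data):
    return sum(0.5 * (w * x - y) ** 2 for x, y in data)

def neggrad_unlearn(w, forget, lr=0.1, steps=10):
    # Negative-gradient unlearning: '+=' performs gradient ASCENT,
    # pushing the parameter away from fitting the forget examples.
    for _ in range(steps):
        for x, y in forget:
            w += lr * grad(w, x, y)
    return w

forget = [(1.0, 2.0)]   # toy "forget set"; its optimum is w = 2
w0 = 1.9                # nearly fits the forget example
w1 = neggrad_unlearn(w0, forget)
# after unlearning, the forget-set loss is higher than before
```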

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.0
  • Tokenizers 0.15.2

Model tree for jialicheng/unlearn_samsum_t5-small_neggrad_8_42

Finetuned from google/t5-v1_1-small