# CIS6930_DAAGR_T5_Emo
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results at the final training epoch:
- Train Loss: 0.3253
- Train Accuracy: 0.9647
- Validation Loss: 0.4468
- Validation Accuracy: 0.9495
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
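The intended task is not yet documented, so the snippet below is only a minimal inference sketch in TensorFlow. The Hub repository ID and the example input are assumptions, and the form of the generated output depends on how the model was actually fine-tuned.

```python
# Minimal loading/inference sketch (assumption: the repository ID and the
# example input below are placeholders, not documented by this card).
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo_id = "your-username/CIS6930_DAAGR_T5_Emo"  # hypothetical Hub repository ID

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo_id)

# T5 is a text-to-text model, so inference is plain sequence generation.
inputs = tokenizer("I just got the best news of my life!", return_tensors="tf")
output_ids = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```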
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
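For reference, the optimizer configuration above corresponds roughly to the following Keras setup. This is a sketch only: it reproduces just the hyperparameters listed in this card, and everything else about how the model was compiled and trained is an assumption.

```python
# Sketch of the Adam configuration listed above (TensorFlow 2.11 / Keras).
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-3,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    jit_compile=True,  # matches 'jit_compile': True in the config above
)
```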
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4976     | 0.9412         | 0.4567          | 0.9459              | 0     |
| 0.4359     | 0.9482         | 0.4462          | 0.9474              | 1     |
| 0.4228     | 0.9502         | 0.4406          | 0.9484              | 2     |
| 0.4131     | 0.9517         | 0.4370          | 0.9488              | 3     |
| 0.4050     | 0.9528         | 0.4349          | 0.9493              | 4     |
| 0.3981     | 0.9539         | 0.4335          | 0.9496              | 5     |
| 0.3914     | 0.9548         | 0.4327          | 0.9498              | 6     |
| 0.3851     | 0.9558         | 0.4328          | 0.9500              | 7     |
| 0.3794     | 0.9565         | 0.4328          | 0.9501              | 8     |
| 0.3738     | 0.9574         | 0.4321          | 0.9502              | 9     |
| 0.3685     | 0.9582         | 0.4328          | 0.9502              | 10    |
| 0.3632     | 0.9589         | 0.4340          | 0.9502              | 11    |
| 0.3582     | 0.9597         | 0.4343          | 0.9501              | 12    |
| 0.3531     | 0.9605         | 0.4363          | 0.9501              | 13    |
| 0.3482     | 0.9612         | 0.4381          | 0.9501              | 14    |
| 0.3436     | 0.9619         | 0.4390          | 0.9500              | 15    |
| 0.3391     | 0.9626         | 0.4396          | 0.9500              | 16    |
| 0.3340     | 0.9633         | 0.4438          | 0.9499              | 17    |
| 0.3297     | 0.9640         | 0.4454          | 0.9498              | 18    |
| 0.3253     | 0.9647         | 0.4468          | 0.9495              | 19    |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.11.0
- Datasets 2.11.0
- Tokenizers 0.13.2