# CrossEncoder based on cross-encoder/stsb-roberta-large
This is a Cross Encoder model finetuned from cross-encoder/stsb-roberta-large using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details

### Model Description

- **Model Type:** Cross Encoder
- **Base model:** [cross-encoder/stsb-roberta-large](https://huggingface.co/cross-encoder/stsb-roberta-large)
- **Number of Output Labels:** 1 label

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
## Usage

### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference:

```python
from sentence_transformers import CrossEncoder

# Download the model from the Hugging Face Hub
model = CrossEncoder("pujithapsx/address-crossencoder-stsb-roberta-large-finetuned")

# Score pairs of addresses; higher scores indicate more similar addresses
pairs = [
    ['C/O Rakesh Tower C Sector 137 Gurgaon', 'C Tower Sec-137 Gurugram'],
    ['Tellapur Hyderabad', 'Telapur Hyderabad'],
    ['Flat 703 Electronic City Bangalore', 'Flat 703 Electronic City Mumbai'],
    ['B-12 Malviya Nagar Delhi', 'B-22 Malviya Nagar Delhi'],
    ['Flat 1203 Lower Parel Mumbai', 'Flat 1203 Lower Parel Chennai'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank a list of candidate addresses against a single query address
ranks = model.rank(
    'C/O Rakesh Tower C Sector 137 Gurgaon',
    [
        'C Tower Sec-137 Gurugram',
        'Telapur Hyderabad',
        'Flat 703 Electronic City Mumbai',
        'B-22 Malviya Nagar Delhi',
        'Flat 1203 Lower Parel Chennai',
    ],
)
```
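`CrossEncoder.rank` returns one dict per candidate, sorted by descending score, with `corpus_id` (the candidate's index in the list you passed in) and `score` keys. A minimal way to inspect the ranking:

```python
# Print each candidate's score and its index in the original list
for entry in ranks:
    print(f"{entry['score']:.4f}\t{entry['corpus_id']}")
```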
## Evaluation

### Metrics

#### Cross Encoder Classification
| Metric             | Value  |
|:-------------------|:-------|
| accuracy           | 0.95   |
| accuracy_threshold | 0.4996 |
| f1                 | 0.9517 |
| f1_threshold       | 0.3665 |
| precision          | 0.9452 |
| recall             | 0.9583 |
| average_precision  | 0.9753 |
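The `*_threshold` rows are the score cutoffs at which the corresponding metric is maximized. For reference, here is a minimal sketch of how such metrics can be computed with the library's `CrossEncoderClassificationEvaluator`; the pairs and labels below are hypothetical placeholders, since the actual evaluation set is not published:

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderClassificationEvaluator

model = CrossEncoder("pujithapsx/address-crossencoder-stsb-roberta-large-finetuned")

# Hypothetical labeled pairs: 1 = same address, 0 = different address
sentence_pairs = [
    ["Tellapur Hyderabad", "Telapur Hyderabad"],
    ["Flat 703 Electronic City Bangalore", "Flat 703 Electronic City Mumbai"],
]
labels = [1, 0]

evaluator = CrossEncoderClassificationEvaluator(
    sentence_pairs=sentence_pairs,
    labels=labels,
    name="address-eval",
)
print(evaluator(model))  # accuracy, f1, precision, recall, average_precision, ...
```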
## Training Details

### Training Dataset

#### Unnamed Dataset

### Evaluation Dataset

#### Unnamed Dataset

### Training Hyperparameters

#### Non-Default Hyperparameters
- `num_train_epochs`: 6
- `learning_rate`: 1.5e-05
- `warmup_steps`: 0.1
- `weight_decay`: 0.01
- `gradient_accumulation_steps`: 4
- `disable_tqdm`: True
- `eval_strategy`: epoch
- `per_device_eval_batch_size`: 16
- `load_best_model_at_end`: True
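For reference, here is a hedged sketch of how these values map onto the Sentence Transformers v4+ cross-encoder training API. The dataset path and the `BinaryCrossEntropyLoss` are assumptions: this model was trained on an unnamed, unpublished dataset, and the card does not state the loss. The logged `warmup_steps: 0.1` reads like a ratio rather than a step count, so the sketch uses `warmup_ratio`.

```python
from datasets import load_dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("cross-encoder/stsb-roberta-large", num_labels=1)

# Hypothetical CSV with sentence1, sentence2, label columns; the actual
# training data is an unnamed, unpublished dataset.
dataset = load_dataset("csv", data_files="address_pairs.csv")["train"]
dataset = dataset.train_test_split(test_size=0.1)

loss = BinaryCrossEntropyLoss(model)  # assumed; the card does not name the loss

args = CrossEncoderTrainingArguments(
    output_dir="address-crossencoder",
    num_train_epochs=6,
    learning_rate=1.5e-5,
    warmup_ratio=0.1,  # the card logs `warmup_steps: 0.1`, which looks like a ratio
    weight_decay=0.01,
    gradient_accumulation_steps=4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    eval_strategy="epoch",
    save_strategy="epoch",  # must match eval_strategy for load_best_model_at_end
    load_best_model_at_end=True,
    disable_tqdm=True,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    loss=loss,
)
trainer.train()
```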
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `per_device_train_batch_size`: 8
- `num_train_epochs`: 6
- `max_steps`: -1
- `learning_rate`: 1.5e-05
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: None
- `warmup_steps`: 0.1
- `optim`: adamw_torch_fused
- `optim_args`: None
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `optim_target_modules`: None
- `gradient_accumulation_steps`: 4
- `average_tokens_across_devices`: True
- `max_grad_norm`: 1.0
- `label_smoothing_factor`: 0.0
- `bf16`: False
- `fp16`: False
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `use_cache`: False
- `neftune_noise_alpha`: None
- `torch_empty_cache_steps`: None
- `auto_find_batch_size`: False
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `include_num_input_tokens_seen`: no
- `log_level`: passive
- `log_level_replica`: warning
- `disable_tqdm`: True
- `project`: huggingface
- `trackio_space_id`: trackio
- `eval_strategy`: epoch
- `per_device_eval_batch_size`: 16
- `prediction_loss_only`: True
- `eval_on_start`: False
- `eval_do_concat_batches`: True
- `eval_use_gather_object`: False
- `eval_accumulation_steps`: None
- `include_for_metrics`: []
- `batch_eval_metrics`: False
- `save_only_model`: False
- `save_on_each_node`: False
- `enable_jit_checkpoint`: False
- `push_to_hub`: False
- `hub_private_repo`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_always_push`: False
- `hub_revision`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `restore_callback_states_from_checkpoint`: False
- `full_determinism`: False
- `seed`: 42
- `data_seed`: None
- `use_cpu`: False
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `dataloader_prefetch_factor`: None
- `remove_unused_columns`: True
- `label_names`: None
- `train_sampling_strategy`: random
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `ddp_backend`: None
- `ddp_timeout`: 1800
- `fsdp`: []
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `deepspeed`: None
- `debug`: []
- `skip_memory_metrics`: True
- `do_predict`: False
- `resume_from_checkpoint`: None
- `warmup_ratio`: None
- `local_rank`: -1
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
### Training Logs
| Epoch   | Step | Training Loss | Validation Loss | validation_average_precision |
|:-------:|:----:|:-------------:|:---------------:|:----------------------------:|
| 0.2837  | 10   | 0.4552        | -               | -                            |
| 0.5674  | 20   | 0.4294        | -               | -                            |
| 0.8511  | 30   | 0.4078        | -               | -                            |
| 1.0     | 36   | -             | 0.2787          | 0.9570                       |
| 1.1135  | 40   | 0.3982        | -               | -                            |
| 1.3972  | 50   | 0.3678        | -               | -                            |
| 1.6809  | 60   | 0.3367        | -               | -                            |
| 1.9645  | 70   | 0.4198        | -               | -                            |
| 2.0     | 72   | -             | 0.2252          | 0.9702                       |
| 2.2270  | 80   | 0.3148        | -               | -                            |
| 2.5106  | 90   | 0.3862        | -               | -                            |
| 2.7943  | 100  | 0.3374        | -               | -                            |
| 3.0     | 108  | -             | 0.1974          | 0.9725                       |
| 3.0567  | 110  | 0.3272        | -               | -                            |
| 3.3404  | 120  | 0.2932        | -               | -                            |
| 3.6241  | 130  | 0.3010        | -               | -                            |
| 3.9078  | 140  | 0.3119        | -               | -                            |
| 4.0     | 144  | -             | 0.1829          | 0.9736                       |
| 4.1702  | 150  | 0.3005        | -               | -                            |
| 4.4539  | 160  | 0.3292        | -               | -                            |
| 4.7376  | 170  | 0.2207        | -               | -                            |
| 5.0     | 180  | 0.2954        | 0.1745          | 0.9750                       |
| 5.2837  | 190  | 0.2853        | -               | -                            |
| 5.5674  | 200  | 0.2969        | -               | -                            |
| **6.0** | **216** | **-**      | **0.1719**      | **0.9753**                   |
- The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.11
- Sentence Transformers: 5.3.0
- Transformers: 5.3.0
- PyTorch: 2.11.0+cpu
- Accelerate: 1.13.0
- Datasets: 4.8.4
- Tokenizers: 0.22.2
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```