---
annotations_creators:
  - expert-annotated
language:
  - nld
license: cc-by-nc-sa-4.0
multilinguality: monolingual
source_datasets:
  - clips/mteb-nl-bbsard
task_categories:
  - text-retrieval
task_ids: []
dataset_info:
  - config_name: corpus
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
      - name: title
        dtype: string
    splits:
      - name: test
        num_bytes: 21656574
        num_examples: 22415
    download_size: 8789748
    dataset_size: 21656574
  - config_name: qrels
    features:
      - name: query-id
        dtype: string
      - name: corpus-id
        dtype: string
      - name: score
        dtype: int64
    splits:
      - name: test
        num_bytes: 24338
        num_examples: 1059
    download_size: 6847
    dataset_size: 24338
  - config_name: queries
    features:
      - name: id
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: test
        num_bytes: 23204
        num_examples: 222
    download_size: 13641
    dataset_size: 23204
configs:
  - config_name: corpus
    data_files:
      - split: test
        path: corpus/test-*
  - config_name: qrels
    data_files:
      - split: test
        path: qrels/test-*
  - config_name: queries
    data_files:
      - split: test
        path: queries/test-*
tags:
  - mteb
  - text
---

bBSARDNLRetrieval

An MTEB dataset (Massive Text Embedding Benchmark)

Building on the Belgian Statutory Article Retrieval Dataset (BSARD) in French, we introduce the bilingual version of this dataset, bBSARD. The dataset contains parallel Belgian statutory articles in both French and Dutch, along with legal questions from BSARD and their Dutch translation.

Task category: t2t
Domains: Legal, Written
Reference: https://aclanthology.org/2025.regnlp-1.3.pdf

Source datasets: clips/mteb-nl-bbsard
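
The corpus, queries, and qrels configurations described in the metadata above can also be loaded directly with the Hugging Face datasets library. A minimal sketch, assuming the data is hosted under clips/mteb-nl-bbsard as listed in source_datasets:

from datasets import load_dataset

# Each configuration ships a single "test" split (see dataset_info above).
corpus = load_dataset("clips/mteb-nl-bbsard", "corpus", split="test")    # columns: id, title, text
queries = load_dataset("clips/mteb-nl-bbsard", "queries", split="test")  # columns: id, text
qrels = load_dataset("clips/mteb-nl-bbsard", "qrels", split="test")      # columns: query-id, corpus-id, score

print(len(corpus), len(queries), len(qrels))  # 22415 articles, 222 queries, 1059 relevance judgements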

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

# Load the retrieval task and wrap it in an evaluator.
task = mteb.get_task("bBSARDNLRetrieval")
evaluator = mteb.MTEB([task])

# Replace YOUR_MODEL with a model name from the Hub,
# e.g. "intfloat/multilingual-e5-small".
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)

To learn more about how to run models on MTEB tasks, check out the GitHub repository.

Citation

If you use this dataset, please cite the dataset as well as MTEB-NL, as this dataset includes additional processing.


@article{lotfi2025bilingual,
  author  = {Lotfi, Ehsan and Banar, Nikolay and Yuzbashyan, Nerses and Daelemans, Walter},
  journal = {COLING 2025},
  pages   = {10},
  title   = {Bilingual BSARD: Extending Statutory Article Retrieval to Dutch},
  year    = {2025},
}

@misc{banar2025mtebnle5nlembeddingbenchmark,
  author        = {Nikolay Banar and Ehsan Lotfi and Jens Van Nooten and Cristina Arhiliuc and Marija Kliocaite and Walter Daelemans},
  title         = {MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch},
  year          = {2025},
  eprint        = {2509.12340},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2509.12340},
}

Dataset Statistics

The following are the descriptive statistics for this task. They can also be obtained using:

import mteb

task = mteb.get_task("bBSARDNLRetrieval")

desc_stats = task.metadata.descriptive_stats
{
    "test": {
        "num_samples": 22637,
        "number_of_characters": 21218611,
        "documents_text_statistics": {
            "total_text_length": 21197901,
            "min_text_length": 7,
            "average_text_length": 945.7015837608744,
            "max_text_length": 37834,
            "unique_texts": 22415
        },
        "documents_image_statistics": null,
        "queries_text_statistics": {
            "total_text_length": 20710,
            "min_text_length": 22,
            "average_text_length": 93.28828828828829,
            "max_text_length": 250,
            "unique_texts": 222
        },
        "queries_image_statistics": null,
        "relevant_docs_statistics": {
            "num_relevant_docs": 1059,
            "min_relevant_docs_per_query": 1,
            "average_relevant_docs_per_query": 4.77027027027027,
            "max_relevant_docs_per_query": 57,
            "unique_relevant_docs": 491
        },
        "top_ranked_statistics": null
    }
}
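
For instance, the relevant-document statistics above follow directly from the qrels and queries configurations. A minimal sketch with pandas, again assuming the clips/mteb-nl-bbsard repository used earlier:

import pandas as pd
from datasets import load_dataset

qrels = load_dataset("clips/mteb-nl-bbsard", "qrels", split="test").to_pandas()

# Number of relevance judgements and relevant articles per query.
per_query = qrels.groupby("query-id").size()
print(len(qrels))                        # 1059 relevance judgements
print(len(per_query), per_query.mean())  # 222 queries, ~4.77 relevant articles per query
print(qrels["corpus-id"].nunique())      # 491 unique relevant articles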