How to use EMBEDDIA/est-roberta with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="EMBEDDIA/est-roberta")
```
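As a quick sketch of how the pipeline can then be called: the Estonian prompt below ("The capital of Estonia is &lt;mask&gt;.") and the small `format_candidates` helper are our own illustrative assumptions, not part of the model card. The mask token is taken from the pipeline's tokenizer rather than hard-coded.

```python
from transformers import pipeline

def format_candidates(candidates):
    """Reduce fill-mask pipeline output to (token, rounded score) pairs."""
    return [(c["token_str"], round(c["score"], 3)) for c in candidates]

pipe = pipeline("fill-mask", model="EMBEDDIA/est-roberta")

# Build the prompt around the tokenizer's own mask token rather than hard-coding it;
# the example sentence is an assumption: "Eesti pealinn on <mask>." (the capital of Estonia is <mask>).
masked = f"Eesti pealinn on {pipe.tokenizer.mask_token}."
print(format_candidates(pipe(masked, top_k=3)))
```

Each candidate returned by the fill-mask pipeline is a dict containing, among other fields, the predicted token string (`token_str`) and its probability (`score`).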
Usage

Load the model in the transformers library with:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/est-roberta")
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/est-roberta")
```
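Once loaded, the model can be used for masked-token prediction directly. The sketch below is our own illustration, not part of the model card: the helper names (`pick_top_ids`, `fill_mask`) and the Estonian example sentence ("Tallinn is Estonia's &lt;mask&gt;.") are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

def pick_top_ids(logits_row: torch.Tensor, k: int) -> list:
    """Indices of the k highest-scoring vocabulary entries in one logits row."""
    return logits_row.topk(k).indices.tolist()

def fill_mask(model, tokenizer, text: str, k: int = 5) -> list:
    """Return the k most likely fillers for the tokenizer's mask token in `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    # Locate the first mask-token position in the encoded input.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    return [tokenizer.decode([i]).strip() for i in pick_top_ids(logits, k)]

tokenizer = AutoTokenizer.from_pretrained("EMBEDDIA/est-roberta")
model = AutoModelForMaskedLM.from_pretrained("EMBEDDIA/est-roberta")

# Example sentence is an assumption: "Tallinn on Eesti <mask>." (Tallinn is Estonia's <mask>).
print(fill_mask(model, tokenizer, f"Tallinn on Eesti {tokenizer.mask_token}."))
```

Using `tokenizer.mask_token` instead of a literal mask string keeps the prompt correct regardless of which mask token this tokenizer defines.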
Est-RoBERTa
Est-RoBERTa is a monolingual Estonian BERT-like model, closely related to the French CamemBERT model (https://camembert-model.fr/). The Estonian corpora used for training contain 2.51 billion tokens in total, and the subword vocabulary contains 40,000 tokens. Est-RoBERTa was trained for 40 epochs.