Submission Name: Finetuned LM
Model Name: FaBERT
GitHub/Model URL:
Model Description
FaBERT is a Persian BERT-base model pre-trained on the diverse HmBlogs corpus, which spans both informal and formal Persian text. Evaluated across a range of Natural Language Understanding (NLU) tasks, FaBERT consistently delivers notable improvements while maintaining a compact model size, making it a robust choice for processing Persian text.
Submission Details
Number of Submitted Tasks: 2
Task (Irony-fas-Golazizian) Score: 74.8299
SPARROW score: 81.1726
Submission Description: Hyperparameters for finetuning Sentiment-fas-Ashrafi: Learning Rate = 2e-05, Batch Size = 16, Number of Epochs = 1
Hyperparameters for finetuning Irony-fas-Golazizian: Learning Rate = 5e-05, Batch Size = 8, Number of Epochs = 3
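The per-task hyperparameters above can be captured as a small Python config, for example to feed into a fine-tuning script. This is only a sketch: the key names mirror common Hugging Face `TrainingArguments` fields, and the `config_for` helper is illustrative, not part of the submission.

```python
# Fine-tuning hyperparameters for the two submitted tasks, as reported above.
# Key names follow Hugging Face TrainingArguments conventions (an assumption;
# the submission only lists learning rate, batch size, and epoch count).
FINETUNE_CONFIGS = {
    "Sentiment-fas-Ashrafi": {
        "learning_rate": 2e-05,
        "per_device_train_batch_size": 16,
        "num_train_epochs": 1,
    },
    "Irony-fas-Golazizian": {
        "learning_rate": 5e-05,
        "per_device_train_batch_size": 8,
        "num_train_epochs": 3,
    },
}

def config_for(task: str) -> dict:
    """Return the fine-tuning hyperparameters for a submitted task."""
    return FINETUNE_CONFIGS[task]
```

A dict like this could be unpacked directly, e.g. `TrainingArguments(output_dir="out", **config_for("Irony-fas-Golazizian"))`, keeping the two task setups in one place.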