Subject-ita-Basile
Task Identifier: subject-2016-basile-ita
Cluster: Subjectivity Analysis
Data Type: ita (Italian)
Score Metric: Macro-F1
Paper/GitHub/Website URL:


| Rank | Submission Title | Model | Score |
|------|------------------|-------|-------|
| 1 | Finetuned LMs | XLM-RoBERTa-Large | 75.9493 |
| 2 | Zero-shot | ChatGPT | 73.5103 |
| 3 | Finetuned LMs | InfoDCL | 72.3612 |
| 4 | Finetuned LMs | XLM-Twitter | 71.6795 |
| 5 | Finetuned LMs | Bernice | 71.6492 |
| 6 | Finetuned LMs | TwHIN-BERT | 71.4277 |
| 7 | Finetuned LMs | XLM-RoBERTa-Base | 71.3414 |
| 8 | Finetuned LMs | mBERT | 68.0972 |
| 9 | Three-shot in-context learning | Vicuna-7B | 61.4881 |
| 10 | Zero-shot | ChatGPT with translated prompts | 60.5326 |
| 11 | Five-shot in-context learning | Vicuna-7B | 57.7947 |
| 12 | Three-shot in-context learning | LLaMA-7B | 55.1325 |
| 13 | Five-shot in-context learning | LLaMA-7B | 55.0741 |
| 14 | Zero-shot | BLOOM-7B | 50.1428 |
| 15 | Baseline | Random | 48.5147 |
| 16 | Zero-shot | Bactrian-BLOOM | 48.1760 |
| 17 | Zero-shot | LLaMA-7B | 47.2432 |
| 18 | Zero-shot | mT0-XL | 47.0432 |
| 19 | Zero-shot | mT5-XL | 43.4267 |
| 20 | Five-shot in-context learning | mT5-XL | 40.7959 |
| 21 | Zero-shot | Bactrian-LLaMA-7B | 40.2991 |
| 22 | Zero-shot | Vicuna-7B | 39.5470 |
| 23 | Baseline | Majority | 39.5081 |
| 24 | Three-shot in-context learning | mT5-XL | 38.3977 |
| 25 | Three-shot in-context learning | BLOOM-7B | 38.2163 |
| 26 | Three-shot in-context learning | mT0-XL | 33.3616 |
| 27 | Five-shot in-context learning | BLOOM-7B | 31.8681 |
| 28 | Five-shot in-context learning | mT0-XL | 28.4031 |
| 29 | Zero-shot | BLOOMZ-P3-7B | 26.1448 |
| 30 | Three-shot in-context learning | BLOOMZ-P3-7B | 26.1448 |
| 31 | Zero-shot | BLOOMZ-7B | 26.1448 |
| 32 | Five-shot in-context learning | BLOOMZ-P3-7B | 26.1448 |
| 33 | Zero-shot | Alpaca-7B | 26.1448 |