Sentiment-arq-Muhammad
Task Identifier: sentiment-2022-muhammad-arq
Cluster: Sentiment Analysis
Data Type: arq (Algerian Arabic)
Score Metric: Weighted F1
Paper/GitHub/Website URL:


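For reference, the weighted F1 used as the score metric averages per-class F1 scores weighted by each class's support. A minimal sketch of how such a score can be computed, assuming scikit-learn is available; the labels below are made-up placeholders, since the evaluation data is not shown on this page:

from sklearn.metrics import f1_score

# Hypothetical gold and predicted labels for a three-way sentiment task
# (positive / negative / neutral); the real test set is not reproduced here.
y_true = ["positive", "negative", "neutral", "negative", "positive", "neutral"]
y_pred = ["positive", "negative", "negative", "negative", "positive", "positive"]

# average="weighted" computes per-class F1 and averages it, weighting each
# class by its support, which matches the metric reported on this leaderboard.
score = f1_score(y_true, y_pred, average="weighted")
print(f"Weighted F1: {score * 100:.4f}")

Leaderboard scores below are reported on this 0-100 scale.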
Rank | Submission Title               | Model                             | Score (Weighted F1)
1    | Finetuned LMs                  | InfoDCL                           | 71.2518
2    | Finetuned LMs                  | XLM-Twitter                       | 70.9273
3    | Finetuned LMs                  | Bernice                           | 70.2063
4    | Finetuned LMs                  | XLM-RoBERTa-Large                 | 69.1729
5    | Zero-shot                      | ChatGPT with translated prompts   | 67.5892
6    | Finetuned LMs                  | XLM-RoBERTa-Base                  | 65.0878
7    | Zero-shot                      | ChatGPT                           | 63.8888
8    | Finetuned LMs                  | TwHIN-BERT                        | 63.1467
9    | Finetuned LMs                  | mBERT                             | 57.1563
10   | Zero-shot                      | BLOOMZ-P3-7B                      | 52.0161
11   | Zero-shot                      | BLOOMZ-7B                         | 49.7895
12   | Five-shot in-context learning  | LLaMA-7B                          | 45.6852
13   | Three-shot in-context learning | LLaMA-7B                          | 45.0512
14   | Five-shot in-context learning  | BLOOM-7B                          | 43.9481
15   | Three-shot in-context learning | BLOOMZ-P3-7B                      | 41.3628
16   | Three-shot in-context learning | BLOOM-7B                          | 40.2874
17   | Zero-shot                      | Alpaca-7B                         | 39.7977
18   | Five-shot in-context learning  | Vicuna-7B                         | 38.7331
19   | Five-shot in-context learning  | BLOOMZ-P3-7B                      | 38.678
20   | Zero-shot                      | Bactrian-BLOOM                    | 37.8342
21   | Three-shot in-context learning | mT5-XL                            | 36.4863
22   | Zero-shot                      | BLOOM-7B                          | 35.2645
23   | Five-shot in-context learning  | mT5-XL                            | 35.0892
24   | Zero-shot                      | LLaMA-7B                          | 34.9061
25   | Baseline                       | Random                            | 34.235
26   | Zero-shot                      | mT5-XL                            | 34.226
27   | Baseline                       | Majority                          | 32.8704
28   | Three-shot in-context learning | Vicuna-7B                         | 25.7833
29   | Three-shot in-context learning | mT0-XL                            | 25.7813
30   | Five-shot in-context learning  | mT0-XL                            | 18.649
31   | Zero-shot                      | mT0-XL                            | 18.0502
32   | Zero-shot                      | Bactrian-LLaMA-7B                 | 16.9896
33   | Zero-shot                      | Vicuna-7B                         | 5.32639