Sarcasm-ara-Abufarha
Task Identifier: sarcasm-2020-abufarha-ara
Cluster: Irony & Sarcasm
Data Type: ara (Arabic)
Score Metric: Macro-F1
Paper/GitHub/Website URL:


| Rank | Submission Title | Model | Score (Macro-F1) |
|------|------------------|-------|------------------|
| 1 | Finetuned LMs | XLM-RoBERTa-Large | 75.575 |
| 2 | Zero-shot | ChatGPT with translated prompts | 75.4719 |
| 3 | Zero-shot | ChatGPT | 74.378 |
| 4 | Finetuned LMs | XLM-Twitter | 73.3587 |
| 5 | Finetuned LMs | InfoDCL | 71.8744 |
| 6 | Finetuned LMs | TwHIN-BERT | 71.6758 |
| 7 | Finetuned LMs | Bernice | 69.5748 |
| 8 | Finetuned LMs | XLM-RoBERTa-Base | 69.5681 |
| 9 | Finetuned LMs | mBERT | 69.0587 |
| 10 | Zero-shot | Bactrian-LLaMA-7B | 60.6962 |
| 11 | Three-shot in-context learning | Vicuna-7B | 55.2114 |
| 12 | Five-shot in-context learning | Vicuna-7B | 54.7127 |
| 13 | Zero-shot | Vicuna-7B | 53.7386 |
| 14 | Zero-shot | LLaMA-7B | 53.385 |
| 15 | Three-shot in-context learning | LLaMA-7B | 50.347 |
| 16 | Five-shot in-context learning | mT5-XL | 48.6116 |
| 17 | Three-shot in-context learning | BLOOM-7B | 47.7595 |
| 18 | Five-shot in-context learning | LLaMA-7B | 47.6873 |
| 19 | Three-shot in-context learning | mT5-XL | 47.0881 |
| 20 | Three-shot in-context learning | mT0-XL | 46.2663 |
| 21 | Baseline | Majority | 45.5484 |
| 22 | Three-shot in-context learning | BLOOMZ-P3-7B | 45.484 |
| 23 | Zero-shot | Bactrian-BLOOM | 45.1477 |
| 24 | Zero-shot | BLOOMZ-7B | 44.5676 |
| 25 | Zero-shot | BLOOMZ-P3-7B | 44.5676 |
| 26 | Zero-shot | mT0-XL | 44.5676 |
| 27 | Five-shot in-context learning | BLOOMZ-P3-7B | 44.5061 |
| 28 | Baseline | Random | 44.2556 |
| 29 | Five-shot in-context learning | BLOOM-7B | 43.8897 |
| 30 | Five-shot in-context learning | mT0-XL | 33.5479 |
| 31 | Zero-shot | BLOOM-7B | 27.158 |
| 32 | Zero-shot | mT5-XL | 20.6622 |
| 33 | Zero-shot | Alpaca-7B | 17.2113 |
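
Scores on this leaderboard are Macro-F1 reported on a 0-100 scale. As a minimal sketch, assuming binary sarcastic/non-sarcastic labels (the label values and example data below are illustrative, not taken from the task files), a score in this format could be computed with scikit-learn:

```python
# Minimal sketch: computing a Macro-F1 score in the leaderboard's 0-100 format.
# Assumes binary labels (1 = sarcastic, 0 = not sarcastic); the example data
# below is hypothetical and only illustrates the call.
from sklearn.metrics import f1_score

gold = [1, 0, 0, 1, 1, 0, 0, 0]  # hypothetical gold labels
pred = [1, 0, 1, 1, 0, 0, 0, 0]  # hypothetical system predictions

# Macro-F1 averages the per-class F1 scores, weighting both classes equally,
# so the minority (sarcastic) class counts as much as the majority class.
macro_f1 = f1_score(gold, pred, average="macro")

# Leaderboard entries report the score scaled to 0-100.
print(round(macro_f1 * 100, 4))
```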