Sarcasm-ara-Farha
Task Identifier: sarcasm-2021-farha-ara
Cluster: Irony & Sarcasm
Data Type: ara (Arabic)
Score Metric: Macro-F1 (see the scoring sketch below)
Paper/GitHub/Website URL:


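Scores in the leaderboard are macro-averaged F1 on a 0-100 scale (higher is better). As a rough illustration of how a submission's predictions could be scored against gold labels, here is a minimal sketch using scikit-learn; the file names, CSV layout, and binary 0/1 label set are assumptions for illustration, not the benchmark's official evaluation code.

```python
# Minimal macro-F1 scoring sketch.
# Assumptions: binary 0/1 sarcasm labels and a two-column CSV "id,label"
# for both the gold and prediction files; the real benchmark's file format
# and evaluation script may differ.
import csv
from sklearn.metrics import f1_score

def read_labels(path):
    """Read {id: label} from a CSV file with header 'id,label'."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["id"]: int(row["label"]) for row in csv.DictReader(f)}

def macro_f1(gold_path, pred_path):
    gold = read_labels(gold_path)
    pred = read_labels(pred_path)
    ids = sorted(gold)                      # score every id in the gold file
    y_true = [gold[i] for i in ids]
    y_pred = [pred[i] for i in ids]
    # Macro-F1 averages per-class F1 with equal weight, so performance on the
    # minority (sarcastic) class counts as much as on the majority class.
    return 100.0 * f1_score(y_true, y_pred, average="macro")

if __name__ == "__main__":
    print(f"Macro-F1: {macro_f1('gold.csv', 'predictions.csv'):.4f}")
```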
| Rank | Submission Title | Model | Score (Macro-F1) |
|------|------------------|-------|------------------|
| 1 | Finetuned LMs | XLM-RoBERTa-Large | 70.8533 |
| 2 | Finetuned LMs | TwHIN-BERT | 70.6109 |
| 3 | Finetuned LMs | InfoDCL | 68.8114 |
| 4 | Finetuned LMs | Bernice | 68.7975 |
| 5 | Finetuned LMs | XLM-RoBERTa-Base | 68.4483 |
| 6 | Zero-shot | ChatGPT with translated prompts | 68.4314 |
| 7 | Finetuned LMs | XLM-Twitter | 67.2958 |
| 8 | Finetuned LMs | mBERT | 66.8688 |
| 9 | Zero-shot | ChatGPT | 66.2387 |
| 10 | Zero-shot | Bactrian-BLOOM | 53.3091 |
| 11 | Three-shot in-context learning | Vicuna-7B | 51.8096 |
| 12 | Five-shot in-context learning | Vicuna-7B | 50.7341 |
| 13 | Five-shot in-context learning | LLaMA-7B | 50.5481 |
| 14 | Zero-shot | Vicuna-7B | 50.371 |
| 15 | Three-shot in-context learning | mT5-XL | 50.0499 |
| 16 | Three-shot in-context learning | LLaMA-7B | 49.8387 |
| 17 | Zero-shot | Bactrian-LLaMA-7B | 49.7891 |
| 18 | Five-shot in-context learning | mT5-XL | 49.2082 |
| 19 | Baseline | Random | 46.2202 |
| 20 | Three-shot in-context learning | BLOOM-7B | 46.1537 |
| 21 | Five-shot in-context learning | BLOOM-7B | 45.7855 |
| 22 | Three-shot in-context learning | mT0-XL | 45.2304 |
| 23 | Five-shot in-context learning | BLOOMZ-P3-7B | 43.3665 |
| 24 | Zero-shot | LLaMA-7B | 42.6547 |
| 25 | Baseline | Majority | 42.0738 |
| 26 | Zero-shot | mT0-XL | 41.9954 |
| 27 | Zero-shot | BLOOMZ-7B | 41.9954 |
| 28 | Zero-shot | BLOOMZ-P3-7B | 41.928 |
| 29 | Three-shot in-context learning | BLOOMZ-P3-7B | 41.7249 |
| 30 | Zero-shot | BLOOM-7B | 41.6667 |
| 31 | Five-shot in-context learning | mT0-XL | 35.1852 |
| 32 | Zero-shot | mT5-XL | 30.3392 |
| 33 | Zero-shot | Alpaca-7B | 23.4659 |