Irony-eng-Hee
Task Identifier: irony-2018-hee-eng
Cluster: Irony & Sarcasm
Data Type: eng (English)
Score Metric: F1-irony
Paper/GitHub/Website URL:
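
The score metric, F1-irony, is the F1 score computed on the ironic (positive) class only, reported on a 0-100 scale; the Majority baseline scoring 0 in the table below is consistent with that reading, since the majority class is non-ironic. Below is a minimal sketch of the metric, assuming a 1 = ironic / 0 = not-ironic label encoding (the encoding is an illustrative assumption):

```python
def f1_irony(y_true, y_pred, positive_label=1):
    """F1 of the ironic (positive) class, reported on a 0-100 scale."""
    tp = sum(t == positive_label and p == positive_label for t, p in zip(y_true, y_pred))
    fp = sum(t != positive_label and p == positive_label for t, p in zip(y_true, y_pred))
    fn = sum(t == positive_label and p != positive_label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return 100.0 * f1

# Always predicting the majority (non-ironic) class yields 0, matching the
# Majority baseline row; a perfect predictor would score 100.
print(f1_irony([1, 0, 0, 1, 0], [0, 0, 0, 0, 0]))  # 0.0
print(f1_irony([1, 0, 0, 1, 0], [1, 0, 1, 1, 0]))  # 80.0
```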


Rank  Submission Title                Model                            Score
1     Finetuned LMs                   InfoDCL                          68.2543
2     Finetuned LMs                   XLM-Twitter                      67.8378
3     Finetuned LMs                   Bernice                          67.4306
4     Finetuned LMs                   TwHIN-BERT                       65.7501
5     Finetuned LMs                   XLM-RoBERTa-Large                65.631
6     Finetuned LMs                   XLM-RoBERTa-Base                 63.4302
7     Finetuned LMs                   mBERT                            59.9867
8     Zero-shot                       ChatGPT with translated prompts  59
9     Zero-shot                       ChatGPT                          59
10    Zero-shot                       Alpaca-7B                        55.0595
11    Zero-shot                       Bactrian-LLaMA-7B                54.389
12    Baseline                        Random                           45
13    Five-shot in-context learning   BLOOM-7B                         41.6667
14    Zero-shot                       Vicuna-7B                        41.1911
15    Three-shot in-context learning  mT5-XL                           40.5195
16    Five-shot in-context learning   BLOOMZ-P3-7B                     37.3259
17    Three-shot in-context learning  BLOOM-7B                         26.4151
18    Zero-shot                       BLOOMZ-P3-7B                     25.4072
19    Five-shot in-context learning   mT5-XL                           23.97
20    Three-shot in-context learning  BLOOMZ-P3-7B                     23.8596
21    Five-shot in-context learning   Vicuna-7B                        21.8341
22    Three-shot in-context learning  Vicuna-7B                        11.9816
23    Five-shot in-context learning   LLaMA-7B                         9.7561
24    Zero-shot                       LLaMA-7B                         5.2863
25    Three-shot in-context learning  LLaMA-7B                         3
26    Three-shot in-context learning  mT0-XL                           1.0363
27    Zero-shot                       mT5-XL                           1.0256
28    Baseline                        Majority                         0
29    Zero-shot                       BLOOM-7B                         0
30    Zero-shot                       BLOOMZ-7B                        0
31    Zero-shot                       Bactrian-BLOOM                   0
32    Zero-shot                       mT0-XL                           0
33    Five-shot in-context learning   mT0-XL                           0