Sarcasm-eng-Ptacek
Task Identifier: sarcasm-2014-ptacek-eng
Cluster: Irony & Sarcasm
Data Type: eng
Score Metric: Macro-F1
Paper/GitHub/Website URL:
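All scores below use Macro-F1, i.e. the per-class F1 scores averaged with equal weight per class regardless of class frequency (which is why the Majority baseline scores poorly here). A minimal, dependency-free sketch of the computation — the function name and label encoding are illustrative, not from the benchmark's own evaluation code:

```python
def macro_f1(y_true, y_pred):
    """Macro-F1: compute F1 per class, then average with equal class weight."""
    labels = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in labels:
        # Count true positives, false positives, false negatives for class c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        f1_scores.append(f1)
    # Unweighted mean over classes: rare classes count as much as common ones.
    return sum(f1_scores) / len(f1_scores)
```

For the binary sarcastic/non-sarcastic case this averages exactly two per-class F1 values; the same computation is available in scikit-learn as `f1_score(y_true, y_pred, average="macro")`.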


Rank  Submission                        Model                            Score
 1    Finetuned LMs                     XLM-RoBERTa-Base                 95.7637
 2    Finetuned LMs                     XLM-RoBERTa-Large                95.7629
 3    Finetuned LMs                     InfoDCL                          95.5601
 4    Finetuned LMs                     Bernice                          94.9925
 5    Finetuned LMs                     TwHIN-BERT                       94.7614
 6    Finetuned LMs                     XLM-Twitter                      94.4875
 7    Finetuned LMs                     mBERT                            94.2818
 8    Zero-shot                         ChatGPT with translated prompts  74.2981
 9    Zero-shot                         ChatGPT                          74.2981
10    Zero-shot                         Bactrian-LLaMA-7B                60.4941
11    Five-shot in-context learning     LLaMA-7B                         56.2667
12    Three-shot in-context learning    Vicuna-7B                        55.4193
13    Five-shot in-context learning     Vicuna-7B                        54.2168
14    Three-shot in-context learning    BLOOM-7B                         52.1562
15    Five-shot in-context learning     BLOOM-7B                         50.9291
16    Three-shot in-context learning    mT5-XL                           50.0210
17    Baseline                          Random                           49.8536
18    Three-shot in-context learning    LLaMA-7B                         49.6532
19    Three-shot in-context learning    mT0-XL                           49.3213
20    Zero-shot                         Vicuna-7B                        48.1682
21    Five-shot in-context learning     mT0-XL                           46.5672
22    Zero-shot                         mT5-XL                           45.2549
23    Zero-shot                         BLOOM-7B                         41.7715
24    Five-shot in-context learning     mT5-XL                           41.4212
25    Zero-shot                         LLaMA-7B                         40.4486
26    Baseline                          Majority                         39.7314
27    Zero-shot                         BLOOMZ-P3-7B                     39.4896
28    Five-shot in-context learning     BLOOMZ-P3-7B                     38.7634
29    Three-shot in-context learning    BLOOMZ-P3-7B                     38.6842
30    Zero-shot                         BLOOMZ-7B                        38.4236
31    Zero-shot                         mT0-XL                           38.4236
32    Zero-shot                         Bactrian-BLOOM                   37.0461
33    Zero-shot                         Alpaca-7B                        33.0090
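The zero-, three-, and five-shot entries above differ only in how many labeled demonstrations are prepended to the model's prompt before the test tweet. A minimal sketch of k-shot prompt assembly — the instruction wording, label strings, and function name here are hypothetical, not the benchmark's actual template:

```python
def build_kshot_prompt(demonstrations, query, k=3):
    """Assemble a k-shot in-context-learning prompt for sarcasm detection.

    demonstrations: list of (tweet_text, label) pairs; the first k are used.
    query: the unlabeled tweet the model should classify.
    With k=0 this degenerates to a zero-shot prompt (instruction + query only).
    """
    parts = ["Decide whether each tweet is sarcastic. "
             "Answer 'sarcastic' or 'not sarcastic'."]
    for text, label in demonstrations[:k]:
        parts.append(f"Tweet: {text}\nLabel: {label}")
    # The prompt ends at "Label:" so the model's continuation is the prediction.
    parts.append(f"Tweet: {query}\nLabel:")
    return "\n\n".join(parts)
```

The finetuned LMs at the top of the table skip prompting entirely: they update model weights on the task's training split, which is consistent with the roughly 20-point Macro-F1 gap over the best zero-shot system here.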