Sarcasm-eng-Riloff
Task Identifier: sarcasm-2013-riloff-eng
Cluster: Irony & Sarcasm
Data Type: eng
Score Metric: F1-sarcasm (F1 computed on the sarcastic class; see the sketch below)
Paper/GitHub/Website URL:
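
For reference, the leaderboard scores below are F1 on the sarcastic class, reported on a 0-100 scale. The following is a minimal sketch of how such a score could be computed with scikit-learn, assuming binary labels where 1 marks the sarcastic class; the label convention and helper name are illustrative assumptions, not the benchmark's official evaluation script.

```python
# Minimal sketch of an F1-sarcasm computation (not the benchmark's official
# evaluation script). Assumes binary labels where 1 = sarcastic, 0 = not sarcastic.
from sklearn.metrics import f1_score


def f1_sarcasm(y_true, y_pred, sarcastic_label=1):
    """F1 of the sarcastic class, scaled to 0-100 as on this leaderboard."""
    return 100.0 * f1_score(y_true, y_pred, pos_label=sarcastic_label)


# Toy example: 2 true positives, 1 false positive, 1 false negative
# -> precision = 2/3, recall = 2/3, F1 = 66.67
gold = [1, 1, 1, 0, 0, 0]
pred = [1, 1, 0, 1, 0, 0]
print(round(f1_sarcasm(gold, pred), 2))  # 66.67
```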


| Rank | Submission Title | Model | Score (F1-sarcasm) |
|---|---|---|---|
| 1 | Finetuned LMs | InfoDCL | 57.4589 |
| 2 | Finetuned LMs | XLM-RoBERTa-Large | 57.0864 |
| 3 | Finetuned LMs | XLM-Twitter | 56.605 |
| 4 | Finetuned LMs | Bernice | 54.8948 |
| 5 | Finetuned LMs | TwHIN-BERT | 53.5636 |
| 6 | Finetuned LMs | XLM-RoBERTa-Base | 52.4981 |
| 7 | Zero-shot | ChatGPT with translated prompts | 50 |
| 8 | Zero-shot | ChatGPT | 50 |
| 9 | Finetuned LMs | mBERT | 46.7595 |
| 10 | Five-shot in-context learning | BLOOM-7B | 39.4737 |
| 11 | Three-shot in-context learning | mT0-XL | 38.9831 |
| 12 | Zero-shot | Alpaca-7B | 38.9744 |
| 13 | Three-shot in-context learning | BLOOM-7B | 37.6812 |
| 14 | Three-shot in-context learning | Vicuna-7B | 37.037 |
| 15 | Zero-shot | mT5-XL | 36.3636 |
| 16 | Zero-shot | Bactrian-LLaMA-7B | 36.1446 |
| 17 | Five-shot in-context learning | mT0-XL | 33.9181 |
| 18 | Three-shot in-context learning | LLaMA-7B | 32.4324 |
| 19 | Three-shot in-context learning | mT5-XL | 28 |
| 20 | Baseline | Random | 27 |
| 21 | Five-shot in-context learning | Vicuna-7B | 24.6914 |
| 22 | Five-shot in-context learning | LLaMA-7B | 24.2991 |
| 23 | Five-shot in-context learning | mT5-XL | 21.0526 |
| 24 | Zero-shot | Vicuna-7B | 14.8148 |
| 25 | Zero-shot | BLOOM-7B | 10.5263 |
| 26 | Five-shot in-context learning | BLOOMZ-P3-7B | 9.7561 |
| 27 | Three-shot in-context learning | BLOOMZ-P3-7B | 4.7619 |
| 28 | Zero-shot | BLOOMZ-P3-7B | 0 |
| 29 | Zero-shot | BLOOMZ-7B | 0 |
| 30 | Zero-shot | Bactrian-BLOOM | 0 |
| 31 | Baseline | Majority | 0 |
| 32 | Zero-shot | mT0-XL | 0 |
| 33 | Zero-shot | LLaMA-7B | 0 |