Sarcasm-eng-Oraby
Task Identifier: sarcasm-2016-oraby-eng
Cluster: Irony & Sarcasm
Data Type: eng
Score Metric: Macro-F1
Paper/GitHub/Website URL:
Rank  Submission Title               Model                            Score (Macro-F1)
----  -----------------------------  -------------------------------  ----------------
   1  Finetuned LMs                  XLM-RoBERTa-Large                77.0769
   2  Finetuned LMs                  XLM-RoBERTa-Base                 75.8721
   3  Finetuned LMs                  InfoDCL                          75.6374
   4  Finetuned LMs                  XLM-Twitter                      74.8853
   5  Finetuned LMs                  Bernice                          74.6933
   6  Finetuned LMs                  TwHIN-BERT                       74.4968
   7  Zero-shot                      ChatGPT with translated prompts  73.7823
   8  Zero-shot                      ChatGPT                          73.7823
   9  Finetuned LMs                  mBERT                            72.7302
  10  Five-shot in-context learning  Vicuna-7B                        55.84
  11  Three-shot in-context learning mT0-XL                           54.7993
  12  Zero-shot                      Bactrian-LLaMA-7B                54.7709
  13  Three-shot in-context learning Vicuna-7B                        53.0918
  14  Three-shot in-context learning BLOOM-7B                         52.5679
  15  Five-shot in-context learning  LLaMA-7B                         52.1404
  16  Three-shot in-context learning LLaMA-7B                         52.0932
  17  Five-shot in-context learning  BLOOM-7B                         50.4565
  18  Baseline                       Random                           48.9982
  19  Five-shot in-context learning  mT0-XL                           47.8531
  20  Zero-shot                      Alpaca-7B                        47.0763
  21  Three-shot in-context learning mT5-XL                           44.3531
  22  Zero-shot                      Bactrian-BLOOM                   42.1081
  23  Zero-shot                      mT5-XL                           41.8274
  24  Zero-shot                      BLOOM-7B                         41.484
  25  Five-shot in-context learning  mT5-XL                           38.036
  26  Zero-shot                      Vicuna-7B                        36.0801
  27  Three-shot in-context learning BLOOMZ-P3-7B                     35.8638
  28  Five-shot in-context learning  BLOOMZ-P3-7B                     34.551
  29  Zero-shot                      LLaMA-7B                         33.7051
  30  Baseline                       Majority                         33.666
  31  Zero-shot                      BLOOMZ-7B                        32.8859
  32  Zero-shot                      mT0-XL                           32.8859
  33  Zero-shot                      BLOOMZ-P3-7B                     32.7957