Sarcasm-ces-Ptacek
Task Identifier: sarcasm-2014-ptacek-ces
Cluster: Irony & Sarcasm
Data Type: ces (Czech)
Score Metric: Macro-F1
Paper/GitHub/Website URL:


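The scores below are macro-averaged F1 values (reported on a 0–100 scale): F1 is computed per class and the per-class scores are averaged with equal weight, so the minority (sarcastic) class counts as much as the majority class. As a reference, here is a minimal pure-Python sketch of the metric; the benchmark itself presumably uses a library implementation such as scikit-learn's `f1_score(..., average="macro")`.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then average with equal class weight."""
    labels = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for label in labels:
        # Per-class counts of true positives, false positives, false negatives.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)


# Toy binary example (1 = sarcastic, 0 = not sarcastic):
# class 1 gets F1 = 0.5, class 0 gets F1 = 2/3, so macro-F1 ≈ 0.5833.
score = macro_f1([1, 1, 0, 0, 0], [1, 0, 0, 0, 1])
```

Multiply by 100 to match the leaderboard scale. Note that macro averaging explains why a degenerate predictor can still score well below 50 here even on imbalanced data: the Majority baseline reaches only 49.0256, since the never-predicted class contributes an F1 of 0 to the average.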
| Rank | Submission Title | Model | Score |
|------|------------------|-------|-------|
| 1 | Finetuned LMs | XLM-RoBERTa-Large | 70.4269 |
| 2 | Finetuned LMs | InfoDCL | 67.8944 |
| 3 | Finetuned LMs | mBERT | 66.1225 |
| 4 | Finetuned LMs | Bernice | 65.9749 |
| 5 | Finetuned LMs | TwHIN-BERT | 64.9002 |
| 6 | Finetuned LMs | XLM-Twitter | 64.0092 |
| 7 | Finetuned LMs | XLM-RoBERTa-Base | 60.1656 |
| 8 | Zero-shot | Vicuna-7B | 55.0008 |
| 9 | Zero-shot | ChatGPT with translated prompts | 52.4798 |
| 10 | Zero-shot | Bactrian-LLaMA-7B | 52.0595 |
| 11 | Zero-shot | ChatGPT | 51.1953 |
| 12 | Three-shot in-context learning | Vicuna-7B | 50.6944 |
| 13 | Five-shot in-context learning | Vicuna-7B | 50.1913 |
| 14 | Zero-shot | Bactrian-BLOOM | 49.7517 |
| 15 | Zero-shot | mT0-XL | 49.0256 |
| 16 | Zero-shot | BLOOMZ-P3-7B | 49.0256 |
| 17 | Zero-shot | BLOOMZ-7B | 49.0256 |
| 18 | Baseline | Majority | 49.0256 |
| 19 | Five-shot in-context learning | BLOOMZ-P3-7B | 48.657 |
| 20 | Three-shot in-context learning | BLOOMZ-P3-7B | 48.3904 |
| 21 | Five-shot in-context learning | mT5-XL | 47.1277 |
| 22 | Zero-shot | BLOOM-7B | 46.9017 |
| 23 | Three-shot in-context learning | LLaMA-7B | 45.8618 |
| 24 | Three-shot in-context learning | mT5-XL | 42.4252 |
| 25 | Five-shot in-context learning | LLaMA-7B | 41.3124 |
| 26 | Three-shot in-context learning | BLOOM-7B | 37.7434 |
| 27 | Three-shot in-context learning | mT0-XL | 36.2502 |
| 28 | Baseline | Random | 35.9155 |
| 29 | Zero-shot | LLaMA-7B | 34.4617 |
| 30 | Five-shot in-context learning | BLOOM-7B | 34.3129 |
| 31 | Zero-shot | mT5-XL | 33.9408 |
| 32 | Five-shot in-context learning | mT0-XL | 23.4419 |
| 33 | Zero-shot | Alpaca-7B | 6.21882 |