Irony-ita-Cignarella
Task Identifier: irony-2018-cignarella-ita
Cluster: Irony & Sarcasm
Data Type: ita
Score Metric: Macro-F1
Paper/GitHub/Website URL:


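Scores are macro-averaged F1 over the two classes (ironic / not ironic), so the minority class weighs as much as the majority class. As a minimal scoring sketch, assuming one gold and one predicted label per line in plain-text files (the file names and label values below are hypothetical, not part of any official evaluation kit):

```python
# Minimal Macro-F1 scoring sketch (assumption: one binary label per line,
# "1" = ironic, "0" = not ironic; file names are hypothetical).
from sklearn.metrics import f1_score

def load_labels(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

gold = load_labels("gold_labels.txt")
pred = load_labels("predicted_labels.txt")

# Macro-F1: per-class F1 scores averaged with equal weight per class.
macro_f1 = f1_score(gold, pred, average="macro")
print(f"Macro-F1: {macro_f1 * 100:.4f}")
```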
| Rank | Submission Title | Model | Score (Macro-F1) |
|------|------------------|-------|------------------|
| 1 | Finetuned LMs | XLM-RoBERTa-Large | 79.2685 |
| 2 | Finetuned LMs | Bernice | 77.4445 |
| 3 | Finetuned LMs | InfoDCL | 75.9227 |
| 4 | Zero-shot | ChatGPT (translated prompts) | 74.1999 |
| 5 | Finetuned LMs | XLM-Twitter | 73.3818 |
| 6 | Zero-shot | ChatGPT | 73.3222 |
| 7 | Finetuned LMs | XLM-RoBERTa-Base | 72.6571 |
| 8 | Finetuned LMs | TwHIN-BERT | 71.2862 |
| 9 | Finetuned LMs | mBERT | 70.3667 |
| 10 | Zero-shot | Vicuna-7B | 56.2189 |
| 11 | Baseline | Random | 51.7983 |
| 12 | Five-shot in-context learning | BLOOMZ-P3-7B | 48.7669 |
| 13 | Zero-shot | LLaMA-7B | 48.2239 |
| 14 | Zero-shot | BLOOMZ-P3-7B | 48.2056 |
| 15 | Five-shot in-context learning | BLOOM-7B | 47.5942 |
| 16 | Zero-shot | Bactrian-LLaMA-7B | 45.7364 |
| 17 | Three-shot in-context learning | BLOOMZ-P3-7B | 44.9021 |
| 18 | Three-shot in-context learning | mT5-XL | 44.7432 |
| 19 | Three-shot in-context learning | BLOOM-7B | 43.3448 |
| 20 | Five-shot in-context learning | Vicuna-7B | 39.4515 |
| 21 | Five-shot in-context learning | mT5-XL | 39.3402 |
| 22 | Zero-shot | mT5-XL | 38.782 |
| 23 | Three-shot in-context learning | Vicuna-7B | 38.7763 |
| 24 | Five-shot in-context learning | LLaMA-7B | 35.8077 |
| 25 | Zero-shot | BLOOM-7B | 34.6705 |
| 26 | Three-shot in-context learning | LLaMA-7B | 34.583 |
| 27 | Five-shot in-context learning | mT0-XL | 34.5821 |
| 28 | Zero-shot | mT0-XL | 34.2105 |
| 29 | Zero-shot | BLOOMZ-7B | 34.2105 |
| 30 | Zero-shot | Bactrian-BLOOM | 34.2105 |
| 31 | Three-shot in-context learning | mT0-XL | 34.0369 |
| 32 | Baseline | Majority | 33.3843 |
| 33 | Zero-shot | Alpaca-7B | 33.2396 |