IroSvA: Irony Detection in Spanish Variants


Welcome to IroSvA (Irony Detection in Spanish Variants), the first shared task fully dedicated to identifying the presence of irony in short messages (tweets and news comments) written in Spanish. This task is organised within the Iberian Languages Evaluation Forum (IberLEF 2019), which will be co-located with the SEPLN Conference. The conference will be held in Bilbao, Spain, in September 2019. You can join the official mailing group of the task, where we will share news and important information, or contact us via email at irosva19@gmail.com

Motivation and Description

Irony is a peculiar figurative device frequently used in everyday communication. As human beings, we appeal to irony to express, in an implicit way, a meaning opposite to the literal sense of the utterance [1]. Understanding irony therefore requires a more complex set of cognitive and linguistic abilities than understanding literal meaning. Because of this, irony has important implications for sentiment analysis and related tasks: automatically detecting irony in textual messages is an important step towards improving sentiment analysis performance [3,5,6], and it is still an open research problem. Recently, automatic irony detection has gained importance in the research community, with special attention paid to Social Media content in English. For Spanish, however, the availability of corpora is scarce, which limits the amount of research done for this language.

This year we propose a new task (IroSvA) which aims at investigating whether a short message, written in Spanish, is ironic or not with respect to a given context. In particular, we aim at studying how irony changes across distinct Spanish variants; concretely, we focus on Spanish from Spain, Mexico and Cuba. The task is structured into three subtasks, one per variant, each consisting of predicting whether messages are ironic or not. The main difference with previous tasks on irony detection (SemEval 2018 Task 3 [7] and IronITA 2018 [2]) is that messages are not considered as isolated texts but together with a given context (e.g. a news item or a topic).


This year we encourage the participation of NLP researchers, industrial teams and students in three subtasks:

  1. Subtask A: Irony detection in Spanish tweets from Spain

  2. Subtask B: Irony detection in Spanish tweets from Mexico

  3. Subtask C: Irony detection in Spanish news comments from Cuba

The three subtasks share the same goal: participants should determine whether a message is ironic or not with respect to a specified context (by assigning a binary value, 1 or 0). The main differences between them are the textual genre (tweets for subtasks A and B, short news comments for subtask C) and the Spanish variant.
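As an illustration only (not part of the task, and not an endorsed approach), the binary decision above can be tackled with any standard text classifier. A minimal bag-of-words multinomial Naive Bayes sketch in pure Python, assuming whitespace tokenization and add-one smoothing:

```python
import math
from collections import Counter, defaultdict


class NaiveBayes:
    """Toy multinomial Naive Bayes over whitespace tokens (add-one smoothing)."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)                # label -> # messages
        self.token_counts = defaultdict(Counter)           # label -> token -> count
        self.vocab = set()
        for text, label in zip(texts, labels):
            for tok in text.lower().split():
                self.token_counts[label][tok] += 1
                self.vocab.add(tok)
        return self

    def predict(self, text):
        total = sum(self.class_counts.values())
        best_label, best_lp = None, -math.inf
        for label, n in self.class_counts.items():
            lp = math.log(n / total)                       # log prior
            denom = sum(self.token_counts[label].values()) + len(self.vocab)
            for tok in text.lower().split():
                # add-one smoothed log likelihood of each token
                lp += math.log((self.token_counts[label][tok] + 1) / denom)
            if lp > best_lp:
                best_label, best_lp = label, lp
        return best_label
```

A real system would of course use richer features and, per the task design, condition on the given context as well as the message.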

Ironic vs. Non-Ironic

The following statements show examples of an ironic and a non-ironic news comment written in the Cuban variant of Spanish (English translations in brackets).

Given the CONTEXT: ETECSA informa sobre nuevos servicios de telefonía móvil para clientes prepago. [ETECSA announces new mobile telephony services for prepaid customers.]

  1. Example of a NON-IRONIC message
    Acabo de realizar el proceso desde el móvil, y no explica absolutamente nada. Alguien puede decir en qué consisten los servicios y el valor de los mismos. [I just went through the process on my phone, and it explains absolutely nothing. Can someone say what the services consist of and what they cost?]

  2. Example of an IRONIC message
    Muy claro el mensaje, da una explicación MUY detallada. ETECSA nunca nos defrauda. [A very clear message, it gives a VERY detailed explanation. ETECSA never lets us down.]


Participating teams will be provided with training and test datasets for each Spanish variant. Standard evaluation metrics (precision, recall, and F1) will be used to assess the performance of the participating systems. The three measures will be calculated per class label and macro-averaged. Submissions will be ranked according to F1-AVG, which gives all class labels equal weight in the final score.
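The macro-averaged F1 described above can be sketched in plain Python (a simplified illustration of the metric, not the official evaluation script):

```python
def prf(y_true, y_pred, label):
    """Precision, recall and F1 for a single class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


def macro_f1(y_true, y_pred, labels=(0, 1)):
    """F1-AVG: the unweighted mean of per-class F1 scores."""
    return sum(prf(y_true, y_pred, lab)[2] for lab in labels) / len(labels)
```

Because the mean is unweighted, a system that ignores the minority class is penalised even if the classes are imbalanced.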

Participating teams may submit only one run for each subtask. We will make no distinction between constrained and unconstrained systems, but the participants will be asked to report what additional resources and corpora they have used for each submitted run.


References
  1. S. Attardo. Irony as relevant inappropriateness. Journal of Pragmatics, 32(6):793-826, 2000.
  2. A. C. Cignarella, S. Frenda, V. Basile, C. Bosco, V. Patti, and P. Rosso. Overview of the EVALITA 2018 Task on Irony Detection in Italian Tweets (IronITA). In Proceedings of the 6th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA'18), Turin, Italy, 2018. CEUR.org.
  3. R. K. Gupta and Y. Yang. CrystalNest at SemEval-2017 Task 4: Using Sarcasm Detection for Enhancing Sentiment Classification and Quantification. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 626-633, Vancouver, Canada, 2017. Association for Computational Linguistics.
  4. D. I. Hernández Farías, V. Patti, and P. Rosso. Irony detection in Twitter: The role of affective content. ACM Transactions on Internet Technology (TOIT), 16(3):1-24, 2016.
  5. D. I. Hernández Farías and P. Rosso. Irony, sarcasm, and sentiment analysis. In E. M. F.A. Pozzi, E. Fersini and B. Liu, editors, Sentiment Analysis in Social Networks, chapter Chapter 7, pages 113-128. Elsevier, 2017.
  6. D. Maynard and M. A. Greenwood. Who cares about sarcastic tweets? Investigating the impact of sarcasm on sentiment analysis. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14). European Language Resources Association, 2014.
  7. C. Van Hee, E. Lefever, and V. Hoste. SemEval-2018 Task 3: Irony Detection in English Tweets. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 39-50, New Orleans, Louisiana, June 2018. Association for Computational Linguistics.

Important Dates

  • 10th February 2019, 23:00 UTC: Call for Participation and Website of the task.

  • 30th March 2019, 23:00 UTC: Training set available.

  • 16th April 2019, 23:00 UTC: Testing set available.

  • 6th May 2019, 23:00 UTC: Submission of runs.

  • 13th May 2019, 23:00 UTC: Notification of results.

  • 30th May 2019, 23:00 UTC: Submission of Working Notes by participants.

  • 10th June 2019, 23:00 UTC: Reviews to participants (peer-reviews).

  • 20th June 2019, 23:00 UTC: Camera-ready submissions due.

  • 30th June 2019, 23:00 UTC: Submission of the camera-ready version of Working Notes.


Corpus
The corpus consists of 9,000 short messages about different topics written in Spanish (3,000 from Cuba, 3,000 from Mexico, and 3,000 from Spain), annotated for irony. Approximately 80% of the corpus will be used for training, while the remaining 20% will be used for testing.

The corpus is password protected. To obtain the password, send an email to: irosva19@gmail.com

Task Organizers

  • Reynier Ortega-Bueno
    Center for Pattern Recognition and Data Mining, University of Oriente, Cuba
  • Paolo Rosso
    PRHLT Research Center, Universitat Politècnica de València, Spain
  • Manuel Montes y Gómez
    Laboratory of Language Technologies, Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), Mexico
  • Delia Irazú Hernández Farías
    Laboratory of Language Technologies, Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), Mexico
  • Francisco Rangel
    Autoritas Consulting, S.A. & PRHLT Research Center, Universitat Politècnica de València, Spain
  • José E. Medina Pagola
    University of Informatics Science, Havana, Cuba

Output submission

Your system must generate a .CSV file for each subtask. The file must contain two columns separated by a comma: the message id and the predicted label (0: non-ironic; 1: ironic). For example (the message ids here are illustrative):

    1001,0
    1002,1

Output file names are composed of the team name, followed by an underscore (_) and the indicator of the subtask (es, mx, cu). For example:

    TeamName_es.txt for the Spanish (Spain) subtask.
    TeamName_mx.txt for the Mexican subtask.
    TeamName_cu.txt for the Cuban subtask.
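A minimal Python sketch of writing a run file in the format above (the helper name and the example ids are hypothetical, not prescribed by the task):

```python
import csv


def write_run(predictions, team, variant):
    """Write (message_id, label) pairs as a two-column CSV named TeamName_<variant>.txt."""
    assert variant in ("es", "mx", "cu"), "subtask indicator must be es, mx or cu"
    path = f"{team}_{variant}.txt"
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        for message_id, label in predictions:
            assert label in (0, 1), "label must be 0 (non-ironic) or 1 (ironic)"
            writer.writerow([message_id, label])
    return path
```

For example, `write_run([("1001", 0), ("1002", 1)], "TeamName", "es")` produces `TeamName_es.txt` with one `id,label` row per message.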

The output files for all subtasks must be compressed into a single .ZIP file named after the team (e.g., TeamName.zip). Note that only one output file per subtask is permitted; if more than one output file is submitted for the same subtask, only the last one will be considered. However, if a team is interested in investigating different methods or features, it is possible to submit two runs per language variety.

The zip file must also contain a brief explanation of the authors' system. Concretely, for each subtask, the authors should explain whether they carried out any data preprocessing, the features used to represent the texts, and the machine learning approach.

Submissions should be sent to irosva19 (at) gmail (dot) com

Paper submission

Participants will be given the opportunity to write a paper describing their system, the resources used, their results, and analysis; these papers will be part of the official IberLEF 2019 proceedings. Papers should use Springer style (https://www.springer.com/gp/livingreviews/latex-templates). The minimum length of a regular paper is 5 pages. Papers must be written in English.