HAREM and Klue: how to compare two tagsets for named entities annotation


Abstract:

This paper describes an ongoing experiment comparing two tagsets for Named Entity (NE) annotation. We compared the Klue 2 tagset, developed by IBM Research, with the HAREM tagset, developed for annotating the Portuguese corpora used in the Second HAREM competition. From this experiment, we expect to evaluate our comparison methodology and to survey the problems that arise from it.

Downloads:

- /files/acl-news-2015.pdf
- https://aclweb.org/anthology/W/W15/W15-3906.pdf

BibTeX:

@inproceedings{acl-news-2015,
  author = {Real, Livy and Rademaker, Alexandre},
  title = {HAREM and Klue: how to compare two tagsets for named entities annotation},
  booktitle = {Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing - Named Entities Workshop (NEWS 2015)},
  year = {2015},
  pdflink1 = {/files/acl-news-2015.pdf},
  pdflink2 = {https://aclweb.org/anthology/W/W15/W15-3906.pdf},
  month = jul,
  address = {Beijing, China}
}