Ensemble Model for Chunking

Authors

Nilamadhaba Mohapatra, Namrata Sarraf and Swapna Sarit Sahu, Zeotap, Bangalore, India

Abstract

Transformer models have taken over most natural language inference tasks, and in recent times they have beaten several benchmarks. Chunking means splitting sentences into tokens and then grouping them in a meaningful way. Chunking has gradually moved from POS tag-based statistical models to neural networks built on language models such as LSTMs, bidirectional LSTMs, attention models, etc. Deep neural network models are deployed indirectly for chunking: they first classify tokens into tags defined for Named Entity Recognition tasks, and these tags are later used in conjunction with pointer frameworks for the final chunking task. In this paper, we propose an ensemble model that uses a fine-tuned transformer model and a recurrent neural network model together to predict tags and chunk substructures of a sentence. We analyzed the shortcomings of the transformer model in predicting different tags and then trained the BiLSTM+CNN model accordingly to compensate for them.
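
The abstract describes ensembling per-token tag predictions from a fine-tuned transformer and a BiLSTM+CNN tagger. Below is a minimal sketch of how such a combination could look in PyTorch; the BiLSTMCNNTagger class, the tensor dimensions, the ensemble_tags helper, and the weighted-averaging scheme are illustrative assumptions, not the authors' released implementation.

    # Minimal sketch (assumptions throughout): ensemble a transformer token
    # classifier with a BiLSTM+CNN tagger for chunk-tag prediction.
    import torch
    import torch.nn as nn

    NUM_TAGS = 23  # size of the chunk tag set, e.g. CoNLL-2000 (assumption)

    class BiLSTMCNNTagger(nn.Module):
        """Character-CNN + word-level BiLSTM tagger, a common sequence-labeling baseline."""
        def __init__(self, vocab_size, char_vocab_size, num_tags=NUM_TAGS,
                     word_dim=100, char_dim=30, hidden=128):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, word_dim, padding_idx=0)
            self.char_emb = nn.Embedding(char_vocab_size, char_dim, padding_idx=0)
            # 1-D CNN over the characters of each word, max-pooled to one feature vector
            self.char_cnn = nn.Conv1d(char_dim, char_dim, kernel_size=3, padding=1)
            self.lstm = nn.LSTM(word_dim + char_dim, hidden,
                                batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, num_tags)

        def forward(self, word_ids, char_ids):
            # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
            b, s, w = char_ids.shape
            chars = self.char_emb(char_ids).view(b * s, w, -1).transpose(1, 2)
            char_feat = self.char_cnn(chars).max(dim=2).values.view(b, s, -1)
            x = torch.cat([self.word_emb(word_ids), char_feat], dim=-1)
            h, _ = self.lstm(x)
            return self.out(h)  # per-token tag logits: (batch, seq_len, num_tags)

    def ensemble_tags(transformer_logits, bilstm_logits, alpha=0.5):
        """Weighted average of the two models' tag distributions (alpha is an assumption).

        transformer_logits could come from, e.g., a fine-tuned HuggingFace
        AutoModelForTokenClassification aligned to the same tokenization.
        """
        probs = (alpha * transformer_logits.softmax(-1)
                 + (1 - alpha) * bilstm_logits.softmax(-1))
        return probs.argmax(-1)  # predicted tag id per token

Averaging softmaxed distributions is only one plausible combination rule; a tag-wise weighting, where the BiLSTM+CNN is trusted more on the tags the transformer predicts poorly, would match the compensation idea in the abstract more closely.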

Keywords

Natural Language Processing, Named Entity Recognition, Chunking, Recurrent Neural Networks, Transformer Models.

Volume 11, Number 8