Released

Conference Paper

NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language

Authors

Weber,  L.
External Organizations;

Minervini,  P.
External Organizations;

Münchmeyer,  J.
2.4 Seismology, 2.0 Geophysics, Departments, GFZ Publication Database, Deutsches GeoForschungsZentrum;

Leser,  U.
External Organizations;

Rocktäschel,  T.
External Organizations;

External Resource
Fulltext (public)
There are no public fulltexts stored in GFZpublic
Supplementary Material (public)
There is no public supplementary material available
Citation

Weber, L., Minervini, P., Münchmeyer, J., Leser, U., Rocktäschel, T. (2019): NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Florence, Italy), 6151-6161.
https://doi.org/10.18653/v1/P19-1618


Cite as: https://gfzpublic.gfz-potsdam.de/pubman/item/item_4751895
Abstract
Rule-based models are attractive for various tasks because they inherently lead to interpretable and explainable decisions and can easily incorporate prior knowledge. However, such systems are difficult to apply to problems involving natural language, due to its large linguistic variability. In contrast, neural models can cope very well with ambiguity by learning distributed representations of words and their composition from data, but lead to models that are difficult to interpret. In this paper, we describe a model combining neural networks with logic programming in a novel manner for solving multi-hop reasoning tasks over natural language. Specifically, we propose to use a Prolog prover that we extend to utilize a similarity function over pretrained sentence encoders. We fine-tune the representations for the similarity function via backpropagation. This leads to a system that can apply rule-based reasoning to natural language and induce domain-specific natural language rules from training data. We evaluate the proposed system on two different question answering tasks, showing that it outperforms two baselines, BiDAF (Seo et al., 2016a) and FastQA (Weissenborn et al., 2017), on a subset of the WikiHop corpus and achieves competitive results on the MedHop data set (Welbl et al., 2017).
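The core idea of weak unification described in the abstract can be sketched in a few lines: instead of requiring exact symbol equality, as in standard Prolog unification, two predicate symbols unify whenever the similarity of their embeddings exceeds a threshold, and the similarity becomes the unification score. The toy embeddings, predicate names, and threshold below are illustrative assumptions only, not the paper's actual sentence encoder or data.

```python
import math

# Hypothetical toy embeddings standing in for pretrained sentence encoders.
EMBEDDINGS = {
    "country_of": [0.9, 0.1, 0.0],
    "nation_of":  [0.85, 0.2, 0.05],
    "born_in":    [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return dot / (norm_u * norm_v)

def weak_unify(sym_a, sym_b, threshold=0.5):
    """Weak unification of two predicate symbols: identical symbols
    unify with score 1.0; otherwise they unify if the cosine similarity
    of their embeddings exceeds the threshold, in which case that
    similarity is returned as the unification score. Returns None on
    failure, mirroring how exact unification would fail."""
    if sym_a == sym_b:
        return 1.0
    score = cosine(EMBEDDINGS[sym_a], EMBEDDINGS[sym_b])
    return score if score >= threshold else None
```

In the full system, such per-step scores would be aggregated along a proof path, and the embeddings themselves would be fine-tuned by backpropagating through the aggregated proof score; this sketch only shows the unification step.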