Chakrabarti, Soumen (2022). Deep Knowledge Graph Representation Learning for Completion, Alignment, and Question Answering. In: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval.
Full text not available from this repository.
Official URL: http://doi.org/10.1145/3477495.3532679
Abstract
A knowledge graph (KG) has nodes and edges representing entities and relations. KGs are central to search and question answering (QA), yet research on deep/neural representation of KGs, as well as deep QA, has moved largely to the AI, ML, and NLP communities. The goal of this tutorial is to give IR researchers a thorough update on the best practices of neural KG representation and inference from the AI, ML, and NLP communities, and then explore how KG representation research in the IR community can be better driven by the needs of search, passage retrieval, and QA. In this tutorial, we will study the most widely-used public KGs; important properties of their relations, types, and entities; best-practice deep representations of KG elements and how they support or fail to support such properties; loss formulations and learning methods for KG completion and inference; the representation of time in temporal KGs; alignment across multiple KGs, possibly in different languages; and the use and benefits of deep KG representations in QA applications.
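As a point of orientation for the loss formulations the abstract mentions, the sketch below shows one classic example: a TransE-style embedding model trained with a margin ranking loss for KG completion. This is an illustrative assumption, not code or a specific method from the tutorial; the class and parameter names (`KGEmbedding`, `margin`, `dim`) are hypothetical.

```python
# Minimal sketch of a TransE-style KG embedding with a margin ranking loss.
# Illustrative only; not the tutorial's own code.
import torch
import torch.nn as nn

class KGEmbedding(nn.Module):
    def __init__(self, num_entities: int, num_relations: int, dim: int = 100):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def score(self, h, r, t):
        # TransE: a triple (h, r, t) is plausible if head + relation ≈ tail,
        # so a lower distance means a more plausible triple.
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(p=1, dim=-1)

def margin_loss(model, pos, neg, margin: float = 1.0):
    # Margin ranking loss: corrupted (negative) triples should score worse
    # than observed (positive) triples by at least `margin`.
    return torch.relu(margin + model.score(*pos) - model.score(*neg)).mean()

# Toy usage: 5 entities, 2 relations, one positive triple and a corrupted tail.
model = KGEmbedding(num_entities=5, num_relations=2, dim=16)
pos = (torch.tensor([0]), torch.tensor([1]), torch.tensor([2]))
neg = (torch.tensor([0]), torch.tensor([1]), torch.tensor([4]))
loss = margin_loss(model, pos, neg)
loss.backward()  # gradients flow into the entity and relation embeddings
```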
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Source: | Copyright of this article belongs to Association for Computing Machinery |
| ID Code: | 130854 |
| Deposited On: | 01 Dec 2022 04:11 |
| Last Modified: | 01 Dec 2022 04:11 |