Neural tensor networks are knowledge graph embedding models that infer relationships between two given entities. Although these models are demonstrably effective, any inference about an individual relation is made in isolation from what the network has learned about the other relations in the problem domain. We introduce cross-relational reasoning, a novel inference mechanism for neural tensor networks that coordinates all of the model's relation-specific outputs to augment a prediction corresponding to a single relation. We frame the coordination of the relation-specific outputs as a meta-learning problem, not unlike stacked ensemble learning, and show that cross-relational reasoning consistently outperforms the original inference mechanism on the WN18RR knowledge graph. We also explore modifications to the neural tensor network's internal activation function, showing that ReLU or ELU accelerates the network's convergence at the cost of long-term improvement during training, and that sigmoid uniformly improves the model's performance in the setting considered in this paper.
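As a rough illustration of the setup the abstract describes, the following PyTorch sketch pairs the standard neural tensor network scoring function (Socher et al., 2013), with a configurable internal activation, with a hypothetical meta-learner that combines all relation-specific scores into cross-relationally informed predictions. The class names, the linear meta-learner architecture, and all hyperparameters are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class NTNRelation(nn.Module):
    """Score for one relation, following the standard neural tensor
    network form: g(e1, e2) = u^T f(e1^T W[1:k] e2 + V [e1; e2] + b)."""

    def __init__(self, dim: int, k: int, activation=torch.sigmoid):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # bilinear tensor slices
        self.V = nn.Linear(2 * dim, k)                          # linear term V[e1; e2] + b
        self.u = nn.Linear(k, 1, bias=False)                    # output weights u
        self.f = activation  # tanh in the original NTN; the paper swaps in ReLU/ELU/sigmoid

    def forward(self, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        # Bilinear term: e1^T W_i e2 for each of the k tensor slices.
        bilinear = torch.einsum('bd,kde,be->bk', e1, self.W, e2)
        hidden = self.f(bilinear + self.V(torch.cat([e1, e2], dim=-1)))
        return self.u(hidden).squeeze(-1)  # one scalar score per entity pair


class CrossRelationalHead(nn.Module):
    """Hypothetical meta-learner (an assumption, not the paper's exact design):
    maps the vector of all relation-specific scores for an entity pair to an
    adjusted score for each relation, in the spirit of stacked ensembling."""

    def __init__(self, n_relations: int):
        super().__init__()
        self.meta = nn.Linear(n_relations, n_relations)

    def forward(self, all_scores: torch.Tensor) -> torch.Tensor:
        return self.meta(all_scores)  # (batch, n_relations) -> (batch, n_relations)


# Usage sketch: score a batch of entity pairs against every relation,
# then let the meta-learner coordinate the per-relation outputs.
dim, k, n_relations, batch = 100, 4, 11, 32  # WN18RR has 11 relations
relations = nn.ModuleList(NTNRelation(dim, k) for _ in range(n_relations))
meta = CrossRelationalHead(n_relations)
e1, e2 = torch.randn(batch, dim), torch.randn(batch, dim)
base_scores = torch.stack([r(e1, e2) for r in relations], dim=-1)
coordinated = meta(base_scores)  # cross-relationally informed predictions
```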