Network Representation Learning

Lab of Media and Network

Department of Computer Science & Technology

Tsinghua University

Deep Recursive Network Embedding with Regular Equivalence

K Tu, P Cui, X Wang, PS Yu, W Zhu. KDD 2018

Motivation

Network embedding aims to preserve vertex similarity in an embedding space. Existing approaches usually define similarity by direct links or common neighborhoods between nodes, i.e. structural equivalence. However, vertices that reside in different parts of the network may have similar roles or positions, i.e. regular equivalence, which is largely ignored in the network embedding literature. Regular equivalence is defined recursively: two regularly equivalent vertices have neighbors that are themselves regularly equivalent. Accordingly, we propose a new approach named Deep Recursive Network Embedding (DRNE) to learn network embeddings that preserve regular equivalence. More specifically, we propose a layer-normalized LSTM that represents each node by recursively aggregating the representations of its neighbors. We theoretically prove that several popular centrality measures consistent with regular equivalence are optimal solutions of our model. This is also supported by empirical results showing that the learned node representations can accurately predict regular-equivalence indexes and related centrality scores. Furthermore, the learned representations can be used directly in downstream applications such as structural role classification, where our method consistently outperforms centrality-based methods and other state-of-the-art network embedding methods.
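
To make the recursive aggregation idea concrete, below is a minimal, hedged sketch of this style of model, not the authors' implementation. All names (DRNESketch, embed_dim, degree_mlp, lam, the toy adjacency dict) are illustrative assumptions, and layer normalization is applied to the LSTM output here rather than inside the LSTM cell as in the paper. Each node's embedding is regressed onto the LSTM aggregation of its degree-ordered neighbor embeddings, with a degree regularizer to rule out trivial all-equal solutions.

```python
# Sketch of recursive, LSTM-based neighbor aggregation in the spirit of DRNE.
# Assumed/illustrative: class and variable names, hyperparameters, and the
# placement of LayerNorm (the paper normalizes inside the LSTM cell).
import torch
import torch.nn as nn

class DRNESketch(nn.Module):
    def __init__(self, num_nodes, embed_dim=32):
        super().__init__()
        # One free embedding vector per node.
        self.embeddings = nn.Parameter(torch.randn(num_nodes, embed_dim) * 0.1)
        # LSTM aggregates the degree-ordered neighbor embeddings.
        self.lstm = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)
        # Small regressor predicting degree from the aggregated representation.
        self.degree_mlp = nn.Sequential(nn.Linear(embed_dim, embed_dim),
                                        nn.ReLU(), nn.Linear(embed_dim, 1))

    def aggregate(self, neighbor_ids):
        # neighbor_ids: LongTensor of one node's neighbors, sorted by degree.
        neigh = self.embeddings[neighbor_ids].unsqueeze(0)   # (1, deg, d)
        out, _ = self.lstm(neigh)
        return self.norm(out[0, -1])                         # last hidden state

    def loss(self, node_id, neighbor_ids, degree, lam=1.0):
        agg = self.aggregate(neighbor_ids)
        x = self.embeddings[node_id]
        recon = ((x - agg) ** 2).sum()          # recursive consistency term
        deg_pred = self.degree_mlp(agg).squeeze()
        reg = (deg_pred - degree) ** 2          # degree regularizer
        return recon + lam * reg

# Toy usage on a 4-node path graph 0-1-2-3 (purely illustrative).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
model = DRNESketch(num_nodes=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    total = sum(model.loss(v,
                           torch.tensor(sorted(nbrs, key=lambda u: len(adj[u]))),
                           float(len(nbrs)))
                for v, nbrs in adj.items())
    opt.zero_grad(); total.backward(); opt.step()
```

Because each node's embedding is fit against an aggregation of its neighbors' embeddings, regularly equivalent nodes (whose neighborhoods are themselves equivalent) are pushed toward similar representations, which is the recursive property the paper exploits.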

Code

The code is available here.

References

K Tu, P Cui, X Wang, PS Yu, W Zhu. "Deep Recursive Network Embedding with Regular Equivalence". KDD, 2018.