% Created 2020-01-08 Wed 13:49
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{fixltx2e}
\usepackage{graphicx}
\usepackage{longtable}
\usepackage{float}
\usepackage{wrapfig}
\usepackage{rotating}
\usepackage[normalem]{ulem}
\usepackage{amsmath}
\usepackage{textcomp}
\usepackage{marvosym}
\usepackage{wasysym}
\usepackage{amssymb}
\usepackage{hyperref}
\tolerance=1000
\usepackage{minted}
\author{alex}
\date{\today}
\title{notes}
\hypersetup{
pdfkeywords={},
pdfsubject={},
pdfcreator={Emacs 25.2.2 (Org mode 8.2.10)}}
\begin{document}

\maketitle
\tableofcontents

\section{Graph alignment}
\label{sec-1}
\subsection{REGAL}
\label{sec-1-1}
\subsubsection{Intro}
\label{sec-1-1-1}
\begin{itemize}
\item network alignment, or the task of identifying corresponding nodes in different networks, has applications across the social and natural sciences.
\item REGAL (REpresentation learning-based Graph ALignment)
\begin{itemize}
\item Motivated by recent advancements in node representation learning for single-graph tasks
\item a framework that leverages the power of automatically learned node representations to match nodes across different graphs.
\end{itemize}
\item xNetMF, an elegant and principled node embedding formulation that uniquely generalizes to multi-network problems.
\item network alignment or matching, which is the problem of finding corresponding nodes in different networks.
\begin{itemize}
\item Crucial for identifying similar users in different social networks or analysing chemical compounds
\end{itemize}
\item Many existing methods try to relax the computationally hard optimization problem, since designing features that can be directly compared for nodes in different networks is not an easy task.
\item we propose network alignment via matching latent, learned node representations.
\item \textbf{Problem:} Given two graphs G$_{\text{1}}$ and G$_{\text{2}}$ with nodesets V$_{\text{1}}$ and V$_{\text{2}}$ and possibly node attributes A$_{\text{1}}$ and A$_{\text{2}}$ resp., devise an efficient network alignment method that aligns nodes by learning directly comparable node representations Y$_{\text{1}}$ and Y$_{\text{2}}$, from which a node mapping $\phi: V_1 \rightarrow V_2$ between the networks can be inferred.
\item REGAL is a framework that efficiently identifies node matchings by greedily aligning their latent feature representations.
\item They use Cross-Network Matrix Factorization (xNetMF) to learn the representations
\begin{itemize}
\item xNetMF preserves structural similarities rather than proximity-based similarities, allowing for generalization beyond a single network.
\item xNetMF is formulated as matrix factorization over a similarity matrix which incorporates structural similarity and attribute agreement between nodes in disjoint graphs.
\item Constructing the similarity matrix is expensive, as it requires computing all pairwise similarities between nodes across the networks; to avoid this, they extend the Nyström low-rank approximation, which is commonly used for large-scale kernel machines.
\item This makes xNetMF a principled and efficient implicit matrix factorization-based approach.
\end{itemize}
\item our approach can be applied to attributed and unattributed graphs with virtually no change in formulation, and is unsupervised: it does not require prior alignment information to find high-quality matchings.
\item Many well-known node embedding methods based on shallow architectures, such as the popular skip-gram with negative sampling (SGNS), have been cast in matrix factorization frameworks. However, ours is the first to cast node embedding using SGNS to capture structural identity in such a framework.
\item we consider the significantly harder problem of learning embeddings that may be individually matched to infer node-level alignments.
\item we consider the significantly harder problem of learning embeddings that may be individually matched to infer node-level alignments.
\end{itemize}
\subsubsection{REGAL Description}
\label{sec-1-1-2}
\begin{itemize}
\item Let G$_{\text{1}}$(V$_{\text{1}}$, E$_{\text{1}}$) and G$_{\text{2}}$(V$_{\text{2}}$, E$_{\text{2}}$) be two unweighted and undirected graphs (described in the setting of two graphs, but can be extended to more), with node sets V$_{\text{1}}$ and V$_{\text{2}}$ and edge sets E$_{\text{1}}$ and E$_{\text{2}}$; and possible node attribute sets A$_{\text{1}}$ and A$_{\text{2}}$.
\begin{itemize}
\item The graphs do not have to be the same size
\end{itemize}
\item Let $n = |V_1| + |V_2|$ be the total number of nodes across the two graphs.
\item The steps are then:
\begin{enumerate}
\item \textbf{Node Identity Extraction:} Extract structure and attribute-related info from all n nodes
\item \textbf{Efficient Similarity-based Representation:} Obtains node embeddings, conceptually by factorising a similarity matrix of the node identities from step 1. However, computing and factorising this similarity matrix is expensive, so they extend the Nyström method for low-rank matrix approximation to perform an implicit similarity matrix factorisation by \textbf{(a)} comparing the similarity of each node only to a sample of $p \ll n$ so-called ``landmark nodes'' and \textbf{(b)} using these node-to-landmark similarities to construct the representations from a decomposition of the resulting low-rank approximation.
\item \textbf{Fast Node Representation Alignment:} Align nodes between the two graphs by greedily matching the embeddings with an efficient data structure (a KD-tree) that allows for fast identification of the top-$a$ most similar embeddings from the other graph. (A code sketch of steps 2 and 3 follows this list.)
\end{enumerate}
\item The first two steps are the xNetMF method
\end{itemize}
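Below is a minimal sketch of steps 2 and 3 (not the authors' reference implementation), assuming the node identity vectors from step 1 are stacked row-wise in a matrix and a pairwise xNetMF similarity function \texttt{sim} is given; the uniform landmark sampling and the helper names \texttt{embed\_nodes} and \texttt{align} are illustrative assumptions.

\begin{minted}{python}
import numpy as np
from scipy.spatial import cKDTree


def embed_nodes(identities, sim, p, seed=0):
    """Step 2: implicit factorization of the n x n similarity matrix via p landmarks."""
    n = identities.shape[0]
    rng = np.random.default_rng(seed)
    landmarks = rng.choice(n, size=p, replace=False)
    # C: similarities of all n nodes to the p landmark nodes (n x p);
    # W: the landmark-to-landmark block of C (p x p).
    C = np.array([[sim(identities[i], identities[l]) for l in landmarks]
                  for i in range(n)])
    W = C[landmarks, :]
    # The full similarity matrix is approximated as S ~ C W^+ C^T, so factor W^+
    # once and fold one half of it into C to obtain the embeddings Y (n x p).
    U, s, _ = np.linalg.svd(np.linalg.pinv(W))
    Y = C @ U @ np.diag(np.sqrt(s))
    return Y / np.linalg.norm(Y, axis=1, keepdims=True)  # row-normalize for matching


def align(Y1, Y2, top_a=1):
    """Step 3: for each node of G1, the top-a nearest G2 embeddings via a KD-tree."""
    tree = cKDTree(Y2)
    _, idx = tree.query(Y1, k=top_a)
    return idx  # idx[i] holds the candidate counterpart(s) in G2 for node i of G1
\end{minted}

Only the $n \times p$ node-to-landmark block is ever formed, which is what makes the factorisation implicit.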
\begin{enumerate}
\item Step 1
\label{sec-1-1-2-1}
\begin{itemize}
\item The goal of REGAL’s representation learning module, xNetMF, is to define node “identity” in a way that generalizes to multi-network problems.
\item As nodes in multi-network problems have no direct connections to each other, their proximity can't be sampled by random walks on separate graphs. This is overcome by instead focusing on more broadly comparable, generalisable quantities: structural identity, which relates to structural roles, and attribute-based identity.
\item \textbf{Structural Identity}: In network alignment, the well-established assumption is that aligned nodes have similar structural connectivity or degrees. Thus, the degrees of a node's neighbours can be used as its structural identity. They also consider neighbours up to $k$ hops from the original node.
\begin{itemize}
\item For some node $u \in V$, $R_u^k$ is the set of nodes exactly $k$ hops from $u$. We could capture the degrees of these nodes in a vector $d_u^k$ of length $D$, the highest degree in the graph, where entry $d_u^k(i)$ denotes the number of nodes in $R_u^k$ of degree $i$. This vector would, however, be very long and very sparse, since a single high-degree node forces up the length of $d_u^k$. Instead, nodes are binned into $b = \lceil \log_2 D \rceil$ logarithmically scaled buckets, with entry $i$ of $d_u^k$ containing the number of nodes $v \in R_u^k$ such that $\lfloor \log_2(\deg(v)) \rfloor = i$. This is both much shorter (length $\lceil \log_2 D \rceil$) and more robust to noise.
\end{itemize}
\item \textbf{Attribute-Based Identity}: Given $F$ node attributes, they create for each node $u$ an $F$-dimensional vector $f_u$ holding $u$'s attribute values, so $f_u(i)$ is the $i$-th attribute value of $u$.
\item \textbf{Cross-Network Node Similarity}: Relies on the structural and attribute information rather than direct proximity: $\mathrm{sim}(u,v) = \exp\!\left[-\gamma_s \cdot \left\lVert d_u - d_v \right\rVert_2^2 - \gamma_a \cdot \mathrm{dist}(f_u, f_v)\right]$, where $d_u$ aggregates the per-hop vectors $d_u^k$, $\mathrm{dist}(f_u, f_v)$ measures attribute disagreement, and $\gamma_s, \gamma_a$ weight the structural and attribute terms. (A code sketch of the binned degree vectors and this similarity follows this list.)
\end{itemize}
\end{enumerate}
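The following is a minimal sketch, using networkx, of the identity extraction and the similarity above; the discounted combination over hop distances (the factor \texttt{delta}) and the attribute distance (a simple disagreement count) are assumed details rather than something these notes pin down.

\begin{minted}{python}
import numpy as np
import networkx as nx


def binned_degrees(G, u, k, num_buckets):
    """d_u^k: log-binned degree counts of the nodes exactly k hops from u."""
    dists = nx.single_source_shortest_path_length(G, u, cutoff=k)
    hist = np.zeros(num_buckets)
    for v, d in dists.items():
        if d == k:  # v is in R_u^k
            bucket = min(int(np.log2(G.degree(v))), num_buckets - 1)
            hist[bucket] += 1
    return hist


def node_identity(G, u, K, num_buckets, delta=0.5):
    """Combine the per-hop vectors d_u^1 .. d_u^K into a single descriptor d_u."""
    return sum(delta ** (k - 1) * binned_degrees(G, u, k, num_buckets)
               for k in range(1, K + 1))


def xnetmf_sim(d_u, d_v, f_u=None, f_v=None, gamma_s=1.0, gamma_a=1.0):
    """sim(u, v) = exp(-gamma_s * ||d_u - d_v||_2^2 - gamma_a * dist(f_u, f_v))."""
    attr_dist = 0.0 if f_u is None else float(np.sum(f_u != f_v))  # disagreement count
    return np.exp(-gamma_s * np.sum((d_u - d_v) ** 2) - gamma_a * attr_dist)
\end{minted}

These identity vectors and this similarity are what the landmark-based factorisation sketched earlier would consume.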
% Emacs 25.2.2 (Org mode 8.2.10)
\end{document}