YOU MUST EAT THIS GRAPH NEWS, GRAPH OMAKASE. Week 1 of May
[ad]
The graph community wants you!
We are looking for people who want to share their knowledge with others. The community site is at the link below.
The community has almost 200 members in South Korea alone, and I believe the bigger the community, the better, so I am including this notice for anyone interested in community building.
GL2vec: Graph Embedding Enriched by Line Graphs with Edge Features
[https://dl.acm.org/doi/10.1007/978-3-030-36718-3_1]
Introduction
This paper's GL2vec complements the shortcomings of Graph2vec, a forerunner among graph embedding methods. Graph2vec embeds a graph using only node label information. But a graph does not consist of nodes alone; it also has edges. Because Graph2vec considers only node information and ignores the edges, it cannot reflect edge attributes even when they carry important information. To overcome this limitation, the authors present an idea based on the line graph.
Preliminary
In a line graph, the roles of nodes and edges are swapped: each edge of the original graph becomes a node of the line graph, and two of these nodes are connected whenever the corresponding edges share an endpoint.
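As a quick illustration (a minimal sketch in NetworkX, not code from the paper), here is the transformation on a small path graph:

```python
import networkx as nx

# Original graph: a path a-b-c-d with edges (a,b), (b,c), (c,d).
G = nx.path_graph(["a", "b", "c", "d"])

# Line graph: each edge of G becomes a node of L, and two nodes of L
# are adjacent when the corresponding edges of G share an endpoint.
L = nx.line_graph(G)

print(L.nodes())  # the three edges of G, now acting as nodes
print(L.edges())  # (a,b)-(b,c) and (b,c)-(c,d): they share b and c
```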
Summary
Compared with the existing Graph2vec implementation, the only line that changes is the call to 'nx.line_graph()'; all other code is the same. The Weisfeiler-Lehman algorithm is applied to extract distinctive structural features, the resulting information is gathered and converted into document form, and doc2vec is applied to complete the embedding.
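Below is a minimal sketch of that pipeline, assuming NetworkX 2.8+ and gensim 4.x; the toy graphs and hyperparameters are placeholders, and the actual GL2vec code additionally feeds edge features in as node labels of the line graph:

```python
import networkx as nx
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def graph_to_wl_document(G, iterations=2):
    """Turn a graph into a 'document' of Weisfeiler-Lehman features."""
    # One hash per node and WL iteration; the multiset of hashes
    # plays the role of the words of a document.
    hashes = nx.weisfeiler_lehman_subgraph_hashes(G, iterations=iterations)
    return [h for node_hashes in hashes.values() for h in node_hashes]

graphs = [nx.path_graph(4), nx.cycle_graph(4), nx.star_graph(3)]

# GL2vec's twist: embed the line graph (edges become nodes), so the
# WL features describe edge-level structure rather than node labels.
line_graphs = [nx.line_graph(G) for G in graphs]

corpus = [TaggedDocument(words=graph_to_wl_document(L), tags=[i])
          for i, L in enumerate(line_graphs)]

model = Doc2Vec(corpus, vector_size=32, min_count=1, epochs=50)
embeddings = [model.dv[i] for i in range(len(graphs))]
```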
Insight
If you have been curious about how performance changes before and after node and edge attributes are taken into account, this paper can serve as a good reference.
Graph Kalman Filters
[https://arxiv.org/pdf/2303.12021.pdf]
Introduction
Kalman filter: I think you have all heard of it at least once. It takes the perspective of predicting future states from past and present states. This paper brings that Kalman-filter perspective to graphs; specifically, it tracks how node attributes and the topology of a graph change over time. Compared with the classic Kalman filter there is no fundamental difference, except that a layer that reads out (aggregates) graph-level information is added.
Preliminary
Kalman Filters
The Kalman filter is an algorithm used for estimating the state of a dynamic system based on noisy or incomplete measurements. It operates recursively, continuously updating the state estimate as new measurements become available.
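As a refresher, here is a minimal NumPy sketch of one predict/update cycle of the classic linear Kalman filter (the generic textbook form, not the paper's code):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : prior state estimate and its covariance
    z    : new (noisy) measurement
    F, H : state-transition and measurement matrices
    Q, R : process- and measurement-noise covariances
    """
    # Predict: propagate the state and its uncertainty forward.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Update: blend prediction and measurement via the Kalman gain.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```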
Summary
The paper gives a stochastic account of how the topology changes as white noise enters the node signals: the node states and the graph-level measurements are perturbed by noise terms n_t and v_t, respectively. The method is computationally efficient because it does not store the whole history of changes; instead it keeps approximate covariance estimates (prior and posterior) and reuses them recursively.
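To make the noise structure concrete, here is a heavily hedged toy sketch; the diffusion dynamics and the linear readout are placeholders of my own, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, T = 5, 10

# Placeholder dynamics: node signals diffuse over a fixed random topology.
A = rng.random((n_nodes, n_nodes))
F = (A + A.T) / 2
F = F / F.sum(axis=1, keepdims=True)   # row-normalized diffusion operator
C = rng.standard_normal(n_nodes)       # stand-in linear graph readout

x = rng.standard_normal(n_nodes)       # initial node signals (the state)
for t in range(T):
    n_t = 0.1 * rng.standard_normal(n_nodes)  # process noise on the nodes
    v_t = 0.1 * rng.standard_normal()         # noise on the graph-level readout
    x = F @ x + n_t                           # state evolves under noise n_t
    y = C @ x + v_t                           # noisy graph-level measurement
    # a Kalman-style filter would now update its estimate of x from y
```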
Insight
One of the key points of temporal learning is to identify and predict the noise that appears in real time. Re-training on and predicting from the entire past history is often too computationally heavy. When a rough grasp of the context and a first-pass filtering matter more than an exact prediction, borrowing the idea of this paper seems like a good fit.
DiffWire: Inductive Graph Rewiring via the Lovász Bound
[https://arxiv.org/pdf/2206.07369.pdf]
Introduction
To address three chronic problems of GNNs, under-reaching, over-smoothing, and over-squashing, this paper proposes graph rewiring. The paper may feel awkward at first because the somewhat unfamiliar concept of graph rewiring appears, but it is similar in spirit to the modularity metrics we often encounter in community detection in network science.
Simply put, it is a technique that optimizes how much useful information a node gains when it is connected to nodes beyond its existing neighbors. More specifically, the core of this paper is the idea of mitigating over-squashing by rewiring the graph while minimizing the deformation of its structure.
Preliminary
It is hard to single out the prerequisites, because reading this paper calls on a lot of background knowledge. Conversely, this means the paper contains many concepts that are important in network science. If you are wondering how the GNN trend can be approached from a network-science perspective, I recommend reading it.
Summary
Two layers are key to this idea.
CT-LAYER learns commute times: it is the layer that keeps the change in the graph's structural distribution minimal while letting information spread evenly. Borrowing the effective-resistance perspective used mainly in circuit (semiconductor) design, it scores which pairs of nodes should be connected using the Lovász bound, and it measures the rate of change with the Dirichlet energy. This gives two boundaries, a min and a max, for a given sparsification level. Repeating this process extracts the optimal bound, and the samples drawn within it produce a new graph whose structural distribution is preserved while information is exchanged well.
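To ground the terminology, here is a minimal sketch of the two quantities involved, commute time (via effective resistance and the Laplacian pseudoinverse) and Dirichlet energy; this is the textbook computation, not the learned CT-LAYER itself:

```python
import networkx as nx
import numpy as np

G = nx.karate_club_graph()
L = nx.laplacian_matrix(G).toarray().astype(float)
L_pinv = np.linalg.pinv(L)   # Laplacian pseudoinverse

def commute_time(u, v):
    """Commute time = vol(G) * effective resistance between u and v."""
    r_uv = L_pinv[u, u] + L_pinv[v, v] - 2 * L_pinv[u, v]
    return 2 * G.number_of_edges() * r_uv

def dirichlet_energy(x):
    """Smoothness of a node signal x: sum over edges of (x_u - x_v)^2."""
    return float(x @ L @ x)

print(commute_time(0, 33))
print(dirichlet_energy(np.random.default_rng(0).standard_normal(G.number_of_nodes())))
```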
GAP-LAYER optimizes the spectral gap of the network. The criterion for the spectral gap is the Fiedler vector; in other words, the cost function is built from the Fiedler vector. The Fiedler vector is the eigenvector associated with the second-smallest eigenvalue of the graph Laplacian, the eigenvalue (the algebraic connectivity) that carries the most information about how well the graph's structure holds together. GAP-LAYER optimizes this cost function through an appropriate cut (Ratio-cut).
Unlike the sampling in the preceding CT layer, it intervenes in the structural distribution directly by cutting edges. Moreover, by combining the CT layer, which captures node-to-node information structure through commute time, with the ratio cut, which acts on the graph's global information structure, the rewiring takes both microscopic and macroscopic aspects into account.
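For reference, here is a small sketch of the spectral objects involved: the Fiedler vector and the two-way ratio cut it induces (plain spectral bisection with NetworkX, not the paper's differentiable GAP-LAYER):

```python
import networkx as nx
import numpy as np

G = nx.karate_club_graph()

# Fiedler vector: eigenvector of the Laplacian for the second-smallest
# eigenvalue (the algebraic connectivity, i.e. the spectral gap).
f = np.array(nx.fiedler_vector(G))

# Spectral bisection: split nodes by the sign of the Fiedler vector.
part_a = {n for n, val in zip(G.nodes(), f) if val >= 0}
part_b = set(G.nodes()) - part_a

# Ratio cut: edges across the cut, normalized by the partition sizes.
cut = nx.cut_size(G, part_a, part_b)
ratio_cut = cut / len(part_a) + cut / len(part_b)
print(len(part_a), len(part_b), cut, ratio_cut)
```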
Insight
I think these are concepts that come up often in technical interviews. If your company has a GNN-related JD, this paper covers basic concepts you might want to ask an applicant about. I recommend it once more because it contains important concepts that underlie solving many problems in the field.
While the DiffWire paper above focuses on theoretical elements, the posting below focuses on the practical elements machine learning engineers like to see. The results are similar but the purposes differ, so I recommend choosing according to your goal.