YOU MUST EAT THIS GRAPH NEWS, GRAPH OMAKASE. Second week of May

Jeong Yitae
4 min read · May 14, 2023

--

Higher-order components in hypergraphs

[https://arxiv.org/pdf/2208.05718.pdf]

Introduction

Hypergraph vs. simplicial graph. The debate over whether it is better to model a relationship as 1:1 or as 1:N never seems to end. As with graph modeling in general, the answer depends entirely on the specific situation, so a good graph is born when you keep asking which graph shape best serves your purpose. This paper presents a study of how higher-order components (HOCs) affect dynamics on hypergraphs, asking, for example, whether the presence of a single giant component changes how an epidemic spreads.

Preliminary

Higher-order graph vs. Hypergraph

In summary, a hypergraph is a generalization of a graph whose edges (hyperedges) can connect any number of vertices, while a higher-order graph, such as a simplicial complex, extends pairwise edges by representing relationships among two or more vertices with simplices. Both structures provide a more expressive way to model complex relationships, and higher-order representations in general offer more diverse and robust modeling power.
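As a toy illustration (the co-authorship example below is made up), the same group interaction can either be stored as a single hyperedge or flattened into pairwise 1:1 edges, in which case the information that the whole group co-occurred is lost:

```python
from itertools import combinations

# Toy co-authorship data (hypothetical): one paper by {A, B, C}, one by {C, D}.
hyperedges = [{"A", "B", "C"}, {"C", "D"}]

# Pairwise (1:1) view via clique expansion: the same data as ordinary edges,
# which loses the fact that A, B, and C co-occurred in a single event.
pairwise_edges = sorted({tuple(sorted(pair))
                         for he in hyperedges
                         for pair in combinations(he, 2)})
print(pairwise_edges)  # [('A', 'B'), ('A', 'C'), ('B', 'C'), ('C', 'D')]
```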

Summary

Two ideas go into checking whether HOCs actually affect the dynamics: 1. sample the original (real-world) data in the form of a higher-order graph; 2. run SIR contagion experiments on subgraphs that contain a higher-order component (HOC) versus randomly assigned subgraphs without one, and compare the outcomes.

In conclusion, there is a difference. In the presence of a giant HOC, the parameter space in which outbreaks occur is much larger, which means that "HOC systems are more susceptible to outbreaks than systems without HOCs."
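To give a rough feel for what such an SIR comparison measures, here is a toy discrete-time SIR simulation on an ordinary edge list; it is not the paper's hypergraph model, and the ring graph and parameters are made up, but sweeping the infection rate shows what "the parameter space in which outbreaks occur" refers to:

```python
import random

def sir_outbreak_size(edges, n_nodes, beta, seed=0):
    """Discrete-time SIR on an undirected edge list (recovery after one step);
    returns the final fraction of recovered nodes."""
    random.seed(seed)
    neighbors = {i: set() for i in range(n_nodes)}
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    infected, recovered = {0}, set()      # patient zero = node 0
    while infected:
        newly = {j for i in infected for j in neighbors[i]
                 if j not in infected and j not in recovered
                 and random.random() < beta}
        recovered |= infected
        infected = newly - recovered
    return len(recovered) / n_nodes

# Sweeping beta on a tiny ring graph shows where outbreaks start to occur;
# the paper performs this kind of sweep on systems with and without a giant HOC.
ring = [(i, (i + 1) % 20) for i in range(20)]
for beta in (0.1, 0.5, 0.9):
    print(beta, sir_outbreak_size(ring, 20, beta))
```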

Insight

I see this as a quantification of something that used to be attributed only to crowd psychology. It also reads like a well-timed baseline paper, arriving just as the number of communities each individual belongs to keeps growing, so I think there is still plenty of room for improvement and follow-up work.

Kùzu 0.0.3 Release

[https://kuzudb.com/blog/kuzu-0.0.3-release.html]

It is good. It is a really good graph database. It is trendy enough that you could say: just as DuckDB is the emerging power among relational databases, Kùzu is its counterpart among graph databases, an open-source project faithful to the basics. Right now the idea of scalable ML on top of a database is sweeping through the machine-learning industry. As a bit of TMI, NVIDIA's cuGraph also pushes an all-in-one GPU graph concept, storing, analyzing, and predicting in one place, and markets it aggressively.

Back to the topic, which I have mentioned once before: torch_geometric (PyG) provides low-level backend interfaces so that large graph data can be stored and loaded flexibly. The main point of this release post is the result of plugging Kùzu into PyG through exactly those backend interfaces, GraphStore and FeatureStore, so that the database itself serves as the storage backend.
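Roughly, the usage looks like the sketch below. The get_torch_geometric_remote_backend call and the "Paper" node table are assumptions taken from the release post and may differ in detail; passing a (feature_store, graph_store) pair to NeighborLoader is standard PyG remote-backend usage.

```python
import kuzu
from torch_geometric.loader import NeighborLoader

# Assumed: an existing Kuzu database with a node table called "Paper".
db = kuzu.Database("./kuzu_citation_db")

# Assumption from the release post: Kuzu exposes PyG's remote-backend pair.
feature_store, graph_store = db.get_torch_geometric_remote_backend()

# PyG then samples subgraphs directly against the database instead of RAM.
loader = NeighborLoader(
    (feature_store, graph_store),
    num_neighbors=[10, 10],
    batch_size=128,
    input_nodes=("Paper", None),   # hypothetical node type, all nodes
)
for batch in loader:
    print(batch)
    break
```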

Anyone could simply graft the two together. What sets Kùzu apart is the performance work on storage and querying, through components such as the ‘buffer manager’, the ‘data types’, and the ‘query optimizer’. Underneath all of these sits a core technique called ‘factorization’. Simply put, it reduces scan time by decomposing intermediate result tables, so that the Cartesian product arising in query patterns such as (a)-[b]->(c) does not have to be fully materialized.
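For reference, this is what such a pattern looks like through the Python API; the Person/Follows schema is hypothetical and only the shape of the query matters here.

```python
import kuzu

# Hypothetical database containing Person nodes and Follows relationships.
db = kuzu.Database("./follows_db")
conn = kuzu.Connection(db)

# The (a)-[b]->(c) pattern from the text: around a highly connected node,
# the naive join behind this query approaches a Cartesian product, which
# factorization keeps in compressed form instead of materializing.
result = conn.execute(
    "MATCH (a:Person)-[b:Follows]->(c:Person) RETURN a.name, c.name"
)
while result.has_next():
    print(result.get_next())
```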

With the dominance of large models now all but confirmed, what matters next is the ability to handle data, how many terabytes or petabytes you can manage, and I think data-handling technology like the kind shown in this post is what will be in the spotlight.

DGL v1.1.0 release

https://github.com/dmlc/dgl/releases/tag/1.1.0

DGL, which had been dormant for a while, released 1.1.0. The highlights are as follows.

  • Sparse API improvement
  • Datasets for evaluating graph transformers and graph learning under heterophily
  • Modules and utilities, including Cugraph convolution modules and SubgraphX
  • Graph transformer deprecation
  • Performance improvement

The part you should pay particular attention to is the Sparse API. Handling sparse matrices is so important in graph work that it was presented at the AI Expo KAIST technology-exchange session just the day before yesterday. DGL now wraps that important part in a very simple interface, so I recommend giving it a try.
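A minimal sketch of that interface, assuming DGL 1.1's dgl.sparse module (the tiny graph below is made up): the point is that a sparse adjacency matrix composes with dense features through a plain @ operator.

```python
import torch
import dgl.sparse as dglsp

# Toy directed graph with 4 nodes and 4 edges (hypothetical data).
src = torch.tensor([0, 1, 2, 3])
dst = torch.tensor([1, 2, 3, 0])
A = dglsp.spmatrix(torch.stack([src, dst]), shape=(4, 4))  # sparse adjacency

X = torch.randn(4, 8)   # node features

# One hop of message passing as a sparse-dense matrix multiply.
H = A @ X
print(H.shape)          # torch.Size([4, 8])
```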

JAX vs. PyTorch

Original post: Twitter @zhu_zhaocheng

Code: https://colab.research.google.com/drive/1CBcPN-PdRsWpBMCjJkdDVClLq_tvf9Rg?usp=sharing#scrollTo=WfMOP-H32Wmk

Is a TPU superior to a GPU? With the original author's permission, I am sharing some interesting content. It is good material for anyone who has had a lot of stress over OOM errors. The experiment briefly compares how frameworks such as JAX (with and without JIT, just-in-time compilation) and PyTorch behave when making predictions on large graphs; a toy version of the kernel being profiled is sketched after the numbers below.

Message passing on homogeneous graphs

  • Without JIT, JAX is 8.7% slower than PyTorch.
  • With JIT, JAX is 9.6% faster than PyTorch; JAX automatically fuses the operations to reduce memory.
  • torch.compile is 49.2% faster than torch.jit.script.

Based on the T4 GPU available in Colab.

Profile on regular-sized inputs:

  • JAX: 33.3 ms
  • JAX + JIT: 17.8 ms
  • PyTorch: 30.4 ms
  • PyTorch + JIT: 29.1 ms
  • PyTorch + compile: 19.5 ms
  • PyTorch Scatter: 28.7 ms
  • PyTorch sparse tensor: 44.4 ms

Profile on large inputs:

  • JAX: OOM
  • JAX + JIT: 541 ms
  • PyTorch: OOM
  • PyTorch + JIT: OOM
  • PyTorch + compile: OOM
  • PyTorch Scatter: OOM
  • PyTorch sparse tensor: 817 ms
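For a concrete feel of the kernel being profiled, here is a toy version of homogeneous message passing in JAX (the graph and feature sizes are made up); jax.jit is what lets XLA fuse the gather and the scatter-sum, which is where both the speedup and the memory savings come from.

```python
import jax
import jax.numpy as jnp

# Toy directed graph: 4 nodes, 4 edges (src -> dst), 8-dim features.
src = jnp.array([0, 1, 2, 3])
dst = jnp.array([1, 2, 3, 0])
x = jnp.ones((4, 8))

def message_passing(x, src, dst):
    messages = x[src]                          # gather source-node features
    return jax.ops.segment_sum(                # scatter-sum into destination nodes
        messages, dst, num_segments=x.shape[0]
    )

mp_jit = jax.jit(message_passing)              # XLA fuses gather + scatter
out = mp_jit(x, src, dst)
print(out.shape)                               # (4, 8)
```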

Everything is organized simply and clearly in the Colab notebook, so when you have time I recommend going through it and thinking about why a TPU can be advantageous and how best to compare it against your own ML infrastructure.

--

Jeong Yitae

LinkedIn: jeongiitae / I'm a graph and network data enthusiast. I am always thinking about how graph data can be useful in the real world.