TelmoMenezes 5 years ago

This is very interesting work. Regarding:

"However, graphs are inherently combinatorial structures made of discrete parts like nodes and edges, while many common ML methods, like neural networks, favor continuous structures, in particular vector representations."

I apologize in advance for a bit of self-promotion, but I would like to point out my own approach, which instead favors discrete ML methods to discover symbolic generators of networks. That is to say, small programs that are capable of generating synthetic networks whose topological and other characteristics are similar to those of some empirically observed network. If you happen to be interested:

https://www.nature.com/articles/srep06284

http://www.telmomenezes.net/2014/09/using-evolutionary-compu...

I am not saying that my approach is better; it depends on the goal. If anything, I am increasingly a believer in hybrid (symbolic-statistical) methods.
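To make the "small program as generator" idea concrete, here is a toy sketch (this is not the paper's actual generator language, and the rule and function names are invented for illustration): a symbolic generator is a tiny rule, here `w(node) = degree(node) + 1`, that grows a synthetic network one node at a time. A search procedure could then look for rules whose output matches an observed network's topology.

```python
# Toy "symbolic generator": grow a network where each new node attaches
# to an existing node with probability proportional to a symbolic weight
# rule. Names (`grow`, `weight_rule`) are made up for this sketch.
import random

def grow(n_nodes, weight_rule, seed=0):
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}      # start from a single edge (0, 1)
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        # Pick an attachment target with probability proportional to the rule.
        targets = list(degree)
        weights = [weight_rule(degree[t]) for t in targets]
        target = rng.choices(targets, weights=weights)[0]
        edges.append((new, target))
        degree[new] = 1
        degree[target] += 1
    return edges, degree

# This particular rule yields preferential attachment; other rules
# would yield other topologies.
edges, degree = grow(100, lambda d: d + 1)
print(len(edges))  # 99 edges for 100 nodes
```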

lmeyerov 5 years ago

Focusing on multiple labels (overlapping communities) is great! It can also be framed as changing the problem to edge labeling instead of node labeling.

Hypergraph modeling is another simple way to handle the common cases of entity overlap, like in event data. In terms of reusing graph tech, it just means you make a bipartite graph between samples/events and features. That is one of the most common data transforms folks toggle on/off with Graphistry visuals.
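A minimal, dependency-free sketch of that bipartite transform (the `events` data and variable names are invented for illustration): each event is a hyperedge connecting several entities, and the bipartite graph puts events on one side and entities on the other.

```python
# Hypergraph -> bipartite graph: events (hyperedges) on one side,
# entities (features) on the other. Data is invented for illustration.
from itertools import combinations

events = {
    "e1": ["alice", "bob"],
    "e2": ["bob", "carol", "dave"],
    "e3": ["alice", "dave"],
}

# Bipartite edge list: one edge per (event, member) pair.
bipartite_edges = [(e, x) for e, members in events.items() for x in members]

# Projecting back onto entities recovers a plain co-occurrence graph,
# so off-the-shelf node/edge-labeling methods still apply.
projected = {
    tuple(sorted(pair))
    for members in events.values()
    for pair in combinations(members, 2)
}

print(len(bipartite_edges), len(projected))  # 7 bipartite edges, 5 projected
```

The nice part of the design is that nothing downstream has to know about hypergraphs: standard graph tooling runs on the bipartite form directly.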

I'm guessing it will still be another couple of years before a notion of standard practice emerges for graph learning. Very cool time!

pagutierrezn 5 years ago

Why this site can't be seen in Firefox Focus is a mystery to me