
About this Research Topic

Abstract Submission Deadline 11 October 2023
Manuscript Submission Deadline 11 February 2024

Graph data, which captures intricate relationships and interactions between entities, has become increasingly prevalent across diverse domains, including social networks, recommendation systems, biological networks, healthcare informatics, and transportation networks. This growth has spurred demand for advanced machine-learning algorithms tailored to graph-structured data. These algorithms, comprising both traditional network embedding-based approaches and graph neural network-based methods, aim to uncover latent patterns, enable accurate predictions, and extract valuable insights from the interconnected nature of graph data, thereby supporting better decision-making across these domains.
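For readers less familiar with the methods mentioned above, the following is a minimal illustrative sketch (not part of this call) of a single GCN-style message-passing step, one building block of the graph neural network methods this paragraph refers to. The toy graph, feature dimensions, and NumPy implementation are assumptions chosen purely for illustration.

import numpy as np

def gcn_layer(adj, x, w):
    # One symmetric-normalized graph-convolution step:
    # H' = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)
    a_hat = adj + np.eye(adj.shape[0])                        # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))    # normalize by node degree
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ x @ w, 0.0)

# Hypothetical 4-node graph with 3-dimensional node features mapped to 2 dimensions.
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 1.],
                [0., 1., 0., 0.],
                [0., 1., 0., 0.]])
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))    # node feature matrix X
w = rng.normal(size=(3, 2))    # weight matrix W (learned in practice)
print(gcn_layer(adj, x, w))    # node embeddings after one propagation step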

However, despite the remarkable empirical success and commercial value of existing graph machine learning efforts, notable drawbacks have emerged. These include susceptibility to data noise, data scarcity, and adversarial attacks; limited interpretability of model predictions; amplification of societal bias inherent in the training data; and leakage of private information, all of which can harm users and society. For instance, prevailing methods often make decisions in a black-box manner, preventing end-users from understanding and trusting the reasoning behind those decisions. Furthermore, many commonly used approaches have been found to be vulnerable to malicious attacks, biased against individuals from specific demographic groups, or prone to information leakage. Consequently, a fundamental and largely underexplored research question remains: how can we develop trustworthy learning algorithms on graphs?

In this Research Topic, we cordially invite submissions dedicated to enhancing the trustworthiness of machine learning on graphs, covering critical aspects such as robustness, fairness, interpretability, and privacy. Potential topics include, but are not limited to:
● Explainable and interpretable graph machine learning
● Causality-aware graph machine learning
● Fairness and bias in graph machine learning
● Out-of-distribution detection and generalization on graphs
● Robustness against data noise, data scarcity, and adversarial attacks on graphs
● Responsible and privacy-preserving techniques in graph learning
● Federated graph neural networks
● Trustworthy graph machine learning applications (e.g. recommendation systems, urban computing)

Keywords: Graph Neural Networks, Safe and Robust Graph Representation Learning, Privacy-aware Graph Learning, Federated Graph Learning, Trustworthy Graph Learning in Recommendation


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

