<aside> 📌
The aim of this dissertation is to redesign the attention mechanism within Graph Transformers to improve their computational efficiency and scalability on large graphs.
This work addresses a key limitation of Graph Transformers: their computational cost and memory usage grow rapidly as graph size increases. The goal is to make these models practical for real-world applications and to demonstrate the scalability and efficiency of the proposed techniques.
Specifically, the research will focus on enhancing attention mechanisms, such as sparse attention and localized attention, to handle large-scale graphs without significant compromises in performance.
</aside>
Research Question: How can attention mechanisms in Graph Transformers be optimized to improve scalability and efficiency on large graphs while maintaining accuracy?
Sparse Attention: Investigating methods such as Linformer or Reformer to reduce the quadratic complexity of full self-attention (a sketch follows this list).
Localized Attention: Exploring approaches that restrict each node's attention to a local neighborhood or region of the graph, rather than attending globally (a second sketch follows this list).
Scalability Testing: Evaluating the optimized attention mechanisms on large real-world graphs, measuring predictive accuracy alongside runtime and memory usage.
Interpretability: Ensuring that the optimized model remains interpretable, allowing insights into which parts of the graph influence model predictions.
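As a rough illustration of the sparse-attention direction, the snippet below is a minimal, self-contained sketch of Linformer-style low-rank attention over node features, assuming a PyTorch implementation. The class name, the fixed `max_nodes` padding length, and the compression size `k` are illustrative assumptions, not a finalized design.

```python
import torch
import torch.nn as nn


class LowRankNodeAttention(nn.Module):
    """Linformer-style attention over graph nodes.

    Keys and values are compressed along the node axis from max_nodes to k
    landmarks, so the score matrix is (n x k) instead of (n x n), replacing
    the quadratic cost of full self-attention with a cost linear in n.
    """

    def __init__(self, dim: int, max_nodes: int, k: int = 64):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        # Learned compression of the node axis; inputs must be padded to max_nodes.
        self.compress = nn.Linear(max_nodes, k, bias=False)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, max_nodes, dim) node features, padded to a fixed size.
        q = self.q_proj(x)
        keys = self.compress(self.k_proj(x).transpose(1, 2)).transpose(1, 2)    # (batch, k, dim)
        values = self.compress(self.v_proj(x).transpose(1, 2)).transpose(1, 2)  # (batch, k, dim)
        scores = torch.softmax(q @ keys.transpose(1, 2) * self.scale, dim=-1)   # (batch, max_nodes, k)
        return scores @ values                                                  # (batch, max_nodes, dim)
```

In this sketch the attention cost scales as O(n·k) rather than O(n²); whether a fixed compression length is appropriate for variable-sized graphs is one of the questions the proposed research would need to address.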
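As a sketch of the localized-attention direction, the function below restricts each node's attention to its one-hop neighborhood using a dense adjacency mask. The dense (n x n) mask is for clarity only and would not itself scale; a practical implementation would likely use edge-wise (sparse) softmax. The function name and the single-head, unbatched setup are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F


def localized_attention(x: torch.Tensor, adj: torch.Tensor,
                        w_q: torch.Tensor, w_k: torch.Tensor,
                        w_v: torch.Tensor) -> torch.Tensor:
    """Single-head attention where node i attends only to neighbors j
    (adj[i, j] == 1) and to itself.

    x:   (n, d) node features
    adj: (n, n) binary adjacency matrix
    w_q, w_k, w_v: (d, d) projection weights
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v                      # (n, d) each
    scores = (q @ k.T) / (k.shape[-1] ** 0.5)                # (n, n) raw scores
    # Mask out non-neighbors so attention stays local; keep self-loops.
    allowed = adj.bool() | torch.eye(adj.shape[0], dtype=torch.bool, device=adj.device)
    scores = scores.masked_fill(~allowed, float("-inf"))
    return F.softmax(scores, dim=-1) @ v                     # (n, d)


# Tiny usage example on a 4-node path graph.
n, d = 4, 8
x = torch.randn(n, d)
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float32)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = localized_attention(x, adj, w_q, w_k, w_v)  # (4, 8)
```

A side benefit of the masked formulation is that the retained attention weights are directly tied to graph structure, which supports the interpretability goal above: the non-zero entries show which neighbors influenced each node's representation.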