
Graph Transformer for Label Placement

Jingwei Qu - Southwest University, Beibei, China

Pingshun Zhang - Southwest University, Chongqing, China

Enyu Che - Southwest University, Beibei, China

Yinan Chen - College of Computer and Information Science, School of Software, Southwest University, Chongqing, China

Haibin Ling - Stony Brook University, New York, United States

Room: Bayshore II

2024-10-17T15:03:00Z
Exemplar figure, described by the caption below:
GNN-driven label placement. Given a graphic and raw label information, the Label Placement Graph Transformer (LPGT) predicts the layout of a set of labels. First, a complete graph is constructed to capture the relationships between labels; its node and edge features are generated from the label information and image features. Next, taking this graph as input, LPGT iteratively learns node displacements through a sequence of GNN modules: each module updates the graph, which then serves as input to the next module.
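To make the caption's pipeline concrete, below is a minimal PyTorch sketch of iterative displacement prediction on a complete label graph. It illustrates the idea only and is not the authors' implementation: the name `DisplacementModule`, the feature sizes, and the three-step loop are all hypothetical choices.

```python
# Hypothetical sketch of iterative GNN refinement over a complete label graph;
# module names and dimensions are assumptions, not LPGT's actual architecture.
import torch
import torch.nn as nn

class DisplacementModule(nn.Module):
    """One refinement step: message passing over the complete graph,
    then a small head that predicts a 2D displacement per label."""
    def __init__(self, node_dim, edge_dim, hidden=128):
        super().__init__()
        self.msg = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, node_dim))
        self.head = nn.Linear(node_dim, 2)  # (dx, dy) for each label

    def forward(self, x, e, src, dst):
        # x: [N, node_dim] label nodes; e: [E, edge_dim] pairwise edges
        m = self.msg(torch.cat([x[src], x[dst], e], dim=-1))
        x = x + torch.zeros_like(x).index_add_(0, dst, m)  # residual node update
        return x, self.head(x)

N = 6                                            # six labels in the graphic
src, dst = torch.meshgrid(torch.arange(N), torch.arange(N), indexing="ij")
mask = src != dst                                # complete graph, no self-loops
src, dst = src[mask], dst[mask]
x = torch.randn(N, 64)                           # node feats (label info + image feats)
e = torch.randn(src.numel(), 32)                 # edge feats (label-label relations)
pos = torch.rand(N, 2)                           # initial label positions

modules = nn.ModuleList([DisplacementModule(64, 32) for _ in range(3)])
for gnn in modules:                              # each module refines the layout
    x, delta = gnn(x, e, src, dst)               # updated graph feeds the next module
    pos = pos + delta
```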
Keywords

Label placement, Graph neural network, Transformer

Abstract

Placing text labels is a common way to explain key elements in a given scene. Given a graphic input and the original label information, how to place the labels to meet both geometric and aesthetic requirements is an open and challenging problem. Geometry-wise, traditional rule-driven solutions struggle to capture the complex interactions between labels, let alone account for graphical/appearance content. Aesthetics-wise, preparing training/evaluation data requires nontrivial effort and design expertise, which has resulted in a lack of decent datasets for learning-based methods. To address these challenges, we formulate the task with a graph representation, where nodes correspond to labels and edges to interactions between labels, and treat label placement as a node position prediction problem. With this novel representation, we design a Label Placement Graph Transformer (LPGT) to predict label positions. Specifically, edge-level attention, conditioned on node representations, is introduced to reveal potential relationships between labels. To integrate graphic/image information, we design a feature aligning strategy that efficiently extracts deep features for nodes and edges. To address the dataset issue, we collect commercial illustrations with professionally designed label layouts from household appliance manuals and annotate them with useful information, yielding a novel dataset named the Appliance Manual Illustration Labels (AMIL) dataset. In a thorough evaluation on AMIL, our LPGT solution achieves promising label placement performance compared with popular baselines. Our algorithm is available at https://github.com/JingweiQu/LPGT.
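The abstract's key mechanism, edge-level attention conditioned on node representations, is described in a single sentence; the sketch below is a hedged reading of that sentence in PyTorch, not LPGT's actual design. `EdgeAttention`, its linear projections, and the per-destination softmax are all assumptions.

```python
# Hedged sketch of edge-level attention conditioned on node representations;
# the class and its internals are hypothetical, not LPGT's exact formulation.
import torch
import torch.nn as nn

class EdgeAttention(nn.Module):
    """Attention over edges: each edge's score is conditioned on the
    representations of its two endpoint nodes as well as its own feature."""
    def __init__(self, node_dim, edge_dim):
        super().__init__()
        self.score = nn.Linear(2 * node_dim + edge_dim, 1)
        self.value = nn.Linear(edge_dim, edge_dim)

    def forward(self, x, e, src, dst):
        # x: [N, node_dim] label nodes; e: [E, edge_dim] edges; src/dst: [E]
        logits = self.score(torch.cat([x[src], x[dst], e], dim=-1)).squeeze(-1)
        w = torch.exp(logits - logits.max())            # stabilized exponentials
        denom = torch.zeros(x.size(0)).index_add_(0, dst, w)
        alpha = w / denom[dst].clamp_min(1e-9)          # softmax per destination node
        ctx = torch.zeros(x.size(0), e.size(1))
        ctx.index_add_(0, dst, alpha.unsqueeze(-1) * self.value(e))
        return alpha, ctx   # attention weight per edge, attended edge context per node
```

For the feature aligning strategy that extracts deep features for nodes and edges, one plausible realization (an assumption on our part; the paper's strategy may differ) pools a CNN feature map over each label's box with torchvision's RoIAlign:

```python
# Assumed feature-extraction step: RoIAlign over label boxes, not confirmed by the paper.
import torch
from torchvision.ops import roi_align

fmap = torch.randn(1, 256, 64, 64)                # CNN feature map of a 512x512 image
boxes = torch.tensor([[0, 4.0, 4.0, 20.0, 12.0],  # [batch_idx, x1, y1, x2, y2] per label
                      [0, 30.0, 8.0, 50.0, 16.0]])
node_feats = roi_align(fmap, boxes, output_size=(7, 7), spatial_scale=64 / 512)
node_feats = node_feats.flatten(1)                # one deep feature vector per label
```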