ASTR: Adaptive Spot-Guided Transformer for Consistent Local Feature Matching

CVPR 2023


Jiahuan Yu*, Jiahao Chang*, Jianfeng He, Tianzhu Zhang, Feng Wu

University of Science and Technology of China
* denotes equal contribution

Thanks to LoFTR for their great work and well-organized code.
Also see our other work on local feature matching, Structured Epipolar Matcher (SEM),
accepted by the CVPR 2023 Image Matching Workshop.

Abstract


TL;DR: ASTR proposes a novel attention mechanism (spot-guided attention) to maintain the local consistency of feature matching, while handling large scale variations by leveraging computed depth information.


Local feature matching aims at finding correspondences between a pair of images. Although current detector-free methods leverage Transformer architectures to achieve impressive performance, few works consider maintaining local consistency. Meanwhile, most methods struggle with large scale variations. To deal with the above issues, we propose the Adaptive Spot-Guided Transformer (ASTR) for local feature matching, which jointly models local consistency and scale variations in a unified coarse-to-fine architecture. The proposed ASTR enjoys several merits. First, we design a spot-guided aggregation module to avoid interference from irrelevant areas during feature aggregation. Second, we design an adaptive scaling module to adjust the size of grids according to the calculated depth information at the fine stage. Extensive experimental results on five standard benchmarks demonstrate that our ASTR performs favorably against state-of-the-art methods.
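To give a rough intuition for the spot-guided aggregation idea described above, here is a minimal NumPy sketch. It is an illustrative assumption, not the authors' implementation: for each query feature, the top-k most similar target locations are treated as "spots", and attention is restricted to small neighborhoods around those spots so that irrelevant areas do not pollute the aggregated message.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def spot_guided_attention(query, feat_b, k=2, radius=1):
    """Illustrative sketch (not the paper's exact module).
    query:  (D,) one coarse query feature
    feat_b: (H, W, D) target feature map
    Steps: score every target location; keep the top-k locations
    ("spots"); attend only over the (2*radius+1)^2 neighborhood
    of each spot instead of the whole image."""
    H, W, D = feat_b.shape
    flat = feat_b.reshape(-1, D)
    scores = flat @ query / np.sqrt(D)
    spots = np.argsort(scores)[-k:]          # top-k matching candidates
    # collect in-bounds neighborhood indices around each spot
    idx = set()
    for s in spots:
        y, x = divmod(int(s), W)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy < H and 0 <= xx < W:
                    idx.add(yy * W + xx)
    idx = np.array(sorted(idx))
    attn = softmax(scores[idx])              # attention restricted to spot regions
    return attn @ flat[idx]                  # aggregated message, shape (D,)

# toy usage: a query close to the feature at location (3, 5)
rng = np.random.default_rng(0)
fb = rng.normal(size=(8, 8, 16)).astype(np.float32)
q = fb[3, 5] + 0.01 * rng.normal(size=16)
msg = spot_guided_attention(q, fb, k=2, radius=1)
print(msg.shape)  # (16,)
```

In the actual coarse-to-fine pipeline this restriction is applied inside Transformer layers; the sketch only shows the masking-by-spots principle on a single query.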


Overview video (4:44)



Pipeline Overview



Performance



Visualizations



Citation


@inproceedings{yu2023adaptive,
  title={Adaptive Spot-Guided Transformer for Consistent Local Feature Matching},
  author={Yu, Jiahuan and Chang, Jiahao and He, Jianfeng and Zhang, Tianzhu and Yu, Jiyang and Wu, Feng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21898--21908},
  year={2023}
}