
CVPR

Conference on Computer Vision and Pattern Recognition

Total papers: 14,003
Papers in 2025: 1,934
Top topic (2025): Dataset & Benchmark

Paper Count Over Time

Top Topics (2025)

Topic Trajectory (Top 10)

Fraction of papers covering each topic over time
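The trajectory chart plots, for each topic, the share of that year's papers covering it. As a minimal sketch of how such fractions could be computed (the table layout, column names, and sample values below are assumptions for illustration, not taken from this dashboard):

```python
import pandas as pd

# Hypothetical paper-topic assignments: one row per (paper, topic) pair.
# Column names and values are illustrative only.
assignments = pd.DataFrame({
    "paper_id": [1, 2, 2, 3, 4, 4, 5],
    "year":     [2024, 2024, 2024, 2025, 2025, 2025, 2025],
    "topic":    ["Dataset & Benchmark", "Diffusion Models", "3D Reconstruction",
                 "Dataset & Benchmark", "Dataset & Benchmark", "Diffusion Models",
                 "3D Reconstruction"],
})

# Denominator: distinct papers per year (a paper may cover several topics).
papers_per_year = assignments.groupby("year")["paper_id"].nunique()

# Numerator: distinct papers per (year, topic).
topic_counts = assignments.groupby(["year", "topic"])["paper_id"].nunique()

# Broadcast the per-year denominator across the (year, topic) index.
fractions = topic_counts.div(papers_per_year, level="year")
print(fractions.rename("fraction").reset_index())
```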

Distinctive to CVPR

Topics over-represented at CVPR vs. the field average (2025)
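One plausible reading of "over-represented vs. the field average" is a ratio of CVPR's topic share to the same topic's average share across tracked venues; the dashboard's exact formula is not stated here, so the sketch below is only an assumption, with made-up venue names and numbers:

```python
import pandas as pd

# Illustrative 2025 topic shares per venue; all values are invented.
shares = pd.DataFrame({
    "venue": ["CVPR", "CVPR", "ICLR", "ICLR", "NeurIPS", "NeurIPS"],
    "topic": ["Dataset & Benchmark", "Diffusion Models"] * 3,
    "share": [0.12, 0.09, 0.05, 0.04, 0.06, 0.05],
})

# Field average: mean share of each topic across all tracked venues.
field_avg = shares.groupby("topic")["share"].mean()

# CVPR's own shares, indexed by topic so division aligns by topic.
cvpr = shares[shares["venue"] == "CVPR"].set_index("topic")["share"]

# Over-representation as a simple ratio; values above 1 indicate topics
# more common at CVPR than across the field.
over_representation = (cvpr / field_avg).sort_values(ascending=False)
print(over_representation)
```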

Most Cited Papers

Year | Title | Citations | Links
2021 | High-Resolution Image Synthesis with Latent Diffusion Models | 21,909 | S2 · arXiv
2019 | Momentum Contrast for Unsupervised Visual Representation Learning | 14,254 | S2 · arXiv
2021 | Masked Autoencoders Are Scalable Vision Learners | 10,415 | S2 · arXiv
2022 | YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors | 9,437 | S2 · arXiv
2019 | nuScenes: A Multimodal Dataset for Autonomous Driving | 7,395 | S2 · arXiv
2022 | A ConvNet for the 2020s | 7,304 | S2 · arXiv
2019 | Analyzing and Improving the Image Quality of StyleGAN | 6,705 | S2 · arXiv
2019 | EfficientDet: Scalable and Efficient Object Detection | 6,432 | S2 · arXiv
2019 | ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks | 5,401 | S2 · arXiv
2019 | Generalized Intersection Over Union: A Metric and a Loss for Bounding Box Regression | 5,258 | S2 · arXiv
2019 | Deep High-Resolution Representation Learning for Human Pose Estimation | 4,836 | S2 · arXiv
2020 | Exploring Simple Siamese Representation Learning | 4,751 | S2 · arXiv
2023 | Improved Baselines with Visual Instruction Tuning | 4,381 | S2 · arXiv
2021 | Coordinate Attention for Efficient Mobile Network Design | 4,368 | S2 · arXiv
2019 | DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation | 4,290 | S2 · arXiv
2020 | Taming Transformers for High-Resolution Image Synthesis | 3,891 | S2 · arXiv
2022 | DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation | 3,850 | S2 · arXiv
2019 | Scalability in Perception for Autonomous Driving: Waymo Open Dataset | 3,701 | S2 · arXiv
2019 | GhostNet: More Features From Cheap Operations | 3,684 | S2 · arXiv
2020 | Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers | 3,464 | S2 · arXiv