Qonvolution: Towards Learning High-Frequency Signals with Queried Convolution

1Samsung Research America, AI Center – Mountain View, CA, USA
2Samsung Research, AI Center – Toronto, ON, Canada
*Denotes equal contributions

BibTeX

@article{kumar2025qonvolution,
    title={Qonvolution: Towards Learning of High-Frequency Signals with Queried Convolution},
    author={Kumar, Abhinav and Aumentado-Armstrong*, Tristan and Valkov*, Lazar and Sharma, Gopal and Levinshtein, Alex and Grzeszczuk, Radek and Kumar, Suren},
    journal={arXiv preprint arXiv:2512.12898},
    year={2025}
}

Abstract

Accurately learning high-frequency signals is a challenge in computer vision and graphics, as neural networks often struggle with these signals due to spectral bias or optimization difficulties. While current techniques like Fourier encodings have made great strides in improving performance, there remains room for improvement when networks are presented with high-frequency information. This paper introduces Queried Convolutions (Qonvolutions), a simple yet powerful modification that exploits the neighborhood properties of convolution. A Qonvolution convolves a low-frequency signal with queries (such as coordinates) to enhance the learning of intricate high-frequency signals. We empirically demonstrate that Qonvolutions improve performance across a variety of high-frequency learning tasks crucial to both the computer vision and graphics communities, including 1D regression, 2D super-resolution, 2D image regression, and novel view synthesis (NVS). In particular, by combining Gaussian splatting with Qonvolutions for NVS, we showcase state-of-the-art performance on real-world complex scenes, even outperforming powerful radiance field models in image quality.

1D Regression Task

1D Regression Results.

1D Regression Results. QNN outperforms MLP-based architectures, including those with Fourier encodings, at regressing high-frequency signals. This simple experiment compares networks that predict a high-frequency 1D signal from their inputs: the standard MLP-based networks, including those with Fourier encodings, take only the 1D coordinates as queries, whereas QNN replaces the linear layer with a 1D convolutional layer and takes the low-frequency (LF) signal in addition to the 1D queries.
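
For concreteness, here is a minimal PyTorch sketch of the 1D setup described above. The layer widths, kernel size, depth, and the channel-wise stacking of queries and the LF signal are our illustrative assumptions, not the paper's reference implementation.

import torch
import torch.nn as nn

class QNN1D(nn.Module):
    """Toy queried-convolution network for the 1D regression task.

    The query coordinates and the low-frequency (LF) signal are stacked as
    input channels, and Conv1d layers take the place of the MLP's linear
    layers. All hyperparameters here are illustrative guesses.
    """
    def __init__(self, hidden=64, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.net = nn.Sequential(
            nn.Conv1d(2, hidden, kernel_size, padding=pad),   # channels: [query coord, LF signal]
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size, padding=pad),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size, padding=pad),   # predicted high-frequency signal
        )

    def forward(self, coords, lf_signal):
        # coords, lf_signal: (batch, num_points) -> (batch, 2, num_points)
        x = torch.stack([coords, lf_signal], dim=1)
        return self.net(x).squeeze(1)                         # (batch, num_points)

# Usage: regress a high-frequency signal from coordinates plus its LF version.
coords = torch.linspace(0, 1, 512).unsqueeze(0)   # (1, 512) query coordinates
lf = torch.sin(2 * torch.pi * 2 * coords)         # low-frequency input signal
pred = QNN1D()(coords, lf)                        # (1, 512) high-frequency prediction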

2D Image Super Resolution (SR) Task

2D SR Results.

SR Results on DIV2K validation images. Adding QNN to Real-ESRGAN faithfully reconstructs high-frequency details in various regions and yields visually higher-quality synthesis. We highlight the differences in the inset figures.

3D Novel View Synthesis Task

3D NVS Results.

NVS Results. We provide examples of the NVS task using the 3DGS (Kerbl et al., 2023) baseline on multiple datasets. Adding QNN to 3DGS faithfully reconstructs high-frequency details in various regions and yields visually higher-quality synthesis. We highlight the differences in the inset figures.
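
Both the SR and NVS captions above describe attaching QNN to the output of an existing pipeline (Real-ESRGAN or 3DGS). The sketch below shows one plausible 2D variant of that recipe, where the rendered or upscaled image serves as the low-frequency input and normalized pixel coordinates serve as the queries; the residual design, widths, and kernel sizes are our assumptions rather than the module used in the paper.

import torch
import torch.nn as nn

class QNN2D(nn.Module):
    """Illustrative 2D queried-convolution refinement head.

    Takes a base RGB image (e.g. an SR output or a 3DGS render) as the
    low-frequency signal plus per-pixel coordinate queries, and predicts a
    residual intended to carry high-frequency detail.
    """
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 2, hidden, 3, padding=1),   # RGB + (x, y) query channels
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 3, 3, padding=1),
        )

    def forward(self, image):
        # image: (B, 3, H, W); build normalized per-pixel coordinate queries
        b, _, h, w = image.shape
        ys = torch.linspace(0, 1, h, device=image.device)
        xs = torch.linspace(0, 1, w, device=image.device)
        yy, xx = torch.meshgrid(ys, xs, indexing="ij")
        queries = torch.stack([xx, yy]).expand(b, 2, h, w)
        return image + self.net(torch.cat([image, queries], dim=1))  # residual refinement

rendered = torch.rand(1, 3, 128, 128)   # stand-in for an SR or 3DGS output
refined = QNN2D()(rendered)             # (1, 3, 128, 128)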

Related Works

"We stand on the shoulders of giants." (William of Conches, 1123)
The following are some great works on learning high-frequency signals and details:
1. Encodings: Fourier encodings and hash grids map the input coordinates to higher-dimensional features for an MLP (see the sketch after this list).
2. Activations: SIREN, sinc, QIREN and FINER change activation functions for MLPs.
3. Frequency Domain Methods: Lee et al. predict Fourier series coefficients, while Cai et al. predict phase-shifted signals for MLPs.
4. Frequency-weighted Loss: Fre-GS applies frequency-weighted losses during training.
5. Network Ensembles: Galerkin neural networks use multiple networks to approximate high-frequency signals.
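
For item 1, as referenced above, here is a quick sketch of the standard Fourier-feature encoding: input coordinates are mapped to a higher-dimensional vector of sines and cosines before being fed to the MLP. The power-of-two frequency schedule is the common convention; treat the exact choice as illustrative.

import torch

def fourier_encode(coords, num_freqs=8):
    """Map coordinates to [sin(2^k * pi * x), cos(2^k * pi * x)] features.

    coords: (..., d) tensor of input coordinates.
    Returns a (..., 2 * num_freqs * d) higher-dimensional encoding for an MLP.
    """
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi    # (num_freqs,)
    angles = coords.unsqueeze(-1) * freqs                # (..., d, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                     # (..., 2 * num_freqs * d)

x = torch.rand(1024, 2)        # e.g. 2D pixel coordinates
features = fourier_encode(x)   # (1024, 32) MLP input instead of raw coordinates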

There are probably many more by the time you are reading this.