
Asymptotically Fast Clebsch-Gordan Tensor Products with Vector Spherical Harmonics

Foundational AI

Authors

YuQing Xie, Ameya Daigavane, Mit Kotak, Tess Smidt

Abstract

$E(3)$-equivariant neural networks have proven effective in a wide range of 3D modeling tasks. A fundamental operation of such networks is the tensor product, which allows interaction between different feature types. Because this operation scales poorly, there has been considerable work on accelerating it. However, Xie and Price recently pointed out that most speedups come from a reduction in expressivity rather than true algorithmic improvements in computing Clebsch-Gordan tensor products. A modification of the Gaunt tensor product can give a true asymptotic speedup but is incomplete and misses many interactions. In this work, we provide the first complete algorithm that gives a true asymptotic speedup for Clebsch-Gordan tensor products. For the full CGTP, our algorithm brings the runtime complexity from the naive $O(L^6)$ down to $O(L^4\log^2 L)$, close to the lower bound of $O(L^4)$. We first show how generalizing fast Fourier-based convolution naturally leads to the previously proposed Gaunt tensor product. To remedy antisymmetry issues, we generalize from scalar-valued signals to irrep-valued signals, giving us tensor spherical harmonics. We prove a generalized Gaunt formula for the tensor harmonics. Finally, we show that only vector-valued signals are needed to recover the interactions the Gaunt tensor product misses.

Concepts

equivariant neural networks, Clebsch-Gordan tensor product, group theory, vector spherical harmonics, geometric deep learning, symmetry preservation, generalized Gaunt formula, spectral methods, scalability, feature extraction, molecular dynamics

The Big Picture

Imagine trying to describe the shape of a molecule, not just which atoms are where, but how it would look from every possible angle. To do that faithfully, a neural network needs to “speak the language” of rotations and reflections, treating space the way physics actually works.

That’s what E(3)-equivariant neural networks do: AI architectures built to respect the symmetries of 3D space (rotations, translations, and reflections). They’re extremely effective for molecular modeling, protein structure prediction, and materials discovery.

But there’s a bottleneck buried in the math. At the heart of these networks lives the Clebsch-Gordan tensor product (CGTP), a procedure that lets features at different levels of geometric detail interact with each other. The problem? It’s slow. In the standard formulation, its computational cost scales as O(L⁶), where L is the maximum level of detail the network captures. Double L, and you wait 64 times longer.
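That scaling claim is easy to check numerically. Below is an illustrative back-of-the-envelope comparison of the two cost models discussed in this article (constant factors dropped; this is a sketch of the asymptotics, not a benchmark of any implementation):

```python
import math

# Illustrative only: asymptotic cost models with constant factors dropped.
# Naive CGTP grows like L^6; the new algorithm grows like L^4 * log^2(L).
for L in (4, 8, 16, 32):
    naive = L ** 6
    fast = L ** 4 * math.log2(L) ** 2
    print(f"L={L:2d}  naive={naive:>10d}  fast={fast:>8.0f}  ratio={naive / fast:6.1f}")

# Doubling L multiplies the naive cost by 2^6 = 64.
assert (8 ** 6) // (4 ** 6) == 64
```

The ratio column grows quickly with L, which is why the bottleneck bites hardest exactly when a network tries to capture finer angular detail.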

For years, researchers tried to speed this up. A recent study by Xie and Price showed that nearly every proposed shortcut was quietly discarding information rather than genuinely computing faster.

Now, a team from MIT has found a way through. They designed the first algorithm that delivers true speedups without sacrificing any expressive power, reducing the runtime from O(L⁶) to O(L⁴ log² L), close to the theoretical floor of O(L⁴).

Key Insight: This is the first complete tensor product algorithm that achieves genuine algorithmic speedup, not by throwing away interactions, but by computing all of them smarter using vector spherical harmonics as the mathematical key.

How It Works

Start with a signal-processing analogy. Convolving two functions on a circle becomes simple pointwise multiplication once you move to frequency space; that's the magic of Fourier transforms. Can you do something similar on a sphere?

The answer leads to the Gaunt tensor product (GTP), an operation that projects signals onto spherical harmonics (mathematical functions describing patterns on a sphere’s surface, much like Fourier modes describe patterns on a circle), multiplies them pointwise, and transforms back. This is genuinely faster than naive CGTP.
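The circle version of this transform, multiply-pointwise, transform-back recipe is the classical convolution theorem. A generic NumPy sketch (illustrating the analogy, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# Direct circular convolution on the circle: O(n^2).
direct = np.array([sum(f[j] * g[(k - j) % n] for j in range(n))
                   for k in range(n)])

# Transform, multiply pointwise in frequency space, transform back: O(n log n).
fast = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

assert np.allclose(direct, fast)
```

GTP replaces the FFT with a fast spherical harmonic transform and does exactly the same thing: project, multiply pointwise on the sphere, project back.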

Figure 1

But GTP is incomplete. It misses certain interactions. Most glaringly, it can never simulate a cross product.

The reason is subtle. Under a mirror reflection (spatial inversion), a degree-l spherical harmonic picks up a factor of (−1)^l, so the product of two scalar harmonics can only produce output components where l₁ + l₂ + l₃ is even; the Gaunt coefficients for odd sums vanish identically. The cross product is exactly such an odd coupling (l₁ = l₂ = l₃ = 1): it produces a pseudovector, a quantity that flips sign in the mirror, like the difference between a left-handed and a right-handed twist. GTP, built on scalar harmonics, is blind to these interactions by construction.
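This selection rule can be verified with SymPy's exact Gaunt-coefficient routine (a standard library function, used here purely as an illustration): the integral of three scalar harmonics vanishes whenever l₁ + l₂ + l₃ is odd, which is precisely the channel the cross product lives in.

```python
from sympy.physics.wigner import gaunt

# Gaunt coefficient = integral of three scalar spherical harmonics.

# l1 + l2 + l3 = 3 (odd): the cross-product channel (1, 1, 1) vanishes.
assert gaunt(1, 1, 1, 0, 0, 0) == 0

# l1 + l2 + l3 = 4 (even): a scalar-harmonic coupling that survives.
assert gaunt(1, 1, 2, 0, 0, 0) != 0
```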

The fix is to upgrade. Instead of scalar-valued signals on the sphere, the authors move to tensor spherical harmonics, signals that carry not just a number at each point on the sphere but a small vector pointing in some direction. They show that vector-valued signals are sufficient to recover every interaction that scalar harmonics miss. The resulting algorithm is called Vector Signal Tensor Product (VSTP).

Underlying VSTP is a generalized Gaunt formula, a new identity governing how tensor spherical harmonics multiply. The classical Gaunt formula (which underpins GTP) describes how scalar harmonics of degrees l₁ and l₂ decompose when multiplied. The new formula extends this to tensor harmonics, providing the coefficients VSTP needs while proving it captures all CGTP interactions.
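For reference, the classical scalar Gaunt formula (a textbook identity, written here in terms of Wigner 3j symbols, not the paper's generalized version) reads:

```latex
\int Y_{l_1 m_1}(\hat{r}) \, Y_{l_2 m_2}(\hat{r}) \, Y_{l_3 m_3}(\hat{r}) \, d\Omega
= \sqrt{\frac{(2l_1+1)(2l_2+1)(2l_3+1)}{4\pi}}
  \begin{pmatrix} l_1 & l_2 & l_3 \\ 0 & 0 & 0 \end{pmatrix}
  \begin{pmatrix} l_1 & l_2 & l_3 \\ m_1 & m_2 & m_3 \end{pmatrix}
```

The first 3j symbol vanishes unless l₁ + l₂ + l₃ is even, which is the selection rule behind GTP's missing interactions; the paper's generalized formula plays the analogous role for tensor harmonics without that blind spot.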

The runtime improvement follows from fast spherical harmonic transforms:

  • Naive CGTP: O(L⁶), summing over all combinations of angular momentum indices
  • CGTP with sparsity: O(L⁵), exploiting known zero coefficients
  • Gaunt tensor product: O(L⁴ log² L), using fast SH transforms, but incomplete
  • VSTP (this work): O(L⁴ log² L), same asymptotic speed, now complete
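To make the O(L⁶) count concrete, here is a minimal naive CGTP sketch built on SymPy's exact Clebsch-Gordan coefficients (the function name and dict-of-arrays layout are illustrative assumptions, not the paper's implementation): six nested loops over (l₁, l₂, l₃, m₁, m₂, m₃) give the naive cost.

```python
import numpy as np
from sympy import S
from sympy.physics.quantum.cg import CG

def naive_cgtp(x, y, L):
    """Naive Clebsch-Gordan tensor product (illustrative sketch).

    x, y: dicts mapping degree l -> length-(2l+1) array of m-components.
    Six nested loops over (l1, l2, l3, m1, m2, m3) give the O(L^6) cost.
    """
    out = {l3: np.zeros(2 * l3 + 1) for l3 in range(2 * L + 1)}
    for l1 in range(L + 1):
        for l2 in range(L + 1):
            for l3 in range(abs(l1 - l2), l1 + l2 + 1):
                for m1 in range(-l1, l1 + 1):
                    for m2 in range(-l2, l2 + 1):
                        for m3 in range(-l3, l3 + 1):
                            # The coefficient is zero unless m3 = m1 + m2,
                            # but the naive loop pays for it regardless.
                            c = CG(S(l1), S(m1), S(l2), S(m2), S(l3), S(m3)).doit()
                            out[l3][m3 + l3] += float(c) * x[l1][m1 + l1] * y[l2][m2 + l2]
    return out

# Two l=0 scalars combine with CG coefficient 1: plain multiplication.
out = naive_cgtp({0: np.array([2.0]), 1: np.zeros(3)},
                 {0: np.array([3.0]), 1: np.zeros(3)}, L=1)
assert out[0][0] == 6.0
```

Exploiting m₃ = m₁ + m₂ and other known-zero coefficients collapses one loop, which is exactly the O(L⁵) sparsity variant in the list above.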

Figure 2

Vector-valued signals add just enough structure to cover the interactions scalar harmonics miss without blowing up the runtime. The authors prove that vector signals are sufficient; you don’t need rank-2 tensors or higher, which keeps the algorithm lean.

Why It Matters

Equivariant networks are scaling up fast. Models like NequIP, MACE, and Equiformer are being applied to systems with thousands of atoms: simulating protein folding, discovering new catalysts, predicting quantum properties of materials. At those scales, the O(L⁶) bottleneck is a real ceiling.

Researchers who want to capture finer-grained physical interactions have been paying steep polynomial costs in compute time. VSTP removes that ceiling without compromise.

The implications go beyond raw speed. Because VSTP is a complete drop-in replacement for GTP, existing pipelines can adopt it without redesigning their architectures. The authors frame their results in terms of generalized Fourier transforms, which suggests the approach could extend to equivariant networks built on other symmetry groups. In fields like particle physics or crystallography, where different symmetries govern the problem, similar algorithmic strategies may apply.

Figure 3

Bottom Line: By upgrading from scalar to vector spherical harmonics, this paper delivers the first algorithm that computes Clebsch-Gordan tensor products both completely and asymptotically fast, bringing a core bottleneck of 3D equivariant AI from O(L⁶) to O(L⁴ log² L) without sacrificing a single interaction.


IAIFI Research Highlights

Interdisciplinary Research Achievement
This work takes representation theory from quantum mechanics (Clebsch-Gordan coefficients, spherical harmonics, group Fourier transforms) and applies it to a core computational bottleneck in modern AI: exactly the kind of physics-informed algorithmic thinking at the heart of IAIFI's mission.
Impact on Artificial Intelligence
The Vector Signal Tensor Product is the first provably complete and asymptotically fast tensor product operation for E(3)-equivariant neural networks, letting them scale to higher angular frequencies without the runtime penalty that has constrained current models.
Impact on Fundamental Interactions
The generalized Gaunt formula for tensor spherical harmonics extends a classical result in mathematical physics. It may find independent applications in quantum chemistry, atomic physics, and other fields where angular momentum coupling is central.
Outlook and References
Future work may extend VSTP to other compact Lie groups and integrate it into hardware-optimized kernels like cuEquivariance and openEquivariance. The preprint is available as [arXiv:2602.21466](https://arxiv.org/abs/2602.21466) from the Smidt group at MIT.

Original Paper Details

Title
Asymptotically Fast Clebsch-Gordan Tensor Products with Vector Spherical Harmonics
arXiv ID
[2602.21466](https://arxiv.org/abs/2602.21466)
Authors
YuQing Xie, Ameya Daigavane, Mit Kotak, Tess Smidt