
"Kolmogorov-Arnold Networks for Time Series Granger Causality Inference"

Generated the podcast below on this paper with Google's Illuminate.

GCKAN extends Kolmogorov-Arnold Networks to analyze time series causality by extracting base weights and using sparsity penalties, enabling automatic time lag selection and improved inference accuracy.

https://arxiv.org/abs/2501.08958

🤔 Original Problem:

→ Existing neural network models for Granger causality struggle with high-dimensional nonlinear time series and limited samples

→ RNN models can't select time lags automatically, while MLP models have low inference efficiency with noisy data

-----

🔧 Solution in this Paper:

→ Introduces GCKAN, which places learnable univariate functions on network edges instead of fixed linear weights, making computation more efficient
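A KAN edge can be pictured as a learnable scalar function applied to each input. The paper's KAN layers use spline bases; the Gaussian RBF basis, class name, and parameter choices below are illustrative assumptions, not GCKAN's actual implementation:

```python
import numpy as np

class KANEdge:
    """Sketch of a learnable univariate edge function phi(x) = sum_k c_k * rbf_k(x).
    GCKAN builds on B-spline bases; this RBF version is an assumption for illustration."""
    def __init__(self, n_basis=8, x_min=-2.0, x_max=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(x_min, x_max, n_basis)  # fixed basis centers
        self.width = (x_max - x_min) / n_basis             # shared bandwidth
        self.coef = rng.normal(scale=0.1, size=n_basis)    # learnable coefficients

    def __call__(self, x):
        # Evaluate phi pointwise: expand x against all basis centers, then combine.
        x = np.asarray(x, dtype=float)[..., None]
        basis = np.exp(-((x - self.centers) / self.width) ** 2)
        return basis @ self.coef

edge = KANEdge()
y = edge(np.array([0.0, 0.5, 1.0]))
print(y.shape)  # → (3,)
```

Training such coefficients per edge, rather than a single scalar weight, is what lets the network represent nonlinear per-input transformations directly.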

→ Extracts base weights from KAN layers and applies sparsity-inducing penalty with ridge regularization
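The penalty structure can be sketched as a group lasso over the base-weight rows (one group per input, so whole inputs can be zeroed out) plus a ridge term on the remaining weights. The exact penalty form, grouping, and coefficients in the paper may differ; this is a minimal assumed sketch:

```python
import numpy as np

def sparsity_penalty(W_in, other_weights, lam=0.1, gamma=0.01):
    """Assumed penalty sketch. W_in: (n_inputs, n_hidden) base weights extracted
    from the first KAN layer; the row-wise L2 group lasso drives irrelevant
    inputs to zero, while ridge regularization shrinks the remaining weights."""
    group_lasso = lam * np.sum(np.linalg.norm(W_in, axis=1))   # one group per input row
    ridge = gamma * sum(np.sum(w ** 2) for w in other_weights)
    return group_lasso + ridge

W = np.array([[3.0, 4.0], [0.0, 0.0]])  # second input already pruned
print(sparsity_penalty(W, [np.ones((2, 2))], lam=1.0, gamma=0.5))  # → 7.0
```

Because the group-lasso norm is non-smooth at zero, it can set entire input groups exactly to zero, which is what makes the learned base weights readable as a Granger-causality pattern.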

→ Proposes a time-reversed Granger causality algorithm that compares inference on the original and time-reversed series to prune spurious connections
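One plausible reading of the time-reversed check: if series i truly Granger-causes j, the reversed series should show the mirrored edge j → i, so an edge is kept only when both directions agree. The min-combination rule and threshold below are illustrative assumptions, not necessarily the paper's exact procedure:

```python
import numpy as np

def prune_with_time_reversal(score_fwd, score_rev, thresh=0.5):
    """Hedged sketch. score_fwd[i, j]: evidence from the original series that i
    Granger-causes j; score_rev: same matrix inferred on the time-reversed series,
    where a true edge i -> j should appear as j -> i. An edge survives only if
    supported in both runs."""
    combined = np.minimum(score_fwd, score_rev.T)
    return (combined > thresh).astype(int)

fwd = np.array([[0.0, 0.9], [0.1, 0.0]])  # strong evidence for 0 -> 1
rev = np.array([[0.0, 0.2], [0.8, 0.0]])  # reversed run supports 1 -> 0, i.e. 0 -> 1
print(prune_with_time_reversal(fwd, rev))  # → [[0 1]
                                           #    [0 0]]
```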

→ Uses component-wise architecture where each time series component is modeled separately

-----

💡 Key Insights:

→ KAN's smaller computational graph enables better handling of high-dimensional data

→ Time-reversed causality helps validate true causal relationships

→ Automatic time lag selection improves accuracy significantly
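Automatic lag selection falls out of the sparsity penalty: once irrelevant inputs are driven to zero, a lag can be kept only if some series still carries weight at that lag. The weight layout and threshold below are assumptions for illustration:

```python
import numpy as np

def select_lags(W_in, n_series, max_lag, tol=1e-3):
    """Sketch of automatic lag selection from sparsified base weights
    (assumed layout: one row of W_in per (series, lag) input)."""
    W = W_in.reshape(n_series, max_lag, -1)      # (series, lag, hidden)
    lag_norms = np.linalg.norm(W, axis=(0, 2))   # aggregate strength of each lag
    return [k + 1 for k in range(max_lag) if lag_norms[k] > tol]

W_in = np.zeros((2 * 3, 4))  # 2 series, 3 lags, 4 hidden units
W_in[0] = 1.0                # series 0 active at lag 1
W_in[4] = 0.5                # series 1 active at lag 2
print(select_lags(W_in, n_series=2, max_lag=3))  # → [1, 2]
```

This replaces the manual lag-order tuning that RNN-based Granger models require.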

-----

📊 Results:

→ Outperformed baselines on Lorenz-96 with AUROC of 0.995-1.0

→ Achieved highest performance in 22/28 fMRI simulations

→ Superior results on gene networks with limited samples (AUROC 0.747)

→ Perfect AUROC 1.0 on VAR dataset scenarios
