Update IJCAI 2022 Papers #3

Open
wants to merge 4 commits into base: main
13 changes: 12 additions & 1 deletion README.md
@@ -17,7 +17,8 @@ We divided these papers into several fundamental tasks as follows.
- Present the **datasets** used in papers

## Update
- [2022-05-31] Add papers published in ICML 2022
- [2022-08-09] Add papers published in IJCAI 2022
- [2022-08-01] Add papers published in ICML 2022
- [2022-05-31] Add papers published in NeurIPS, ICML, ICLR, SIGKDD, SIGIR, AAAI, IJCAI 2019!
- [2022-05-05] Add papers published in WWW 2022!
- [2022-04-25] **TS-Paper v1.0 is released!** We support the published time series papers from 2020 to 2022. Stay tuned!
@@ -58,6 +59,10 @@ We divided these papers into several fundamental tasks as follows.

| Paper | Conference | Year | Code | Used Datasets |Key Contribution|
| :-------------------: | :----------: | :----------: | :------------------------: | ----------------------- |------ |
|[Triformer: Triangular, Variable-Specific Attentions for Long Sequence Multivariate Time Series Forecasting](https://www.ijcai.org/proceedings/2022/0277.pdf)| IJCAI | 2022 | | ETT, Weather, ECL | To ensure high efficiency and accuracy, we propose Triformer, a triangular, variable-specific attention. (i) Linear complexity: we introduce a novel patch attention with linear complexity. When stacking multiple layers of the patch attentions, a triangular structure is proposed such that the layer sizes shrink exponentially, thus maintaining linear complexity. (ii) Variable-specific parameters: we propose a light-weight method to enable distinct sets of model parameters for different variables’ time series to enhance accuracy without compromising efficiency and memory usage. |
|[Memory Augmented State Space Model for Time Series Forecasting](https://www.ijcai.org/proceedings/2022/0479.pdf)| IJCAI | 2022 | | Exchange, Solar, Electricity, Traffic, Wiki | We present the External Memory Augmented State Space Model (EMSSM) within the sequential Monte Carlo (SMC) framework. Unlike the common fixed-order Markovian SSM, our model features an external memory system in which we store informative latent state experience, creating “memoryful” latent dynamics that model complex long-term dependencies. Moreover, conditional normalizing flows are incorporated in our emission model, enabling the adaptation to a broad class of underlying data distributions. We further propose a Monte Carlo objective that employs an efficient variational proposal distribution, which fuses the filtering and the dynamic prior information, to approximate the posterior state with proper particles.|
|[DeepExtrema: A Deep Learning Approach for Forecasting Block Maxima in Time Series Data](https://aps.arxiv.org/pdf/2205.02441.pdf)| IJCAI | 2022 | | Synthetic, Hurricane, Solar, Weather | Accurate forecasting of extreme values in time series is critical due to the significant impact of extreme events on human and natural systems. This paper presents DeepExtrema, a novel framework that combines a deep neural network (DNN) with generalized extreme value (GEV) distribution to forecast the block maximum value of a time series. Implementing such a network is a challenge as the framework must preserve the inter-dependent constraints among the GEV model parameters even when the DNN is initialized. We describe our approach to address this challenge and present an architecture that enables both conditional mean and quantile prediction of the block maxima. |
|[Regularized Graph Structure Learning with Semantic Knowledge for Multi-variates Time-Series Forecasting](https://www.ijcai.org/proceedings/2022/0328.pdf)| IJCAI | 2022 | | PeMSD, RPCM | In this paper, we propose Regularized Graph Structure Learning (RGSL) model to incorporate both explicit prior structure and implicit structure together and learn the forecasting deep networks along with the graph structure. |
|[FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting](https://arxiv.org/abs/2201.12740)| ICML | 2022 | [code](https://github.com/MAZiqing/FEDformer) | ETT, Electricity, Exchange, Weather, ILI | We propose to combine the Transformer with the seasonal-trend decomposition method, in which the decomposition method captures the global profile of the time series while Transformers capture more detailed structures. The proposed method, termed Frequency Enhanced Decomposed Transformer (FEDformer), is more efficient than the standard Transformer, with complexity linear in the sequence length. (A minimal decomposition sketch follows this table.) |
|[TACTiS: Transformer-Attentional Copulas for Time Series](https://arxiv.org/abs/2202.03528)| ICML | 2022 | [code](https://github.com/servicenow/tactis) | electricity, fred-md, kdd-cup, solar-10min, traffic | We propose a versatile method, based on the transformer architecture, that estimates joint distributions using an attention-based decoder that provably learns to mimic the properties of non-parametric copulas.|
|[Domain Adaptation for Time Series Forecasting via Attention Sharing](https://arxiv.org/abs/2102.06828)| ICML | 2022 | [code](https://github.com/DMIRLAB-Group/SASA) | UCI, Wiki | We propose a novel domain adaptation framework, Domain Adaptation Forecaster (DAF). DAF leverages statistical strengths from a relevant domain with abundant data samples (source) to improve the performance on the domain of interest with limited data (target).|
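
The FEDformer row above points at a seasonal-trend decomposition backbone. Below is a minimal NumPy sketch of that kind of moving-average split, under our own assumptions (the kernel size, edge padding, and toy series are illustrative; this is not the FEDformer code):

```python
import numpy as np

def series_decomp(x, kernel=25):
    """Split a 1-D series into a smooth trend and a seasonal remainder.

    A moving average with edge padding estimates the trend; the residual is
    treated as the seasonal/detail component, in the spirit of the
    decomposition blocks used by FEDformer-style models.
    """
    pad = kernel // 2
    padded = np.pad(x, (pad, kernel - 1 - pad), mode="edge")  # replicate the ends
    trend = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")
    seasonal = x - trend
    return seasonal, trend

# Toy usage: a daily-like cycle plus a slow drift over 200 steps.
t = np.arange(200)
series = np.sin(2 * np.pi * t / 24) + 0.01 * t
seasonal, trend = series_decomp(series)
```
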
@@ -154,6 +159,8 @@ We divided these papers into several fundamental tasks as follows.
## Time Series Classification
| Paper | Conference | Year | Code | Used Datasets |Key Contribution|
| :-------------------: | :----------: | :----------: | :------------------------: | ----------------------- |------ |
|[T-SMOTE: Temporal-oriented Synthetic Minority Oversampling Technique for Imbalanced Time Series Classification](https://www.ijcai.org/proceedings/2022/0334.pdf)| IJCAI | 2022 | | UCR, UCI | In this paper, to address the class imbalance problem, we propose a novel and practical oversampling method named T-SMOTE, which makes full use of the temporal information of time-series data. In particular, for each sample of the minority class, T-SMOTE generates multiple samples that are close to the class border. Then, based on those samples near the class border, T-SMOTE synthesizes more samples. Finally, a weighted sampling method is applied to both the generated samples near class borders and the synthetic samples. (A simplified interpolation sketch follows this table.) |
| [A Reinforcement Learning-Informed Pattern Mining Framework for Multivariate Time Series Classification](https://cpsl.pratt.duke.edu/sites/cpsl.pratt.duke.edu/files/docs/gao_ijcai22.pdf) | IJCAI | 2022 | | UEA, ECG, EEG | In this work, we propose a reinforcement learning (RL) informed PAttern Mining framework (RLPAM) to identify interpretable yet important patterns for MTS classification. Our framework has been validated on 30 benchmark datasets as well as real-world large-scale electronic health records (EHRs) for an extremely challenging task: sepsis shock early prediction. |
| [Omni-Scale CNNs: A Simple and Effective Kernel Size Configuration for Time Series Classification](https://openreview.net/pdf?id=PDYs7Z2XFGv) | ICLR |2022 | [Code link](https://github.com/Wensi-Tang/OS-CNN) | MEG-TLE, UEA 30 archive, UCR 85 archive, UCR 128 archive | presents a simple 1D-CNN block, namely OS-block. |
| [Correlative Channel-Aware Fusion for Multi-View Time Series Classification](https://arxiv.org/abs/1911.11561) | AAAI | 2021 | - | EV-Action, NTU RGB+D, UCI Daily and Sports Activities | The global-local temporal encoders are developed to extract robust temporal representations for each view, and a learnable fusion mechanism is proposed to boost the multi-view label information. |
| [Learnable Dynamic Temporal Pooling for Time Series Classification](https://arxiv.org/abs/2104.02577) | AAAI | 2021 | - | UCR/UEA |proposes a dynamic temporal pooling + a learning framework to simultaneously optimize the network parameters of a CNN classifier and the prototypical hidden series that encodes the latent semantic of the segments. |
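
The T-SMOTE row above relies on interpolating new minority-class series near the class border. The sketch below shows only the generic SMOTE-style interpolation between a minority series and one of its nearest minority neighbours; names and parameters are illustrative, and T-SMOTE's border-aware generation and weighted sampling are not reproduced here:

```python
import numpy as np

def smote_style_oversample(minority, n_new, k=5, seed=0):
    """Generate synthetic minority-class series by convex interpolation.

    minority: array of shape (n_samples, series_length). Each synthetic
    series lies on the segment between a random minority sample and one of
    its k nearest minority neighbours (Euclidean distance on whole series).
    """
    rng = np.random.default_rng(seed)
    n = len(minority)
    dists = np.linalg.norm(minority[:, None, :] - minority[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                  # ignore self-distances
    neighbours = np.argsort(dists, axis=1)[:, :k]

    synthetic = np.empty((n_new, minority.shape[1]))
    for i in range(n_new):
        a = rng.integers(n)                          # pick a minority sample
        b = neighbours[a, rng.integers(min(k, n - 1))]
        lam = rng.random()                           # interpolation weight in [0, 1)
        synthetic[i] = minority[a] + lam * (minority[b] - minority[a])
    return synthetic

# Toy usage: 10 minority series of length 50, 30 synthetic ones.
X_min = np.random.default_rng(1).normal(size=(10, 50))
X_new = smote_style_oversample(X_min, n_new=30)
```
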
@@ -173,6 +180,8 @@ We divided these papers into several fundamental tasks as follows.
## Anomaly Detection
| Paper | Conference | Year | Code | Used Datasets |Key Contribution|
| :-------------------: | :----------: | :------------------------: | ----------------------- | ------------------------- |------ |
| [GRELEN: Multivariate Time Series Anomaly Detection from the Perspective of Graph Relational Learning](https://www.ijcai.org/proceedings/2022/0332.pdf) | IJCAI | 2022 | | SWaT, WADI, SMD, PSM | In this paper, we propose a novel Graph Relational Learning Network (GReLeN) to detect multivariate time series anomalies from the perspective of learning between-sensor dependence relationships. A Variational AutoEncoder (VAE) serves as the overall framework for feature extraction and system representation. A Graph Neural Network (GNN) and a stochastic graph relational learning strategy are employed to capture the between-sensor dependence. A composite anomaly metric is then established that explicitly uses the learned dependence structure. |
| [Neural Contextual Anomaly Detection for Time Series](https://www.ijcai.org/proceedings/2022/0394.pdf) | IJCAI | 2022 | | SMAP, YAHOO, KPI, MSL, SMD | We introduce Neural Contextual Anomaly Detection (NCAD), a framework for anomaly detection on time series that scales seamlessly from the unsupervised to the supervised setting, and is applicable to both univariate and multivariate time series. This is achieved by combining recent developments in representation learning for multivariate time series with techniques for deep anomaly detection originally developed for computer vision, which we tailor to the time series setting. Our window-based approach facilitates learning the boundary between normal and anomalous classes by injecting generic synthetic anomalies into the available data. NCAD can effectively take advantage of domain knowledge and of any available training labels. We demonstrate empirically on standard benchmark datasets that our approach obtains state-of-the-art performance in the supervised, semi-supervised, and unsupervised settings.|
|[Deep Variational Graph Convolutional Recurrent Network for Multivariate Time Series Anomaly Detection](https://proceedings.mlr.press/v162/chen22x/chen22x.pdf)| ICML | 2022 | | DND, SMD, MSL, SMAP| In this paper, we model channel dependency and stochasticity within MTS by developing an embedding-guided probabilistic generative network. We combine it with adaptive Variational Graph Convolutional Recurrent Network (VGCRN) to model both spatial and temporal fine-grained correlations in MTS. To explore hierarchical latent representations, we further extend VGCRN into a deep variational network, which captures multilevel information at different layers and is robust to noisy time series.|
| [A Semi-Supervised VAE Based Active Anomaly Detection Framework in Multivariate Time Series for Online Systems](https://dl.acm.org/doi/pdf/10.1145/3485447.3511984) | WWW | 2022 | - | online cloud server data from two different types of game business | SLA-VAE first defines anomalies based on feature extraction module, introduces semi-supervised VAE to identify anomalies in multivariate time series, and employs active learning to update the online model via a small number of uncertain samples. |
| [Towards a Rigorous Evaluation of Time-series Anomaly Detection](https://arxiv.org/abs/2109.05257) | AAAI | 2022 | - | Secure water treatment (SWaT), ...... | Applying point adjustment (PA) can severely overestimate a time-series anomaly detection (TAD) model’s capability. (See the PA sketch after this table.)|
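
The AAAI 2022 row above criticizes the point-adjustment (PA) protocol. The following is our own illustration of that protocol (binary prediction and label arrays are assumed); it makes concrete why flagging a single point inside a labelled segment credits the detector with the whole segment:

```python
import numpy as np

def point_adjust(pred, label):
    """Apply the point-adjustment (PA) protocol to binary predictions.

    pred, label: 0/1 arrays of equal length (1 = anomaly). If any predicted
    point falls inside a contiguous labelled anomaly segment, every point of
    that segment is marked as detected, which is why PA-based scores can
    strongly overestimate a detector's real capability.
    """
    adjusted = pred.copy()
    in_segment, start = False, 0
    for i, l in enumerate(label):
        if l and not in_segment:                   # a labelled segment begins
            in_segment, start = True, i
        if in_segment and (not l or i == len(label) - 1):
            end = i + 1 if l else i                # segment (or series) ends here
            if adjusted[start:end].any():
                adjusted[start:end] = 1            # credit the whole segment
            in_segment = False
    return adjusted

# Toy example: a 5-point anomaly in which only one point was detected.
label = np.array([0, 1, 1, 1, 1, 1, 0, 0])
pred = np.array([0, 0, 0, 1, 0, 0, 0, 0])
print(point_adjust(pred, label))  # the whole labelled segment now counts as detected
```
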
@@ -227,6 +236,8 @@ We divided these papers into several fundamental tasks as follows.
## Others
| Paper | Conference | Year | Code | Used Datasets |Key Contribution|
| :-------------------: | :----------: | :----------: | :------------------------: | ----------------------- |------ |
| [MetaER-TTE: An Adaptive Meta-learning Model for En Route Travel Time Estimation](https://zheng-kai.com/paper/ijcai_2022_fan.pdf) | IJCAI | 2022 | | Synthetic Data, Physionet | We propose a novel adaptive meta-learning model called MetaER-TTE. In particular, we utilize soft-clustering and derive cluster-aware initialized parameters to better transfer the shared knowledge across trajectories with similar contextual information. In addition, we adopt a distribution-aware approach for adaptive learning rate optimization, so as to avoid the task overfitting that occurs when guiding the initial parameters with a fixed learning rate for tasks under an imbalanced distribution. |
| [Cumulative Stay-time Representation for Electronic Health Records in Medical Event Time Prediction](https://arxiv.org/pdf/2204.13451.pdf) | IJCAI | 2022 | | Beijing | We address the problem of predicting when a disease will develop, i.e., the medical event time (MET), from a patient’s electronic health record (EHR). The MET of non-communicable diseases like diabetes is highly correlated with cumulative health conditions, more specifically, how much time the patient spent with specific health conditions in the past. We derive a trainable construction of the cumulative stay-time representation (CTR) based on neural networks that has the flexibility to fit the target data and the scalability to handle high-dimensional EHR. |
| [Adaptive Conformal Predictions for Time Series](https://arxiv.org/pdf/2202.07282.pdf)| ICML | 2022| [code](https://github.com/mzaffran/adaptiveconformalpredictionstimeseries) | | Uncertainty quantification of predictive models is crucial in decision-making problems. Conformal prediction is a general and theoretically sound answer. However, it requires exchangeable data, excluding time series. While recent works tackled this issue, we argue that Adaptive Conformal Inference (ACI, Gibbs and Candès, 2021), developed for distribution-shift time series, is a good procedure for time series with general dependency. We theoretically analyse the impact of the learning rate on its efficiency in the exchangeable and auto-regressive case. We propose a parameter-free method, AgACI, that adaptively builds upon ACI based on online expert aggregation. We conduct extensive, fair simulations against competing methods that advocate for ACI’s use in time series. We conduct a real case study: electricity price forecasting. The proposed aggregation algorithm provides efficient prediction intervals for day-ahead forecasting. All the code and data to reproduce the experiments are made available. (A minimal ACI update sketch follows this table.)|
|[Modeling Irregular Time Series with Continuous Recurrent Units](https://arxiv.org/pdf/2111.11344.pdf)| ICML | 2022| [code](https://github.com/boschresearch/Continuous-Recurrent-Units) | Pendulum Images, Climate Data (USHCN), Electronic Health Records (Physionet) | In many datasets (e.g. medical records) observation times are irregular and can carry important information. To address this challenge, we propose continuous recurrent units (CRUs) – a neural architecture that can naturally handle irregular intervals between observations. |
|[Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion](https://arxiv.org/abs/2202.04770) | ICML | 2022| | HAR, SleepEDF, ECG Waveform, ETT, Weather, SWaT, WADI, SMD, SMAP, MSL | We devise a novel iterative bilinear temporal-spectral fusion to explicitly encode the affinities of abundant time-frequency pairs, and it iteratively refines representations in a fusion-and-squeeze manner with Spectrum-to-Time (S2T) and Time-to-Spectrum (T2S) Aggregation modules. |
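
The conformal-prediction row above builds on Adaptive Conformal Inference (ACI; Gibbs and Candès, 2021), whose core is a one-line online update of the miscoverage level. The sketch below pairs that update with a rolling quantile of absolute residuals; the naive forecaster, window size, and learning rate are illustrative assumptions, and the paper's AgACI aggregation is not implemented here:

```python
import numpy as np

def aci_intervals(y, forecasts, alpha=0.1, gamma=0.02, window=100):
    """Adaptive Conformal Inference for one-dimensional point forecasts.

    At step t the interval is forecast_t +/- q_t, where q_t is the empirical
    (1 - alpha_t) quantile of recent absolute residuals. After observing y_t,
    the miscoverage level is nudged by the ACI update
        alpha_{t+1} = alpha_t + gamma * (alpha - err_t),
    with err_t = 1 if the interval missed y_t and 0 otherwise.
    """
    alpha_t = alpha
    residuals, intervals = [], []
    for y_t, f_t in zip(y, forecasts):
        if residuals:
            level = min(max(1.0 - alpha_t, 0.0), 1.0)   # keep the level in [0, 1]
            q = np.quantile(residuals[-window:], level)
        else:
            q = np.inf                                  # no calibration data yet
        lo, hi = f_t - q, f_t + q
        err = 0.0 if lo <= y_t <= hi else 1.0
        alpha_t += gamma * (alpha - err)                # the ACI update
        residuals.append(abs(y_t - f_t))
        intervals.append((lo, hi))
    return intervals

# Toy usage: a random-walk series with a naive "last value" forecaster.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300)) * 0.1
forecasts = np.concatenate([[0.0], y[:-1]])
intervals = aci_intervals(y, forecasts)
```
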