Online Continual Learning via Logit Adjusted Softmax

May 29, 2024 ·
Zhehao Huang
,
Tao Li
,
Chenhe Yuan
,
Yingwen Wu
,
Xiaolin Huang
· 1 min read
Figure: The diagram of our proposed ER-LAS.
Abstract
Online continual learning is a challenging problem in which models must learn from a non-stationary data stream while avoiding catastrophic forgetting. Inter-class imbalance during training has been identified as a major cause of forgetting, biasing model predictions towards recently learned classes. In this paper, we show theoretically that inter-class imbalance is entirely attributable to imbalanced class priors, and that the function learned from the intra-class intrinsic distributions is the Bayes-optimal classifier. Accordingly, we show that a simple adjustment of the model logits during training effectively resists prior class bias and pursues the corresponding Bayes optimum. Our proposed method, Logit Adjusted Softmax, mitigates the impact of inter-class imbalance not only in class-incremental setups but also in realistic general setups, at little additional computational cost. We evaluate our approach on various benchmarks and demonstrate significant performance improvements over prior art. For example, our approach improves on the best baseline by 4.6% on CIFAR10.
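To make the logit adjustment concrete, here is a minimal sketch (not necessarily the paper's exact implementation) of a logit-adjusted softmax cross-entropy in PyTorch: a term τ·log(prior) is added to the logits before the softmax loss, with the class prior estimated from running class counts observed in the stream. The class name `LogitAdjustedLoss`, the hyperparameter `tau`, and the running-count prior estimate are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


class LogitAdjustedLoss:
    """Cross-entropy on prior-adjusted logits (illustrative sketch of
    the logit-adjustment idea for online continual learning)."""

    def __init__(self, num_classes: int, tau: float = 1.0, eps: float = 1e-8):
        # Running class counts observed so far in the data stream,
        # used as an empirical estimate of the (imbalanced) class prior.
        self.counts = torch.zeros(num_classes)
        self.tau = tau
        self.eps = eps

    def __call__(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Update the empirical class-prior estimate with the current batch.
        self.counts += torch.bincount(
            targets.cpu(), minlength=self.counts.numel()
        ).float()
        prior = (self.counts / self.counts.sum()).to(logits.device)
        # Add tau * log(prior) to the logits so the softmax is trained
        # against the imbalanced prior instead of absorbing it into the
        # classifier, which would bias predictions towards recent classes.
        adjusted = logits + self.tau * torch.log(prior + self.eps)
        return F.cross_entropy(adjusted, targets)
```

At evaluation time the raw, unadjusted logits would be used, so the learned classifier is not skewed by the training-time class prior.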
Type: Publication
Publication: In Transactions on Machine Learning Research, 05/2024

Continual Learning