This repository contains a curated list of meta-learning papers closely related to my PhD thesis. The papers focus primarily on optimization-based approaches to meta-learning loss functions, optimizers, and parameter initializations.
- Meta-Learning Survey Papers
- Meta-Learning Loss Functions
- Meta-Learning Optimizers
- Meta-Learning Parameter Initializations
- Meta-Learning Miscellaneous
- Meta-Optimization
- Meta-Learning Blog Posts
- Meta-Learning Libraries
## Meta-Learning Survey Papers

- A Perspective View and Survey of Meta-Learning. (AIR2002), [paper].
- Meta-Learning: A Survey. (arXiv2018), [paper].
- A Comprehensive Overview and Survey of Recent Advances in Meta-Learning. (arXiv2020), [paper].
- Meta-Learning in Neural Networks: A Survey. (TPAMI2022), [paper].
## Meta-Learning Loss Functions

- Learning to Teach with Dynamic Loss Functions. (NeurIPS2018), [paper].
- Learning to Learn by Self-Critique. (NeurIPS2019), [paper].
- A General and Adaptive Robust Loss Function. (CVPR2019), [paper].
- Learning Surrogate Losses. (arXiv2019), [paper].
- Addressing the Loss-Metric Mismatch with Adaptive Loss Alignment. (ICML2019), [paper].
- AM-LFS: AutoML for Loss Function Search. (ICCV2019), [paper].
- Improved Training Speed, Accuracy, and Data Utilization Through Loss Function Optimization. (CEC2020), [paper].
- Effective Regularization through Loss Function Meta-Learning. (arXiv2020), [paper].
- Improving Deep Learning through Loss-Function Evolution. (Thesis2020), [paper].
- Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search. (ICLR2021), [paper].
- Learning State-Dependent Losses for Inverse Dynamics Learning. (IROS2020), [paper].
- Loss Function Search for Face Recognition. (ICML2020), [paper].
- Improving Generalization in Meta Reinforcement Learning using Learned Objectives. (ICLR2020), [paper].
- Meta-Learning via Learned Loss. (ICPR2021), [paper].
- Searching for Robustness: Loss Learning for Noisy Classification Tasks. (ICCV2021), [paper].
- Optimizing Loss Functions through Multi-Variate Taylor Polynomial Parameterization. (GECCO2021), [paper].
- Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning. (ICCV2021), [paper].
- Loss Function Learning for Domain Generalization by Implicit Gradient. (ICML2022), [paper].
- AutoLoss-Zero: Searching Loss Functions from Scratch for Generic Tasks. (CVPR2022), [paper].
- PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions. (ICLR2022), [paper].
- Meta-Learning PINN Loss Functions. (JCP2022), [paper].
- Learning Symbolic Model-Agnostic Loss Functions via Meta-Learning. (TPAMI2023), [paper].
- Fast and Efficient Local-Search for Genetic Programming Based Loss Function Learning. (GECCO2023), [paper].
- Online Loss Function Learning. (arXiv2023), [paper].
- Meta-Learning to Optimise: Loss Functions and Update Rules. (Thesis2023), [paper].
- Meta-Tuning Loss Functions and Data Augmentation for Few-shot Object Detection. (CVPR2023), [paper].
- OWAdapt: An Adaptive Loss Function For Deep Learning using OWA Operators. (arXiv2023), [paper].
- Neural Loss Function Evolution for Large-Scale Image Classifier Convolutional Neural Networks. (arXiv2024), [paper].
- Evolving Loss Functions for Specific Image Augmentation Techniques. (arXiv2024), [paper].
- Meta-Learning Loss Functions for Deep Neural Networks. (Thesis2024), [paper].
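Most of the loss-function learning papers above share a bilevel structure: an inner loop trains a model under a parameterized loss, and an outer loop updates the loss parameters against the true task metric. A minimal sketch of that structure on a toy regression problem (the blended loss form and the finite-difference meta-gradient are illustrative, not any one paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y = 2x + noise, split into train and validation sets.
x_tr, x_val = rng.normal(size=20), rng.normal(size=20)
y_tr = 2.0 * x_tr + 0.1 * rng.normal(size=20)
y_val = 2.0 * x_val + 0.1 * rng.normal(size=20)

def learned_loss(phi, pred, y):
    # Parameterized loss: a weighted blend of squared and absolute error.
    return phi[0] * np.mean((pred - y) ** 2) + phi[1] * np.mean(np.abs(pred - y))

def adapt_then_validate(phi, lr=0.1, steps=5):
    # Inner loop: fit a one-parameter model w under the learned loss,
    # then score it with the true task metric (MSE on validation data).
    w, eps = 0.0, 1e-5
    for _ in range(steps):
        g = (learned_loss(phi, (w + eps) * x_tr, y_tr)
             - learned_loss(phi, (w - eps) * x_tr, y_tr)) / (2 * eps)
        w -= lr * g
    return np.mean((w * x_val - y_val) ** 2)

# Outer loop: update the loss parameters phi using finite-difference
# meta-gradients of the validation MSE.
phi = np.array([0.5, 0.5])
for _ in range(30):
    meta_g = np.zeros_like(phi)
    for i in range(len(phi)):
        e = np.zeros_like(phi)
        e[i] = 1e-4
        meta_g[i] = (adapt_then_validate(phi + e) - adapt_then_validate(phi - e)) / 2e-4
    phi -= 0.2 * meta_g
```

The papers above differ mainly in how the meta-gradient is obtained (unrolled differentiation, implicit gradients, evolution, or symbolic search) and in how the loss is parameterized (neural networks, Taylor polynomials, or symbolic expressions).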
## Meta-Learning Optimizers

- Learning to Learn by Gradient Descent by Gradient Descent. (NeurIPS2016), [paper].
- Meta-SGD: Learning to Learn Quickly for Few-Shot Learning. (arXiv2017), [paper].
- Learning to Learn Without Gradient Descent by Gradient Descent. (ICML2017), [paper].
- Learned Optimizers that Scale and Generalize. (ICML2017), [paper].
- Optimization as a Model for Few-Shot Learning. (ICLR2017), [paper].
- Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace. (ICML2018), [paper].
- Meta-Curvature. (NeurIPS2019), [paper].
- Understanding and Correcting Pathologies in the Training of Learned Optimizers. (ICML2019), [paper].
- Meta-Learning with Warped Gradient Descent. (ICLR2020), [paper].
- On Modulating the Gradient for Meta-Learning. (ECCV2020), [paper].
- Tasks, Stability, Architecture, and Compute: Training More Effective Learned Optimizers, and Using Them to Train Themselves. (arXiv2020), [paper].
- Learning to Optimize: A Primer and a Benchmark. (JMLR2022), [paper].
- VeLO: Training Versatile Learned Optimizers by Scaling Up. (arXiv2022), [paper].
- Practical Tradeoffs Between Memory, Compute, and Performance in Learned Optimizers. (CoLLAs2022), [paper].
- A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases. (NeurIPS2022), [paper].
- Meta-Learning with a Geometry-Adaptive Preconditioner. (CVPR2023), [paper].
- Symbolic Discovery of Optimization Algorithms. (arXiv2023), [paper].
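A recurring idea in the learned-optimizer papers above is to treat the update rule's own parameters as meta-parameters, trained on the objective value reached after an unrolled inner run. The simplest instance is learning per-parameter learning rates (as in Meta-SGD); a toy sketch on an ill-conditioned quadratic (the task and the finite-difference meta-gradient are illustrative):

```python
import numpy as np

# Ill-conditioned quadratic task: f(w) = 0.5 * w @ A @ w.
A = np.diag([10.0, 0.1])
f = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w

def unrolled_loss(log_alpha, steps=10):
    # Inner loop: gradient descent with per-parameter learning rates
    # alpha = exp(log_alpha), starting from a fixed initialization.
    w = np.array([1.0, 1.0])
    alpha = np.exp(log_alpha)
    for _ in range(steps):
        w = w - alpha * grad(w)
    return f(w)  # meta-loss: objective value at the end of the unroll

# Outer loop: meta-gradient on the log-learning-rates via central differences.
log_alpha = np.log(np.array([0.01, 0.01]))
for _ in range(100):
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = 1e-4
        g[i] = (unrolled_loss(log_alpha + e) - unrolled_loss(log_alpha - e)) / 2e-4
    log_alpha -= 1.0 * g
```

The papers in this section replace the per-parameter scalars with richer update rules (recurrent networks, symbolic programs) and replace finite differences with backpropagation through the unroll, which is where the short-horizon-bias and stability issues studied above arise.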
## Meta-Learning Parameter Initializations

- Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. (ICML2017), [paper].
- On First-Order Meta-Learning Algorithms. (arXiv2018), [paper].
- Probabilistic Model-Agnostic Meta-Learning. (NeurIPS2018), [paper].
- Toward Multimodal Model-Agnostic Meta-Learning. (NeurIPS2018), [paper].
- Meta-Learning with Implicit Gradients. (NeurIPS2019), [paper].
- Alpha MAML: Adaptive Model-Agnostic Meta-Learning. (arXiv2019), [paper].
- How to Train Your MAML. (ICLR2019), [paper].
- Meta-Learning with Latent Embedding Optimization. (ICLR2019), [paper].
- Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation. (NeurIPS2019), [paper].
- Fast Context Adaptation via Meta-Learning. (ICML2019), [paper].
- Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML. (arXiv2019), [paper].
- ES-MAML: Simple Hessian-Free Meta-Learning. (ICLR2020), [paper].
- BOIL: Towards Representation Change for Few-Shot Learning. (arXiv2020), [paper].
- How to Train Your MAML to Excel in Few-Shot Classification. (ICLR2021), [paper].
- Meta-Learning Neural Procedural Biases. (arXiv2024), [paper].
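The initialization-learning papers above descend from MAML: the meta-parameter is the shared initialization itself, and the meta-gradient flows through the inner adaptation step. On 1-D quadratic tasks that gradient has a closed form, so the whole loop fits in a few lines (the task distribution and step sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Each task t is a 1-D quadratic f_t(w) = 0.5 * (w - c_t)**2 with its own optimum c_t.
task_optima = rng.normal(loc=3.0, scale=1.0, size=50)

inner_lr, meta_lr = 0.3, 0.1
w0 = 0.0  # shared initialization: the quantity MAML meta-learns

for _ in range(200):
    batch = rng.choice(task_optima, size=10)
    meta_grad = 0.0
    for c in batch:
        # Inner loop: one gradient step from the shared init on this task.
        w_adapted = w0 - inner_lr * (w0 - c)
        # Outer gradient through the inner step: d f_t(w') / d w0
        # = (w' - c) * (1 - inner_lr)  (exact here, since the task is quadratic).
        meta_grad += (w_adapted - c) * (1.0 - inner_lr)
    w0 -= meta_lr * meta_grad / len(batch)
```

For this task family the meta-learned initialization converges to the mean of the task optima; the first-order methods above (Reptile, FOMAML) drop the `(1 - inner_lr)` second-order factor, and the implicit-gradient papers avoid unrolling the inner loop altogether.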
## Meta-Learning Miscellaneous

- Siamese Neural Networks for One-Shot Image Recognition. (ICML2015), [paper].
- Matching Networks for One-Shot Learning. (NeurIPS2016), [paper].
- Meta-Learning with Memory-Augmented Neural Networks. (ICML2016), [paper].
- Prototypical Networks for Few-Shot Learning. (NeurIPS2017), [paper].
- Searching for Activation Functions. (arXiv2017), [paper].
- Learning to Learn: Meta-Critic Networks for Sample Efficient Learning. (arXiv2017), [paper].
- Meta-Learning with Differentiable Closed-Form Solvers. (arXiv2018), [paper].
- Learning to Reweight Examples for Robust Deep Learning. (ICML2018), [paper].
- Learning to Compare: Relation Network for Few-Shot Learning. (CVPR2018), [paper].
- Online Learning Rate Adaptation with Hypergradient Descent. (ICLR2018), [paper].
- TADAM: Task Dependent Adaptive Metric for Improved Few-Shot Learning. (NeurIPS2018), [paper].
- MetaReg: Towards Domain Generalization using Meta-Regularization. (NeurIPS2018), [paper].
- Learning to Learn with Conditional Class Dependencies. (ICLR2019), [paper].
- Few-Shot Image Recognition by Predicting Parameters from Activations. (CVPR2018), [paper].
- Fast and Flexible Multi-Task Classification using Conditional Neural Adaptive Processes. (NeurIPS2019), [paper].
- Meta-Learning for Semi-Supervised Few-Shot Classification. (ICLR2018), [paper].
- Meta-Learning with Differentiable Convex Optimization. (CVPR2019), [paper].
- AutoML-Zero: Evolving Machine Learning Algorithms From Scratch. (ICML2020), [paper].
- Evolving Normalization-Activation Layers. (NeurIPS2020), [paper].
- Meta-Learning with Adaptive Hyperparameters. (NeurIPS2020), [paper].
- Differentiable Automatic Data Augmentation. (ECCV2020), [paper].
- Few-Shot Learning via Embedding Adaptation with Set-to-Set Functions. (CVPR2020), [paper].
- Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples. (ICLR2020), [paper].
- Evolving Reinforcement Learning Algorithms. (ICLR2021), [paper].
- Learning to Learn Task-Adaptive Hyperparameters for Few-Shot Learning. (TPAMI2023), [paper].
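Several of the metric-based papers in this section (Matching Networks, Prototypical Networks, Relation Networks) classify queries by comparing them to embedded support examples rather than by gradient-based adaptation. The prototypical variant is especially compact: each class becomes the mean of its support embeddings. A sketch where raw features stand in for the learned embedding network:

```python
import numpy as np

def prototypical_predict(support_x, support_y, query_x):
    # Prototypical Networks classification rule: represent each class by the
    # mean ("prototype") of its support embeddings, then assign every query
    # to the class with the nearest prototype (squared Euclidean distance).
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    dists = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[np.argmin(dists, axis=1)]

# A 2-way, 3-shot episode with well-separated toy "embeddings".
rng = np.random.default_rng(0)
support_x = np.vstack([rng.normal(0.0, 0.3, size=(3, 2)),
                       rng.normal(5.0, 0.3, size=(3, 2))])
support_y = np.array([0, 0, 0, 1, 1, 1])
query_x = np.array([[0.1, -0.2], [4.8, 5.1]])
preds = prototypical_predict(support_x, support_y, query_x)
```

In the actual papers the embedding network is meta-trained end-to-end over episodes so that this nearest-prototype rule generalizes to unseen classes.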
## Meta-Optimization

- Gradient-Based Hyperparameter Optimization through Reversible Learning. (ICML2015), [paper].
- Forward and Reverse Gradient-based Hyperparameter Optimization. (ICML2017), [paper].
- Bilevel Programming for Hyperparameter Optimization and Meta-Learning. (ICML2018), [paper].
- Understanding Short-Horizon Bias in Stochastic Meta-Optimization. (ICLR2018), [paper].
- Generalized Inner Loop Meta-Learning. (arXiv2019), [paper].
- Transferring Knowledge Across Learning Processes. (ICLR2019), [paper].
- Truncated Back-propagation for Bilevel Optimization. (AISTATS2019), [paper].
- Optimizing Millions of Hyperparameters by Implicit Differentiation. (AISTATS2020), [paper].
- EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization. (NeurIPS2021), [paper].
- Gradient-Based Bi-level Optimization for Deep Learning: A Survey. (arXiv2022), [paper].
- The Curse of Unrolling: Rate of Differentiating through Optimization. (NeurIPS2022), [paper].
- Bootstrapped Meta-Learning. (ICLR2022), [paper].
- Optimistic Meta-Gradients. (NeurIPS2023), [paper].
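A common thread in this section is computing hypergradients through the inner optimization, either by unrolling (possibly truncated or reversible) or, as in the implicit-differentiation papers, via the implicit function theorem at the inner optimum. For ridge regression the inner problem has a closed-form solution, so the implicit hypergradient for the regularization strength can be written out and checked against finite differences (the data and problem sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X, Xv = rng.normal(size=(40, 5)), rng.normal(size=(40, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.5 * rng.normal(size=40)
yv = Xv @ w_true + 0.5 * rng.normal(size=40)

def inner_solution(lam):
    # w*(lam) = argmin_w ||Xw - y||^2 + lam * ||w||^2 (closed-form ridge solution).
    return np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

def val_loss(lam):
    w = inner_solution(lam)
    return np.sum((Xv @ w - yv) ** 2)

def implicit_hypergrad(lam):
    # Implicit function theorem at the inner optimum:
    #   dw*/dlam = -(X^T X + lam I)^{-1} w*,
    # so dL_val/dlam = grad_w L_val(w*) @ dw*/dlam.
    w = inner_solution(lam)
    dw = -np.linalg.solve(X.T @ X + lam * np.eye(5), w)
    gv = 2 * Xv.T @ (Xv @ w - yv)
    return gv @ dw

# Sanity check against a central finite-difference hypergradient.
lam = 1.0
fd = (val_loss(lam + 1e-5) - val_loss(lam - 1e-5)) / 2e-5
```

The implicit route never unrolls the inner solver, which is why the papers above can scale it to millions of hyperparameters, at the cost of needing an (approximate) inner optimum and a linear solve against the inner Hessian.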