adam
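Most entries under this topic are variants of or build on the Adam optimizer (Kingma & Ba, 2014). For reference, a minimal NumPy sketch of one standard Adam update:

```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (Kingma & Ba, 2014) for parameters theta, gradient g."""
    m = beta1 * m + (1 - beta1) * g        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * g**2     # second moment (uncentered variance)
    m_hat = m / (1 - beta1**t)             # bias correction, t starts at 1
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```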
On the Variance of the Adaptive Learning Rate and Beyond
Adam (or adm) is a coroutine-friendly Android Debug Bridge client written in Kotlin
ADAM - A question answering system, inspired by IBM Watson
RAdam implemented in Keras & TensorFlow
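Both RAdam entries above (the paper title and the Keras/TensorFlow implementation) refer to Rectified Adam from "On the Variance of the Adaptive Learning Rate and Beyond" (Liu et al., 2019). A minimal NumPy sketch of the published update rule (variable names are mine):

```python
import numpy as np

def radam_step(theta, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One RAdam update: rectify the adaptive step while the variance
    estimate is unreliable (early in training), else fall back to momentum."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    m_hat = m / (1 - beta1**t)
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    rho_t = rho_inf - 2.0 * t * beta2**t / (1.0 - beta2**t)
    if rho_t > 4.0:  # variance is tractable: apply the rectification term
        r_t = np.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf)
                      / ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        denom = np.sqrt(v / (1 - beta2**t)) + eps
        theta = theta - lr * r_t * m_hat / denom
    else:            # variance undefined: plain momentum step
        theta = theta - lr * m_hat
    return theta, m, v
```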
A repository for my master's thesis research on the fusion of visual SLAM and GPS; it contains the research paper, code, and supporting data.
PyTorch LSTM for reinforcement learning on Atari games from OpenAI Universe, using Google DeepMind's Asynchronous Advantage Actor-Critic (A3C) algorithm, which is considerably more efficient than DQN and largely supersedes it. Works across many games.
A deep learning and preprocessing framework in Rust with CPU and GPU support.
PyTorch implementation of the Adam-atan2 optimizer proposed by Google DeepMind
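For context, Adam-atan2 replaces Adam's epsilon-guarded division with a bounded atan2. A minimal NumPy sketch of that core idea (the actual implementation also applies scaling constants, omitted here):

```python
import numpy as np

def adam_atan2_step(theta, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999):
    """Adam variant replacing m_hat / (sqrt(v_hat) + eps) with
    atan2(m_hat, sqrt(v_hat)): bounded, scale-invariant, no eps to tune."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * np.arctan2(m_hat, np.sqrt(v_hat))
    return theta, m, v
```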
A tour of different optimization algorithms in PyTorch.
ADAS (Adaptive Step Size) is an optimizer that, rather than merely normalizing the gradient as most optimizers do, adapts the step size itself, aiming to make step-size scheduling obsolete and achieve state-of-the-art training performance.
Unofficial PyTorch implementation of switching from Adam to SGD optimization.
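A minimal PyTorch sketch of the switching idea, assuming a fixed-step trigger; the published method (SWATS) instead switches when a projected SGD learning rate stabilizes:

```python
import torch

class AdamToSGD:
    """Sketch of mid-training switching from Adam to SGD. A fixed step
    count stands in here for the paper's convergence-based trigger."""

    def __init__(self, params, switch_step=10_000, adam_lr=1e-3, sgd_lr=0.1):
        self.params = list(params)
        self.opt = torch.optim.Adam(self.params, lr=adam_lr)
        self.switch_step, self.sgd_lr, self.t = switch_step, sgd_lr, 0

    def step(self):
        self.t += 1
        if self.t == self.switch_step:  # hand over to SGD with momentum
            self.opt = torch.optim.SGD(self.params, lr=self.sgd_lr,
                                       momentum=0.9)
        self.opt.step()

    def zero_grad(self):
        self.opt.zero_grad()
```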
A comparison of the Lion and Adam optimizers
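For context, Lion (Chen et al., 2023) keeps a single momentum buffer and takes sign-based steps, in contrast to Adam's variance-normalized updates. A minimal NumPy sketch of one Lion update:

```python
import numpy as np

def lion_step(theta, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update: a sign-based step from interpolated momentum plus
    decoupled weight decay; one state buffer versus Adam's two."""
    direction = np.sign(beta1 * m + (1 - beta1) * g)  # bounded, sign-based step
    theta = theta - lr * (direction + wd * theta)     # decoupled weight decay
    m = beta2 * m + (1 - beta2) * g                   # momentum updated afterwards
    return theta, m
```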
Easy-to-use linear and non-linear solver
Toy implementations of some popular ML optimizers using Python/JAX
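In that spirit, a toy optimizer in JAX is typically a pair of pure functions threaded through the training loop. A minimal heavy-ball momentum sketch (names are illustrative, not from the repository):

```python
import jax.numpy as jnp

def momentum_init(params):
    """Velocity buffer of the same shape as the parameters."""
    return jnp.zeros_like(params)

def momentum_update(params, grads, velocity, lr=0.01, mu=0.9):
    """One heavy-ball momentum step as a pure function: no hidden state,
    the caller threads (params, velocity) through the training loop."""
    velocity = mu * velocity - lr * grads
    return params + velocity, velocity
```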
Deep learning projects including applications (face recognition, neural style transfer, autonomous driving, sign language reading, music generation, translation, speech recognition and NLP) and theories (CNNs, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, hyperparameter tuning, regularization, optimization, Residual Networks). Deep Learning Specialization by Andrew Ng, deeplearning.ai
The Partially Adaptive Momentum Estimation (Padam) method from the paper "Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks" (IJCAI 2020)
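Padam raises the second-moment term to a partial power p in (0, 1/2], interpolating between SGD with momentum (p near 0) and AMSGrad (p = 1/2). A minimal NumPy sketch following the paper's algorithm:

```python
import numpy as np

def padam_step(theta, g, m, v, v_max, t, lr=0.1, beta1=0.9, beta2=0.999,
               p=0.125, eps=1e-8):
    """One Padam update: the AMSGrad second moment is raised to a partial
    power p; p = 1/2 recovers AMSGrad, p -> 0 approaches SGD+momentum."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    v_max = np.maximum(v_max, v)              # AMSGrad max keeps denom monotone
    theta = theta - lr * m / (v_max**p + eps)
    return theta, m, v, v_max
```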