Repository navigation
xnor-net
micronet, a model compression and deployment library. Compression: 1) quantization: quantization-aware training (QAT) with high-bit (>2b) methods (DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) ternary/binary methods (TWN/BNN/XNOR-Net), plus post-training quantization (PTQ), 8-bit (TensorRT); 2) pruning: normal, regular, and group convolutional channel pruning; 3) group convolution structure; 4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape
BinaryNets in TensorFlow with XNOR GEMM op
An implementation of a variation of Sketch-A-Net built with XNOR ConvNets in TensorFlow
XNOR-Net, a cuDNN5-supported version of XNOR-Net-caffe: https://github.com/loswensiana/BWN-XNOR-caffe
A PyTorch implementation of a real XNOR-popcount (1-bit op) GEMM Linear extension, supporting both CPU and CUDA
A hardware implementation of a feed-forward convolutional neural network (XNOR-Net) that achieves faster execution by replacing vector-matrix multiplication with an "XNOR + popcount" operation (see the sketch after this list)
Markov Chain Monte Carlo binary network optimization
Official repository for the research article "Pruning vs XNOR-Net: A Comprehensive Study on Deep Learning for Audio Classification in Microcontrollers"
XNOR-Net with binary conv2d kernels and an XNOR GEMM op, supporting both CPU and GPU.
A simple and very crude ANN predicting XNOR, written from scratch using NumPy
An implementation of BiDet
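
Several of the entries above (the XNOR GEMM ops and the hardware XNOR-Net) rely on the same core trick: a dot product of two {-1, +1} vectors can be computed as an XNOR followed by a popcount once the vectors are bit-packed. The snippet below is a minimal Python/NumPy sketch of that identity only; it is not code from any of the listed repositories, and the packing scheme and function names are illustrative assumptions.

```python
# Minimal sketch of the XNOR + popcount trick (assumed packing, not from any repo above):
# for two n-element {-1, +1} vectors packed into bits (1 -> +1, 0 -> -1),
# dot(a, b) == 2 * popcount(XNOR(a_bits, b_bits)) - n.
import numpy as np

def xnor_popcount_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed as integers."""
    mask = (1 << n) - 1                  # keep only the n packed bits
    xnor = ~(a_bits ^ b_bits) & mask     # XNOR of the packed vectors
    matches = bin(xnor).count("1")       # popcount: positions where signs agree
    return 2 * matches - n               # agree -> +1, disagree -> -1

# Check against an ordinary dot product on random sign vectors.
rng = np.random.default_rng(0)
n = 16
a = rng.choice([-1, 1], size=n)
b = rng.choice([-1, 1], size=n)
pack = lambda v: int("".join("1" if x > 0 else "0" for x in v), 2)
assert xnor_popcount_dot(pack(a), pack(b), n) == int(a @ b)
```

Real XNOR GEMM kernels pack bits into machine words and use hardware popcount instructions inside the matrix-multiply loop; the sketch only verifies the underlying arithmetic identity.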