float16
Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
Up to 200x faster dot products & similarity metrics for Python, Rust, C, JS, and Swift, supporting f64, f32, f16 real & complex, i8, and bit vectors, with SIMD kernels for AVX2, AVX-512, NEON, SVE, and SVE2 📐
Half-precision floating point types f16 and bf16 for Rust.
IEEE 754 half-precision floating-point ponyfill
float16 provides the IEEE 754 half-precision format (binary16) with correct conversions to and from float32.
🎯 Gradient Accumulation for TensorFlow 2
A modular, accelerator-ready machine learning framework built in Go that speaks float8/16/32/64. Designed with clean architecture, strong typing, and native concurrency for scalable, production-ready AI systems. Ideal for engineers who value simplicity, speed, and maintainability.
TFLite applications: optimized .tflite models (lightweight, low-latency) and code to run them directly on your microcontroller!
The main purpose of this library is to provide functions for converting to and from half-precision (16-bit) floating-point numbers. It also provides basic arithmetic and comparison functions for half floats.
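Several of the libraries above center on the same float32 ↔ binary16 conversion. As a minimal, library-free sketch of what such a conversion does, Python's `struct` module (3.6+) supports the IEEE 754 binary16 layout via the `'e'` format, which makes the round trip and its precision loss easy to observe:

```python
import struct

def float_to_half_bits(x: float) -> int:
    # Pack x as IEEE 754 binary16 ('e' format), then read the raw 16 bits.
    return struct.unpack('<H', struct.pack('<e', x))[0]

def half_bits_to_float(bits: int) -> float:
    # Reinterpret 16 raw bits as a binary16 value, widened back to a Python float.
    return struct.unpack('<e', struct.pack('<H', bits))[0]

print(hex(float_to_half_bits(1.0)))   # 0x3c00 (sign 0, exponent 15, fraction 0)
print(half_bits_to_float(0x7C00))     # inf
# 0.1 is not exactly representable; the nearest half is 0.0999755859375.
print(half_bits_to_float(float_to_half_bits(0.1)))
```

The same bit patterns (e.g. `0x3C00` for 1.0, `0x7C00` for +∞) are what the C, Rust, and JS libraries listed here produce; only the surrounding API differs.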
Minimum safe half-precision floating-point integer.
Half-precision floating-point positive infinity.
The bias of a half-precision floating-point number's exponent.
Half-precision floating-point mathematical constants.
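The constants above all follow from the binary16 layout (1 sign bit, 5 exponent bits, 10 fraction bits). A short sketch deriving them, with names chosen here for illustration rather than taken from any one of the listed libraries:

```python
EXPONENT_BITS = 5
FRACTION_BITS = 10

# Exponent bias: 2^(5-1) - 1 = 15.
EXPONENT_BIAS = (1 << (EXPONENT_BITS - 1)) - 1

# Integers up to 2^11 = 2048 are exactly representable (10 fraction bits
# plus the implicit leading 1), so the minimum safe integer is -2048.
MAX_SAFE_INTEGER = 1 << (FRACTION_BITS + 1)
MIN_SAFE_INTEGER = -MAX_SAFE_INTEGER

# Positive infinity: all exponent bits set, fraction zero -> 0x7C00.
PINF_BITS = ((1 << EXPONENT_BITS) - 1) << FRACTION_BITS

print(EXPONENT_BIAS)        # 15
print(MIN_SAFE_INTEGER)     # -2048
print(hex(PINF_BITS))       # 0x7c00
```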