To create such an FPGA implementation, we compress the trained CNN by applying quantization approaches. As opposed to fine-tuning an already trained network, this involves retraining the CNN from scratch with constrained bitwidths for the weights and activations. This procedure is known as quantization-aware training (QAT).
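The core mechanism behind training with constrained bitwidths can be sketched with a toy fake-quantization step: the forward pass uses weights rounded to a low-bit fixed-point grid, while a full-precision shadow copy continues to receive gradient updates (a generic illustration of QAT, not the specific implementation used here; the function name and bit width are illustrative):

```python
def fake_quantize(x, bits=4):
    """Simulate signed fixed-point quantization of a value in [-1, 1).

    The value is clipped to the representable range, rounded to the
    nearest level of a `bits`-bit signed fixed-point grid, and mapped
    back to float. During QAT the forward pass uses these quantized
    values, while the backward pass treats the rounding as identity
    (straight-through estimator), so gradients still reach the
    full-precision weights.
    """
    levels = 2 ** (bits - 1)                    # 8 fractional levels for 4 bits
    x = max(-1.0, min(x, 1.0 - 1.0 / levels))   # clip to representable range
    return round(x * levels) / levels           # snap to the nearest grid point

# A full-precision "shadow" weight keeps accumulating gradient updates;
# only its quantized copy is used in the forward pass.
w_fp = 0.3141
w_q = fake_quantize(w_fp, bits=4)  # → 0.375, the nearest 4-bit level
```

Because the rounding is applied only in the forward pass, the network learns weights that remain accurate once restricted to the FPGA's low-bit arithmetic.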