Researchers have proposed a hash-based method for training binary neural networks that improves accuracy on ResNet-18 by 3% over the previous state of the art.

The research team led by Cheng Jian recently introduced a hash-based method for training binary neural networks, revealing a close connection between inner-product-preserving hashing and binary weight networks. The approach shows that binarizing network parameters can be reformulated as a hashing problem. On ResNet-18, the method improved accuracy by 3% over existing state-of-the-art techniques.

In recent years, deep convolutional neural networks have made significant strides in computer vision tasks such as image recognition, object tracking, and semantic segmentation. In many practical scenarios they already perform well enough to be deployed in real-world applications, encouraging further exploration across diverse domains. Deploying them, however, raises two major obstacles: large parameter size and high computational complexity. The sheer number of parameters demands storage and memory that mobile and embedded devices often lack, while the heavy computation slows inference and increases power consumption, making these networks ill-suited to resource-constrained environments.

To address these issues, a variety of network acceleration and compression techniques have been developed. One promising direction is network parameter binarization, in which weights are restricted to +1 and -1. Multiplications can then be replaced with additions, sharply reducing hardware requirements and computation time, and each parameter occupies a single bit instead of a 32-bit float, yielding a 32x compression ratio (a short illustrative sketch appears below). This drastic loss of precision, however, introduces significant quantization errors that often degrade accuracy, so training binary networks without sacrificing performance remains a key challenge.

Hu Qinghao and colleagues from the Institute of Automation proposed a novel hash-based training method built on the observation that inner-product-preserving hashing and binary weight networks are fundamentally related. The core idea is to preserve inner-product similarity during binarization rather than focusing solely on minimizing quantization error; this keeps the original and quantized representations better aligned and improves overall accuracy (the idea is also sketched below). Evaluated on VGG9, AlexNet, and ResNet-18, the method outperformed existing binary-weight approaches, achieving the 3% improvement on ResNet-18 noted above and demonstrating that the hash-based strategy maintains both efficiency and accuracy.

Interestingly, there is an intriguing parallel between binary weight networks and biological neural circuits. Dasgupta et al. [2] showed that the olfactory neural circuit of the fruit fly functions like a specialized hash, projecting data into a sparse binary space while preserving similarity relationships (a sketch of this scheme also follows). This suggests that binary connections in neural networks may have a biological basis and could inspire new insights into how information is processed in both artificial and natural systems.

The work has been accepted for an oral presentation at AAAI 2018 [1], indicating its significance in the field of deep learning and neural network compression.
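To make the binarization arithmetic concrete, here is a minimal NumPy sketch of the standard binary-weight scheme the article describes (not the paper's hashing method): each weight is replaced by its sign, one real-valued scale is kept per output column, and the matrix product then needs only additions and subtractions. The function name and the per-column scaling choice are illustrative assumptions.

```python
import numpy as np

def binarize(W):
    # Replace each weight by its sign and keep one real scale per
    # output column; alpha = mean(|w|) minimizes ||w - alpha*b||^2
    # for b = sign(w), the usual binary-weight choice.
    B = np.where(W >= 0, 1.0, -1.0)    # 1 bit per weight
    alpha = np.abs(W).mean(axis=0)     # per-output-column scale
    return B, alpha

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 64))         # full-precision layer weights
B, alpha = binarize(W)

x = rng.normal(size=256)
exact = x @ W                          # 32-bit multiply-accumulate
approx = (x @ B) * alpha               # signs only: adds/subtracts
print(np.abs(exact - approx).mean())   # average quantization error
```

Because B stores one bit per entry while W stores 32, the storage drops by 32x, exactly the compression ratio cited above.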
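The paper's exact optimization is not reproduced here; the following toy sketch only conveys the inner-product-preserving idea under simple assumptions. For a single filter, it alternates between a closed-form scale and greedy sign flips so that the binarized layer responses stay close to the original ones. All names and update rules are illustrative, not the authors' algorithm.

```python
import numpy as np

def ip_preserving_binarize(X, w, iters=10):
    # Alternately solve min_{alpha, b} ||X w - alpha * X b||^2 with
    # b in {+1,-1}^n: alpha has a closed form given b, and b is
    # improved by greedy coordinate sign flips given alpha.
    b = np.where(w >= 0, 1.0, -1.0)      # start from plain sign(w)
    target = X @ w                       # layer responses to preserve
    for _ in range(iters):
        Xb = X @ b
        alpha = (Xb @ target) / (Xb @ Xb)   # least-squares scale
        for j in range(len(b)):
            b[j] = -b[j]                    # tentatively flip one sign
            if np.linalg.norm(target - alpha * (X @ b)) < \
               np.linalg.norm(target - alpha * Xb):
                Xb = X @ b                  # flip helps: keep it
            else:
                b[j] = -b[j]                # flip hurts: revert
    return alpha, b

rng = np.random.default_rng(1)
X = rng.normal(size=(512, 64))    # a batch of inputs to the layer
w = rng.normal(size=64)           # one full-precision filter
alpha, b = ip_preserving_binarize(X, w)
print(np.linalg.norm(X @ w - alpha * (X @ b)))   # residual after fitting
```

The contrast with the previous sketch is the objective: instead of making the binary weights close to the real ones, the fit is judged by how well inner products with actual data are preserved.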
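For the fly-circuit analogy, here is a compact sketch of the hashing scheme described by Dasgupta et al. [2]: a sparse binary random projection expands the input, and a winner-take-all step keeps only the top-k responses as a sparse binary tag. The dimensions and sparsity levels below are arbitrary choices for illustration.

```python
import numpy as np

def fly_hash(x, M, k=16):
    # Expand the input with a sparse binary random projection, then
    # keep only the k strongest responses (winner-take-all) as a
    # sparse binary tag; similar inputs receive similar tags.
    y = M @ x
    tag = np.zeros_like(y)
    tag[np.argsort(y)[-k:]] = 1.0
    return tag

rng = np.random.default_rng(2)
d, D = 50, 2000                                  # input dim -> expanded dim
M = (rng.random((D, d)) < 0.1).astype(float)     # each unit samples ~10% of inputs
x1 = rng.normal(size=d)
x2 = x1 + 0.05 * rng.normal(size=d)              # a slightly perturbed copy
t1, t2 = fly_hash(x1, M), fly_hash(x2, M)
print(int((t1 * t2).sum()), "of 16 active bits shared")
```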
References:
[1] Hu Q, Wang P, Cheng J. From Hashing to CNNs: Training Binary Weight Networks via Hashing. AAAI, 2018.
[2] Dasgupta S, Stevens CF, Navlakha S. A neural algorithm for a fundamental computing problem. Science, 2017, 358(6364): 793-796.
