The popularity of convolutional neural network (CNN) models and the ubiquity of CPUs mean that better inference performance can deliver significant gains to more users than ever before. As multi-core processors become the norm, efficient threading is required to fully exploit that parallelism. This blog post covers recent advances in the Intel® Distribution of [...]
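To illustrate the core idea of keeping multiple cores busy with independent inference requests, here is a minimal, hypothetical sketch. It is not OpenVINO™ code; the `infer` function is a stand-in for a real CNN forward pass, and the thread pool plays the role the toolkit's scheduler would play.

```python
from concurrent.futures import ThreadPoolExecutor

def infer(sample):
    # Stand-in for a real CNN forward pass (hypothetical workload).
    return sum(x * x for x in sample)

def run_parallel(batch, num_threads=4):
    # Distribute independent inference requests across worker threads,
    # mirroring the idea of keeping all CPU cores busy.
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return list(pool.map(infer, batch))

batch = [[1, 2], [3, 4], [5, 6]]
print(run_parallel(batch))  # → [5, 25, 61]
```

In a real deployment, the per-request work would be a compiled model's forward pass, and the runtime (not user code) would pin threads and manage memory to avoid oversubscription.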

The post Maximize CPU Inference Performance with Improved Threads and Memory Management in Intel® Distribution of OpenVINO™ toolkit appeared first on Blogs@Intel.


Source: Intel – Blog