https://bugs.kde.org/show_bug.cgi?id=497565

--- Comment #3 from Michael Miller <michael_mil...@msn.com> ---
(In reply to js333031 from comment #2)
> Thank you Mike. Please see
> https://github.com/opencv/opencv/wiki/Intel-OpenVINO-backend#usage
> 
> The link, if I understood correctly, states that the OpenVINO backend will be
> used if it's installed.  The optional part of that section mentions device
> selection. That would be useful to select between integrated/discrete Intel
> GPUs or a VPU.

Here's a list of the targets and backends I'm currently working to support:

>    // Map user-facing backend names to OpenCV DNN backend identifiers.
>    const std::map<std::string, int> str2backend
>    {
>        { "default", cv::dnn::DNN_BACKEND_DEFAULT          },
>        { "halide",  cv::dnn::DNN_BACKEND_HALIDE           },
>        { "ie",      cv::dnn::DNN_BACKEND_INFERENCE_ENGINE },
>        { "opencv",  cv::dnn::DNN_BACKEND_OPENCV           },
>        { "cuda",    cv::dnn::DNN_BACKEND_CUDA             }
>    };
>
>    // Map user-facing target names to OpenCV DNN target identifiers.
>    const std::map<std::string, int> str2target
>    {
>        { "cpu",         cv::dnn::DNN_TARGET_CPU           },
>        { "opencl",      cv::dnn::DNN_TARGET_OPENCL        },
>        { "myriad",      cv::dnn::DNN_TARGET_MYRIAD        },
>        { "vulkan",      cv::dnn::DNN_TARGET_VULKAN        },
>        { "opencl_fp16", cv::dnn::DNN_TARGET_OPENCL_FP16   },
>        { "cuda",        cv::dnn::DNN_TARGET_CUDA          },
>        { "cuda_fp16",   cv::dnn::DNN_TARGET_CUDA_FP16     }
>    };

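Roughly, the lookup against those maps would go something like this; backendName and targetName are just placeholder variables standing in for whatever comes from the settings:

>    int backend = cv::dnn::DNN_BACKEND_DEFAULT;
>    int target  = cv::dnn::DNN_TARGET_CPU;
>
>    // Fall back to the defaults above when a name isn't in the table.
>    const auto b = str2backend.find(backendName);
>    if (b != str2backend.end())
>        backend = b->second;
>
>    const auto t = str2target.find(targetName);
>    if (t != str2target.end())
>        target = t->second;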

You can see cv::dnn::DNN_BACKEND_INFERENCE_ENGINE is already on the list.  It
looks like the correct combination for OpenVINO is the DNN_TARGET_OPENCL target
with the DNN_BACKEND_INFERENCE_ENGINE backend.  The issue is that most of the
models used by digiKam are in .onnx format, and the documentation you linked
says DNN_BACKEND_INFERENCE_ENGINE can only be used with OpenVINO's .xml/.bin
model format.  I will check whether that has changed, since that documentation
is a year and a half old, which is ancient by OpenCV standards.
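For reference, the wiring for that combination would look something like the
sketch below; the model path is just a placeholder, and whether the IE backend
actually accepts an .onnx model is exactly the open question above:

>    #include <opencv2/dnn.hpp>
>
>    // Sketch only: load an .onnx model and request the OpenVINO
>    // (Inference Engine) backend with an OpenCL target, i.e. an Intel GPU.
>    cv::dnn::Net net = cv::dnn::readNetFromONNX("model.onnx");
>    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
>    net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL);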

When selecting a target and backend, I need to check whether the hardware
installed in the computer supports the model.  If it does, I enable GPU
processing; otherwise processing defaults to the CPU.

Right now the only check I have is whether OpenCL is available.  I will add
checks for other hardware soon.
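To give an idea, the sort of check I mean looks roughly like this; it is purely
illustrative, with backend and target being the variables chosen from the maps
above:

>    #include <opencv2/core/ocl.hpp>
>    #include <opencv2/core/cuda.hpp>
>
>    // Current check: is any OpenCL device available?
>    const bool haveOpenCL = cv::ocl::haveOpenCL();
>
>    // Planned: similar probes for other hardware, e.g. CUDA
>    // (getCudaEnabledDeviceCount() returns 0 when OpenCV was built
>    // without CUDA support or no device is present).
>    const bool haveCuda = (cv::cuda::getCudaEnabledDeviceCount() > 0);
>
>    if (!haveOpenCL && !haveCuda)
>    {
>        // No usable GPU found: stay on the stock OpenCV CPU path.
>        backend = cv::dnn::DNN_BACKEND_OPENCV;
>        target  = cv::dnn::DNN_TARGET_CPU;
>    }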

Cheers,
Mike

