query_available_dl_devices (Operator)

Name

query_available_dl_devices — Get the list of deep-learning-capable hardware devices.

Signature

query_available_dl_devices( : : GenParamName, GenParamValue : DLDeviceHandles)

Herror T_query_available_dl_devices(const Htuple GenParamName, const Htuple GenParamValue, Htuple* DLDeviceHandles)

void QueryAvailableDlDevices(const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* DLDeviceHandles)

static HDlDeviceArray HDlDevice::QueryAvailableDlDevices(const HTuple& GenParamName, const HTuple& GenParamValue)

void HDlDevice::QueryAvailableDlDevices(const HString& GenParamName, const HString& GenParamValue)

void HDlDevice::QueryAvailableDlDevices(const char* GenParamName, const char* GenParamValue)

void HDlDevice::QueryAvailableDlDevices(const wchar_t* GenParamName, const wchar_t* GenParamValue)   (Windows only)

static void HOperatorSet.QueryAvailableDlDevices(HTuple genParamName, HTuple genParamValue, out HTuple DLDeviceHandles)

static HDlDevice[] HDlDevice.QueryAvailableDlDevices(HTuple genParamName, HTuple genParamValue)

void HDlDevice.QueryAvailableDlDevices(string genParamName, string genParamValue)

def query_available_dl_devices(gen_param_name: MaybeSequence[str], gen_param_value: MaybeSequence[Union[int, float, str]]) -> Sequence[HHandle]

def query_available_dl_devices_s(gen_param_name: MaybeSequence[str], gen_param_value: MaybeSequence[Union[int, float, str]]) -> HHandle

Description

query_available_dl_devices returns a list of handles. Each handle refers to a deep-learning-capable hardware device (hereafter referred to as device) that can be used for the inference or training of a deep learning model. For each returned device, every parameter mentioned in GenParamName must be equal to at least one of its corresponding values in GenParamValue. A parameter can be given more than one value by duplicating its name in GenParamName and adding a different corresponding value in GenParamValue.

A deep-learning-capable device is either supported directly through HALCON or through an AI 2-interface.

The devices that are supported directly through HALCON are equivalent to those that can be set to a deep learning model via set_dl_model_param using 'runtime' = 'cpu' or 'runtime' = 'gpu'. HALCON provides an internal implementation for the inference or training of a deep learning model for those devices. See Deep Learning for more details.
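
As a minimal sketch (the model file name 'pretrained_model.hdl' is hypothetical; the 'runtime' parameter and its value 'cpu' are the ones named above), selecting the HALCON-internal CPU implementation for a model could look like this:

* Sketch: load a model (hypothetical file name) and let HALCON run it on the CPU.
read_dl_model ('pretrained_model.hdl', DLModelHandle)
set_dl_model_param (DLModelHandle, 'runtime', 'cpu')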

Devices that are supported through the AI 2-interface can also be set to a deep learning model using set_dl_model_param. In this case the inference is not executed by HALCON but by the device itself.
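
A sketch of this case, assuming a model handle DLModelHandle as above, that set_dl_model_param accepts a DLDevice handle via its 'device' parameter (see the reference of set_dl_model_param), and that an AI 2-interface is installed whose name matches the filter value used here (the name 'tensorrt' is only an example):

* Sketch: pick a device offered through an installed AI 2-interface
* (the interface name is an example) and assign it to the model.
query_available_dl_devices (['ai_accelerator_interface'], ['tensorrt'], DLDeviceHandles)
* Assumes at least one matching device was found.
set_dl_model_param (DLModelHandle, 'device', DLDeviceHandles[0])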

query_available_dl_devices returns a handle for each deep-learning-capable device supported through HALCON and through an inference engine.

If a device is supported through HALCON and one or several inference engines, query_available_dl_devices returns a handle for HALCON and for each inference engine.
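
Since the same physical device can therefore appear more than once in DLDeviceHandles, the individual entries can be inspected with get_dl_device_param, for example (a sketch; only the 'name' parameter from the value list below is read here):

* Sketch: collect the name of every returned device.
query_available_dl_devices ([], [], DLDeviceHandles)
DeviceNames := []
for Index := 0 to |DLDeviceHandles| - 1 by 1
    get_dl_device_param (DLDeviceHandles[Index], 'name', DeviceName)
    DeviceNames := [DeviceNames, DeviceName]
endfor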

GenParamName can be used to filter the devices. All GenParamName values that are gettable by get_dl_device_param and that do not return a handle-typed value for GenParamValue are supported for filtering. See the operator reference of get_dl_device_param for the list of gettable parameters. In addition, the following values are supported:

'runtime'

The devices that are directly supported by HALCON for this runtime.

Possible values (standard): 'cpu', 'gpu'.

The same parameter name can appear multiple times in GenParamName. In this case the filter combines the corresponding values with a logical 'or'. Please see the example code below for how to use the filter.

Execution Information

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific operators even though the handle is used as an input parameter by those operators.

Parameters

GenParamName (input control)  attribute.name(-array)  (string)

Name of the generic parameter.

Default value: []

List of values: 'ai_accelerator_interface', 'calibration_precisions', 'cast_precisions', 'conversion_supported', 'id', 'inference_only', 'name', 'optimize_for_inference_params', 'precisions', 'runtime', 'settable_device_params', 'type'

GenParamValue (input control)  attribute.value(-array)  (string / integer / real)

Value of the generic parameter.

Default value: []

Suggested values:

DLDeviceHandles (output control)  dl_device(-array)  (handle)

Tuple of DLDevice handles.

Example (HDevelop)

* Query all deep-learning-capable hardware devices
query_available_dl_devices ([], [], DLDeviceHandles)

* Query all GPUs with ID 0 or 2
query_available_dl_devices (['type', 'id', 'id'], ['gpu', 0, 2],\
                            DLDeviceHandles)

* Query the unique GPU with ID 1 supported by HALCON
query_available_dl_devices (['runtime', 'id'], ['gpu', 1], DLDeviceHandles)
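
* A possible follow-up (sketch): inspect the device that was found.
* Assumes the query above returned at least one handle; 'name' and 'type'
* are parameters gettable via get_dl_device_param.
get_dl_device_param (DLDeviceHandles[0], 'name', DeviceName)
get_dl_device_param (DLDeviceHandles[0], 'type', DeviceType)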

Result

If the parameters are valid, the operator query_available_dl_devices returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

Possible Successors

get_dl_device_param

Module

Foundation