apply_dl_model (Operator)

Name

apply_dl_model — Apply a deep-learning-based network on a set of images for inference.

Signature

apply_dl_model( : : DLModelHandle, DLSampleBatch, Outputs : DLResultBatch)

Herror T_apply_dl_model(const Htuple DLModelHandle, const Htuple DLSampleBatch, const Htuple Outputs, Htuple* DLResultBatch)

void ApplyDlModel(const HTuple& DLModelHandle, const HTuple& DLSampleBatch, const HTuple& Outputs, HTuple* DLResultBatch)

HDictArray HDlModel::ApplyDlModel(const HDictArray& DLSampleBatch, const HTuple& Outputs) const

static void HOperatorSet.ApplyDlModel(HTuple DLModelHandle, HTuple DLSampleBatch, HTuple outputs, out HTuple DLResultBatch)

HDict[] HDlModel.ApplyDlModel(HDict[] DLSampleBatch, HTuple outputs)

def apply_dl_model(dlmodel_handle: HHandle, dlsample_batch: Sequence[HHandle], outputs: Sequence[str]) -> Sequence[HHandle]

Description

apply_dl_model applies the deep-learning-based network given by DLModelHandle to the batch of input images handed over through the tuple of dictionaries DLSampleBatch. The operator returns DLResultBatch, a tuple with a result dictionary DLResult for every input image.

Please see the chapter Deep Learning / Model for more information on the concept and the dictionaries of the deep learning model in HALCON.

In order to apply the network to images, you have to hand them over through a tuple of dictionaries DLSampleBatch, where each dictionary refers to a single image. You can create such a dictionary conveniently using the procedure gen_dl_samples_from_images. The tuple DLSampleBatch can contain an arbitrary number of dictionaries. The operator apply_dl_model always processes a batch with up to 'batch_size' images simultaneously. In case the tuple contains more images, apply_dl_model iterates over the necessary number of batches internally. For a DLSampleBatch with fewer than 'batch_size' images, the tuple is padded to a full batch, which means that the time required to process a DLSampleBatch is independent of whether the batch is filled up or consists of just a single image. This also means that if fewer images than 'batch_size' are processed in one operator call, the network still requires the same amount of memory as for a full batch. The current value of 'batch_size' can be retrieved using get_dl_model_param.
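The batch arithmetic described above can be sketched in plain Python. Note that `num_internal_batches` is a hypothetical helper for illustration, not part of the HALCON API: the point is that runtime and memory are governed by the number of padded batches, not by how full the last batch is.

```python
import math

def num_internal_batches(num_samples: int, batch_size: int) -> int:
    """Number of full batches apply_dl_model would run internally:
    the sample tuple is split into chunks of 'batch_size', and the
    last chunk is padded to a full batch."""
    return math.ceil(num_samples / batch_size)

# With batch_size = 4, one image costs as much as four:
print(num_internal_batches(1, 4))   # -> 1
print(num_internal_batches(4, 4))   # -> 1
print(num_internal_batches(5, 4))   # -> 2
```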

Note that the images might have to be preprocessed before being fed into the operator apply_dl_model in order to fulfill the network requirements. You can retrieve the current requirements of your network, such as the image dimensions, using get_dl_model_param. The procedure preprocess_dl_dataset provides guidance on how to implement such a preprocessing stage.

The results are returned in DLResultBatch, a tuple with a dictionary DLResult for every input image. Please see the chapter Deep Learning / Model for more information on the output dictionaries in DLResultBatch and their keys. In Outputs you can specify which output data is returned in DLResult. Outputs can be a single string, a tuple of strings, or an empty tuple, with which you retrieve all possible outputs. If apply_dl_model is used with an AI²-interface, it might be required to set 'is_inference_output' = 'true' for all requested layers in Outputs before the model is optimized for the AI²-interface; see optimize_dl_model_for_inference and set_dl_model_layer_param for further details. The values for Outputs depend on the model type of your network:

Models of 'type'='3d_gripping_point_detection'

  • Outputs='[]': DLResult contains:

    • 'gripping_map': Binary image, indicating for each pixel of the scene whether the model predicted a gripping point (pixel value = 1.0) or not (0.0).

    • 'gripping_confidence': Image, containing raw, uncalibrated confidence values for every point in the scene.

Models of 'type'='anomaly_detection'

  • Outputs='[]': DLResult contains an image in which each pixel holds the score of the corresponding input image pixel. Additionally, it contains a score for the entire image.
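As a plain-Python illustration of how such a whole-image score is typically consumed downstream (the key name 'anomaly_score', the dictionary contents, and the threshold value here are all made-up assumptions, not HALCON code; in practice the threshold would be determined on validation data):

```python
# Hypothetical per-image decision based on an anomaly-detection result.
def is_anomalous(dlresult: dict, threshold: float) -> bool:
    """Classify the whole image as anomalous if its score exceeds
    a previously determined threshold (assumed key and value)."""
    return dlresult["anomaly_score"] > threshold

result = {"anomaly_score": 0.83}     # stand-in for a DLResult dictionary
print(is_anomalous(result, 0.5))     # -> True
print(is_anomalous({"anomaly_score": 0.12}, 0.5))  # -> False
```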

Models of 'type'='counting'

This model type cannot be run with the operator apply_dl_model.

Models of 'type'='gc_anomaly_detection'

For each value of Outputs, DLResult contains an image in which each pixel holds the score of the corresponding input image pixel. Additionally, it contains a score for the entire image.

  • Outputs='[]': The scores of each input image pixel are calculated as a combination of all available networks.

  • Outputs='anomaly_image_local': The scores of each input image pixel are calculated from the 'local' network only. If the 'local' network is not available, an error is raised.

  • Outputs='anomaly_image_global': The scores of each input image pixel are calculated from the 'global' network only. If the 'global' network is not available, an error is raised.

Models of 'type'='classification'

  • Outputs='[]': DLResult contains a tuple with confidence values in descending order, and tuples with the class names and class IDs sorted accordingly.
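This ordering guarantee can be illustrated with a small plain-Python sketch (the class names, IDs, and confidence values are invented for illustration; this is not HALCON code):

```python
# Confidences descending, with class names and IDs reordered in step.
confidences = [0.10, 0.75, 0.15]
class_names = ["scratch", "ok", "dent"]
class_ids   = [0, 1, 2]

# One permutation, applied consistently to all three tuples:
order = sorted(range(len(confidences)),
               key=lambda i: confidences[i], reverse=True)
sorted_conf  = [confidences[i] for i in order]
sorted_names = [class_names[i] for i in order]
sorted_ids   = [class_ids[i]   for i in order]

print(sorted_conf)   # -> [0.75, 0.15, 0.1]
print(sorted_names)  # -> ['ok', 'dent', 'scratch']
print(sorted_ids)    # -> [1, 2, 0]
```

The top-1 prediction is then simply the first entry of each sorted tuple.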

Models of 'type'='detection'

  • Outputs='[]': DLResult contains the bounding box coordinates as well as the inferred classes and their confidence values resulting from all levels.

  • Outputs=['bboxhead' + level + '_prediction', 'classhead' + level + '_prediction'], where level stands for the selected level, which lies between 'min_level' and 'max_level': DLResult contains the bounding box coordinates as well as the inferred classes and their confidence values resulting from the specified levels.

Models of 'type'='ocr_recognition'

  • Outputs='[]': DLResult contains the recognized word. Additionally, it contains candidates for each character of the word and their confidences.

Models of 'type'='ocr_detection'

Models of 'type'='segmentation'

  • Outputs='segmentation_image': DLResult contains an image in which each pixel value indicates the class the corresponding input image pixel has been assigned to.

  • Outputs='segmentation_confidence': DLResult contains an image in which each pixel holds the confidence value from the classification of the corresponding input image pixel.

  • Outputs='[]': DLResult contains all output values.
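How 'segmentation_image' and 'segmentation_confidence' relate can be illustrated with a plain-Python sketch on made-up per-class score maps. This only mirrors the per-pixel winner-takes-all idea; it is not the exact computation performed by the network:

```python
# Tiny 2x2 "image" with 3 classes: scores[cls][row][col] (invented data).
scores = [
    [[0.7, 0.1], [0.2, 0.3]],   # class 0
    [[0.2, 0.8], [0.5, 0.3]],   # class 1
    [[0.1, 0.1], [0.3, 0.4]],   # class 2
]
rows, cols, n_classes = 2, 2, 3

# Per pixel: the class with the highest score wins ...
segmentation_image = [
    [max(range(n_classes), key=lambda c: scores[c][r][k]) for k in range(cols)]
    for r in range(rows)
]
# ... and the winning score is that pixel's confidence.
segmentation_confidence = [
    [max(scores[c][r][k] for c in range(n_classes)) for k in range(cols)]
    for r in range(rows)
]

print(segmentation_image)       # -> [[0, 1], [1, 2]]
print(segmentation_confidence)  # -> [[0.7, 0.8], [0.5, 0.4]]
```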

Note

System requirements: To run this operator on a GPU by setting 'device' to 'gpu' (see set_dl_model_param), cuDNN and cuBLAS are required. For further details, please refer to the “Installation Guide”, paragraph “Requirements for Deep Learning and Deep-Learning-Based Methods”.

Execution Information

This operator supports canceling timeouts and interrupts.

Parameters

DLModelHandle (input_control)  dl_model → HDlModel, HTuple (handle)

Handle of the deep learning model.

DLSampleBatch (input_control)  dict-array → HDict, HTuple (handle)

Input data.

Outputs (input_control)  string-array → HTuple (string)

Requested outputs.

Default: []

List of values: [], 'bboxhead2_prediction', 'classhead2_prediction', 'segmentation_confidence', 'segmentation_image'

DLResultBatch (output_control)  dict-array → HDict, HTuple (handle)

Result data.

Result

If the parameters are valid, the operator apply_dl_model returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

Possible Predecessors

read_dl_model, train_dl_model_batch, train_dl_model_anomaly_dataset, set_dl_model_param

Module

Foundation. This operator uses dynamic licensing (see the “Installation Guide” for details). Which module is required depends on the specific usage of the operator:
3D Metrology, OCR/OCV, Deep Learning Inference