Thus, two transformations are applied to the 3D object model before
computing the model views within the pose range. The first
transformation is the translation of the origin of the coordinate
systems to the reference point. The second transformation is the
rotation of the 3D object model to the desired reference orientation
around the axes of the reference coordinate system. By combining
both transformations one obtains the reference pose of the 3D shape
model. The reference pose of the 3D shape model thus describes the
pose of the reference coordinate system with respect to the
coordinate system of the 3D object model defined by the CAD
file. Let t = (x,y,z)' be the coordinates of the
reference point of the 3D object model and R be the
rotation matrix containing the reference orientation. Then, a point
p_o given in the 3D object model coordinate system can
be transformed to a point p_s in the reference
coordinate system of the 3D shape model by applying the following
formula:

p_s = R^(-1) * (p_o - t)
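The same transformation can also be sketched in HDevelop by inverting the reference pose and applying the result to a point; the pose and point values below are placeholders:

* Hypothetical reference pose [x, y, z, rot_x, rot_y, rot_z, type].
RefPose := [0.05, 0.0, 0.02, 0, 0, 90, 0]
* The reference pose maps points from the reference coordinate system into
* the object model coordinate system; convert it to a transformation matrix.
pose_to_hom_mat3d (RefPose, HomMat3D)
* Invert the matrix to transform points the other way, i.e., from the object
* model coordinate system into the reference coordinate system.
hom_mat3d_invert (HomMat3D, HomMat3DInvert)
* Transform an example point p_o = (0.1, 0.2, 0.0) given in object model
* coordinates into the reference coordinate system.
affine_trans_point_3d (HomMat3DInvert, 0.1, 0.2, 0.0, Qx, Qy, Qz)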
For efficiency reasons
the model views are generated on multiple pyramid levels. On
higher levels fewer views are generated than on lower levels. With
the parameter 'num_levels' the number of pyramid levels
on which model views are generated can be specified. It should be
chosen as large as possible because this significantly reduces the
time necessary to find the model. On the other hand, the
number of levels must be chosen such that the shape
representations of the views on the highest pyramid level are
still recognizable and contain a sufficient number of points (at
least four). If not enough model points are generated for a
certain view, the view is deleted from the model and replaced by a
view on a lower pyramid level. If for all views on a pyramid
level not enough model points are generated, the number of levels
is reduced internally until for at least one view enough model
points are found on the highest pyramid level. If this procedure
would lead to a model with no pyramid levels, i.e., if the number
of model points is too small for all views already on the lowest
pyramid level, create_shape_model_3d returns an error
message. If 'num_levels' is set to 'auto'
(default value), create_shape_model_3d determines the
number of pyramid levels automatically. In this case, all model
views on all pyramid levels are automatically checked to determine
whether their shape representations are still recognizable. If the shape
representation of a certain view is found to be not recognizable,
the view is deleted from the model and replaced by a view on a
lower pyramid level. Note that if 'num_levels' is set to
'auto', the number of pyramid levels can be different for
different views. In rare cases, it might happen that
create_shape_model_3d determines a value for the number of
pyramid levels that is too large or too small. If the number of
pyramid levels is chosen too large, the model may not be
recognized in the image or it may be necessary to select very low
parameters for MinScore or Greediness in
find_shape_model_3d in order to find the model. If the
number of pyramid levels is chosen too small, the time required to
find the model in find_shape_model_3d may increase. In
these cases, the views on the pyramid levels should be checked by
using the output of get_shape_model_3d_contours.
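As a rough sketch of how this might look in HDevelop (the object model file, camera parameter file, pose range, and minimum contrast below are placeholder values), the number of pyramid levels can be fixed via the generic parameters and the resulting views can then be inspected with get_shape_model_3d_contours:

* Placeholder 3D object model and camera parameters.
read_object_model_3d ('my_object.dxf', 'm', [], [], ObjectModel3D, Status)
read_cam_par ('camera_parameters.dat', CamParam)
* Create the 3D shape model with 3 pyramid levels over a placeholder pose range.
create_shape_model_3d (ObjectModel3D, CamParam, 0, 0, 0, 'gba', -rad(45), rad(45), -rad(45), rad(45), 0, rad(360), 0.3, 0.4, 10, 'num_levels', 3, ShapeModel3DID)
* Inspect the shape representation of one view on the highest pyramid level
* (view indices are assumed to start at 1 here).
get_shape_model_3d_contours (ModelContours, ShapeModel3DID, 3, 1, ViewPose)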
The parameter 'fast_pose_refinement'
specifies whether the pose refinement during the search with
find_shape_model_3d is sped up. If
'fast_pose_refinement' is set to 'false', for
complex models with a large number of faces the pose refinement
step might amount to a significant part of the overall computation
time. If 'fast_pose_refinement' is set to
'true', some of the calculations that are necessary
during the pose refinement are already performed during the model
generation and stored in the model. Consequently, the pose
refinement during the search will be faster. Please note, however,
that in this case the memory consumption of the model may increase
significantly (typically by less than 30 percent).
Further note that the resulting poses that are returned by
find_shape_model_3d might slightly differ depending on the
value of 'fast_pose_refinement', because internally the
pose refinement is approximated if the parameter is set to
'true'.
List of values: 'true', 'false'
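A minimal sketch of enabling this precomputation, reusing the ObjectModel3D and CamParam placeholders from the sketch above; generic parameters are passed as parallel tuples of names and values:

* Enable the fast pose refinement together with a fixed number of pyramid levels.
GenParamName := ['num_levels','fast_pose_refinement']
GenParamValue := [3,'true']
create_shape_model_3d (ObjectModel3D, CamParam, 0, 0, 0, 'gba', -rad(45), rad(45), -rad(45), rad(45), 0, rad(360), 0.3, 0.4, 10, GenParamName, GenParamValue, ShapeModel3DID)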
In some cases the model
generation process might be very time consuming and the memory
consumption of the model might be very high. The reason for this
is that in these cases the number of views, which must be computed
and stored in the model, is very high. The larger the pose range
is chosen and the larger the objects appear in the image (measured
in pixels), the more views are necessary. Consequently, the use of
large images in particular (e.g., images exceeding a size of
640×480) can result in very large models.
Because the number of views is highest on lower pyramid levels,
the parameter 'lowest_model_level' can be used to exclude
the lower pyramid levels from the generation of views. The value
that is passed for 'lowest_model_level' determines the
lowest pyramid level down to which views are generated and stored
in the 3D shape model. If, for example, a value of 2 is
passed for large models, the time to generate the model as well as
the size of the resulting model is reduced to approximately one
third of the original values. If 'lowest_model_level' is
not passed, views are generated for all pyramid levels, which
corresponds to the behavior when passing a value of 1 for
'lowest_model_level'. If for
'lowest_model_level' a value larger than 1 is
passed, in find_shape_model_3d the tracking of matches
through the pyramid will be stopped at this level. However, if in
find_shape_model_3d a least-squares adjustment is chosen
for pose refinement, the matches are refined on the lowest pyramid
level using the least-squares adjustment. Note that for different
values of 'lowest_model_level' different matches might
be found during the search. Furthermore, the score of the matches
depends on the chosen method for pose refinement. Also note that
the higher 'lowest_model_level' is chosen, the higher the
portion of the refinement step with respect to the overall
run-time of find_shape_model_3d will be. As a consequence,
for higher values of 'lowest_model_level' the influence
of the generic parameter 'fast_pose_refinement' (see
above) on the run-time will increase. A large value for
'lowest_model_level' may, on the one hand, lead to long
computation times of find_shape_model_3d if
'fast_pose_refinement' is switched off
('false'). On the other hand, it may lead to decreased
accuracy if 'fast_pose_refinement' is switched on
('true') because in this mode the pose refinement is only
approximated. Therefore, the value for
'lowest_model_level' should be chosen as small as
possible. Furthermore, 'lowest_model_level' should be
chosen small enough such that the edges of the 3D object model
are still observable on this level.
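As an illustration, again reusing the placeholders from above, a reduced model can be created with 'lowest_model_level' set to 2 and a least-squares adjustment can then be requested during the search; the generic parameter 'pose_refinement' and its value 'least_squares_high' in find_shape_model_3d are assumed here:

* Exclude pyramid level 1 from the view generation to save time and memory.
create_shape_model_3d (ObjectModel3D, CamParam, 0, 0, 0, 'gba', -rad(45), rad(45), -rad(45), rad(45), 0, rad(360), 0.3, 0.4, 10, 'lowest_model_level', 2, ShapeModel3DID)
* Placeholder search image.
read_image (Image, 'my_search_image')
* Refine the matches on the lowest pyramid level by a least-squares adjustment
* (NumLevels = 0: use the pyramid levels stored in the model).
find_shape_model_3d (Image, ShapeModel3DID, 0.7, 0.9, 0, 'pose_refinement', 'least_squares_high', Pose, CovPose, Score)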
For models with particularly large model views, it may be useful
to reduce the number of model points by setting
'optimization' to a value different from 'none'.
If 'optimization' = 'none', all model points
are stored. In all other cases, the number of points is reduced
according to the value of 'optimization'. If the number
of points is reduced, it may be necessary in
find_shape_model_3d to set the parameter
Greediness to a smaller value, e.g., 0.7 or 0.8. For
models with small model views, the reduction of the number of
model points does not result in a speed-up of the search because
in this case usually significantly more potential instances of the
model must be examined. If 'optimization' is set to
'auto', create_shape_model_3d automatically
determines the reduction of the number of model points for each
model view.
List of values: 'auto',
'none', 'point_reduction_low',
'point_reduction_medium', 'point_reduction_high'
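A sketch of creating a model with a strong point reduction and compensating for it in the search with a slightly lower Greediness (again using the placeholders introduced above):

* Reduce the number of model points for models with large model views.
create_shape_model_3d (ObjectModel3D, CamParam, 0, 0, 0, 'gba', -rad(45), rad(45), -rad(45), rad(45), 0, rad(360), 0.3, 0.4, 10, 'optimization', 'point_reduction_high', ShapeModel3DID)
* Use a lower Greediness (here 0.8) to compensate for the reduced point set.
find_shape_model_3d (Image, ShapeModel3DID, 0.7, 0.8, 0, [], [], Pose, CovPose, Score)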
The parameter 'metric' determines the conditions
under which the model is recognized in the image. If
'metric' = 'ignore_part_polarity', the
contrast polarity is allowed to change only between different
parts of the model, whereas the polarity of model points that are
within the same model part must not change. Please note that the
term 'ignore_part_polarity' is easily misunderstood: it
means that polarity changes between neighboring model parts do not
influence the score, and hence are ignored. Appropriate model
parts are automatically determined. The size of the parts can be
controlled by the generic parameter 'part_size', which is
described below. Note that this metric only works for one-channel
images. Consequently, if the model is created by using this
metric and searched in a multi-channel image by using
find_shape_model_3d, an error will be returned. If
'metric' = 'ignore_local_polarity', the model
is found even if the contrast polarity changes for each individual
model point. This metric works for one-channel images as well as
for multi-channel images. The metric
'ignore_part_polarity' should be used if the images
contain strongly textured backgrounds or clutter objects, which
might result in wrong matches. Note that in general the scores of
the matches that are returned by find_shape_model_3d are
lower for 'ignore_part_polarity' than for
'ignore_local_polarity'. This should be kept in mind when
choosing the right value for the parameter MinScore of
find_shape_model_3d.
List of values: 'ignore_local_polarity',
'ignore_part_polarity'
The parameter 'part_size' determines the size of
the model parts that is used when 'metric' is set to
'ignore_part_polarity' (see above). The size must be
specified in pixels and should be approximately twice as large as
the size of the background texture in the image. For example, if
an object should be found in front of a chessboard with black and
white squares of size 5×5 pixels, 'part_size'
should be set to 10. Note that higher values of
'part_size' might also decrease the scores of correct
instances, especially when searching for objects with shiny or
reflective surfaces. Therefore, the risk of missing correct
instances might increase if 'part_size' is set to
a higher value. If 'metric' is set to
'ignore_local_polarity', the value of
'part_size' is ignored.
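A sketch combining both parameters for a part lying on a textured background, where the background texture size of 5 pixels is the chessboard example from above (model and camera placeholders as before):

* Ignore polarity changes between model parts, e.g., caused by a textured
* background, and set the part size to roughly twice the texture size.
create_shape_model_3d (ObjectModel3D, CamParam, 0, 0, 0, 'gba', -rad(45), rad(45), -rad(45), rad(45), 0, rad(360), 0.3, 0.4, 10, ['metric','part_size'], ['ignore_part_polarity',10], ShapeModel3DID)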
3D edges are only
included in the shape representations of the views if the angle
between the two 3D faces that are incident with the 3D object
model edge is at least 'min_face_angle'. If
'min_face_angle' is set to 0.0, all edges are
included. If 'min_face_angle' is set to π
(equivalent to 180 degrees), only the silhouette of the 3D object
model is included. This parameter can be used to suppress edges
within curved surfaces, e.g., the surface of a cylinder or
cone. Curved surfaces are approximated by multiple planar
faces. The edges between such neighboring planar faces should not
be included in the shape representation because they also do not
appear in real images of the model. Thus,
'min_face_angle' should be set sufficiently high to
suppress these edges. The effect of different values for
'min_face_angle' can be inspected by using
project_object_model_3d before calling
create_shape_model_3d. Note that if edges that are not
visible in the search image are included in the shape
representation, the performance (robustness and speed) of the
matching may decrease considerably.
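A sketch of such an inspection, assuming a pose within the intended pose range is available; the pose, the 30 degree threshold, and the generic parameters passed to project_object_model_3d are placeholder assumptions:

* Project the 3D object model for a candidate pose and suppress edges between
* nearly coplanar faces, since they do not appear in real images.
Pose := [0.0, 0.0, 0.35, 0, 0, 0, 0]
project_object_model_3d (ModelContours, ObjectModel3D, CamParam, Pose, ['data','hidden_surface_removal','min_face_angle'], ['faces','true',rad(30)])
* Display the projected contours for visual inspection.
dev_display (ModelContours)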
This value determines a
threshold for the selection of significant model components based
on the size of the components, i.e., connected components that
have fewer points than the specified minimum size are suppressed.
This threshold for the minimum size is divided by two for each
successive pyramid level.
The parameter 'model_tolerance'
specifies the tolerance of the projected 3D object model edges in
the image, given in pixels. The higher the value is chosen, the
fewer views need to be generated. Consequently, a higher value
results in models that are less memory consuming and faster to
find with find_shape_model_3d. On the other hand, if the
value is chosen too high, the robustness of the matching will
decrease. Therefore, this parameter should only be modified with
care. For most applications, a good compromise between speed and
robustness is obtained when setting 'model_tolerance' to
1.
If the system variable (see set_system)
'opengl_hidden_surface_removal_enable' is set to 'true'
(which is the default if it is available), the graphics card is used to accelerate
the computation of the visible faces in the model views. Depending on the
graphics card this is significantly faster than the analytic visibility
computation.
If 'fast_pose_refinement' is set to 'true', the
precomputations necessary for the pose refinement step in
find_shape_model_3d are also performed on the graphics card.
Be aware that the results of the OpenGL projection are slightly different
compared to the analytic projection.
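The OpenGL acceleration can be switched explicitly before the model is created; querying the parameter with get_system is assumed to be possible here:

* Query the current state of the OpenGL-based hidden surface removal.
get_system ('opengl_hidden_surface_removal_enable', OpenGLEnabled)
* Enable it explicitly (this is the default if OpenGL is available).
set_system ('opengl_hidden_surface_removal_enable', 'true')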
Execution Information
Multithreading type: reentrant (runs in parallel with non-exclusive operators).
Multithreading scope: global (may be called from any thread).
Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific operators even though the handle is used as an input parameter by those operators.
If the parameters are valid, the operator
create_shape_model_3d returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised. If the parameters are chosen
such that all model views contain too few points, the error 8510 is
raised. In the case that the projected model is bigger than twice the
image size in at least one model view, the error 8910 is raised.
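A sketch of catching these errors in HDevelop, using the placeholders from the earlier sketches; the error codes are the ones documented above:

try
    create_shape_model_3d (ObjectModel3D, CamParam, 0, 0, 0, 'gba', -rad(45), rad(45), -rad(45), rad(45), 0, rad(360), 0.3, 0.4, 10, [], [], ShapeModel3DID)
catch (Exception)
    * Exception[0] contains the error code, e.g., 8510 (too few model points)
    * or 8910 (projected model larger than twice the image size).
    ErrorCode := Exception[0]
endtry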