vector_to_essential_matrix (Operator)

Name

vector_to_essential_matrix — Compute the essential matrix from image point correspondences and known camera matrices, and reconstruct 3D points.

Signature

vector_to_essential_matrix( : : Rows1, Cols1, Rows2, Cols2, CovRR1, CovRC1, CovCC1, CovRR2, CovRC2, CovCC2, CamMat1, CamMat2, Method : EMatrix, CovEMat, Error, X, Y, Z, CovXYZ)

Herror T_vector_to_essential_matrix(const Htuple Rows1, const Htuple Cols1, const Htuple Rows2, const Htuple Cols2, const Htuple CovRR1, const Htuple CovRC1, const Htuple CovCC1, const Htuple CovRR2, const Htuple CovRC2, const Htuple CovCC2, const Htuple CamMat1, const Htuple CamMat2, const Htuple Method, Htuple* EMatrix, Htuple* CovEMat, Htuple* Error, Htuple* X, Htuple* Y, Htuple* Z, Htuple* CovXYZ)

void VectorToEssentialMatrix(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HTuple& CamMat1, const HTuple& CamMat2, const HTuple& Method, HTuple* EMatrix, HTuple* CovEMat, HTuple* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ)

HHomMat2D HHomMat2D::VectorToEssentialMatrix(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HHomMat2D& CamMat2, const HString& Method, HTuple* CovEMat, HTuple* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ) const

HHomMat2D HHomMat2D::VectorToEssentialMatrix(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HHomMat2D& CamMat2, const HString& Method, HTuple* CovEMat, double* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ) const

HHomMat2D HHomMat2D::VectorToEssentialMatrix(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HHomMat2D& CamMat2, const char* Method, HTuple* CovEMat, double* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ) const

HHomMat2D HHomMat2D::VectorToEssentialMatrix(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HHomMat2D& CamMat2, const wchar_t* Method, HTuple* CovEMat, double* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ) const   (Windows only)

static void HOperatorSet.VectorToEssentialMatrix(HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple covRR1, HTuple covRC1, HTuple covCC1, HTuple covRR2, HTuple covRC2, HTuple covCC2, HTuple camMat1, HTuple camMat2, HTuple method, out HTuple EMatrix, out HTuple covEMat, out HTuple error, out HTuple x, out HTuple y, out HTuple z, out HTuple covXYZ)

HHomMat2D HHomMat2D.VectorToEssentialMatrix(HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple covRR1, HTuple covRC1, HTuple covCC1, HTuple covRR2, HTuple covRC2, HTuple covCC2, HHomMat2D camMat2, string method, out HTuple covEMat, out HTuple error, out HTuple x, out HTuple y, out HTuple z, out HTuple covXYZ)

HHomMat2D HHomMat2D.VectorToEssentialMatrix(HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple covRR1, HTuple covRC1, HTuple covCC1, HTuple covRR2, HTuple covRC2, HTuple covCC2, HHomMat2D camMat2, string method, out HTuple covEMat, out double error, out HTuple x, out HTuple y, out HTuple z, out HTuple covXYZ)

def vector_to_essential_matrix(rows_1: Sequence[Union[float, int]], cols_1: Sequence[Union[float, int]], rows_2: Sequence[Union[float, int]], cols_2: Sequence[Union[float, int]], cov_rr1: Sequence[Union[float, int]], cov_rc1: Sequence[Union[float, int]], cov_cc1: Sequence[Union[float, int]], cov_rr2: Sequence[Union[float, int]], cov_rc2: Sequence[Union[float, int]], cov_cc2: Sequence[Union[float, int]], cam_mat_1: Sequence[Union[float, int]], cam_mat_2: Sequence[Union[float, int]], method: str) -> Tuple[Sequence[float], Sequence[float], Sequence[float], Sequence[float], Sequence[float], Sequence[float], Sequence[float]]

def vector_to_essential_matrix_s(rows_1: Sequence[Union[float, int]], cols_1: Sequence[Union[float, int]], rows_2: Sequence[Union[float, int]], cols_2: Sequence[Union[float, int]], cov_rr1: Sequence[Union[float, int]], cov_rc1: Sequence[Union[float, int]], cov_cc1: Sequence[Union[float, int]], cov_rr2: Sequence[Union[float, int]], cov_rc2: Sequence[Union[float, int]], cov_cc2: Sequence[Union[float, int]], cam_mat_1: Sequence[Union[float, int]], cam_mat_2: Sequence[Union[float, int]], method: str) -> Tuple[Sequence[float], Sequence[float], float, Sequence[float], Sequence[float], Sequence[float], Sequence[float]]

Description

For a stereo configuration with known camera matrices, the geometric relation between the two images is defined by the essential matrix. The operator vector_to_essential_matrix determines the essential matrix EMatrix from, in general, at least six given point correspondences that fulfill the epipolar constraint

    (X2, Y2, 1) * EMatrix * (X1, Y1, 1)^T = 0 ,

where (X1, Y1, 1) and (X2, Y2, 1) are the direction vectors of corresponding points in the first and second image (see below).

The operator vector_to_essential_matrix is designed to deal only with a linear camera model. This is in contrast to the operator vector_to_rel_pose, which also takes lens distortions into account. The internal camera parameters are passed in the arguments CamMat1 and CamMat2, which are 3x3 upper triangular matrices describing an affine transformation. The relation between the vector (X, Y, 1), defining the direction from the camera to the viewed 3D point, and its (projective) 2D image coordinates (col, row, 1) is

    (col, row, 1)^T = CamMat * (X, Y, 1)^T .

The entries of the camera matrix comprise the focal length, two scaling factors, a skew factor, and the principal point. Essentially, these are the elements known from the internal camera parameters as used, for example, in calibrate_cameras. Alternatively, the elements of the camera matrix can be described in a different way; see, e.g., stationary_camera_self_calibration.
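As an illustration only, such an upper-triangular camera matrix could be assembled as in the following Python/NumPy sketch; the exact placement of focal length, scaling factors, skew and principal point shown here is an assumed, conventional layout, not a statement about how HALCON orders these elements internally:

    import numpy as np

    def make_camera_matrix(fx, fy, skew, cx, cy):
        """Illustrative 3x3 upper-triangular camera matrix.

        fx, fy : focal length combined with the two scaling factors (in pixels)
        skew   : skew factor
        cx, cy : principal point (column and row coordinate)
        """
        return np.array([[fx,  skew, cx],
                         [0.0, fy,   cy],
                         [0.0, 0.0,  1.0]])

    # Hypothetical intrinsics of a 640x480 camera:
    cam_mat = make_camera_matrix(fx=1800.0, fy=1800.0, skew=0.0, cx=320.0, cy=240.0)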

The point correspondences (Rows1, Cols1) and (Rows2, Cols2) are typically found by applying the operator match_essential_matrix_ransac. Multiplying the image coordinates by the inverse of the camera matrices results in the 3D direction vectors, which can then be inserted into the epipolar constraint.
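The following NumPy sketch illustrates this normalization and shows that a noise-free correspondence satisfies the epipolar constraint; the camera matrices and the essential matrix are placeholder values chosen for a pure translation along the x-axis, not the output of an actual HALCON call:

    import numpy as np

    cam_mat_1 = np.array([[1800.0, 0.0, 320.0],
                          [0.0, 1800.0, 240.0],
                          [0.0,    0.0,   1.0]])   # placeholder camera matrices
    cam_mat_2 = cam_mat_1.copy()
    E = np.array([[0.0, 0.0,  0.0],
                  [0.0, 0.0, -1.0],
                  [0.0, 1.0,  0.0]])               # essential matrix of a translation along x

    def to_direction(row, col, cam_mat):
        """Map pixel coordinates (row, col) to a direction vector (X, Y, 1)."""
        p = np.linalg.inv(cam_mat) @ np.array([col, row, 1.0])
        return p / p[2]

    # One correspondence (Rows1, Cols1) <-> (Rows2, Cols2); same row, shifted column:
    d1 = to_direction(240.0, 400.0, cam_mat_1)
    d2 = to_direction(240.0, 380.0, cam_mat_2)

    # Epipolar constraint residual; exactly zero only for noise-free data.
    print(d2 @ E @ d1)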

The parameter Method decides whether the relative orientation between the cameras is of a special type and which algorithm is applied for the computation. If Method is either 'normalized_dlt' or 'gold_standard', the relative orientation is arbitrary. Choosing 'trans_normalized_dlt' or 'trans_gold_standard' means that the relative motion between the cameras is a pure translation. The typical application of this special motion case is a single fixed camera looking onto a moving conveyor belt. In this case the minimum number of required point correspondences is just two instead of six in the general case.

The essential matrix is computed by a linear algorithm if 'normalized_dlt' or 'trans_normalized_dlt' is chosen. With 'gold_standard' or 'trans_gold_standard' the algorithm gives a statistically optimal result. Here, 'normalized_dlt' and 'gold_standard' stand for the direct linear transformation and the gold standard algorithm, respectively. All methods return the coordinates (X, Y, Z) of the reconstructed 3D points. The optimal methods also return the covariances of the 3D points in CovXYZ: if n is the number of points, the n 3x3 covariance matrices are concatenated and stored in a tuple of length 9n. Additionally, the optimal methods return the covariance of the essential matrix in CovEMat.
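A small helper such as the following could be used to unpack the concatenated CovXYZ tuple into individual 3x3 matrices; it relies only on the layout stated above (n matrices of nine values each) and assumes the nine values of each matrix are stored consecutively in row-major order:

    import numpy as np

    def unpack_cov_xyz(cov_xyz):
        """Split the length-9n CovXYZ tuple into a list of n 3x3 covariance matrices."""
        values = np.asarray(cov_xyz, dtype=float)
        if values.size % 9 != 0:
            raise ValueError("CovXYZ length must be a multiple of 9")
        return [block.reshape(3, 3) for block in values.reshape(-1, 9)]

    # Example with dummy values for two reconstructed points (tuple of length 18):
    covariances = unpack_cov_xyz(list(range(18)))
    print(covariances[0])   # covariance matrix of the first 3D point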

If one of the gold standard algorithms is chosen, the covariances of the image points (CovRR1, CovRC1, CovCC1, CovRR2, CovRC2, CovCC2) can be incorporated into the computation. They can be provided, for example, by the operator points_foerstner. If the point covariances are unknown, which is the default, empty tuples are passed. In this case the optimization algorithm internally assumes uniform and equal covariances for all points.
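Based on the Python signature listed above, a call with unknown point covariances could look like the sketch below. It assumes the HALCON/Python interface is installed and importable as halcon; the point coordinates are placeholder data consistent with a pure translation along the column axis (in practice they would come from match_essential_matrix_ransac), and the identity camera matrices merely stand in for real calibration results:

    import halcon as ha

    # Placeholder correspondences: same rows, columns shifted by varying disparities.
    rows_1 = [100.0, 150.0, 210.0, 300.0, 420.0, 350.0]
    cols_1 = [200.0, 400.0, 120.0, 360.0, 260.0, 150.0]
    rows_2 = [100.0, 150.0, 210.0, 300.0, 420.0, 350.0]
    cols_2 = [185.0, 390.0, 100.0, 348.0, 240.0, 128.0]

    cam_mat_1 = ha.hom_mat2d_identity()   # stand-ins for calibrated camera matrices
    cam_mat_2 = ha.hom_mat2d_identity()

    # Empty tuples select the default: point covariances unknown.
    e_matrix, cov_e_mat, error, x, y, z, cov_xyz = ha.vector_to_essential_matrix(
        rows_1, cols_1, rows_2, cols_2,
        [], [], [], [], [], [],
        cam_mat_1, cam_mat_2,
        'trans_gold_standard')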

The value Error indicates the overall quality of the optimization process and is the root-mean-square Euclidean distance, in pixels, between the points and their corresponding epipolar lines.

For the operator vector_to_essential_matrix a special configuration of scene points and cameras exists: if all 3D points lie in a single plane and are additionally all closer to one of the two cameras, the solution for the essential matrix is not unique but twofold. As a consequence, both solutions are computed and returned by the operator. In this case all output parameters are of double length, and the values of the second solution are concatenated behind the values of the first one.
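When this twofold case occurs, the outputs can be split in half to recover the two candidate solutions; the helper below assumes the flat tuple outputs of the Python interface and takes the number of input correspondences as a parameter, without assuming the length of a single essential matrix tuple:

    def split_twofold(e_matrix, x, y, z, n_points):
        """Return a list of one or two (EMatrix, X, Y, Z) candidate solutions."""
        if len(x) == n_points:            # unique solution
            return [(e_matrix, x, y, z)]
        half = len(e_matrix) // 2         # twofold solution: all outputs are doubled
        return [
            (e_matrix[:half], x[:n_points], y[:n_points], z[:n_points]),
            (e_matrix[half:], x[n_points:], y[n_points:], z[n_points:]),
        ]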

Execution Information

Parameters

Rows1 (input_control)  number-array (real / integer)

Input points in image 1 (row coordinate).

Restriction: length(Rows1) >= 6 || length(Rows1) >= 2

Cols1 (input_control)  number-array (real / integer)

Input points in image 1 (column coordinate).

Restriction: length(Cols1) == length(Rows1)

Rows2 (input_control)  number-array (real / integer)

Input points in image 2 (row coordinate).

Restriction: length(Rows2) == length(Rows1)

Cols2 (input_control)  number-array (real / integer)

Input points in image 2 (column coordinate).

Restriction: length(Cols2) == length(Rows1)

CovRR1 (input_control)  number-array (real / integer)

Row coordinate variance of the points in image 1.

Default: []

CovRC1 (input_control)  number-array (real / integer)

Covariance of the points in image 1.

Default: []

CovCC1 (input_control)  number-array (real / integer)

Column coordinate variance of the points in image 1.

Default: []

CovRR2 (input_control)  number-array (real / integer)

Row coordinate variance of the points in image 2.

Default: []

CovRC2 (input_control)  number-array (real / integer)

Covariance of the points in image 2.

Default: []

CovCC2 (input_control)  number-array (real / integer)

Column coordinate variance of the points in image 2.

Default: []

CamMat1 (input_control)  hom_mat2d (real / integer)

Camera matrix of the 1st camera.

CamMat2 (input_control)  hom_mat2d (real / integer)

Camera matrix of the 2nd camera.

Method (input_control)  string (string)

Algorithm for the computation of the essential matrix and for special camera orientations.

Default: 'normalized_dlt'

List of values: 'gold_standard', 'normalized_dlt', 'trans_gold_standard', 'trans_normalized_dlt'

EMatrix (output_control)  hom_mat2d (real)

Computed essential matrix.

CovEMat (output_control)  real-array (real)

9x9 covariance matrix of the essential matrix.

Error (output_control)  real(-array) (real)

Root-mean-square of the epipolar distance error.

X (output_control)  real-array (real)

X coordinates of the reconstructed 3D points.

Y (output_control)  real-array (real)

Y coordinates of the reconstructed 3D points.

Z (output_control)  real-array (real)

Z coordinates of the reconstructed 3D points.

CovXYZ (output_control)  real-array (real)

Covariance matrices of the reconstructed 3D points.

Possible Predecessors

match_essential_matrix_ransac

Possible Successors

essential_to_fundamental_matrix

Alternatives

vector_to_rel_pose, vector_to_fundamental_matrix

See also

stationary_camera_self_calibration

References

Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press, Cambridge; 2003.
J. Chris McGlone (editor): “Manual of Photogrammetry”; American Society for Photogrammetry and Remote Sensing; 2004.

Module

3D Metrology