# Question:

I am using glm to create a camera class, and I am running into some problems with a look-at function. I am using a quaternion to represent the rotation, but I want to use glm's prewritten `lookAt` function to avoid duplicating code. This is my `LookAt` function right now:

```
void Camera::LookAt(float x, float y, float z) {
    glm::mat4 lookMat = glm::lookAt(position, glm::vec3(x, y, z), glm::vec3(0, 1, 0));
    rotation = glm::toQuat(lookMat);
}
```

However, when I call `LookAt(0.0f, 0.0f, 0.0f)`, my camera is not rotated to that point. When I call `glm::eulerAngles(rotation)` after the `LookAt` call, I get a `vec3` with the values (180.0f, 0.0f, 180.0f). `position` is (0.0f, 0.0f, -10.0f), so I should not have any rotation at all to look at (0, 0, 0). This is the function that builds the view matrix:

```
glm::mat4 Camera::GetView() {
    view = glm::toMat4(rotation) * glm::translate(glm::mat4(), position);
    return view;
}
```

Why am I not getting the correct quaternion, and how can I fix my code?

# Answer 1:

**Solution:**

You have to invert the rotation by conjugating the quaternion:

```
using namespace glm;
quat orientation = conjugate(toQuat(lookAt(vecA, vecB, up)));
```

**Explanation:**
The `lookAt` function is a replacement for `gluLookAt`, which is used to construct a *view* matrix.

The view matrix is used to rotate the world around the viewer, and is therefore the inverse of the camera's transform.

By taking the inverse of that inverse, you get the camera's actual transform. Since the quaternion extracted from a `lookAt` matrix is a unit quaternion, its conjugate *is* its inverse, which is why conjugating works and is cheaper than a full inverse.

# Answer 2:

I ran into something similar. The short answer is that your `lookMat` may need to be inverted/transposed, because it is a camera rotation (at least in my case) as opposed to a world rotation; rotating the world is the inverse of rotating the camera.

I have an `m_current_quat`, which is a quaternion that stores the current camera rotation. I debugged the issue by printing out the matrix produced by `glm::lookAt` and comparing it with the matrix that I get by applying `m_current_quat` and a translation by `m_camera_position`. Here is the relevant code for my test.

```
void PrintMatrix(const GLfloat m[16], const string &str)
{
    printf("%s:\n", str.c_str());
    for (int i = 0; i < 4; i++)
    {
        printf("[");
        //for (int j = i*4 + 0; j < i*4 + 4; j++) // row major: 0, 1, 2, 3
        for (int j = i + 0; j < 16; j += 4) // OpenGL is column major by default: 0, 4, 8, 12
        {
            //printf("%d, ", j); // print matrix index
            printf("%.2f, ", m[j]);
        }
        printf("]\n");
    }
    printf("\n");
}

void CameraQuaternion::SetLookAt(glm::vec3 look_at)
{
    m_camera_look_at = look_at;

    // update the initial camera direction and up
    //m_initial_camera_direction = glm::normalize(m_camera_look_at - m_camera_position);
    //glm::vec3 initial_right_vector = glm::cross(m_initial_camera_direction, glm::vec3(0, 1, 0));
    //m_initial_camera_up = glm::cross(initial_right_vector, m_initial_camera_direction);
    m_camera_direction = glm::normalize(m_camera_look_at - m_camera_position);
    glm::vec3 right_vector = glm::cross(m_camera_direction, glm::vec3(0, 1, 0));
    m_camera_up = glm::cross(right_vector, m_camera_direction);

    glm::mat4 lookat_matrix = glm::lookAt(m_camera_position, m_camera_look_at, m_camera_up);

    // Note: m_current_quat stores the camera rotation with respect to camera space.
    // The lookat_matrix produces a transformation for world space, where we rotate
    // the world with the camera at the origin.
    // Our m_current_quat needs to be the inverse, which is accomplished by transposing
    // the lookat_matrix, since the rotation matrix is orthonormal.
    m_current_quat = glm::toQuat(glm::transpose(lookat_matrix));

    // Testing: make sure the model-view matrices after gluLookAt, glm::lookAt,
    // and m_current_quat all agree.
    GLfloat current_model_view_matrix[16];

    // Test 1: gluLookAt
    gluLookAt(m_camera_position.x, m_camera_position.y, m_camera_position.z,
              m_camera_look_at.x, m_camera_look_at.y, m_camera_look_at.z,
              m_camera_up.x, m_camera_up.y, m_camera_up.z);
    glGetFloatv(GL_MODELVIEW_MATRIX, current_model_view_matrix);
    PrintMatrix(current_model_view_matrix, "Model view after gluLookAt");

    // Test 2: glm::lookAt
    lookat_matrix = glm::lookAt(m_camera_position, m_camera_look_at, m_camera_up);
    PrintMatrix(glm::value_ptr(lookat_matrix), "Model view after glm::lookAt");

    // Test 3: m_current_quat
    glLoadIdentity();
    glMultMatrixf(glm::value_ptr(glm::transpose(glm::mat4_cast(m_current_quat))));
    glTranslatef(-m_camera_position.x, -m_camera_position.y, -m_camera_position.z);
    glGetFloatv(GL_MODELVIEW_MATRIX, current_model_view_matrix);
    PrintMatrix(current_model_view_matrix, "Model view after quaternion transform");
}
```

Hope this helps.

# Answer 3:

> I want to use glm's prewritten lookat function to avoid duplicating code.

But it's *not* duplicating code. The matrix that comes out of `glm::lookAt` is just a `mat4`. Going through the conversion from a quaternion to 3 vectors, only so that `glm::lookAt` can convert them back into an orientation, is just a waste of time. You've already done 85% of `lookAt`'s job; just do the rest.

# Answer 4:

You *are* getting the (or better: *a*) correct rotation.

> When I call `glm::eulerAngles(rotation)` after the lookat call, I get a `vec3` with the following values: (180.0f, 0.0f, 180.0f). `position` is (0.0f, 0.0f, -10.0f), so I should not have any rotation at all to look at 0,0,0.

glm is following the conventions of the old fixed-function GL. There, eye space was defined with the camera placed at the origin, `x` pointing to the right, `y` up, and looking in the `-z` direction. Since you want to look in the positive `z` direction, the camera has to turn around. As a human, I would describe that as a rotation of 180 degrees around `y`, but a rotation of 180 degrees around `x` combined with another 180 degrees around `z` has the same effect.

# Answer 5:

When multiplied by the `LookAt` *view matrix*, *world-space vectors* are rotated (**brought**) into the camera's view, while *the camera's orientation is kept in place*.

So an actual rotation of the camera by 45 degrees to the **right** is achieved with a matrix which applies a 45-degree rotation to the **left** to all the world-space vertices.

For a `Camera` object you would need its **local** `forward` and `up` direction vectors in order to calculate a `lookAt` view matrix.

```
viewMatrix = glm::lookAtLH (position, position + camera_forward, camera_up);
```

When using quaternions to store the orientation of an object (be it a camera or anything else), this `rotation` quat is usually used to calculate the vectors which define its *local space* (a left-handed one in the example below):

```
glm::vec3 camera_forward = rotation * glm::vec3(0,0,1); // +Z is forward direction
glm::vec3 camera_right = rotation * glm::vec3(1,0,0); // +X is right direction
glm::vec3 camera_up = rotation * glm::vec3(0,1,0); // +Y is up direction
```

Thus, the world-space directions should be rotated 45 degrees to the **right** in order to reflect the correct orientation of the camera.

This is why the `lookMat`, or the quat obtained from it, cannot be used directly for this purpose: the orientation they describe is the reversed one.

Correct rotation can be done in two ways:

- Calculate the inverse of the `lookAt` matrix and multiply the world-space direction vectors by this rotation matrix.
- *(more efficient)* Convert the `lookAt` matrix into a quaternion and conjugate it instead of applying `glm::inverse`, since the result is a unit quat and for such quats the inverse is equal to the conjugate.

Your `LookAt` should look like this:

```
void Camera::LookAt(float x, float y, float z) {
    glm::mat4 lookMat = glm::lookAt(position, glm::vec3(x, y, z), glm::vec3(0, 1, 0));
    rotation = glm::conjugate(glm::quat_cast(lookMat));
}
```
```