At work I was recently asked why I invert the y-axis when rendering to PBOs, like this:
const glm::mat4 invert = glm::scale(glm::mat4(1.f), glm::vec3(1.f, -1.f, 1.f));
glm::mat4 MVP = invert * P * V * M;
The answer is that the OpenGL screen coordinate system differs from the memory layout commonly assumed for images. A PBO is essentially an "array" on the GPU in which pixels are stored from top to bottom, while OpenGL uses screen coordinates whose origin is in the lower-left corner (images have theirs in the upper left!). Reading the whole PBO directly into an array on the CPU therefore yields a vertically mirrored image. Instead of first rendering "inverted" and then mirroring the result back on the CPU, it is simply faster to flip the y-axis in the MVP matrix in the shaders and receive exactly what you are looking for. That is the theory, at least; whether it is actually faster in practice I did not measure.
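For comparison, the CPU-side alternative (reading the PBO as-is and mirroring it back) could be sketched roughly like this. Plain C++, no OpenGL; the function name and the tightly packed row layout are illustrative assumptions, not code from the project:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Vertically mirror a tightly packed image in place by swapping rows.
// 'pixels' holds 'height' rows of width * channels bytes each; the flip
// converts bottom-to-top row order into top-to-bottom order (and back).
void flipVertically(std::vector<unsigned char>& pixels,
                    std::size_t width, std::size_t height,
                    std::size_t channels)
{
    const std::size_t rowBytes = width * channels;
    for (std::size_t y = 0; y < height / 2; ++y) {
        unsigned char* top    = pixels.data() + y * rowBytes;
        unsigned char* bottom = pixels.data() + (height - 1 - y) * rowBytes;
        std::swap_ranges(top, top + rowBytes, bottom);
    }
}
```

This touches every pixel once on the CPU, which is exactly the extra pass the y-flipped MVP matrix avoids.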