
One post every weekday, updated at 7:00 a.m. Beijing time.


The Model–View Transform

In a simple OpenGL application, one of the most common transformations takes a model from model space into view space so that it can be rendered. In effect, we move the model first into world space (placing it relative to the world's origin) and from there into view space (placing it relative to the viewer). This process establishes the vantage point of the scene. By default, the point of observation in a perspective projection is at the origin (0, 0, 0), looking down the negative z axis (into the monitor or screen). This point of observation is moved relative to the eye coordinate system to provide a specific vantage point. When the point of observation is located at the origin, as in a perspective projection, objects drawn with positive z values are behind the observer. In an orthographic projection, however, the viewer is assumed to be infinitely far away on the positive z axis and can see everything within the viewing volume.

Because this transform takes vertices from model space (sometimes also known as object space) directly into view space, effectively bypassing world space, it is often referred to as the model–view transform, and the matrix that encodes it is known as the model–view matrix. The model transform essentially places objects into world space. Each object is likely to have its own model transform, which will generally consist of a sequence of scale, rotation, and translation operations. The result of multiplying the positions of vertices in model space by the model transform is a set of positions in world space; this transformation is therefore sometimes called the model–world transform.

The view transform allows you to place the point of observation anywhere you want and look in any direction. Determining the viewing transformation is like placing and pointing a camera at the scene. In the grand scheme of things, you must apply the viewing transformation before any other modeling transformations. The reason is that it appears to move the current working coordinate system with respect to the eye coordinate system; all subsequent transformations then occur based on the newly modified coordinate system. The transform that moves coordinates from world space to view space is sometimes called the world–view transform. Concatenating the model–world and world–view transform matrices by multiplying them together yields the model–view matrix, that is, the matrix that takes coordinates from model space to view space.

There are advantages to doing this. First, there are likely to be many models in your scene and many vertices in each model. Using a single composite transform to move a model into view space is more efficient than moving it first into world space and then into view space. Second, there is the numerical accuracy of single-precision floating-point numbers to consider: the world can be huge, and computation performed in world space will have different precision depending on how far the vertices are from the world origin. If you perform the same calculations in view space, precision instead depends on how far vertices are from the viewer, which is probably what you want: a great deal of precision is applied to objects close to the viewer, at the expense of precision very far away.

The Lookat Matrix

If you have a vantage point at a known location and a thing you want to look at, you will wish to place your virtual camera at that location and then point it in the right direction. To orient the camera correctly, you also need to know which way is up; otherwise, the camera could spin around its forward axis and, even though it would technically still be pointing in the right direction, that is almost certainly not what you want. So, given an origin, a point of interest, and a direction that we consider to be up, we want to construct a sequence of transforms, ideally baked together into a single matrix, representing a rotation that points the camera in the correct direction and a translation that moves the origin to the camera's position. This matrix is known as a lookat matrix, and it can be constructed using only the math covered in this chapter so far.

First, we know that subtracting two positions gives us a vector that would move a point from the first position to the second, and that normalizing the result gives us its direction. So, if we take the coordinates of the point of interest, subtract the position of our camera, and normalize the resulting vector, we have a new vector that represents the direction of view from the camera to the point of interest. We call this the forward vector. Next, we know that taking the cross product of two vectors yields a third vector that is orthogonal (at a right angle) to both inputs.

We have two such vectors: the forward vector we just calculated, and the up vector, which represents the direction we consider to be upward. Taking the cross product of those two vectors results in a third vector that is orthogonal to each of them and points sideways with respect to our camera. We call this the sideways vector. However, the up and forward vectors are not necessarily orthogonal to each other, and we need a third mutually orthogonal vector to construct a rotation matrix. To obtain it, we simply apply the same process again, taking the cross product of the sideways vector and the forward vector to produce a third vector that is orthogonal to both and that represents up with respect to the camera. These three vectors are of unit length and are all orthogonal to one another, so they form a set of orthonormal basis vectors and represent our view frame. Given these three vectors, we can construct a rotation matrix that takes a point in the standard Cartesian basis and moves it into the basis of our camera. In the math that follows, e is the eye (or camera) position, p is the point of interest, and u is the up vector. First, construct our forward vector, f:
f = (p - e) / |p - e|
Next, take the cross product of f and u, and normalize the result, to construct a side vector s:
s = (f × u) / |f × u|
Now, construct a new up vector u′ in our camera's frame of reference:
u′ = s × f
Finally, construct a matrix that combines a rotation into our newly constructed orthonormal basis with a translation that moves the eye e to the origin:

T = [  s.x   s.y   s.z  -s·e  ]
    [ u′.x  u′.y  u′.z  -u′·e ]
    [ -f.x  -f.y  -f.z   f·e  ]
    [   0     0     0     1   ]
This is our lookat matrix, T. If that seems like a lot of steps to you, you're in luck: there's a function in the vmath library that will construct the matrix for you:

template <typename T>
static inline Tmat4<T> lookat(const vecN<T,3>& eye,
                              const vecN<T,3>& center,
                              const vecN<T,3>& up) { ... }
The matrix produced by the vmath::lookat function can be used as the basis for your camera matrix, the matrix that represents the position and orientation of your camera. In other words, it can serve as your view matrix.

That's all for today's translation. See you tomorrow!

For the latest installments, follow the 東漢書院 and 圖形之心 WeChat public accounts.
