I use a piece of code like this in KeymapSwitcher.docky to rescale the flag image shown as a docky (dd->scaleflag is a float value for rescaling the default IMAGE_H/IMAGE_W [those two are #defines]):
AOS4.1/SAM460ex/PPC460EX-1155MHZ/2048MB/RadeonHD6570/SSD120GB/DVDRW :-P
Thank you.
I will try this way.
What about rotating a bitmap?
Rotation requires using vertex arrays. Have a look at the documentation for that. I don't know of any easy example code. Composite3DDemo uses vertex arrays, but the details are buried within the rendering pipeline.
Rotation is best done using matrix algebra. You need to rotate, scale and translate the destination coordinates of your vertices to the pose (i.e., position and orientation) that you want your object to be rendered at. A 3x3 matrix can do 2D rotation, scaling, & translation in one go (and other weird transforms if the bottom row isn't kept to [0, 0, 1]). This page shows you all of the relevant formulae.
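To make the 2D case concrete, here is a rough, self-contained sketch in plain C++ (illustrative names only, nothing to do with the Composite3DDemo classes): it folds "translate to the origin, rotate, translate back" into a single 3x3 matrix and applies it to the destination corners of a quad, while the texture coordinates stay untouched.

#include <cmath>
#include <cstdio>

// Minimal 3x3 matrix and 2D point types, just for illustration.
struct Mat3 { float m[3][3]; };
struct Vec2 { float x, y; };

// Build a matrix that rotates by 'angle' radians around (cx, cy):
// translate(-cx,-cy), rotate, translate(cx,cy), all folded into one 3x3 matrix.
Mat3 rotationAround(float angle, float cx, float cy)
{
    float c = cosf(angle), s = sinf(angle);
    Mat3 r = {{
        { c, -s, cx - c * cx + s * cy },
        { s,  c, cy - s * cx - c * cy },
        { 0,  0, 1                    },
    }};
    return r;
}

// Apply the matrix to a point (using the implicit homogeneous coordinate 1).
Vec2 transform(const Mat3 &m, Vec2 p)
{
    return Vec2{ m.m[0][0] * p.x + m.m[0][1] * p.y + m.m[0][2],
                 m.m[1][0] * p.x + m.m[1][1] * p.y + m.m[1][2] };
}

int main()
{
    // Destination corners of a 64x64 image placed at (100, 100).
    Vec2 quad[4] = { {100, 100}, {164, 100}, {164, 164}, {100, 164} };
    Mat3 rot = rotationAround(45.0f * 3.14159265f / 180.0f, 132.0f, 132.0f);

    for (Vec2 &v : quad) {
        v = transform(rot, v);           // only the destination coordinates change;
        printf("%.1f %.1f\n", v.x, v.y); // the texture coordinates stay as they are
    }
    return 0;
}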
This is where having a set of matrix & vector C++ classes makes life a lot easier. If you look at the Composite3DDemo code, you'll see that I have C3DVector3D & C3DMatrix4x4f classes for handling the basic 3D transformations. Those two classes allow you to do things like:
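C3DMatrix4x4f rotationMatrix = C3DMatrix4x4f::rotationX(0.21); // Rotate by 0.21 radians
C3DVector3D rotated = rotationMatrix * vertex;                 // (illustrative extra line, not from the demo: applying the matrix to some vertex)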
As you can see, the matrix-vector multiplication code looks like a multiplication. That makes following what's going on much easier.
Hans
Join the Kea Campus - upgrade your skills; support my work; enjoy the Amiga corner.
https://keasigmadelta.com/ - see more of my work
Thank you.
Math is not a friend of mine, but I will study...
For information about rotation using vertex arrays, there is some quite usable source code in CompositePOC, http://aminet.net/dev/src/CompositePOC.lha
You will find the rotation code within the "MoveP()" function in the CompositePoc.h file (the source code organization is a bit weird, but it works ;-) ). It shows scaling, too, which is a very simple thing to do. Please note that this code uses a different approach to positioning the objects: it uses x and y axis coordinates (which makes rotation a bit easier).
Just play a bit with this code to understand how it works (I did the same ;-) ).
Edit: It's for 2D gfx, but the principles are the same. For 3D, use the information Hans already gave you. Very useful, too!
Coder Insane-Software
www.insane-software.de
It seems that the CompositePOC rotation is not so precise. Maybe I'm wrong.
Precision should be good enough, as long as you don't aim at extremely high resolutions or extremely big 3D worlds without partitioning. As it is all floats being juggled, precision is as high as the compiler's float math gives.
If you try the program itself you will see that there is no jitter or anything. Rotation is smooth and precise.
There is one thing that isn't very nice, but it's surely driver (or hardware) related. Positive scaling (upsizing) is limited to a factor of about 4.0 (I didn't test the exact factor; for me it's enough to know about it). Negative scaling (downsizing) is no problem at all, and after a sign change (scaling further would lead to a negative size, which is impossible, so the vertices are mirrored) there is no apparent limit to scaling (I scaled the space ship down and back up ;-) to a size bigger than my display). But, as I said, I think this is a problem of my hardware or the driver (SAM440ep, Radeon M9).
Coder Insane-Software
www.insane-software.de
You're right.
I'm getting confused by the clouds that rotate in different ways.
I have to study the code.
I would like to start with a simple image rotation (but I have to find code for this).
It depends on what you mean by "simple rotation". If it should be 2D, study the code of CompositePOC. It's very simple: it just rotates the points of a rectangle. After that, the rectangle is divided into two triangles, but by then the coordinates are already those of a rotated rectangle. Very simple ;-)
For the Compositing engine it doesn't matter where the points (vertices) of a triangle are located, as long as these points result in a closed triangle. If you divide a rectangle into two triangles, the two triangles share some vertices (coordinates). That's why for 2D graphics a rectangle is good enough to describe the triangles the Compositing engine likes to see.
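0____3
|\   |
| \  |
1----2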
(bad drawing ;-) )
For the texture, the reading of the texture data will always start with the upper left coordinate you gave as the texture coordinate (normal mode). This coordinate will be mapped to the first vertex of the first triangle (0 in the "drawing"). The texture reading propagates to the right, and the mapping onto the triangle does that, too. IIRC, it propagates to the fourth vertex (3 in the "drawing"), which is the upper right coordinate of the rectangle (and the last coordinate of the second triangle). This goes for every line of the texture (top - bottom).
There is no rotation math needed for the texture, btw. It's all about vertices and triangles (which form a rectangle in the end). The texture will be "rotated" by the Compositing engine (as it knows where to set the next pixel of the texture by calculating the angle and distance to the vertex on the "right" and to the "bottom").
The same goes for scaling. As the Compositing engine knows that there is a difference between the texture size and the distance between the vertices, it can compute the factor needed for the texture to fit between the vertices. It then just repeats or omits texture pixels. If you don't use filtering, you will see how it is done.
Of course you can combine rotation and scaling (of the rectangle/triangles). The texture will follow. You don't need to calculate like crazy for this ;-)
Get a piece of paper and try to draw what I described here. Maybe this helps with understanding the principles.
Coder Insane-Software
www.insane-software.de
Whose and others,
I have one question about homogeneous coordinates.
I read some compositing examples and I noted that the coder always used 2 triangles to define the image. Why? Is there any benefit in defining three or more triangles?
My second question: are there advantages to using vertices other than for rotations?
@YesCop:
For homogeneous coordinates, Hans is far more competent than me. I live in a 2D games world ;-) As far as I understand it, the homogeneous coordinate is not used by compositing directly, but it's a calculation aid for 3D graphics on 2D displays.
Regarding the triangles, if you need just one rectangle for your compositing task, there is no need for 3 or more triangles. In fact, you don't need any triangle, as Compositing can be used on a source and a destination rectangle. As soon as you handle more than one rectangle, or you want to do perspective gfx, you will need more triangles. This is where vertex mode comes into the game...
Vertex mode is useful in all cases where more than one rectangle (2 triangles) should be displayed in one go. You could use it for e.g. game gfx (2D), moving gfx in general, utilities displaying the contents of more than one screen/window in a downscaled view, and so on.
At first I was sceptical about Compositing, but I discovered the opportunities it offers. It's a really nice tool for game developers, as you can handle several hundred moving objects with it. The runtime profile of Compositing is quite low if it's hardware accelerated. I have a demo here displaying 370 objects on a 1280 x 800 display. All objects are moving constantly, and the CPU load shown by CPUDocky is approx. 10%. On a SAM440ep. Not the fastest machine available, but fast enough to do really nice 2D games ;-)
Coder Insane-Software
www.insane-software.de
@YesCop
The main reason would be to render multiple objects in one render call. For example, you could store the characters of a font in one bitmap, and render an entire sentence (or even an entire paragraph or page) in one go. Or, you could render the tiles for a game map in one go.
Why? It's more efficient that way. There is a certain amount of setup/overhead involved per rendering operation. So, rendering many small objects in one operation using vertex arrays is much faster than rendering each item one by one.
Of course, vertex arrays also allow complex shapes to be rendered efficiently.
You could also warp images in all sorts of different ways, including 3D perspective projection (as was demonstrated in Composite3DDemo).
@whose
Homogeneous coordinates do have their use in 2D, which is why they are included in the compositing API. They come in handy when warping images (e.g., in morphing), of which 3D perspective projection is just one very special case. AFAIK, Hans-Joerg wasn't thinking about 3D projection at all when he designed the API (IIRC, he didn't realise that it was possible until he saw the Composite3DDemo ;-) ).
So, what's their use? Well, imagine that you have rendered a rectangular texture/image on-screen. Next, you grab one of its corners and pull it out, warping the image. 2D linear interpolation can't handle this correctly, so the rendered image will start looking pretty bad. The more that it's warped away from a parallelogram shape, the worse it becomes. Using homogeneous coordinates allows the interpolation of the texture coordinates to match the warping, thereby correctly rendering the warped image.
This may not be of much use in most cases, but I could see it being used in drawing & morphing programs.
I was planning to write a demo & tutorial to show how this is done, but I never found the time.
Hans
P.S., Such "warp" transformations are known as homographies or projective transformations. Beware that information about this topic is usually written by mathematicians, and so can be pretty abstract.
Join the Kea Campus - upgrade your skills; support my work; enjoy the Amiga corner.
https://keasigmadelta.com/ - see more of my work
@whose
Thank you for the info
At the moment I need a little piece of code (outside of an engine context) as an example. CompositePOC is nice, and I will study it...
@AmigaBlitter:
There is some code out there that demonstrates the very simple use of Compositing: putting some source bitmap into a destination bitmap using some operation modes. But that isn't different from the blitting techniques used in earlier days (and I believe you know how to use the classic Amiga blitter and its minterms. Essentially this is how CompositeTags() operates in its simplest mode. It's a "blitter" that is additionally aware of alpha channels). As Hans already said, if you used this very simple mode to display lots of rectangles, the overhead for setting up the hardware and internal structures would outrun the advantage of hardware acceleration at some point. Using the vertex mode can reduce the need for hardware/software setup significantly.
But as soon as vertex mode is used, things get a bit more complicated for YOUR code. A very simple program would use a minimalist engine to show some things. Nonetheless, it's a kind of engine: functions to set up the triangles (vertices), functions to manipulate the triangles via their vertices (rotation, scaling), functions to set up the textures, functions to set up the vertex array, and finally some function calling CompositeTags() to draw all the triangles contained in the array. That's roughly the way CompositePOC works. There is some unnecessary complexity in this code, but the principle I explained above is used in it.
Maybe it needs much more commentary to make clear how it works?
Coder Insane-Software
www.insane-software.de
@Hans:
I see... unfortunately the CompositeTags() documentation is very unclear about this. I don't have the mathematical background to discover the use of homogeneous coordinates for image warping myself, but maybe you could help me a little bit with it? As you said, mathematicians tend to explain things very, ermh, formally ;-), which is not easy to understand if you don't share their knowledge background (which in turn costs much more time to understand what the heck they mean ;-) ).
I just need some hint as to how the homogeneous coordinate and the "warping" effect are connected. Say some coordinate of a texture is moved away from the rectangle shape (e.g., the upper left corner): how does this affect the homogeneous coordinate? As soon as I have understood it, I will set up a quick demo, and maybe then I will understand better how other 2D transformations work...
Coder Insane-Software
www.insane-software.de
I already use compositing.
What I'm missing now is an example for rotation.
Think of a simple image of 64x64 pixels or more.
I'm able to load the image and display it on screen.
Now I need to rotate the image by 45°, for example.
Whose,
As you seem not to understand the mathematical relations between homogeneous coordinates and transformations, I encourage you to read, for example, Wikipedia. I read the French version, which explains it very well. Even if you don't understand French, the matrices are given. In fact, each transformation is done by a matrix. If you understand how the matrices work, that is enough to use them. You don't even need to understand all the mathematical processes.
I have already made some tests with compositing and rectangle shapes, but not with a lot of objects. So I would be interested to see your demo.
Hans,
Thank you for your answer. So, if I understood correctly, I can create one single array containing all the vertices describing different objects taken from a bitmap.
And I can display them in one go. Is that correct?
If so, it is faster than displaying one rectangle after the other.
I have one question for you.
Imagine that I have a fixed background and some moving objects.
What is the best method to render the scene?
I see two. The first is the simpler:
while not done
    display the background, then display the moving objects in order
until done
The second is:
display the background
while not done
    save and copy each region where an object is located
    display the objects
until done
What do you think of these methods? Is there another way to do it?
@YesCop:
I understand the mathematical relations regarding rotation, scaling, translation and so on. The problem is that these transformations use a homogeneous coordinate just as a calculation aid (it's always 1), and you don't need it if you don't use matrix calculations. This changes as soon as you get "perspective", and "warping" is a perspective transformation. That's where the homogeneous coordinate of CompositeTags() steps in (if I understood Hans correctly), and I think it's a different beast there (different from 1.0). And the German Wikipedia article describing e.g. "central projection" is really, really bad for non-mathematicians. Way too formal and badly written, like all the other articles describing the maths of perspective graphics.
My demo is a rather simple one. In essence it's the same as what you did, but with many more triangles ;) Btw., it uses the first technique you're asking about (blitting the background, then displaying the objects). The difference is that the "background" is indeed blitted "in the background", as it is a hidden BitMap (not on display) and the objects are drawn to it, too. This BitMap is then blitted to the display after a WaitTOF(). I could mail you the executable, if you wish.
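Roughly, one frame then looks like this (just a structural sketch with made-up placeholder names, not the actual demo code):

// Structural sketch of one frame, as described above. All names are placeholders;
// the real code uses graphics.library calls (e.g., WaitTOF()) instead of these stubs.
void BlitBackgroundToHiddenBitMap() { /* blit the static background into the off-screen BitMap */ }
void UpdateAndDrawObjects()         { /* move the objects, then draw them into the same hidden BitMap */ }
void WaitForVerticalBlank()         { /* WaitTOF() in the real code */ }
void BlitHiddenBitMapToDisplay()    { /* copy the finished frame to the visible BitMap */ }

void RenderOneFrame()
{
    BlitBackgroundToHiddenBitMap();
    UpdateAndDrawObjects();
    WaitForVerticalBlank();
    BlitHiddenBitMapToDisplay();
}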
The "specialty" of it is, that it uses just one call of CompositeTags() to draw all objects at once (you asked for this). This is rather easy to achieve. Allocate a vertex buffer that is big enough to hold all objects you want to see (each object consisting of 30 float values representing texture coordinates and vertex coordinates), then fill it with the "object" data. The texture information is given by some Init...() function, which intializes the object (texture coordinates, initial destination vertex coordinates). Moving is done by manipulating only the triangle (vertex) data each new frame. Give the buffer address to CompositeTags() and all objects are drawn by one call per frame. The "big secret" is the manipulation of object data and how you handle all the data in a fast way.
It's even easier to do parallax displays, as was "mandatory" in the good ol' times. Just fill the vertex buffer beginning with the "back" objects and work through to the "front" objects. If those object "layers" get different movement speeds: voilà, parallax scrolling ;-) The advantage of using Compositing for this is that there is no speed penalty depending on the number of "layers". It's just ordering of the objects and careful adjustment of movement speed differences to achieve the parallax effect. For this, you could use one specialized call to CompositeTags() per layer or use fixed "slots" within the vertex buffer (that's what I do in the demo).
Edit: dang... I just saw the link in Hans' first post... I must have been blind that day :-/ It's way better than Wikipedia, well explained. But it's missing "warping" :-(
Coder Insane-Software
www.insane-software.de
@YesCop
Which is best depends on the scene. If there are just a few small objects moving on a large screen, then the second option is likely to be most efficient. If there are lots of things moving, then you might as well just render the whole scene including the background every time.
Personally, I'd always use the first method. That's partially because I'm likely to create something with lots of stuff moving, but also because it's easier. Modern graphics cards are more than fast enough to handle it, so I can't think of a situation in which you'd truly need the second method.
Hans
Join the Kea Campus - upgrade your skills; support my work; enjoy the Amiga corner.
https://keasigmadelta.com/ - see more of my work
@whose
When you render a texture/image to screen, you are implicitly mapping pixels in screen space back to points in texture space. With affine transformations, this mapping is described by the following equation:
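| x |   | a b c |   | s |
| y | = | d e f | * | t |
| 1 |   | 0 0 1 |   | 1 |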
You may not have explicitly created this matrix, but it's there implicitly in your vertex data. If you look at the equation above, you'll see that the graphics card can calculate values of the texture coordinates s & t between vertices using 2D linear interpolation; that third row can be ignored.
However, when you warp your quad, then the equation above can't describe the mapping. Now, you are using a full homography:
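| x*z |   | a b c |   | s |
| y*z | = | d e f | * | t |
|  z  |   | g h i |   | 1 |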
I've written it in this odd way, because you get the final x & y coordinates by dividing them by the 3rd element z (i.e., converting homogeneous coordinates back to 2D coordinates). This notation should give you a clue as to how to calculate the third texture coordinate... Yes, divide everything by z:
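| x |   | a b c |   | s/z |
| y | = | d e f | * | t/z |
| 1 |   | g h i |   | 1/z |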
This is exactly what Composite3DDemo does.
So, how do you find z for each vertex? You need to have the 3x3 matrix that maps the texture coordinate for each vertex in your quad to its respective destination coordinate on-screen. When you're doing set transformations such as 3D projection, then you already have that matrix. If you're arbitrarily moving vertices, then you're going to have to calculate that matrix based on the vertices. Yes, it's linear algebra time! The matrix above is defined for any set of 4 vertices (i.e., any quad). Plug 4 vertices into the equation above, and you have 12 equations with 13 unknowns (9 unknowns in the matrix, and one z for each vertex: z1, z2, z3, z4). You need one more equation to be able to solve for the matrix. It can be as simple as z1 = 1, or something similar (z1 + z2 + z3 + z4 = 4 works just as well, and would probably be better for floating-point precision).
If you are familiar with C++, then I recommend using the Eigen library to do the linear algebra for you. I'm not sure what linear solvers are available for C.
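If it helps, here is a rough sketch of how that could look with Eigen (an illustration only; the helper name is made up, and it fixes the bottom-right matrix element to 1 as the one extra constraint, which works just as well as long as that element isn't zero):

#include <Eigen/Dense>

// Solve for the 3x3 matrix that maps the 4 texture coords (s,t) of a quad to its
// 4 destination coords (x,y), using the homography equations above:
//   x*z = a*s + b*t + c,   y*z = d*s + e*t + f,   z = g*s + h*t + i
// Fixing i = 1 leaves 8 unknowns, and the 4 vertices give 8 equations.
Eigen::Matrix3d solveQuadHomography(const double s[4], const double t[4],
                                    const double x[4], const double y[4])
{
    Eigen::Matrix<double, 8, 8> A;
    Eigen::Matrix<double, 8, 1> b;
    for (int i = 0; i < 4; ++i) {
        A.row(2 * i)     << s[i], t[i], 1.0, 0.0, 0.0, 0.0, -x[i] * s[i], -x[i] * t[i];
        A.row(2 * i + 1) << 0.0, 0.0, 0.0, s[i], t[i], 1.0, -y[i] * s[i], -y[i] * t[i];
        b(2 * i)     = x[i];
        b(2 * i + 1) = y[i];
    }
    Eigen::Matrix<double, 8, 1> h = A.colPivHouseholderQr().solve(b);

    Eigen::Matrix3d H;
    H << h(0), h(1), h(2),
         h(3), h(4), h(5),
         h(6), h(7), 1.0;
    return H;
}

// The per-vertex z (the homogeneous texture coordinate) is then the third row
// applied to that vertex's (s, t): z = H(2,0)*s + H(2,1)*t + H(2,2).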
I hope that this clarifies the link between homogeneous coordinates and image warping for you.
Hans
Join the Kea Campus - upgrade your skills; support my work; enjoy the Amiga corner.
https://keasigmadelta.com/ - see more of my work
Thank you very much, Hans!
Although I didn't get it fully, it's a good starting point for comprehending the mathematics behind homogeneous coordinates and warping. I think that, together with your demo source code, I should get it :-)
Coder Insane-Software
www.insane-software.de
Thanks whose and Hans for your answers.
Hans, I must admit that I didn't understand your last post. Maybe it is too late. I will reread it for sure.