There are basically two things in most 3D graphics formats: vertices and triangles. (Some formats include edges too, but I haven't used one that did, and I'm not bothering with them here - as far as I'm concerned, an edge is just two points on a triangle.) The big thing that sets formats apart is which data lives in which. You can make arguments for basically any arrangement. In our most native format, though, a vertex is a location, a normal vector, and bone weights, while a triangle holds vertex indices, a texture ID, and the texture coordinates to go with it.
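To make that layout concrete, here's a minimal sketch of it. All the names (`Vertex`, `Triangle`, the field names) are hypothetical - this is just the split described above, not the actual file format:

```python
from dataclasses import dataclass

# Hypothetical sketch of the "native format" layout described above.
@dataclass
class Vertex:
    position: tuple      # (x, y, z) location in model space
    normal: tuple        # vertex normal
    bone_weights: dict   # bone index -> weight, for skinning

@dataclass
class Triangle:
    vertex_ids: tuple    # three indices into the vertex list
    texture_id: int      # which texture this face uses
    uvs: tuple           # one (u, v) texture coordinate per corner

# Texture ID and UVs live on the triangle, not the vertex - which is
# exactly what makes the sharing problem below possible.
v = Vertex(position=(0.0, 0.0, 0.0), normal=(0.0, 0.0, 1.0),
           bone_weights={0: 1.0})
t = Triangle(vertex_ids=(0, 1, 2), texture_id=3,
             uvs=((0, 0), (1, 0), (0, 1)))
```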
Here's the problem. Two triangles can share a vertex yet have different texture IDs and different texture coordinates (different sections of texture, basically). And two otherwise "different" vertices can sit at the same physical location but have different normals - even with the same texture and the same texture coordinates, that still makes it a *different* vertex!
I'm trying to come up with a polygon reduction algorithm, and I can't have any tearing in the model (two vertices that should coincide drifting to different locations because the reducer moved them separately). So I have to combine all the vertices that share an actual location, no matter what other differences there might be. But I still have to *preserve* those differences so I can reconstruct them later - and *break the vertices apart again* into the native format.
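The merge-then-split-back idea can be sketched like this. It's not my actual reducer - just the bookkeeping, assuming vertices are dicts with a `"pos"` key plus whatever other data (normal, bone weights) has to survive the round trip:

```python
# Hypothetical sketch: collapse vertices that share a location, but
# remember which originals map to each merged position so the mesh can
# be split back apart into the native format afterward.
def merge_by_position(vertices):
    positions = []   # unique locations (the reducer would work on these)
    index_of = {}    # location -> merged index
    remap = []       # original vertex id -> merged id (for triangles)
    groups = []      # merged id -> original vertex ids (for splitting back)
    for vid, v in enumerate(vertices):
        key = v["pos"]   # real code would round/quantize to kill float fuzz
        if key not in index_of:
            index_of[key] = len(positions)
            positions.append(key)
            groups.append([])
        mid = index_of[key]
        remap.append(mid)
        groups[mid].append(vid)
    return positions, remap, groups

def split_back(vertices, groups):
    # Re-expand each merged position into its original vertices,
    # normals and all other differences intact.
    return [vertices[vid] for group in groups for vid in group]

verts = [
    {"pos": (0, 0, 0), "normal": (0, 0, 1)},
    {"pos": (0, 0, 0), "normal": (0, 1, 0)},  # same spot, different normal
    {"pos": (1, 0, 0), "normal": (0, 0, 1)},
]
positions, remap, groups = merge_by_position(verts)
```

Because the first two vertices share a location, the reducer sees only two positions, moves them together (no tearing), and the stored groups let you rebuild both original vertices - different normals and all - afterward.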
my head hurts.
If anyone's curious as to what a vertex normal is, lemme know, and I'll explain it later :P