This function has something to do with fading out our automap as it reaches the edges of the screen. Its return value is the factor I multiply the alpha by to do the fading. Since it doesn't seem to have any way of determining which way the automap is oriented, logically speaking, its input must have something to do with screen coordinates.
It's worth pointing out at this point that this engine is several years old, and has gone through quite a few changes. Therefore, there are - at the very least - five different coordinate systems that might apply to "screen coordinates", depending on whether the origin is in the upper-left, center, bottom-left, or somewhere a few floors up and about a hundred feet away from the TV, and whether the grid size is post-antialiasing or pre-antialiasing. (I'm not joking about that last origin position, btw - one of the coordinate systems puts the center of the TV at (32768,32768).)
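For concreteness, here's a rough sketch of how those conventions relate to each other. The resolution, function names, and offsets are my assumptions for illustration, not the engine's actual code - the last one reflects the PS2 GS's 12.4 fixed-point units, where one pixel is 16 units and pixel 2048 lands at raw unit 32768:

```cpp
struct Vec2 { int x, y; };

const int W = 640, H = 448;  // assumed framebuffer size

// Upper-left origin -> centered origin
Vec2 topLeftToCenter(Vec2 p)     { return { p.x - W / 2, p.y - H / 2 }; }
// Bottom-left origin -> upper-left origin (flip Y)
Vec2 bottomLeftToTopLeft(Vec2 p) { return { p.x, H - p.y }; }
// The "few floors up" system: 12.4 fixed point, one pixel == 16 units,
// screen centered on GS pixel 2048, so the center of the TV is (32768,32768)
Vec2 topLeftToGs(Vec2 p) {
    return { (p.x - W / 2) * 16 + 32768, (p.y - H / 2) * 16 + 32768 };
}
```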
Looking through the code of this function, not only can't I tell which coordinate system it uses, I can't even tell how it works. Solution: choose a coordinate system at random, generate output in that format, then see what goes wrong.
Except that takes work, and I'd rather not work, so let's search for the function name instead.
alphaScale = scaleIfScissorEdge(borderSize, mapX+x, mapY+y, tx, ty, bx, by);
p.addGifRGBAQ(128, 128, 128, int(autoMapBits[yps][xps] * alphaScale), 0);
p.addGifXYZ2(int(ORIGIN + (mapX+x)*2*MAPZOOM), int(ORIGIN + (mapY+y)*1*MAPZOOM), 10);
alphaScale = scaleIfScissorEdge(borderSize, (mapX+x) + xAxis.x*DENSITY*mapScale, (mapY+y) + xAxis.y*DENSITY*mapScale, tx, ty, bx, by);
Curiously, this is actually somewhat useful. See, addGifXYZ2 uses internal PS2 units. ORIGIN and MAPZOOM convert from one of the other formats into internal PS2 units. (I forget which format though.) Therefore, scaleIfScissorEdge must use something else. Looking through the code (again) yields the brilliant deduction that the first thing it does is to change some (but not all) of its input into a different coordinate system.
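Filling in assumed values, the conversion inside that addGifXYZ2 call presumably amounts to something like this. The constants are guesses - only the arithmetic mirrors the call above:

```cpp
// Assumed values -- the engine's actual ORIGIN and MAPZOOM may differ.
const int ORIGIN = 2048 << 4;  // screen center in 12.4 fixed-point GS units
const int MAPZOOM = 3;         // map-cell-to-pixel zoom factor (a guess)

// Mirrors the addGifXYZ2 call: map coordinate -> internal PS2 units.
// Note the asymmetry from the excerpt: X is scaled by 2, Y by 1.
int mapXToGs(int mapCoord) { return ORIGIN + mapCoord * 2 * MAPZOOM; }
int mapYToGs(int mapCoord) { return ORIGIN + mapCoord * 1 * MAPZOOM; }
```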
At this point I do a global search for this function, and discover that the only places it's used are in my code, disabled-and-soon-to-be-removed code, and another spot that isn't disabled or removed but will probably have to be rewritten anyway. Joy! I can change the function! At this point I set to work trying to get values out of *my* code that are associated with actual screen coordinates.
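Since I'm now free to change the function, a screen-space version of the edge-fade idea might look something like this - a sketch of the concept, not the engine's actual implementation:

```cpp
#include <algorithm>

// Fade alpha to zero within `border` units of the scissor rectangle
// (tx,ty)-(bx,by): returns 1.0 in the interior, 0.0 at or past the edge,
// and a linear ramp in between. All inputs are assumed to be in the SAME
// (screen) coordinate system -- which is exactly the property the real
// function doesn't obviously have.
float scaleIfScissorEdge(float border, float x, float y,
                         float tx, float ty, float bx, float by) {
    float d = std::min(std::min(x - tx, bx - x),
                       std::min(y - ty, by - y));
    if (d <= 0.0f)   return 0.0f;
    if (d >= border) return 1.0f;
    return d / border;
}
```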
This is harder than it seems. See, most people, when drawing a 2d map, have two loops. The outer loop has one coordinate, the inner loop has another coordinate, and there's a bit of code in the middle that draws a square. Most people are not me (and the jury's still out on whether that's a good thing or not). This particular code, in the interests of efficiency, has *four* loops, the inner two of which aren't nested, with a total of four pieces of code that draw squares. In fact none of this code draws squares, it simply sends data to another piece of code which also doesn't draw squares, but rather sends it to an entirely different piece of hardware which tells a third piece of hardware to draw squares. It's all very complex, but runs very quickly.
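For contrast, the two-loop version most people would write - illustrative names, obviously not this engine's code:

```cpp
// The conventional structure: outer loop on one coordinate, inner loop
// on the other, one bit of square-drawing code in the middle.
template <typename DrawFn>
void drawMap(int w, int h, DrawFn drawSquare) {
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            drawSquare(x, y);
}
```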
After spending several minutes writing code that, on command, prints out an enormous amount of debug output and crashes instantly, I realize I've been using the wrong variable this entire time, and produce an entirely different set of inexplicable but blatantly wrong output.
I spend a few minutes looking at it. Some of the numbers seem too high, unless it's using a different coordinate system, in which case they seem too low. Conversely, some of the numbers seem too low (unless it's using the aforementioned different coordinate system, in which case they're too high). On a hunch I invert several variables and it works.
What, you thought this whole programming thing was *logical*?