Daytona 2 PE replaces unused Dreamcast logo textures by filling in zeroed data from polygon RAM. Previously we were reading from the first 4MB of VROM, which is filled with 0xFF values, so the replaced textures incorrectly became white and opaque.
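Roughly, the fix amounts to choosing the right source buffer. A minimal sketch (the names here are invented, not Supermodel's actual code):

    #include <cstdint>

    // Hypothetical sketch: source the replacement texture's zeroed data from
    // polygon RAM rather than the first 4MB of VROM, which reads back as 0xFF
    // and decodes to opaque white.
    const std::uint8_t* ReplacementTexels(const std::uint8_t* polyRAM,
                                          const std::uint8_t* vrom,
                                          std::uint32_t offset, bool fromPolyRAM)
    {
        return fromPolyRAM ? polyRAM + offset   // zeroed data: renders correctly
                           : vrom + offset;     // 0xFF fill: white and opaque
    }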
For some reason, the Model 3 uses vertex coordinates rather than texel coordinates to calculate mipmap levels.
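A hedged sketch of what that implies, assuming the level is a log2 of scaled vertex distance (the exact math is not confirmed):

    #include <cmath>

    // Hypothetical sketch: the LOD is computed per vertex from its transformed
    // position and interpolated, instead of from per-fragment texel
    // derivatives (dFdx/dFdy of the UVs) the way modern GPUs select mips.
    float VertexMipLevel(float distToEye, float lodScale)
    {
        return std::log2(std::fmax(1.0f, distToEye * lodScale));
    }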
Revised microtexture implementation; results are very close to real hardware when running at native resolution with supersampling disabled.
The old texture code was bottlenecked by texture reads. We mirrored the Real3D texture memory directly, including the mipmaps, in a single large texture. I *think* most hardware has some sort of texture cache for a 2x2 or 4x4 block of pixels per texture. We were reading the base texture, then reading the mipmap data from a totally separate part of the same texture, which I can only assume flushed this cache. The fix was to create mipmap chains for the texture sheet, then copy the mipmap data there, as sketched below. Doing this basically doubles performance.
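A sketch of the new layout in standard OpenGL, assuming a 2048x2048 sheet (the size and the decode step are placeholders, not Supermodel's actual code):

    #include <GL/gl.h>

    // Allocate a real mipmap chain for the texture sheet, so mip fetches hit
    // memory near the base level instead of a distant region of level 0.
    GLuint AllocTextureSheet()
    {
        GLuint sheet;
        glGenTextures(1, &sheet);
        glBindTexture(GL_TEXTURE_2D, sheet);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        for (int level = 0, w = 2048, h = 2048; w && h; level++, w >>= 1, h >>= 1)
            glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, w, h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, nullptr);  // allocate each level empty

        return sheet;
    }

    // Later, as textures are decoded out of Real3D memory, each mip is copied
    // into the matching level of the sheet, e.g. (pixels is hypothetical):
    //   glTexSubImage2D(GL_TEXTURE_2D, level, x >> level, y >> level,
    //                   texW >> level, texH >> level,
    //                   GL_RGBA, GL_UNSIGNED_BYTE, pixels);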
vsprintf may modify its va_list argument, so calling it repeatedly with the same va_list is undefined behavior.
Fix this by creating a copy of the va_list (via va_copy) before each vsprintf call.
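A minimal sketch of the pattern (the logging function is illustrative, with vsnprintf standing in for vsprintf):

    #include <cstdarg>
    #include <cstdio>

    void LogTwice(const char* fmt, va_list vl)
    {
        char line[1024];
        va_list copy;

        va_copy(copy, vl);                            // fresh copy for the first call
        std::vsnprintf(line, sizeof(line), fmt, copy);
        va_end(copy);
        std::fputs(line, stdout);

        va_copy(copy, vl);                            // and another for the second
        std::vsnprintf(line, sizeof(line), fmt, copy);
        va_end(copy);
        std::fputs(line, stderr);
    }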
Should stop the sky flashing in lemans24 and the background disappearing entirely in Sega Rally after a game. The logic here is still not fully understood, but it works well enough for these games.
Late Christmas present. Due to the way alpha works on the Model 3, regular anti-aliasing doesn't really work. Supersampling is very much a brute-force solution: render the scene at a higher resolution and mipmap it down.
It's enabled via the command line with the -ss option, for example -ss=4 for 4x supersampling, or by adding Supersampling = 4 to the config file.
Note that non-power-of-two values work as well, so 3 gives a very good balance between speed and quality. 8 will make your GPU bleed, since it essentially renders 64 pixels for every visible pixel on the screen.
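Conceptually the pipeline looks like this (the helper names are hypothetical, not the emulator's API):

    void RenderScene(int w, int h);                          // hypothetical
    void DownsampleToScreen(int sw, int sh, int dw, int dh); // hypothetical

    void RenderSupersampled(int screenW, int screenH, int ss)
    {
        int rtW = screenW * ss;   // cost grows with ss*ss: ss = 4 shades
        int rtH = screenH * ss;   // 16 samples per visible pixel, ss = 8 shades 64
        RenderScene(rtW, rtH);                           // draw at the higher resolution
        DownsampleToScreen(rtW, rtH, screenW, screenH);  // mipmap-style filter down
    }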
The two transparency layers might not be separate layers at all. We believe the hardware writes each layer to every other pixel (stipple alpha), then uses an anti-aliasing filter to effectively blend the pixels. Because the pixels don't overlap, they don't depth test against each other. We use separate layers to emulate this, so the depth buffer must be saved and restored between the layers.
If two translucent polygons with opposing stipple patterns overlap, the result is always opaque.
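A toy model of the pixel ownership (illustrative only, not the emulator's shader code):

    // Each translucent layer owns alternate pixels of a checkerboard, so the
    // two layers never depth-test against each other, and two polygons with
    // opposing patterns together cover every pixel; once the AA filter blends
    // them, the overlap reads as fully opaque.
    bool PixelBelongsToLayer(int x, int y, int layer)
    {
        return ((x + y) & 1) == layer;   // layer 0: even cells, layer 1: odd cells
    }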
The LOD scale calculation also depends on the Euclidean distance over x, y, and z, not just z.
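In code terms, a sketch over view-space vertex coordinates:

    #include <cmath>

    // The distance fed into the LOD calculation is the full Euclidean
    // distance of the vertex from the eye, not merely its depth.
    float DistToEye(float x, float y, float z)
    {
        return std::sqrt(x * x + y * y + z * z);   // not just std::fabs(z)
    }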