The old texture code was being bottlenecked by texture reads. We mirrored the Real3D texture memory directly, including the mipmaps, in a single large texture. I *think* most h/w has some sort of texture cache for a 2x2 or 4x4 block of pixels per texture. What we were doing was reading the base texture, then reading the mipmap data from a totally separate part of the same texture, which I can only assume flushed this cache. What I did was create mipmap chains for the texture sheet, then copy the mipmap data there. Doing this basically doubles performance.
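Roughly, the fix amounts to giving the texture sheet a real mipmap chain and uploading the decoded mip data into the proper GL levels. Below is a minimal sketch of that idea, not the actual Supermodel code; it assumes GLEW for GL loading and uses made-up names and sizes.

    // Sketch only: allocate a texture sheet with a proper mipmap chain and
    // upload each decoded mip level into its own GL level, instead of packing
    // the mip data into a distant corner of the base level.
    #include <GL/glew.h>
    #include <cstdint>

    GLuint CreateTextureSheet(int width, int height, int mipLevels)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        // One allocation per level (glTexStorage2D would also do, but needs GL 4.2).
        for (int level = 0; level < mipLevels; level++) {
            glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8,
                         width >> level, height >> level, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        }
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, mipLevels - 1);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }

    // Copy decoded mipmap data into the matching level of the sheet.
    // 'x' and 'y' are the region's coordinates at that mip level (placeholders).
    void UploadMipData(int level, int x, int y, int w, int h, const uint32_t* pixels)
    {
        glTexSubImage2D(GL_TEXTURE_2D, level, x, y, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }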
Late Christmas present. Due to the way alpha works on the Model 3, adding regular anti-aliasing doesn't really work. Supersampling is very much a brute-force solution: render the scene at a higher resolution and mipmap it down to the display resolution.
It's enabled via the command line with the -ss option, for example -ss=4 for 4x supersampling, or by adding Supersampling = 4 to the config file.
Note that non-power-of-two values work as well, so 3 gives a very good balance between speed and quality. 8 will make your GPU bleed, since it is essentially rendering 64 pixels for every visible pixel on the screen.
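For reference, the downscale step can be as simple as resolving the supersampled render target to the window. The actual approach described above mipmaps the supersampled image; the single linear blit sketched below (with placeholder names) is a simplified stand-in that shows the basic shape of the idea and also works for non-power-of-two factors.

    // Sketch: render the scene into an FBO that is ssFactor x the window size,
    // then filter it down to the default framebuffer.
    #include <GL/glew.h>

    void ResolveSupersampledFrame(GLuint ssFBO, int winWidth, int winHeight, int ssFactor)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, ssFBO);  // supersampled render target
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);      // the visible backbuffer
        glBlitFramebuffer(0, 0, winWidth * ssFactor, winHeight * ssFactor,
                          0, 0, winWidth, winHeight,
                          GL_COLOR_BUFFER_BIT, GL_LINEAR);
    }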
Some games update the tilegen after the ping_pong bit has flipped at ~66% of the frame, so we need to split the tilegen drawing into two stages to get some effects to work. Having the tilegen draw independently of the 3D chip makes this possible.
The tilegen state is mapped to shader uniforms, and the VRAM and palette are mapped to two textures.
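Something like the following is what that mapping could look like on the C++ side; it is purely illustrative, and the uniform and texture names are invented.

    // Hypothetical binding code: tilegen VRAM and palette exposed to the shader
    // as two textures, per-frame register state passed as plain uniforms.
    #include <GL/glew.h>

    void BindTilegenResources(GLuint program, GLuint vramTexture, GLuint paletteTexture,
                              const GLuint regs[64])
    {
        glUseProgram(program);

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, vramTexture);
        glUniform1i(glGetUniformLocation(program, "vram"), 0);

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, paletteTexture);
        glUniform1i(glGetUniformLocation(program, "palette"), 1);

        // Scroll values, layer enables, etc. as a uniform array.
        glUniform1uiv(glGetUniformLocation(program, "regs"), 64, regs);
    }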
TODO: rip out the redundant code in the tilegen class; we don't need to pre-calculate palettes anymore, etc.
The tilegen code has a start/end line so we can emulate as many lines as we want in a chunk. This will come in handy later, as some games update the tilegen immediately after the ping_pong bit has flipped at ~66% of the frame.
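As an illustration of how a start/end-line interface can be used around the flip; all names and line counts below are placeholders, not Supermodel's actual API.

    // Draw the first ~66% of the frame with the pre-flip tilegen state, then
    // the remaining lines with whatever the game wrote after the flip.
    struct ITileGen {
        virtual void DrawRange(int startLine, int endLine) = 0;  // draw scanlines [start, end)
        virtual void SyncRegisters() = 0;                        // latch post-flip writes
        virtual ~ITileGen() = default;
    };

    void DrawTilegenFrame(ITileGen& tilegen)
    {
        const int totalLines = 384;                    // Model 3 visible resolution is 496x384
        const int flipLine   = (totalLines * 2) / 3;   // ping_pong flips at ~66% of the frame

        tilegen.DrawRange(0, flipLine);                // layers as they were before the flip
        tilegen.SyncRegisters();                       // pick up mid-frame updates
        tilegen.DrawRange(flipLine, totalLines);       // rest of the frame with the new state
    }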
The Scud Race rolling start tilegen bug is probably actually a bug in the original h/w implementation that happens to look correct on original h/w but not for us. Hardware testing is needed to confirm what it's actually doing.
- Add a new, separate class for the crosshair.
- Crosshair coordinates are calculated with a matrix instead of recreating every object at the correct coordinates (see the sketch after this list).
- Add the ability to scale the crosshair by DPI.
- Add the ability to use a bitmap crosshair (located in ./Media/), 32-bit BMP format with alpha.
- Command line "-bitmapcrosshair" or "-vectorcrosshair", and/or BitmapCrosshair=0|1 in the config file.
- These changes only apply to The Lost World with Crosshairs=1|2|3.
Don't forget to copy the two crosshair images into the Supermodel/Media folder.
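The matrix-based positioning mentioned above boils down to drawing one fixed crosshair mesh and moving/scaling it with a per-frame transform. A rough sketch, with invented uniform names and a hard-coded vertex count:

    // Keep a single unit-sized crosshair mesh and place it with a transform,
    // rather than rebuilding its vertices at the aimed position every frame.
    #include <GL/glew.h>

    void DrawCrosshair(GLuint program, GLuint crosshairVAO, float x, float y, float dpiScale)
    {
        // Column-major 4x4: scale by dpiScale, translate to the gun position (in NDC).
        const GLfloat m[16] = {
            dpiScale, 0.0f,     0.0f, 0.0f,
            0.0f,     dpiScale, 0.0f, 0.0f,
            0.0f,     0.0f,     1.0f, 0.0f,
            x,        y,        0.0f, 1.0f,
        };

        glUseProgram(program);
        glUniformMatrix4fv(glGetUniformLocation(program, "transform"), 1, GL_FALSE, m);
        glBindVertexArray(crosshairVAO);
        glDrawArrays(GL_TRIANGLES, 0, 6);   // vertex count depends on the crosshair geometry
    }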
The standard triangle renderer requires GL 4.1 core, so it should work on Mac. The quad renderer requires GL 4.5 core. The legacy renderer should still work; when it's enabled, a regular OpenGL context is created, which allows functions marked deprecated in the core profiles to still work. This will only work on Windows/Linux I think, since Apple doesn't support it.
A GL 4.1 GPU is now the minimum required spec. Sorry if you have an older GPU; GL 4.1 is over 12 years old now.
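For anyone curious what selecting the context type involves, here is a rough sketch assuming SDL2; the attribute values are illustrative, not copied from the actual setup code.

    // Request a 4.1 core profile for the standard renderer, or a compatibility
    // profile for the legacy renderer so functions deprecated in core profiles
    // keep working (Apple does not offer compatibility profiles, hence the
    // Windows/Linux-only caveat above).
    #include <SDL.h>

    void RequestGLContextAttributes(bool legacyRenderer)
    {
        if (legacyRenderer) {
            SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);
        } else {
            SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
            SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
            SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);   // quad renderer would need 4.5
        }
    }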
This is a big update, so I apologise in advance if I accidentally broke something :]
- Added 'crosshairs' command line and config option.
- Added 'vsync' command line and config option (so far only tested on NVIDIA cards on Windows 7 - other graphics drivers, OSes or driver settings may simply choose to ignore this).
- Added fullscreen toggle within game using Alt+Enter key combination.
- Added framework for lamp outputs and 'outputs' command line and config option. So far only the lamps for driving games are hooked up in the emulator (others to be added later); a rough interface sketch follows this list.
- Added an initial outputs implementation for Windows that sends MAMEHooker-compatible messages (-outputs=win to enable).
- Fixed fps calculation in Main.cpp that was producing incorrect results and so giving the impression that frame throttling wasn't working properly when in fact it was.
- Fixed palette indexed colours as the index was always off by one, causing incorrect colours in various games, eg drivers' suits and flashing Start sign in Daytona 2.
- Altered caching of models so that models with palette indexed colours use the dynamic cache rather than the static one. This is so that changes in palette indexed colours appear on screen, eg the flashing Start sign on the advanced course of Daytona 2 (although currently the START message itself is not visible due to other problems with texture decoding).
- Fixed a small bug in TileGen.cpp which meant both palettes were being completely recomputed pretty much every frame. This was a significant performance hit, particularly as palette recomputation is currently done in SyncSnapshots (it should be moved out of there at some point, although for now it's no big deal).
- Made sure all OpenGL objects and resources are deleted in Render2D/3D destructors, in particular the deleting of the VBO buffer in DestroyModelCache.
- Made sure that GLSL uniforms are always checked to see if they are bound before using them, in order to stop unnecessary (but harmless) GL errors.
- Altered the default texture sheet handling to use a single large GL texture holding multiple Model 3 texture sheets rather than multiple GL textures as before (if required, the old behaviour can still be selected with the multisheet fragment shader). I believe this fixes the disappearing crosshairs/corrupt GL state problem which the multisheet fragment shader seemed to be triggering somehow.
- Fixed a bug in the debugger which meant memory watches were not triggering properly.
- Fixed a printf statement in ConsoleDebugger.cpp.
- Fixed memory watches not breaking in CPUDebug.h.
- Added some helpful batch files to the VS2008 directory.
- Added support for automatic loading of parent ROM sets.
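Regarding the outputs framework mentioned above, the general shape is an abstract interface that the game-specific code drives and that platform backends implement. The class and member names below are invented for illustration, not the actual Supermodel API.

    // Purely illustrative outputs interface.
    enum class Lamp { Start, ViewChange, Leader };

    class COutputsSketch {
    public:
        virtual ~COutputsSketch() = default;

        // Called by the emulated hardware whenever a lamp state changes.
        virtual void SetLamp(Lamp lamp, bool on) = 0;
    };

    // A Windows backend (-outputs=win) would forward these changes as
    // MAMEHooker-compatible messages; a null backend simply discards them.
    class CNullOutputs : public COutputsSketch {
    public:
        void SetLamp(Lamp, bool) override { /* outputs disabled */ }
    };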
To achieve this, UploadTextures no longer clears the model cache when called. Instead the cache is kept intact (which should help improve cache hits) and all textures referenced by models being rendered are (re-)decoded every frame.
To help with tracking all the unique texture references contained in a model, a new class TextureRefs has been added.
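Conceptually, such a tracker just needs to collect the unique texture regions referenced during the frame and decode each one once. A simplified stand-in, with invented field and method names, might look like this:

    // Simplified stand-in for the texture-reference tracking idea.
    #include <set>
    #include <tuple>

    class TextureRefsSketch {
    public:
        // Record a reference to a texture region on a given sheet.
        void Add(unsigned sheet, unsigned x, unsigned y, unsigned width, unsigned height)
        {
            m_refs.insert(std::make_tuple(sheet, x, y, width, height));
        }

        // Decode every unique referenced region exactly once per frame.
        template <typename DecodeFn>
        void DecodeAll(DecodeFn decode) const
        {
            for (const auto& r : m_refs)
                decode(std::get<0>(r), std::get<1>(r), std::get<2>(r),
                       std::get<3>(r), std::get<4>(r));
        }

        void Clear() { m_refs.clear(); }

    private:
        std::set<std::tuple<unsigned, unsigned, unsigned, unsigned, unsigned>> m_refs;
    };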