What Happened After Friday
After last week's revelation about the memory situation with FPSC (old and new), I got to thinking about how the solution might be coded. To that end I started a small prototype which directly compared creating segment objects for a 200x200x20 level using different techniques.
Using a really intense segment object, the CLONE OBJECT method got to around 2,500 objects before running out of memory (11x11x20). The current INSTANCE OBJECT method did much better and I stopped it at 500MB, a nice round number, which would yield 200,000 objects and a rather nice 300x300x20. This of course excludes any light mapping or CSG operations, but gives you an idea of the power of instances if we took that approach.
My theoretical third option, called the 'Workspace Object', uses the idea that you don't need a whole 'instance object container' to reference the placement of a parent object, just a few simple pieces of data such as 'parent object index, position, rotation, scale, limb visibility', plus extras such as 'lightmap UVs, CSG operations' and so on. Placing this data on a map grid, the workspace object would generate the mesh data directly in the DirectX buffers when they come into view. My work over the next few days is to finish the prototype along these lines and see what kind of memory activity is produced.
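As a rough sketch of what such a per-cell reference record might look like (the field names here are my own invention, not confirmed internals), eight 4-byte fields cover the 'parent object index, position, rotation, scale, limb visibility' idea with a little room spare:

```cpp
#include <cstdint>

// Hypothetical per-cell reference record for the 'workspace object' idea:
// rather than a full instance container, each grid cell stores only
// where a parent mesh should appear. Eight 4-byte fields give a compact
// 32 bytes per placed cell.
struct WorkspaceRef
{
    int32_t  parentIndex;     // which parent segment/entity mesh to stamp
    float    posX, posY, posZ;
    float    rotY;            // segments mostly rotate about Y only
    float    scale;
    uint32_t limbVisibility;  // bit mask: which limbs of the parent to keep
    uint32_t flags;           // spare: lightmap UV set, CSG marker, etc.
};

static_assert(sizeof(WorkspaceRef) == 32, "8 fields x 4 bytes, no padding");
```

Because every field is 4-byte aligned, the compiler adds no padding and the record stays exactly 32 bytes, which is what makes the grid arithmetic below work out.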
Predicted Results
My guess is that this new system will occupy virtually no additional memory beyond: the initial reference data of 200x200x20x8x4 = 25.6MB, which stores the 'where'; the memory for the parent resources such as original segments, entities and other objects; the textures for all these parents; additional UV data from the light-mapping process; any CSG meshes (to be decided); and other incidentals. We also need memory for the vertex/index buffers of course, which will scale with how large we make the visible rendering area. For the technique to work we also need LOD models for distant objects, so the polygon count can be controlled when looking at scenes from a distance. The good news is that apart from the rendering buffer sizes, which will ebb and flow with the content close to the camera, there will be no additional memory cost for painting segments and objects in the scene, which is the holy grail we are after. It's a lot of development, but the basic functionality should be up and running quite quickly.
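As a quick sanity check on that figure, the 'where' table is straight grid arithmetic, assuming eight four-byte values per cell:

```cpp
#include <cstdint>

// 200 x 200 tiles, 20 layers high, 8 four-byte values per cell.
constexpr uint64_t kCells = 200ULL * 200ULL * 20ULL;  // 800,000 cells
constexpr uint64_t kBytes = kCells * 8ULL * 4ULL;     // 25,600,000 bytes (~25MB)
```

So the fixed cost of the full reference grid is about 25.6MB, paid once up front regardless of how much of the map is actually painted.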
Community Response
I asked the community for their thoughts on this subject, and the response was fast and decisive. Within days we had a thread running and users voting their opinions, and the result was, as far as I could make out, 100% YES. I even threw in a scary two-month delay to the release, and that did not seem to shake their decision.
I still have to clear it with the internal team as there are marketing and commercial realities to consider, but I think this is one of those features we have to do properly as it's the bedrock of the whole product. I have a meeting on the 8th May to make a concrete decision on this, but in the meantime I am creating prototypes and technology to work out the total feasibility of implementing this new system.
Signing Off
It has been one of those late-to-rise days, so I will be returning this evening after a spot of food to finish my first 'workspace object' prototype, where I hope to tame a DirectX buffer into generating my segment mesh directly from its parent, and then see how many I can cram in before it starts to wobble. Another bonus of this technique is that you get 'automatic batching' of polygons, pre-sorted into texture and shader lists. This means you don't get a slow-down as you pour in more polygons, as a graphics card will take roughly the same time to issue a draw call whether it covers 15 polygons or 150. I also need to insert a mechanism for object culling and occlusion, but I will be able to look closer at this once I have the buffers up and running.
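The 'automatic batching' idea can be sketched as a simple grouping pass over placed objects, assuming each reference knows its parent's texture and shader (all names here are hypothetical, not actual engine code):

```cpp
#include <map>
#include <utility>
#include <vector>

// Hypothetical placed-object record: just enough to sort it into a batch.
struct Placed
{
    int parent;     // parent mesh index
    int textureId;  // texture the parent uses
    int shaderId;   // shader the parent uses
};

// Bucket placed objects by (shader, texture) so each bucket can be
// written into one contiguous vertex/index buffer range and drawn
// with a single call, instead of one state change + draw per object.
std::map<std::pair<int, int>, std::vector<int>>
buildBatches(const std::vector<Placed>& scene)
{
    std::map<std::pair<int, int>, std::vector<int>> batches;
    for (int i = 0; i < int(scene.size()); ++i)
        batches[{scene[i].shaderId, scene[i].textureId}].push_back(i);
    return batches;
}
```

Any number of objects sharing a texture/shader pair collapse into one bucket, and iterating the map yields the pre-sorted shader and texture lists mentioned above.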
Why not use a Hash-Grid or a Hierarchical-Hash-Grid instead of a big 25MB table? HGrids are powerful, easy and allow extremely large referencing...
I know about regular hash table variants, but not the 'Hierarchical-Hash-Grid'. Can you send me a link which explains this one? I could create a hash key from the X,Y,Z reference of the tile to render, but it won't be as fast as accessing a three-dimensional array, and performance will be absolutely key to getting the new render polygons loaded into the buffer each time an update is required. That said, if the performance cost is minimal, you could effectively do away with a fixed level size and allow the user to space out the objects all the way to the limits of the 32-bit float coordinate (i.e. thousands of tiles in any direction, even up)!
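The trade-off being discussed can be illustrated with two minimal containers, assuming the grid is addressed by integer tile coordinates (this is a sketch, not the prototype's actual code):

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Fixed-bounds approach: a flat array addressed by (x, y, z) with one
// multiply-add per lookup. Fast, but the level size is baked in and
// every cell costs memory whether occupied or not.
struct FixedGrid
{
    int w = 200, d = 200, h = 20;
    std::vector<int32_t> cells = std::vector<int32_t>(200 * 200 * 20, -1);
    int32_t& at(int x, int y, int z) { return cells[(z * d + y) * w + x]; }
};

// Unbounded approach: a hash map keyed by packed tile coordinates.
// Only occupied cells cost memory and coordinates can range far beyond
// any fixed bound, but every lookup pays for hashing.
struct HashGrid
{
    std::unordered_map<uint64_t, int32_t> cells;
    static uint64_t key(int x, int y, int z)
    {
        // Pack three 21-bit (wrapped) coordinates into 63 bits.
        const uint64_t m = 0x1FFFFF;
        return ((uint64_t(x) & m) << 42) | ((uint64_t(y) & m) << 21)
             | (uint64_t(z) & m);
    }
    void set(int x, int y, int z, int32_t v) { cells[key(x, y, z)] = v; }
};
```

The array lookup is a handful of integer operations; the hash lookup adds a hash computation and a probe, which is the performance cost weighed against losing the fixed level bounds.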
I like the sound of (virtually) infinite worlds very much.
I found them in the book "Real-Time Collision Detection" by C. Ericson. I've also found this one: http://www10.informatik.uni-erlangen.de/~schornbaum/hierarchical_hash_grids.pdf
It was just an idea to save memory. I don't know how critical the performance concerns about this table are.
But I thought that the computation cost of accessing the hash table is balanced by not having to swap to disk or load when the table becomes huge.
From the perspective of saving memory (which I think is often the better option, even if it implies a small computation cost), I imagine a worst-case scenario, for example a level made of a plain with a building on it.
An HGrid can handle this with nearly no memory waste, and has the benefit of not 'bounding' the level.
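To put rough numbers on that plain-plus-building worst case, assuming 32 bytes per cell in the dense table and a pessimistic 64 bytes per occupied entry in a sparse hash grid (both figures are illustrative assumptions):

```cpp
#include <cstdint>

// Dense table: every cell of a 200x200x20 grid costs 32 bytes,
// occupied or not.
constexpr uint64_t kDenseBytes = 200ULL * 200 * 20 * 32;       // 25,600,000

// Sparse hash grid for the plain-plus-building case: one full layer
// of ground tiles plus a 10x10x5 building, at a pessimistic 64 bytes
// per occupied entry (payload plus hash-table overhead).
constexpr uint64_t kOccupied    = 200ULL * 200 + 10 * 10 * 5;  // 40,500 cells
constexpr uint64_t kSparseBytes = kOccupied * 64;              // 2,592,000
```

Even with double the per-entry cost, the sparse version comes out roughly ten times smaller for this kind of mostly-empty level, which is the memory argument for the HGrid.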
Do you have plans to increase the AI's performance? I can have about 15+ normal AI characters in one room with stable fps, but they act weird. With about 7+ Dark AI characters with basic scripts it lags very badly and they just stand there and do nothing. Spawning from trigger zones helps, but there seem to be a lot of memory leaks and it still lags a lot.
I'm sure multicore support would help this but...
(For reference I have a 4-core 2.40 GHz, NVIDIA GeForce GT 650M gaming computer.)
The AI work is down the road a little, but I can reveal that I will be using an updated DarkAI module to drive the behaviours, aiming to solve the critical issue of classic FPSC: namely, that characters stand around in a room waiting to get shot. The new characters will hide, sneak about, snipe you and generally prove difficult to hit. I tried adding 50 characters in a room with FPSC X10, and although it worked just fine the game play was a little stale (there is not much fun in walking into a room and spending 2 minutes chain-gunning everyone to the floor). I am going to aim for 'super smart' AI where, even if it's just one or two enemies, you'll get a good game play experience. The present AI uses scripting entirely and a single-core DarkAI add-on. The new system will use a lot more DarkAI and make full use of threading to spread out the 'thinking' part of the enemy brain. As I say, that's for the future; the present is about memory, performance and a great editing experience :)
That's great!
I also (really) like the sound of virtually infinite worlds, but when I think about it, even the old 40x40 worlds of FPSC Classic were pretty big, so a 200x200 world is by no means small. Still, FPSC has always been about flexibility and any-size worlds will do a lot to continue the tradition.