What Happened After Friday
After the revelation last week about the memory situation with FPSC (old and new), I got to thinking about how the solution might be coded. To that end I started a small prototype which directly compared creating segment objects for a 200x200x20 level using different techniques.
Using a really intense segment object, the CLONE OBJECT method got to around 2500 objects before running out of memory (11x11x20). The current INSTANCE OBJECT method did much better, and I stopped it at 500MB, a nice round number that would yield 200,000 objects and a rather nice 300x300x20. This of course excludes any light mapping or CSG operations, but gives you an idea of the power of instances if we took that approach.
My theoretical third option, called 'Workspace Object', uses the idea that you don't need a whole 'instance object container' to reference the placement of a parent object, but just a few simple pieces of data such as 'parent object index, position, rotation, scale, limb visibility' and then extras such as 'lightmap UVs, CSG operations' and so on. Placing this data on a map grid, the workspace object would generate the mesh data directly in the DirectX buffers when they come into view. My work over the next few days is to finish the prototype along these lines and see what kind of memory activity is produced.
My guess is that this new system will occupy virtually no additional memory beyond the initial reference data of 200x200x20x8x4=25MB which stores the 'where', plus the memory for the parent resources such as original segments, entities and other objects, the textures for all these parents, additional UV data from the light-mapping process, any CSG meshes (to be decided) and other incidentals. We also need memory for the vertex/index buffers, of course, sized according to how large we make the visible rendering area. For the technique to work we also need LOD models for distant objects so the polygon count can be controlled when looking at scenes from a distance. The good news is that apart from the rendering buffer sizes, which will ebb and flow with the content close to the camera, there will be no additional memory cost for painting segments and objects in the scene, which is the holy grail we are after. It's a lot of development, but the basic functionality should be up and running quite quickly.
I asked the community for their thoughts on this subject, and the response was fast and decisive. Within days we had a thread running and users voting their opinions, and the result was, as far as I could make out, 100% YES. I even threw in a scary 2 month delay on the release, and that did not seem to sway them from their decision.
I still have to clear it with the internal team as there are marketing and commercial realities to consider, but I think this is one of those features we have to do properly as it's the bedrock of the whole product. I have a meeting on the 8th May to make a concrete decision on this, but in the meantime I am creating prototypes and technology to work out the total feasibility of implementing this new system.
It has been one of those late-to-rise days, so I will be returning this evening after a spot of food to finish my first 'workspace object' prototype, when I hope to tame a DirectX buffer into generating my segment mesh directly from its parent, and then see how many I can cram in before it starts to wobble. Another bonus of this technique is that you get 'automatic batching' of polygons, pre-sorted into texture and shader lists. This means you don't get a slow-down as you start to pour in more polygons, since a graphics card takes roughly the same time to issue a draw whether the batch holds 15 polygons or 150. I also need to insert a mechanism for object culling and occlusion, but I will be able to look closer at this once I have the buffers up and running.