Suburbazine: This idea speaks to some very clear themes I have been pondering. To the extent possible, I would like to understand the reality and prevalence of such hardware systems so I can pin down the realistic constraints. For example, could network bandwidth be a more pressing limitation than RAM in some contexts?
As I polish the new ultra-fast 3D visibility and multi-resolution terrain/GIS tiling system on which I'll build as much of the dynamic level-of-detail content as possible (with sub-millisecond per-frame gridded quadtree traversal down to 0.125-degree tiles when zoomed in that far), I have been thinking about the very specific data transformations that become possible because the client and server will share the same Win32 GPU-accelerated code base.
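To make the traversal idea concrete, here is a heavily simplified sketch of a gridded quadtree walk that picks tiles by level of detail. The split heuristic, the 32-degree root tiles, and all names are illustrative placeholders for this post, not the actual engine code:

```cpp
// Simplified sketch: depth-first quadtree traversal selecting tiles to render/stream.
// Grid origin/extent handling and the LOD heuristic are deliberately crude here.
#include <cmath>
#include <vector>

struct TileId {
    int level;   // 0 = coarsest; each level halves the tile size in degrees
    int x, y;    // grid coordinates within the level
};

struct Camera {
    double lonDeg, latDeg;  // look-at point
    double altitudeKm;      // used as a crude LOD driver in this sketch
};

// Tile edge length in degrees, assuming illustrative 32-degree root tiles,
// which bottoms out at 0.125 degrees at level 8.
static double tileSizeDeg(int level) { return 32.0 / std::pow(2.0, level); }

// Decide whether a tile should be refined further. A real implementation would
// use projected screen-space error; here we just compare the camera's distance
// from the tile center against a multiple of the tile size.
static bool shouldSplit(const TileId& t, const Camera& cam, int maxLevel) {
    if (t.level >= maxLevel) return false;
    double size = tileSizeDeg(t.level);
    double cx = -180.0 + (t.x + 0.5) * size;
    double cy =  -90.0 + (t.y + 0.5) * size;
    double dist = std::hypot(cam.lonDeg - cx, cam.latDeg - cy);
    return dist < size * 1.5 && cam.altitudeKm < size * 400.0;
}

// Collect the set of tiles to display for the current camera.
static void traverse(const TileId& t, const Camera& cam, int maxLevel,
                     std::vector<TileId>& out) {
    if (!shouldSplit(t, cam, maxLevel)) {
        out.push_back(t);
        return;
    }
    for (int dy = 0; dy < 2; ++dy)
        for (int dx = 0; dx < 2; ++dx)
            traverse({t.level + 1, t.x * 2 + dx, t.y * 2 + dy}, cam, maxLevel, out);
}

// Usage: for each root tile covering the view, call traverse(root, cam, 8, tiles);
```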
I had ideas similar to yours, where the user could select certain 2D bounding-box tiles to receive the highest-resolution data only where it gets heavy. From a bandwidth and processing point of view, visible satellite imagery would benefit from this.
Or there could be a conventional multi-resolution tiling system (the same one used for terrain) applied to dynamic imagery data I deem heavy enough, such as full-resolution visible satellite, where the user's manipulation of the camera position seamlessly drives dynamic downloading of LOD-adjusted imagery content.
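As a rough illustration of that camera-driven streaming idea (all type and function names here are assumptions for the sketch): the tiles wanted for the current view get diffed against what is already resident, and the missing ones are queued coarse-to-fine so a low-resolution placeholder is always available while full-resolution imagery downloads.

```cpp
// Sketch only: build a prioritized download queue from the current view's tile set.
#include <algorithm>
#include <set>
#include <tuple>
#include <vector>

struct TileKey {
    int level, x, y;
    bool operator<(const TileKey& o) const {
        return std::tie(level, x, y) < std::tie(o.level, o.x, o.y);
    }
};

// 'resident' holds tiles already in the RAM/GPU cache; 'wanted' is the set
// produced by the quadtree traversal for the current camera position.
std::vector<TileKey> buildDownloadQueue(const std::set<TileKey>& resident,
                                        const std::vector<TileKey>& wanted) {
    std::vector<TileKey> missing;
    for (const TileKey& t : wanted)
        if (resident.count(t) == 0)
            missing.push_back(t);
    // Coarser levels first: they cover more screen area per byte and act as
    // placeholders while the finer tiles stream in.
    std::sort(missing.begin(), missing.end(),
              [](const TileKey& a, const TileKey& b) { return a.level < b.level; });
    return missing;
}
```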
In terms of RAM usage, things are looking very efficient; however, assuming a constant budget of 1.5 GB or even 2 GB is not something I am trying to avoid, especially given the 64-bit-only nature of the new product and the goal of minimizing power consumption. Please push back against this if there is an independent need for low RAM usage apart from the tactical benefit of minimizing power, which often motivates using more memory in exchange for less processing (https://en.wikipedia.org/wiki/Space%E2%80%93time_tradeoff). For the tactical and performance-related use case, what matters most is battery/power usage, i.e., keeping CPU/GPU processing minimal. RAM allocation itself doesn't draw appreciably more power, and there's always a compute-memory tradeoff where using more memory enables less compute and therefore less power. My modern coding style favors infrequent allocation of large blocks of RAM over frequent, fragmented smaller allocations, even if the latter could amount to less total RAM usage.
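For what I mean by "few large blocks", something along these lines, as a minimal sketch rather than the real allocator (the slot size, free-list scheme, and the 2 GB figure in the comment are illustrative):

```cpp
// Sketch of the "one big slab, many fixed-size slots" allocation pattern:
// a single up-front allocation is carved into tile-sized slots, so steady-state
// streaming does no heap allocation at all.
#include <cstddef>
#include <cstdint>
#include <vector>

class TileSlabAllocator {
public:
    TileSlabAllocator(std::size_t slotBytes, std::size_t slotCount)
        : slotBytes_(slotBytes), storage_(slotBytes * slotCount) {
        freeList_.reserve(slotCount);
        for (std::size_t i = 0; i < slotCount; ++i)
            freeList_.push_back(i);
    }

    // Returns a pointer to a free slot, or nullptr if the slab is exhausted
    // (the caller would then evict a cached tile and retry).
    std::uint8_t* acquire() {
        if (freeList_.empty()) return nullptr;
        std::size_t slot = freeList_.back();
        freeList_.pop_back();
        return storage_.data() + slot * slotBytes_;
    }

    void release(std::uint8_t* p) {
        freeList_.push_back(static_cast<std::size_t>(p - storage_.data()) / slotBytes_);
    }

private:
    std::size_t slotBytes_;
    std::vector<std::uint8_t> storage_;   // one large, long-lived allocation
    std::vector<std::size_t> freeList_;
};

// e.g. TileSlabAllocator slab(512 * 512 * 4, 2048);  // ~2 GB of 512x512 RGBA tiles
```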
Having a larger tile cache minimizes disk I/O and processing when flying the camera. I've actually been working on this precise aspect, i.e., resource streaming for imagery/elevation tile data. Like Google Earth, I think there should be a user-adjustable, fixed-size RAM cache so you can control the usage and performance characteristics. I also have a dynamic terrain tile composition system that does one-time generation of tile data from multiple source datasets; eventually this will need to take vector shapefiles as well, to do things like smoothly fade coastal imagery into ocean and correctly mask the land/ocean distinction, because using elevation for this is absolutely insufficient. This dynamic raster tile compositor has its own fixed-size cache, and 300 MB there is giving great results. The system is intended both for my own use in preprocessing the default mapping data and to be run/customized by users who want to make custom background maps from custom data sources.
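For concreteness, here is a minimal sketch of the kind of user-adjustable, fixed-budget RAM cache I have in mind; the byte accounting and LRU eviction policy shown are assumptions for illustration, not necessarily how the actual cache is implemented:

```cpp
// Sketch: fixed-budget tile cache with least-recently-used eviction.
#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

struct CachedTile { std::vector<unsigned char> pixels; };

class TileCache {
public:
    explicit TileCache(std::size_t budgetBytes) : budgetBytes_(budgetBytes) {}

    // Insert or refresh a tile keyed by an integer id, evicting the least
    // recently used tiles until the configured byte budget is respected again.
    void put(long long key, CachedTile tile) {
        auto it = map_.find(key);
        if (it != map_.end()) {
            usedBytes_ -= it->second->second.pixels.size();
            lru_.erase(it->second);
            map_.erase(it);
        }
        usedBytes_ += tile.pixels.size();
        lru_.emplace_front(key, std::move(tile));
        map_[key] = lru_.begin();
        while (usedBytes_ > budgetBytes_ && !lru_.empty()) {
            usedBytes_ -= lru_.back().second.pixels.size();
            map_.erase(lru_.back().first);
            lru_.pop_back();
        }
    }

    // Returns the tile and marks it most recently used, or nullptr on a miss.
    const CachedTile* get(long long key) {
        auto it = map_.find(key);
        if (it == map_.end()) return nullptr;
        lru_.splice(lru_.begin(), lru_, it->second);
        return &it->second->second;
    }

private:
    std::size_t budgetBytes_;
    std::size_t usedBytes_ = 0;
    std::list<std::pair<long long, CachedTile>> lru_;
    std::unordered_map<long long,
        std::list<std::pair<long long, CachedTile>>::iterator> map_;
};
```

The budget passed to the constructor is exactly the knob a user-facing cache-size setting would adjust.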
I would love to hear your ideas in more detail about the constrained data/view mode or specific data products in this category. I am currently optimizing for low GPU/CPU usage (power) and low bandwidth, but can you point me to hardware specifics of real-world EOC/field-emergency Windows PC platforms whose RAM restrictions are so severe that keeping 2 GB constantly allocated would be suboptimal?
Memory performance is already shaping up very well. I have a very powerful GPU memory management data structure that uses a fixed-size cache and D3D11 2D texture arrays. With less than 300 MB of RAM allocated to it, flying the camera around is lightning fast and all tiles load imperceptibly during the ultra-fast 200 ms camera transitions. I will show this off in the next YouTube video.
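For anyone curious what a fixed-size D3D11 texture-array tile pool can look like, here is a rough sketch under assumed dimensions (512x512 BGRA tiles, 256 slices, roughly 256 MB); slice assignment, eviction, and error handling are omitted, and this is not the actual engine code:

```cpp
// Sketch: one Texture2DArray allocated once; streamed tiles overwrite array
// slices in place, so no GPU allocations happen per frame.
#include <d3d11.h>

// Creates a 256-slice array of 512x512 BGRA tiles.
ID3D11Texture2D* CreateTileArray(ID3D11Device* device) {
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = 512;
    desc.Height = 512;
    desc.MipLevels = 1;
    desc.ArraySize = 256;                          // one slice per cached tile
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D* tex = nullptr;
    device->CreateTexture2D(&desc, nullptr, &tex);
    return tex;
}

// Copies a decoded 512x512 BGRA tile into the given array slice.
void UploadTile(ID3D11DeviceContext* ctx, ID3D11Texture2D* tex,
                UINT slice, const void* pixels) {
    UINT subresource = D3D11CalcSubresource(0 /*mip*/, slice, 1 /*mip levels*/);
    ctx->UpdateSubresource(tex, subresource, nullptr, pixels,
                           512 * 4 /*row pitch*/, 0 /*depth pitch*/);
}
```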
I think your idea is best approached with the following question: what server-side, secondary real-time geospatial data products should be developed to optimize the use case where very high resolution data is needed only over a very limited area of constant interest, and under constrained resources?