eliteo Thanks! Yes, I was originally upset about the evident reality that reconstituting the 3D terrain engine in time for the V0 release wasn't wise or feasible, but now I'm feeling great about the purist focus: having the absolute fastest, most optimized, and mobile-bandwidth-friendly timeseries raster display in original dataset projections, with multiple domains active concurrently, plus the new compute-shader-driven rendering that rasterizes all the layers in one single pass - no more GPU waste from rendering multiple layers "on top of" each other with triangle meshes. The WSV3 Tactical engine is doing basically the 2D equivalent of raytracing: given the per-pixel lat-lon, it loops over the layers top to bottom and early-outs when accumulated alpha reaches 1.0.
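To make the "2D raytracing" idea concrete, here is a minimal CPU sketch (in Python, not the actual compute shader) of the per-pixel top-to-bottom layer loop with the alpha early-out. The layer callables and names here are hypothetical stand-ins, not the engine's real API:

```python
# CPU sketch of the single-pass, per-pixel layer loop described above.
# Each "layer" is a callable sample(lat, lon) -> (grayscale_value, alpha),
# ordered top to bottom, mirroring the shader's loop.

def composite_pixel(lat, lon, layers, alpha_cutoff=1.0):
    """Front-to-back 'over' compositing with an early-out, for one pixel."""
    out_value = 0.0
    out_alpha = 0.0
    for sample in layers:
        value, alpha = sample(lat, lon)
        # Front-to-back 'over': weight this layer by the remaining transparency.
        weight = alpha * (1.0 - out_alpha)
        out_value += value * weight
        out_alpha += weight
        if out_alpha >= alpha_cutoff:
            break  # early-out: pixel is fully opaque, skip all lower layers
    return out_value, out_alpha

# Hypothetical layers: semi-transparent radar over an opaque basemap.
radar = lambda lat, lon: (0.9, 0.5)   # grayscale 0.9, 50% alpha
base  = lambda lat, lon: (0.2, 1.0)   # opaque basemap
value, alpha = composite_pixel(35.0, -97.0, [radar, base])
# value = 0.9*0.5 + 0.2*0.5 = 0.55, alpha = 1.0
```

The early-out is where the single-pass win comes from: fully opaque pixels never touch the lower layers at all, instead of every layer being rasterized as its own triangle-mesh draw.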
I do plan to have nationwide automatic local street mapping with angled text rendering like Google Maps. Not sure if that's in V0 but soon thereafter.
I am not opposed to having some form of 3D volume rendering in the base license. We will have to look at it data-wise; that is mid-to-late 2026. I have always had a dream of doing MRMS national-scale 3D volume rendering - they already have the height-level data. I want a regular $20/month CONUS standard license and a fancy $40/month workstation license that only adds deeper resolution/higher limits for industrial users, with the value packed into the base license. Maybe the base license could have current realtime 3D national MRMS volume rendering and the workstation could have the animated version, since that's getting into territory where you need a better network connection/GPU anyway - maybe it's a Tactical Mesoanalyst vs. Graphics Producer distinction. But I would love to have some form of volume rendering eventually in the base license.
For V0, my goal is to have a basic working version of the RadarHalo Polyradial reflectivity mosaic (multiple L2 sites loaded automatically based on the camera, with original-resolution radial data blended into MRMS at range in an offscreen grayscale pass before the color palette; it's built on 120-radial chunks rather than full sweeps, so realtime websocket pushes update those immediately - similar to LiveScan) plus a two-domain, same-parameter CONUS + MESO1 original-resolution 0.5 km Band 2 satellite imagery combination. The common factor in both is the unique rendering ability to synthesize multiple diverse-projection/domain data of the same parameter into one parameter display in grayscale, before color palette application. Right now, if you have the EGR add-on in first-gen, you can overlay a GOES MESO domain on top of the CONUS, but that's all it is - an overlay - as opposed to actual intelligent rendering that chooses the source values. As such, doing that in first-gen will produce artifacts in transparent areas of palettes like Band 14's. The new engine leverages GPU rasterization to make a per-pixel decision - "what is the highest-resolution and/or most recently updated original-projection source to evaluate the raw grayscale value from?" - before color palette application.
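The per-pixel source-selection step can be sketched like this - again a CPU Python stand-in for the shader, with hypothetical names and fields; in particular the "finer resolution first, then freshest update" ranking is my assumption about a reasonable policy, not the engine's confirmed rule:

```python
# Sketch of per-pixel source selection across same-parameter, diverse-domain
# sources (e.g. 2 km CONUS vs. 0.5 km MESO1), done in grayscale BEFORE the
# color palette so transparent palette ranges can't produce overlay seams.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Source:
    """One same-parameter dataset in its original projection/domain."""
    sample: Callable[[float, float], Optional[float]]  # grayscale, or None outside domain
    resolution_km: float   # native resolution (smaller = finer)
    updated_unix: float    # timestamp of the latest frame

def select_grayscale(lat: float, lon: float, sources: list[Source]) -> Optional[float]:
    """Return the raw grayscale value from the best source covering this pixel.

    'Best' = finest resolution, breaking ties by the most recent update
    (an assumed ranking for illustration).
    """
    best, best_key = None, None
    for src in sources:
        value = src.sample(lat, lon)
        if value is None:
            continue  # pixel is outside this source's domain
        key = (src.resolution_km, -src.updated_unix)  # prefer finer, then newer
        if best_key is None or key < best_key:
            best, best_key = value, key
    return best  # palette lookup happens after this, on the selected grayscale

# Hypothetical setup: a 0.5 km MESO1 sector sitting inside a 2 km CONUS scan.
conus = Source(lambda la, lo: 0.30, resolution_km=2.0, updated_unix=1000.0)
meso1 = Source(lambda la, lo: 0.55 if 30.0 <= la <= 40.0 else None,
               resolution_km=0.5, updated_unix=1005.0)
inside  = select_grayscale(35.0, -97.0, [conus, meso1])  # MESO1 wins: finer
outside = select_grayscale(45.0, -97.0, [conus, meso1])  # falls back to CONUS
```

Because selection happens on the raw grayscale value, the palette is applied exactly once per pixel, which is what avoids the transparent-palette artifacts the first-gen overlay approach produces.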
This is a natural evolution where the GPU is leveraged to remove the "which dataset do I look at?" limitation, seamlessly combining multiple same-parameter, diverse-domain tile pyramids so the user always sees the highest-resolution, most recently updated data at each pixel. It represents a paradigm shift in realtime geospatial raster visualization, made possible by the latent capabilities of even modern integrated graphics chipsets when expertly exploited by a modern, compute-shader-based D3D12 rendering pipeline.