TheFNGee This is very coincidental timing for such ideas. I've spent the last month essentially taking a break from complex rendering-engine development and working on top-down aspects, such as creating the fastest, most lightweight, and most versatile deployment system I reasonably can for the next-gen product.
I don't perfectly understand your idea, but it does sound similar to what I've recently been working on.
Before I detail this, let me reassure you that a major design goal of WSV3 Tactical Mesoanalyst is the ability to take full advantage of powerhouse hardware (like yours) when it exists, but also to run reliably and performantly on low-end hardware. The recent development of a 2D map mode is a huge win for this, because I can proceed uninhibited to implement maximal optional graphics functionality in the 3D map to put competent hardware to full use. To this end, I also purchased a Win11 gaming laptop with an NVIDIA RTX 4060 as my testbed.
Back to my understanding of your idea about a "SaaS-based overall distribution" model. What I am trying to do is have the best of both worlds between web browser apps and fast, native, efficient software.
Long story short: in the past week, my efforts to minimize the size of the EXE so that it contains as close to purely rendering code as possible paid off incredibly well. I want the final result to be a surprise, but it is well under 10 MB and automatically downloads lightweight deployment resources, with Firefox-like automatic background updates deferred to the next launch, as follows:
The deployment system is optimized for tactical-grade usage where bandwidth is limited (e.g., storm-chasing laptops on cellular) and where the hassle and slowness of traditional Windows app installation is totally unnecessary.
I have one installerless WSV3.Client.exe which can be run from any file location, even from within a compressed zip file. Upon first launch, it copies itself to the ProgramData folder and seamlessly downloads a few megabytes of initial resources in the background while the user clicks "I agree" on the EULA screen and chooses whether to add a desktop icon - basically masking the remaining few MB of download behind the couple of seconds of initial user interaction. I bet most users will press "launch" before the second screen, which finishes the small extra download, even has to appear. It launches absolutely instantly. I can do a YouTube video showing this soon.
Then, when there's any sort of update, the client loads the new files in the background during one session and drops a subtle, unforced "restart updated" menu item in case the user wants to restart right away. If they click it, the new version launches within 2 seconds on my dev PC. It will be like reloading a web page, but with the speed and efficiency of a native application. If the user doesn't click "restart updated", the next launch automatically uses the new client files.
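The stage-in-background, swap-on-next-launch pattern described above can be sketched in a few lines. This is a minimal illustration in Python, not the actual WSV3 implementation; the directory names ("current", "staged", "previous") and the manifest format are hypothetical, chosen only to show the mechanism:

```python
import json
import shutil
from pathlib import Path

# Hypothetical layout: the running client uses files in "current/", while the
# background updater stages a complete new version in "staged/" and writes
# "manifest.json" last, so its presence marks a finished download.

def promote_staged_update(app_dir: Path) -> str:
    """Promote a fully staged background update, if one exists.

    Returns the version string the client should launch with.
    """
    current = app_dir / "current"
    staged = app_dir / "staged"
    manifest = staged / "manifest.json"

    if manifest.exists():
        info = json.loads(manifest.read_text())
        backup = app_dir / "previous"
        if backup.exists():
            shutil.rmtree(backup)        # discard the oldest rollback copy
        if current.exists():
            current.rename(backup)       # keep the old version for rollback
        staged.rename(current)           # near-instant directory swap
        return info["version"]

    # No staged update: launch whatever is already current.
    return json.loads((current / "manifest.json").read_text())["version"]
```

Because the swap is just two directory renames, it costs almost nothing at launch time, which is what makes the "reload a web page" feel possible for a native app.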
Your "SaaS" model shows how great minds think alike. In late April I committed to the idea of building absolutely everything on server-push technology and doing all of the processing server-side, which:
1) Minimizes CPU/processing overhead on client
2) Lessens bandwidth usage by enabling maximum compression in one place
3) Respects government/public domain resource budgets by centralizing downloads to one place
4) Perhaps most importantly, eliminates all the time delay and bandwidth overhead from the traditional model of client-side polling/processing.
On the last point: since most real-time NWP/radar/satellite datasets are now offered on major cloud providers through NOAA OpenData, and since those providers all offer event-based triggers, new data will flow instantly to the server and then to the client, without anyone doing wasteful, slow, traditional timer-based polling checks.
The client won't even ask the server for new data. It will keep the server updated with its current data selection, download any past time-series data, and then the server will push immediate update frames to every client subscribed to that product. I have implemented a very fast, firewall-friendly client-server WebSocket implementation which uses the conventional HTTP/S ports 80/443 but approximates the speed of a raw TCP socket. All communication will be tiny, compact binary packets - a far cry from the overhead of traditional HTTP/JSON requests and long polling.
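To make the "tiny binary packets" point concrete, here is one possible framing, sketched in Python. The exact field layout (type byte, product ID, payload length) is an assumption for illustration, not the real WSV3 wire format:

```python
import struct

# Hypothetical wire format: a 7-byte little-endian header
#   u8  packet type (e.g. 1 = data frame, 2 = subscription change)
#   u16 product ID
#   u32 payload length
# followed by the raw (already-compressed) payload bytes.
HEADER = struct.Struct("<BHI")

def encode_packet(ptype: int, product_id: int, payload: bytes) -> bytes:
    """Serialize one packet for transmission over the WebSocket."""
    return HEADER.pack(ptype, product_id, len(payload)) + payload

def decode_packet(data: bytes) -> tuple[int, int, bytes]:
    """Parse a received packet back into (type, product ID, payload)."""
    ptype, product_id, length = HEADER.unpack_from(data)
    payload = data[HEADER.size : HEADER.size + length]
    return ptype, product_id, payload
```

Seven bytes of framing per message is a very different cost profile from an HTTP request with headers and a JSON body, which is the whole point of a binary protocol here.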
Let me know if this is similar to your idea of a SaaS-based overall distribution and management model; it sounds like we both came to very similar conclusions on how to optimally handle the data flow - one where code (client-app updates - rendering only, with all data ingest on the server) and actual data are covered by the same immediate-mode abstractions.
Very excited to eventually make full use of your beast of a hardware setup, though I suspect it will mostly only be necessary for volume rendering, 4K output, and/or very long loops. The core experience should actually require fewer system resources than first-gen. I will also respond to your email with some more specific feedback. Thank you for sharing!