Ok, running the spawnfleet and nuke commands was pretty easy. I tried both in a new vanilla campaign and in my vanilla end-game save. I didn't observe any permanent slowdown.
FYI, here are the launch counts for the commands I used in the new campaign; of course, I also encountered all the existing and naturally spawning Hegemony military fleets in Jangala, plus some pirates.
1 spawnfleet tritachyon 50
1 spawnfleet hegemony 100
11 spawnfleet hegemony 200
89 nuke
(depending on fleet size, 1 to 3 nukes were needed to defeat a fleet)
Maybe there weren't enough battles. Or maybe it can't be reproduced with vanilla only. Or I don't know.
Please note my testing setup was different from my regular gaming setup. When I play the game, I use fullscreen 2560x1440 + sound; while testing this, I used windowed 1920x1080 + no sound. The display is an LCD with a 60 Hz refresh rate. Also, I almost always use the fully zoomed-out view, and I almost always use the speed time mode.
That said,
I've made a few interesting observations looking at fleet activity (and debris) + FPS + idle + CPU usage. Interesting to me, at least, as these are probably obvious to people who understand the game's inner workings.
(I also quickly looked at GPU usage but didn't notice anything; Starsector only uses a fairly constant 50%)
The CPU is a 7600K, so 4 cores at 3.8 GHz. Basically, Starsector does most of its work on a single core, so the percentages indicated below are single-core usage, unless I am badly mistaken.
Whether in a system or in hyperspace, there is a baseline CPU usage. In the new campaign, I had 30% inside Jangala system near the jump point, around 50% in hyperspace near the jump point, and around 60% near Askonia.
That CPU usage increases as fleets and debris are displayed. I got around +5% CPU usage per large fleet, and +10% more when mousing over a fleet. This +5% increase per fleet seems really high.
So I guess the bigger the screen resolution, the more fleets and other objects can potentially be displayed, leading to higher CPU usage. Right? I haven't tried to isolate the CPU usage of hyperspace storms, but I guess these are objects of a kind too, and each has some CPU cost.
So I used batches of "spawnfleet player 200" and watched the result. This way I obtained easily reproducible "slowdowns". My understanding of what happened is:
- a bunch of fleets are spawned at the same time
- all are visible and displayed
- they start moving and doing their thing
- as long as all the fleets are displayed, Starsector can't keep up with the 60 FPS goal
- so Starsector drops to a 30 FPS goal, and I saw idle around 50%
- as fleets leave the display area, Starsector changes its goal back to 60 (or something higher than 30)
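The sequence above could be sketched like this (to be clear: this is purely my guess at the behaviour, not actual Starsector code; the function and names are made up):

```python
# Hypothetical sketch of the frame-pacing behaviour I think I observed.
# Nothing here is real game code; names and thresholds are invented.

def next_fps_goal(current_goal, frame_time_ms):
    """Pick the FPS goal for the next frame based on the last frame's cost."""
    budget_ms = 1000.0 / current_goal
    if frame_time_ms > budget_ms:
        # Can't keep up with the current goal: fall back to 30 FPS.
        return 30
    # Headroom again: jump straight back to 60. This is the part that
    # looked "over-eager" in my tests and would cause 30/60 bouncing.
    return 60

print(next_fps_goal(60, 25.0))  # heavy frame -> 30
print(next_fps_goal(30, 12.0))  # light frame -> straight back to 60
```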
Now maybe I should not call these slowdowns, and I can't tell whether (or how many) people in this thread witnessed the same thing. What I saw was FPS instability. Under the testing conditions, where the player fleet was static and the view stayed static too, the jerky fleet movements were only mildly annoying. But it is precisely this phenomenon that becomes very annoying once the player fleet is moving and there are tons of objects and background elements to display: you don't get smooth scrolling anymore. Which is definitely something you want to avoid in a 2D game with a lot of scrolling, IMO.
Does the game also use an explicit 45 FPS goal, or do you only encounter that when the game transitions from 30 to 60 and from 60 to 30?
It seems the game is over-eager to go back to 60, and I suspect that in many situations this only leads to bouncing back and forth: 30 to 60 to 30 to 60... with all the ugly intermediate FPS values and non-smooth scrolling.
If this analysis is correct, I guess the right thing to do is:
- ensure the game uses either 30 or 60 (avoid intermediate values)
- use a "smart" condition to decide whether to stay at 30 or upgrade to 60, such as (timeElapsedSinceLastChange > $DELAY) && (idle > $FPS_UPGRADE_REQ), where $DELAY is something like 10 seconds and $FPS_UPGRADE_REQ is something like 70%
- (check condition currently used to drop from 60 to 30)
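To make the suggestion concrete, here is a sketch of that "smart" upgrade condition (all names, thresholds, and the downgrade rule are my own guesses for illustration, not anything from the game):

```python
# Hysteresis sketch for the 30/60 FPS goal. The point: stay at 30 until
# we've had enough idle headroom for long enough, so we stop bouncing.

DELAY_S = 10.0          # $DELAY: minimum time since the last goal change
FPS_UPGRADE_REQ = 0.70  # $FPS_UPGRADE_REQ: idle fraction required to go back up

def choose_goal(goal, idle_fraction, time_since_change_s):
    if goal == 30:
        # Upgrade only when BOTH conditions hold.
        if time_since_change_s > DELAY_S and idle_fraction > FPS_UPGRADE_REQ:
            return 60
        return 30
    # At 60: drop as soon as there is no idle headroom left
    # (a placeholder for whatever condition the game really uses).
    if idle_fraction <= 0.0:
        return 30
    return 60

print(choose_goal(30, 0.80, 12.0))  # -> 60: enough idle, waited long enough
print(choose_goal(30, 0.80, 3.0))   # -> 30: enough idle, but too soon
```

With this, a short burst of idle time right after a drop no longer flips the goal back to 60; the game commits to 30 for at least $DELAY seconds.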
Also, if possible, reduce the CPU cost of displaying a fleet?
Another thing to consider would be a "level of detail" mechanism (like LOD in 3D applications with large scenes and many objects), where basically the game would only display the detailed fleet (each ship with its own engine trail) when zoomed in (= pretty mode), and would display a simplified fleet (one ship, or one abstract icon with a single trail) when zoomed out (= performance mode). You get the idea.
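The LOD idea boils down to something like this toy selector (entirely hypothetical, nothing to do with the game's real renderer; the threshold value is invented):

```python
# Toy zoom-based LOD selection: pick a cheap representation when zoomed
# out, the detailed one when zoomed in.

ZOOM_LOD_THRESHOLD = 2.0  # made-up cutoff, as a zoom-out factor

def fleet_render_mode(zoom_out_factor):
    if zoom_out_factor >= ZOOM_LOD_THRESHOLD:
        return "icon"      # performance mode: one sprite, a single trail
    return "detailed"      # pretty mode: every ship with its own engine trail

print(fleet_render_mode(3.0))  # fully zoomed out -> "icon"
print(fleet_render_mode(1.0))  # zoomed in -> "detailed"
```

Since I play fully zoomed out almost all the time, I'd happily take the cheap "icon" mode in exchange for stable scrolling.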
Last-minute tests: I checked and obtained the same behaviour fullscreen at 2560x1440 with sound. Also, with the speed time mode turned off, the problem is less likely to appear (I guess if the game runs slower then it uses less CPU, so more CPU is available to handle peak loads such as a high number of fleets/objects to display). Which means there currently are two performance levers: (1) use a lower resolution to reduce the number of potentially displayed objects (less CPU usage), (2) do not enable speed mode, in order to "increase available CPU".