Fractal Softworks Forum

Starsector => General Discussion => Topic started by: Lucky Mushroom on August 10, 2020, 01:45:09 PM

Title: AI, something like AlfaStar in StarCraft II.
Post by: Lucky Mushroom on August 10, 2020, 01:45:09 PM
(My English isn't in pristine condition and sorry for any mistakes)

What do you think about a mod or a complete change of the AI in the game? I don't mean algorithms, but something like a neural network. This could be very interesting and would turn any battle into a hard challenge, especially in the late game. I know this is very hard to achieve, and I don't know if it's even achievable. But the effect would be amazing.

What do you think about this idea? Is this a dream, or something to think about?
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: Terethall on August 10, 2020, 06:15:19 PM
If you donated Fractal Softworks the ~$13m that replicating AlphaStar would likely cost to create in compute alone, I'm sure they'd at least consider it. Unfortunately players would need more than a GTX 960 to play StarSector if they wanted to run that AI in a single-player, standalone application. The whole game would have to be rebuilt and operate in an always-online subscription model where the AI can be run on an external server and networked in, and players could pay for the compute used by the AI when they battle it, so I'd throw in another $10m for the setup/rebuild costs. Plus a subscription cost of $32/hr during battles.

https://medium.com/swlh/deepmind-achieved-starcraft-ii-grandmaster-level-but-at-what-cost-32891dd990e4

But the real reason it isn't feasible? Hegemony inspectors would not approve.   :P

Edit: Also, the AI is the most impressive aspect of the game, considering the whole thing is written in Java by essentially a single person. Why change the best part? $13m would go a much longer way, spent on story content, art assets, QoL features, etc...
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: intrinsic_parity on August 10, 2020, 07:42:13 PM
Neural nets are actually very computationally inexpensive to run (that's one of the main benefits in a lot of applications). It is the training process that is very computationally expensive. Even then, it's more because you have to run simulations a million times, rather than because one simulation is overly expensive to run. Once the network is trained, any computer could run it.
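
A quick sketch of why inference is cheap (all the sizes here are hypothetical, not Starsector's actual state representation): once trained, a small policy network is just a couple of matrix multiplies per decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for a tiny policy network: 64 state features
# (positions, flux, hull, etc.) -> 32 hidden units -> 8 actions.
W1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 8)), np.zeros(8)

def policy(state):
    """One inference pass: two matrix multiplies and a ReLU."""
    hidden = np.maximum(0.0, state @ W1 + b1)
    return hidden @ W2 + b2  # one score per candidate action

scores = policy(rng.normal(size=64))
print(scores.shape)  # a few thousand flops: trivial for any CPU
```

Training is expensive precisely because this cheap pass has to be repeated across millions of simulated battles while the weights are adjusted.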

For more evidence, look at Leela Chess Zero, the open-source chess neural network. Anyone can download the current version of the network and run it with little difficulty; in fact, that is how Leela is trained: people just let Leela play against itself on their computers for millions of games (basically crowd-sourced GPU time). If you added up all the time people have run Leela on their computers, you would probably get some absurdly large figure like the one in the article, but you don't need to achieve that by renting a supercomputer for a week for millions of dollars. Leela has won the computer chess championship against the best conventional chess engines, and it's thought that Leela is currently stronger than AlphaZero was when Google was working on it.

It's difficult to say, but I don't think the game state/decision space of Starsector is much more complicated than StarCraft's (probably less complicated, actually), so it's probably possible. I'm not sure the architecture of the game is really set up to be run thousands or millions of times efficiently without human intervention, though. It would also matter a lot what information is available for the AI to make decisions, and how easily that information could be accessed/stored/saved/used, and I doubt that could be set up without some significant dev support.

TBH, I think it's possible, but not something that could be done without a team of really dedicated and knowledgeable people (like nearly full-time commitment, not a few-hours-a-week hobby-type commitment), plus some crowd-sourced community support and developer support, which isn't really possible atm.
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: SCC on August 12, 2020, 01:24:36 PM
In StarCraft, there is one neural network per player. In Starsector, due to its design, there would be many more agents, although they would be much simpler. Maybe. The SS AI has to be capable of handling asymmetrical fights and many, many potential loadouts and battle situations. I wonder if running 30-40 simple neural networks would be doable on an average PC.
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: Thaago on August 12, 2020, 01:39:09 PM
If the neural networks are relatively simple, then sure, a standard PC can run 30-40 of them at a time, especially because most AI in SS only needs to 'tick' every half second or so. Even better, they are inherently easy to parallelize, so they could leverage many cores or even the GPU.
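
To put rough numbers on it (all hypothetical): if the agents share one small network, evaluating all 30-40 of them per tick is a single batched matrix multiply.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical numbers: 40 ships, each observing 64 state features,
# sharing one small policy network (64 -> 32 hidden -> 8 actions).
W1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 8)), np.zeros(8)

states = rng.normal(size=(40, 64))  # one row per ship

# A single batched forward pass evaluates every agent at once; the
# matrix multiplies vectorize across cores (or a GPU) for free.
hidden = np.maximum(0.0, states @ W1 + b1)
actions = (hidden @ W2 + b2).argmax(axis=1)  # chosen action per ship
print(actions.shape)
```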

I'm with intrinsic_parity: the bigger issue is training, because it can be a bit difficult to tell if an individual action is the "correct" one, or at least good enough. If a ship takes an action and then dies 7 seconds later, was it because of that action? Or something subsequent? Alternatively, the whole network can be trained at once based on the outcome of the fight, but in that case it only gets one training sample per combat, which is very slow.
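
For what it's worth, the standard reinforcement-learning answer to the "7 seconds later" problem is discounted returns: spread the eventual outcome backwards over the actions that preceded it. A minimal sketch (the rewards and discount factor are made up):

```python
def discounted_returns(rewards, gamma=0.99):
    """Propagate a delayed outcome back to earlier actions.

    Each step's return is its own reward plus a discounted share of
    everything that happened afterwards, so an action taken seconds
    before a ship dies still receives some (diluted) blame.
    """
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# A ship acts for 5 ticks, then dies (reward -1 on the final tick);
# earlier ticks get progressively smaller shares of the blame.
print(discounted_returns([0, 0, 0, 0, -1.0], gamma=0.9))
```

This doesn't resolve the ambiguity Thaago describes, it just smears credit over time; figuring out which action really caused the death is still the hard part.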
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: Alex on August 12, 2020, 01:54:38 PM
Something else to consider: with a neural net, you don't really know why it does any one thing it does. So, for example... say there's a relatively innocuous balance change. And in the meantime, the neural net decided that "hull == 3000" is a useful proxy for a ship having some other properties. And changing it to 3001 could, conceivably, break the whole thing horribly.

More realistically, what happens when there's a new ship? A new ship system? New weapons? I think the ability to handle mods would be... questionable.

(This is all assuming an optimal-or-close-to-it AI would be fun, but that's another question.)
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: Nafensoriel on August 12, 2020, 03:17:18 PM
Something else to consider: with a neural net, you don't really know why it does any one thing it does. So, for example... say there's a relatively innocuous balance change. And in the meantime, the neural net decided that "hull == 3000" is a useful proxy for a ship having some other properties. And changing it to 3001 could, conceivably, break the whole thing horribly.

More realistically, what happens when there's a new ship? A new ship system? New weapons? I think the ability to handle mods would be... questionable.

(This is all assuming an optimal-or-close-to-it AI would be fun, but that's another question.)
To be fair... humans express exactly the same level of randomness and lack of logic when looked at as a massed whole. Where a 20/20 hindsight review of choices might reveal a far more logical and efficient path, you will be hard-pressed to find an actual human in a navigation situation who will pick anything but the path of least perceived resistance, even when the path of actual least total effort is easily visible.

On the note of using a "smarter" AI... I don't think it would really be what people want. A fun AI sounds far more interesting. A smart AI is going to use the character better than you ever do, because (from a physical standpoint) it is trained to actually be whatever its input is. It doesn't have the multiple points of subconscious translation humans do, where we have to train our fingers to use peripherals to engage another layer of software/hardware to do what we want, in a field that is 2D and often representing a 3D environment. When I watch someone else game, I see people get frustrated most by AI that can play the game better than they do. Considering most players want to play a game for challenge and fun, an AI that is just slightly worse than the person playing would be the ideal target.

Not sure if NN systems can replicate "slightly more terrible than me", though. I guess they could, since you are training them for a result, but man, I wouldn't want to teach it.
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: Megas on August 12, 2020, 03:45:38 PM
At least in fighting games, perfect play AI (in whatever form) is highly annoying.  Cannot use the fun flashy moves because AI will generally abuse low-risk moves and rarely leave an opening or will escape throws if caught.  It often boils down to a poke-fest or abusing AI breakers to win.
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: MrDaddyPants on August 12, 2020, 04:42:17 PM
It would be immensely fun.

Especially if the reward function valued enemy losses above all. Not only would you lose the battle every time you manually control a single ship in a scenario where forces are even-ish, because you are an absolutely inadequate monkey unable to behave the way all your other ships expect you to, not executing the best possible set of actions in a given scenario. Losing ships every time, even when you have superior forces, would just add an extra level of fun xD.

It wouldn't be that hard or expensive to train for vanilla combat. Of course, every mod, every ship, and every weapon adds to the training time, because the neural network needs to run a million matches using that ship and weapon. But it would be nothing but a minor inconvenience for the modders. After all, spewing a couple of pixels in Photoshop and editing a line in a csv file is pretty much the same as running a neural network. ;D
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: intrinsic_parity on August 12, 2020, 05:50:08 PM
It's worth noting that while the DeepMind chess and Go neural networks were pure reinforcement learning, meaning they started with no knowledge of the game and were trained to maximize the chance of winning, the StarCraft neural network was partially trained by supervised learning, which means it was initially attempting to replicate a human's gameplay rather than trying to 'win'. They used that human gameplay as a jumping-off point for reinforcement learning because they weren't able to start from scratch successfully. For supervised learning, you actually have human gameplay (the information the player had and the game inputs they made in response) and you're trying to make a network that will respond in the same way. For that you need a person to play a ton of different scenarios, and you're just trying to copy what they do (so you can only be as 'optimal' as they were). After thinking about it a bit more, I think Starsector is actually a lot simpler than StarCraft in terms of the set of possible inputs and outputs, but I'm not sure if pure reinforcement learning would actually work, or if you would encounter the same problems they did with StarCraft, necessitating some supervised learning. With StarCraft, I don't think they ever got it to the point where it was 'optimal' and would crush any human opponent like a chess engine; it was more like a very strong human player.
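
The supervised half ("behavior cloning") is conceptually simple: fit a policy to logged (state, human action) pairs. A toy sketch with a linear policy and synthetic data standing in for the human logs (everything here is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical logs: 256 game states (16 features each) and which of
# 4 actions the "human" chose in each one.
n_features, n_actions, n_samples = 16, 4, 256
states = rng.normal(size=(n_samples, n_features))

# Stand-in for the human player: a fixed rule unknown to the learner.
W_human = rng.normal(size=(n_features, n_actions))
actions = (states @ W_human).argmax(axis=1)

W = np.zeros((n_features, n_actions))  # the imitator's weights
for _ in range(200):                   # plain gradient descent on cross-entropy
    logits = states @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(n_samples), actions] -= 1.0   # dLoss/dlogits
    W -= 0.5 * states.T @ p / n_samples

agreement = ((states @ W).argmax(axis=1) == actions).mean()
print(agreement)  # the clone now mostly mirrors the "human's" choices
```

As intrinsic_parity says, a policy trained this way can only ever be about as good as the play it imitates; reinforcement learning has to take over from there.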

I'm also inclined to agree that it could be very difficult to span the space of possible scenarios that a ship could encounter with training, regardless of whether that's self play, or player data. It would hopefully be possible that small stat changes wouldn't matter that much because the actions that you take aren't really different if the ship has slightly more hull or a slightly different load out, but things that are categorically different like new ship systems or weapons would definitely require additional training IMO. TBH, I think that the weapons and systems on enemy ships would be an input to the network, in which case unknown weapons would be impossible to deal with.

Debugging would become easy though because the answer is always just 'retrain' ;D, although that might take a while lol.

I'm pretty sure this is the sort of thing that a team of dedicated researchers might attempt over a year or two, not something that Alex or a curious modder would do as a serious attempt at creating an AI for the actual game.
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: MrDaddyPants on August 12, 2020, 06:08:31 PM
Combat in SS is nowhere near as complex as StarCraft. You don't build and collect resources. You just move and shoot.

Also, for most of these AIs (StarCraft, Dota OpenAI, etc.) the goal is to beat the player by playing "smartly", so there are extreme limits on actions per minute. StarCraft pro players have the advantage that they get to spam actions. That's the only reason human players can have a somewhat competitive game with the AI.

Good AI doesn't have to be machine learning. The AI in Age of Empires II DE (or whatever it's called) is also very, very good.

Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: Alex on August 12, 2020, 06:34:50 PM
It would hopefully be possible that small stat changes wouldn't matter that much because the actions that you take aren't really different if the ship has slightly more hull or a slightly different load out

An actual example: a neural net trained to tell apart American and Russian tanks actually learned to tell apart low and high quality photos, since the photos of Russian tanks were all low-quality and taken under less-than-ideal conditions. Point being, it may learn to do something, but the *why* is extremely iffy. That's why I'm saying a completely unexpected thing could turn out to in fact be critical. Like, for example, something having exactly 3000 hull, or whatever other random bit of info that's obviously non-critical to a human but a neural net might fixate on for reasons.
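
That failure mode is easy to reproduce in miniature. In this toy sketch (all data synthetic, the "tank" features are invented), a linear classifier given a weak intended feature and a strong collection artifact latches onto the artifact:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy version of the tank anecdote. Label: American (0) vs Russian (1).
# Feature 0 is the intended cue (tank shape), but it's noisy; feature 1
# is photo brightness, which tracks the label almost perfectly as a
# pure artifact of how the two sets of photos were collected.
n = 1000
label = rng.integers(0, 2, size=n).astype(float)
shape_cue = label + rng.normal(0, 2.0, n)    # weak, intended signal
brightness = label + rng.normal(0, 0.1, n)   # strong confound

X = np.column_stack([shape_cue, brightness])
X -= X.mean(axis=0)                          # center the features
y = label - label.mean()

# Least-squares linear classifier: it leans on whichever feature
# predicts the label most cleanly, not the one a human would pick.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # the brightness weight dwarfs the shape weight
```

Nothing here is wrong from the optimizer's point of view; the shortcut genuinely is the best predictor in the training data, which is exactly why it breaks the moment the data distribution shifts.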

Also, for most of these AIs (StarCraft, Dota OpenAI, etc.) the goal is to beat the player by playing "smartly", so there are extreme limits on actions per minute. StarCraft pro players have the advantage that they get to spam actions. That's the only reason human players can have a somewhat competitive game with the AI.

The AI had some APM limitations, that's true! But combined with its perfect mouse accuracy, it had an edge over any pro player in terms of useful actions; I don't think it's even particularly close. It's not a stretch to say that whenever it won, it was due to execution, not strategy. So, at least IMO, while it did well, it did well in what I'd say is an uninteresting way. Fundamentally, so much of SC2 is about clicking well, and the AI did that. But, you know, there's code that does perfect marine splits vs banelings that's just a bit of code and not a giant neural net thing. It would be really interesting if it had APM/accuracy limitations imposed that truly put it at a significant disadvantage compared to human players, so it could only win through something resembling planning and strategy.

Based on the games I've seen, though (which is quite a few of them), it wasn't that. I mean, it built roaches into Void Rays and almost lost a game that should've never in a million years been close, that by the end looked like a desynced replay due to the AI's baffling decisions. Its whole zerg "strategy" - at least, what I saw of it - was a really, really, really well executed roach timing.

Some things were fascinating, though - the adept harass it did in PvP, and how it over-built probes to compensate. Its army movement was also surprisingly good at times. It's definitely a big accomplishment.


(I'd also argue that combat in Starsector is complicated in different ways. E.G. that starting conditions vary *wildly* and you can't choose a specific strategy to narrow down the possibilities and then refine that (e.g. that roach timing), you have to be able to handle all of it. Also, it wouldn't be training *one* agent per side, it'd be training one agent *per ship*, which I think changes things drastically. Perhaps exponentially? Some sort of cooperation would need to evolve etc. I mean, this is all extremely theory-crafty and there's zero chance of it becoming a real thing, but it seems... complicated. Not that SC2 isn't, but Starsector to me isn't obviously simpler. Just different concerns, and I don't know enough about neural net training etc to really evaluate it.)
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: MrDaddyPants on August 12, 2020, 07:01:58 PM
Oh yeah i'd agree that it won mechanically.

I would also love to see the AI finding out some unexpected 3-forge strat or something completely out of the box. Something that would force humans to re-evaluate the meta.

Deep learning is in its infancy. Even Google's AlphaGo (or whatever it's called), the deep learning AI for Go, would basically use a neural net to narrow down some positions. Then those positions would be classically brute-forced (every turn, every possible option).

However SS is not that complicated.

Dota is super complex, but kinda similar to SS in that each agent can move, or use skills and attacks. And where I think the Dota OpenAI excelled is movement and attacks in coordination with each other. They chained their skills beautifully: not just in having no "stun overlap" because of the timing of two stuns, but purely in each agent's ability to be in position at the right time, to get just in range (whereas at the beginning of combat only one agent was in range) to use its spell to score kills in harmony with other agents, and then disengage and escape, or continue fighting for more kills. Provided, of course, it's a multi-million dollar operation.

I think it would work similarly in SS. That's where the human doesn't fit in. Agents would expect the human ship to do stuff as they would, and because of the human's non-compliance, they would fail against purely AI opposition...

I'm pretty sure sooner or later deep learning will be part of the game dev's toolkit in some way. But there is also the fun factor. People might complain, AI this, AI that.

But I don't see many of them saying how much fun they have playing chess or Go against an AI :))
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: Nick XR on August 12, 2020, 08:18:47 PM
An actual example: a neural net trained to tell apart American and Russian tanks actually learned to tell apart low and high quality photos, since the photos of Russian tanks were all low-quality and taken under less-than-ideal conditions. Point being, it may learn to do something, but the *why* is extremely iffy. That's why I'm saying a completely unexpected thing could turn out to in fact be critical. Like, for example, something having exactly 3000 hull, or whatever other random bit of info that's obviously non-critical to a human but a neural net might fixate on for reasons.

Hah, there are a lot of hilarious/tragic ways models can go wrong. One of my personal favorites is the general category of Adversarial Objects, specifically: https://www.digitaltrends.com/cool-tech/image-recognition-turtle-rifle/ TL;DR: Google is very sure a turtle is a rifle.
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: Alex on August 12, 2020, 08:50:47 PM
I would also love to see the AI finding out some unexpected 3-forge strat or something completely out of the box. Something that would force humans to re-evaluate the meta.

Yeah, that would have been amazing! But, alas, google seems to have just declared victory and moved on.

I'm pretty sure sooner or later deep learning will be part of the game dev's toolkit in some way.

Yeah, I think you're right about that. I mean, we're seeing some of it now, with texture upscaling tools, and with the Dragon GPT 3 AI Dungeon thing...

But I don't see many of them saying how much fun they have playing chess or Go against an AI :))

Hah, yeah :)


Hah, there are a lot of hilarious/tragic ways models can go wrong. One of my personal favorites is the general category of Adversarial Objects, specifically: https://www.digitaltrends.com/cool-tech/image-recognition-turtle-rifle/ TL;DR: Google is very sure a turtle is a rifle.

Oh yeah, that's the best! I'm also a fan of the "the AI finds this part of the image 'interesting' and will over-focus on it, ignoring the actual image" attacks. Which can be used to foil, say, facial recognition by wearing a patch on your jacket, or some such. (Side note/pet peeve: people - us included - calling this stuff "AI" is such a giant marketing coup. It's about as close to AI as f(x) = mx + b, and yet. Calling it AI undersells just how fundamentally non-reasoning it is. And don't even get me started on self-driving cars...)
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: Cyan Leader on August 12, 2020, 09:57:04 PM
I want to reinforce that a very good AI doesn't necessarily mean the most interesting or fun to play against. For example, if the optimum strategy is to kite the player for 10 minutes with a single ship and then chain deploy until the player's flagship CR runs out then it's going to do that every time. At the end of the day, what we want is an engaging experience and I'd say that the current AI already does that in spades.
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: Megas on August 13, 2020, 05:23:50 AM
I want to reinforce that a very good AI doesn't necessarily mean the most interesting or fun to play against. For example, if the optimum strategy is to kite the player for 10 minutes with a single ship and then chain deploy until the player's flagship CR runs out then it's going to do that every time. At the end of the day, what we want is an engaging experience and I'd say that the current AI already does that in spades.
The AI already abuses kiting too much, and I would expect perfect-play AI in Starsector to kite even more, to play the stall war and win by the player running out of PPT and CR first.  (It is a major reason why I currently use all-capital or capital-and-cruiser fleets: maximum PPT to win a stall war.)  Something like Remnants, which have more PPT than most ships, could win just by running away and running out the clock.  I would like to see Starsector AI rolled back to the more aggressive, macho pre-0.8a AI.  It was more fun.

Remember Timid officers during one of the 0.7.x releases?  They were impossible to catch with a slower ship, and the optimal strategy (when soloing multiple fleets with a max-skilled godship Onslaught) was to sit on a relay and wait until the Timid officer ran out of CR and lost its engines.  Fights against an endgame fleet often took close to an hour, due to waiting for Timid officers to self-destruct.
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: Lucky Mushroom on August 13, 2020, 08:23:43 AM
That is a dream, but I want to see something like a scout with a couple of frigates checking what the player deployed. Sooo many possibilities. And another one with multiplayer: we, possibly thousands of players, fighting in a huge sector, everyone with their own fleet, against or alongside a mastermind AI. Something like Planetside 2, but not only PvP, PvAI too. This game has so much potential, with good funds and more people with passion like Alex. That would be amazing.
And in this case a special server computer or something with AI only makes sense.

And yes, 10 million would be very appreciated.
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: intrinsic_parity on August 13, 2020, 11:37:43 AM
It would hopefully be possible that small stat changes wouldn't matter that much because the actions that you take aren't really different if the ship has slightly more hull or a slightly different load out

An actual example: a neural net trained to tell apart American and Russian tanks actually learned to tell apart low and high quality photos, since the photos of Russian tanks were all low-quality and taken under less-than-ideal conditions. Point being, it may learn to do something, but the *why* is extremely iffy. That's why I'm saying a completely unexpected thing could turn out to in fact be critical. Like, for example, something having exactly 3000 hull, or whatever other random bit of info that's obviously non-critical to a human but a neural net might fixate on for reasons.
The ML is still finding real patterns in this case, even if they aren't the intended patterns. There is a knowable 'why' based on the training data; it's just not the 'why' you wanted. The performance can only ever be as good as the data (in my field we say 'garbage in, garbage out'). For image recognition specifically, there are going to be a lot of issues with pixel-level information/patterns/noise that just aren't an issue in video games, where the ML has direct access to the game state and there's pretty much no variance. For instance, if the tank-identification ML was acting directly on the dimensions and properties of the tanks rather than on pixel-level information representing those things, it would be impossible for some of those things to go wrong. It's just sort of hard to compare results from applications with such different data and methods (supervised vs reinforcement learning etc.).

I particularly think the example of hull is not a good one, because the ML/NN will naturally see tons of different hull values on tons of different ships over the course of a million training combats. That's all I was trying to say: small variations that are already covered by the training are probably not going to be an issue.

The general point that rebalancing would almost certainly require retraining, and mods would be difficult to support is completely right though. It's something you would do as a research project on a 'finalized' open source game, not something you would try to implement during the game development process while everything is changing.

Based on the games I've seen, though (which is quite a few of them), it wasn't that. I mean, it built roaches into Void Rays and almost lost a game that should've never in a million years been close, that by the end looked like a desynced replay due to the AI's baffling decisions. Its whole zerg "strategy" - at least, what I saw of it - was a really, really, really well executed roach timing.
I got the impression from reading the paper on AlphaStar that the space of possible inputs in StarCraft was just so large that reinforcement learning was not able to explore it the way it does in chess or Go. They had to resort to starting it out with supervised learning (basically copying some human gameplay), and then it was allowed to try to improve from there via actual reinforcement learning. I also noticed that they developed some specialized networks that only did particular strategies, and then tried to make generalized networks that could beat all of those specialized ones, so maybe some of the limited strategies you observed were related to that.
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: Lucky Mushroom on August 15, 2020, 12:59:02 AM
This would help, I think: https://www.youtube.com/watch?v=TJcKYUTaBtg
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: Alex on August 15, 2020, 11:01:23 AM
I particularly think the example of hull is not a good one, because the ML/NN will naturally see tons of different hull values on tons of different ships over the course of a million training combats. That's all I was trying to say: small variations that are already covered by the training are probably not going to be an issue.

Ah, perhaps I should've been more clear: by "hull" I mean the max hull value, not the current one. So e.g. it could use "ship has 20,000 hull" as a proxy for "the ship is an Onslaught" (or whatever), and that doesn't seem all that unlikely. Which bit of the data it's going to zero in on isn't really predictable. It's not so much garbage in / garbage out as it is finding patterns other than the ones that would make sense for humans - ones humans would know are unreasonable to use, because they can extrapolate beyond the data set and go "yeah, that's not a good assumption to make".
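
The point can be caricatured in a couple of lines (the exact-hull rule and the 20,000 figure are purely hypothetical, of course):

```python
# A deliberately silly illustration: a learned rule that keys on an
# exact max-hull value is perfect on the training data and breaks the
# moment a balance patch nudges the number.
def learned_is_onslaught(max_hull):
    # In every training battle, only the Onslaught had exactly 20000
    # max hull, so the net converged on this brittle proxy.
    return max_hull == 20000

print(learned_is_onslaught(20000))  # identified correctly pre-patch
print(learned_is_onslaught(20001))  # a +1 hull rebalance breaks it
```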

I get what you mean, though, yeah - if we're talking about current hull values which change a lot, that seems like that's less likely to be a problem. But, ah, this is all theorycrafting on my part as my understanding of it is not that deep.

I got the impression from reading the paper on AlphaStar that the space of possible inputs in StarCraft was just so large that reinforcement learning was not able to explore it the way it does in chess or Go. They had to resort to starting it out with supervised learning (basically copying some human gameplay), and then it was allowed to try to improve from there via actual reinforcement learning. I also noticed that they developed some specialized networks that only did particular strategies, and then tried to make generalized networks that could beat all of those specialized ones, so maybe some of the limited strategies you observed were related to that.

(I think this was pretty much towards the end of what they were doing, from the ladder games of the agents they had play on the ladder and which got into GM with all 3 races. Terran wasn't pretty either - you could see it had some notion that walling in was a thing, but not an actual understanding of why or how.)
Title: Re: AI, something like AlfaStar in StarCraft II.
Post by: DubTre6 on August 17, 2020, 11:58:07 AM
This thread made my brain hurt :o