Fractal Softworks Forum

Pages: 1 [2]

Author Topic: AI, something like AlfaStar in StarCraft II.  (Read 2894 times)

Cyan Leader

  • Admiral
  • Posts: 718
Re: AI, something like AlfaStar in StarCraft II.
« Reply #15 on: August 12, 2020, 09:57:04 PM »

I want to reinforce that a very good AI doesn't necessarily mean the most interesting or fun to play against. For example, if the optimum strategy is to kite the player for 10 minutes with a single ship and then chain deploy until the player's flagship CR runs out then it's going to do that every time. At the end of the day, what we want is an engaging experience and I'd say that the current AI already does that in spades.
« Last Edit: August 12, 2020, 09:59:04 PM by Cyan Leader »

Megas

  • Admiral
  • Posts: 12150
Re: AI, something like AlfaStar in StarCraft II.
« Reply #16 on: August 13, 2020, 05:23:50 AM »

Quote from: Cyan Leader on August 12, 2020, 09:57:04 PM
I want to reinforce that a very good AI doesn't necessarily mean the most interesting or fun to play against. For example, if the optimum strategy is to kite the player for 10 minutes with a single ship and then chain deploy until the player's flagship CR runs out then it's going to do that every time. At the end of the day, what we want is an engaging experience and I'd say that the current AI already does that in spades.
The AI already abuses kiting too much, and I would expect a perfect-play AI in Starsector to kite even more, playing the stall war and winning when the player runs out of PPT and CR first.  (It is a major reason why I currently use all-capital or capital-and-cruiser fleets - maximum PPT to win a stall war.)  Something like the Remnants, which have more PPT than most ships, could win just by running away and running out the clock.  I would like to see the Starsector AI rolled back to the more aggressive, macho pre-0.8a AI.  It was more fun.

Remember Timid officers during one of the 0.7.x releases?  They were impossible to catch with a slower ship, and the optimal strategy (when soloing multiple fleets with a max-skilled godship Onslaught) was to sit on a relay and wait until the Timid officer ran out of CR and lost its engines.  Fights against an endgame fleet often took close to an hour, due to waiting for Timid officers to self-destruct.

Lucky Mushroom

  • Ensign
  • Posts: 18
Re: AI, something like AlfaStar in StarCraft II.
« Reply #17 on: August 13, 2020, 08:23:43 AM »

That is a dream, but I want to see something like a scout with a couple of frigates checking what the player deployed. Sooo many possibilities. And another one with multiplayer: we players, possibly thousands of us, fighting in a huge sector, everyone with their own fleet, against or alongside a mastermind AI. Something like PlanetSide 2, but not only PvP - PvAI. This game has so much potential, with good funds and more people with passion like Alex. That would be amazing.
And in this case a special server computer, or something running the AI only, makes sense.

And yes, 10 million would be very appreciated.

intrinsic_parity

  • Admiral
  • Posts: 3071
Re: AI, something like AlfaStar in StarCraft II.
« Reply #18 on: August 13, 2020, 11:37:43 AM »

Quote
It would hopefully be possible that small stat changes wouldn't matter that much, because the actions that you take aren't really different if the ship has slightly more hull or a slightly different loadout.

Quote
An actual example: a neural net trained to tell apart American and Russian tanks actually learned to tell apart low and high quality photos, since the photos of Russian tanks were all low-quality and taken under less-than-ideal conditions. Point being, it may learn to do something, but the *why* is extremely iffy. That's why I'm saying a completely unexpected thing could turn out to in fact be critical. Like, for example, something having exactly 3000 hull, or whatever other random bit of info that's obviously non-critical to a human but a neural net might fixate on for reasons.
The ML is still finding real patterns in this case, even if they aren't the intended patterns. There is a knowable 'why' based on the training data, it's just not the 'why' you wanted. The performance can only ever be as good as the data (in my field we say 'garbage in, garbage out'). For image recognition specifically, there are going to be a lot of issues with pixel-level information/patterns/noise that just aren't an issue in video games, where the ML has direct access to the game state and there's pretty much no variance. For instance, if the tank-identification ML was acting directly on the dimensions and properties of the tanks, rather than on pixel-level information representing those things, it would be impossible for some of those failure modes to occur. It's just hard to compare results from applications with such different data and methods (supervised vs. reinforcement learning, etc.).
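To make that concrete, here's a toy sketch (entirely made-up numbers on my part, nothing from the actual tank study): a lazy learner that thresholds on a single feature will latch onto photo noise, because in this hypothetical training set noise separates the classes perfectly - and then it fails the moment it sees a clean photo of a Russian tank.

```python
import random

random.seed(0)

# Hypothetical data mirroring the tank anecdote. Each sample is
# [hull_length_m, photo_noise] plus the label we actually care about.
# Crucially, every Russian-tank photo in training is low quality.
def photo(label, noisy):
    length = random.gauss(9.5, 0.2) if label == "american" else random.gauss(6.9, 0.2)
    noise = random.uniform(0.6, 1.0) if noisy else random.uniform(0.0, 0.4)
    return [length, noise], label

train = [photo("american", noisy=False) for _ in range(50)]
train += [photo("russian", noisy=True) for _ in range(50)]

# A lazy learner: threshold on whichever single feature separates the
# training set best. Photo noise separates it perfectly, so it wins.
def fit(data):
    best = (None, None, -1.0)
    for f in (0, 1):
        for sample, _ in data:
            t = sample[f]
            acc = sum((s[f] >= t) == (y == "russian") for s, y in data) / len(data)
            if acc > best[2]:
                best = (f, t, acc)
    return best

feat, thresh, train_acc = fit(train)
predict = lambda s: "russian" if s[feat] >= thresh else "american"

print(feat, train_acc)  # picks feature 1 (noise) with training accuracy 1.0

# Off-distribution test: a *clean* photo of a Russian tank.
clean_russian, _ = photo("russian", noisy=False)
print(predict(clean_russian))  # "american": the spurious cue fails
```

The failure here is in the data, not the optimizer - any learner that only ever sees this training set has no reason to prefer the "right" feature.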

I particularly think the example of hull is not a good one, because the ML/NN will naturally see tons of different hull values on tons of different ships over the course of a million training combats. That's all I was trying to say: that small variations that are already covered by the training are probably not going to be an issue.

The general point - that rebalancing would almost certainly require retraining, and that mods would be difficult to support - is completely right though. It's something you would do as a research project on a 'finalized' open-source game, not something you would try to implement during the game development process while everything is changing.

Quote
Based on the games I've seen, though (which is quite a few of them), it wasn't that. I mean, it built roaches into Void Rays and almost lost a game that should've never in a million years been close, that by the end looked like a desynced replay due to the AI's baffling decisions. Its whole zerg "strategy" - at least, what I saw of it - was a really, really, really well executed roach timing.
I got the impression from reading the AlphaStar paper that the space of possible inputs in StarCraft was just so large that reinforcement learning was not able to explore it the way it does in chess or Go. They had to resort to starting it out with supervised learning (basically copying some human gameplay), and then it was allowed to try and improve from there via actual reinforcement learning. I also noticed that they developed some specialized networks that only did particular strategies, and then tried to make generalized networks that could beat all of those specialized ones, so maybe some of the limited strategies you observed were related to that.
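As a toy illustration of that two-phase recipe (my own sketch, not anything from the AlphaStar code - the "strategies", win rates, and human games below are all invented): initialize a softmax policy by imitating human play, then improve it with plain REINFORCE on a three-armed bandit.

```python
import math
import random

random.seed(1)

# 3 "strategies" (bandit arms) with hidden win rates. Arm 2 is actually
# best, but the imitation data only contains arms 0 and 1, mimicking
# "copying some human gameplay".
ARMS = 3
true_winrate = [0.3, 0.5, 0.7]
human_games = [0, 0, 1, 0, 1, 1, 0, 1]

logits = [0.0] * ARMS

def probs():
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Phase 1: supervised imitation - nudge the policy toward the human action.
for action in human_games:
    p = probs()
    for i in range(ARMS):
        logits[i] += 0.5 * ((1.0 if i == action else 0.0) - p[i])

# Phase 2: reinforcement learning - sample an arm, observe a win/loss, and
# apply a REINFORCE update with a running reward baseline.
baseline = 0.0
for _ in range(20000):
    p = probs()
    action = random.choices(range(ARMS), weights=p)[0]
    reward = 1.0 if random.random() < true_winrate[action] else 0.0
    baseline += 0.01 * (reward - baseline)
    for i in range(ARMS):
        grad = (1.0 if i == action else 0.0) - p[i]
        logits[i] += 0.1 * (reward - baseline) * grad

# The policy should end up favoring arm 2 despite the human prior.
print(probs())
```

The "exploiter vs. generalist" league they used is the same idea scaled up: freeze specialized policies and keep training the main agent against the whole pool.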

Lucky Mushroom

  • Ensign
  • Posts: 18
Re: AI, something like AlfaStar in StarCraft II.
« Reply #19 on: August 15, 2020, 12:59:02 AM »


Alex

  • Administrator
  • Posts: 24105
Re: AI, something like AlfaStar in StarCraft II.
« Reply #20 on: August 15, 2020, 11:01:23 AM »

Quote from: intrinsic_parity on August 13, 2020, 11:37:43 AM
I particularly think the example of hull is not a good one, because the ML/NN will naturally see tons of different hull values on tons of different ships over the course of a million training combats. That's all I was trying to say: that small variations that are already covered by the training are probably not going to be an issue.

Ah, perhaps I should've been more clear: by "hull" I mean the max hull value, not the current one. So e.g. it could use "ship has 20,000 hull" as a proxy for "the ship is an Onslaught" (or whatever), and that doesn't seem all that unlikely. Which bit of the data it's going to zero in on isn't really predictable. It's not so much garbage in / garbage out as it is finding patterns other than the ones that would make sense to humans - ones humans would know are unreasonable to use, because they can extrapolate beyond the data set and go "yeah, that's not a good assumption to make".

I get what you mean, though, yeah - if we're talking about current hull values which change a lot, that seems like that's less likely to be a problem. But, ah, this is all theorycrafting on my part as my understanding of it is not that deep.
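To spell out the worry with a toy example (made-up numbers on my part, not actual Starsector stats): a rule keyed on "20,000 max hull" is perfect on the training fleet, but a small rebalance or a mod ship silently breaks it.

```python
# Suppose training always shows the Onslaught with 20,000 max hull, so
# the learned rule keys on that number instead of the hull ID.
training_fleet = [
    {"hull_id": "onslaught", "max_hull": 20000},
    {"hull_id": "dominator", "max_hull": 14000},
    {"hull_id": "lasher",    "max_hull": 4000},
]

# The rule a net might effectively learn: "20,000 max hull == Onslaught".
learned_is_onslaught = lambda ship: ship["max_hull"] == 20000

# On the training distribution the proxy is perfect...
assert all(learned_is_onslaught(s) == (s["hull_id"] == "onslaught")
           for s in training_fleet)

# ...but a small rebalance (or a mod ship) silently breaks it.
rebalanced = {"hull_id": "onslaught", "max_hull": 21000}
mod_ship   = {"hull_id": "mod_battleship", "max_hull": 20000}
print(learned_is_onslaught(rebalanced))  # False: misses the real Onslaught
print(learned_is_onslaught(mod_ship))    # True: misfires on the mod ship
```

Both failures are invisible on the training data, which is why rebalancing would force retraining.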

Quote from: intrinsic_parity on August 13, 2020, 11:37:43 AM
I got the impression from reading the AlphaStar paper that the space of possible inputs in StarCraft was just so large that reinforcement learning was not able to explore it the way it does in chess or Go. They had to resort to starting it out with supervised learning (basically copying some human gameplay), and then it was allowed to try and improve from there via actual reinforcement learning. I also noticed that they developed some specialized networks that only did particular strategies, and then tried to make generalized networks that could beat all of those specialized ones, so maybe some of the limited strategies you observed were related to that.

(I think this was pretty much towards the end of what they were doing, from the ladder games of the agents they had play on the ladder and which got into GM with all 3 races. Terran wasn't pretty either - you could see it had some notion that walling in was a thing, but not an actual understanding of why or how.)

DubTre6

  • Commander
  • Posts: 100
  • Tri-Tachyon Agricultural Rep.
Re: AI, something like AlfaStar in StarCraft II.
« Reply #21 on: August 17, 2020, 11:58:07 AM »

This thread made my brain hurt :o
8) why fight the paragon when you can BE the paragon 8)