Fractal Softworks Forum


Author Topic: Optimizing the Conquest: a Mathematical Model of Space Combat  (Read 25121 times)

intrinsic_parity

  • Admiral
  • *****
  • Posts: 3071
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #180 on: November 26, 2022, 10:47:50 AM »

I know there's one way to exactly calculate the EV, which is to treat the armor as a function of all of the shots up to that point. The problem is that you end up with a nested loop for every shot up to that point, which is computationally intractable.
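In symbols, that exact computation is a sum over every possible shot sequence (a sketch; p_k is the probability of a shot landing on location k, including misses, and A_t(k_1, ..., k_t) is the armor state after that particular sequence):

E[A_t] = \sum_{k_1} \cdots \sum_{k_t} p_{k_1} \cdots p_{k_t} \, A_t(k_1, \dots, k_t)

which has on the order of n^t terms after t shots over n possible locations, hence the nested loops.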

edit: I will also say that I think it does satisfy the Markov property, so I think you could categorize it as some sort of 'discrete-time, continuous-state Markov process'. In that case you should only have to consider the current state/shot. That's probably the right direction to search if you really want to figure it out, but I have a feeling the non-linearity will make it a bit too niche/specific for you to find many results.
« Last Edit: November 26, 2022, 11:06:06 AM by intrinsic_parity »
Logged

CapnHector

  • Admiral
  • *****
  • Posts: 1056
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #181 on: November 26, 2022, 09:55:06 PM »

Alright, got it fixed. While writing this, I noticed that there is literally no way to write the matrix D so that it includes the correct hit strength computation while D is also constant, at least using elementary operations. Maybe using tensors, but not using matrices. So I formally renounce my old method as incorrect, even though it produced some plausible results; this is the only correct way to calculate it regardless of the results produced, so if they are incorrect it must be a bug in the code rather than a math error.

I do think this brings us closer to computing E(h/(h+armor)), because now there is a consistent expression for where the randomness should be. That is, if you can formulate the column vector X_t to account for probability, so that you can directly compute E(X_t), then you remove the error from the calculation. Note that this can also be written without a Hadamard product, using diagonal matrices instead, if that's computationally advantageous.
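As a quick illustration of that equivalence (toy values only, nothing to do with the actual armor matrices): scaling the rows of a matrix element-wise by a vector gives the same result as left-multiplying by the corresponding diagonal matrix.

Code

v <- c(0.5, 1, 2)                 #per-row factors, e.g. damage reduction values
M <- matrix(1:9, nrow = 3)        #any 3x3 matrix
all.equal(v * M, diag(v) %*% M)   #TRUE: both scale row i of M by v[i]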




Edit: notation. Edit2: more fixes to notation. It turns out choosing X as the symbol for the pooling matrix was highly unfortunate, as it is identical to capital chi, making page 2 a little less readable, but I'll fix that another time. Edit3: placed eta in its right place on page 2. Edit4: added this note:

« Last Edit: November 27, 2022, 02:52:16 AM by CapnHector »
Logged
5 ships vs 5 Ordos: Executor · Invictus · Paragon · Astral · Legion · Onslaught · Odyssey | Video Library | Hiruma Kai's Challenge

CapnHector

  • Admiral
  • *****
  • Posts: 1056
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #182 on: November 27, 2022, 10:49:15 AM »

Ok, sorry about the double post, but here's a thought that might help resolve the remaining inaccuracy:

Using the law of the unconscious statistician (discrete form), and given that we've now split up the damage reduction into a vector, isn't it the case that, using the notation above,

E(armor damage reduction at cell k at t+1) = p_(k-2)*h/(h+Â1) + p_(k-1)*h/(h+Â2) + p_k*h/(h+Â3) + p_(k+1)*h/(h+Â4) + p_(k+2)*h/(h+Â5) + (1-p_(k-2)-...-p_(k+2))*h/(h+pool(A_t))

Where Â1=pooled armor at k after shot hits cell k-2 with 100% probability at timepoint t
Where Â2=pooled armor at k after shot hits cell k-1 with 100% probability at timepoint t
Where Â3=pooled armor at k after shot hits cell k with 100% probability at timepoint t
etc.

I.e., for all hits affecting the pooled value at that location, the sum of the probability of each hit location times the armor damage reduction if that hit happens. Calculated like this, it would involve only one lookback, since we can presume the armor calculation is accurate from step to step. I think in practice you should keep track of not only the expected armor but also the expected armor damage reduction, and update it after each shot.
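A minimal sketch of that update in code (illustrative names: p5 holds just the five probabilities p_(k-2)..p_(k+2), A_hat the corresponding pooled counterfactual armor values Â1..Â5, pooled_current the value pool(A_t), and h the hit strength):

Code

#probability-weighted armor damage reduction at cell k for step t+1, per the formula above
expected_reduction <- function(p5, A_hat, pooled_current, h) {
  stopifnot(length(p5) == 5, length(A_hat) == 5)
  sum(p5 * h / (h + A_hat)) + (1 - sum(p5)) * h / (h + pooled_current)
}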

Later in the week I can maybe do some math again and build test code, but does this sound reasonable?
Logged
5 ships vs 5 Ordos: Executor · Invictus · Paragon · Astral · Legion · Onslaught · Odyssey | Video Library | Hiruma Kai's Challenge

Liral

  • Admiral
  • *****
  • Posts: 718
  • Realistic Combat Mod Author
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #183 on: November 27, 2022, 05:56:51 PM »

Later in the week I can maybe do some math again and build test code, but does this sound reasonable?

I have deleted all the old code as you requested and need a copy of the latest working R code, less the combat math.

CapnHector

  • Admiral
  • *****
  • Posts: 1056
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #184 on: November 27, 2022, 07:54:34 PM »

Hm? There should have been no changes to the hit distribution code or the shot time sequence code, or even the main time series loop. Those are all fine. When I said I renounce my old model, I was referring specifically to the armor damage. Above I formulated it using linear algebra, but it's just equivalent to intrinsic_parity's formulation unless I figure out this expected hit strength thing fully. So the previously sent graph-plotting code (weaponsdata4.R) should still be up to date.

Sorry about my ambiguous communication here.

I am thinking I should probably write a LaTeX specification of the exact mathematics of this program and what it should do, if I can get this last piece of the puzzle to come together, rather than attempt to create any polished code, since there are folks here who are much better at the latter than I am. Also, it keeps happening that whenever I do the pen-and-paper math first, I tend to produce much simpler operations than if I write code first and then try to make sense of it later, or do the math based on the code. Valuable life lesson, too.

But no changes are expected to the rest of the code: the distributions are figured out exactly; the time sequence for weapons fire is computed literally exactly as the game currently does it; shield damage is trivial, since the expected value of (damage times modifier) is just the probability of a hit times damage times modifier; and hull damage is always only the amount going through armor, re-scaled to the modifier vs. hull. So only the armor damage code might be re-specced, and even then the current version is fine; it would only be a slight improvement that can be slotted in if the code is modular.
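In symbols, the shield case mentioned above is just (with p the probability of a hit, d the shot damage and m the shield damage modifier):

E[\text{shield damage per shot}] = p \cdot d \cdot m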

Edit: here is some more work on this. This likely won't be computationally cheap (it involves computing literally as many armor matrices at each step as the ship has hittable horizontal armor cells) but the hope is that using linear algebra might offset some of the costs.



Incidentally, this suggests to me another way of calculating the damage to the whole armor: if we are already calculating A_t star 1 ... A_t star n, then we could just calculate the whole armor at the next step as A_(t+1) = p_1 * A_t star 1 + ... + p_n * A_t star n.

This is how that would look as a computer program:



This is literally taking a probability weighted average of all possible hits at each step.
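A minimal sketch of that weighted-average update (assuming the counterfactuals and hit probabilities are already computed; A_star is a list where A_star[[r]] is the armor state after a certain hit on cell r, and p_cell[r] is the probability of that hit - names are illustrative, and a term for the miss probability times the unchanged armor would be added the same way):

Code

weighted_armor_update <- function(A_star, p_cell) {
  A_next <- 0 * A_star[[1]]                                #zero matrix of the right shape
  for (r in seq_along(A_star)) A_next <- A_next + p_cell[r] * A_star[[r]]
  return(A_next)
}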

I do not currently have any ideas for how to approach the expected value of the armor damage reduction without reference to the law of the unconscious statistician (that would imply computing a probability density function for the armor state, which is pretty hard), so computing all the hypothetical armor states is the only way to go, and we might as well do it like this if going that route. I need to think a bit about how the two things interact to see how this might all work out exactly.
« Last Edit: November 28, 2022, 05:55:24 AM by CapnHector »
Logged
5 ships vs 5 Ordos: Executor · Invictus · Paragon · Astral · Legion · Onslaught · Odyssey | Video Library | Hiruma Kai's Challenge

Liral

  • Admiral
  • *****
  • Posts: 718
  • Realistic Combat Mod Author
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #185 on: November 28, 2022, 05:04:46 AM »

Hm? There should have been no changes to the hit distribution code or the shot time sequence code, or even the main time series loop. Those are all fine. When I said I renounce my old model, I was referring specifically to the armor damage. Above I formulated it using linear algebra, but it's just equivalent to intrinsic_parity's formulation unless I figure out this expected hit strength thing fully. So the previously sent graph-plotting code (weaponsdata4.R) should still be up to date.

Sorry about my ambiguous communication here.

I have deleted all the code, including the permutation generator, optimizer, etc., and wrote a Python file that extracts data from the game files and exposes it as a dictionary.  Writing this reply, I have reconsidered my request for the original R code, because I remember another poster said that the brute-force search it implemented was inefficient, and besides, that code reads its data from an array rather than from a dictionary.  I should rather work backward from the expected hit strength math, which you might change greatly soon, so I suppose I will wait.  :-\  :(

CapnHector

  • Admiral
  • *****
  • Posts: 1056
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #186 on: November 28, 2022, 05:23:13 AM »

Sorry about that, Liral. It's just that the math here is quite interesting and I can't stop trying to figure it out. At the same time, it's going to take a little while to see whether this is going anywhere or not; it feels like every new equation opens a new possibility, but I can't tell if it's going anywhere meaningful yet, and if I rush it I'm going to write something flawed again. But can't the hit probability distribution etc. be translated to Python? This is all purely armor damage; the number-of-shots time sequence algorithm and the hit probability distribution algorithm should be completely fine. They implement math that's basically set in stone.

The brute-force search comment was referring to how weaponsoptimize4.R just goes through all the combinations rather than doing a more intelligent search, if I understand what intrinsic_parity was saying. But I think the first step would be to get a complete implementation of damage for a single ship in Python, and only then worry about the optimization part.

Edit: attached in the zip are weaponsdata4.R (single-ship script) and weaponsoptimize4.R (test of a selection of weapons vs. a variety of ships). The armor damage code is wrong, but they are otherwise fine.

[attachment deleted by admin]
« Last Edit: November 28, 2022, 05:43:47 AM by CapnHector »
Logged
5 ships vs 5 Ordos: Executor · Invictus · Paragon · Astral · Legion · Onslaught · Odyssey | Video Library | Hiruma Kai's Challenge

Liral

  • Admiral
  • *****
  • Posts: 718
  • Realistic Combat Mod Author
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #187 on: November 28, 2022, 02:21:11 PM »

Sorry about that, Liral. It's just that the math here is quite interesting and I can't stop trying to figure it out. At the same time, it's going to take a little while to see whether this is going anywhere or not; it feels like every new equation opens a new possibility, but I can't tell if it's going anywhere meaningful yet, and if I rush it I'm going to write something flawed again.

I didn't ask you to apologize or hurry, but rather told you that I don't think I can do anything besides a database of game-file information until you settle the math, because the rest of the code is optimization (which you have said below should also be ignored for now) and displaying results that we still lack.

Quote
But can't the hit probability distribution etc. be translated to Python? This is all purely armor damage; the number-of-shots time sequence algorithm and the hit probability distribution algorithm should be completely fine. They implement math that's basically set in stone.

Could you point out the math (and R-code) you mean, exactly?

CapnHector

  • Admiral
  • *****
  • Posts: 1056
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #188 on: November 28, 2022, 11:13:13 PM »

Well, for example the hit distribution function

Code

# this function generates the shot distribution (a bhattacharjee distribution for the
#non-trivial case)

hit_distribution <- function(upperbounds, standard_deviation, spread){
  vector <- vector(mode="double", length = length(upperbounds)+1)
  if (standard_deviation == 0){
    if (spread == 0){
      vector[1] <- 0
      for (j in 2:(length(upperbounds))) {
        #if both spread and standard deviation are 0 then all shots hit 1 cell. this should be so even if
        #the ship has an even number of cells to prevent ships with even no. cells appearing tougher which is not
        #the case in the real game most likely
        if ((upperbounds[j] >= 0) & (upperbounds[j-1] < 0)) vector[j] <- 1
      }
      #return part of a box
    } else {
      vector[1] <- min(1,max(0,(upperbounds[1]+spread))/(2*spread))
      for (j in 2:(length(upperbounds))) vector[j] <- min(1,max(0,(upperbounds[j]+spread))/(2*spread)) - min(1,max(0,(upperbounds[j-1]+spread))/(2*spread))
      vector[length(upperbounds)+1] <- 1-min(1,max(0,(upperbounds[length(upperbounds)]+spread))/(2*spread))
    }
  } else {
    if (spread != 0){
      vector[1] <- hit_probability_coord_lessthan_x(upperbounds[1], standard_deviation, spread)
      for (j in 2:(length(upperbounds))) vector[j] <- (hit_probability_coord_lessthan_x(upperbounds[j], standard_deviation, spread)-hit_probability_coord_lessthan_x(upperbounds[j-1], standard_deviation, spread))
      vector[length(upperbounds)+1] <- (1-hit_probability_coord_lessthan_x(upperbounds[length(upperbounds)], standard_deviation, spread))
    } else {
      #if spread is 0 but standard deviation is not 0 we have a normal distribution
      vector[1] <- pnorm(upperbounds[1],0,standard_deviation)
      for (j in 2:(length(upperbounds))) vector[j] <- pnorm(upperbounds[j],0,standard_deviation) - pnorm(upperbounds[j-1], mean=0, sd=standard_deviation)
      vector[length(upperbounds)+1] <- 1-pnorm(upperbounds[length(upperbounds)], mean=0, sd=standard_deviation)
    }
   
  }
  return(vector)
}

The function of this is to return the probability masses for the ship's cells, i.e. for each cell, the definite integral of the distribution from that cell's lower bound to its upper bound; for the edge cells, from negative infinity to the lowest bound of the first cell; and for the last cell, 1 minus the probability of all the other cells. The distribution is
1) the convolved normal (Bhattacharjee) distribution using the formulation given by Wilkins, when sd > 0 and spread > 0
2) the normal distribution N(0, sd^2) when sd > 0 and spread = 0
3) the uniform distribution [-spread,spread] when sd=0 and spread > 0
4) the trivial distribution that all shots hit 1 cell when sd=0 and spread=0
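For reference, case 1 is computed via the CDF implemented by the helper hit_probability_coord_lessthan_x, which in this parameterization reads

P(X \le z) = \frac{a}{2b}\left[ G\left(\frac{z+b}{a}\right) - G\left(\frac{z-b}{a}\right) \right], \qquad G(y) = y\,\Phi(y) + \varphi(y)

where a is the standard deviation of the normal component, b is the half-width of the uniform component, and \Phi, \varphi are the standard normal CDF and density.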

Since there is absolutely nothing unclear about the math, it's just a matter of executing it in elegant code. Mine is probably not that, but it works.

This should probably be its own module since this is completely independent from the other calculations.
Logged
5 ships vs 5 Ordos: Executor · Invictus · Paragon · Astral · Legion · Onslaught · Odyssey | Video Library | Hiruma Kai's Challenge

CapnHector

  • Admiral
  • *****
  • Posts: 1056
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #189 on: November 29, 2022, 11:07:54 AM »

Alright, so I've built a "technology demonstrator". This is NOT ready for prime time; rather, I plan to write a mathematical specification and leave the actual coding to people who know how to do that.

This uses the linear algebra presented above to compute every possible counterfactual ("A-star") and then constructs the armor state as a probability-weighted sum of those. It also includes a simulation function to generate comparison data the classical way: simulating damage shot by shot as shots hit random places on the armor, and computing damage the same way the game does it (i.e. intrinsic_parity's method). Both the model and the simulation use the real (convolved normal) probability distribution, and it is exactly the same distribution for both.

Here is a graph of the result. The bright line is the model; the other lines are the plots of 100 Dominators undergoing simulation, with a 500-damage energy weapon.


Next, when I have time, I should see just how accurate this is and add a possible statistical correction.

testbench.R
Code

#0. define ship, distribution matrix X, and probability matrix.
#dominator
ship <- c(14000, 500, 10000, 1500, 220, 12, 440, 1.0, 200)
#damage
d <- 500
h <- 500
#constants
omega <- 0.05
kappa <- 0.15
a_0 <- ship[4]
n <- ship[6]
#X
x <- matrix(c(0,1/30,1/30,1/30,0,1/30,1/15,1/15,1/15,1/30,1/30,1/15,1/15,1/15,1/30,1/30,1/15,1/15,1/15,1/30,0,1/30,1/30,1/30,0),5,5)
#draw a random cell from a custom cumulative distribution function
custrandom <- function(dist){
  return(which(dist > runif(1,0,1))[1])
}
range <- 1000
#basic calculations
#this computes a weighted average but can be defined using linear algebra due to what we're working with
poolarmor <- function(matrix, index){
  pooledarmor <- 0
  for(i in 1:5) pooledarmor <- pooledarmor + 15* (x[i,] %*% matrix[,index+i-3])
  return(pooledarmor[[1]])
}
#compute armor damage reduction factor
chi <- function(matrix, index) return(max(kappa,h/(h+max(a_0*omega,poolarmor(matrix,index)))))
#deal damage, the linear algebra way
#for some ungodly reason R insists on transposing the vector
#i refers to row of armor cell, j to column, r to counterfactual vector (star)
damage <- function(damagematrix,i,j,r) return((d * x[i,] %*% damagematrix[(j-2):(j+2),r])[[1]])

create_b_star <- function(armormatrix){
  B_star_vector <- vector(mode="double",length=(n+8))
  for (i in 1:n) {
    B_star_vector[4+i] <- chi(armormatrix,4+i)
  }
  B_star <- diag(B_star_vector)
  return(B_star)
}

star_matrix <- function(matrixA,matrixB,r) {
  A_star <- matrix(0,nrow=length(matrixA[,1]),ncol=length(matrixA[1,]))
  #j runs over the armor columns, i over the 5 rows
  for (j in 5:(length(matrixA[1,])-5)){
    for (i in 1:(length(matrixA[,1]))){
      A_star[i,j] <- max(0, matrixA[i,j] - damage(matrixB, i, j, r))
    }
  }

  return(A_star)
}

star_matrix_wonky_hull_dmg <- function(matrixA,matrixB,r) {

  A_star <- matrix(0,nrow=length(matrixA[,1]),ncol=length(matrixA[1,]))
  hulldamage <- 0
  for (j in 5:(length(matrixB[1,])-2)){
    for (i in 1:(length(matrixA[,1]))){
      hulldamage <- hulldamage + max(0, damage(matrixB, i, j, r) - matrixA[i,j])
      A_star[i,j] <- max(0, matrixA[i,j] - damage(matrixB, i, j, r))
    }
  }
  A_star[1,1] <- hulldamage
  return(A_star)
}
#sd
serror <- 50
#spread
spread <- 10

#how much is the visual arc of the ship in rad?
shipangle <- ship[5]/(2* pi *range)
#how much is the visual arc of a single cell of armor in rad?
cellangle <- shipangle/ship[6]

#distribution
anglerangevector <- 0
anglerangevector <- vector(mode="double", length = ship[6]+1)
anglerangevector[1] <- -shipangle/2
for (i in 1:(length(anglerangevector)-1)) anglerangevector[i+1] <- anglerangevector[i]+cellangle
#now convert it to pixels

anglerangevector <- anglerangevector*2*pi*range

# this function generates the shot distribution (a bhattacharjee distribution for the
#non-trivial case)

hit_distribution <- function(upperbounds, standard_deviation, spread){
  vector <- vector(mode="double", length = length(upperbounds)+1)
  if (standard_deviation == 0){
    if (spread == 0){
      vector[1] <- 0
      for (j in 2:(length(upperbounds))) {
        #if both spread and standard deviation are 0 then all shots hit 1 cell. this should be so even if
        #the ship has an even number of cells to prevent ships with even no. cells appearing tougher which is not
        #the case in the real game most likely
        if ((upperbounds[j] >= 0) & (upperbounds[j-1] < 0)) vector[j] <- 1
      }
      #return part of a box
    } else {
      vector[1] <- min(1,max(0,(upperbounds[1]+spread))/(2*spread))
      for (j in 2:(length(upperbounds))) vector[j] <- min(1,max(0,(upperbounds[j]+spread))/(2*spread)) - min(1,max(0,(upperbounds[j-1]+spread))/(2*spread))
      vector[length(upperbounds)+1] <- 1-min(1,max(0,(upperbounds[length(upperbounds)]+spread))/(2*spread))
    }
  } else {
    if (spread != 0){
      vector[1] <- hit_probability_coord_lessthan_x(upperbounds[1], standard_deviation, spread)
      for (j in 2:(length(upperbounds))) vector[j] <- (hit_probability_coord_lessthan_x(upperbounds[j], standard_deviation, spread)-hit_probability_coord_lessthan_x(upperbounds[j-1], standard_deviation, spread))
      vector[length(upperbounds)+1] <- (1-hit_probability_coord_lessthan_x(upperbounds[length(upperbounds)], standard_deviation, spread))
    } else {
      #if spread is 0 but standard deviation is not 0 we have a normal distribution
      vector[1] <- pnorm(upperbounds[1],0,standard_deviation)
      for (j in 2:(length(upperbounds))) vector[j] <- pnorm(upperbounds[j],0,standard_deviation) - pnorm(upperbounds[j-1], mean=0, sd=standard_deviation)
      vector[length(upperbounds)+1] <- 1-pnorm(upperbounds[length(upperbounds)], mean=0, sd=standard_deviation)
    }
   
  }
  return(vector)
}

G <- function(y) return(y*pnorm(y) + dnorm(y))
#a is the SD of the normal distribution and b is the parameter of the uniform distribution
hit_probability_coord_lessthan_x <- function(z, a, b) return(a/2/b*(G(z/a+b/a)-G(z/a-b/a)))

p <- hit_distribution(anglerangevector,serror,spread)
A <- matrix(ship[4]/15,5,ship[6]+4)
#pad A
A <- cbind(A,c(0,0,0,0,0))
A <- cbind(A,c(0,0,0,0,0))
A <- cbind(c(0,0,0,0,0),A)
A <- cbind(c(0,0,0,0,0),A)
B_star <- create_b_star(A)
p_cum <- vector(mode = "double", length=length(p))
p_cum[1] <- p[1]
for (i in 2:length(p)) p_cum[i] <- sum(p[1:i])
#set to 0 to skip regenerating the simulated comparison data (requires testresults from an earlier run)
generatetestdata <- 1
if(generatetestdata == 1){
#generate simulated data using 100 models
testresults <- data.frame(hullhp=double(),shot=integer(),series=integer())
for (i in 1:100){
A <- matrix(ship[4]/15,5,ship[6]+4)
#pad A
A <- cbind(A,c(0,0,0,0,0))
A <- cbind(A,c(0,0,0,0,0))
A <- cbind(c(0,0,0,0,0),A)
A <- cbind(c(0,0,0,0,0),A)
B_star <- create_b_star(A)
hullhp <- ship[1]
shot <- 0
while (hullhp > 0){
  shot <- shot+1
  #custrandom returns an index into p; cell r has p index r+1 and armor column r+4, hence the +3
  A<-star_matrix_wonky_hull_dmg(A,B_star,custrandom(p_cum)+3)
  hullhp <- hullhp - A[1,1]
  B_star <- create_b_star(A)
  testresults <-rbind(testresults, c(hullhp, shot, i))
}
}
colnames(testresults) <- c("hullhp","shot","series")
}
library(ggplot2)
ggplot(testresults,aes(x=shot,y=hullhp,group=series,col=series))+
  geom_line()

#now the simple model
A <- matrix(ship[4]/15,5,ship[6]+4)
#pad A
A <- cbind(A,c(0,0,0,0,0))
A <- cbind(A,c(0,0,0,0,0))
A <- cbind(c(0,0,0,0,0),A)
A <- cbind(c(0,0,0,0,0),A)
B_star <- create_b_star(A)
hullhp <- ship[1]
shot <- 0
matrixlist <- list()

modelresults <- data.frame(hullhp=double(),shot=integer(),series=integer())

while (hullhp > 0){
  shot <- shot+1
  #2 rows of zeroes and 2 rows of padding
  for(r in 1:n) matrixlist[[r]] <-star_matrix_wonky_hull_dmg(A,B_star,r+4)
  hulldamage <- 0
  #probability list has list item 1 as missing
  for(r in 1:n) hulldamage <- hulldamage + matrixlist[[r]][1,1]*p[r+1]
  hullhp <- hullhp - hulldamage
  A <- A*(p[1]+p[length(p)])
  for(r in 1:n) A <- A + matrixlist[[r]]*p[r+1]
  for(i in 1:length(A[1,]))for(j in 1:length(A[,1])) A[j,i] <- max(A[j,i],0)
  B_star <- create_b_star(A)
  modelresults <-rbind(modelresults, c(hullhp, shot, i))
}
colnames(modelresults) <- c("hullhp","shot","series")
modelresults$series <- 999
testresults <- rbind(testresults, modelresults)
ggplot(testresults,aes(x=shot,y=hullhp,group=series,col=series))+
  geom_line()
[close]
Logged
5 ships vs 5 Ordos: Executor · Invictus · Paragon · Astral · Legion · Onslaught · Odyssey | Video Library | Hiruma Kai's Challenge

intrinsic_parity

  • Admiral
  • *****
  • Posts: 3071
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #190 on: November 29, 2022, 02:05:49 PM »

I tried to go through your math but it's pretty hard to follow (lots of superscripts and subscripts that aren't explicitly defined).

But I believe 'taking a probability weighted average of all possible hits at each step' is what I was already doing. That's what I was trying to communicate when I cited "E[f(X)] = sum(p(x) * f(x)) for x in X": a probability-weighted sum of the armor values over all possible hit locations. So I think my approach should already be equivalent to this, assuming all your math is correct, since (I think) I was trying to do the same thing. Please correct me if there is some distinction I'm missing.

However, I disagree with the statement 'Calculating E[A_t] is simple given E[A_(t-1)]'. In fact, this is the actual thing I don't know how to do.

Calculating E[A_t] given A_(t-1) (a specific value, not an expected value) is simple and is what I believe we are doing here. But E[A_t] given E[A_(t-1)] is the hard part (I suspect it's not possible, and you need a full distribution of A_(t-1), not just the EV), because E[A_(t-1)] is a function of A_(t-2) and so on, so you end up in the exact same situation as when you started (not having the EV of the previous iteration).

Unless you go all the way back to the initial shot. But then what you are doing is a probability-weighted average of every possible SEQUENCE of shots up until this point, which is more or less computationally intractable, since it scales with the number of combinations of possible shot locations up to that point.
Logged

CapnHector

  • Admiral
  • *****
  • Posts: 1056
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #191 on: November 29, 2022, 07:59:08 PM »

No, you are correct, this should be the same thing. Computing the probability-weighted sum of damaged armor is exactly the same thing whether you are subtracting damage or adding armor. The main reasons to check how accurate this is are a) we are now using real distributions and b) in case there was an error in either of our code.

So if this produces accurate results for reasonable combinations of values, then there ought to be no reason to rewrite the code. But if in later testing I discover an error that is significant - say, a HAC vs. an Onslaught's armor, maybe - then this model should let us compute the expected value of armor damage reduction by the math specified above. And then we know we can compute the next expected armor state.

The plan is to keep a separate matrix C of expected damage reduction values that is not calculated from the armor but instead by the expression above, and then use that to perform the calculation again after computing all the possibilities, and use that to update the armor. We'll see how it goes. I'd appreciate it if you point out any logical problems you see.

Mathematically speaking, the trick is that E(h/(h+pool(A_k(t-1)))) = E(chi_k(t-1)). We now have an explicit expression for the expected armor damage reduction at step t in terms of 1) the current expected armor state, which we can calculate*, and 2) the expected value of the previous armor damage reduction, which we cannot compute explicitly, as that is intractable, but can compute using induction, since we do know the armor damage reduction at the first step and now have an induction formula; that should stop the loop. This is pretty non-trivial, though, so in practice I think the proof will be whether it improves the prediction or not. But first we should see if it's even needed, because this is computationally expensive stuff.

* but again not explicitly, because we don't know the expected armor damage reduction - but applying the induction formula leads to an induction formula for the armor as well, since we do know the starting armor state. Hence a "taking turns" approach: first compute the hypotheticals, then compute the expected armor damage reduction, then compute the actual armor damage to get the expected armor state. For clarity that is not spelled out here, but it can be done from this as needed, since we now have the fundamentals from the math.
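Written out with the notation above (a sketch of that induction; \hat{A}_k^{(j)} is the pooled armor at cell k in the counterfactual where the shot at step t hits cell j, and p_j is the probability of that hit):

E[\chi_k^{(t+1)}] \approx \sum_{j=k-2}^{k+2} p_j \frac{h}{h+\hat{A}_k^{(j)}} + \left(1-\sum_{j=k-2}^{k+2} p_j\right) E[\chi_k^{(t)}]

with the base case E[\chi_k^{(0)}] known exactly from the starting armor, which is what makes the induction possible.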
« Last Edit: November 29, 2022, 08:35:18 PM by CapnHector »
Logged
5 ships vs 5 Ordos: Executor · Invictus · Paragon · Astral · Legion · Onslaught · Odyssey | Video Library | Hiruma Kai's Challenge

Liral

  • Admiral
  • *****
  • Posts: 718
  • Realistic Combat Mod Author
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #192 on: November 29, 2022, 08:34:16 PM »

The function of this is to return the probability masses for the ship's cells, i.e. for each cell, the definite integral of the distribution from that cell's lower bound to its upper bound; for the edge cells, from negative infinity to the lowest bound of the first cell; and for the last cell, 1 minus the probability of all the other cells. The distribution is
1) the convolved normal (Bhattacharjee) distribution using the formulation given by Wilkins, when sd > 0 and spread > 0
2) the normal distribution N(0, sd^2) when sd > 0 and spread = 0
3) the uniform distribution [-spread,spread] when sd=0 and spread > 0
4) the trivial distribution that all shots hit 1 cell when sd=0 and spread=0

Since there is absolutely nothing unclear about the math, it's just a matter of executing it in elegant code. Mine is probably not that, but it works.

This should probably be its own module since this is completely independent from the other calculations.

Code
import statistics


def row_hit_probabilities(
        bounds: list,
        standard_deviation: float,
        spread: float) -> list:
    """
    Return the hit probability mass for each cell of a row.
   
    Hit probability mass equals the definite integral of hit
    probability distribution, which depends on the standard
    deviation (sd) of the position of the row and spread of the shots
    fired at it.
   
    sd = 0 and spread = 0: trivial, all shots hitting 1 cell
    sd = 0 and spread > 0: uniform [-spread,spread]
    sd > 0 and spread = 0: normal N(0, sd^2)
    sd > 0 and spread > 0: convolved normal (Bhattacharjee)
        using the formulation given by Wilkins
 
    The bounds of this definite integral across the first, middle, and
    last cells are
   
    - first: negative infinity to lower bound of first cell
    - middle: lower bound of each cell to upper bound of that cell
    - last: upper bound of last cell to positive infinity
   
    Conveniently, that integral for the last cell equals one minus the
    total probability mass of the preceding cells.
 
    bounds - armor row cell upper bounds
    standard_deviation - standard deviation of the row position
    spread - angular shot dispersion
    """
    if standard_deviation == 0 and spread == 0: #all shots hit 1 cell
        return ([0] + [1 if bounds[i-1] < 0 <= bounds[i] else 0
                       for i in range(1, len(bounds))] + [0])
    elif standard_deviation == 0: #uniform distribution
        return ([min(1, max(0, (bounds[0] / spread + 1) / 2))]
                + [(min(1, max(0, (bounds[i] / spread + 1) / 2))
                    - min(1, max(0, (bounds[i-1] / spread + 1) / 2)))
                    for i in range(1, len(bounds))]
                + [1 - min(1, max(0, (bounds[-1] / spread + 1) / 2))])
    elif spread == 0: #normal distribution
        cdf = statistics.NormalDist(0, standard_deviation).cdf
        return ([cdf(bounds[0])]
                + [cdf(bounds[i]) - cdf(bounds[i-1]) for i in
                    range(1, len(bounds))]
                + [1 - cdf(bounds[-1])])
    #convolved normal (Bhattacharjee) distribution
    dispersion = standard_deviation, spread
    return ([hit_probability_within(bounds[0], *dispersion)]
            + [hit_probability_within(bounds[i], *dispersion)
                - hit_probability_within(bounds[i-1], *dispersion)
                for i in range(1, len(bounds))]
            + [1 - hit_probability_within(bounds[-1], *dispersion)])
Test
Code
bounds = [-3, -2, -1, 0, 1, 2, 3]
print(row_hit_probabilities(bounds, 0, 0))
print(row_hit_probabilities(bounds, 0, 1))
print(row_hit_probabilities(bounds, 1, 0))
Result
[0, 0, 0, 1, 0, 0, 0, 0]
[0, 0, 0, 0.5, 0.5, 0, 0, 0]
[0.0013498980316301035, 0.021400233916549105, 0.13590512198327787, 0.3413447460685429, 0.3413447460685429, 0.13590512198327787, 0.021400233916549105, 0.0013498980316301035]
« Last Edit: November 29, 2022, 09:06:24 PM by Liral »
Logged

CapnHector

  • Admiral
  • *****
  • Posts: 1056
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #193 on: November 29, 2022, 11:46:58 PM »

Looking good, Liral. Do you have the hit probability function translated yet? It is this:

Code
G <- function(y) return(y*pnorm(y) + dnorm(y))
#a is the SD of the normal distribution and b is the parameter of the uniform distribution
hit_probability_coord_lessthan_x <- function(z, a, b) return(a/2/b*(G(z/a+b/a)-G(z/a-b/a)))

Another thing that is quite ready to be translated is the shot time sequence function. It takes in the chargeup and chargedown times, burst delay, burst size, ammo capacity (-1 for unlimited), ammo regen rate, reload size, travel time and weapon type, and outputs a discrete series of hits (in the case of a beam, ticks adjusted for intensity scaling quadratically during chargeup and chargedown). This is very straightforward, i.e. you compute literally at which timepoints shots hit (first chargeup; then during a burst the next shot hits at that point + burstdelay; then after the burst the next shot comes after chargedown and chargeup, while you keep track of reloading at the same time; then add traveltime and aggregate how many shots hit during each second). This should be another module of its own, since it is completely independent of the combat calculation.

Spoiler
Code

#times in seconds, ammoregen is in ammo / second
#assumes the constants UNLIMITED, GUN, BEAM, global_minimum_time, time_limit, time_interval and beam_tick are defined elsewhere
hits <- function(chargeup, chargedown, burstsize, burstdelay, ammo=UNLIMITED, ammoregen=0, reloadsize=0, traveltime=0, mode=GUN){
  #specify sane minimum delays, since the game enforces weapons can only fire once every 0.05 sec
  #for beams, refiring delay is given by burstdelay, for guns it is burstdelay in case burstdelay is > 0 (==0 is shotgun) and chargedown
  if(burstdelay > 0 | mode == BEAM) burstdelay <- max(burstdelay, global_minimum_time)
  if(mode == GUN) chargedown <- max(chargedown, global_minimum_time)
  #this vector will store all the hit time coordinates
  #current time
  #insert a very small fraction here to make time round correctly
  time <- 0.001
  #maximum ammo count is ammo given at start
  maxammo <- ammo
  #this is used to do ammo regeneration, 0 = not regenerating ammo, 1 = regenerating ammo
  regeneratingammo <- 0
  ammoregentimecoordinate <- 0
  ammoregenerated <- 0
 
  #we are firing a gun
  if (mode == GUN) {
    Hits <- vector(mode="double", length = 0)
    while(time < time_limit){
      time <- time + chargeup
      if(time - ammoregentimecoordinate > 1/ammoregen){
        ammoregenerated <- ammoregenerated + floor((time - ammoregentimecoordinate)/(1/ammoregen))
        ammoregentimecoordinate <- ammoregentimecoordinate + 1/ammoregen*floor((time - ammoregentimecoordinate)/(1/ammoregen))
        if(ammoregenerated >= reloadsize){
          ammo <- ammo+ ammoregenerated
          ammoregenerated <- 0
        }
        if(ammo >= maxammo){
          ammo <- maxammo
          regeneratingammo <- 0
        }
      }
     
      if (burstdelay == 0) {
        for (i in 1:burstsize) {
          if (ammo != 0){
            Hits <- c(Hits, time + traveltime)
            ammo <- ammo - 1
            if (regeneratingammo == 0) {
              ammoregentimecoordinate <- time
              regeneratingammo <- 1
            }
          }
        }
      }
      if (burstdelay > 0) {
        for (i in 1:burstsize) {
          if (ammo != 0){
            Hits <- c(Hits, time + traveltime)
            time <- time + burstdelay
            ammo <- ammo -1
            if (regeneratingammo == 0) {
              ammoregentimecoordinate <- time
              regeneratingammo <- 1
            }
            if(time - ammoregentimecoordinate > 1/ammoregen){
              ammoregenerated <- ammoregenerated + floor((time - ammoregentimecoordinate)/(1/ammoregen))
              ammoregentimecoordinate <- ammoregentimecoordinate + 1/ammoregen*floor((time - ammoregentimecoordinate)/(1/ammoregen))
              if(ammoregenerated >= reloadsize){
                ammo <- ammo+ ammoregenerated
                ammoregenerated <- 0
              }
              if(ammo >= maxammo){
                ammo <- maxammo
                regeneratingammo <- 0
              }
            }
           
          }
        }
      }
      time <- time+chargedown
      if(time - ammoregentimecoordinate > 1/ammoregen){
        ammoregenerated <- ammoregenerated + floor((time - ammoregentimecoordinate)/(1/ammoregen))
        ammoregentimecoordinate <- ammoregentimecoordinate + 1/ammoregen*floor((time - ammoregentimecoordinate)/(1/ammoregen))
        if(ammoregenerated >= reloadsize){
          ammo <- ammo+ ammoregenerated
          ammoregenerated <- 0
        }
        if(ammo >= maxammo){
          ammo <- maxammo
          regeneratingammo <- 0
        }
      }
    }
    timeseries <- vector(mode="integer", length = time_limit/time_interval)
    timeseries[1] <- length(Hits[Hits >= 0 & Hits <= 1*time_interval])
    for (i in 2:time_limit/time_interval) timeseries[i] <- length(Hits[Hits > (i-1)*time_interval & Hits <= i*time_interval])
    return(timeseries)
  }
  #we are firing a beam
  if (mode == BEAM) {
    chargeup_ticks <- chargeup/beam_tick
    chargedown_ticks <- chargedown/beam_tick
    burst_ticks <- burstsize/beam_tick
    #for a beam we will instead use a matrix to store timepoint and beam intensity at timepoint
    beam_matrix <- matrix(nrow=0,ncol=2)
    #burst size 0 <- the beam never stops firing
    if(burstsize == 0){
      #beam intensity scales quadratically during chargeup, same as in the burst case below
      for (i in 1:chargeup_ticks) {
        beam_matrix <- rbind(beam_matrix,c(time, (i*beam_tick)^2))
        time <- time+beam_tick
      }
      while ( time < time_limit) {
        beam_matrix <- rbind(beam_matrix,c(time, 1))
        time <- time+beam_tick
      }
    } else {
      while (time < time_limit) {
        if (ammo != 0){
          ammo <- ammo - 1
          if (chargeup_ticks > 0){
            for (i in 1:chargeup_ticks) {
              beam_matrix <- rbind(beam_matrix,c(time, (i*beam_tick)^2))
              time <- time+beam_tick
              if (regeneratingammo == 0) {
                ammoregentimecoordinate <- time
                regeneratingammo <- 1
              }
              if(time - ammoregentimecoordinate > 1/ammoregen){
                ammoregenerated <- ammoregenerated + floor((time - ammoregentimecoordinate)/(1/ammoregen))
                ammoregentimecoordinate <- ammoregentimecoordinate + 1/ammoregen*floor((time - ammoregentimecoordinate)/(1/ammoregen))
                if(ammoregenerated >= reloadsize){
                  ammo <- ammo+ ammoregenerated
                  ammoregenerated <- 0
                }
                if(ammo >= maxammo){
                  ammo <- maxammo
                  regeneratingammo <- 0
                }
              }
            }
          }
          for (i in 1:burst_ticks){
            beam_matrix <- rbind(beam_matrix,c(time, 1))
            time <- time+beam_tick
            if(time - ammoregentimecoordinate > 1/ammoregen){
              ammoregenerated <- ammoregenerated + floor((time - ammoregentimecoordinate)/(1/ammoregen))
              ammoregentimecoordinate <- ammoregentimecoordinate + 1/ammoregen*floor((time - ammoregentimecoordinate)/(1/ammoregen))
              if(ammoregenerated >= reloadsize){
                ammo <- ammo+ ammoregenerated
                ammoregenerated <- 0
              }
              if(ammo >= maxammo){
                ammo <- maxammo
                regeneratingammo <- 0
              }
            }
          }
         
          if (chargedown_ticks > 0){
            for (i in 1:chargedown_ticks){
              beam_matrix <- rbind(beam_matrix,c(time, ((chargedown_ticks-i)*beam_tick)^2))
              time <- time+beam_tick
            }
            if(time - ammoregentimecoordinate > 1/ammoregen){
              ammoregenerated <- ammoregenerated + floor((time - ammoregentimecoordinate)/(1/ammoregen))
              ammoregentimecoordinate <- ammoregentimecoordinate + 1/ammoregen*floor((time - ammoregentimecoordinate)/(1/ammoregen))
              if(ammoregenerated >= reloadsize){
                ammo <- ammo+ ammoregenerated
                ammoregenerated <- 0
              }
              if(ammo >= maxammo){
                ammo <- maxammo
                regeneratingammo <- 0
              }
            }
          }
          time <- time + burstdelay
          if(time - ammoregentimecoordinate > 1/ammoregen){
            ammoregenerated <- ammoregenerated + floor((time - ammoregentimecoordinate)/(1/ammoregen))
            ammoregentimecoordinate <- ammoregentimecoordinate + 1/ammoregen*floor((time - ammoregentimecoordinate)/(1/ammoregen))
            if(ammoregenerated >= reloadsize){
              ammo <- ammo+ ammoregenerated
              ammoregenerated <- 0
            }
            if(ammo >= maxammo){
              ammo <- maxammo
              regeneratingammo <- 0
            }
          }
        }
        time <- time + global_minimum_time
        if(time - ammoregentimecoordinate > 1/ammoregen){
          ammoregenerated <- ammoregenerated + floor((time - ammoregentimecoordinate)/(1/ammoregen))
          ammoregentimecoordinate <- ammoregentimecoordinate + 1/ammoregen*floor((time - ammoregentimecoordinate)/(1/ammoregen))
          if(ammoregenerated >= reloadsize){
            ammo <- ammo+ ammoregenerated
            ammoregenerated <- 0
          }
          if(ammo >= maxammo){
            ammo <- maxammo
            regeneratingammo <- 0
          }
        }
      }
    }
    timeseries <- vector(mode="double", length = time_limit/time_interval)
    for (i in 1:length(timeseries)) {
      timeseries[i] <- sum(beam_matrix[beam_matrix[,1] < i & beam_matrix[,1] > i-1,2])
    }
    return(timeseries)
  }
}
[close]

Apologies in advance for just copy-pasting the ammo reload logic all over the place. It needs to be checked whenever we increment time, and it should probably be its own function, but it has so many parameters to pass that I thought it was easier like this.
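For reference, a sketch of what that helper could look like (not part of the attached script; the bookkeeping variables would live in a list that the caller reassigns after every call, since R functions don't modify the caller's variables in place):

Code

#mirrors the repeated block above: credit regenerated ammo in whole 1/ammoregen ticks,
#add it to the magazine once a full reload-size batch has accumulated, and cap at maxammo
regen_ammo <- function(state, time, maxammo, ammoregen, reloadsize) {
  if (time - state$ammoregentimecoordinate > 1/ammoregen) {
    ticks <- floor((time - state$ammoregentimecoordinate)/(1/ammoregen))
    state$ammoregenerated <- state$ammoregenerated + ticks
    state$ammoregentimecoordinate <- state$ammoregentimecoordinate + ticks/ammoregen
    if (state$ammoregenerated >= reloadsize) {
      state$ammo <- state$ammo + state$ammoregenerated
      state$ammoregenerated <- 0
    }
    if (state$ammo >= maxammo) {
      state$ammo <- maxammo
      state$regeneratingammo <- 0
    }
  }
  return(state)
}
#usage at each time increment: state <- regen_ammo(state, time, maxammo, ammoregen, reloadsize)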





In weapon testing news I figured the next step is trying to look at the accuracy of the model with plots that make it easier on the eye.

To accomplish that, we do the following: first simulate the model, to get the point at which the model kills the ship. Then simulate 100 enemy ships up to that point, letting hull become negative if it should (to preserve the statistical properties), and plot the median hull hp (because the distribution is skewed) minus the model prediction (I've also plotted all the separate simulated ships). This lets us see just how much error there is in the model at worst.
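As a formula (per the description above, with hull_i(s) the hull of simulated ship i after shot s, hull_model(s) the model's prediction, and hull_0 the starting hull):

\text{residual}(s) = \frac{\operatorname{median}_i \, \text{hull}_i(s) - \text{hull}_\text{model}(s)}{\text{hull}_0} \times 100\%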

Here is an example of a "hull residual" (i.e. actual hull - predicted hull) plot, with green being the median and yellow being the model, of course. This is in units of % of starting hull, with a 300-damage energy weapon vs. a Dominator. It does not seem too bad, but I think this needs to be tested systematically when I next have time. I would say that if we end up with a 10% error somewhere, then it's time to try the error correction. Let me know if you have any particular cases you want to see.




Code if anyone else wants to give it a whirl:

Spoiler
Code
#0. define ship, distribution matrix X, and probability matrix.
#dominator
ship <- c(14000, 500, 10000, 1500, 220, 12, 440, 1.0, 200)
#damage
d <- 200
h <- 200
#constants
omega <- 0.05
kappa <- 0.15
a_0 <- ship[4]
n <- ship[6]
#X
x <- matrix(c(0,1/30,1/30,1/30,0,1/30,1/15,1/15,1/15,1/30,1/30,1/15,1/15,1/15,1/30,1/30,1/15,1/15,1/15,1/30,0,1/30,1/30,1/30,0),5,5)
#draw a random cell from a custom cumulative distribution function
custrandom <- function(dist){
  return(which(dist > runif(1,0,1))[1])
}
range <- 1000
#basic calculations
#this computes a weighted average but can be defined using linear algebra due to what we're working with
poolarmor <- function(matrix, index){
  pooledarmor <- 0
  for(i in 1:5) pooledarmor <- pooledarmor + 15* (x[i,] %*% matrix[,index+i-3])
  return(pooledarmor[[1]])
}
#compute armor damage reduction factor
chi <- function(matrix, index) return(max(kappa,h/(h+max(a_0*omega,poolarmor(matrix,index)))))
#deal damage, the linear algebra way
#for some ungodly reason R insists on transposing the vector
#i refers to row of armor cell, j to column, r to counterfactual vector (star)
damage <- function(damagematrix,i,j,r) return((d * x[i,] %*% damagematrix[(j-2):(j+2),r])[[1]])

create_b_star <- function(armormatrix){
  B_star_vector <- vector(mode="double",length=(n+8))
  for (i in 1:n) {
    B_star_vector[4+i] <- chi(armormatrix,4+i)
  }
  B_star <- diag(B_star_vector)
  return(B_star)
}


star_matrix <- function(matrixA,matrixB,r) {
  A_star <- matrix(0,nrow=length(matrixA[,1]),ncol=length(matrixA[1,]))
  #j runs over the armor columns, i over the 5 rows
  for (j in 5:(length(matrixA[1,])-5)){
    for (i in 1:(length(matrixA[,1]))){
      A_star[i,j] <- max(0, matrixA[i,j] - damage(matrixB, i, j, r))
    }
  }

  return(A_star)
}

star_matrix_wonky_hull_dmg <- function(matrixA,matrixB,r) {

  A_star <- matrix(0,nrow=length(matrixA[,1]),ncol=length(matrixA[1,]))
  hulldamage <- 0
  for (j in 5:(length(matrixB[1,])-2)){
    for (i in 1:(length(matrixA[,1]))){
      hulldamage <- hulldamage + max(0, damage(matrixB, i, j, r) - matrixA[i,j])
      A_star[i,j] <- max(0, matrixA[i,j] - damage(matrixB, i, j, r))
    }
  }
  A_star[1,1] <- hulldamage
  return(A_star)
}


#sd
serror <- 50
#spread
spread <- 10

#how much is the visual arc of the ship in rad?
shipangle <- ship[5]/(2* pi *range)
#how much is the visual arc of a single cell of armor in rad?
cellangle <- shipangle/ship[6]

#distribution
anglerangevector <- 0
anglerangevector <- vector(mode="double", length = ship[6]+1)
anglerangevector[1] <- -shipangle/2
for (i in 1:(length(anglerangevector)-1)) anglerangevector[i+1] <- anglerangevector[i]+cellangle
#now convert it to pixels

anglerangevector <- anglerangevector*2*pi*range

# this function generates the shot distribution (a bhattacharjee distribution for the
#non-trivial case)

hit_distribution <- function(upperbounds, standard_deviation, spread){
  vector <- vector(mode="double", length = length(upperbounds)+1)
  if (standard_deviation == 0){
    if (spread == 0){
      vector[1] <- 0
      for (j in 2:(length(upperbounds))) {
        #if both spread and standard deviation are 0 then all shots hit 1 cell. this should be so even if
        #the ship has an even number of cells to prevent ships with even no. cells appearing tougher which is not
        #the case in the real game most likely
        if ((upperbounds[j] >= 0) & (upperbounds[j-1] < 0)) vector[j] <- 1
      }
      #return part of a box
    } else {
      vector[1] <- min(1,max(0,(upperbounds[1]+spread))/(2*spread))
      for (j in 2:(length(upperbounds))) vector[j] <- min(1,max(0,(upperbounds[j]+spread))/(2*spread)) - min(1,max(0,(upperbounds[j-1]+spread))/(2*spread))
      vector[length(upperbounds)+1] <- 1-min(1,max(0,(upperbounds[length(upperbounds)]+spread))/(2*spread))
    }
  } else {
    if (spread != 0){
      vector[1] <- hit_probability_coord_lessthan_x(upperbounds[1], standard_deviation, spread)
      for (j in 2:(length(upperbounds))) vector[j] <- (hit_probability_coord_lessthan_x(upperbounds[j], standard_deviation, spread)-hit_probability_coord_lessthan_x(upperbounds[j-1], standard_deviation, spread))
      vector[length(upperbounds)+1] <- (1-hit_probability_coord_lessthan_x(upperbounds[length(upperbounds)], standard_deviation, spread))
    } else {
      #if spread is 0 but standard deviation is not 0 we have a normal distribution
      vector[1] <- pnorm(upperbounds[1],0,standard_deviation)
      for (j in 2:(length(upperbounds))) vector[j] <- pnorm(upperbounds[j],0,standard_deviation) - pnorm(upperbounds[j-1], mean=0, sd=standard_deviation)
      vector[length(upperbounds)+1] <- 1-pnorm(upperbounds[length(upperbounds)], mean=0, sd=standard_deviation)
    }
   
  }
  return(vector)
}

G <- function(y) return(y*pnorm(y) + dnorm(y))
#a is the SD of the normal distribution and b is the parameter of the uniform distribution
hit_probability_coord_lessthan_x <- function(z, a, b) return(a/2/b*(G(z/a+b/a)-G(z/a-b/a)))

p <- hit_distribution(anglerangevector,serror,spread)
A <- matrix(ship[4]/15,5,ship[6]+4)
#pad A
A <- cbind(A,c(0,0,0,0,0))
A <- cbind(A,c(0,0,0,0,0))
A <- cbind(c(0,0,0,0,0),A)
A <- cbind(c(0,0,0,0,0),A)
B_star <- create_b_star(A)

p_cum <- vector(mode = "double", length=length(p))
p_cum[1] <- p[1]
for (i in 2:length(p)) p_cum[i] <- sum(p[1:i])

#now the simple model
A <- matrix(ship[4]/15,5,ship[6]+4)
#pad A
A <- cbind(A,c(0,0,0,0,0))
A <- cbind(A,c(0,0,0,0,0))
A <- cbind(c(0,0,0,0,0),A)
A <- cbind(c(0,0,0,0,0),A)
B_star <- create_b_star(A)
hullhp <- ship[1]
shot <- 0

matrixlist <- list()
modelresults <- data.frame(hullhp=double(),shot=integer(),series=integer())

while (hullhp > 0){
  shot <- shot+1
  #cell r corresponds to column r+4 of the padded armor matrix (2 padding + 2 margin columns on the left)
  for(r in 1:n) matrixlist[[r]] <-star_matrix_wonky_hull_dmg(A,B_star,r+4)
  hulldamage <- 0
  #probability list has list item 1 as missing, so cell r uses p[r+1] (consistent with the armor update below)
  for(r in 1:n) hulldamage <- hulldamage + matrixlist[[r]][1,1]*p[r+1]
  hullhp <- hullhp - hulldamage
  A[1,1] <- 0
  A <- A*(p[1]+p[length(p)])
  for(r in 1:n) A <- A + matrixlist[[r]]*p[r+1]
  for(i in 1:length(A[1,]))for(j in 1:length(A[,1])) A[j,i] <- max(A[j,i],0)
  B_star <- create_b_star(A)
  modelresults <-rbind(modelresults, c(hullhp, shot, i))
}
shotlimit <- shot

colnames(modelresults) <- c("hullhp","shot","series")
modelresults$series <- 999

generatetestdata <-1
if(generatetestdata == 1){
#generate simulated comparison data from repeated simulation runs
testresults <- data.frame(hullhp=double(),shot=integer(),series=integer())
hullhp <- ship[1]
for (i in 1:1000){
A <- matrix(ship[4]/15,5,ship[6]+4)
#pad A
A <- cbind(A,c(0,0,0,0,0))
A <- cbind(A,c(0,0,0,0,0))
A <- cbind(c(0,0,0,0,0),A)
A <- cbind(c(0,0,0,0,0),A)
B_star <- create_b_star(A)
hullhp <- ship[1]
shot <- 0
while (shot <= shotlimit){
  shot <- shot+1
  #custrandom returns an index into p; cell r has p index r+1 and armor column r+4, hence the +3
  A<-star_matrix_wonky_hull_dmg(A,B_star,custrandom(p_cum)+3)
  hullhp <- hullhp - A[1,1]
  A[1,1]<-0
  B_star <- create_b_star(A)
  testresults <-rbind(testresults, c(hullhp, shot, i))
}
}
colnames(testresults) <- c("hullhp","shot","series")
}
library(ggplot2)



testresiduals <- testresults

testresiduals <- rbind(testresiduals, cbind(aggregate(hullhp ~ shot, testresiduals, FUN=median), series=500))
testresiduals <- rbind(testresiduals, modelresults)
testresiduals$hullhp <- testresiduals$hullhp/ship[1]
for (i in 1:shotlimit) {
  testresiduals[which(testresiduals$shot==i),1] <- testresiduals[which(testresiduals$shot==i),1] - modelresults[which(modelresults$shot==i),][[1]]/ship[[1]]
}
ggplot(testresiduals,aes(x=shot,y=hullhp*100,group=series,col=series,linewidth=floor(series/500)))+
  geom_line()+
  scale_colour_viridis_c()+
  scale_linewidth_continuous(range=c(0.1,1))+
  labs(x="Shot",y="Hull hp residual %")+
  theme(legend.position="none")

[close]

E2: alright, hitting some problems here; this is a 1000-damage energy weapon:



Now I'd say the error is still tolerable, but it's probably a good idea to test whether the suggested error correction actually works or not. I also really wonder about the source of this error. Shouldn't it be the case that E(x/(x+a)) is closer to E(x/x) = 1 when x is large relative to a? It does not appear to be a random artifact, since the magnitude of the error varies but it is always in the same direction when running this over and over.

If all else fails, there is also a really crude technique to compensate: collect data from a very large number of "hull residual plots" and fit some kind of equation with shot spread, shot damage, hull hp and armor as predictors, to get a simple correction for the error. The problem is that I think this approach falls apart as soon as there are multiple weapons.
« Last Edit: November 30, 2022, 12:22:10 AM by CapnHector »
Logged
5 ships vs 5 Ordos: Executor · Invictus · Paragon · Astral · Legion · Onslaught · Odyssey | Video Library | Hiruma Kai's Challenge

intrinsic_parity

  • Admiral
  • *****
  • Posts: 3071
    • View Profile
Re: Optimizing the Conquest: a Mathematical Model of Space Combat
« Reply #194 on: November 30, 2022, 09:12:09 AM »

To be honest, I don't really see how anything in your math corrects the error. At the end of the day, the source of the error is using the expected value of the armor from the previous iteration to compute the next iteration (implicitly assuming E[f(x)] = f(E[x])), and it seems as though you are still doing that, no? As long as you are plugging in an EV of armor as the prior armor value for the next iteration, you have not addressed the source of error.

Fundamentally, the armor at the previous iteration is a random variable, and we are substituting a deterministic variable (its expected value) in its place. Considering that there are infinitely many possible distributions with the same EV, it should be obvious why this is never going to be perfectly correct for a non-linear function like ours.
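For a toy illustration of that point (made-up numbers, nothing to do with any particular weapon): with h = 500 and armor equally likely to be 0 or 1000, the expected reduction factor and the reduction factor at the expected armor already differ substantially.

Code

h <- 500
a <- c(0, 1000)          #two equally likely armor values
p <- c(0.5, 0.5)
sum(p * h / (h + a))     #E[h/(h+A)] = 0.667
h / (h + sum(p * a))     #h/(h+E[A]) = 0.5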

In terms of why larger shot values would cause more error: intuitively, it would make sense that large shot values cause a wider range of possible armor values after a single shot, meaning the distribution of armor values is 'larger' and perhaps more skewed, so it would seem reasonable that the error from not accounting for that distribution would be larger. But that's a very vague, hand-wavy explanation.


Also, you are pretty loose with notation. It's really hard to tell what is a vector and what is an array, and what the product between them means (is that circle-dot thing supposed to be some kind of element-wise multiplication of two vectors? Is the x a cross product, or a scalar multiplication, or something?). Also, I think you use capital X for two different matrices (the 1/15 array and also some array of chi values).

When I've written out lots of linear algebra type stuff, the convention has always been to use lower case symbols for scalars, bold lower case symbols for vectors (sometimes with a little arrow over them too), and capital non-bold symbols for matrices. I would highly recommend you adopt that convention for the sake of clarity.
Logged