Fractal Softworks Forum


Author Topic: functional ship class definitions  (Read 11733 times)

Goumindong

  • Admiral
  • Posts: 1886
Re: functional ship class definitions
« Reply #75 on: April 24, 2020, 07:25:11 PM »

Quote
The signal is an active and directional signal. It's not blackbody radiation like thermal radiation, which is emitted from a surface in all directions; it's a carefully designed radio wave pointed right at Earth specifically to send information to us. All of the energy in the signal is pointed at or near Earth. Thermal radiation goes out in all directions, so the same total power radiated thermally would be spread over an entire sphere rather than pointed into one small cone.

The total amount of energy your IR sensor will pick up depends on a lot more than the temperature, though; it depends on the distances and areas of the radiating and detecting objects. The total energy being radiated by a band of Jupiter at -315 °F (about 80 K) will be many times more than the energy radiated by a 5 m object at a higher temperature that is also much, much further away.

Here is a blog post by someone who worked out a crazy cryogenically cooled stealth ship that can hide behind the thermal background radiation: http://toughsf.blogspot.com/2018/04/permanent-and-perfect-stealth-in-space.html He goes through a lot of the calculations to figure out whether a spacecraft would be observable by a perfectly cooled IR sensor. By my math, following the methods he used, the thermal signature of Voyager 1 is about 5 orders of magnitude dimmer than the background thermal radiation. I used its main dish size as the emitting area, Spitzer's main mirror area for the detection optics, and a temperature of ~200 K (about -100 °F), which is what Google said was Voyager's temperature. I used a blackbody calculator to work out the radiance in the 5-40 um band, which is the range for Spitzer's Infrared Spectrograph (the other camera actually covers a much smaller set of bands), and that range also encompasses most of the wavelengths the blackbody radiation falls in anyway, so you couldn't improve much by choosing a wider band.

As to the exposure time, it will be very small for unknown objects. This is a scenario where you are scanning the sky looking for anomalies, not training every telescope you have on one point for a week. You need a somewhat reasonable field of view to scan the entire sky in a reasonable amount of time, which will increase the background noise and also limit your resolution. For reference, Spitzer would take about a week to cover the entire sky with 1-second exposures. Most of the deep-space images you see from Spitzer are actually multiple hour-plus exposures stacked together. The image of the black hole taken by the radio telescopes (telescopes plural, because the data was actually taken by a group of massive radio dishes spread across the entire globe, creating a virtual radio telescope the size of Earth) is a stacked composite of many images taken over the course of two months. The idea that you can get that performance without knowing where the object you're trying to look at is, while also scanning the entire sky, is just not reasonable. Comparing apples and oranges.
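To put numbers on the earlier point that detected power depends on emitting area and distance, not just temperature, here is a rough sketch. Every figure below (band dimensions, temperatures, distances) is an assumption for illustration, not a value from the thread:

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
AU = 1.496e11     # m

def received_flux(T, area, distance):
    """Flux (W/m^2) at `distance` from a Lambertian emitter of
    temperature T (K) and radiating area `area` (m^2)."""
    radiated = SIGMA * T**4 * area   # total power leaving the surface, W
    intensity = radiated / math.pi   # W/sr toward the observer (Lambertian)
    return intensity / distance**2

# A band of Jupiter: very cold but enormous (assume a 10,000 km tall strip
# around a ~440,000 km circumference, seen from ~4 AU).
jupiter_band = received_flux(T=80, area=1e7 * 4.4e8, distance=4 * AU)

# A warm 5 m object much further out, at 40 AU.
small_object = received_flux(T=300, area=math.pi * 2.5**2, distance=40 * AU)

print(jupiter_band / small_object)  # the cold but huge, nearby band dominates
```

Despite being far colder, the planetary band wins by many orders of magnitude because area and distance enter the flux directly.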

Yes, if you have super advanced technology that does not exist, and also have unobtainium in order to achieve your initial velocity, you can be stealthy to a single current-technology telescope... That post is at projectrho already. It's not that convincing.

The detection assumptions are a bit ridiculous. The design is a 10 km wide spaceship that takes multiple days to traverse orbits in the solar system and requires that its exhaust never point anywhere near any telescope, and it can remain undetected at 1 million km only if an IR telescope looks at it for less than a second. If the telescopes have more time... which they do, because it's a hilariously huge object traveling at low velocity... then that distance increases, significantly.

Let's say you wanted to launch that from Mars to Earth. If there were no telescopes beyond Mars looking at Mars from the other side, this would still be the brightest object in the night sky for the majority of the transit to someone looking up from the surface of Mars. (The expansion nozzle cools the exhaust, but this only matters for observers that are not able to see into the nozzle, in the same way that it's very easy to see the Sun even though it's not a million degrees outside right now.) It will take them about 3 minutes to let Earth know it's coming.
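The "3 minutes" above is just the light-travel time of the warning signal. A one-liner check, assuming a favorable Mars-Earth closest approach of about 0.38 AU:

```python
C = 2.998e8   # speed of light, m/s
AU = 1.496e11 # m

# Mars-Earth separation at a close opposition (assumed value).
closest_approach = 0.38 * AU

delay_min = closest_approach / C / 60
print(delay_min)  # ~3.2 minutes of signal delay
```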
Logged

intrinsic_parity

  • Admiral
  • Posts: 3071
Re: functional ship class definitions
« Reply #76 on: April 24, 2020, 11:28:33 PM »

Quote
Yes, if you have super advanced technology that does not exist, and also have unobtainium in order to achieve your initial velocity, you can be stealthy to a single current-technology telescope... That post is at projectrho already. It's not that convincing.

The detection assumptions are a bit ridiculous. The design is a 10 km wide spaceship that takes multiple days to traverse orbits in the solar system and requires that its exhaust never point anywhere near any telescope, and it can remain undetected at 1 million km only if an IR telescope looks at it for less than a second. If the telescopes have more time... which they do, because it's a hilariously huge object traveling at low velocity... then that distance increases, significantly.

Let's say you wanted to launch that from Mars to Earth. If there were no telescopes beyond Mars looking at Mars from the other side, this would still be the brightest object in the night sky for the majority of the transit to someone looking up from the surface of Mars. (The expansion nozzle cools the exhaust, but this only matters for observers that are not able to see into the nozzle, in the same way that it's very easy to see the Sun even though it's not a million degrees outside right now.) It will take them about 3 minutes to let Earth know it's coming.
First of all, I don't know why you're trying to pick apart this blog post when the only reason I even cited it was as a reference for how I was calculating that Voyager 1 is nearly impossible to see by thermal signature with any existing or near-future tech.

But I'll bite because this stuff is fun.
I'm pretty sure the ship he proposed in the blog post is 160 meters long and 10 m wide, with a ~70 m^2 cross section, so not 10 km. I think he calculates the detection distances for some huge cross sections a few times to demonstrate how stealthy it would be, but those weren't actually his design. In its 'high heat' mode with radiators extended, the detection range was ~1 million km, but in the hydrogen-cooled mode the detection range was ~10,000 km (less than geostationary orbit), and in the helium-cooled mode it was ~100 km (literally right next to you).

Reading the blog a bit more, he actually doesn't evaluate exposure time at all, and I think that's because it doesn't matter in this case. The way CCDs work is that photons of a certain wavelength hit a bin and generate some charge, and after the exposure time the charge is read out and used to calculate how much energy was detected. The longer the exposure time, the more photons from the target are collected and the larger the charge is, meaning that small errors due to electrical noise in the readout circuit are minimized. In other words, long exposure times let you overcome sensor noise.

What is happening with the stealth ship is that it is hiding behind environmental noise, not sensor noise. No amount of exposure time will help with that: if you wait longer, more photons from the ship will hit your sensor, but more photons from the environment will also hit your sensor, so the signal-to-noise ratio will not improve. The ship is meant to be impossible to distinguish from the background, meaning it is impossible to track even with a perfect sensor. I think it is possible to stack images to reduce environmental noise, but that would require taking lots of long-exposure images and doing a bunch of post-processing, and the spacecraft would likely move between images, which I'm pretty sure would mess it up.
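A toy Monte Carlo makes the sensor-noise vs. environmental-noise distinction concrete (all counts below are made up): stacking frames averages away per-frame readout noise, but a fixed background pattern is identical in every frame and does not shrink, so a source dimmer than the pattern's spatial scatter stays buried.

```python
import random
import statistics

random.seed(0)

N_PIX = 10_000
N_FRAMES = 100
SENSOR_SIGMA = 5.0    # per-frame readout noise (counts)
PATTERN_SIGMA = 10.0  # spatial scatter of the fixed sky background
SOURCE = 3.0          # dim source in pixel 0, below the pattern scatter

# Fixed background pattern: identical in every frame.
pattern = [random.gauss(100.0, PATTERN_SIGMA) for _ in range(N_PIX)]
pattern[0] += SOURCE  # the stealth ship hides in pixel 0

# Stack frames: per-frame sensor noise averages out, the pattern does not.
stacked = [0.0] * N_PIX
for _ in range(N_FRAMES):
    for i in range(N_PIX):
        stacked[i] += pattern[i] + random.gauss(0.0, SENSOR_SIGMA)
stacked = [s / N_FRAMES for s in stacked]

scatter = statistics.pstdev(stacked)
# Sensor noise shrank to SENSOR_SIGMA/sqrt(N_FRAMES) = 0.5 counts, but the
# pixel-to-pixel scatter is still ~PATTERN_SIGMA, so a 3-count source is
# indistinguishable from an ordinary background fluctuation.
print(scatter)
```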

I would agree the thrusters are the sketchiest part of this proposal. He suggests some sort of pulsed thrust with flaps that cover the nozzle during combustion and then open to let the gas out, so that the hot gas is never visible. I'm not 100% convinced the timing would work, because there might still be hot gas in the throat of the nozzle when the first of the cool gas reaches the end, but maybe it's possible to have short enough bursts. If that didn't work, anyone who could see down the nozzle would see you, but that's still not a crazy large portion of the sky. Since the nozzle has to be super long to allow for overexpansion, there might be a 30-degree cone behind you that could see down it. That seems like a workable constraint for an initial burn: you just need to make sure there are no observers in that cone of the sky (about 7% of the entire sky). The 30-degree estimate could be off, but you would need to do a ton of analysis to figure out the exact figure. You could also just make the nozzle longer, which would shrink the cone behind you that could see you.
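The "30-degree cone is about 7% of the sky" figure checks out if you read it as a 30° half-angle cone; the solid angle of a spherical cap gives:

```python
import math

def cap_fraction(half_angle_deg):
    """Fraction of the full sky (4*pi sr) subtended by a cone
    of the given half-angle."""
    return (1 - math.cos(math.radians(half_angle_deg))) / 2

print(cap_fraction(30))  # ~0.067, i.e. about 7% of the sky
```

For comparison, a 15° half-angle cone (a 30° full opening angle) would only be about 1.7% of the sky, so the half-angle reading is the one that matches the post.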

To be honest, he doesn't give any dimensions for the proposed nozzle, which makes me a bit skeptical. It might need to be really large to over-expand the gas enough to be that cold, but I don't have a good intuition for that from the one gas dynamics course I took, so I can't really say, and I'm not about to do all the work to figure it out.

If the engine stuff didn't work, you could still do some cool stuff, like fly a 'mothership' into the desired transfer orbit, release the stealth craft onto that orbit without propulsion (or onto a similar orbit with a very small burn), and then the mothership can go somewhere else without suspicion. Then the stealth craft continues to Earth undetected and only fires its engine/does stuff at the last moment.
Logged

Goumindong

  • Admiral
  • Posts: 1886
Re: functional ship class definitions
« Reply #77 on: April 25, 2020, 02:05:26 AM »

Exposure time matters because it's exposure time that lets you sift data and pick things out of the background, and exposure time does indeed increase sensitivity. He even gives an explicit multiple for the signal-to-noise ratio required for detection (10) on top of the rest.

“Typically, telescopes have many hours to days to repeat their observations of a single spot in the sky, which allows for the collection of a huge number of separate images to be compared for an even greater sensitivity. Data on sensor sensitivity is usually given for 10,000 second observation times for this reason. However, detecting fast spaceships travelling at multiple kilometers per second means that telescopes won't have that luxury - for this reason, we will only consider single-second frames.”

But that luxury does exist, and it certainly exists for Voyager, because even very fast objects show only minute changes in position in the sky over 2.7 hours or so (especially if they're traveling towards or away from you), and because these telescopes would be doing a rolling calculation of these things.

His formula also has some dimensionality issues that it's too late for me to figure out. I'm getting distance in meters = scalar * m^2 / steradians^0.5... which you might notice doesn't balance.

Re: 160 m. 160 m is the size of the ship, but the radiator wires are 10 km long, which is what I was referencing. (Though to be fair, in his warm mode he doesn't consider the cross-sectional area of the ship, nor does he explain how he is going to hold the temperature at 50 K without expending coolant. His wires will be absorbing radiation from the sun and so will need to be actively cooled.)

Also worth noting that Voyager is at about 190 K even without its heater on (which was on to ensure that its components didn't break); it's incredibly visible.

Logged

intrinsic_parity

  • Admiral
  • Posts: 3071
Re: functional ship class definitions
« Reply #78 on: April 25, 2020, 12:28:51 PM »

Quote
Exposure time matters because it's exposure time that lets you sift data and pick things out of the background, and exposure time does indeed increase sensitivity. He even gives an explicit multiple for the signal-to-noise ratio required for detection (10) on top of the rest.

“Typically, telescopes have many hours to days to repeat their observations of a single spot in the sky, which allows for the collection of a huge number of separate images to be compared for an even greater sensitivity. Data on sensor sensitivity is usually given for 10,000 second observation times for this reason. However, detecting fast spaceships travelling at multiple kilometers per second means that telescopes won't have that luxury - for this reason, we will only consider single-second frames.”

But that luxury does exist, and it certainly exists for Voyager, because even very fast objects show only minute changes in position in the sky over 2.7 hours or so (especially if they're traveling towards or away from you), and because these telescopes would be doing a rolling calculation of these things.

Quote
"Data on sensor sensitivity is usually given for 10,000 second observation times for this reason."
What is at issue here is not whether the sensor is sensitive enough to detect the photons from the object (which exposure time would improve); it's whether the photons from the object are numerous enough to be distinguishable from the photons coming from the background radiation. That does not depend on the sensor at all. If you look at his analysis, he never once accounts for sensor sensitivity (there are no terms for any sort of sensor noise), and that's because he is more or less assuming the sensor perfectly detects the incoming photons and is just looking at how many photons come from the object vs. from the background in the band the object is emitting in. I typed out my reasoning for why exposure time would not help overcome background noise in the last comment. I could be wrong about how CCDs work, since I have only worked tangentially with them (I mostly work with data from them rather than with the devices themselves), but I'm pretty sure what I said is true.

It's like trying to spot a blue dot against a randomly lit blue background. No matter how much light you collect, if the brightness of the object is within the variation in the brightness of the background, you can't spot it, because you can't distinguish it from the random variations in the background; but if the object is brighter than the background, then you can spot it. TBH, an SNR of 10 seems high to me, but I think that depends on how many false positives you are willing to put up with.

I think if the background radiation varied on a significantly different time scale from the motion of the object, or if the background radiation were constant or perfectly known, you might be able to do some statistical post-processing over a bunch of different long-exposure images to identify the object. I don't know enough about background radiation to know whether that would be possible.

Quote
His formula also has some dimensionality issues that it's too late for me to figure out. I'm getting distance in meters = scalar * m^2 / steradians^0.5... which you might notice doesn't balance.

Steradians are actually dimensionless like radians, so that's just a constant, but I think the mirror area should not be in the equation. It appears he is setting the expression for target emission he wrote down above (which gives emission in watts) equal to the background noise (background radiance * FOV * SNR, which gives W/m^2). I think he should also be multiplying the background radiation by the mirror area to get the total power from background radiation, and then the mirror area would cancel when you solved for distance. Changing that would actually decrease the detection distance, since mirror area was in the numerator and greater than one in his analysis.

Quote
Re: 160 m. 160 m is the size of the ship, but the radiator wires are 10 km long, which is what I was referencing. (Though to be fair, in his warm mode he doesn't consider the cross-sectional area of the ship, nor does he explain how he is going to hold the temperature at 50 K without expending coolant. His wires will be absorbing radiation from the sun and so will need to be actively cooled.)
I think he assumes the radiation from the radiators will dominate the rest of the ship, which seems somewhat reasonable, but I would agree that ignoring solar heating of the wires is questionable. He also doesn't talk at all about whether the radiator wires will be sufficiently thermally conductive to radiate uniformly (which would require basically perfect thermal conductivity), or about the structural integrity of the wires, or how to deploy them. TBH, I would agree that the warm mode would probably not work at all.

Quote
Also worth noting that Voyager is at about 190 K even without its heater on (which was on to ensure that its components didn't break); it's incredibly visible.

I did that math for Voyager at 200 K (which I mentioned in the other comment); it's just really small and really far away. The apparent emission scales with the inverse square of distance, which is 150 AU. Not visible at all: around 5 orders of magnitude below the background radiation. It would be visible if it were near Mars or something, though. Pretty sure it could be 300 K+ at 150 AU and still not be visible, because it is so far away.
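For anyone who wants to reproduce the signal side of this estimate, here is a minimal version. The dish size, temperature, distance, and mirror size are the assumed values mentioned above (Voyager's 3.7 m dish at ~200 K, 150 AU, a Spitzer-class 0.85 m mirror, 5-40 um band); the background comparison is left out.

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K
AU = 1.496e11   # m

def band_radiance(T, lo, hi, steps=5000):
    """Blackbody radiance integrated over wavelengths `lo` to `hi` (m),
    in W/(m^2 sr), via a simple midpoint Riemann sum of Planck's law."""
    dl = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        lam = lo + (i + 0.5) * dl
        spectral = (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * KB * T)) - 1)
        total += spectral * dl
    return total

L = band_radiance(200, 5e-6, 40e-6)         # W/(m^2 sr) in the 5-40 um band
emit_area = math.pi * (3.7 / 2) ** 2        # dish treated as the emitting area
mirror_area = math.pi * (0.85 / 2) ** 2     # detection optics
d = 150 * AU

power = L * emit_area * mirror_area / d**2  # W collected by the mirror
print(power)  # on the order of 1e-25 W: roughly one photon every several hours
```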
Logged

Thaago

  • Global Moderator
  • Admiral
  • Posts: 7173
  • Harpoon Affectionado
Re: functional ship class definitions
« Reply #79 on: April 25, 2020, 01:47:47 PM »

(Just housekeeping, as this has strayed away from the game in topic. Carry on with the discussion!)
Logged

Goumindong

  • Admiral
  • Posts: 1886
Re: functional ship class definitions
« Reply #80 on: April 25, 2020, 02:07:07 PM »

A steradian is not unitless. It's dimensionless, but not in the same way that many scalars are. Keeping track of the per-steradian units is still important (though you can square and root the base unit freely without issue). Dividing by mirror area is clearly wrong, because mirror area has a positive effect on detection distance: big telescopes see more stuff and so should increase the detection distance. It's more likely he needs to square his answer and fix the steradian issue. It's still not exactly clear what is going on.

Edit: it's better to think of a unit of angle as its own unit rather than as dimensionless. You can (and have to) convert between units of angle, and between units of angle and square units of angle. But a unit of angle can correspond to any amount of distance, because you must convert with the radius in order to get an area or volume. So it doesn't have a dimension, but it isn't unitless. We still need to remove it from our distance figure.


Quote
It's like trying to spot a blue dot against a randomly lit blue background. No matter how much light you collect, if the brightness of the object is within the variation in the brightness of the background, you can't spot it, because you can't distinguish it from the random variations in the background; but if the object is brighter than the background, then you can spot it. TBH, an SNR of 10 seems high to me, but I think that depends on how many false positives you are willing to put up with.

Well, no. Unless the blue dot is at exactly the level of the background radiation, the sum (well, average) of the images will very quickly pull out a dot that does not have the same average intensity. The dot doesn't just have to be within some acceptable range of the background; it has to be exactly the same. The average and error terms must have the exact same structure over the relevant range. More samples magnify any difference between background and not-background. This is why he needs the signal-to-noise ratio requirement for detection: otherwise any variation is spotted at infinite distance given enough time, so long as the sensor collects the photons.
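This is the standard standard-error argument: under averaging, zero-mean noise shrinks as 1/sqrt(n), so any fixed offset eventually stands out. A toy simulation with made-up numbers:

```python
import math
import random
import statistics

random.seed(1)

SIGMA = 10.0  # per-frame noise in one pixel (zero mean)
BIAS = 1.0    # tiny constant excess from the hidden object

def stacked_zscore(n_frames):
    """Average n_frames noisy samples; report the measured offset in units
    of the standard error of the mean, which shrinks as 1/sqrt(n)."""
    samples = [random.gauss(BIAS, SIGMA) for _ in range(n_frames)]
    return statistics.fmean(samples) / (SIGMA / math.sqrt(n_frames))

print(stacked_zscore(100))     # bias ~1 standard error: invisible
print(stacked_zscore(10_000))  # bias ~10 standard errors: unmistakable
```

The caveat raised later in the thread still applies: this only works while the noise really is zero-mean and the object stays in the same pixel long enough to accumulate the samples.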

This can be made obvious by adding another telescope looking at the same area. If the error is on a long enough time scale that two pictures in two subsequent seconds from the same telescope would produce the same image, then a single telescope can pick out anything whose photons it collects on a long enough time scale, so long as it doesn't have the exact same signature as the background. The difference in structure determines the length of the time scale necessary to pick it up.

If the error is on a time scale such that anything in the background will have the same intensity over two one-second pictures, then simultaneous exposure by two telescopes on the same area will near-instantly pick out any object in the near distance. (It also makes an object with high transverse velocity exceptionally easy to pick out.) The longer the error is stable, the easier it is to spot things, since there will already be months of data built up to compare against. Now not only do you have to be very cold, you have to predict the error in the background radiation for the angle of each telescope that might take your picture, or else you land on the wrong pixel and get flagged as an anomaly to be deeply examined.

Ahh, but it takes more telescopes. Yeah, but more telescopes provide more information on everything. It would only take ~3400 telescopes of the type he describes to do a complete sky survey every day, with every area getting double coverage (making each area a potentially huge synthetic telescope), if they take one image every second. That isn't a lot. And the more telescopes you have, the faster and easier things get. At the point where space warfare makes sense there will be millions of telescopes. Every ship will have a telescope. Every object in the system will have a telescope. Asteroids not harvested for raw materials to make telescopes will have telescopes on them. And they will all be talking to each other. You can't really get away from this.
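The ~3400 figure implies a particular field of view. Working backwards from the claim (this is a reconstruction, not the actual arithmetic behind the post):

```python
SKY_SQ_DEG = 41_253      # total sky area in square degrees
COVERAGE = 2             # every patch imaged twice
IMAGES_PER_DAY = 86_400  # one 1-second image per second, around the clock

def telescopes_needed(fov_sq_deg):
    """Telescopes required to tile the sky once per day at the given
    per-image field of view, with double coverage."""
    return COVERAGE * SKY_SQ_DEG / (fov_sq_deg * IMAGES_PER_DAY)

# The field of view that makes 3400 telescopes come out exactly:
fov = COVERAGE * SKY_SQ_DEG / (3_400 * IMAGES_PER_DAY)
print(fov, telescopes_needed(fov))  # roughly a (1 arcmin)^2 field per image
```

So the count is self-consistent for an arcminute-class field of view; a wider assumed field would need far fewer telescopes, a narrower one far more.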
« Last Edit: April 25, 2020, 02:18:15 PM by Goumindong »
Logged

intrinsic_parity

  • Admiral
  • Posts: 3071
Re: functional ship class definitions
« Reply #81 on: April 25, 2020, 05:20:10 PM »

Quote
A steradian is not unitless. It's dimensionless, but not in the same way that many scalars are. Keeping track of the per-steradian units is still important (though you can square and root the base unit freely without issue). Dividing by mirror area is clearly wrong, because mirror area has a positive effect on detection distance: big telescopes see more stuff and so should increase the detection distance. It's more likely he needs to square his answer and fix the steradian issue. It's still not exactly clear what is going on.

Edit: it's better to think of a unit of angle as its own unit rather than as dimensionless. You can (and have to) convert between units of angle, and between units of angle and square units of angle. But a unit of angle can correspond to any amount of distance, because you must convert with the radius in order to get an area or volume. So it doesn't have a dimension, but it isn't unitless. We still need to remove it from our distance figure.


Quote
It's like trying to spot a blue dot against a randomly lit blue background. No matter how much light you collect, if the brightness of the object is within the variation in the brightness of the background, you can't spot it, because you can't distinguish it from the random variations in the background; but if the object is brighter than the background, then you can spot it. TBH, an SNR of 10 seems high to me, but I think that depends on how many false positives you are willing to put up with.

Well, no. Unless the blue dot is at exactly the level of the background radiation, the sum (well, average) of the images will very quickly pull out a dot that does not have the same average intensity. The dot doesn't just have to be within some acceptable range of the background; it has to be exactly the same. The average and error terms must have the exact same structure over the relevant range. More samples magnify any difference between background and not-background. This is why he needs the signal-to-noise ratio requirement for detection: otherwise any variation is spotted at infinite distance given enough time, so long as the sensor collects the photons.

This can be made obvious by adding another telescope looking at the same area. If the error is on a long enough time scale that two pictures in two subsequent seconds from the same telescope would produce the same image, then a single telescope can pick out anything whose photons it collects on a long enough time scale, so long as it doesn't have the exact same signature as the background. The difference in structure determines the length of the time scale necessary to pick it up.

If the error is on a time scale such that anything in the background will have the same intensity over two one-second pictures, then simultaneous exposure by two telescopes on the same area will near-instantly pick out any object in the near distance. (It also makes an object with high transverse velocity exceptionally easy to pick out.) The longer the error is stable, the easier it is to spot things, since there will already be months of data built up to compare against. Now not only do you have to be very cold, you have to predict the error in the background radiation for the angle of each telescope that might take your picture, or else you land on the wrong pixel and get flagged as an anomaly to be deeply examined.

Ahh, but it takes more telescopes. Yeah, but more telescopes provide more information on everything. It would only take ~3400 telescopes of the type he describes to do a complete sky survey every day, with every area getting double coverage (making each area a potentially huge synthetic telescope), if they take one image every second. That isn't a lot. And the more telescopes you have, the faster and easier things get. At the point where space warfare makes sense there will be millions of telescopes. Every ship will have a telescope. Every object in the system will have a telescope. Asteroids not harvested for raw materials to make telescopes will have telescopes on them. And they will all be talking to each other. You can't really get away from this.

If the object is larger than a pixel, then you would have to closely match magnitude, because the pixels occupied by the object see the radiance of the object only. But if the object is smaller than a pixel, then you are actually comparing the total energy from that slice of sky, which includes the energy from the object plus the energy from the background radiation over the rest of the pixel. So if most of the pixel is not the object, the total energy over the pixel will be very close to the background, even if the object is dimmer than the background. It's sort of like the object blocks a small amount of the background radiation and replaces it with its own radiation; if the object is very small, the amount it blocks is also very small, so only a big increase in magnitude from the object would create a noticeable change over the entire pixel. If the object were dimmer, there would be some small deviation, but it would be much, much smaller than the actual difference between the background and the object. If your telescope has poor resolution at that distance, it hardly provides you any information.
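The dilution effect is easy to quantify (all values below are made up for illustration): a sub-pixel object replaces a sliver of background with its own radiance, so the pixel-level deviation is scaled down by the fill fraction.

```python
def pixel_signal(bg_radiance, obj_radiance, fill_fraction):
    """Mean radiance of a pixel partly covered by the object."""
    return bg_radiance * (1 - fill_fraction) + obj_radiance * fill_fraction

bg = 100.0   # background radiance (arbitrary units)
obj = 50.0   # object is dimmer than the background
fill = 1e-6  # object covers a millionth of the pixel

pixel = pixel_signal(bg, obj, fill)
deviation = abs(pixel - bg) / bg
print(deviation)  # 5e-7: a fractional dip far below any plausible noise floor
```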

It's true that with an infinite number of samples you could always pick out a bias against zero-mean noise, but you don't have infinite samples. The maximum time scale is the time it takes the object to cross one pixel, and that depends heavily on the exact geometry and will differ between observers, so it's possible the object might stay in a pixel long enough to get picked up by one observer but not another. You also have to consider that information from far-away observers will take minutes or hours to reach you, so synchronizing the information would be challenging (and there are tons of places where error will be introduced: small differences in internal clocks, signal transmission, and so on).

Also, if we are not living in a crazy authoritarian single-government world where every spaceship constantly provides all its available information to Big Brother, it's pretty unlikely that you would have access to any significant fraction of existing telescopes. And then you can get into information warfare and misinformation or incorrectly reported information and all that stuff.

It's also worth noting that the background radiation might not be zero-mean Gaussian noise. A lot of this would depend on the exact characteristics of the noise. For instance, most of the images I've seen of background radiation over the entire sky indicate there's some bias (i.e., different areas of the sky appear darker or lighter on average). If there is also small-scale variation on top of that, then images from different angles would see different noise distributions and thus not be easily comparable. There would also certainly be different objects in the background of each image, and at the absurdly high sensitivities we're talking about, every tiny rock and pebble in the solar system would be visible.

But we haven't even talked about tracking. All sorts of objects, from junk to asteroids to pebbles to super distant stars, will appear in your images, and unless you have some catalogue of every object in the universe, they will be unknown objects to track, just like our hypothetical spaceship. Maybe that tiny deviation from background is our stealthy spaceship, or an asteroid 40 AU further away, or a ship 3000 AU away making a small maneuver, or a galaxy millions of light years away, or a tiny piece of debris in orbit right next to you. In order to distinguish them, you need multiple sequential measurements to see if there is a plausible orbit connecting the measurements from each time step to the next. The number of possible tracks (orbits) you need to consider explodes as you consider dimmer and dimmer objects (basically, every point you observe at this time and every object in the next frame could be the same object moving in different ways). Multi-target tracking is a huge open problem in the space situational awareness community right now; we can't feasibly solve it with existing algorithms, because they scale so poorly with increasing numbers of targets. The lower you set your threshold for possible objects, the more objects you will pick up in a sweep, and tracking algorithms scale really poorly (like NP-hard poorly).
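The combinatorial blowup is easy to see with a naive count (real trackers prune with gating and motion models, but the underlying growth is the problem being described):

```python
# With n candidate detections per frame and no pruning, every length-k
# sequence of detections is a candidate track hypothesis: n**k of them.
def naive_tracks(n_detections, n_frames):
    return n_detections ** n_frames

print(naive_tracks(1_000, 5))  # 10**15 hypotheses after just five frames
```

Lowering the detection threshold raises n, and the hypothesis count grows exponentially in the number of frames, which is why dim-object tracking scales so badly.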

Also, one-second exposure times are TINY. Most astrophotography shots of anything other than the planets and the sun are going to have exposure times on the order of minutes. On top of that, once-per-day scanning is going to make it really difficult to track anything. It's going to be super hard to correlate any sensor readings with readings from 24 hours ago. (Remember, you have to figure out which blip on your sensor today goes with the blips on the sensor from yesterday, and every pebble in the solar system will be showing up if you're looking at such dim objects.)

Also, re: dimensional analysis: he substitutes the expression for solid angle into the expression for power, which causes the confusion you're having. In the equation for the power in watts observed from the spacecraft there is a solid angle, but he substitutes the calculation for the solid angle (solid angle = area/d^2) directly into the formula. When you cancel the m^2/m^2 from that substitution (area/d^2), you're actually left with a steradian, and that cancels out the steradian from the radiance.

The equation is basically defining SNR = 'power from object'/'power from background radiation'. The problem that gives the distance unit mismatch is that he used W/m^2 for background radiation and just W for the object signal. The background radiation term needs to be multiplied by an area. As far as I can tell, that should be the mirror area, which is essentially saying that the total power from background radiation is the sum of the power per unit area incident on the mirror over the area of the mirror. If you do that, and solve the SNR equation for distance, the mirror area will cancel. This is because we are calculating a ratio of signals (SNR), and since the mirror affects both signals the same way, it has no net effect on the ratio. Like I said before, this analysis is ignoring the capability of the sensor completely and just analyzing the signal from the spacecraft vs the background. The mirror size does not matter.
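A toy sketch of why the mirror area cancels out of this particular SNR definition (the function name and numbers are mine, purely for illustration, not from the blog):

```python
def snr_vs_background(obj_irradiance, bg_irradiance, mirror_area_m2):
    """Both the object signal and the background irradiance (W/m^2)
    are collected by the same mirror, so the mirror area multiplies
    numerator and denominator alike and drops out of the ratio."""
    signal = obj_irradiance * mirror_area_m2   # W
    noise = bg_irradiance * mirror_area_m2     # W
    return signal / noise

# Same ratio whatever the mirror size:
print(snr_vs_background(1e-20, 2e-20, 10.0))    # 0.5
print(snr_vs_background(1e-20, 2e-20, 1000.0))  # 0.5
```

Making the mirror bigger collects more watts from both the ship and the background, so this ratio never moves.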
Logged

Goumindong

  • Admiral
  • *****
  • Posts: 1886
    • View Profile
Re: functional ship class definitions
« Reply #82 on: April 26, 2020, 04:04:46 AM »

If the object was larger than a pixel, then you would have to closely match magnitude, because the pixels occupied by the object will see the radiance of the object only. But if the object is smaller than a pixel, then you are actually comparing the total energy from that slice of sky, which would include the energy from the object plus the energy from the background radiation over the rest of the pixel. So if most of the pixel is not the object, then the total energy over the pixel will be very close to the background, even if the object is dimmer than the background. It's sort of like the object blocks a small amount of the background radiation and replaces it with its own radiation; but if the object is very small, then the amount it blocks would also be very small, and so only a big increase in magnitude from the object would create a noticeable change over the entire pixel. If the object was dimmer, there would be some small deviation, but it would be much, much smaller than the actual difference between the background and the object. If your telescope has poor resolution at that distance, it hardly provides you any information.

But very close is not the same, and it must be entirely the same in order to not be detectable with enough samples.

And while its true you don't have infinite samples you do have sufficient samples to make any sort of stealth impractical. The proposed ship, which is quite ridiculous in even the idea that it could be constructed without being seen, is quite visible to the

Quote
It's also worth noting that background radiation might not be zero mean gaussian noise. A lot of this would depend on the exact characteristics of the noise. For instance, most of the images I've seen of background radiation over the entire sky indicate there's some bias (i.e. different areas of the sky appear darker or lighter on average). If there is also some small scale variation as well, that would mean images from different angles would see different noise distributions and thus not be easily comparable. There also would certainly be different objects in the background in each image, and at the absurdly high sensitivities we're talking about, every tiny rock and pebble in the solar system would be visible.

Yes, but this is known, and known beforehand. This makes it even harder to do stealth. Because not only do you have to match the background from one direction, you have to match the background from all directions.

Quote
unless you have some catalogue of every object in the universe,

You don't need a catalogue of every object, just a catalogue of every known object. And well... we already keep those catalogues.

Quote
Also R.E. dimensional analysis: he substitutes the expression for solid angle into the expression for power which causes the confusion you're having. Basically in the equation to calculate the power in watts observed from the spacecraft, there is a solid angle, but he substitutes the calculation to find the solid angle (Solid angle = Area/d^2) directly into the formula. When you cancel the m^2/m^2 from that substitution (Area/d^2), you're actually left with a steradian, and that cancels out the steradian from the radiance.

The equation is basically defining SNR = 'power from object'/'power from background radiation'. The problem that gives the distance unit mismatch is that he had is that he used W/m^2 for background radiation and just W for the object signal. The background radiation term needs to be multiplied by an area. As far as I can tell, that should be the mirror area, which is essentially saying that the total power from background radiation is the sum of the power/unit area incident on the mirror over the area of the mirror. If you do that, and solve the SNR equation for distance, the mirror area will cancel. This is because we are calculating a ratio of signals (SNR), and since the mirror effects both signals the same way, it has no net effect on the ratio. Like I said before, this analysis is ignoring the capability of the sensor completely and just analyzing the signal from the spacecraft vs the background. The mirror size does not matter.

No? Mirror size absolutely matters. It must matter. Bigger mirrors collect more light. More light is more energy. So any differences from the background are similarly magnified.
Actual experts on telescopes seem to agree with me here : https://www.atnf.csiro.au/outreach/education/senior/astrophysics/resolution_sensitivity.html#rsolsensitivity

If his equation is not about how much light is collected in order to be examined, then what is it about, and how does that relate to the distance a ship can be seen at?



   
Code
Target emissions received: BR * CSA * TCA / D^2

Target emissions received will be in W/m^2.
BR is the band radiance in W/m^2/sr.
CSA is the Cross Section Area in m^2.
TCA is the telescope collector area in m^2.
D is the distance in m.

W/m^2/sr * m^2 * m^2 / m^2  = W/sr NOT W/m^2

If you assume you can cancel out the sr then you should get W, not W/m^2. You cannot rewrite steradians in a way that makes this work.

In terms of watts it even makes sense: the total energy received is the energy produced per square meter, times the cross section producing it, times the size of the collector relative to the sphere with radius equal to the distance between the objects.


Code
    D: ((BR * CSA * TCA) / (FoV * BNI * SNR))^0.5

D is the detection distance in m.
BR is the band radiance in W/m^2/sr.
CSA is the Cross Section Area in m^2.
TCA is the telescope collector area in m^2.
FoV is the field of view in steradian.
BNI is the background noise intensity in W/m^2/sr.
SNR is the signal to noise ratio, at least 10.
Using this equation, in addition to the calculators and information on the background noise, we can establish the shortest distance a stealth craft can approach a telescope without being detected. 

Square this to make it easier to read

m^2 = W/m^2/sr * m^2 * m^2 /( sr * W/m^2/sr * SCALAR)

This is NOT fine. The W/m^2/sr on top and bottom cancel out, and a scalar has no dimension and so cancels. Then we have m^2 * m^2 / sr = m^2. Even if we assume that steradians can be counted as a scalar since they're dimensionless, we're left with m^4 = m^2... which you might notice is not equal.

SNR is a scalar here and doesn't have anything to do with the formula, except for him saying "you must be 10 times stronger than the background in order for you to detect me".

Having thought about this for way too long: it should be, if anything,

(BR-BNI) * CSA * TCA / D^2 = Delta W. We replace Delta W with the sensitivity of the sensor in the band in question multiplied by the area of the sensor. Then we can solve for distance.

There is one problem. BR and BNI have different distances to the collector dish. So it should be BR*CSA*TCA/d^2 - BNI*CSA*TCA/d^2, where each term uses the CSA and distance of its own emission source. What is the CSA of the microwave background radiation? Well, it's going to be whatever area of the CMBR one pixel on the sensor corresponds to, and the distance will be the distance to the CMBR. But we can shorthand this by going the other way and using his desired FoV. 2.35*10^-20 W/m^2 is the total energy received per m^2 of dish, and so assuming a 4k resolution image (i.e. the same number of pixels as a 4k monitor, but square), which seems reasonable, our sensor has 8.3*10^6 pixels. And so

|BR*CSA*78.5m^2/D^2 +/- 2.83*10^-27 W/m^2*78.5m^2| = Delta W. 

Well, now we do have a problem. According to him, "a cryogenically-cooled infrared sensor would have a sensitivity of 10^-19 W/m^2", and at 30K his ship has a band radiance of 0.000992991 W/m^2/sr with a CSA of 1600m^2

so if we say the sensor is 1/100th of a meter^2 (1/10 by 1/10) then our 1600m^2 ship comes out to

So 10^-21 W +/- 2.22*10^-25 W = 124.719 W*m^2 / distance in m^2.

We notice that 10^-21 +/- 2.22*10^-25 = 10^-21 for all intents and purposes, then invert and multiply: 1.25*10^23 = distance in meters^2.

353 million kilometers with one second of exposure, using off-the-shelf-ish tech from 2004*. Which means that your spacecraft would be seen from earth with one second of exposure on its 80 day journey, so long as one of the millions of telescopes in the solar system was pointed at a potential aggressor's moon-based shipyards.

You may notice that I omitted the signal to noise ratio; this is because the error term on the CMBR is beyond hilariously small. The CMBR itself didn't make a dent in our calculation, and the standard error on the CMBR is not even a 20th of a percentage point of its energy.
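For anyone who wants to check the arithmetic, here is the same calculation in a few lines of Python (variable names are mine; the numbers are the ones quoted in this post):

```python
import math

# Numbers from the post above: band radiance of the 30 K ship,
# its cross section, the 78.5 m^2 dish, and a 'Delta W' of the
# 1e-19 W/m^2 sensitivity times a 0.01 m^2 sensor.
BR = 0.000992991        # W/m^2/sr
CSA = 1600.0            # m^2
TCA = 78.5              # m^2
DELTA_W = 1e-19 * 0.01  # W

# Solve BR * CSA * TCA / D^2 = DELTA_W for D:
d = math.sqrt(BR * CSA * TCA / DELTA_W)  # metres
print(f"{d / 1e9:.0f} million km")       # 353 million km
```

Which reproduces the ~353 million kilometer figure.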

There is no stealth in space

*I say ish because the sensor I proposed has about 8x the pixels of the James Webb telescope. But this is super space tech, so we can assume a slightly larger sensor that has the same sensitivity per square meter and more pixels.

Edit: I will get back to this sometime tomorrow. I worry that I have overestimated the size of the sensor. It should be the size of the pixel and not the size of the whole sensor... I think. Which means that I could be off by a large factor. Either way, making the dish and sensor bigger obviously both have positive effects. So if we're talking about an IR sensor designed to find stealth spaceships, it (and its dish) are going to be correspondingly huge.
« Last Edit: April 26, 2020, 12:12:37 PM by Goumindong »
Logged

intrinsic_parity

  • Admiral
  • *****
  • Posts: 3071
    • View Profile
Re: functional ship class definitions
« Reply #83 on: April 26, 2020, 02:33:12 PM »

And while its true you don't have infinite samples you do have sufficient samples to make any sort of stealth impractical.
What evidence do you have for this statement? It seems to me like you would need an impractically or even impossibly large number of samples to identify the tiny differences you would see between background and the object intensities we are talking about.

No? Mirror size absolutely matters. It must matter. Bigger mirrors collect more light. More light is more energy. So any differences off the background are similarly magnified.
Actual experts on telescopes seem to agree with me here : https://www.atnf.csiro.au/outreach/education/senior/astrophysics/resolution_sensitivity.html#rsolsensitivity

If his equation is not about how much light is collected in order to be examined then what is it about and how does that relate to the distance a ship can be seen at?
Mirror size definitely affects how much light a telescope collects, but that's not what is being calculated here. What is calculated is a signal to noise ratio:
https://en.wikipedia.org/wiki/Signal-to-noise_ratio
Particularly the noise due to thermal background radiation (not the sensor) and the 'signal' of thermal radiation from a target spacecraft. That ratio does not depend on the mirror size. If the mirror is larger, it will collect more light from the target but also more light from the background, so the ratio of the two will remain the same. The distance matters because if the object is farther away, the telescope will see less light from the object, but it won't see less light from the background. At some point, the amount of light coming from the object will be so much less than the amount of light coming from the background that the object will be indistinguishable from the background. That corresponds to the SNR going below some detection threshold. SNR is a common parameter in signal processing used to determine if a signal is detectable. The SNR requirements for detection are going to depend on the requirements of whatever tracking algorithm you're using.


I think our main disagreement is whether the sentence ' At some point, the amount of light coming from the object will be so much less than the amount of light coming from the background that the object will be indistinguishable from the background.' is true. I've already addressed this but I'll go through it again.

If you assume the object can be approximated as a point source that is not blocking light from the background (which is a common assumption for distant objects in space), it is definitely true. In that case, the magnitude measured by the pixel will go to exactly the background as the magnitude of the target goes to zero. The point source assumption fails if the object becomes close enough that it covers a non-trivial portion of a pixel. In that case, you need to calculate fractions of pixels and stuff like that to work out what magnitude the pixel will observe and how it compares to the background. That magnitude will be very close to the background unless the object takes up a large portion of the pixel (in which case you're close to just resolving the object), and it will be extremely close to the background intensity if the object has an intensity moderately close to the background.

In a serious analysis, you would need to represent the background noise as a well characterized random variable (a gaussian variable with a mean and variance is most common) and then calculate the expected value and variance of the pixel containing the spaceship compared to a random pixel, to work out how many samples you would need to be confident that a certain measurement or sequence of measurements was not just random noise, but that's way more work than I'm willing to do for a silly argument. If you wanna go through all that and you find that you only need a few samples to have 90%+ confidence that a signal is not from random noise, I'll happily admit I am wrong.
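For a rough sense of scale, the standard back-of-envelope version of that calculation (my own sketch, assuming independent samples of zero-mean Gaussian noise; the function name and example numbers are illustrative):

```python
import math

def samples_needed(delta_mu, sigma, z=1.645):
    """Rough number of independent samples needed before a mean shift
    of delta_mu stands out from Gaussian noise with standard deviation
    sigma at z-score z (1.645 ~ one-sided 95% confidence). Uses the
    fact that the standard error of the mean is sigma / sqrt(n), so
    detection needs delta_mu > z * sigma / sqrt(n)."""
    return math.ceil((z * sigma / delta_mu) ** 2)

# A signal offset of 1/1000th of the noise standard deviation:
print(samples_needed(1e-3, 1.0))  # ~2.7 million samples
```

So if the ship only shifts a pixel by a tiny fraction of the noise standard deviation, the number of required samples blows up quadratically, which is the crux of the disagreement.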

To be honest, so much of this depends on the characteristics of the background noise, and I'm not even sure it's well characterized. I've never seen any values for it anyway. I've managed to find a few papers on arcmin scale variations in background noise and I think there's been a focus on the large scale variations because they have implications for the origins of the universe and stuff, but I don't know if we have any smaller scale noise characterizations.


Quote
unless you have some catalogue of every object in the universe,

You don't need a catalogue of every object just a catalogue of every known object. And well... we already keep those catalogues.
We don't even have a catalogue of all the objects in orbit around earth. Not even close. We do our best to track objects larger than 10 cm, but we routinely lose track of them because of orbital perturbations from things like solar radiation pressure and drag. Essentially if we don't have a very good geometry model for debris, it can deviate from our estimate of its orbit enough that we either fail to spot it on the next sensor pass because we're not looking in the right place or we fail to associate a second observation with the first. For reference, there are ~34000 of these objects >10cm that we actively try to track and millions of smaller objects that we don't try to catalogue or track at all because we just aren't capable of it (even though we can definitely see them).
https://en.wikipedia.org/wiki/Space_debris
I am familiar with the topic of tracking space debris because it's the motivation for my research (we are trying to estimate debris geometry from light curves to improve debris orbit estimates).

We definitely do not have a catalogue of all the tiny asteroids in the asteroid belt or the huge number of objects in the Kuiper belt. All of that stuff except for the biggest objects is completely unknown.
https://solarsystem.nasa.gov/solar-system/kuiper-belt/in-depth/
According to this, only 2000 objects in the Kuiper belt are catalogued. That's a bit more than the number of active satellites around earth... There are almost certainly tens or hundreds of millions more objects that are too dim for us to see with current tech, or that we haven't bothered to look for because we couldn't keep track of them anyway. The asteroid belt is just the same: we can observe a few million objects (1 km or larger), but we only catalogue the largest ones (about 15000, based on this https://www.nasa.gov/feature/jpl/catalog-of-known-near-earth-asteroids-tops-15000), and we assume that there are many, many more objects smaller than 1 km that we can't see.
https://en.wikipedia.org/wiki/Asteroid_belt

In the far future, I can imagine humanity littering space junk all across the solar system to vastly increase the difficulty of the task of tracking everything.

W/m^2/sr * m^2 * m^2 / m^2  = W/sr NOT W/m^2

The definition of a steradian is area/distance^2. Are you unfamiliar with units being defined in terms of other units?
https://en.wikipedia.org/wiki/Steradian
All he is doing is substituting the definition of the solid angle in instead of first calculating the solid angle using that formula, and then substituting the solid angle in steradians...

So yes W/m^2/sr * m^2 * (m^2/m^2) = W if the (m^2/m^2) represents the calculation of a solid angle in steradians using the definition of a steradian...
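The substitution can be made concrete with a short sketch (the function name and example numbers are mine, purely for illustration):

```python
def received_power_w(band_radiance, cross_section_m2, collector_area_m2, distance_m):
    """Power collected from the target, in watts. The collector subtends
    a solid angle of collector_area / distance^2 steradians as seen from
    the target, so substituting that for the solid angle cancels the sr
    in the radiance and leaves plain watts:
    (W/m^2/sr) * m^2 * sr = W."""
    solid_angle_sr = collector_area_m2 / distance_m ** 2
    return band_radiance * cross_section_m2 * solid_angle_sr

p_near = received_power_w(1e-3, 1600.0, 78.5, 1e11)
p_far = received_power_w(1e-3, 1600.0, 78.5, 2e11)
print(p_near, p_far)  # doubling the distance quarters the power
```

The `collector_area / distance^2` term is the solid angle in steradians, which is why the answer comes out in W and the dimensional analysis works.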

As to the formulation of SNR:
The signals should both be power in Watts, but I think theoretically you could also use the irradiance in W/m^2 like he tries to do. The blog guy used an irradiance in W/m^2 for noise from the background, but a total power in W for the signal from the ship. He should either calculate the total power from the background by multiplying the irradiance by the mirror area and then compare that to the power he calculated for the spacecraft, or he should calculate the irradiance from the spacecraft (which would not include the mirror area) and compare that to the background irradiance.

His process is to then assume some required SNR for detection, which he doesn't give a whole lot of justification for, but presumably there would be some SNR > 1 required, so it won't be that far off. You would probably work out the required SNR based on the tracking algorithm and noise and stuff.

Thinking about this way too long it should be, if anything

(BR-BNI) * CSA  * TCA / D^2 = Delta W. We replace delta/W with the sensitivity of the sensor in the band in question multiplied by the area of the sensor. Then we can solve for distance.
.......

Now what you're trying to do is calculate whether the sensor noise would prevent you from seeing the object (i.e. checking if the brightness of the object is below the sensitivity of the sensor), which is different from what the blog guy was trying to do. He addresses it in the first paragraph, saying he had previously done what you're describing (and he links a series of previous blog posts where he does that). What you find is that the sensor noise gives you a very far detection distance, which is more or less what he found as well (he mentions a distance of a few hundred million kilometers right at the beginning of his blog post). Now what he is trying to do in this post is figure out if the signal from the object is distinguishable from background noise. There's no question of whether you can detect signals of that magnitude; it's a question of whether you can distinguish them from background noise.

This is one of the fundamental problems in tracking and signal processing. How easy it is to distinguish the signal from the background would depend a lot on the characteristics of the noise (variance and mean for gaussian noise, but there can be crazy non-gaussian noise as well where you need to define a probability density function to describe it). This is a problem in modern tracking with radar on earth as well. The way that stealth aircraft work is not to have nearly zero radar cross section (so as to be below the sensitivity of the radar equipment) but to have a small enough cross section that they can't be differentiated from normal/random small signals like birds or atmospheric density variations. In this case, our blog friend is trying to work out if we can 'hide' behind background thermal radiation in a similar way to how stealth jets 'hide' behind atmospheric noise (in some abstract sense). Of course the telescope can see the background radiation or the signal from our ship if it is sensitive enough (just like how a radar is capable of detecting a bird or an F22), but can it tell the difference between that and our ship?

In addition, I would contend that stuff like debris and asteroids would also create more things to 'hide behind'. The idea here is that you have to define some floor on the magnitudes of objects you're going to attempt to keep track of, and if the ship is below that floor it can go undetected. The reason you have to define a floor is that the data association problem would be completely impossible with literally hundreds of millions of objects, and even the computational resources required to propagate all those orbits in real time are far, far beyond what we are able to do now (I've actually done some space debris tracking, I'm not just saying random stuff). This would indirectly result in some SNR floor that you impose on yourself with your tracking algorithm by ignoring weaker signals in order to make the tracking problem feasible. Obviously you can always imagine some hyper future computer that could solve everything, but it's way way (way) beyond where we are right now.



This has gone on for a long time so I'll just say this:
I think you could construct a futuristic world where a single authoritarian central government has a massive network of thousands or millions of super advanced telescopes scanning every degree of the sky constantly and has the super advanced computational resources to process all that data and track everything that moves and has also perfectly mapped the entire solar system down to the smallest pebble and perfectly characterized every microsteradian of background radiation so that it is impossible to penetrate their defenses.

I think you could just as easily construct a futuristic world where there are many competing factions each with limited information collection abilities and there is unknown debris strewn about the solar system from centuries of skirmishes and you can sneak around undetected with very good planning and very advanced cryogenic tech.

It all depends on what direction technology develops in. Both of those worlds seems like quasi-plausible far future worlds to me, and neither seem particularly close to where we are today.
Logged

Thaago

  • Global Moderator
  • Admiral
  • *****
  • Posts: 7173
  • Harpoon Affectionado
    • View Profile
Re: functional ship class definitions
« Reply #84 on: April 26, 2020, 03:08:26 PM »

Just to chime in here as I have a bit of expertise with photon sensors in low light conditions (PhD not developing them myself, but worked with them and a labmate was essentially developing them):

The sensitivity of a sensor is not a very good measure of whether you can actually find a signal: rather, one needs to fight the intrinsic noise of the detector. See: https://www.hamamatsu.com/resources/pdf/ssd/infrared_kird0001e.pdf page 6 for D* curves and typical blackbody radiation wavelengths, https://en.wikipedia.org/wiki/Specific_detectivity for an explanation of what D* is.

Anyone giving a formula where they take detector sensitivity and multiply by area and then by time... has no idea what they are doing if they are trying to fight detector noise. That is 100% the wrong way to go about it. More explanation below:

A key point that seems weird, but is true: the signal to noise ratio of the measurement goes as the square root of the duration of exposure. Looking for longer will give a better measurement, but only as the square root, which places severe limitations on practical measurements. Those who have worked with noise will recognize this from statistics: signal goes as integration time while noise goes as the square root of integration time, so SNR goes as the square root of integration time.
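As a sketch of that scaling (the function name and rate values here are arbitrary illustration values, not from any real detector):

```python
def expected_snr(t_seconds, signal_rate=1.0, noise_per_sqrt_s=100.0):
    """Expected SNR after integrating for t seconds: the signal
    accumulates linearly while zero-mean noise grows only as sqrt(t),
    so SNR = (signal_rate * t) / (noise_per_sqrt_s * sqrt(t))
           = (signal_rate / noise_per_sqrt_s) * sqrt(t)."""
    return signal_rate * t_seconds / (noise_per_sqrt_s * t_seconds ** 0.5)

print(expected_snr(1))     # 0.01
print(expected_snr(3600))  # 0.6 -- a 1 hour exposure only buys 60x
```

Sixty times better for 3600 times the observing time: that's the square-root penalty.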

Next, if you look at the sensor chart I linked, a room temperature-ish spacecraft is going to be emitting at roughly 8 um. At 8 um we are looking at a D* of about 10^10 with the best available technology today (2020) that can be produced at a reasonable scale.

Converting D* into Noise Equivalent power (see wiki article for formulas) gives:

NEP = (Area/(2*integration time))^(.5) * (D*)^(-1)

Or, for a 1 cm^2 detector, 1 second integration, D* = 10^10

NEP = 7.07*10^(-11) W over a 1cm^2 detector.

What does this mean? It means that in order to get an SNR of 1 with respect to the detector noise, you need that much power focused onto the detector.

But something here seems off: the NEP goes down when detector area goes down, which means smaller sensors are more sensitive. This seems utterly bizarre and counterintuitive, until you realize that sensor noise scales with sensor area. This means that small pixels are better... as long as the optics can focus correctly on them. And here we get into gaussian optics: the bigger the lens, the smaller the focal point and the more light collected. Phew! Bigger telescopes do give better sensitivity.

Now, we might be tempted to have extremely small pixels (and this does give better image resolution!), but we need to remember that the detector is still an image plane, just one upon which the light has been concentrated by the mirrors/optics. If the pixels are too small, the power collected from the target object will be spread out among multiple pixels according to their area: NEP goes as square root of area, but power collected goes as area^-1 for a given resolution!

The answer then for maximum sensitivity: in our telescope design, we want all the light from the object to be focused onto 1 pixel. This is essentially governed by the familiar diffraction limits of gaussian optics, so for a given telescope it can be computed. (See: https://astronomy.tools/calculators/ccd_suitability)

However, how big that pixel needs to be depends on a whole lot of stuff! The size of the telescope, the wavelength (10um, ouch that's bad!), the focal length, and the expected distance to target (arcsecond resolution) all play a big role. This is honestly too complex a topic, but I'm going to take values from https://www.mpifr-bonn.mpg.de/393197/detectors which also were similar to the pixel sizes that I used in my research, so I know they are reasonable. Say 25um on a side, or 625 um^2, or 6.25 * 10^-6 cm^2 (working in cm because the D* curves from hamamatsu are in cm).
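For reference, a minimal sketch of the diffraction-limit part of this (the standard Airy-disk relation; function name and example numbers are mine, purely illustrative):

```python
def diffraction_spot_radius_um(wavelength_um, focal_length_m, aperture_m):
    """Radius of the Airy disk at the focal plane, in micrometres:
    angular radius is 1.22 * lambda / D, so the spot radius is
    r = 1.22 * lambda * f / D (with lambda in um, f and D in m,
    the metres cancel and r comes out in um)."""
    return 1.22 * wavelength_um * focal_length_m / aperture_m

# 10 um light, 10 m focal length, 1 m aperture:
print(diffraction_spot_radius_um(10.0, 10.0, 1.0))  # 122.0 um
```

Which shows why long-wave IR (10 um, vs ~0.5 um visible) makes the matched-pixel problem so much harder: the diffraction spot is twenty times bigger.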

Plugging this into the above formulas gives:

NEP (real optimized detector) = 1.77 * 10^-13 W

This is the power needed to be collected by the telescope optics and put onto the CCD in order to get a signal to noise ratio of 1 from the internal noise of the device. Note that this is also peak sensitivity, and much of a signal will lie outside of it.
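Both NEP values above can be reproduced from the formula in a few lines (nep_watts is my own helper name, not a standard function):

```python
import math

def nep_watts(d_star, area_cm2, t_int_s):
    """Noise-equivalent power (W) of a detector given its specific
    detectivity D* (cm sqrt(Hz) / W), active area (cm^2) and
    integration time (s): NEP = sqrt(A / (2 t)) / D*."""
    return math.sqrt(area_cm2 / (2.0 * t_int_s)) / d_star

# 1 cm^2 detector, 1 s integration, D* = 1e10 (the values above):
print(nep_watts(1e10, 1.0, 1.0))      # ~7.07e-11 W
# 25 um pixel, i.e. 6.25e-6 cm^2:
print(nep_watts(1e10, 6.25e-6, 1.0))  # ~1.77e-13 W
```
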

Quick sanity check to see if this is reasonable: https://www.thorlabs.com/images/TabImages/Noise_Equivalent_Power_White_Paper.pdf

Hmm, their NEP at 1 Hz is 5 * 10^-12 W at visible frequencies... This tells me that my calculation is probably too optimistic, but it's in the right order of magnitude. https://www.osapublishing.org/DirectPDFAccess/31E737C8-AC0C-88BB-56623B62C26541FB_423912/oe-27-25-37056.pdf?da=1&id=423912&seq=0&mobile=no actually gives the same value! But that one is uncooled, so my value, which is better by an order of magnitude, seems reasonable.

Further sanity check: a 10um photon carries ~2*10^-20 Joules, so the above corresponds to a photon flux of ~10^7 photons per second. Great! This isn't approaching single photon detector physics and we don't have to worry about that whole can of worms.

This assumes that there is 0 background from space: this is the noise background of the detector itself.

------

Calculation time! Assuming the 1600 m^2 spaceship from below at room temperature, using this calculator: http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/radfrac.html

For wavelength bands, I'm using 1 um to 11 um: the sensitive region of the Hamamatsu detector.

It emits 2.5*10^6 watts in this band. (only 33.6% of total power according to the calculator).

Collected power = power * (collector area)/(area of sphere at range = 4 pi R^2), so to get an SNR of 1 or higher against detector noise:

NEP < collected power

Now shifting terms, turning collector area into pi * lens radius squared, and computing gives me:

For an SNR of 1 or higher, detection distance is equal to:

3.3*10^9 (meters distance per meter radius of telescope optic).

Let's say a 10 meter radius (huge!) optic:

3.3*10^10 meters, or 1.8 light minutes, or .22 AU.

This is for 1 second integration times. Scale the answer by sqrt of time for longer exposures than 1 second: a 1 hour exposure will get 60 times that, or 13.2 AU. Pretty far!

However, I want to stress that this is the maximum theoretical distance with a 10 meter radius telescope, current tech, and no background. It is also assuming a perfect telescope with no losses, with the correct focal length and field of view. That said, this is also for SNR = 1. Modern good algorithms can detect signals about 10 times better than that, though false alarm rates will go up at the same time (and distance will go up only as the square root).

Now if the signal is stronger than this, then it needs to worry about background. But this is the limit due to detector noise!
Logged

intrinsic_parity

  • Admiral
  • *****
  • Posts: 3071
    • View Profile
Re: functional ship class definitions
« Reply #85 on: April 26, 2020, 07:01:40 PM »

@Thaago Hey someone who knows what they're talking about! Very interesting read, thanks.

A little bit of clarification: the D* parameter looks like it is a function of a bunch of stuff like detector temp and also the photon flux of the source. It seems like you picked a somewhat middle of the road detector at normal temps and stuff, but if I am understanding this correctly, D* could change a lot under different conditions? Also, would you consider this sensor to be 'high end' or more average?

How would you go about accounting for background noise? It seems like it would depend on the fraction of the pixel that the spacecraft occupies? If the spacecraft was close to a point source, then the noise would be approximately additive noise on the actual spacecraft power, but if it was close to the pixel resolution, the background noise wouldn't actually hit the pixel the spacecraft was hitting. Does that seem right? Do you have experience with propagating additive environmental noise through a telescope/sensor and combining it with the sensor noise?

You mention that detection distance for a one hour exposure would be 60 times the distance at 1 second (sqrt of integration time), but NEP scales inversely with the square root of integration time, and as far as I can tell, detection distance scales inversely with the square root of NEP, so that would mean detection distance scales with the 4th root of integration time? Am I missing something?

One more thing, I put in the numbers you mentioned into the online blackbody calculator (1.6e3m^2, 300K, 1000-11000nm (1-11um)) and got (almost exactly) one order of magnitude lower power. Did I make a mistake or did you?
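For whatever it's worth, here is a quick numerical integration of Planck's law for that case (my own sketch: emissivity 1, one-sided Lambertian emitter, simple midpoint Riemann sum; function name is mine):

```python
import math

H = 6.62607015e-34  # Planck constant, J s
C = 2.99792458e8    # speed of light, m/s
K = 1.380649e-23    # Boltzmann constant, J/K

def band_power(temp_k, area_m2, lam_lo, lam_hi, steps=20000):
    """Total power (W) emitted by a blackbody of the given area into
    the wavelength band [lam_lo, lam_hi] (metres): integrate the Planck
    spectral radiance over the band, multiply by pi sr for a Lambertian
    surface, then by the emitting area."""
    dlam = (lam_hi - lam_lo) / steps
    total = 0.0
    for i in range(steps):
        lam = lam_lo + (i + 0.5) * dlam
        # spectral radiance B(lam, T) in W / (m^2 sr m)
        b = (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * K * temp_k))
        total += b * dlam
    return math.pi * total * area_m2

# 1600 m^2 surface at 300 K, 1-11 um band:
p = band_power(300.0, 1600.0, 1e-6, 11e-6)
print(f"{p:.3g} W")  # roughly 2.5e5 W
```

Under these assumptions the band power comes out around 2.5*10^5 W, so one of the two figures being discussed does look like it's off by an order of magnitude (or used a different area).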

Using the calculated NEP for the sensor and looking at the 1600 m^2 ship in the 1-11 um band at a couple different temperatures, I get these detection distances:
T = 300 K d = ~5e6 km
T = 200 K d = ~7e5 km
T = 100 K d = ~1.7e4 km
T = 50 K   d = ~1.5e1 km

(note that if I use 1.6e4 m^2 for the area, I get a number on the same order of magnitude as Thaago's for T = 300 K, which should be close to room temp, so maybe that is the difference)
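For anyone who wants to reproduce the table, here's a rough Python sketch integrating Planck's law over the 1-11 um band (emissivity 1 assumed; pair the result with your favorite NEP to get distances, so the exact numbers will differ from mine):

```python
import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def band_power_w(area_m2, temp_k, lam_lo_m, lam_hi_m, steps=2000):
    """In-band power radiated by a blackbody surface (emissivity = 1).

    Integrates Planck spectral radiance over wavelength (midpoint rule),
    then multiplies by pi * area to convert radiance to emitted power.
    """
    total = 0.0
    dlam = (lam_hi_m - lam_lo_m) / steps
    for i in range(steps):
        lam = lam_lo_m + (i + 0.5) * dlam
        spectral = (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * temp_k))
        total += spectral * dlam
    return math.pi * area_m2 * total

for t in (300, 200, 100, 50):
    p = band_power_w(1600.0, t, 1e-6, 11e-6)
    print(f"T = {t:3d} K  in-band power = {p:.3e} W")
```

The collapse at low temperatures is exactly the point: below ~100 K almost none of the blackbody curve falls inside 1-11 um.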

It seems like at super low temps, the power density is very low in this bandwidth (1-11 um), so a different choice of sensor with a wider or longer-wavelength band would probably improve performance a lot at those temps. None of the sensors on the page you shared seemed to go much longer in wavelength (the power density is mostly in the ~50-200 um range); do you know of any sensors for the mid-to-far IR bands?

I also got that for Voyager 1 at 200 K with ~30 m^2 of surface area (I approximated based on the ~10 m^2 dish, which is significantly bigger than the body of the spacecraft), the detection distance is ~2e5 km, which is a lot less than 150 AU. This is the entire reason I've wasted so much of my time in this thread: it seemed totally impossible that Voyager would show up at 150 AU.

Also, I want to be clear: I was never trying to say that the mirror area didn't have any effect on the telescope's sensitivity. I was just pointing out that the way the guy in the blog calculated the SNR only considered background noise and didn't consider the sensor noise at all; he just took the ratio of the power from the background to the power from the spacecraft and called that the SNR. TBH that does seem questionable, but that's why, in his math, the mirror size shouldn't have mattered (because the telescope itself wasn't really being considered).
Logged

Thaago

  • Global Moderator
  • Admiral
  • *****
  • Posts: 7173
  • Harpoon Affectionado
    • View Profile
Re: functional ship class definitions
« Reply #86 on: April 26, 2020, 10:49:32 PM »

D* does depend a lot on the detector temperature, because it's principally about fighting the detector noise, which goes down very fast as the detector temperature goes down! The particular detector I chose for the example on the Hamamatsu chart was the "Type II for 14um band (-196C)" line, because of its broad frequency response. I'm assuming that this is the design spec for -196C = 77K = liquid nitrogen temperatures. Detectors can be better at lower temperatures if the associated electronics still work and there's no phase transition or other effect, but liquid nitrogen temps are the most practical and economical at present. A spacecraft built specifically for observation, with no people around, might be able to cool efficiently to lower temperatures, which could boost D* by quite a bit!

D* shouldn't be dependent on incoming photon flux as long as the detector is not saturated with photons. If the detector is saturated, though, an operator (or automated system) can put a filter in front and fix it. Of course, if it's saturated with photons, it's probably not worried about SNR. :p It's very dependent on photon wavelength, as shown on the chart.

Hamamatsu has a reputation as a good detector company that produces research-grade sensors: these are a cut above mass-produced sensors like IR motion detectors because of the production quality, but are still of a mass-producible nature. There is a lot of interest (read: funding opportunities from the US government) in developing better photodetectors, and there may very well be better large-scale sensors coming in the future (there have been some recent advances in the field), but if I were guessing at the quality of current-day military telescopes, I would pick numbers in this ballpark. Lower temperatures or better processes might make things a bit better, but there's only so much you can do with the same material composition. So: high end, but well established.

I'm unsure about better detector tech for deeper IR. I know people work on it, and that it poses technical challenges!

Accounting for background noise with non-stationary sources (like the real night sky) is well outside my area of expertise... On the one hand, we can have months or years of data to create an extremely precise background noise map (and then subtract it from the received signal). On the other hand, everything is fricken moving (damn you, celestial heavens!!).

My best guess would be: the night sky would have a detailed map not only of the expected signal per second in each location, but also of the expected variance of the signal per second in each location. For a given measurement, the mean background can just be subtracted, but the variance of the background would need to be compared to the signal. The variance-to-signal ratio will go down as the square root of the exposure time (if it's Gaussian, which maybe it's not :( ) because of statistics, so longer exposures will give diminishing returns (again, if Gaussian). I don't really want to try to compute any numbers here, because I have no idea how bright the cosmic infrared background is or how much it varies by location.

As an example: say a given pixel reads more power than the background for its location, and the excess power is 2 standard deviations above normal (given the exposure time). This pixel has a ~2% chance of being this high by random chance, so it's probably best not to flag it for review unless the signal is correlated with other telescopes/observations. But something like 5 standard deviations above: now that's something to look at! A problem I have dealt with is cosmic rays/radiation: they hit your pixels and give false signals. It sucks because they go right through the optics/shielding. For a detector network, though, you can use coincidence detection (do two telescopes see the same thing?) to very rapidly make random high-signal events a non-issue.
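For reference, here's a quick Python check of those tail probabilities (one-sided, i.e. counting only upward fluctuations; double them if you count excursions in either direction):

```python
import math

def upper_tail_prob(sigmas):
    """One-sided Gaussian tail: probability a pixel fluctuates this many
    standard deviations ABOVE the mean purely by chance."""
    return 0.5 * math.erfc(sigmas / math.sqrt(2))

for s in (2, 3, 5):
    print(f"{s} sigma: {upper_tail_prob(s):.2e}")
```

At 5 sigma the per-pixel false-alarm rate is well under one in a million, which is why that's the classic "worth a look" threshold.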

Time/distance scaling: Aw crud, did I screw that up? Let me try and double check.

: Checks :

Oh, I totally screwed that up!! Yes, distance squared goes as 1/NEP, so distance should go as the fourth root of observation time.
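Spelled out as a sketch (assuming NEP scales as 1/sqrt(t) and distance as 1/sqrt(NEP)):

```python
# Detection distance vs. integration time, assuming
# NEP ~ 1/sqrt(t) and d ~ 1/sqrt(NEP)  =>  d ~ t**0.25.
def distance_gain(t_seconds, t_ref=1.0):
    return (t_seconds / t_ref) ** 0.25

print(distance_gain(3600))  # one hour vs. one second: ~7.75x, not 60x
```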

If you don't mind, I think I'm going to put off answering the rest of this until tomorrow, as it's a bit late! I'll double-check what I got: I wouldn't be surprised if I plugged something into that calculator wrong, but I'll find another source for the formula and do it properly.
Logged

Nafensoriel

  • Lieutenant
  • **
  • Posts: 61
    • View Profile
Re: functional ship class definitions
« Reply #87 on: April 27, 2020, 01:00:15 PM »

Glad we finally all agree that the aperture of a telescope matters :D.


I want to dump a bit more information on this one.

First, let's have some fun! I'm going to pull a wiki link because it does a better job than I could of visualizing a modern state-of-the-art radar system. I'm going to talk about radar rather than optics simply because optics are at an absurd price point for detection purposes, which I'll go into later.

https://en.wikipedia.org/wiki/AN/SPY-6
Wonderful system. The picture at that link makes the primary point about one of the major disadvantages: size and component-to-component latency.
As you can see, modern radar is quite massive, and that mass really will not shrink for space use (in fact, it will probably increase, to be honest). For later reference, this system draws about 6 MW.

The most powerful radar we currently have is a 32 MW phased-array system. It's about an acre in size, requires an external power plant, and requires cooling. For those curious, a standard power-plant cooling tower dissipates about 5 MJ/hr (obviously an average, since you can engineer whatever you want). Setting aside the fact that it's planet-based, this is a great actual "in existence" metric for how far, and how accurately, radar works in space, since it's basically built to do exactly that. It doesn't even reach a light-second for small objects, if you were curious... far less than that, honestly.
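To put rough numbers on that, here's a sketch using the standard monostatic radar range equation; the gain, wavelength, target cross-section, and minimum detectable power below are my own placeholder assumptions, not the real system's specs:

```python
import math

def radar_max_range_m(p_tx_w, gain, wavelength_m, rcs_m2, p_min_w):
    """Monostatic radar range equation solved for maximum range:
    P_r = P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)."""
    return (p_tx_w * gain**2 * wavelength_m**2 * rcs_m2
            / ((4 * math.pi)**3 * p_min_w)) ** 0.25

# Assumed numbers: 32 MW transmitter, 10^4 (40 dB) gain, 10 cm wavelength,
# 10 m^2 target, 1e-15 W minimum detectable signal.
r = radar_max_range_m(32e6, 1e4, 0.1, 10.0, 1e-15)
print(f"{r / 3e8:.4f} light-seconds")  # well under one light-second
```

The brutal part is the 1/R^4: doubling transmit power only buys you about 19% more range.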

There is a major design disadvantage to these systems (and a reason you'd need multiple types of radar in space) when detecting a hostile at range: the detector cells are small. On Earth, where you're trying to see horizon targets, this is a great way to increase what you can detect and how quickly you can detect it. In space, you could use the same size cells, but you would need a different server setup to parse the data differently because of SNR issues. Toss in several emitters and your noise turns to soup. Increasing cell size can increase effective range, but it also murders resolution.
Resolution is only marginally important because, unlike with Earth-bound objects, you have zero reason to keep the emitter near the hull.
Beyond the emitters and detectors, the servers themselves are a critical oversight for most people. The latency between these systems can easily be the difference between acquisition and interception, or suffering a hit, on Earth. In space, the amount of processing required for acquisition will take considerably more frames, especially if the target is trying to do anything at all to hide or, more likely, obscure itself. Why? On Earth, the horizon is 5 km-ish (ever wonder why cheap civilian systems are rated to 5 km? Now you know). An object moving on Earth is constrained by the atmosphere, so radar frames don't have to track things for very long: the potential for movement between detection frames is very small. In space, the potential for movement is quite high. Detecting "something" is easy... resolving that "something" to better than a several-km-wide "guess blob" is server-intensive.
Toss in more than one radar installation and the latency gets a bit crippling when dealing with things traveling at several hundred thousand km/h relative.

As to why optical systems just won't be used the way people assume they will:
It's back to that aperture thing again. A telescope has only a very narrow "detection" zone when paired with a computer. While you might "see" a 25-degree cone of space, you can only resolve a fraction of it if your frames are very short. Some of the best telescope designs also have unfortunate occlusion points, due to engineering choices made to get more use out of the collected light. The point being: cost-wise, it is far more expensive to cover a single steradian with optical detection than with radar detection, currently by orders of magnitude. That doesn't even touch the fact that there isn't enough proper silica to make that many (good) optics for more than a handful of ships. Radar is more reasonable and scalable, but as has been mentioned, you are a giant lightbulb.

To sum everything up: while we can design theoretical systems to do all this light-minute perfect-detection nonsense, no one will ever pay to build such systems, especially on a military (expendable) hull. Without a novel invention, early space "warfare" will be a very lackluster affair. Realistic detection schemes are not going to be "Hubble" (research) grade; they will be notches above military grade or comparable, which is far less fantastical than people think.

Logged

Thaago

  • Global Moderator
  • Admiral
  • *****
  • Posts: 7173
  • Harpoon Affectionado
    • View Profile
Re: functional ship class definitions
« Reply #88 on: April 27, 2020, 01:44:16 PM »

Derp, I totally plugged the numbers into the calculator wrong, and my emitted power was a factor of 10 too high! Nice catch, intrinsic_parity.

A side note from looking at the D* tables: rocket exhaust is hot, so it emits at shorter wavelengths, which means it can use a detector with a much, much higher D*. I'm having some trouble finding a good reference, but I think a cryogenically cooled Si detector can get something like 5 orders of magnitude better, so a given power output of exhaust would be seen 100+ times farther away than the same power of hull blackbody emission! (Again going by detector noise issues, not background noise issues.)

So rocket exhaust is not only really bright in terms of energy emitted, but we also have much better sensors to see the light it gives off.
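The scaling behind that "100+ times" figure, spelled out (assuming NEP goes as 1/D* at fixed detector area and bandwidth, and distance as 1/sqrt(NEP)):

```python
import math

# NEP ~ 1/D* (fixed area and bandwidth), and detection distance ~ 1/sqrt(NEP),
# so a factor-f improvement in D* buys a factor sqrt(f) in distance.
def distance_ratio(dstar_gain):
    return math.sqrt(dstar_gain)

print(distance_ratio(1e5))  # 5 orders of magnitude in D* -> ~316x farther
```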
Logged

Deshara

  • Admiral
  • *****
  • Posts: 1578
  • Suggestion Writer
    • View Profile
Re: functional ship class definitions
« Reply #89 on: May 07, 2020, 12:29:19 PM »

where did this thread go lol
Logged
Quote from: Deshara
I cant be blamed for what I said 5 minutes ago. I was a different person back then
Pages: 1 ... 4 5 [6] 7