Just to chime in here, as I have a bit of expertise with photon sensors in low-light conditions (PhD: not developing them myself, but I worked with them and a labmate was essentially developing them):
The sensitivity of a sensor is not a very good measure of whether you can actually find a signal; rather, one needs to fight the intrinsic noise of the detector. See:
https://www.hamamatsu.com/resources/pdf/ssd/infrared_kird0001e.pdf page 6 for D* curves and typical blackbody radiation wavelengths,
https://en.wikipedia.org/wiki/Specific_detectivity for an explanation of what D* is.
Anyone giving a formula where they take detector sensitivity, multiply by area and then by time... has no idea what they are doing if detector noise is what they are fighting. That approach is simply incorrect. More explanation below:
A key point that seems weird, but is true: the signal to noise ratio of the measurement goes as the square root of the duration of exposure. Looking for longer will give a better measurement, but only as the square root, which places severe limitations on practical measurements. Those who have worked with noise will recognize this from statistics: signal grows linearly with integration time while noise grows as the square root of integration time, so SNR goes as the square root of integration time.
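To make the square-root behavior concrete, here's a toy Python sketch with made-up signal and noise rates (not tied to any real detector):

```python
import math

def snr_after(t_s, signal_rate=1.0, noise_per_root_s=100.0):
    """Signal integrates linearly with time; independent noise adds in
    quadrature, so it grows as sqrt(t). Rates are made-up illustrative units."""
    return (signal_rate * t_s) / (noise_per_root_s * math.sqrt(t_s))

# A 100x longer exposure buys only 10x the SNR:
print(snr_after(100.0) / snr_after(1.0))   # -> ~10
```

This is why "just stare longer" stops paying off quickly: going from 1 hour to 4 hours only doubles your SNR.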
Next, if you look at the sensor chart I linked, a room temperature-ish spacecraft is going to be emitting at roughly 8 um. At 8 um we are looking at a D* of about 10^10 with the best available technology today (2020) that can be produced at a reasonable scale.
Converting D* into Noise Equivalent power (see wiki article for formulas) gives:
NEP = (Area/(2*integration time))^(.5) * (D*)^(-1)
Or, for a 1 cm^2 detector, 1 second integration, D* = 10^10
NEP = 7.07*10^(-11) W over a 1cm^2 detector.
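That conversion is easy to script. A minimal sketch, assuming the bandwidth convention Δf = 1/(2 × integration time) from the wiki article:

```python
import math

def nep_watts(area_cm2, t_int_s, d_star):
    """NEP = sqrt(A * df) / D*, with df = 1/(2t) for integration time t.
    Area in cm^2, D* in cm*sqrt(Hz)/W (Jones units), result in watts."""
    bandwidth_hz = 1.0 / (2.0 * t_int_s)
    return math.sqrt(area_cm2 * bandwidth_hz) / d_star

print(nep_watts(1.0, 1.0, 1e10))   # ~7.07e-11 W, as above
```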
What does this mean? It means that in order to get an SNR of 1 with respect to the detector noise, you need that much power focused onto the detector.
But something here seems off: the NEP goes down when detector area goes down, which means smaller sensors are more sensitive. This seems utterly bizarre and counterintuitive, until you realize that detector noise grows with detector area (the NEP scales as the square root of the area). This means that small pixels are better...
as long as the optics can focus the light correctly onto them. And here we get into Gaussian optics:
the bigger the lens, the smaller the focal point and the more light collected. Phew! Bigger telescopes do give better sensitivity.
Now, we might be tempted to use extremely small pixels (and this does give better image resolution!), but we need to remember that the detector is still an image plane, just one onto which the light has been concentrated by the mirrors/optics. If the pixels are smaller than the focused spot, the power collected from the target object gets spread out among multiple pixels: the power landing on each pixel shrinks linearly with pixel area, while the NEP shrinks only as its square root, so the per-pixel SNR gets worse as pixels shrink below the spot size!
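A toy model of that tradeoff (all numbers made up for illustration; the spot size and D* are loosely based on figures used elsewhere in this post). It shows that a pixel matched to the focused spot beats both a bigger and a smaller one:

```python
import math

def per_pixel_snr(pixel_area_cm2, spot_area_cm2=6.25e-6,
                  p_spot_w=1e-12, d_star=1e10):
    """SNR on one pixel when a spot of total power p_spot_w lands on the array.
    A pixel smaller than the spot only sees its area-fraction of the power;
    NEP grows as sqrt(pixel area). 1 s integration (df = 0.5 Hz)."""
    frac = min(1.0, pixel_area_cm2 / spot_area_cm2)
    nep = math.sqrt(pixel_area_cm2 * 0.5) / d_star
    return p_spot_w * frac / nep

print(per_pixel_snr(6.25e-6 * 4))   # oversized pixel: extra noise, no extra signal
print(per_pixel_snr(6.25e-6))       # pixel matched to the spot: best SNR
print(per_pixel_snr(6.25e-6 / 4))   # undersized pixel: signal spread over neighbors
```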
The answer then for maximum sensitivity: in our telescope design, we want all the light from the object to be focused onto 1 pixel. This is essentially governed by the familiar diffraction limits of gaussian optics, so for a given telescope it can be computed. (See:
https://astronomy.tools/calculators/ccd_suitability)
However, how big that pixel needs to be depends on a whole lot of stuff! The size of the telescope, the wavelength (10 um, ouch, that's bad!), the focal length, and the expected distance to target (arcsecond resolution) all play a big role. This is honestly too complex a topic to do justice here, so I'm going to take values from
https://www.mpifr-bonn.mpg.de/393197/detectors which also were similar to the pixel sizes that I used in my research, so I know they are reasonable. Say 25um on a side, or 625 um^2, or 6.25 * 10^-6 cm^2 (working in cm because the D* curves from hamamatsu are in cm).
Plugging this into the above formulas gives:
NEP (real optimized detector) = 1.77 * 10^-13 W
This is the power needed to be collected by the telescope optics and put onto the CCD in order to get a signal to noise ratio of 1 from the internal noise of the device. Note that this is also
peak sensitivity, and much of a signal will lie outside of it.
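For anyone wanting to check the arithmetic, here's the same NEP computation for the assumed 25 um pixel:

```python
import math

pixel_cm = 25e-4                   # 25 um expressed in cm
area_cm2 = pixel_cm ** 2           # 6.25e-6 cm^2
d_star = 1e10                      # cm*sqrt(Hz)/W, from the Hamamatsu curves
bandwidth_hz = 1.0 / (2.0 * 1.0)   # 1 s integration
nep = math.sqrt(area_cm2 * bandwidth_hz) / d_star
print(nep)   # ~1.77e-13 W, matching the figure above
```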
Quick sanity check to see if this is reasonable:
https://www.thorlabs.com/images/TabImages/Noise_Equivalent_Power_White_Paper.pdf
Hmm, their NEP at 1 Hz is 5 * 10^-12 W at visible frequencies... This tells me that my calculation is probably a bit optimistic, but it's in the right order of magnitude.
https://www.osapublishing.org/DirectPDFAccess/31E737C8-AC0C-88BB-56623B62C26541FB_423912/oe-27-25-37056.pdf?da=1&id=423912&seq=0&mobile=no actually gives that same value, but for an uncooled detector, so my figure being an order of magnitude better (for a cooled one) seems reasonable.
Further sanity check: a 10um photon carries ~2*10^-20 Joules, so the above corresponds to a photon flux of ~10^7 photons per second. Great! This isn't approaching single photon detector physics and we don't have to worry about that whole can of worms.
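The same sanity check in a few lines (h and c are just the standard physical constants):

```python
h, c = 6.626e-34, 2.998e8          # Planck constant (J*s), speed of light (m/s)
photon_energy_j = h * c / 10e-6    # ~2e-20 J per 10 um photon
nep_w = 1.77e-13                   # optimized-pixel NEP from above
flux = nep_w / photon_energy_j     # photons per second at SNR = 1
print(flux)                        # ~9e6 photons/s, i.e. order 10^7
```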
This assumes that there is 0 background from space: this is the noise background of the detector itself.
------
Calculation time! Assuming the 1600 m^2 spaceship from below at room temperature, using this calculator:
http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/radfrac.html
For wavelength bands, I'm using 1 um to 11 um: the sensitive region of the Hamamatsu detector.
It emits 2.5*10^6 watts in this band. (only 33.6% of total power according to the calculator).
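If you don't trust the calculator, you can integrate Planck's law directly. A sketch assuming a 300 K blackbody (my reading of "room temperature-ish") and the 1-11 um band:

```python
import math

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
T = 300.0  # K, assumed "room temperature-ish"

def planck_exitance(lam_m):
    """Spectral exitance of a blackbody: W per m^2 per m of wavelength."""
    return (2 * math.pi * h * c**2 / lam_m**5) / math.expm1(h * c / (lam_m * kB * T))

# trapezoidal integration over the 1-11 um band
n, lo, hi = 20000, 1e-6, 11e-6
dx = (hi - lo) / n
total = 0.5 * (planck_exitance(lo) + planck_exitance(hi))
total += sum(planck_exitance(lo + i * dx) for i in range(1, n))
in_band = total * dx

sigma = 5.670e-8                     # Stefan-Boltzmann constant
fraction = in_band / (sigma * T**4)  # in-band share of total emission
print(fraction)                      # ~0.34, close to the calculator's 33.6%
```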
Collected power = power * (collector area)/(area of sphere at range = 4 pi R^2), so to get an SNR of at least 1 we need:
NEP < collected power
Now, writing the collector area as pi * (lens radius)^2 and solving for the range R at which collected power equals the NEP (SNR = 1):
R = r * sqrt(P / (4 * NEP))
With P = 2.5*10^6 W and NEP = 1.77*10^-13 W, the detection distance comes out to:
1.9*10^9 (meters of distance per meter radius of telescope optic).
Let's say a 10 meter (huge!) optic:
1.9*10^10 meters, or about 1 light-minute, or ~0.13 AU.
This is for 1 second integration times. Longer exposures help less than you might hope: the NEP improves as the square root of time, but collected power falls off as 1/R^2, so detection distance grows only as the fourth root of exposure time. A 1 hour exposure gets 3600^(1/4) ≈ 7.8 times that, or roughly 1 AU. Pretty far!
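Putting the whole range estimate in one place (a sketch assuming the in-band power and the 1 s NEP from above; it dilutes the power over the sphere's full 4*pi*R^2 surface and lets the NEP improve as sqrt(t), so range grows as the fourth root of t):

```python
import math

P_W = 2.5e6          # ship's emission in the 1-11 um band (from the calculator)
NEP_1S_W = 1.77e-13  # detector-noise NEP for a 1 s integration (derived above)
AU = 1.496e11        # meters

def detection_range_m(mirror_radius_m, t_int_s=1.0):
    """Range R at which collected power equals the NEP (SNR = 1).
    Collected power = P * (pi r^2) / (4 pi R^2); NEP shrinks as sqrt(t)."""
    nep = NEP_1S_W / math.sqrt(t_int_s)
    return mirror_radius_m * math.sqrt(P_W / (4.0 * nep))

print(detection_range_m(10.0) / AU)          # 10 m radius optic, 1 s exposure
print(detection_range_m(10.0, 3600.0) / AU)  # same optic, 1 hour exposure
```

Note how slowly exposure pays off here: quadrupling the integration time only buys a factor of sqrt(2) in range.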
However, I want to stress that this is the maximum theoretical distance with a 10 meter radius telescope, current tech, and no background. It also assumes a perfect telescope with no losses and the correct focal length and field of view. That said, this is also for SNR = 1: good modern algorithms can detect signals about 10 times weaker than that, though false alarm rates go up at the same time (and distance goes up only as the square root of that factor).
Now, if the background light from space is stronger than the detector noise, then the background becomes the thing to beat instead. But this is the limit due to detector noise!