## Good Nights, Bad Nights, You Know I've Had My Share

January is the time of year when an old man’s thoughts turn to taking data once again. After more than three months off, we can fire up the data acquisition again in about a month. As eager as I am for the break when early October rolls around, I am just as ready to roll again come February. Lately, though, I wonder if I should wait a little later to start the data season. Bear with me and I will attempt to explain.

After a data season is over we assess the quality of each night within that season. First we find the mean signal over the entire season for each of the 800 brightest (non-variable) stars in the field. Next we find the mean signal for each of those stars on the night being tested. For each star we divide the nightly mean signal by the season mean signal; the standard deviation of the distribution of those 800 ratios is that night's quality score. Finally, we make a histogram of those standard deviation values, as shown below.
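The per-night calculation can be sketched in a few lines of NumPy. This is only an illustration, not the actual pipeline: the function name, the array layout (nights along rows, stars along columns), and the simulated signals are all my own assumptions.

```python
import numpy as np

def night_quality(signals):
    """Per-night quality score, as described above.

    signals: 2-D array of shape (n_nights, n_stars), the mean signal of
    each non-variable star on each night.  (Hypothetical layout -- the
    real pipeline's data format is not described in the post.)
    """
    season_mean = signals.mean(axis=0)   # season mean signal, per star
    ratios = signals / season_mean       # nightly mean / season mean
    return ratios.std(axis=1)            # std dev of the ratios, per night

# Simulated season: 5 nights x 800 stars, with one deliberately messy night.
rng = np.random.default_rng(0)
signals = rng.normal(1000.0, 20.0, size=(5, 800))
signals[2] *= rng.normal(1.0, 0.08, size=800)  # extra 8% scatter on night 2

sigma = night_quality(signals)
print(sigma)  # night 2 shows a much larger standard deviation
```

The messy night stands out immediately because its extra multiplicative scatter dominates the ratio distribution, which is exactly why the histogram of these values is a useful diagnostic.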

Now it gets tricky. There is no rule that says how big a value is too big, but we have unassailable evidence that odd outlying nights in our star-brightness measurements are nearly always the nights with larger standard deviation values. So, if we don’t remove any nights, we get very messy data. If we cut too many nights, we don’t have enough data to answer the questions we are trying to answer. There is no solid rule to help us here. We just have to make a decision, always keeping the limitations of our data set in mind. For the past two years we have selected 0.050 (5%) as our cut value, removing all nights with standard deviation values above it and remaining particularly wary of the nights we kept with values between about 0.035 and 0.050. This procedure removed 15% of the data nights from 2014 and left another 10% of the nights in the “wary eye” zone.
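Applying the cut is then a simple filter. A minimal sketch, assuming the per-night standard deviations are already computed (the night labels and values below are made up for illustration):

```python
CUT = 0.050   # nights above this are removed outright
WARY = 0.035  # nights between WARY and CUT are kept but flagged

# Hypothetical per-night quality scores from the procedure above.
nights = {"Feb 03": 0.021, "Feb 11": 0.047, "Mar 02": 0.068, "Apr 15": 0.030}

kept    = {n: s for n, s in nights.items() if s <= CUT}
removed = {n: s for n, s in nights.items() if s > CUT}
flagged = {n: s for n, s in kept.items() if s > WARY}

print(removed)  # {'Mar 02': 0.068}
print(flagged)  # {'Feb 11': 0.047}
```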

So far, so good. Right? The problem arose because I can’t stop making histograms and other graphs. So, I made a graph of the percentage of nights that passed our photometric quality tests for each month of the year over the past seven data seasons. It does not look good for February. The results are shown below.
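The monthly pass-rate tally behind a graph like that is straightforward to compute. A toy sketch with invented records (the real numbers come from seven seasons of quality tests):

```python
from collections import defaultdict

# Hypothetical (month, passed) records pooled across several seasons.
records = [("Feb", False), ("Feb", False), ("Feb", True),
           ("Mar", True), ("Mar", False),
           ("Jul", True), ("Jul", True)]

attempted = defaultdict(int)
passed = defaultdict(int)
for month, ok in records:
    attempted[month] += 1
    passed[month] += ok

pass_rate = {m: 100.0 * passed[m] / attempted[m] for m in attempted}
print(pass_rate)  # February fares worst in this toy sample
```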

Things work pretty well from July onward. There is a decent and steadily improving chance of success from March through June. The odds are squarely against us in February. We are probably better off sleeping through that month, and maybe the first half of March. Those early-season images also need to be taken at the most difficult, sleep-depriving time of the night, and the data runs are too short to be useful for our eclipsing-binary or flare-star projects. But stretching the data season really helps our study of the evolution of semi-regular variable stars, where we need a data span long enough in a given year to capture both a peak and a trough for as many stars as possible. Still, most years we get almost nothing prior to the middle of March. But after three months off, the lure of data is strong. We shall see how this plays out this year.