I have a huge time series of response variables from a physical experiment. In this experiment we are evaluating dozens of different conditions: we switch to each new condition, wait a while to let things settle out, record a few time intervals under the new condition, and then move on to the next one. When processing the data afterwards, I identify what look like nice stretches to represent each of the conditions.
But maybe there’s a standard method in statistics for accomplishing this, and if I knew what it was called I might find out that it is available in my environment.
What I’m actually doing is picking a time interval when I think things are stable, and then testing that pick by graphing the response variables over the interval. In each case I’d like to see what looks to my eye like Gaussian noise. What I don’t want to see is points at either end of the interval that look like outliers relative to all the other points. This is most stark at the end of the interval: too wide an interval catches points recorded just after a change, running rapidly off toward the new condition. It’s more subtle at the beginning, where the question is whether I waited long enough after the previous change before opening the interval.
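For instance, a crude automated stand-in for that eyeball check might be a normality test on the candidate interval. A minimal sketch, assuming the interval's points are in a 1-D NumPy array `y` (the function name and the choice of the Shapiro-Wilk test are mine, not a standard recipe):

```python
import numpy as np
from scipy import stats

def looks_gaussian(y, alpha=0.05):
    """Rough proxy for the eyeball test: fail to reject normality of the
    interval's points via the Shapiro-Wilk test."""
    stat, p = stats.shapiro(np.asarray(y))  # H0: the points are Gaussian
    return p > alpha  # True if nothing looks obviously non-Gaussian
```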
But in either case I’m doing something like asking what a t-test would say about whether this endpoint is a plausible member of the population represented by all the other points already included in the interval.
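For a single endpoint against the rest of the interval, the textbook form of that check is a prediction interval rather than a two-sample t-test: studentize the candidate point against the mean and standard deviation of the other points, inflated for one new observation, and compare to Student's t. A sketch under the same assumptions as above (`y` is a NumPy array with the candidate point last; the helper name is mine):

```python
import numpy as np
from scipy import stats

def endpoint_is_outlier(y, alpha=0.05):
    """Does the last point of y look like an outlier relative to the
    population represented by all the earlier points?"""
    body, candidate = y[:-1], y[-1]
    n = len(body)
    # Prediction-interval t statistic for one new observation
    t = (candidate - body.mean()) / (body.std(ddof=1) * np.sqrt(1 + 1 / n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)  # two-sided p-value
    return p < alpha
```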
I might try to automate what I’ve been doing the hard way by hand. I’d probably build an interval iteratively out from some stretch that is obviously fairly stable: first stepping later and later in time until I hit an obvious outlier, then similarly extending the interval earlier and earlier in time.
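If it helps, that procedure could be a greedy loop around the endpoint test. A sketch reusing `endpoint_is_outlier` from above (the seed indices are a hand-picked, obviously stable core containing at least a few points):

```python
import numpy as np

def grow_interval(y, seed_lo, seed_hi, alpha=0.05):
    """Grow the half-open index interval [seed_lo, seed_hi) over y:
    later in time until the next point fails the endpoint test,
    then earlier in time the same way."""
    lo, hi = seed_lo, seed_hi
    while hi < len(y) and not endpoint_is_outlier(np.append(y[lo:hi], y[hi]), alpha):
        hi += 1  # extend later until an obvious outlier appears
    while lo > 0 and not endpoint_is_outlier(np.append(y[lo:hi], y[lo - 1]), alpha):
        lo -= 1  # then extend earlier the same way
    return lo, hi
```

One caveat with a loop like this: each extension is a fresh test at the same alpha, so alpha governs per-step sensitivity rather than any overall false-alarm rate.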
But has somebody already described a standard method for doing this job?
Thank you!!