If we have a time series of data points, how can we tell whether these come from some stationary distribution, or whether there is some secular process coupled to an uncertainty-inducing step? Many statistical theorems start out by assuming stationarity. Suppose we ask for the probability of a model $d_i = f(t_i) + n_i$, where $t_i$ is the 'time coordinate' of the $i$-th data point $d_i$, $f$ is the secular process, and the noise $n_i$ comes from a stationary distribution $p_n$. How do we disentangle $f$ and $n$? By definition, the likelihood should be invariant under replacing $n_i \to n_{\sigma(i)}$ for any $\sigma$ in the permutation group on the $N$ data points. In other words, $\mathcal{L}(\{n_i\}) = \mathcal{L}(\{n_{\sigma(i)}\})$, but then this is going to require symmetrizing over all $N!$ permutations, which suggests the trivial solution of just defining $\mathcal{L}$ as the permutation-averaged product over the $p_n(d_{\sigma(i)} - f(t_i))$, which seems a bit heavy-handed. It is easily achieved, of course, but it will be a computational nightmare. If it could be done, I think it would be a pretty strong constraint on what $f$ and $p_n$ could be.
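For a very small data set the symmetrized likelihood can be brute-forced directly. Here is a minimal sketch, assuming a Gaussian stationary noise density; the notation follows $d_i = f(t_i) + n_i$, and names like `sym_likelihood` are illustrative choices of mine, not an established API:

```python
import itertools
import math

import numpy as np


def sym_likelihood(d, t, f, noise_pdf):
    """Average the i.i.d. product likelihood over all N! permutations of the data."""
    total = 0.0
    for sigma in itertools.permutations(range(len(d))):
        total += np.prod(noise_pdf(d[list(sigma)] - f(t)))
    return total / math.factorial(len(d))


gauss = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)  # assumed stationary noise density
t = np.array([0.0, 1.0, 2.0, 3.0])
d = 0.5 * t + np.array([0.1, -0.2, 0.05, 0.0])  # secular trend plus small noise
print(sym_likelihood(d, t, lambda t: 0.5 * t, gauss))
```

Note that the result is, by construction, exactly invariant under reordering the data values, which is the permutation-invariance demanded above; the factorial cost is also plain to see.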
One way to try to do this would be to do Monte-Carlo sampling over permutations $\sigma$ in calculating the cost function at each step of the MCMC for finding $f$. We really want to evaluate something like
$$\mathcal{L}(f, p_n) = \frac{1}{N!} \sum_{\sigma \in S_N} \prod_{i=1}^{N} p_n\big(d_{\sigma(i)} - f(t_i)\big),$$
and I think that for large enough $N$ the sum over $\sigma$ could be approximated by probabilistic sampling over some manageable set of permutations. The worry here is that $n$ will come out all constant, so the noise model preferred will be the trivial (delta-function) distribution, and all the variation will try to be fit by $f$. The other extreme is that $f$ will be a constant and all the data will be fit by the noise $n$.
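The Monte-Carlo approximation of the permutation sum can be sketched as follows; everything here (the function names, the Gaussian noise model, the number of sampled permutations) is an illustrative choice of mine:

```python
import numpy as np

rng = np.random.default_rng(0)


def mc_log_likelihood(d, t, f, noise_logpdf, n_perm=200):
    """Monte-Carlo estimate of the log of the permutation-averaged likelihood."""
    log_terms = np.empty(n_perm)
    for k in range(n_perm):
        sigma = rng.permutation(len(d))  # one sampled permutation of the data
        log_terms[k] = np.sum(noise_logpdf(d[sigma] - f(t)))
    # Log-mean-exp, so averaging over permutations stays numerically stable.
    m = log_terms.max()
    return m + np.log(np.mean(np.exp(log_terms - m)))


gauss_log = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)  # log of the noise density
t = np.linspace(0.0, 1.0, 50)
d = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)  # secular part plus noise
ll = mc_log_likelihood(d, t, lambda t: np.sin(2 * np.pi * t), gauss_log)
print(ll)
```

This estimate would then serve as the cost function inside an outer MCMC over candidate $f$'s; the log-mean-exp trick matters because the individual permutation terms span many orders of magnitude.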
The quick way might be to use something like singular spectrum analysis (SSA) on each permuted summand and demand that the SSA interpolations all agree: pick a trial $f$, compute the SSA for some sample of permutations, and vary $f$ in a standard MCMC to sample over the possible fits. The end result should be a de-noised prediction for the actual functional dependence $f(t)$.
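A minimal version of the SSA step might look like this. It is textbook SSA (trajectory matrix, truncated SVD, diagonal averaging), not necessarily the exact variant intended above, and all names and parameter values are mine:

```python
import numpy as np


def ssa_smooth(x, window, rank):
    """Reconstruct x from the leading `rank` singular components of its
    trajectory (Hankel) matrix -- basic singular spectrum analysis."""
    n = len(x)
    k = n - window + 1
    # Trajectory matrix: column j holds the lagged slice x[j : j + window].
    X = np.column_stack([x[j:j + window] for j in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-truncated reconstruction
    # Diagonal averaging (Hankelization) back to a single time series.
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1.0
    return out / counts


rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
d = np.sin(2 * np.pi * 3 * t) + 0.2 * rng.normal(size=t.size)
smooth = ssa_smooth(d, window=100, rank=2)
```

Applied to each permuted summand, the agreement (or not) of these reconstructions is what would be demanded in the scheme above; a single pure oscillation lives in two singular components, hence `rank=2` in this toy example.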