After relearning IDL and the Linux command line interface, I was ready to begin.
This week I'll be dealing with the fact that gravitationally bound objects (like dwarf galaxies) have approximately Gaussian velocity distributions, while tidally disrupted objects most likely do not.
So if we look at the velocity distribution of the stars in what we believe to be a dwarf galaxy, like Segue I or Bootes II, we expect a Gaussian. There is a process called sigma clipping, wherein one removes all data points more than a chosen number of standard deviations (typically 3) from the mean, then recomputes the mean and standard deviation from the surviving points. This process is repeated until no more data are removed.
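Here's roughly what that iterative loop looks like (sketched in Python rather than the IDL I'm actually working in):

```python
import numpy as np

def sigma_clip(values, n_sigma=3.0):
    """Iteratively remove points more than n_sigma standard deviations
    from the mean, recomputing the mean and standard deviation after
    each pass, until no points are removed (or nothing is left)."""
    data = np.asarray(values, dtype=float)
    while data.size > 0:
        mean, std = data.mean(), data.std()
        keep = np.abs(data - mean) <= n_sigma * std
        if keep.all():
            break  # converged: nothing outside the threshold
        data = data[keep]
    return data
```

Note that recomputing the standard deviation each pass is what makes the process capable of cascading: every clip shrinks the spread, which can push previously acceptable points outside the new threshold.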
According to conventional wisdom, a Gaussian distribution should maintain its form and not be sigma-clipped into oblivion. The same logic says that non-Gaussian distributions can continue to lose data after each sigma-clip until there is no data left.
My goal was to test a random Gaussian sample based on the mean velocity and uncertainties of Segue I's velocity distribution. I initially discovered that without incorporating the individual measurement errors, there was a very high probability (greater than 10%) that a Gaussian distribution would be completely obliterated by sigma-clipping! This would imply that a dwarf galaxy could appear to be tidally disrupted even while truly having a normally distributed velocity distribution.
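The experiment is a Monte Carlo test: draw many synthetic Gaussian samples, sigma-clip each one, and count how often nothing survives. A sketch of that idea is below; the star count, mean velocity, and dispersion are placeholder values I've made up for illustration, not Segue I's actual measured numbers:

```python
import numpy as np

# Placeholder values (NOT Segue I's real measurements):
N_STARS = 70       # assumed number of member stars
MEAN_V = 200.0     # assumed systemic velocity, km/s
SIGMA_V = 4.0      # assumed velocity dispersion, km/s
N_TRIALS = 1000    # number of synthetic samples to draw

rng = np.random.default_rng(42)

def sigma_clip(data, n_sigma=3.0):
    """Iterative sigma clip as described above."""
    data = np.asarray(data, dtype=float)
    while data.size > 0:
        mean, std = data.mean(), data.std()
        keep = np.abs(data - mean) <= n_sigma * std
        if keep.all():
            break
        data = data[keep]
    return data

# Count how many purely Gaussian samples get clipped down to nothing.
obliterated = 0
for _ in range(N_TRIALS):
    sample = rng.normal(MEAN_V, SIGMA_V, N_STARS)
    if sigma_clip(sample).size == 0:
        obliterated += 1

print(f"Fraction obliterated: {obliterated / N_TRIALS:.3f}")
```

The printed fraction is the quantity of interest: if it's non-negligible, sigma clipping alone can't distinguish a bound system from a disrupted one.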
However, once I added in the individual measurement errors I found that sigma-clipping (at least at 3 sigma) removed almost no stars from the distribution.
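One plausible way to fold the individual errors into the clip is to widen each star's threshold by its own measurement error, added in quadrature to the sample scatter. To be clear, this particular scheme is my illustrative assumption, not necessarily the one used in the actual analysis:

```python
import numpy as np

def sigma_clip_with_errors(v, err, n_sigma=3.0):
    """Sigma-clip velocities v, but give each star an individual
    threshold of n_sigma * sqrt(std^2 + err_i^2), so stars with
    large measurement errors are harder to clip.
    (This quadrature scheme is an assumption for illustration.)"""
    v = np.asarray(v, dtype=float)
    err = np.asarray(err, dtype=float)
    while v.size > 0:
        mean, std = v.mean(), v.std()
        keep = np.abs(v - mean) <= n_sigma * np.sqrt(std**2 + err**2)
        if keep.all():
            break
        v, err = v[keep], err[keep]  # keep errors aligned with velocities
    return v, err
```

With per-star thresholds inflated this way, an outlying velocity that carries a large uncertainty survives the clip, which is consistent with the behavior I saw: once errors are included, almost nothing gets removed.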
Tomorrow I'll repeat the process for sigma-clipping at 2.5 or 2.75 standard deviations.