by Weidong Liang
Beijing, 2013.05
In rejection sampling, to sample from a distribution \( P(x) \) that is hard to draw from directly, we draw a sample \( x_1 \) from a proposal distribution \( Q(x) \) (whose samples can be generated easily) and use the rejection rule \( u < \frac{P(x_1)}{M Q(x_1)} \) to accept or reject sample \( x_1 \), where \( u \sim U(0, 1) \) and \( M \) is a constant chosen such that \( M Q(x) \geq P(x) \) for all \( x \).
In the following simulation, we use rejection sampling to draw samples from a bimodal Gaussian distribution. The red curve is our \( Q(x) \), and the blue curve is our \( P(x) \). We first draw a sample \( x_1 \) from \( Q(x) \), then generate \( u \sim U(0, 1) \). Since \( M Q(x_1) \geq P(x_1) \), the ratio \( \frac{P(x_1)}{M Q(x_1)} \) lies within [0, 1]; we compare \( u \) against this ratio, and if \( u \) is less than it, the sample falls under the scaled target curve and we accept \( x_1 \); otherwise, we reject it.
RejectionSampling( p(x), q(x), QSampler() )  // Returns a sample of distribution P(x) per invocation
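A minimal sketch of this routine in Python is given below. The function and variable names (rejection_sampling, gaussian_pdf, p, q, M) and the specific parameters of the bimodal target and Gaussian proposal are illustrative assumptions, not the exact densities used in the simulation above.

import math
import random

def rejection_sampling(p, q, q_sampler, m):
    """Return one sample from P(x) via rejection sampling.
    p, q      : density functions P(x) and Q(x)
    q_sampler : draws a sample from Q(x)
    m         : constant with m * Q(x) >= P(x) for all x
    """
    while True:
        x1 = q_sampler()                    # propose a candidate from Q(x)
        u = random.random()                 # u ~ U(0, 1)
        if u < p(x1) / (m * q(x1)):         # accept with probability P(x1) / (M Q(x1))
            return x1

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def p(x):                                   # bimodal target (the "blue curve"); illustrative parameters
    return 0.5 * gaussian_pdf(x, -2.0, 0.8) + 0.5 * gaussian_pdf(x, 2.0, 0.8)

def q(x):                                   # broad Gaussian proposal (the "red curve"); illustrative parameters
    return gaussian_pdf(x, 0.0, 3.0)

M = 3.0                                     # chosen so that M * q(x) >= p(x) everywhere

samples = [rejection_sampling(p, q, lambda: random.gauss(0.0, 3.0), M)
           for _ in range(10000)]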
Rejection sampling requires the specification of the proposal distribution \( Q(x) \) and the constant \( M \). If \( M \) is too large, we would have to reject most of the samples (the acceptance rate is roughly \( \frac{1}{M} \)). If \( M \) is too small, we run the risk of violating the condition \( M Q(x) \geq P(x) \). Clearly, efficient use of this algorithm requires an appropriate choice of \( Q(x) \) and \( M \). In the simulation example given above we can select \( Q(x) \) and \( M \) visually, but for a high-dimensional distribution this would be impossible.
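In one dimension, a rough numerical alternative to choosing \( M \) by eye is to bound the ratio \( \frac{P(x)}{Q(x)} \) on a dense grid and add a safety margin. The sketch below is only a heuristic, reuses the illustrative p, q, and random import from the snippet above, and does not extend to the high-dimensional case just mentioned.

# Heuristic, 1-D only: bound p(x)/q(x) on a grid and pad the bound slightly.
xs = [-10.0 + 0.01 * i for i in range(2001)]        # grid covering the region of interest
M_est = 1.1 * max(p(x) / q(x) for x in xs)          # 10% safety margin on the bound

# Empirical check: the fraction of accepted proposals should be close to 1/M_est.
trials, accepted = 10000, 0
for _ in range(trials):
    x1 = random.gauss(0.0, 3.0)                     # draw from Q(x)
    if random.random() < p(x1) / (M_est * q(x1)):
        accepted += 1
print("M =", M_est, "empirical acceptance rate =", accepted / trials)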