
This function estimates the parameters of the Rice distribution from a sample, using the Jeffreys prior.

Usage

riccibo(x, R, burn, jump, b = 1)

Arguments

x

A numeric vector of values to which you want to fit the Rice model.

R

Number of samples in the chain (including burn-in samples).

burn

A positive integer specifying the number of warmup (burn-in) samples. These draws are also used for step-size adaptation, so they should not be used for inference. burn must not be larger than R.

jump

Thinning rate. Must be a positive integer.

Value

A list containing the following estimates:

acep

Acceptance rate of the Metropolis-Hastings algorithm.

eta

Point estimate for the eta parameter.

alpha

Point estimate for the alpha parameter.

CIL_eta

Lower limit of the 95% credible interval for the eta parameter.

CIS_eta

Upper limit of the 95% credible interval for the eta parameter.

CIL_alpha

Lower limit of the 95% credible interval for the alpha parameter.

CIS_alpha

Upper limit of the 95% credible interval for the alpha parameter.

Geweke.statistics

Z-statistic of Geweke's convergence diagnostic.

Details

In this section, we outline the Markov chain Monte Carlo (MCMC) algorithm for sampling from the joint posterior distribution. Because the marginal posterior distributions of \(\eta\) and \(\alpha\) do not have closed-form expressions, the Metropolis-Hastings (MH) algorithm is required. To obtain samples from the marginal distributions, we use the conditional distributions given by

\( p_1(\eta|\alpha,\boldsymbol{x})\propto \sqrt{(\rho+1)\Psi(\rho)- \rho}\prod_{i=1}^{n}I_0\left( \frac{\eta x_i}{\alpha^2} \right) \exp\left( -\sum_{i=1}^{n}\frac{x_i^2 + \eta^2} {2\alpha^2} \right)\)

\( p_2(\alpha|\eta,\boldsymbol{x})\propto\frac{\sqrt{(\rho+1) \Psi(\rho)-\rho}}{\alpha^{2n+2}}\prod_{i=1}^{n}I_0\left( \frac {\eta x_i}{\alpha^2} \right) \exp\left( -\sum_{i=1}^{n}\frac {x_i^2 + \eta^2}{2\alpha^2} \right)\)
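For concreteness, the likelihood factor shared by both conditionals can be evaluated on the log scale in base R. This sketch omits the Jeffreys prior factor \(\sqrt{(\rho+1)\Psi(\rho)-\rho}\), since \(\rho\) and \(\Psi\) are defined in the accompanying paper; `besselI(..., expon.scaled = TRUE)` is used to avoid overflow in \(I_0\) for large arguments.

```r
# Log of the common likelihood factor in p1 and p2 (Jeffreys prior term omitted):
#   sum_i log I0(eta * x_i / alpha^2) - sum_i (x_i^2 + eta^2) / (2 * alpha^2)
log_lik_rice <- function(eta, alpha, x) {
  z <- eta * x / alpha^2
  # expon.scaled = TRUE returns exp(-z) * I0(z); add z back on the log scale
  log_I0 <- log(besselI(z, nu = 0, expon.scaled = TRUE)) + z
  sum(log_I0) - sum(x^2 + eta^2) / (2 * alpha^2)
}

x <- c(4.2, 5.8, 7.1, 6.3)
log_lik_rice(eta = 6, alpha = 2, x = x)
```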

In this study, we adopt Gamma distributions \(q(\alpha^{(*)}|\alpha^{(j)},k)\) and \(q(\eta^{(*)}|\eta^{(j)},d)\) as proposal distributions for the parameters \(\alpha\) and \(\eta\), respectively, where \(d\) and \(k\) are hyperparameters that influence the convergence rate of the algorithm. Alternative proposal distributions can be used in place of the Gamma model: any distribution supported on the positive real line is admissible. The MH algorithm proceeds as follows:

  1. Compute the initial values of \(\eta^{(1)}\) and \(\alpha^{(1)}\) from moment estimators and initialize a counter \(j=1\);

  2. Generate a random number \(\eta^{(*)}\) from the \(Gamma(\eta^{(j)}, d)\) distribution;

  3. Calculate the acceptance probability, defined as:

    \( \Delta\left(\eta^{(j)},\eta^{(*)}\right)=\min\left(1, \frac {p_1\left(\eta^{(*)}|\alpha^{(j)},\boldsymbol{x}\right)} {p_1\left(\eta^{(j)}|\alpha^{(j)},\boldsymbol{x}\right)} \frac{q\left(\eta^{(j)}|\eta^{(*)},d\right)}{q \left(\eta^{(*)}|\eta^{(j)},d\right)}\right)\)

    Draw \(u\) from a \(Uniform(0,1)\) distribution;

  4. If \(\Delta\left(\eta^{(j)},\eta^{(*)}\right)\geq u\), accept the value \(\eta^{(*)}\) and set \(\eta^{(j+1)}=\eta^{(*)}\); otherwise, reject it and set \(\eta^{(j+1)}=\eta^{(j)}\);

  5. Generate a random number \(\alpha^{(*)}\) from the \(Gamma(\alpha^{(j)}, k)\) distribution;

  6. Calculate the acceptance probability, defined as: \( \Delta\left(\alpha^{(j)},\alpha^{(*)}\right)=\min \left(1, \frac{p_2\left(\alpha^{(*)}|\eta^{(j+1)}, \boldsymbol{x}\right)}{p_2\left(\alpha^{(j)}|\eta^{(j+1)}, \boldsymbol{x}\right)} \frac{q\left(\alpha^{(j)}| \alpha^{(*)},k\right)}{q\left(\alpha^{(*)}|\alpha^{(j)},k \right)}\right)\)

    Draw \(u\) from a \(Uniform(0,1)\) distribution;

  7. If \(\Delta\left(\alpha^{(j)},\alpha^{(*)}\right)\geq u\), accept the value \(\alpha^{(*)}\) and set \(\alpha^{(j+1)}=\alpha^{(*)}\); otherwise, reject it and set \(\alpha^{(j+1)}=\alpha^{(j)}\);

  8. Increment the counter from \(j\) to \(j+1\) and repeat steps 2-7 until the chains converge.
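The MH updates above (steps 2-7 for a single parameter) can be sketched in base R. This is a minimal stand-in, not the package implementation: the Gamma proposal is parameterized here with shape = d * current and rate = d, so its mean equals the current state, and the target uses only the likelihood part of \(p_1\) (the Jeffreys prior factor is defined in the accompanying paper).

```r
# One MH update for a positive-valued parameter, given its log target density.
# Assumes a Gamma proposal centered at the current value (shape = d * current,
# rate = d); d plays the role of the convergence-rate hyperparameter.
mh_step <- function(current, d, log_target) {
  prop <- rgamma(1, shape = d * current, rate = d)
  # log acceptance ratio: target ratio times the proposal-density correction
  log_acc <- log_target(prop) - log_target(current) +
    dgamma(current, shape = d * prop, rate = d, log = TRUE) -
    dgamma(prop, shape = d * current, rate = d, log = TRUE)
  if (log(runif(1)) <= log_acc) prop else current
}

# Example: update eta given alpha, using the likelihood part of p1 only.
x <- c(4.2, 5.8, 7.1, 6.3)
alpha <- 2
log_p1 <- function(eta) {
  z <- eta * x / alpha^2
  sum(log(besselI(z, nu = 0, expon.scaled = TRUE)) + z) -
    sum(x^2 + eta^2) / (2 * alpha^2)
}
set.seed(1)
eta_new <- mh_step(current = 6, d = 50, log_target = log_p1)
```

Alternating this update between \(\eta\) and \(\alpha\) (with each conditional evaluated at the other parameter's latest value) reproduces the loop in steps 2-8.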

The MCMC methodology detailed above was used to generate 5,500 samples from the posterior distributions. The first 500 samples were discarded as the burn-in phase, and the remaining draws were thinned by keeping every 5th sample. This procedure yielded two chains (one per parameter), each containing \(1000\) samples, which were used to derive the posterior statistical summaries. Initial values for the simulation were obtained from the moment estimators, which have closed-form expressions.
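The sample-count arithmetic can be checked directly from R, burn, and jump, assuming thinning keeps every jump-th draw after the burn-in phase:

```r
R <- 5500; burn <- 500; jump <- 5
# indices of the retained posterior draws after burn-in and thinning
kept <- seq(from = burn + jump, to = R, by = jump)
length(kept)   # 1000 samples per chain
```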

To assess the convergence of the Markov chains, the Geweke diagnostic was employed. It compares the means of the first \(10\%\) and the last \(50\%\) of a chain: if the two means are equal, the samples can be regarded as draws from the stationary (posterior) distribution. Under this hypothesis, the Geweke statistic asymptotically follows a standard normal distribution as the chain length grows. The computation of the Z-score assumes that the two segments of the chain are asymptotically independent. Convergence was inferred when the Geweke statistic fell within the critical range of \(-1.96\) to \(1.96\).
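A simplified version of this diagnostic can be hand-rolled in base R. Note this sketch uses plain sample variances for the two segments, whereas coda's geweke.diag estimates the segment variances from the spectral density; the function name is illustrative.

```r
# Simplified Geweke Z: compare means of the first frac1 and last frac2
# fractions of a chain, treating the two segments as independent.
geweke_z <- function(chain, frac1 = 0.1, frac2 = 0.5) {
  n <- length(chain)
  a <- chain[seq_len(floor(frac1 * n))]
  b <- chain[seq(n - floor(frac2 * n) + 1, n)]
  (mean(a) - mean(b)) / sqrt(var(a) / length(a) + var(b) / length(b))
}

set.seed(42)
chain <- rnorm(1000)       # stand-in for a posterior chain from riccibo
abs(geweke_z(chain)) < 1.96  # TRUE suggests no evidence against convergence
```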

References

Rice, S. O. (1944). Mathematical analysis of random noise. The Bell System Technical Journal, 23(3), 282-332.

Jeffreys, H. (1946). An invariant form for the prior probability in estimation problems. Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences, 186(1007), 453-461.

See also

mcmc: Function for creating MCMC objects in the coda package.

geweke.diag: Function for Geweke's convergence diagnostic in the coda package.

rrice: Random generation for the Rice distribution in the VGAM package.

Examples

if (FALSE) {
# Install the `riccib` package from the GitHub repository.
if (!requireNamespace("remotes", quietly = TRUE)) {
  install.packages("remotes")
}
remotes::install_github("jeachire/riccib")

# Load `riccib` package and its dependencies.
library(riccib)
library(MCMCpack)
library(coda)
library(VGAM)

# Generate a sample from the Rice distribution.
peta <- 6       # eta parameter (noncentrality)
palpha <- 2     # alpha parameter (scale)
n <- 30         # sample size
x <- rrice(n, vee = peta, sigma = palpha)

# Fit the model with the riccibo function from the `riccib` package.
fit <- riccibo(x, R = 5500, burn = 500, jump = 5)
}