table() function

Retrieving information from the table function:

x = c("river", "stream", "stream", "stream", "river", "river", "river", "stream", "stream", "stream", "flood")
y = table(x)
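To actually pull information back out of y, a few one-liners (repeating the x from above):

```r
x = c("river", "stream", "stream", "stream", "river", "river", "river",
      "stream", "stream", "stream", "flood")
y = table(x)

y              # the full table of counts
names(y)       # the category names: "flood" "river" "stream"
y["river"]     # the count for a single category, looked up by name
as.vector(y)   # strip the names, keeping only the counts
```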

Weighted Regression

Here is a quick example of how to recreate weighted regression without using the weights = ... argument of lm(). It also shows how to supply your own intercept in R.

x0 = rep(1, 100)   # explicit intercept column, so it gets multiplied by the weights too
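A minimal sketch of the rest of the idea (x1, y, and the weights w here are made up for illustration): multiply the response and every column of the design matrix, including the explicit intercept column x0, by sqrt(w), then fit with the automatic intercept suppressed. This reproduces lm() with weights = w.

```r
set.seed(1)
n  = 100
x0 = rep(1, n)                 # explicit intercept column
x1 = rnorm(n)                  # a made-up covariate
y  = 2 + 3 * x1 + rnorm(n)     # a made-up response
w  = runif(n, 0.5, 2)          # made-up weights

# The usual way, with lm()'s weights argument
fit1 = lm(y ~ x1, weights = w)

# By hand: scale everything by sqrt(w) and drop the automatic intercept (~ 0 + ...)
sw   = sqrt(w)
fit2 = lm(I(sw * y) ~ 0 + I(sw * x0) + I(sw * x1))

coef(fit1)   # intercept and slope
coef(fit2)   # the same numbers from the hand-rolled version
```

This works because weighted least squares minimizes sum(w * (y - Xb)^2), which is exactly ordinary least squares on the sqrt(w)-scaled data.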


NHMM Package

Download R from CRAN. Open R and install the dependencies at the prompt: install.packages("Rcpp"), install.packages("msm"), install.packages("MCMCpack"), and install.packages("BayesLogit"). Then place the NHMM folder (link below) in the same directory where the new libraries were just installed.

Open R and load the package at the prompt with library(NHMM). It contains the modeling functions NHMM, HMM, and NHMM_MVN, and the output functions OBIC, Oz, Oemparams, OWcoef, OXcoef, and OQQ. For the help files, use ?NHMM or ?OBIC.

Progress bar

I usually use a print() statement inside for loops, but this progress bar seems better. I wish more packages included one.

Q  = 10000
pb = txtProgressBar(min = 0, max = Q, style = 3)
for (i in 1:Q)
  setTxtProgressBar(pb, i)
close(pb)   # finish the bar and return to a fresh line

Inverse Gamma distribution

I hate loading whole libraries for quick one-line distributions.

dinvgamma = function(x, a, b){ exp(a*log(b) - lgamma(a) - (a+1)*log(x) - b/x) }   # density of InvGamma(shape a, scale b)

rinvgamma = function(n, a, b){ 1/rgamma(n, a, b) }   # draws from InvGamma(shape a, scale b); at one point the MCMCpack version of this function used different parameterizations on Windows and Unix
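A quick sanity check on the two one-liners (the shape and scale values here are arbitrary): the density should integrate to 1, and for shape a > 1 the sample mean should be near b/(a - 1).

```r
dinvgamma = function(x, a, b){ exp(a*log(b) - lgamma(a) - (a+1)*log(x) - b/x) }
rinvgamma = function(n, a, b){ 1/rgamma(n, a, b) }

a = 3; b = 2                                       # arbitrary shape and scale
integrate(dinvgamma, 0, Inf, a = a, b = b)$value   # should be very close to 1

set.seed(1)
draws = rinvgamma(1e5, a, b)
mean(draws)                                        # should be near b/(a - 1) = 1
```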



Converting Netcdf files to R

Occasionally, I need to open .nc files. Calling names() on the object itself does not work quite as I expected; instead, names(data$var) lists the variables in the dataset.

data = open.ncdf("filename.nc")       # open.ncdf() and get.var.ncdf() are from the ncdf package
print(data)                           # gives an overview of the contents of the netcdf file
names(data$var)                       # gives the actual variable names
var1_R = get.var.ncdf(data, "var1")   # assuming "var1" is one of the variables in the dataset

95% probability intervals (PI)

I typically work with models and MCMC algorithms with many unknown parameters. At the end of the algorithm, all I really want is the mean and 95% PI. Here is some code that only saves the extreme 5% of the posterior draws for a parameter (below the 2.5% and above the 97.5% quantiles). This cuts down on memory if there are a lot of unknown parameters.
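A minimal sketch of the idea for a single parameter (the names Q, theta, etc. are made up, and rnorm() stands in for an actual posterior draw): keep a running sum for the mean plus two small buffers holding only the k smallest and k largest draws, where k = ceiling(0.025 * Q). At the end, the largest value in the "low" buffer and the smallest value in the "high" buffer are the approximate (2.5%, 97.5%) endpoints.

```r
set.seed(1)
Q    = 10000                # total MCMC iterations
k    = ceiling(0.025 * Q)   # size of each tail buffer (250 here)
low  = rep( Inf, k)         # will end up holding the k smallest draws
high = rep(-Inf, k)         # will end up holding the k largest draws
s    = 0                    # running sum for the posterior mean

for (i in 1:Q) {
  theta = rnorm(1)          # stand-in for one posterior draw of the parameter

  s = s + theta
  if (theta < max(low))  low[which.max(low)]   = theta   # displace the largest "small" value
  if (theta > min(high)) high[which.min(high)] = theta   # displace the smallest "large" value
}

post.mean = s / Q
PI = c(max(low), min(high))   # approximate (2.5%, 97.5%) interval
post.mean; PI
```

So instead of storing all Q draws, only 2k values plus a running sum are kept per parameter. The max/min scan inside the loop costs O(k) per iteration, which is fine as a sketch; for many parameters it could be vectorized or replaced with a smarter buffer.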