Hi All

I would like to move this blog in a new-ish direction: survival analysis. The main reasons for this are:

- it is useful,
- it is an area that ML has not got its dirty paws on,
- it is a field that I did not study in my Ph.D., and
- Wikipedia does not seem to do a good job of describing it.

Starting at the beginning, because that always seems like a good place to start, survival analysis is the study of the duration from an origin to an event. In concrete terms (and avoiding depressing experiments involving cancer patients) think about phone call lengths. The origin time is the time the call is started, the event time is the time the call ends, and the duration is the difference. In addition to calls that end there are calls that are censored out; a call is censored out if it is still ongoing at the time the data is collected. In such cases the duration is the longest time that the call is known to still be ongoing.

To summarize survival data it is common to use either a survival function or a hazard function. If $f(t)$ is the pdf of the survival times and $F(t)$ is the cdf, then $S(t) = 1 - F(t)$ is the survival function. That is, it is the probability that a “phone call” will last at least until time $t$. The hazard function is the instantaneous rate of failure (i.e., “hanging up”) given that an observation has lasted until that time (i.e., given that I have been on hold at Air Canada for 45 mins, what is the probability that they answer me now). This is calculated as the ratio of the pdf to the survival function; $h(t) = f(t)/S(t)$.
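These two definitions can be checked numerically. Here is a quick sketch in R, using an exponential distribution with rate 2 (an arbitrary choice for illustration): for exponentially distributed durations the hazard $f(t)/S(t)$ works out to a constant.

```r
# Sanity check: for exponential durations the hazard f(t)/S(t) is constant.
rate <- 2
t <- seq(0.1, 3, by = 0.1)

f <- dexp(t, rate = rate)      # pdf of the durations
S <- 1 - pexp(t, rate = rate)  # survival function S(t) = 1 - F(t)
h <- f / S                     # hazard h(t) = f(t) / S(t)

print(range(h))                # both ends equal the rate, 2
```

The constant hazard is the memoryless property of the exponential: having waited 45 minutes does not change the instantaneous rate of being answered.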

The survival function is commonly empirically estimated in two ways: the Kaplan–Meier estimator and the Nelson–Aalen estimator. In a later post I plan to give some details about another method that I have been developing. The remainder of this post discusses the Kaplan–Meier method.

Consider an **ordered** sample of observed durations ($t_1$ to $t_k$ with $t_1 \le t_2 \le \dots \le t_k$); each duration is either to an event (“hanging up”) or to censoring (the phone call is ongoing at the time of the study). The estimate is $\hat{S}(t) = \prod_{i:\, t_i \le t} \left(1 - \frac{d_i}{n_i}\right)$, where $n_i$ is the number of observations at risk just before time $t_i$, and $d_i$ is the number of “hang ups” at time $t_i$. That is, $\hat{S}(t)$ is a step function with a different value for each interval in the observed data. This should be sort of intuitive, as it is very similar to the empirical cdf estimate used in bootstrapping.
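To make the product-limit formula concrete, here is a toy calculation in R on five made-up durations, one of them censored. Note that the factor $1 - d_i/n_i$ equals 1 at the censored time, so a censored observation only affects the estimate by shrinking later risk sets.

```r
# Toy Kaplan–Meier calculation: five made-up durations, third one censored.
t     <- c(1, 2, 3, 4, 5)  # ordered observed durations
event <- c(1, 1, 0, 1, 1)  # 1 = event ("hang up"), 0 = censored

n <- length(t) - seq_along(t) + 1  # number at risk just before each time
d <- event                         # events at each time (no ties here)

S <- cumprod(1 - d / n)            # product-limit estimate after each time
print(S)                           # 0.8 0.6 0.6 0.3 0.0
```

The step at time 3 is flat (0.6 stays 0.6) because that observation was censored, not an event.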

The confidence bounds for the Kaplan–Meier estimate can be derived as follows. Starting with Greenwood’s formula ($\mathrm{Var}[\hat S(t)] \approx \hat S(t)^2 \sum_{i:\, t_i \le t} \frac{d_i}{n_i(n_i - d_i)}$), the variance of $\log(-\log \hat S(t))$ is calculated. $\log(-\log \hat S(t))$ is then assumed to be Gaussian (if you know how this is justified please let me know; I have not found a formal proof yet) and the confidence bounds are found by transforming the bounds for $\log(-\log \hat S(t))$ back with $x \mapsto \exp(-\exp(x))$. The reason that Greenwood’s formula is not used directly is that the survival function must be bounded by 0 and 1, and adding a delta to $\hat S(t)$ may land outside this interval.

Greenwood’s formula is essentially an application of the delta-method approximation $\mathrm{Var}[g(X)] \approx g'(\mu)^2\,\mathrm{Var}[X]$. Recall (really this is just up the page a few lines so recall is not necessary but I hate starting a sentence with an equation) $\hat S(t) = \prod_{i:\, t_i \le t} (1 - d_i/n_i)$. Thus $\log \hat S(t) = \sum_{i:\, t_i \le t} \log(1 - d_i/n_i)$. The variance of $d_i/n_i$ is $\frac{(d_i/n_i)(1 - d_i/n_i)}{n_i}$; this bold assertion is justified with the hand-wavy response that $d_i$ has a binomial distribution. So moving full circle, $\mathrm{Var}[\log(1 - d_i/n_i)] \approx \frac{\mathrm{Var}[d_i/n_i]}{(1 - d_i/n_i)^2}$, which can be simplified to $\frac{d_i}{n_i(n_i - d_i)}$. Summing these (assumed independent) terms gives $\mathrm{Var}[\log \hat S(t)] \approx \sum_{i:\, t_i \le t} \frac{d_i}{n_i(n_i - d_i)}$. Using the delta method again (I really hope this approximation is good!) $\mathrm{Var}[\log \hat S(t)] \approx \frac{\mathrm{Var}[\hat S(t)]}{\hat S(t)^2}$. Putting all these pieces together, $\frac{\mathrm{Var}[\hat S(t)]}{\hat S(t)^2} \approx \sum_{i:\, t_i \le t} \frac{d_i}{n_i(n_i - d_i)}$, which can be solved for $\mathrm{Var}[\hat S(t)]$, giving Greenwood’s formula $\mathrm{Var}[\hat S(t)] \approx \hat S(t)^2 \sum_{i:\, t_i \le t} \frac{d_i}{n_i(n_i - d_i)}$.
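The pieces of this derivation can be followed step by step on a small made-up sample. Note the Greenwood sum accumulates $d_i/(n_i(n_i - d_i))$ only at event times, since censored times contribute zero.

```r
# Greenwood's variance estimate, step by step, on four made-up durations.
t     <- c(1, 2, 3, 4)
event <- c(1, 1, 0, 0)  # last two observations are censored

n <- length(t) - seq_along(t) + 1  # at risk just before each time
d <- event

S    <- cumprod(1 - d / n)         # Kaplan-Meier estimate: 0.75 0.50 0.50 0.50
gw   <- cumsum(d / (n * (n - d)))  # Greenwood sum; censored times add nothing
varS <- S^2 * gw                   # Greenwood's formula for Var(S-hat)
print(varS)
```

For this sample the variance is 3/64 after the first event and 1/16 from the second event onwards; it stays flat over the censored times because the sum picks up no new terms there.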

To get the variance of $\log(-\log \hat S(t))$ the delta method is once again used. This time $g(x) = \log(-x)$ is applied to $x = \log \hat S(t)$; $\mathrm{Var}[\log(-\log \hat S(t))] \approx \frac{\mathrm{Var}[\log \hat S(t)]}{(\log \hat S(t))^2} = \frac{1}{(\log \hat S(t))^2} \sum_{i:\, t_i \le t} \frac{d_i}{n_i(n_i - d_i)}$.

Now, with the completely unjustified assumption that $\log(-\log \hat S(t))$ is Gaussian, a 95% confidence bound (yes, I am not using the Bayesian terminology of central credibility interval because this whole process is frequentist) is then $\log(-\log \hat S(t)) \pm 1.96 \sqrt{\mathrm{Var}[\log(-\log \hat S(t))]}$.

And finally the complementary log-log space confidence bound is transformed back: $\hat S(t)^{\exp\left(\pm 1.96 \sqrt{\sum_{i:\, t_i \le t} d_i/(n_i(n_i - d_i))} \,/\, \log \hat S(t)\right)}$.
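Putting these pieces together, here is a small sketch (made-up data again) of the complementary log-log bounds; it mirrors the UB/LB lines in the full code at the end of the post. Because both bounds have the form $\hat S^{\exp(\cdot)}$ with $0 < \hat S < 1$, they land inside (0, 1) by construction.

```r
# 95% complementary log-log confidence bounds for a toy Kaplan-Meier estimate.
t     <- c(1, 2, 3, 4)
event <- c(1, 1, 0, 0)

n <- length(t) - seq_along(t) + 1  # at risk just before each time
d <- event

S  <- cumprod(1 - d / n)           # Kaplan-Meier estimate
gw <- cumsum(d / (n * (n - d)))    # Greenwood sum
se <- sqrt(gw) / log(S)            # se of log(-log S); note log(S) < 0

UB <- S^exp( 1.96 * se)            # exp(1.96 * se) < 1, so this bound sits above S
LB <- S^exp(-1.96 * se)

print(cbind(S, LB, UB))            # LB < S < UB, all inside (0, 1)
```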

So before going on to some code that implements this, I just want to point out that it does feel strange to me that the uncertainty of $\hat S(t)$ is not a beta distribution at each step. Again, if anyone has any thoughts on this please let me know.

The code to plot a Kaplan–Meier estimate of a survival function is given below. Note that the R package survival has a faster implementation of this process; it looks something like “plot(survfit(Surv(time = t, event = event) ~ 1))”. The plot generated by my code for the simulated data is shown in figure 1.

Cheers. Tune in next week for more survival analysis (assuming you last that long).

# Simulate 100 exponential durations; roughly 10% are randomly censored
# at a uniform fraction of their true length.
t <- sort(rexp(100))
r <- runif(100)
event <- rep(1, 100)
event[r < 0.1] <- 0
cen <- runif(100)
t[event == 0] <- t[event == 0] * cen[event == 0]
index <- sort(t, index.return = TRUE)$ix
t <- t[index]
event <- event[index]

g.KM <- function(t, event) {
  k <- length(t)
  ndn <- rep(1, k + 1)   # per-interval survival factors 1 - d_i/n_i
  dnnd <- rep(0, k + 1)  # running Greenwood sum
  for (i in 1:k) {
    n.i <- length(t[t >= t[i]])  # the number of observations at risk
    d.i <- event[i]              # the number that die at the ith observed time
    c.i <- abs(1 - event[i])     # 1 if the ith observation is censored
    ndn[i + 1] <- (n.i - d.i - c.i) / (n.i - c.i)  # prob of surviving this interval
    # Greenwood sum; the small constant avoids 0/0 when the last observation
    # at risk is an event
    dnnd[i + 1] <- dnnd[i] + d.i / (n.i * (n.i - d.i) + 1e-8)
  }
  S <- ndn
  for (i in 2:(k + 1)) {
    S[i] <- prod(ndn[1:i])
  }
  UB <- S^exp( 1.96 * sqrt(dnnd) / log(S))  # complementary log-log bounds
  LB <- S^exp(-1.96 * sqrt(dnnd) / log(S))
  df <- data.frame(time = c(0, t), Event = c(1, event), ndn, S, LB, UB)
  print(df)
  tt <- c(0, t)  # prepend time zero so the step function starts at S = 1
  plot(tt, seq(1, 0, length.out = k + 1), type = "n",
       xlab = "time", ylab = "proportion alive")
  for (i in 2:(k + 1)) {
    segments(x0 = tt[i - 1], x1 = tt[i], y0 = UB[i - 1], y1 = UB[i - 1], lwd = 1, col = "grey")
    segments(x0 = tt[i], x1 = tt[i], y0 = UB[i - 1], y1 = UB[i], lwd = 1, col = "grey")
    segments(x0 = tt[i - 1], x1 = tt[i], y0 = LB[i - 1], y1 = LB[i - 1], lwd = 1, col = "grey")
    segments(x0 = tt[i], x1 = tt[i], y0 = LB[i - 1], y1 = LB[i], lwd = 1, col = "grey")
    segments(x0 = tt[i - 1], x1 = tt[i], y0 = S[i - 1], y1 = S[i - 1], lwd = 3, col = "blue")
    segments(x0 = tt[i], x1 = tt[i], y0 = S[i - 1], y1 = S[i], lwd = 3, col = "blue")
  }
  points(tt[c(FALSE, event == 0)], S[c(FALSE, event == 0)], pch = 3)  # mark censored times
  t.frame <- seq(0, 2 * max(t), length.out = 1001)
  S.true <- 1 - pexp(t.frame)
  lines(t.frame, S.true, col = "red", lwd = 2)  # true survival function of the simulation
}

g.KM(t, event)
