# Annealing Neural Networks: Another keeping it real post

Hi all

This post discusses a couple of strategies for estimating the weights (training) of a neural network (NN). Both strategies are based on down sampling the data (${\bf d}$), and both make an analogy with annealing; i.e., ignoring some of the observations makes the objective function flatter and thus easier to traverse. This is roughly equivalent to raising the likelihood to a power of $1/t$, as is done in simulated annealing. If the objective function is $\phi({\bf w})= \log ( \Pi_{i=1}^{N} P[d_i|{\bf w}])$ then $\frac{1}{t}\phi({\bf w}) \approx \log ( \Pi_{i \in S} P[d_i|{\bf w}])$ where $S$ is a random sample from the $N$ observations of size $\sim N/t$. For example, if $t = 2$ then $S$ should be of size $N/2$; in other words, using half the data is similar to raising the objective function to a temperature of 2. This is intuitive (at least to me) in that if there is enough data, most of it will just be repeating itself.
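To see the analogy numerically, here is a quick Python sketch. The per-observation log-likelihood values are made up; the point is just that dividing the full log-likelihood by $t$ and summing a random subset of size $N/t$ give roughly the same number:

```python
import random, math

random.seed(0)

# toy per-observation log-likelihood contributions (hypothetical values)
N = 10000
log_p = [math.log(0.5 + 0.4 * random.random()) for _ in range(N)]

t = 2.0
tempered = sum(log_p) / t                 # likelihood raised to 1/t, on the log scale
S = random.sample(range(N), int(N / t))   # random subset of size ~N/t
subsampled = sum(log_p[i] for i in S)     # log-likelihood of the subset

# the two quantities agree up to sampling noise
print(tempered, subsampled)
```

Up to sampling noise the two agree, which is the sense in which down sampling mimics a temperature.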

Before going into the details I would like to highlight that neither of these worked particularly well. In general, a solution of similar quality could be found faster (in fewer forward computations) just by using all the data with the backwards propagation with random jitter of my earlier post, starting a few times (at different random locations), and taking the best result. This might mean that my test problem is too simple (a global optimum is easy to find, so there is no need to explore the whole space) or that the methods themselves are just bad.

The first method I am calling batched annealing. Here the data (${\bf d}$, with corresponding feature matrix $A$) are organized in a random order. Training is then conducted repeatedly; at each iteration one or more data points are added to the training set. Intuitively this is how I personally memorize things; I start with a small set of things to memorize and add more as I go. In pseudocode:

index_rand = random_order(1:n) # randomly sort the data
d_star = d[index_rand]
A_star = A[index_rand, ]
for (i in 1:n) # loop over observations
{
d_temp = d_star[1:i] # down sample the data
A_temp = A_star[1:i, ] # down sample the feature matrix
W_old = train_NN(d_temp, A_temp, W_old)
}
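As a toy illustration of the batched-annealing loop (not the NN itself; here "training" is just a damped step toward the subset mean, and all numbers are made up):

```python
import random

random.seed(2)

# toy data: estimating a mean plays the role of train_NN (hypothetical stand-in)
n = 200
d = [random.gauss(5.0, 1.0) for _ in range(n)]

random.shuffle(d)          # organize the data in a random order
w_old = 0.0                # starting "weights"
for i in range(1, n + 1):
    d_temp = d[:i]         # down sample: train on the first i observations
    # "train" on the subset: one damped step toward the subset optimum
    target = sum(d_temp) / len(d_temp)
    w_old = w_old + 0.5 * (target - w_old)

print(w_old)               # ends up close to the true mean of 5
```

Each pass re-trains starting from the previous weights, so early observations dominate the trajectory, which is exactly the failure mode described next.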

What I found doing this was that very often I would get two subsets of data that could not both be fit; whichever one got more members into the training set first would be fit and the other would be "ignored", i.e., poorly fit. The issue is that even if fitting the other subset would have produced a better solution (whether by hold-out error, likelihood, or misfit), the optimizer would not find that solution if the competing subset had its data added first.

The second method is batched tempering (at least I am good at coming up with names for these methods…), an analogy to parallel tempering. Again, data are excluded to mimic higher temperatures. This is a bit more complex than the first method. Instead of one optimizer there are $k$ optimizers. These optimizers exchange information every 1000 iterations. By "exchange information" I really mean they just re-order themselves such that the better optimizers (those with a better objective function; higher for log-likelihood, lower for misfit) are more likely to use the fuller data sets. In this way optimizers that have found good weights locally optimize them, while ones that have not found good weight values keep exploring the space. I also re-sample the data subsets once the new optimizers are assigned.

In true parallel tempering the individual Markov chains (which are analogous to the optimizers used here) swap with randomly chosen partners. Here, because there is no requirement to make the process ergodic, I simply order the optimizers using a weighted random ordering. Note that it is very important not to force the optimizers into a deterministic, strict ordering; doing so would stop the best chains from having an opportunity to explore around their space.
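A small Python sketch of the weighted random ordering step, with hypothetical misfit values. It reuses the same weighting idea as the `g.sto` code below (selection weights built from the exponential of scaled, negated misfits), drawing without replacement so better optimizers usually, but not always, end up at the full-data end:

```python
import random, math

random.seed(1)

# hypothetical misfits for 5 optimizers (lower is better)
phys = [12.0, 3.5, 8.0, 1.2, 20.0]
k = len(phys)

# turn misfits into selection weights: better (lower phy) -> larger weight
scores = [-0.5 * p for p in phys]
m = min(scores)
scores = [s - m for s in scores]
mx = max(scores)
weights = [math.exp(10 * s / mx) for s in scores]

# weighted random order WITHOUT replacement; good optimizers are drawn
# early, and after the reverse they sit at the full-data end of the list
order = []
idx = list(range(k))
w = weights[:]
while idx:
    pick = random.choices(range(len(idx)), weights=w)[0]
    order.append(idx.pop(pick))
    w.pop(pick)
order.reverse()

print(order)
```

Because the draw is random rather than a strict sort, a currently-best optimizer can still occasionally land on a smaller subset and keep exploring.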

The code is appended; relative to the previous posts on NNs the only really new part is the "g.sto" function. In pseudocode:

for (i in 1:N) # loop over the number of exchange rounds
{
for (j in 1:num_opt) # loop through each of the optimizers
{
W_old = record[j] # get the previous model for optimizer j
index_rand = random_samp(sample_from = 1:n, size = NS[j]) # randomly sample data; NS is a list of subset sizes
d_temp = d[index_rand] # get the down sampled data
A_temp = A[index_rand, ] # get the down sampled sensitivity matrix
W_temp = train_NN(d_temp, A_temp, maxit = 1000) # train the NN for 1000 iterations on the subset of data
phi_all_temp = objective_function_NN(d, A, W_temp) # evaluate the model with all the data
record = record_update(W_temp, phi_all_temp) # record the model
}
record = order(record) # weighted random re-ordering of the optimizers
}
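Putting the pieces together, here is a minimal batched-tempering run on a one-parameter toy problem (a single logistic weight). The crude random-search inner optimizer and all constants are stand-ins for illustration, not the post's NN code:

```python
import random, math

random.seed(3)

# toy problem: fit a single weight w so that sigmoid(w*x) matches labels
xs = [random.uniform(-2, 2) for _ in range(400)]
w_true = 1.5
ds = [1 if random.random() < 1 / (1 + math.exp(-w_true * x)) else 0 for x in xs]

def phy(w, idx):  # cross-entropy misfit on a subset of the data
    tot = 0.0
    for i in idx:
        p = 1 / (1 + math.exp(-w * xs[i]))
        p = min(max(p, 1e-9), 1 - 1e-9)
        tot -= ds[i] * math.log(p) + (1 - ds[i]) * math.log(1 - p)
    return tot

k = 4
NS = [100, 200, 300, 400]          # subset sizes; the last optimizer sees all the data
ws = [random.uniform(-5, 5) for _ in range(k)]

for rnd in range(30):
    # each optimizer takes crude random-search steps on its own re-sampled subset
    for j in range(k):
        idx = random.sample(range(400), NS[j])
        for _ in range(50):
            cand = ws[j] + random.gauss(0, 0.3)
            if phy(cand, idx) < phy(ws[j], idx):
                ws[j] = cand
    # weighted random re-ordering using the FULL-data misfit: better weights
    # tend to be re-assigned to the fuller subsets
    full = list(range(400))
    scores = [math.exp(-phy(w, full) / 50) for w in ws]
    order, pool, wts = [], list(range(k)), scores[:]
    while pool:
        p = random.choices(range(len(pool)), weights=wts)[0]
        order.append(pool.pop(p))
        wts.pop(p)
    order.reverse()
    ws = [ws[j] for j in order]

print(ws[-1])  # the weight currently assigned the full data set
```

The final full-data weight should sit near the maximum-likelihood estimate of `w_true`, though on a one-parameter problem like this a single plain optimizer does just as well, which matches the post's overall conclusion.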

That is really it for now. Sorry this is more of a reminder than a description.

Tune in next time for more adventures in analytics.

g.logit <-function(x)
{
return(1/(1+exp(-x)))
}
g.pred <-function(w, A, d)
{
# forward pass through the 8-4-2-1 network; the 49 weights are laid out
# as 4 x (8 weights + bias), then 2 x (4 weights + bias), then 2 weights + bias
zeta21 = w[1]*A[,1] + w[2]*A[,2] + w[3]*A[,3] + w[4]*A[,4] + w[5]*A[,5] + w[6]*A[,6] + w[7]*A[,7] + w[8]*A[,8] + w[9]
zeta22 = w[10]*A[,1] + w[11]*A[,2] + w[12]*A[,3] + w[13]*A[,4] + w[14]*A[,5] + w[15]*A[,6] + w[16]*A[,7] + w[17]*A[,8] + w[18]
zeta23 = w[19]*A[,1] + w[20]*A[,2] + w[21]*A[,3] + w[22]*A[,4] + w[23]*A[,5] + w[24]*A[,6] + w[25]*A[,7] + w[26]*A[,8] + w[27]
zeta24 = w[28]*A[,1] + w[29]*A[,2] + w[30]*A[,3] + w[31]*A[,4] + w[32]*A[,5] + w[33]*A[,6] + w[34]*A[,7] + w[35]*A[,8] + w[36]
sigma21 = g.logit(zeta21)
sigma22 = g.logit(zeta22)
sigma23 = g.logit(zeta23)
sigma24 = g.logit(zeta24)
zeta31 = w[37]*sigma21 + w[38]*sigma22 + w[39]*sigma23 + w[40]*sigma24 + w[41]
zeta32 = w[42]*sigma21 + w[43]*sigma22 + w[44]*sigma23 + w[45]*sigma24 + w[46]
sigma31 = g.logit(zeta31)
sigma32 = g.logit(zeta32)
zeta41 = w[47]*sigma31 + w[48]*sigma32 + w[49]
sigma41 = g.logit(zeta41)

return(sigma41)
}
g.phy <-function(w, A, d)
{
sigma41 = g.pred(w, A, d)
#phy = 0.5*sum((d-sigma41)^2)
phy = -2*sum( d* log(sigma41) + (1-d)*log(1-sigma41) ) # + sum(w^2)/nrow(A) # the penalty for size

return(phy)
}

g.phy.prime <-function(w, A, d)
{
# get the grad vector via back propagation
d = c(d) # make sure d is a plain vector
num.obs = nrow(A)

# forward pass (same as g.pred)
zeta21 = w[1]*A[,1] + w[2]*A[,2] + w[3]*A[,3] + w[4]*A[,4] + w[5]*A[,5] + w[6]*A[,6] + w[7]*A[,7] + w[8]*A[,8] + w[9]
zeta22 = w[10]*A[,1] + w[11]*A[,2] + w[12]*A[,3] + w[13]*A[,4] + w[14]*A[,5] + w[15]*A[,6] + w[16]*A[,7] + w[17]*A[,8] + w[18]
zeta23 = w[19]*A[,1] + w[20]*A[,2] + w[21]*A[,3] + w[22]*A[,4] + w[23]*A[,5] + w[24]*A[,6] + w[25]*A[,7] + w[26]*A[,8] + w[27]
zeta24 = w[28]*A[,1] + w[29]*A[,2] + w[30]*A[,3] + w[31]*A[,4] + w[32]*A[,5] + w[33]*A[,6] + w[34]*A[,7] + w[35]*A[,8] + w[36]
sigma21 = g.logit(zeta21)
sigma22 = g.logit(zeta22)
sigma23 = g.logit(zeta23)
sigma24 = g.logit(zeta24)
zeta31 = w[37]*sigma21 + w[38]*sigma22 + w[39]*sigma23 + w[40]*sigma24 + w[41]
zeta32 = w[42]*sigma21 + w[43]*sigma22 + w[44]*sigma23 + w[45]*sigma24 + w[46]
sigma31 = g.logit(zeta31)
sigma32 = g.logit(zeta32)
zeta41 = w[47]*sigma31 + w[48]*sigma32 + w[49]
sigma41 = g.logit(zeta41)

grad = numeric(49)

# output layer: the cross-entropy and sigmoid derivatives collapse to 2*(sigma - d)
delta41 = 2*(sigma41 - d)
grad[47] = sum(delta41*sigma31)
grad[48] = sum(delta41*sigma32)
grad[49] = sum(delta41)

# second hidden layer
delta31 = delta41*w[47]*sigma31*(1 - sigma31)
delta32 = delta41*w[48]*sigma32*(1 - sigma32)
sigma2 = cbind(sigma21, sigma22, sigma23, sigma24)
grad[37:40] = colSums(delta31*sigma2)
grad[41] = sum(delta31)
grad[42:45] = colSums(delta32*sigma2)
grad[46] = sum(delta32)

# first hidden layer; the weights for neuron k are w[(k-1)*9 + 1:9]
for (k in 1:4)
{
delta2 = (delta31*w[36+k] + delta32*w[41+k])*sigma2[,k]*(1 - sigma2[,k])
grad[(k-1)*9 + (1:8)] = colSums(delta2*A[,1:8])
grad[(k-1)*9 + 9] = sum(delta2)
}

return(grad)
}
NNplot<-function(w = w.old)
{

plot(1:4, 1:4, type="n", xlab= "", ylab="", ylim=c(0,9), xlim=c(0,5), xaxt="n", yaxt="n")

w = 10*w/max(abs(w))
g.col = rep("blue", 49)
g.col[w < 0] = "red"

# input layer (8 nodes) to first hidden layer (4 nodes);
# the weights for hidden neuron k are w[(k-1)*9 + 1:8]
for (k in 1:4)
{
idx = (k-1)*9 + (1:8)
segments(x0=1, x1=2, y0=1:8, y1 = k+2, lwd=abs(w[idx]), col=g.col[idx])
}

# first hidden layer to second hidden layer (2 nodes); weights w[37:40] and w[42:45]
segments(x0=2, x1=3, y0=3:6, y1 = 4, lwd=abs(w[37:40]), col=g.col[37:40])
segments(x0=2, x1=3, y0=3:6, y1 = 5, lwd=abs(w[42:45]), col=g.col[42:45])

# second hidden layer to the output node; weights w[47] and w[48]
segments(x0=3, x1=4, y0=4:5, y1 = 4.5, lwd=abs(w[47:48]), col=g.col[47:48])

symbols(x=rep(1,8), y=1:8, circles=rep(0.5,8), add=T, inches = .25, bg ="grey")
text(x=rep(1,8), y= 1:8, labels = c("Not", "And", "or", "Xor", "contra", "IfThen", "alpha", "beta") )
symbols(x=rep(2,4), y=3:6, circles=rep(0.5,4), add=T, inches = .20, bg ="grey")
symbols(x=rep(3,2), y=4:5, circles=rep(0.5,2), add=T, inches = .20, bg ="grey")
symbols(x=4, y=4.5, circles=0.5, add=T, inches = .20, bg ="grey")
}

NN2 <-function(w = (runif(49)-0.5)*10 , A=A., d=d., N= 10000, mod = 1, t = 100, ME = 0.0001,verb = 0 )
{

PP = 0.01
eta = 1
ND = nrow(A)
#print(w)
phy = g.phy(w, A,d)
#print(phy)

rec = c(phy, w)
phy.best = phy
w.best = w

fm = 0
phy.last = phy
i.last = 1
w.last = w

for ( i in 1:N)
{
phy = g.phy(w, A, d)

# get the grad vector
grad = g.phy.prime(w, A, d)

#update the weights
if (mod == 6 )
{
eta = 0.5
mag.grad = sqrt(sum(grad*grad))

r = runif(1)
if( r <= 0.01)
{
mag.w = sqrt(sum(w*w))
j = rnorm(49, sd = mag.w /sqrt(49))/5
}
if( r > 0.01)
{
j = rnorm(49, sd = mag.grad /sqrt(49))/5
}
# gradient step with random jitter
w.new = w - eta*grad + j
}
#new misfit
phy.new = g.phy(w.new, A,d)
if(is.nan(phy.new) )
{
phy.new = Inf
}
k = 0
fm = fm + 1

#if(verb == 1)
#{
#print(phy)
#print(phy.new)
#}
#print(phy.new > phy )
TEST = ( phy.new > phy )
if(is.na(TEST)) {TEST = FALSE}

while( ( TEST ) & ( k <= 100 ) )
{

#if(verb == 1)
#{
#print("---------------------- in the while loop ----------------------")
#print(phy)
#print(phy.new)
#print(k)
#}

# normal back prop step shrink

if (mod == 6 )
{
eta = eta/2
r = runif(1)
if( r <= 0.1)
{
mag.w = sqrt(sum(w*w))
j = rnorm(49, sd = mag.w /sqrt(49))
}
if( r > 0.1)
{
j = rnorm(49, sd = mag.grad /sqrt(49))
}

# retry the step with a smaller learning rate
w.new = w - eta*grad + j

#new misfit
phy.new = g.phy(w.new, A,d)
fm = fm + 1
}

k = k+ 1
#if(k == 50)
# {print("k==50!")}

if(is.nan(phy.new) )
{
phy.new = Inf

}
if(phy < phy.best)
{
phy.best = phy
w.best = w
}
TEST = ( phy.new > phy )
if(is.na(TEST)) {TEST = FALSE}

}
if(phy.new < phy)
{
phy = phy.new
w = w.new
}
# add the model to the
rec = rbind(rec, c(phy,w) )
rownames(rec) = NULL
sigma41 = g.pred(w, A, d)
r = runif(1)

if (r <= PP)
{
PP = PP/2
ND.40 = min(c(ND, 40))
par(mfrow=c(2,1), mar=c(4,4,1,1))
NNplot(w.last)
plot( log(rec[1:i.last,1] ), pch=".", xlab = "Index", ylab = "Phy")
phy.last = phy
w.last = w
i.last = i
}

if( (max(abs(d-sigma41)) <= ME ) & (i >= 10 ) ) # & (i >= 1000)
{
if ( verb == 1)
{
ND.40 = min(c(ND, 40))
sigma41 = g.pred(w, A, d)
print(paste(i, "of", N, "(=", i/N,") and", fm, "forward models"))
print(paste("# Obs.", ND) )
print(phy)
print(paste("log phy best = ", log(phy.best) ))
print(paste("max abs error", max(abs(d-sigma41))) )
print(paste("max abs error", max(abs(d[1:ND.40]-sigma41[1:ND.40]))) )
}
par(mfrow=c(2,1), mar=c(4,4,1,1))
NNplot(w.last)
plot( log(rec[1:i.last,1] ), pch=".", xlab = "Index", ylab = "Phy")

return(c(1,phy , w))

}

}

if ( verb == 1)
{
ND.40 = min(c(ND, 40))
sigma41 = g.pred(w, A, d)
print(paste(i, "of", N, "(=", i/N,") and", fm, "forward models"))
print(paste("# Obs.", ND) )
print(phy)
print(paste("log phy best = ", log(phy.best) ))
print(paste("max abs error", max(abs(d-sigma41))) )
print(paste("max abs error", max(abs(d[1:ND.40]-sigma41[1:ND.40]))) )
}

return(c(0,phy,w))
}
g.sto<-function( ME. = 0.01)
{

A.= matrix(c(0, 1,0,0,0,0, 1, 1,
0, 1,0,0,0,0, 0, 1,
0, 1,0,0,0,0, 1, 0,
0, 1,0,0,0,0, 0, 0,
0, 0,1,0,0,0, 1, 1,
0, 0,1,0,0,0, 0, 1,
0, 0,1,0,0,0, 1, 0,
0, 0,1,0,0,0, 0, 0,
0, 0,0,1,0,0, 1, 1,
0, 0,0,1,0,0, 0, 1,
0, 0,0,1,0,0, 1, 0,
0, 0,0,1,0,0, 0, 0,
0, 0,0,0,1,0, 1, 1,
0, 0,0,0,1,0, 0, 1,
0, 0,0,0,1,0, 1, 0,
0, 0,0,0,1,0, 0, 0,
0, 0,0,0,0,1, 1, 1,
0, 0,0,0,0,1, 0, 1,
0, 0,0,0,0,1, 1, 0,
0, 0,0,0,0,1, 0, 0,
1, 1,0,0,0,0, 1, 1,
1, 1,0,0,0,0, 0, 1,
1, 1,0,0,0,0, 1, 0,
1, 1,0,0,0,0, 0, 0,
1, 0,1,0,0,0, 1, 1,
1, 0,1,0,0,0, 0, 1,
1, 0,1,0,0,0, 1, 0,
1, 0,1,0,0,0, 0, 0,
1, 0,0,1,0,0, 1, 1,
1, 0,0,1,0,0, 0, 1,
1, 0,0,1,0,0, 1, 0,
1, 0,0,1,0,0, 0, 0,
1, 0,0,0,1,0, 1, 1,
1, 0,0,0,1,0, 0, 1,
1, 0,0,0,1,0, 1, 0,
1, 0,0,0,1,0, 0, 0,
1, 0,0,0,0,1, 1, 1,
1, 0,0,0,0,1, 0, 1,
1, 0,0,0,0,1, 1, 0,
1, 0,0,0,0,1, 0, 0)
, nrow=40, ncol=8, byrow=T)

colnames(A.) = c("Not", "And", "or", "Xor", "contra", "IfThen", "alpha", "beta")

A.=rbind(A.,
A. + (runif(320)-0.5)*0.5,
A. + (runif(320)-0.5)*0.5,
A. + (runif(320)-0.5)*0.5,
A. + (runif(320)-0.5)*0.5,
A. + (runif(320)-0.5)*0.5,
A. + (runif(320)-0.5)*0.5,
A. + (runif(320)-0.5)*0.5,
A. + (runif(320)-0.5)*0.5,
A. + (runif(320)-0.5)*0.5
)
d. = matrix(c(1,0,0,0, 1,1,1,0, 0,1,1,0, 1,0,1,1, 1,1,0,1,
0,1,1,1, 0,0,0,1, 1,0,0,1, 0,1,0,0, 0,0,1,0), nrow= 40, ncol=1 )

d. = rbind(d., d., d., d., d.,
d., d., d., d.,
d.)

N = nrow(A.)

NC = 40
NS = floor(seq(0, N, length.out = (NC+1)))[2:(NC+1)]
# make the intial data sets
DS = list()
for ( i in 1:NC)
{
DS[[i]] = sort(sample(1:N, NS[i]))
}
Ws.all = list()
Ws.sub = list()
for ( i in 1:NC)
{
Ws.sub[[i]] = c(0, Inf, rnorm(49))

# c(-8.08041439 , 2.06346497 , -1.54031571 , -2.88445872 , 6.47911272 , -8.49851364 , -7.15282381 , -7.68137939 , 11.59349865 , -11.86608750 , 1.99563545 , -2.98180221 , 10.13098873 , -6.46718542 , 4.07691956 , 6.02385674 , -7.61151232 , 7.61163996 , -3.35555794 , 0.03171125 , -1.51265323 , 1.08372308 , 0.74957913 , -2.19298751 , -1.24309429 , -4.29052458 , 5.43952098 , -1.52537648 , 4.59789938 , 1.61840032 , 1.20710391 , -2.13490775 , -1.63754937 , -3.15367652 , -2.95884757 , 1.42927769 , -17.70410965 , -6.64222147 , 8.18730908 , -11.45157253 , 24.59673331 , -18.76273941 , -14.44924125 , 20.51598107 , -17.67475199 , 6.38152918 , 33.44444894 , -35.86176580 , -14.85993747))
Ws.all[[i]] = Ws.sub[[i]]

}

MBL = 100
BLI = 0
while ( (Ws.sub[[NC]][1] == 0) & (BLI <= MBL) ) # loop until the full-data optimizer converges or the budget runs out
{
print("-------------------------------------------------------------")
print(BLI)
BLI = BLI +1

for( i in 1:NC)
{
w.temp = Ws.sub[[i]][3:51]
SAMP.temp = DS[[i]]

w.temp = NN2(w= w.temp, N= 500, A= A.[SAMP.temp,], d = d.[SAMP.temp,], mod = 6, t = 100, ME = 0.1 , verb = 0 )
#w.temp = NN2(w= w.temp, N= 100, A= A., d = d., mod = 6, t = 100, ME = 0.05 , verb = 0 )
Ws.sub[[i]] = w.temp

#print("------------------- pre sort ------------------")
#print(NS[[i]])
#print(Ws.sub[[i]][1:5])
#print(i)
}
phys = 1:NC
for( i in 1:NC)
{
w.temp = Ws.sub[[i]][3:51]
v. = 0
if(i ==NC) {v. = 1}
w.temp = NN2(w= w.temp, N= 10, A= A., d = d., mod = 6, t = 100, ME = 0.1 , verb = v. )
Ws.sub[[i]] = w.temp
phys[i] = w.temp[2] # the misfit evaluated with the full data

}
#print(phys)
probs = -0.5*phys
#print(probs)
probs = ( probs - min(probs) )
#print(probs)
probs = exp(10*probs/max(probs))
#print(probs)

NI = rev(sample(1:NC, prob = probs))

#for( i in 1:NC)
# {
# print("------------------- pre sort ------------------")
# print(NS[[i]])
# print(Ws.sub[[i]][1:5])
# print(i)
# }
print(
data.frame(true.order = order(sapply(Ws.sub, function(x) x[2]), decreasing=TRUE) ,
probs_true = probs[ order(sapply(Ws.sub, function(x) x[2]), decreasing=TRUE)],
phy = phys[ order(sapply(Ws.sub, function(x) x[2]), decreasing=TRUE)],
rand.order = NI, probs = probs[NI] )

)

#Ws.sub = Ws.sub[order(sapply(Ws.sub, function(x) x[2]), decreasing=TRUE)]

Ws.sub.temp = Ws.sub

for( i in 1:NC)
{
Ws.sub[i] = Ws.sub.temp[NI[i]]
}
best.index = order(sapply(Ws.sub, function(x) x[2]), decreasing=TRUE)[NC] # index of the optimizer with the lowest misfit
for ( i in 1:NC)
{
DS[[i]] = sort(sample(1:N, NS[i]))
r = runif(1)
if( (i <= NC/2) )
{
if( (r<= 0.45) & (r >=0) ) {Ws.sub[[i]] = Ws.sub[[best.index]]}
if( (r<= 0.5) & (r >=0.45) ) {Ws.sub[[i]] = c(0, Inf, rnorm(49)) }
}

}
print("Model with full data ")
print(Ws.sub[[best.index]])
}
w.old = Ws.sub[[NC]]

return(w.old)
}
w.old = g.sto( )