For each input-output neuron pair, the function multiplies the raw input-hidden and hidden-output connection weights and sums these products across all hidden neurons, as proposed by Olden (2004).

getConnectionWeight(object, thr = NULL, verbose = FALSE, ...)

Arguments

object

A neural network object fitted with the SEMdnn() function.

thr

A numeric threshold applied to the connection weights. If NULL (default), the threshold is set to thr = mean(abs(connection weights)).

verbose

A logical value. If FALSE (default), the processed graph will not be plotted to screen.

...

Currently ignored.

Value

A list of two objects: (i) a data.frame of the connections together with their weights, and (ii) the DAG with colored edges. If abs(W) > thr and W < 0, the edge is inhibited and highlighted in blue; if abs(W) > thr and W > 0, the edge is activated and highlighted in red.
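The coloring rule above can be sketched as follows. This is an illustrative Python helper, not the package's R implementation; the function name is hypothetical, the default threshold mirrors thr = mean(abs(W)), and the color names match those in the example output below.

```python
# Hypothetical sketch of the edge-coloring rule described above
# (not part of the package).

def color_edges(weights, thr=None):
    """Map each connection weight to an edge color.

    weights: dict mapping (source, target) -> weight W
    thr: threshold; if None, use mean(|W|), as in the R default.
    """
    if thr is None:
        thr = sum(abs(w) for w in weights.values()) / len(weights)
    colors = {}
    for edge, w in weights.items():
        if abs(w) > thr and w < 0:
            colors[edge] = "royalblue3"   # inhibited edge
        elif abs(w) > thr and w > 0:
            colors[edge] = "red2"         # activated edge
        else:
            colors[edge] = "gray50"       # edge below threshold
    return colors

w = {("x1", "y"): 0.9, ("x2", "y"): -0.8, ("x3", "y"): 0.1}
# thr defaults to mean(|W|) = 0.6, so x1 -> red2, x2 -> royalblue3,
# x3 -> gray50
print(color_edges(w))
```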

Details

In a neural network, the connections between inputs and outputs are represented by the connection weights between the neurons. The importance values that the Olden method assigns to each input variable are in units based directly on the summed product of the connection weights. The magnitude and direction of these weights largely determine the proportional contributions of the input variables to the network's predicted output: input variables with larger connection weights transfer stronger signals and are therefore more important in the prediction process. Positive connection weights represent excitatory effects on neurons (raising the intensity of the incoming signal) and increase the value of the predicted response, while negative connection weights represent inhibitory effects on neurons (reducing the intensity of the incoming signal). Weights that change sign (e.g., positive to negative) between the input-hidden and hidden-output layers have a cancelling effect, whereas weights with the same sign have a synergistic effect. Note that, in order to map the connection weights onto the DAG edges, the element-wise product W*A is computed between the matrix of Olden's weights, W (p x p), and the binary (0,1) adjacency matrix, A (p x p), of the input DAG.
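The summed product and the adjacency masking described above can be sketched as follows, for a single hidden layer. This is an illustrative Python sketch under assumed weight-matrix shapes, not the package's implementation; both function names are hypothetical.

```python
# Minimal sketch of Olden's connection-weight importance for one
# hidden layer, plus the W*A masking onto the DAG edges
# (hypothetical helpers, not the package implementation).

def olden_weights(W_ih, W_ho):
    """Summed product of input-hidden and hidden-output weights.

    W_ih: list of rows, W_ih[i][h] = weight from input i to hidden h
    W_ho: list of rows, W_ho[h][o] = weight from hidden h to output o
    Returns W with W[i][o] = sum over h of W_ih[i][h] * W_ho[h][o].
    """
    n_hid = len(W_ho)
    n_out = len(W_ho[0])
    return [[sum(row[h] * W_ho[h][o] for h in range(n_hid))
             for o in range(n_out)]
            for row in W_ih]

def mask_by_dag(W, A):
    """Element-wise product W * A keeps only weights on DAG edges."""
    return [[w * a for w, a in zip(wr, ar)] for wr, ar in zip(W, A)]

# Two inputs, two hidden neurons, one output:
W_ih = [[0.5, -1.0],
        [2.0,  0.5]]
W_ho = [[1.0],
        [0.5]]
W = olden_weights(W_ih, W_ho)
# Input 1: 0.5*1.0 + (-1.0)*0.5 = 0.0   -> sign change, cancelling effect
# Input 2: 2.0*1.0 +   0.5*0.5 = 2.25   -> same sign, synergistic effect
```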

References

Olden, J. & Jackson, D. (2002). Illuminating the "black box": A randomization approach for understanding variable contributions in artificial neural networks. Ecological Modelling, 154, 135-150. doi:10.1016/S0304-3800(02)00064-9.

Olden, J. (2004). An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data. Ecological Modelling, 178. doi:10.1016/S0304-3800(04)00156-5.

Author

Mario Grassi mario.grassi@unipv.it

Examples


# \donttest{
if (torch::torch_is_installed()){

# load ALS data
ig <- alsData$graph
data <- alsData$exprs
data <- transformData(data)$data

dnn0 <- SEMdnn(ig, data, train=1:nrow(data), cowt = FALSE,
      #loss = "mse", hidden = 5*K, link = "selu",
      loss = "mse", hidden = c(10, 10, 10), link = "selu",
      validation = 0, bias = TRUE, lr = 0.01,
      epochs = 32, device = "cpu", verbose = TRUE)

res <- getConnectionWeight(dnn0, thr=NULL, verbose=TRUE)
table(E(res$dag)$color)
}
#> Conducting the nonparanormal transformation via shrunkun ECDF...done.
#> 1 : z10452 z84134 z836 z4747 z4741 z4744 z79139 z5530 z5532 z5533 z5534 z5535 

#>    epoch   train_l valid_l
#> 32    32 0.2650295      NA
#> 
#> 2 : z842 z1432 z5600 z5603 z6300 

#>    epoch   train_l valid_l
#> 32    32 0.3035205      NA
#> 
#> 3 : z54205 z5606 z5608 

#>    epoch   train_l valid_l
#> 32    32 0.2834193      NA
#> 
#> 4 : z596 z4217 

#>    epoch   train_l valid_l
#> 32    32 0.3734021      NA
#> 
#> 5 : z1616 

#>    epoch   train_l valid_l
#> 32    32 0.3259201      NA
#> 
#> DNN solver ended normally after 736 iterations 
#> 
#>  logL: -42.72807  srmr: 0.1002272 
#> 

#> 
#>     gray50       red2 royalblue3 
#>         27          9          9 
# }