Dear SMILE Team,
I have a network where 'A' and 'C' are targets and
'B' and 'D' are observations.
If I use the 'Test view' from the Diagnosis menu, I see that both
'D' and 'B' are informative, and in this case 'D' is more informative
because of the probabilities and the fact that it can isolate 'C'.
If I break this into two networks, I should in theory still be able to
compute the entropy gain, since 'B' tells me about 'A' and 'D' tells me about 'C'.
However, Test view (and direct calls to a SMILE engine running this network)
claims that observing 'B' now provides 0 bits of information.
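As a sanity check, here is a minimal sketch of how I understand the entropy gain computation, using made-up CPTs for a hypothetical two-node network A -> B (these numbers are not from the attached network). With any CPT where B depends on A, the expected entropy reduction is strictly positive, which is why the 0-bit result surprises me:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Hypothetical CPTs for a two-node network A -> B (illustrative only):
p_a = [0.5, 0.5]                  # prior P(A)
p_b_given_a = [[0.8, 0.2],        # P(B | A=a0)
               [0.2, 0.8]]        # P(B | A=a1)

# Marginal P(B)
p_b = [sum(p_a[a] * p_b_given_a[a][b] for a in range(2)) for b in range(2)]

# Posterior P(A | B=b) via Bayes' rule
def posterior_a(b):
    joint = [p_a[a] * p_b_given_a[a][b] for a in range(2)]
    z = sum(joint)
    return [j / z for j in joint]

# Entropy gain = H(A) - E_b[H(A | B=b)], i.e. the mutual information I(A;B)
h_prior = entropy(p_a)
h_posterior = sum(p_b[b] * entropy(posterior_a(b)) for b in range(2))
info_gain = h_prior - h_posterior
print(f"Entropy gain from observing B: {info_gain:.4f} bits")  # ~0.2781 bits
```

So unless splitting the network somehow d-separates 'B' from the remaining target, I would expect a positive gain.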
I attached a sample network for the problem.
Does this make sense? Am I missing something?
Version: 2.0.3470.0 (7/2/2009)