Dear SMILE Team,

I have a network where 'A' and 'C' are targets and 'B' and 'D' are observations:

A
 \
  B
 /
C
 \
  D

If I use the 'test view' from the diagnosis menu, I see that both 'D' and 'B' are informative, and in this case 'D' is more informative because of the probabilities and the fact that it can isolate 'C'.

If I break this into two networks, I should in theory still be able to compute entropy gain, since 'B' tells me about 'A' and 'D' tells me about 'C':

A
 \
  B

C
 \
  D

However, test view (and direct calls to a SMILE engine running this net) claim that observing 'B' now provides 0 bits of information.
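For reference, this is the quantity I expect for the standalone A -> B fragment: the entropy gain of observing B should equal the mutual information I(A;B), which is nonzero whenever B actually depends on A. The sketch below uses made-up placeholder CPT numbers, not the ones from my attached network:

```python
import math

def mutual_information(p_a, p_b_given_a):
    """I(A;B) = sum over a,b of p(a,b) * log2( p(a,b) / (p(a)*p(b)) )."""
    n_b = len(p_b_given_a[0])
    # Joint p(a,b) from the prior on A and the CPT P(B|A).
    joint = [[p_a[a] * p_b_given_a[a][b] for b in range(n_b)]
             for a in range(len(p_a))]
    # Marginal p(b).
    p_b = [sum(joint[a][b] for a in range(len(p_a))) for b in range(n_b)]
    info = 0.0
    for a in range(len(p_a)):
        for b in range(n_b):
            if joint[a][b] > 0:
                info += joint[a][b] * math.log2(joint[a][b] / (p_a[a] * p_b[b]))
    return info

# Placeholder numbers: any B that depends on A yields I(A;B) > 0.
p_a = [0.6, 0.4]            # prior on A
p_b_given_a = [[0.9, 0.1],  # P(B | A = a0)
               [0.2, 0.8]]  # P(B | A = a1)
print(mutual_information(p_a, p_b_given_a))  # ~0.39 bits, not 0
```

With an independent B (identical rows in the CPT) the same function correctly returns 0 bits, so I would only expect test view to report 0 for 'B' if it were treating 'B' as independent of 'A'.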

I attached a sample network for the problem.

Does this make sense? Am I missing something?

Regards,

Bob.

Version 2.0.3470.0 7/2/2009