Learning structure


Postby lytia85 » Wed Apr 09, 2008 4:27 pm

Hello,
I'm a new GeNIe user. I looked for documentation about GeNIe and how to use it, but I don't understand the options of the different algorithms. For example, in greedy thick thinning, how do I choose between K2 and BDeu, and what is the meaning of the network weight? I couldn't find any documentation about greedy thick thinning or essential graph search.
Moreover, I would like to know how to choose the best network (can I see the score of a network?).
Is it possible to do a supervised analysis without the naive Bayes method? After learning we can specify a target node (when we use PC, for example), but can we plot the ROC curve, or can we export the probabilities of the target states computed by the network?

Thanks

Lytia

Sorry for my English!

Postby mark » Wed Apr 09, 2008 6:49 pm

Please refer to Heckerman's "A Tutorial on Learning With Bayesian Networks" for an explanation of GreedyThickThinning. Essential graph search starts from a graph obtained by applying PC and then continues with a GreedyThickThinning search (and it also does multiple restarts). The learning algorithms automatically select the best networks based on their scores (except PC which doesn't use a score).

It's possible to use any Bayesian network for supervised analysis and it's not necessary to use naive Bayes. GeNIe has target nodes and also a diagnosis module that may be useful in your case.
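
For readers who want to drive the same learning programmatically, here is a rough sketch assuming SMILE's Python wrapper (pysmile); "data.csv" is a placeholder file name, and exact class or attribute names may differ between SMILE releases.

Code:
# Rough sketch, assuming the pysmile wrapper for SMILE; treat it as an
# illustration of the workflow, not a definitive API reference.
import pysmile
# import pysmile_license  # a valid SMILE license module is normally required

ds = pysmile.learning.DataSet()
ds.read_file("data.csv")            # discrete data, one column per variable

gtt = pysmile.learning.GreedyThickThinning()
# the K2/BDeu prior and the network weight asked about above are options on
# this learner object (exact attribute names depend on the SMILE release)
net = gtt.learn(ds)                 # returns a pysmile.Network
net.write_file("learned_gtt.xdsl")  # open the result in GeNIe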

Best model

Postby lytia85 » Thu Apr 10, 2008 4:45 pm

When I learn the structure of a network with the greedy thick thinning method and then with essential graph search on the same data file, how can I choose the best model? Can I see the score of each model?

Moreover, if I do a supervised analysis, can I export a data file with a new variable containing the probability of the target computed by the network?

Thanks

Lytia

Re: Best model

Postby mark » Thu Apr 10, 2008 6:16 pm

lytia85 wrote:When I learn the structure of a network with the greedy thick thinning method and then with essential graph search on the same data file, how can I choose the best model? Can I see the score of each model?

At the moment you cannot see the scores of the network, but it's probably a good idea to show them. In your case, I think it was empirically shown that the essential graph search leads to better results. See here: http://www.pitt.edu/~druzdzel/abstracts/uai99.html

lytia85 wrote:Moreover, if I do a supervised analysis, can I export a data file with a new variable containing the probability of the target computed by the network?

I don't understand what you want to do. Saving a learned network is possible, of course, but what else do you want to do?

Best classifier

Postby lytia85 » Fri Apr 11, 2008 8:48 am

I want to determine the predictive value of the target node, so I want to compute a confusion matrix or plot a ROC curve in order to select the best prediction model. For this, I need to export the probabilities of the target obtained from the network.

Thanks

Lytia

Postby mark » Sat Apr 12, 2008 11:00 pm

I think you'll have to write a small program that uses SMILE to do this.
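
For illustration, a rough sketch of such a program, assuming SMILE's Python wrapper (pysmile); "model.xdsl", "test.csv", the node identifier "Target" and its outcome name "positive" are placeholders for your own model and data.

Code:
# Rough sketch: score each test record with a learned network and export
# P(Target = positive), from which a confusion matrix or a ROC curve can
# be computed.
import csv
import pysmile
# import pysmile_license  # a valid SMILE license module is normally required

net = pysmile.Network()
net.read_file("model.xdsl")

TARGET = "Target"
POSITIVE = "positive"  # outcome of interest
pos_index = net.get_outcome_ids(TARGET).index(POSITIVE)

results = []  # (true class, predicted probability)
with open("test.csv", newline="") as f:
    for row in csv.DictReader(f):
        net.clear_all_evidence()
        for var, value in row.items():
            if var != TARGET and value != "":
                net.set_evidence(var, value)  # evidence outcome given by name
        net.update_beliefs()
        results.append((row[TARGET], net.get_node_value(TARGET)[pos_index]))

# Export the true class and the predicted probability; a spreadsheet or a
# statistics package can build the ROC curve from these two columns.
with open("target_probabilities.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([TARGET, "p_" + POSITIVE])
    writer.writerows(results)

# Simple confusion matrix at a 0.5 threshold:
tp = sum(1 for y, p in results if y == POSITIVE and p >= 0.5)
fp = sum(1 for y, p in results if y != POSITIVE and p >= 0.5)
fn = sum(1 for y, p in results if y == POSITIVE and p < 0.5)
tn = sum(1 for y, p in results if y != POSITIVE and p < 0.5)
print("TP:", tp, " FP:", fp, " FN:", fn, " TN:", tn)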

Re:

Postby borisrabin » Wed Feb 23, 2011 12:01 pm

mark wrote:Please refer to Heckerman's "A Tutorial on Learning With Bayesian Networks" for an explanation of GreedyThickThinning. Essential graph search starts from a graph obtained by applying PC and then continues with a GreedyThickThinning search (and it also does multiple restarts). The learning algorithms automatically select the best networks based on their scores (except PC which doesn't use a score).

It's possible to use any Bayesian network for supervised analysis and it's not necessary to use naive Bayes. GeNIe has target nodes and also a diagnosis module that may be useful in your case.


When I learn a structure with GreedyThickThinning (for example), do I actually get a Bayesian network in which the links between variables are to be interpreted as correlation/association?
Is the learning unsupervised?


Thanks,
Boris

Re: Learning structure

Postby mark » Thu Feb 24, 2011 12:40 am

In a nutshell, the arcs are causal, unless an arc can be reversed without changing the set of conditional independencies that hold for a given graph. The learning is unsupervised.
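
To make that concrete (an illustration added for clarity, not part of the original reply): with only two dependent variables, X -> Y and Y -> X encode exactly the same conditional independencies (none), so data alone cannot orient that arc. A v-structure X -> Z <- Y is different: it implies that X and Y are marginally independent but become dependent once Z is observed, so that orientation, and any arc whose reversal would create or destroy such a pattern, can be recovered from data.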

Re: Learning structure

Postby borisrabin » Thu Feb 24, 2011 9:31 am

mark wrote:In a nutshell, the arcs are causal, unless an arc can be reversed without changing the set of conditional independencies that hold for a given graph. The learning is unsupervised.


If the arc can be reversed without any change, will it be deleted or will it appear as a correlation?

Another question:
Are K2 and BDeu score metrics for the greedy algorithm?
What is the meaning of the BDeu weight?

Thanks,
Boris

Re: Learning structure

Postby mark » Thu Feb 24, 2011 6:47 pm

If the arc can be reversed it means there is a direct correlation between the variables, but the causal relationship cannot be determined. For example, if you have two discrete variables that are correlated it is not possible to learn from data which way the causal connection goes. K2 and BDeu are prior distributions over parameters used in the score metric.
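
For reference (taken from Heckerman's tutorial mentioned earlier, not from this reply): both priors plug into the same Bayesian-Dirichlet marginal-likelihood score,

\log P(D \mid G) = \sum_{i} \sum_{j=1}^{q_i} \left[ \log \frac{\Gamma(\alpha_{ij})}{\Gamma(\alpha_{ij} + N_{ij})} + \sum_{k=1}^{r_i} \log \frac{\Gamma(\alpha_{ijk} + N_{ijk})}{\Gamma(\alpha_{ijk})} \right]

where N_ijk counts the records with node i in state k and its parents in configuration j, alpha_ijk are the Dirichlet pseudo-counts, and alpha_ij and N_ij are their sums over k. K2 sets every alpha_ijk = 1, while BDeu sets alpha_ijk = N'/(q_i * r_i), spreading an equivalent sample size N' uniformly over all parent configurations and states; that equivalent sample size is the weight asked about here.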

Re: Learning structure

Postby borisrabin » Sat Feb 26, 2011 6:19 pm

mark wrote:If the arc can be reversed it means there is a direct correlation between the variables, but the causal relationship cannot be determined

Are these kinds of arcs shown in the learned network?

mark wrote: K2 and BDeu are prior distributions over parameters used in the score metric

What is the meaning of the BDeu weight?


Thanks,
Boris

Re: Learning structure

Postby mark » Sat Feb 26, 2011 8:26 pm

Yes, these arcs will be part of the output network. The BDeu weight expresses the strength of a prior belief in the uniformity of the conditional distributions in the network.

Re: Learning structure

Postby borisrabin » Thu Mar 03, 2011 12:54 pm

mark wrote: K2 and BDeu are prior distributions over parameters used in the score metric.


Can you please indicate the run time of the greedy algorithm when using K2 and when using BDeu?

Thanks,
Boris

Re: Learning structure

Postby mark » Thu Mar 03, 2011 6:44 pm

Do you mean to ask if there is a difference? In general, the runtime depends strongly on the connectivity of the graph you are trying to learn (i.e., the number of conditional dependencies in the data).

Re: Learning structure

Postby borisrabin » Fri Mar 04, 2011 1:44 am

mark wrote:Do you mean to ask if there is a difference? In general, the runtime depends strongly on the connectivity of the graph you are trying to learn (i.e., the number of conditional dependencies in the data).


I actually intended to ask about the run time of the greedy algorithm as a function of the number of nodes, separately when using K2 and separately when using BDeu.

Thank you very much,
Boris
