Simple question on the tutorial - cpd #144

Closed
bernardokp opened this issue Jul 15, 2022 · 6 comments

@bernardokp

Hello,

I have started using the package, and I could not understand the 3rd line of this sequence of commands:

bn2 = BayesNet()
push!(bn2, StaticCPD(:sighted, NamedCategorical([:bird, :plane, :superman], [0.40, 0.55, 0.05])))
push!(bn2, FunctionalCPD{Bernoulli}(:happy, [:sighted], a->Bernoulli(a == :superman ? 0.95 : 0.2)))

I understand a BN was created, and that the parent is "sighted", which can take the values [:bird, :plane, :superman] with probabilities [0.40, 0.55, 0.05]. Then a child, happy, was created. I don't understand what the rest of the code does
(this part: "a->Bernoulli(a == :superman ? 0.95 : 0.2)")

Thanks in advance!

Bernardo

@tawheeler tawheeler self-assigned this Jul 18, 2022
@tawheeler

tawheeler commented Jul 18, 2022

Hello Bernardo.
The second line creates the child node that represents the probability distribution P(happy | sighted). The child happy depends on its parent, sighted. This conditional probability distribution is:

P(happy = true | sighted = bird) = 0.20
P(happy = false | sighted = bird) = 0.80
P(happy = true | sighted = plane) = 0.20
P(happy = false | sighted = plane) = 0.80
P(happy = true | sighted = superman) = 0.95
P(happy = false | sighted = superman) = 0.05

This is condensed in the formulation a->Bernoulli(a == :superman ? 0.95 : 0.2) where a is the input assignment, such as Dict{Symbol,Any}(:sighted => :bird) and Bernoulli creates a Bernoulli distribution over true and false. It is just a compact way of constructing an example Bayesian network.

@bernardokp

Many thanks, @tawheeler, for the reply. Mathematically, all is clear. But why don't I see that in this piece of code I wrote?

bn2 = BayesNet()
cpd1 = StaticCPD(:sighted, NamedCategorical([:bird, :plane, :superman], [0.40, 0.55, 0.05]))
push!(bn2, cpd1)
cpd2 = FunctionalCPD{Bernoulli}(:happy, [:sighted], a->Bernoulli(a == :superman ? 0.95 : 0.2))
push!(bn2, cpd2)
println(name(cpd2))
println(parents(cpd2))
println(cpd2)
println(cpd2(:sighted=>:superman ))
println(cpd2(:sighted=>:bird))
println(cpd2(:sighted=>:plane))
for i=1:100
println(rand(cpd2, :sighted=>:superman))
end

The println(cpd2(:sighted=>:superman )) tells me I have "Bernoulli{Float64}(p=0.2)". Why not
Bernoulli{Float64}(p=0.95)?

In the for loop, I expected to see, well, lots of true, but that was not the case.

Regards,
Bernardo

@tawheeler

Aha, the problem is with a->Bernoulli(a == :superman ? 0.95 : 0.2). The assignment a is a dictionary, so it will never compare equal to :superman. Please try

a->Bernoulli(a[:sighted] == :superman ? 0.95 : 0.2)

I can update the example.
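Putting the fix together, the corrected snippet from the thread becomes one runnable example (a sketch assuming BayesNets.jl and Distributions.jl are installed; variable names follow the thread):

using BayesNets       # BayesNet, StaticCPD, FunctionalCPD, NamedCategorical
using Distributions   # Bernoulli

bn2 = BayesNet()
push!(bn2, StaticCPD(:sighted, NamedCategorical([:bird, :plane, :superman], [0.40, 0.55, 0.05])))

# The anonymous function receives an assignment (a Dict{Symbol,Any}),
# so the parent's value must be looked up with a[:sighted] before comparing.
cpd2 = FunctionalCPD{Bernoulli}(:happy, [:sighted],
    a -> Bernoulli(a[:sighted] == :superman ? 0.95 : 0.2))
push!(bn2, cpd2)

println(cpd2(:sighted => :superman))  # expect Bernoulli{Float64}(p=0.95)
println(cpd2(:sighted => :bird))      # expect Bernoulli{Float64}(p=0.2)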

@bernardokp

Dear Tim,

thanks for the reply. Yes, it is working now!
I believe it would be good to update the example. For beginners like me, those simple examples are very important for understanding the package.

My final goal is to integrate decision-making, a.k.a. optimization, within BN. Can you point me to any references that use your package?

Regards,
Bernardo

@tawheeler

Great!
I fixed the example when I last commented, so it should be good to go. Please let me know if you run into any other issues.

Perhaps the best example of decision making using Bayesian networks that I am aware of is the ACAS-X system for aircraft collision avoidance, which was developed with the use of Bayesian networks by Prof. Mykel Kochenderfer.

This sort of decision-making is central to our new book, available for free here. Bayesian networks can be used to represent transition distributions in MDPs. I do this under the hood for many of the examples in the chapter on imitation learning.

Cheers,
-Tim

@bernardokp

Hello Tim,

all looking good, no further questions regarding this issue. Thanks again for your help!

Those are great pointers for decision-making/optimization with BNs. I will definitely take a look at the article, and I have downloaded the book. I will check the imitation learning section, for sure.

I will keep exploring the package and get back to you if I run into other issues.

Best,
Bernardo
