Dear @rob-p
I have looked for this question in your paper and in the Q&A here, but couldn't find my answer.
I will be grateful if you could help me with that.
I know the bootstrap and Gibbs methods conceptually pretty well, but I am not sure how they are used in this context.
When I run with --numBootstraps or --numGibbsSamples, a file named cmd_info.json gives the number of bootstraps used, and meta_info.json gives the sampling type, like:
"samp_type": "bootstrap"
These two methods are approaches to assigning confidence to the estimates returned by the main inference algorithm.
Where can we see the confidence criteria (for example, the variance) for these two methods? The JSON files only give information about the sampling method (Gibbs or bootstrap) and the number of bootstraps, and nothing about the variance or other criteria.
As mentioned here, when Salmon does a bootstrap, "the bootstrap sampling process works by sampling (with replacement) counts for each equivalence class, and then re-running the offline inference procedure (either the EM or VBEM algorithm) for each bootstrap sample."
When Salmon does this sampling, does it sample over equivalence classes or over individual transcripts? That is, does it pick a whole class as part of the sample and then re-run the EM?
Do we expect the same abundances for each sample as the ones we got in the final stage of the algorithm? Where can we see the variance?
Can we say that Salmon uses the final abundance estimates to re-evaluate the abundances for each sub-sample it picks, and that we expect convergence to the same values for each sub-sample as for the final estimates?
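Just to check my understanding of the bootstrap part, here is a toy sketch of what I picture happening. Everything here is made up for illustration (two transcripts, three equivalence classes, a plain EM with uniform class weights and no length correction); I am not claiming this is Salmon's actual implementation, only the scheme I understood from the quote above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 3 equivalence classes over 2 transcripts.
# Each class has a fragment count and the set of transcripts its
# fragments are compatible with (all values are made up).
eq_counts = np.array([100, 50, 30])   # fragments per class
eq_members = [[0], [1], [0, 1]]       # transcript ids per class
n_txps = 2

def em(counts, members, n_txps, n_iter=200):
    """Plain EM over equivalence classes (no weights, no lengths)."""
    eta = np.full(n_txps, 1.0 / n_txps)       # current abundances
    for _ in range(n_iter):
        alloc = np.zeros(n_txps)
        for c, m in zip(counts, members):
            if c == 0:
                continue                      # empty class, nothing to split
            w = eta[m] / eta[m].sum()         # E-step: split class count
            alloc[np.array(m)] += c * w       # fractional assignment
        eta = alloc / alloc.sum()             # M-step: renormalize
    return eta

# Point estimate on the observed counts.
point = em(eq_counts, eq_members, n_txps)

# Bootstrap: resample the class COUNTS with replacement (a multinomial
# over classes), then re-run the full EM on each resampled count vector.
total = int(eq_counts.sum())
p = eq_counts / total
reps = np.array([em(rng.multinomial(total, p), eq_members, n_txps)
                 for _ in range(20)])

print("point estimate:", point)
print("bootstrap variance per transcript:", reps.var(axis=0))
```

In other words, my reading is that the unit being resampled is the equivalence-class count, not individual transcripts, and the per-transcript variance then comes from the spread of the EM results across replicates. Is that right?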
For Gibbs, as mentioned in the paper, "it draws samples from the posterior distribution using Gibbs sampling to sample, in turn, from the transcript abundances given the fragment assignments, and then to re-assign the fragments within each equivalence class given these abundances."
Is it fair to say that it is similar to the bootstrap, but uses the distribution of abundances to pick the samples?
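And here is the analogous toy sketch of how I read the Gibbs description: alternate between drawing abundances given the current fragment assignments and re-assigning each class's fragments given those abundances. Again, the data and the specific choices (flat Dirichlet prior, burn-in length) are my own assumptions for illustration, not Salmon's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Same hypothetical toy data: 3 equivalence classes over 2 transcripts.
eq_counts = [100, 50, 30]
eq_members = [[0], [1], [0, 1]]
n_txps = 2

# Initial assignment: split each class's count uniformly.
assign = np.zeros(n_txps)
for c, m in zip(eq_counts, eq_members):
    assign[np.array(m)] += c / len(m)

samples = []
for it in range(100):
    # (1) abundances | assignments: Dirichlet draw with a flat prior.
    eta = rng.dirichlet(assign + 1.0)
    # (2) assignments | abundances: multinomial split of each class.
    assign = np.zeros(n_txps)
    for c, m in zip(eq_counts, eq_members):
        w = eta[m] / eta[m].sum()
        assign[np.array(m)] += rng.multinomial(c, w)
    if it >= 50:                      # keep samples after a burn-in
        samples.append(eta)

samples = np.array(samples)
print("posterior mean:", samples.mean(axis=0))
print("posterior variance:", samples.var(axis=0))
```

So in this picture the variability comes from the posterior samples themselves rather than from resampling the observed counts, which is why I asked whether the two outputs should be interpreted the same way.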
Thanks in advance for your help and time.