
Bootstrap



#1 Georgia Charkoftaki

Advanced Member • 39 posts

Posted 19 November 2013 - 03:06 PM

Hi,

I am performing a bootstrap for my model and I want to manually exclude some subjects, as they have either too low or too high values (upper/lower limits).

Is there a criterion that I can use in order to do that?

Also, is there any relation to the return code that is in the Excel file of the raw data?

 

Thanks,

Georgia



#2 Samer Mouksassi

Advanced Member • 90 posts

Posted 19 November 2013 - 03:44 PM

Hi Georgia,

It is not advisable to exclude some subjects from your bootstrap resampling.

If a subject was judged to be an outlier, remove it from your analysis, refit a final model, and then bootstrap this model that does not include the suspect subjects.

Phoenix will not give you the parameter values by subject. What kind of model and bootstrap setup do you have, and on what criteria do you judge that a subject needs to be excluded?

Please include some outputs so we are able to help more.



#3 Georgia Charkoftaki

Advanced Member • 39 posts

Posted 19 November 2013 - 03:52 PM

Hi,

I didn't mean excluding subjects, but bootstrap samples.

For example, if a specific bootstrap replicate has failed to fit, surely we have to exclude it; otherwise the statistics are biased. Is there any criterion to choose the successful runs?

Thanks,

Georgia



#4 Samer Mouksassi

Advanced Member • 90 posts

Posted 19 November 2013 - 03:57 PM

Hi Georgia,

A commonly used approach is to use all replicates that provide parameter estimates, regardless of whether the minimization was successful.

Others use more stringent criteria.

You can read the raw results file from the bootstrap. It is a CSV file with each row holding the parameter values of one replicate.

Typically we are interested in the 95% CI of the parameters, which is computed by keeping the 2.5th and 97.5th percentiles.

You can develop your own workflow starting from all replicate results, work out some exclusion criteria, and then compute your 95% CI.

Please refer to the meanings of the return codes for more details on what counts as a successful run.

A return code of 1 indicates no issues.
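
Here is a minimal sketch of that kind of post-processing in Python with pandas, assuming the raw bootstrap results were exported to a CSV with one row per replicate, a ReturnCode column, and one column per parameter; the file name, column names, and parameter names are hypothetical placeholders, not Phoenix defaults:

```python
import pandas as pd

# Load the raw bootstrap results exported from the bootstrap run
# (the file name and column names below are hypothetical placeholders).
results = pd.read_csv("bootstrap_raw_results.csv")

# Keep only replicates that meet the chosen success criterion,
# e.g. return code 1 = minimization reported no issues.
ok = results[results["ReturnCode"] == 1]

# Nonparametric 95% CI per parameter: the 2.5th and 97.5th
# percentiles of the retained replicate estimates.
params = ["CL", "V"]  # replace with your model's parameter names
ci = ok[params].quantile([0.025, 0.975])

print(f"{len(ok)} of {len(results)} replicates retained")
print(ci)
```

Any stricter exclusion rule can be applied the same way, by filtering the replicate rows before taking the percentiles.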



#5 vaishnavi ganti

Member • 12 posts

Posted 09 July 2018 - 04:55 PM


Hi Samer,

Would you suggest we use dose/group as a covariate so that all the subjects are included all the time?

Please let me know.

Thank you



#6 smouksassi1

Advanced Member • 231 posts • Location: Montreal

Posted 25 July 2018 - 02:49 PM

Stratification is a common technique that ensures you do not end up with empty groups of interest when your N is low.

For example, if you have an ESRD effect but only 5 of 100 subjects had ESRD = 1, you can easily end up with a bootstrap sample containing no ESRD patients if you do not stratify on it.
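
As a rough sketch of what stratification means in practice (Python with pandas; the subject data and column names are hypothetical, and this is not how Phoenix implements it internally), resampling subjects with replacement within each stratum keeps the number of ESRD subjects fixed in every bootstrap sample:

```python
import pandas as pd

def stratified_bootstrap_sample(subjects, stratum_col, random_state=None):
    """Resample subjects with replacement within each stratum so that
    stratum sizes (e.g. the number of ESRD subjects) are preserved."""
    return (
        subjects.groupby(stratum_col, group_keys=False)
        .apply(lambda g: g.sample(n=len(g), replace=True, random_state=random_state))
    )

# Hypothetical subject-level data: 5 of 100 subjects have ESRD = 1.
subjects = pd.DataFrame({"ID": range(1, 101), "ESRD": [1] * 5 + [0] * 95})

boot = stratified_bootstrap_sample(subjects, "ESRD", random_state=42)
print(boot["ESRD"].sum())  # always 5 ESRD subjects in each bootstrap sample
```

Without the per-stratum resampling, a plain bootstrap over all 100 subjects will sometimes draw zero ESRD patients, and the ESRD effect cannot be estimated in those replicates.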

 

But regardless, you need to be careful with a low N and the bootstrap, where some rules/assumptions might not hold.

 

Samer





