Comparison between models criterion


#1 Crystal

    Member

  • Val_Members
  • 17 posts

Posted 08 November 2023 - 05:45 AM

Dear Forum Members,

When comparing NLME models, which criterion (-2LL, AIC, or BIC) is most commonly used, and why?

Any insights you can provide would be greatly appreciated. Thank you very much for your time.



#2 cradhakr

    Advanced Member

  • Administrators
  • 78 posts

Posted 08 November 2023 - 06:23 AM

Hi Crystal,

 

 

Here is some general information about the quantities you refer to, which you can use to decide what to do about your model. When comparing models, we expect the model with the lowest AIC or BIC to be the best among the models tested. However, if we are only considering one model, AIC and BIC cannot tell us how good the fit is: they can take on any value, very negative to very positive, and only provide relative information. So a value close to zero means nothing for a single model. The situation is the same for -2LL (minus 2 times the log likelihood).
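
To make the "relative information" point concrete, here is a minimal sketch of how AIC and BIC are computed from -2LL and compared across candidate models. The model names, -2LL values, parameter counts, and observation count are all made up for illustration; substitute the results from your own fits.

```python
import math

n_obs = 120  # hypothetical total number of observations

# name: (-2LL from the fit, number of estimated parameters k)
candidates = {
    "one_compartment": (512.4, 4),
    "two_compartment": (498.7, 6),
}

for name, (minus2ll, k) in candidates.items():
    aic = minus2ll + 2 * k                # AIC = -2LL + 2k
    bic = minus2ll + k * math.log(n_obs)  # BIC = -2LL + k*ln(N)
    print(f"{name}: AIC = {aic:.1f}, BIC = {bic:.1f}")

# Only the differences between candidates matter; the absolute values
# (very negative, very positive, or near zero) mean nothing on their own.
```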

 

For this reason, model selection usually involves comparing all reasonable candidate models. If you do that and then select the one with, for example, the lowest BIC and good diagnostic plots, that is a sound justification for your choice. If you have only a single model, you can still examine diagnostic plots such as IPRED vs. DV and check that the points lie close to the diagonal line. You can also look at IRES and count the number of times it changes sign, from positive to negative and vice versa. In your output, this happens 3 times for the 9 non-zero data points. I would say that is not many, so there could be some bias in the model: IRES stays positive for a long stretch before dipping down again.

 

So the residuals show a pattern rather than being scattered randomly in the positive and negative directions. Another thing to check is the Return Code, RetCode. In this example it is 3, which means there was some issue with the model converging. If the code is 1 or 2, I would generally trust the fit; for a 3, I would examine the model carefully; for a 4 or 5, I would definitely try again, possibly just with different initial values. You can find more details in the Phoenix Help file.
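
If you want to reproduce that sign-change count outside of Phoenix, a minimal sketch follows. The IRES values are invented to mirror the pattern described above (3 sign changes across 9 non-zero points); use the residuals exported from your own run.

```python
# Hypothetical individual residuals (IRES), one per non-zero observation.
ires = [0.8, 1.1, 0.6, 0.3, -0.4, -0.7, -0.2, 0.5, -0.1]

# A sign change occurs wherever two consecutive residuals have opposite signs.
sign_changes = sum(1 for a, b in zip(ires, ires[1:]) if a * b < 0)

print(f"{sign_changes} sign changes across {len(ires)} residuals")
# Long same-sign runs (i.e. few sign changes) hint at systematic bias in the fit.
```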

 

Thanks and regards

Mouli



#3 Simon Davis

    Advanced Member

  • Administrators
  • 1,329 posts

Posted 08 November 2023 - 07:43 AM

I would add that -2LL is a 'raw' value for assessing goodness of fit, whereas AIC and BIC introduce a penalty for an increasing number of parameters in the model; take a look at their formulas:

https://onlinehelp.c...on&rhsyns=

 

Therefore I tend to use AIC or BIC; the two are generally in agreement.
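
In case the link above does not resolve, the textbook definitions (with $k$ estimated parameters and $N$ observations; the Phoenix Help file gives the exact forms it reports) are:

$$\mathrm{AIC} = -2LL + 2k, \qquad \mathrm{BIC} = -2LL + k \ln N$$

Since $k \ln N > 2k$ whenever $N > e^2 \approx 7.4$, BIC penalizes extra parameters more heavily than AIC on all but the smallest datasets, which is why it tends to favor the more parsimonious model when the two disagree.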

 

 Simon.



#4 Crystal

    Member

  • Val_Members
  • 17 posts

Posted 09 November 2023 - 01:11 AM

Thank you for your detailed response; I really appreciate it.




#5 Crystal

    Member

  • Val_Members
  • 17 posts

Posted 09 November 2023 - 01:12 AM

Your response is also very valuable. Thank you for sharing your knowledge with me.






