Factors that influence CV%

Tags: modeling, fit

3 replies to this topic

#1 mjPK

    Member
  • Val_Members
  • 22 posts

Posted 04 January 2024 - 04:34 PM

Hi, 

 

I am looking for strategies to improve the CV% on fits that otherwise have pretty solid metrics. I will get CV percentages greater than 1000% despite good adherence to the data trends. Is my only option to play with the error model in some way? What are best practices when the CV% for some parameters is very high and the data is what it is?

 

Would a better initial guess help? 

 

Thanks in advance. 



#2 Simon Davis

    Advanced Member
  • Administrators
  • 1,318 posts

Posted 04 January 2024 - 04:47 PM

Matthew, 1000% is certainly pretty high, but it's hard to help without seeing the specifics of your problem. I doubt different initial estimates would help unless you think you may have hit a local minimum and want to try something very different to see whether the fit converges to a different minimum.
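One generic way to check for a local minimum (a sketch in Python with SciPy, not Phoenix itself) is a multi-start refit: start the optimizer from several very different initial estimates and compare where each run ends up. The one-compartment model and concentration values below are invented for illustration.

```python
# Multi-start check for local minima on a simple sum-of-squares
# objective for a one-compartment IV bolus model (invented data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
dose = 100.0
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # sparse sampling times
c = np.array([1.25, 1.15, 0.95, 0.65, 0.30])   # made-up concentrations

def sse(p):
    v, cl = p
    pred = (dose / v) * np.exp(-(cl / v) * t)
    return np.sum((c - pred) ** 2)

# Five very different starting points for (V, Cl), then compare minima.
starts = rng.uniform([10.0, 1.0], [200.0, 100.0], size=(5, 2))
fits = [minimize(sse, x0, method="L-BFGS-B",
                 bounds=[(1.0, 500.0), (0.1, 200.0)]) for x0 in starts]
for f in fits:
    print(f"V={f.x[0]:7.2f}  Cl={f.x[1]:6.2f}  SSE={f.fun:.6f}")
```

If all starts land on essentially the same (V, Cl) and SSE, a local minimum is unlikely; if they scatter, the surface has competing minima and the "best" fit deserves scrutiny.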

If you can post the project, others may be able to help more, e.g. by suggesting a parameter that could be frozen to reduce the complexity of the model.

How many subjects/data points do you have versus model parameters? Are you fitting in individual or population mode?

 

Happy New Year, Simon.



#3 mjPK

    Member
  • Val_Members
  • 22 posts

Posted 04 January 2024 - 04:58 PM

@Simon 

 

I attached a small project exhibiting the behavior. I will have one parameter well matched, say Volume, while the other parameters have very high numbers. The fit looks fine graphically, but not much else does.

 

Now, I have a small amount of data, so maybe that is all there is to it, but I am wondering whether there are other strategies to employ to get the CV values down a bit.

Attached Files



#4 Simon Davis

    Advanced Member
  • Administrators
  • 1,318 posts

Posted 16 January 2024 - 02:54 PM

If you have data from more subjects, then trying a NaivePooled or full population analysis would make sense.

 

As it stands, there's not enough information to pin down the parameter values, since multiple sets of parameters could produce similar predictions. That's why the uncertainty is high.
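To illustrate where those uncertainty numbers come from (a generic sketch with invented data, not Phoenix output): the standard errors, and hence CV% = 100 × SE / estimate, are derived from the covariance matrix of the fit. With a model the data can support, the CVs stay modest; swap in a richer model against the same handful of points and the covariance entries, and the CVs, blow up.

```python
# Fit a one-compartment IV bolus model C(t) = (Dose/V)*exp(-(Cl/V)*t)
# to a handful of invented points, then derive each parameter's CV%
# from the covariance matrix returned by the fit.
import numpy as np
from scipy.optimize import curve_fit

dose = 100.0
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # sparse sampling times
c = np.array([1.25, 1.15, 0.95, 0.65, 0.30])   # made-up concentrations

def model(t, v, cl):
    return (dose / v) * np.exp(-(cl / v) * t)

popt, pcov = curve_fit(model, t, c, p0=[70.0, 20.0])
se = np.sqrt(np.diag(pcov))            # standard errors from covariance
cv_pct = 100.0 * se / np.abs(popt)     # CV% = 100 * SE / estimate
for name, est, cv in zip(["V", "Cl"], popt, cv_pct):
    print(f"{name}: estimate = {est:.3g}, CV% = {cv:.3g}")
```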

 

Right now, the model seems overparameterized, since these data could be described by just a one-compartment model; but if you want to build up a PBPK-type model with this amount of data, then I think you have to cope with this lack of identifiability.

 

Parameter   Estimate     Units   Stderr       CV%         2.5% CI      97.5% CI    Var. Inf. factor
tvV         74.213119            5.0646589    6.8244793   60.151371    88.274866   69.763
tvCl        56.662192            7.5045942    13.244447   35.826099    77.498286   160.83
stdev0      0.62329607           0.16658241   26.726048   0.16078917   1.085803
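As a sanity check on the table, the CV% column is simply 100 × Stderr / Estimate; e.g. for the tvV row:

```python
# CV% = 100 * Stderr / Estimate, using the tvV row from the table.
estimate = 74.213119
stderr = 5.0646589
cv_pct = 100.0 * stderr / estimate
print(f"CV% = {cv_pct:.4f}")  # -> CV% = 6.8245, matching the table
```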
Simon.

 








