which one to consider - Rsquare or Rsquare(adj) ?


4 replies to this topic

#1 ARINDAM PAL

ARINDAM PAL

    Member

  • Members
  • 10 posts

Posted 17 December 2013 - 08:14 AM

Sir/Ma'am

When determining the elimination phase of a concentration-time profile, some people use R-sq while others use R-sq(adj). Some keep the R-sq cut-off value at 0.8 while others keep it at 0.75.

 

Can anybody kindly tell me which one to consider: R-sq or R-sq(adj)?

It is understood that R-sq(adj) retains only those predictor variables (time, in this case) which actually explain the variability of the dependent variable (concentration, in this case), discarding variables that merely confound.

In this case, when "time" is the only variable determining the association with "conc", where do the other, unreasonable variables come from?

If that is so, can we not use R-sq alone rather than R-sq(adj)?

Please explain.

 

The other question is the correlation cut-off: should it be 0.75, 0.8, or something else? Is there any scientific basis for choosing 0.8 or 0.75? If yes, what is it?

If I keep 0.75 (R-sq) the correlation level is 83%, and if I keep 0.8 the correlation level is 96%. How do regulators judge which is the appropriate level?
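If "correlation level" here means the Pearson correlation coefficient r of the regression, then for a simple linear regression |r| = sqrt(R-sq), and the two cut-offs can be converted directly. A quick sketch (not tied to any particular software; the conversion assumes the simple-regression identity):

```python
import math

# For a simple linear regression, |r| = sqrt(R-sq), so each
# R-sq cut-off implies a correlation level:
for rsq in (0.75, 0.80):
    print(f"R-sq = {rsq:.2f}  ->  |r| = {math.sqrt(rsq):.3f}")
# R-sq = 0.75  ->  |r| = 0.866
# R-sq = 0.80  ->  |r| = 0.894
```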

 

Regards

Arindam



#2 Simon Davis

Simon Davis

    Advanced Member

  • Administrators
  • 1,326 posts

Posted 17 December 2013 - 10:25 AM

Arindam, to my knowledge the regulatory authorities do not set hard limits on these. I personally use the R-sq(adj) values as guidance and write my analysis plan to that effect.

e.g. It is recommended that an elimination half-life is only calculated if the R-sq(adj) value for the regression line is >0.7. If R-sq(adj) values of <0.7 are accepted, then these must be described in the study report and the resulting elimination half-life interpreted with caution.

Note that using 0.7 as my guiding cut-off is as arbitrary as 0.75, 0.8 or anything else, because we are applying noncompartmental techniques, which effectively assume one-compartment kinetics, to drugs whose profiles *may* show more than one apparent rate of elimination. If you set a hard limit of e.g. 0.75, would you really consider 0.7499 a worse fit than another profile at 0.7500? I think there are other questions to consider around goodness of fit, and they relate to what you are using that regression for, namely extrapolating AUC to infinity, so I would also want to consider in my report:

a) %AUC extrapolated, e.g. aim to have less than 30%

b) the period of estimation, e.g. I would consider the elimination half-life to be calculated with reasonable accuracy when it is estimated over a period of at least three half-lives

c) outlier values: if the inclusion of such a data point appears to bias the value of the elimination half-life, I would exclude it from the regression analysis (but not from the full PK analysis); however, I appreciate some QA groups are uncomfortable with this

d) comparison and consistency with other individuals in the group: do you suspect a switched sample (see above)?

(Also consider an ascending-dose study: the APPARENT elimination half-life is calculated at each dose level even though this may not represent the terminal elimination half-life seen at higher dose levels, so it is important for the pharmacokineticist to interpret the data with care and to state clearly in the study report if trends seen in the data may be due to assay-sensitivity effects rather than actual dose-dependent trends in clearance.)
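Criteria (a) and (b) above are simple enough to check programmatically. A minimal sketch in Python (function names are my own; the AUC tail uses the standard Clast/lambda_z extrapolation with a linear trapezoidal AUC to tlast):

```python
import math

def pct_auc_extrapolated(times, concs, lambda_z):
    """Criterion (a): percentage of AUC(0-inf) extrapolated beyond the
    last measured concentration (linear trapezoidal AUC to tlast,
    Clast / lambda_z for the tail)."""
    auc_tlast = sum((t2 - t1) * (c1 + c2) / 2.0
                    for t1, t2, c1, c2 in zip(times, times[1:],
                                              concs, concs[1:]))
    auc_tail = concs[-1] / lambda_z
    return 100.0 * auc_tail / (auc_tlast + auc_tail)

def spans_three_half_lives(t_first, t_last, lambda_z):
    """Criterion (b): the regression window should cover at least
    three apparent half-lives (t1/2 = ln(2) / lambda_z)."""
    return (t_last - t_first) >= 3.0 * math.log(2) / lambda_z
```

For instance, with concentrations 10, 5, 2.5 at 0, 1, 2 h and lambda_z = ln(2) per hour, `pct_auc_extrapolated` gives about 24%, which would pass a 30% acceptance limit, while `spans_three_half_lives(0, 2, math.log(2))` is False because the window covers only two half-lives.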

Simon

PS I'm taking some holiday time, so this might be my last post until 2014. I hope everyone on the board can take some time off with family and friends; best wishes for the New Year.

#3 Ana Henry

Ana Henry

    Advanced Member

  • Val_Members
  • 232 posts

Posted 18 December 2013 - 05:49 PM

Adding to this topic: the best-fit algorithm within Phoenix WinNonlin (NCA) uses R-sq(adj) to select the statistically best points for the lambda-z regression.
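The essence of that selection rule can be sketched in a few lines of Python. This is only an illustration of the idea, with all names my own; Phoenix's actual implementation differs in details (for example it excludes Cmax from candidate regressions and applies its tie tolerance exactly as documented):

```python
import math

def adjusted_rsq(rsq, n):
    # Adjusted R-squared for a two-parameter (slope + intercept) fit
    return 1.0 - (1.0 - rsq) * (n - 1) / (n - 2)

def log_linear_fit(x, y):
    # Ordinary least squares of y on x; returns slope and R-squared
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx, sxy ** 2 / (sxx * syy)

def best_fit_lambda_z(times, concs, tol=1e-4):
    """Regress ln(conc) on time over the last 3, 4, ... points and keep
    the fit with the highest adjusted R-squared; on a near-tie (within
    tol), prefer the fit with more points.  Returns (adj_rsq, n, lambda_z)."""
    logc = [math.log(c) for c in concs]
    best = None
    for n in range(3, len(times) + 1):
        slope, rsq = log_linear_fit(times[-n:], logc[-n:])
        if slope >= 0:
            continue  # the terminal phase must be declining
        adj = adjusted_rsq(rsq, n)
        if best is None or adj > best[0] + tol or (
                abs(adj - best[0]) <= tol and n > best[1]):
            best = (adj, n, -slope)
    return best
```

On a clean mono-exponential profile every candidate regression fits equally well, so the tie rule keeps the regression with the most points, which is exactly the behaviour Simon describes wanting above.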



#4 ARINDAM PAL

ARINDAM PAL

    Member

  • Members
  • 10 posts

Posted 19 December 2013 - 05:28 AM

Thanks to both Simon and Ana.
However, before departing for Christmas and New Year, I would like to return to the first question I asked, which concerns interpreting the data, let us say in the absence of the WNL algorithm that uses R-sq(adj) as a criterion for curve fitting.

It is understood that R-sq(adj) retains only those predictor variables (time, in this case) which actually explain the variability of the dependent variable (concentration, in this case), discarding variables that merely confound.

In this case, when "time" is the only variable determining the association with "conc", where do the other, unreasonable variables come from?
If that is so, can we take R-sq only and not R-sq(adj)?
Kindly explain.

Thanks
Arindam

#5 Simon Davis

Simon Davis

    Advanced Member

  • Administrators
  • 1,326 posts

Posted 19 December 2013 - 10:24 AM

Arindam,

 

I don't understand your reasoning - I have always used R-sq(adj) since it takes into account the number of points used in the estimation of lambda-z. Generally you want to evaluate this over as long a time period as possible and therefore include as many points as is reasonable.

 

I don't think I recall ever coming across a paper or group that recommended R-sq over R-sq(adj).
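The point about the number of points can be made concrete with the usual adjusted R-squared formula for a simple (slope + intercept) regression. A small sketch, using an illustrative raw R-sq of 0.95 (my own example values, not from any study):

```python
def adjusted_rsq(rsq, n):
    # Adjusted R-squared for a two-parameter (slope + intercept) regression
    return 1.0 - (1.0 - rsq) * (n - 1) / (n - 2)

# The same raw R-sq of 0.95 looks very different once the
# number of regression points is taken into account:
print(adjusted_rsq(0.95, 3))   # 3 points  -> 0.90
print(adjusted_rsq(0.95, 6))   # 6 points  -> 0.9375
print(adjusted_rsq(0.95, 10))  # 10 points -> 0.94375
```

In other words, for the same raw R-sq a three-point regression is penalised much more heavily than a ten-point one, which is why R-sq(adj) favours regressions estimated over more points.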

 

 Simon

 

PS I have already finished until the New Year so I wish you and all the other forum users (and lurkers!) a happy and relaxing break.





