Question about Adjusted R2


14 replies to this topic

#1 xli@levenabiopharma.com

xli@levenabiopharma.com

    Member

  • Members
  • PipPip
  • 14 posts

Posted 27 December 2020 - 07:26 PM

Hello Experts,

 

I am confused by the meaning of Adjusted R2 on page 263 of the Phoenix WinNonlin user guide. I have two questions about adjusted R2.

 

Here is the sentence:

In the Rsq_adjusted field, type 0.97 to flag any profile with an Rsq_adjusted value greater than or equal to this value.

Profiles that break the rule are flagged in the output and can be quickly filtered out of the results. The process will be illustrated later in this example.

 

Based on the sentences above, all the profiles whose Adjusted R2 is greater than 0.97 will be excluded from the calculation of PK parameters.

1. Is my understanding right or not?

 

Also, a definition of adjusted R2 I found online is below.

R2 shows how well terms (data points) fit a curve or line. Adjusted R2 also indicates how well terms fit a curve or line, but adjusts for the number of terms in a model. If you add more and more useless variables to a model, adjusted r-squared will decrease. If you add more useful variables, adjusted r-squared will increase. Adjusted R2 will always be less than or equal to R2.

According to this definition of adjusted R2, the higher the adjusted R2, the more useful variables the model contains.

2. Am I right on this concept?

 

 

Thank you very much in advance!

 

Xiaoqing

 

 

 

 

 



#2 cradhakr

cradhakr

    Advanced Member

  • Members
  • PipPipPip
  • 56 posts

Posted 27 December 2020 - 09:52 PM

(quoting post #1 in full)

Hi Xiaoqing,

 

Question 1:

 

 

In the Rsq_adjusted field, type 0.97 to flag any profile with an Rsq_adjusted value greater than or equal to this value.

Profiles that break the rule are flagged in the output and can be quickly filtered out of the results. The process will be illustrated later in this example.

 

 

I have attached an image. In the top part of the image, with "Rsq_adjusted" greater than or equal to 0.98, you can see that only three profiles are flagged as "Accepted," while when the value is 0.95, all the profiles are flagged as "Accepted." I hope this answers your question.

Forum)1.png
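The flagging rule can be sketched in a few lines. This is a hypothetical illustration (the profile names and Rsq_adjusted values are invented, and the real Phoenix output contains more columns), but it shows why raising the threshold from 0.95 to 0.98 shrinks the set of "Accepted" profiles:

```python
# Hypothetical Rsq_adjusted values per profile (invented for illustration;
# not taken from any Phoenix output).
profiles = {
    "Subject 1": 0.991,
    "Subject 2": 0.973,
    "Subject 3": 0.948,
    "Subject 4": 0.982,
}

def flag_profiles(rsq_adjusted_by_profile, threshold):
    """Mark each profile 'Accepted' when Rsq_adjusted >= threshold, else ''."""
    return {
        name: "Accepted" if rsq >= threshold else ""
        for name, rsq in rsq_adjusted_by_profile.items()
    }

flags_095 = flag_profiles(profiles, 0.95)  # only Subject 3 is not accepted
flags_098 = flag_profiles(profiles, 0.98)  # only Subjects 1 and 4 are accepted
```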

*****************

Question 2:

 

The R2 value indicates how well the predicted concentrations fit your observed ones (see the image below).

rsqr.png

 

 

Thanks

Mouli



#3 xli@levenabiopharma.com

xli@levenabiopharma.com

    Member

  • Members
  • PipPip
  • 14 posts

Posted 28 December 2020 - 02:08 AM

Hi Mouli,

 

Thank you very much for your kind help.

 

Unfortunately, I could not open the first attached figure.

 

Is it possible to send the figure to my email address?

 

My email address is xli@sorrentotherapeutics.com.

 

Thank you again!

 

xiaoqing



#4 Simon Davis

Simon Davis

    Advanced Member

  • Administrators
  • 1,255 posts

Posted 03 January 2021 - 12:25 PM

Hi Xiaoqing,

 

I am posting the online help link I would look at: https://onlinehelp.c...justed&rhsyns=

 

I agree the phrasing is a bit confusing; the rule is to set the ACCEPTABLE values.

 

Note that if you follow the worked example indicated in the help, it will probably make more sense:

 

C:\Program Files (x86)\Certara\Phoenix\application\Examples\WinNonlin\NCA.phxproj.

 

  Simon.

Attached Thumbnails

  • rsq_adjusted.jpg


#5 warrensacko

warrensacko

    Newbie

  • Members
  • Pip
  • 1 posts

Posted 06 June 2022 - 06:17 AM

R-squared refers to the 'goodness' of fit for a particular model with no regard for the number of independent variables, whereas adjusted r-squared takes into account the number of independent variables.
 
So if you have a regression equation such as
 
y = mx + nx1 + ox2 + b
 
The r-squared will tell you how well that equation describes your data. If you add more independent variables (p, q, r, s, ...) then the r-squared value will improve, because you are in essence defining your sample data more specifically. Using the adjusted R-squared metric instead takes into account that you have added more independent variables and will 'penalize' the result for each added variable that doesn't fit the sample data. This is a good way to test the variables: either add them in one at a time and check when the adj-R2 starts to deteriorate, or start with all the variables and remove one at a time until the adj-R2 no longer improves.
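A quick numerical sketch of this penalty, using ordinary least squares on simulated data (the data, seed, and variable counts are invented for illustration):

```python
import numpy as np

def r_squared(y, y_hat):
    """Plain R²: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

def adjusted_r_squared(y, y_hat, p):
    """Adjusted R² with p predictors (not counting the intercept)."""
    n = len(y)
    r2 = r_squared(y, y_hat)
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

def fit(X, y):
    """OLS with an intercept column; returns the fitted values."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

rng = np.random.default_rng(0)
n = 30
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)      # one truly useful predictor

y_hat_1 = fit(x.reshape(-1, 1), y)               # model with 1 predictor
junk = rng.normal(size=(n, 5))                   # five useless predictors
y_hat_6 = fit(np.column_stack([x, junk]), y)     # model with 6 predictors

r2_1, r2_6 = r_squared(y, y_hat_1), r_squared(y, y_hat_6)
adj_1 = adjusted_r_squared(y, y_hat_1, 1)
adj_6 = adjusted_r_squared(y, y_hat_6, 6)
# Plain R² can only go up when predictors are added to a nested OLS model;
# adjusted R² applies a penalty that grows with the number of predictors.
```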
 
 


#6 Helmut Schütz

Helmut Schütz

    Advanced Member

  • Members
  • PipPipPip
  • 308 posts
  • LocationVienna, Austria

Posted 09 June 2022 - 10:46 PM

Correct, of course.

 

Coming back to the original topic: IMHO, a cut-off for R2 is nonsense. See there.

All too often users trust in algorithms and don’t read the manual, which clearly states:

Using this methodology, Phoenix will almost always compute an estimate for Lambda Z. It is the user’s responsibility to evaluate the appropriateness of the estimated value.

 

Any algorithm might fail on ‘flat’ profiles (controlled release formulations with flip-flop PK) or on profiles of multiphasic release formulations. Hence, visual inspection of fits has been recommended for ages.
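For illustration, here is a rough sketch of such a best-fit search: a log-linear regression on the last n points for increasing n, keeping the fit with the largest adjusted R². This mimics the methodology described in the user guide but is not Phoenix's actual implementation (which, for example, excludes Cmax from the fit by default and applies additional tie-breaking rules):

```python
import math

def lambda_z_best_fit(times, concs, min_points=3):
    """Sketch of a 'best fit' terminal-slope search: regress ln(C) on t for
    the last n points (n = min_points, min_points + 1, ...), compute the
    adjusted R² of each fit, and keep the fit with the largest adjusted R²."""
    best = None
    for n in range(min_points, len(times) + 1):
        t = times[-n:]
        ln_c = [math.log(c) for c in concs[-n:]]
        t_mean = sum(t) / n
        y_mean = sum(ln_c) / n
        sxx = sum((ti - t_mean) ** 2 for ti in t)
        sxy = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, ln_c))
        slope = sxy / sxx
        intercept = y_mean - slope * t_mean
        ss_res = sum((yi - (intercept + slope * ti)) ** 2
                     for ti, yi in zip(t, ln_c))
        ss_tot = sum((yi - y_mean) ** 2 for yi in ln_c)
        r2 = 1 - ss_res / ss_tot
        adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)  # 1 predictor: time
        if best is None or adj_r2 > best["adj_r2"]:
            best = {"n": n, "lambda_z": -slope, "adj_r2": adj_r2}
    return best

# Invented mono-exponential decline with lambda_z = 0.1 per hour:
# the search recovers the slope, and the fits are essentially perfect.
times = [4, 6, 8, 10, 12, 24]
concs = [100 * math.exp(-0.1 * t) for t in times]
fit = lambda_z_best_fit(times, concs)
```

Exactly because such a search almost always returns *some* estimate, the fitted points should still be inspected visually, as the manual quoted above demands.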

  1. Schulz H-U, Steinijans VW. Striving for standards in bioequivalence assessment: a review. Int J Clin Pharm Ther Toxicol. 1991; 29(8): 293–8. PMID: 1743802.
  2. Sauter R, Steinijans VW, Diletti E, Böhm E, Schulz H-U. Presentation of results from bioequivalence studies. Int J Clin Pharm Ther Toxicol. 1992; 30(7): 233–56. PMID: 1506127.
  3. Hauschke D, Steinijans VW, Pigeot I. Bioequivalence Studies in Drug Development. Chichester: Wiley; 2007. p. 131.
  4. Scheerans C, Derendorf H, Kloft C. Proposal for a Standardised Identification of the Mono-Exponential Terminal Phase for Orally Administered Drugs. Biopharm Drug Dispos. 2008; 29(3): 145–57. doi:10.1002/bdd.596.

Our brain is an excellent pattern recognition machine.


  • Simon Davis likes this

 Best regards,
Helmut

https://forum.bebac.at/


#7 Simon Davis

Simon Davis

    Advanced Member

  • Administrators
  • 1,255 posts

Posted 10 June 2022 - 07:30 AM

Very good points, Helmut. The software is only a tool; you must think and use the tool appropriately. I'll try to remember to repeat that in a few weeks' time. Hope you're well.

 

The session will be recorded if you can't make it and I will update this post with a link.

Attached Thumbnails

  • SOC_WEB_Phoenix.png


#8 Helmut Schütz

Helmut Schütz

    Advanced Member

  • Members
  • PipPipPip
  • 308 posts
  • LocationVienna, Austria

Posted 13 June 2022 - 03:10 PM

Hi Simon!

 

Very good points, Helmut. The software is only a tool; you must think and use the tool appropriately. I'll try to remember to repeat that in a few weeks' time. Hope you're well.

 

The session will be recorded if you can't make it and I will update this post with a link.

 

I’ll try to watch it. BTW, I sneaked around the corner (PKanalix). Like in PHX/WNL, the bloody linear trapezoidal is the default (did you see this one?). When it comes to the estimation of λz, the trainers pointed out that inspection of the fits is important.


 Best regards,
Helmut

https://forum.bebac.at/


#9 0521

0521

    Advanced Member

  • Members
  • PipPipPip
  • 40 posts

Posted 16 June 2022 - 05:00 AM

Hi Simon!

 

 

I’ll try to watch it. BTW, I sneaked around the corner (PKanalix). Like in PHX/WNL, the bloody linear trapezoidal is the default (did you see this one?). When it comes to the estimation of λz, the trainers pointed out that inspection of the fits is important.

 

IMHO, I disagree with some of your points.
 
1. The most critical assumption, the cornerstone of your full text:

Apart from the few drugs which are subjected to capacity limited elimination like C2H5OH,
any pharmacokinetic model can be simplified to a sum of exponentials
jpg.jpg

 

 

 

 
This is very, very obviously wrong! ("any" should be "some," or "any pharmacokinetic model of chemical drugs on the market.")
 
As we all know, all pharmacokinetic processes are nonlinear, and the linearity we observe is just an accident and a coincidence. When the response lies in the 20-80% range of the Emax model, we can treat it as approximately linear. That is why almost all of our drugs undergo dose-escalation trials to look for so-called linear ranges (forgive my ignorance, I can't find a quote).
 
2. So, after that, all your conclusions rest on the assumption that "the observed blood concentration data are all within the linear range of the drug."
It is within the linear range → so the elimination is obviously first-order; and further, if the elimination is first-order → then it is naturally better to use the logarithmic trapezoidal method.
 
3. As we all know, the study of pharmacokinetics is a gradual process. At first we know nothing about the kinetics of a drug; in the end, we can even accurately predict the concentration-time curve of an individual from whom no blood samples were collected. Generally speaking, pharmacokinetic studies can be divided into:
Scenario 1, exploratory research (very little prior information, or none at all);
Scenario 2, research to improve accuracy (there is a lot of prior information, but we need more data to improve the accuracy of our model).
 
For scenario 1: when you don't know the result, whether you use the linear trapezoidal or the logarithmic trapezoidal, you are throwing dice, because you cannot evaluate the accuracy.
For scenario 2: obviously, use the logarithmic trapezoidal for linear elimination and the linear trapezoidal for nonlinear elimination! This also makes it very clear that the logarithmic trapezoidal is not always the best.
 
So, the thing that should be thrown into the dustbin is "I have a correct model."
Essentially, all models are wrong, but some are useful.

 

— George E Box. In: Box GEP, Draper NR. Empirical Model-Building and Response Surfaces. New York: Wiley; 1987. p. 424.

 

 
Of course, if you mainly deal with BE trials of chemical drugs, then congratulations! You are mainly engaged in "precision improvement research," and it just so happens that those drugs almost all show linear elimination, which lets you repeatedly confirm in practice your great conclusion that "all drugs show linear elimination," and then makes you feel confident in your own experience.
 
Sincerely,
0521

Attached Thumbnails

  • jpg.jpg

Edited by 0521, 17 June 2022 - 04:32 PM.


#10 Helmut Schütz

Helmut Schütz

    Advanced Member

  • Members
  • PipPipPip
  • 308 posts
  • LocationVienna, Austria

Posted 16 June 2022 - 08:11 AM

Hi 0521,

 

 

IMHO, your article on "Trapezoidal Rules" is not well written. It may be introductory reading for beginners, but it is by no means a piece of scientific advice.

 

Sorry, couldn’t do better.

 

 

1. The most critical assumption about the cornerstone of your full text:

any pharmacokinetic model can be simplified to a sum of exponentials

 
This is very, very obviously wrong!

 

Unfortunately you dropped the first part of the sentence. In its entirety:

Apart from the few drugs which are subjected to capacity limited elimination like C2H5OH, any pharmacokinetic model can be simplified to a sum of exponentials.

 

 

2. So, after that, all your conclusions rest on the assumption that "the observed blood concentration data are all within the linear range of the drug."
It is within the linear range → so the elimination is obviously first-order; and further, if the elimination is first-order → then it is naturally better to use the logarithmic trapezoidal method.
 
[…] For scenario 2: obviously, use the logarithmic trapezoidal for linear elimination and the linear trapezoidal for nonlinear elimination! This also makes it very clear that the logarithmic trapezoidal is not always the best.
 
Of course, if you mainly deal with BE trials of chemical drugs, then congratulations! You are mainly engaged in "precision improvement research," and it just so happens that those drugs almost all show linear elimination, which lets you repeatedly confirm in practice your great conclusion that "all drugs show linear elimination," and then makes you feel confident in your own experience.

 

THX. In the overview I stated:

The content expressed in the articles presents my sole personal opinion.

And yes, I’m dealing with BE. Of course, I didn’t state “all drugs are linear elimination” anywhere.

 

All the models are wrong, and some of them are useful!

 

If you quote, please correctly:

[…] all models are approximations. Essentially, all models are wrong, but some are useful.
— George E Box. In: Box GEP, Draper NR. Empirical Model-Building and Response Surfaces. New York: Wiley; 1987. p. 424.


 Best regards,
Helmut

https://forum.bebac.at/


#11 Helmut Schütz

Helmut Schütz

    Advanced Member

  • Members
  • PipPipPip
  • 308 posts
  • LocationVienna, Austria

Posted 19 June 2022 - 02:08 PM

Hi 0521,

 

I updated the article. Hope, you can live with it.


 Best regards,
Helmut

https://forum.bebac.at/


#12 0521

0521

    Advanced Member

  • Members
  • PipPipPip
  • 40 posts

Posted 19 June 2022 - 04:10 PM

Hi Helmut Schütz,

 

Hi 0521,

 

I updated the article. Hope, you can live with it.

 

1.

 

Nowadays, the linear trapezoidal rule is of historical interest only and should be thrown into the pharmacometric waste container.

🚮

Abandon calculating the AUC by the linear trapezoidal rule. Your results will inevitably be biased and lead to trouble, especially if values are missing.

If your SOP still calls for the linear trapezoidal rule, it is time for an update to follow science instead of a rather outdated custom.

  
You know, software is a machine, not a pharmacologist, and software users are pharmacologists.
 
So, I think you should tell the reader very clearly that the premise assumed when calculating the AUC using this method is that "the pharmacokinetic model can be simplified to a sum of exponentials."
 

 

Nowadays, when the reader also agrees that the drug being analyzed conforms to the description "the pharmacokinetic model can be simplified to a sum of exponentials," the linear trapezoidal rule is of historical interest only and should be thrown into the pharmacometric waste container.

🚮

Abandon calculating the AUC by the linear trapezoidal rule. Your results will inevitably be biased and lead to trouble, especially if values are missing.

If your SOP still calls for the linear trapezoidal rule, it is time for an update to follow science instead of a rather outdated custom.

 

 

 

2. Even if the hope is slim, let me try once again to convince you of the linear trapezoidal method.

 

The simplest model that solves the problem is the best model! (Forgive my ignorance, I can't find a quote.)

Which is relatively simpler, "linear trapezoidal" or "linear up / log down"?

 

Sincerely,
0521


#13 Helmut Schütz

Helmut Schütz

    Advanced Member

  • Members
  • PipPipPip
  • 308 posts
  • LocationVienna, Austria

Posted 20 June 2022 - 12:13 PM

Hi 0521,

 

You know, software is a machine, not a pharmacologist, and software users are pharmacologists.
 
So, I think you should tell the reader very clearly that the premise assumed when calculating the AUC using this method is that "the pharmacokinetic model can be simplified to a sum of exponentials."

 

Software ≠ machine, the hardware is. Trained wetware is a prerequisite.

 

 

2. Even if the hope is slim, let me try once again to convince you of the linear trapezoidal method.

The simplest model that solves the problem is the best model! (Forgive my ignorance, I can't find a quote.)

Which is relatively simpler, "linear trapezoidal" or "linear up / log down"?

 

It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.

— Albert Einstein, Herbert Spencer Lecture “On the Method of Theoretical Physics”, Oxford, June 10, 1933.

 

Often misquoted and simplified [sic] to:

Everything should be made as simple as possible, but not simpler.

 

Of course, one should know what to do. In my very first paper (dealing with phenytoin in 1984) I applied the linear trapezoidal. Not because I couldn’t do otherwise – I have used linear-up / log-down since 1981 – but because I was aware of phenytoin’s nonlinear PK.

 

All my articles should be read against the background of bioequivalence. Drugs with nonlinear PK are rare. See this example dealing with missing values – they happen all the time. The cards are stacked against you with the linear trapezoidal.
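A small sketch makes the bias concrete. Assuming a mono-exponential decline (invented numbers), the log trapezoid reproduces the true AUC of every declining segment exactly, while the linear trapezoid overestimates it, and the overestimate grows when a sample is missing:

```python
import math

def auc_linear(t, c):
    """Linear trapezoidal rule."""
    return sum((c[i] + c[i + 1]) / 2 * (t[i + 1] - t[i])
               for i in range(len(t) - 1))

def auc_lin_up_log_down(t, c):
    """Linear-up / log-down rule: linear trapezoid while concentrations rise
    (or touch zero), log trapezoid while they fall."""
    total = 0.0
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]
        c1, c2 = c[i], c[i + 1]
        if c2 < c1 and c2 > 0:
            total += dt * (c1 - c2) / math.log(c1 / c2)  # log trapezoid
        else:
            total += dt * (c1 + c2) / 2                  # linear trapezoid
    return total

# Invented mono-exponential decline, lambda = 0.2 per hour; the true AUC
# over [0, 12] is (C0 / lambda) * (1 - exp(-lambda * 12)).
t = [0, 2, 4, 8, 12]
c = [100 * math.exp(-0.2 * ti) for ti in t]
true_auc = 100 / 0.2 * (1 - math.exp(-0.2 * 12))
lin = auc_linear(t, c)
log_down = auc_lin_up_log_down(t, c)

# Drop the 8 h sample: the linear rule's overestimate gets worse, while the
# log-down rule is still exact for an exponential decline.
t2 = [0, 2, 4, 12]
c2 = [100 * math.exp(-0.2 * ti) for ti in t2]
lin_missing = auc_linear(t2, c2)
log_missing = auc_lin_up_log_down(t2, c2)
```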


 Best regards,
Helmut

https://forum.bebac.at/


#14 0521

0521

    Advanced Member

  • Members
  • PipPipPip
  • 40 posts

Posted 20 June 2022 - 01:33 PM

Hi Helmut Schütz,

(quoting post #13 in full)

 

When you have hypothesized:

jpg.jpg
 
why don't you directly apply compartmental model analysis?
 
Or do you think the compartment model is not simple enough and should be abolished, using only what you call NCA + Linear Up Log Down?
 
 
 
 

 

It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.

— Albert Einstein, Herbert Spencer Lecture “On the Method of Theoretical Physics”, Oxford, June 10, 1933.

 

Often misquoted and simplified [sic] to:

Everything should be made as simple as possible, but not simpler.

I don't think the quote you found is the source of the sentence I described.
 

 

 

 Entities should not be multiplied unnecessarily.    – William of Ockham

 

 
 
Because it is very obvious that when we model and analyze to solve practical problems in the real world, we often know that a more accurate model exists, yet we still choose to use the "wrong" model without hesitation:
  • In pharmacokinetic modeling, even though we sometimes know that a drug follows a two-compartment model, this still does not prevent us from using the "wrong" one-compartment model when analyzing sparsely sampled experimental data.
  • When we calculate the circumference of a circle, even though we know that the more accurate value is π, that still doesn't stop people from using 3.14 or even 3 in their calculations.
  • When we calculate gravity, even though we know that differences in our position on the earth and in time have an effect, and even that Newton's laws are not exact, we insist on using F = m × 9.8.
 
 
 

 

 

Software ≠ machine, the hardware is. Trained wetware is a prerequisite.

Oh, I can't believe your car was not carefully designed and debugged by engineers before it left the factory. How else could you drive that pile of parts as a car?
 
Are we still in the Stone Age, with our tools just stones all over the floor?

 

 

I also know that there is a company named "International Business Machines".

 

Sincerely,
0521

Edited by 0521, 20 June 2022 - 02:03 PM.


#15 Helmut Schütz

Helmut Schütz

    Advanced Member

  • Members
  • PipPipPip
  • 308 posts
  • LocationVienna, Austria

Posted 20 June 2022 - 01:46 PM

Hi 0521,

 

Why don't you directly apply compartmental model analysis?

 

According to all global guidances for bioequivalence, only NCA is acceptable.

 

 

I also know that there is a company named "International Business Machines".

 

Because it started with hardware (typewriters, cash machines, mainframes, …). IBM’s OS/2 was a flop.

 

I think this conversation leads nowhere. End of discussion from my side.


 Best regards,
Helmut

https://forum.bebac.at/




