Simultaneous fit of oral and IV data

#1 Marc Rumpler (Newbie)

Posted 20 January 2011 - 12:42 AM

I'm trying to perform a simultaneous fit of oral and IV data on either classic WNL or Phoenix. I realize there is no canned model for this. Is my only option to write the code or edit graphically, and if so does anyone have suggestions on how I would do either of those? I also think there is some of the "old" ASCII text for this approach in the back of Gabrielsson and Weiner's book, but I cannot get my hands on a copy. Also, would the data input file (attached) be appropriate for this model? Thank you -Marc [file name=PK_question.xlsx size=20835]/extranet/media/kunena/attachments/legacy/files/PK_question.xlsx[/file]



#2 Helmut Schütz (Advanced Member, Vienna, Austria)

Posted 20 January 2011 - 12:26 PM

Hi Marc!

I realize there is no canned model for this. Is my only option to write the code or edit graphically […]

Yes.

Also, would the data input file (attached) be appropriate for this model?

Yes, but some remarks:
  • Don't use data reported from bioanalytics 'as is'. The 'full precision' is a delusion. Remember that ±15% for accuracy and precision are acceptable in bioanalytics (±20% at the LLOQ). Even 5% is common for 'good' methods. Always use values rounded to no more than three significant digits (which already implies less than 1% error!).
  • In all horses a three-compartment model (w=1/Yhat²) is by far better than a two-compartment model (iv), based on AIC and visual inspection of the fits and residuals.
  • Your oral dataset will not work. The first sample is already Cmax in 50% of cases; even for the remaining 50%, Cmax is already the second value. There is no way to seriously model the absorption phase.
  • Since your absorption phase is not well defined, forget about attempting simultaneous modeling.
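Rounding to a fixed number of significant digits (rather than decimal places) is easy to script before importing the data; a minimal Python sketch of the three-significant-digit rule (illustrative only, not part of Phoenix):

```python
import math

def round_sig(x, sig=3):
    """Round x to `sig` significant digits (0 stays 0)."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))
```

For example, `round_sig(123.456)` gives 123.0 and `round_sig(0.0123456)` gives 0.0123, regardless of the magnitude of the value.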
What is the target of your study? If you only want to estimate absolute BA, I would suggest retreating to NCA.

P.S.: The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data. John W. Tukey
 Best regards,
Helmut
https://forum.bebac.at/

#3 Simon Davis (Administrator)

Posted 20 January 2011 - 05:02 PM

Hi Marc, Helmut,
Actually this question was asked on PharmPK recently and Nathan Teuscher gave an example on his blog to do this with NONMEM;

http://blog.learnpkp...simultaneously/

I took the sample set he provided there and created a Phoenix project containing an equivalent NLME model. I hope the steps below show that it is quite easy to convert a built-in model to do this in NLME. Here are the steps I would suggest for modifying an existing model to add another dosepoint:

1) Select Parameterization of Clearance.
2) Choose Absorption of Extravascular (leave the number of compartments at 1 if using Nathan's example iv-oral-sample.csv).
3) Now select "Edit as Graphical" and, under the Abs compartment options in the Structural tab, check the box for bioavailability.
4) Now select "Edit as Textual" and simply add a second dosepoint line for A1:

test(){
deriv(A1 = - (Cl * C) + (Aa * Ka))
urinecpt(A0 = (Cl * C))
deriv(Aa = - (Aa * Ka))
C = A1 / V
dosepoint(Aa, bioavail = (F))
dosepoint(A1)
error(CEps = 0.2)
observe(CObs = C * (1 + CEps))
stparm(V = tvV * exp(nV))
stparm(Cl = tvCl * exp(nCl))
stparm(Ka = tvKa * exp(nKa))
stparm(F = tvF * exp(nF))

fixef(tvF = c(, 0.5, ))
fixef(tvV = c(, 6.2, ))
fixef(tvCl = c(, 11, ))
fixef(tvKa = c(, 12.5, ))
ranef(diag(nV, nKa, nCl, nF) = c(0.2, 0.2, 0.2, 0.2))
}

5) Set up your input mappings (note I made a minor modification to the data in my workflow by creating separate columns for IV and Oral doses, to keep the code simple):
Sort : ID
Aa : ORALamt
Aa Rate :
A1 : IVamt
A1 Rate :
Time : TIME
CObs : DV

6) Optionally use the Initial Estimates tab (see screenshot). [file name=61bioavail_ORAL_IV.phxproj size=686297]/extranet/media/kunena/attachments/legacy/files/61bioavail_ORAL_IV.phxproj[/file] oral_IV_simultaneous.jpg
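The minor data modification mentioned above (separate IV and Oral dose columns) takes only a few lines of preprocessing; a sketch in Python/pandas, assuming hypothetical column names AMT and ROUTE for the combined dose column and the route flag (Nathan's exact csv layout is not shown here):

```python
import pandas as pd

# Hypothetical combined dataset: one AMT column plus a ROUTE flag
df = pd.DataFrame({
    "ID":    [1, 1, 1, 1],
    "TIME":  [0.0, 1.0, 0.0, 1.0],
    "ROUTE": ["IV", "IV", "ORAL", "ORAL"],
    "AMT":   [100.0, None, 100.0, None],
})

# Split the single dose column into route-specific columns,
# so that Aa maps to ORALamt and A1 maps to IVamt in the model
df["IVamt"] = df["AMT"].where(df["ROUTE"] == "IV")
df["ORALamt"] = df["AMT"].where(df["ROUTE"] == "ORAL")
```

`Series.where` keeps the value where the condition holds and leaves a missing value elsewhere, which is exactly what the two dosepoint mappings expect.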



#4 Marc Rumpler (Newbie)

Posted 20 January 2011 - 05:17 PM

Helmut,

Thank you so much for your prompt and thorough help. I appreciate your input. Yes, I do want absolute BA, but the ultimate goal of the simultaneous fit is to improve the reliability of my parameter estimates. I'll work with the steps you have provided and report back soon.

I will also try the NCA approach for comparison, given the "lack of" an absorption phase.

Cheers,

Marc



#5 Helmut Schütz (Advanced Member, Vienna, Austria)

Posted 20 January 2011 - 06:01 PM

Hi Marc!

Yes, I do want absolute BA, but the ultimate goal of the simultaneous fit is to provide and improve the reliabilty of my parameter estimates.

Reasonable. On the other hand, any simultaneous fit assumes a lack of inter-occasion variability in the PK parameters – which is unlikely. From a theoretical point of view I would expect a true improvement only if you work with a stable isotope, i.e., really administer po and labeled iv simultaneously. ;-) Then you could set the CL estimated from iv as a constant in the po model.

I will also try and compare the NCA approach due to the "lack of" absorption phase.

The fits of your oral data 'look' OK, but it is not only the absorption phase that is poorly fitted (scary SEs, in both the 2- and 3-compartment models). If you want to come up with a value of fabs, the lesser of two evils is NCA.
 Best regards,
Helmut
https://forum.bebac.at/

#6 Helmut Schütz (Advanced Member, Vienna, Austria)

Posted 20 January 2011 - 06:29 PM

Hi Simon,

yes, I've seen the posts of both of you at David's list. ;-)

To be honest, what makes me reluctant to routinely use PHX's new modeling engine is the lack of hard-coded secondary parameters for even the simplest models. If I select the classical WNL engine I get all this nice stuff (depending on whether the model is parameterized in rate constants or clearances, the respective other parameters + AUC, Cmax, MRT, and so on). In PHX I get the primary estimates. Full stop. Not even a half-life.

E.g., for a two-compartment infusion (I would say the most common model in Phase I) a lot of formulas need to be entered. In my experience there are many users 'in the wild' who don't have all these fancy PK textbooks on their bookshelf (well, at least they have the PDF manual)... It would be nice to have the secondary parameters back on board (why did they have to walk the plank?), or at least – somewhere – a codebase.
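In the meantime, the textbook conversions from micro constants to the usual secondary parameters can be scripted once and reused; a minimal Python sketch for a two-compartment model (standard formulas from the PK literature, not Phoenix code):

```python
import math

def secondary_params(V1, K10, K12, K21):
    """Standard two-compartment secondary parameters from micro constants.

    alpha and beta are the roots of x^2 - (K10+K12+K21)x + K10*K21 = 0.
    """
    s = K10 + K12 + K21            # alpha + beta
    p = K10 * K21                  # alpha * beta
    disc = math.sqrt(s * s - 4.0 * p)
    alpha = (s + disc) / 2.0
    beta = (s - disc) / 2.0
    return {
        "alpha": alpha,
        "beta": beta,
        "t_half_alpha": math.log(2.0) / alpha,
        "t_half_beta": math.log(2.0) / beta,
        "CL": K10 * V1,                    # clearance
        "Vss": V1 * (1.0 + K12 / K21),     # steady-state volume
    }
```

Feeding in a set of primary estimates (V1, K10, K12, K21) returns the hybrid rate constants, both half-lives, clearance, and Vss in one call.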
 Best regards,
Helmut
https://forum.bebac.at/

#7 Emily Colby (Advanced Member)

Posted 20 January 2011 - 08:03 PM

Dear Marc,

 

I see that there have already been several responses, but I wanted to send you this project in case you decide to do IV/PO modeling simultaneously in the future. I've attached a Phoenix project with a graphical IV/PO model. An arbitrary dose of 1 was assumed for IV and PO. A reset column was added to the dataset so that the same IV/PO model can be fitted to each subject. Obviously, the fits are bad for reasons that Helmut mentioned, but the model is there for you to have in case you ever need it.

 

I selected a 3-compartment, Extravascular built-in model, and then clicked "Edit as Graphical". Next, I clicked on the central compartment and changed the number of dosepoints to 1 in the bottom panel. I set up the mappings in the Main section, entered initial estimates, and executed the model.

 

As Helmut suggested, a population approach might be better if one wants to account for interoccasion variability. That topic is discussed in our Population Modeling Methodology using Phoenix NLME classes. This year, our hands-on exercises include an example of a crossover study with two extravascular treatments, occasion and treatment effects on Ka, and an age effect on Cl.

 

We also have the Deconvolution tool for characterizing absorption, available in the NCA and Toolbox menu in Phoenix WinNonlin, and discussed in Intro to Phoenix WinNonlin and IVIVC courses.

 

Best regards,

Emily [file name=IVPO.phxproj size=869095]http://www.pharsight.com/extranet/media/kunena/attachments/legacy/files/IVPO.phxproj[/file]



#8 Marc Rumpler (Newbie)

Posted 21 January 2011 - 03:37 AM

Emily,

Thank you for your help. However, I'm unable to extract the project files because they are password protected.

-Marc



#9 Emily Colby (Advanced Member)

Posted 21 January 2011 - 04:03 PM

Dear Marc,

I noticed the same problem when I downloaded your .xlsx file. For some reason, I think files are downloaded from this site as .zip files. If you rename them to have the correct file extension (in the case of my project, .phxproj), they will open just fine in the program they're designed to open in.

Emily

Emily, Marc – interestingly, I am using Firefox on this site and I've never had any problems with the renaming of the attachments. I just checked by downloading both Emily's and Marc's attachments again, and it's an IE8 issue (or feature!). Can you please confirm which browser you are using so I can follow this up with our administrator?
Simon.


#10 Simon Davis (Administrator)

Posted 22 January 2011 - 12:29 PM

Helmut wrote:

To be honest what makes me reluctant routinely using PHX's new modeling engine is the lack of hard-coded secondary parameters for even the most simple models. If I select the classical WNL engine I get all this nice stuff (depending whether parameterized in rate constants or clearances the respective other parameters + AUC, Cmax, MRT, blahblah). In PHX I get the primary estimates. Full stop. Not even a half-life.


That point is definitely acknowledged... and in 6.2, due shortly, all the corresponding WNL classical models will have secondary parameters pre-filled when set in the Phoenix model engine. One will need to select the model via 'Set WNL model',

and the user can add additional secondary parameters if desired, as before.

Have a good weekend all.
Simon.

Attached Thumbnails

  • set_wnl_model2.jpg


#11 Helmut Schütz (Advanced Member, Vienna, Austria)

Posted 22 January 2011 - 12:52 PM

Hi Simon,

 

very nice!

 

BTW (going off-topic), I never figured out how to perform IRLS (e.g., w=1/Yhat, 1/Yhat²) in the new engine. Hints?


 Best regards,
Helmut
https://forum.bebac.at/

#12 Simon Davis (Administrator)

Posted 24 January 2011 - 07:59 AM

Helmut – the Naive Pooled engine in the Phoenix model object uses maximum likelihood estimation, so it's not a least-squares approach. Weighting is specified by the residual error model; in the training manual there is a list translating WNL model weighting → PHX model residual error models that looks something like this:

• Additive (Uniform weighting in classical WinNonlin): CObs = C + CEps
(where CObs is the observed concentration, C is the predicted concentration, and CEps is the residual error)
• Multiplicative (1/(Yhat*Yhat) in WinNonlin): CObs = C*(1 + CEps)
• Power ((Yhat)^(-2*power) in WinNonlin): CObs = C + C^power * CEps
• Special case: Power = 0.5 (1/Yhat in WNL, Poisson): CObs = C + C^0.5 * CEps
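The correspondence between the two notations is easy to check numerically: the weighted squared residual under WNL's 1/Yhat² scheme is exactly the squared relative residual implied by the multiplicative error model. A small Python sketch (illustrative only):

```python
# WNL's 1/Yhat^2 weighting contributes (CObs - C)^2 / C^2 to the objective;
# the multiplicative model CObs = C*(1 + CEps) implies CEps = (CObs - C)/C,
# whose square is the same term.
def wnl_weighted_sq(cobs, c):
    """Contribution of one point to the 1/Yhat^2 weighted SSQ."""
    return (cobs - c) ** 2 / c ** 2

def multiplicative_eps(cobs, c):
    """Residual implied by the multiplicative model CObs = C*(1 + CEps)."""
    return (cobs - c) / c

c, cobs = 10.0, 11.5
assert abs(wnl_weighted_sq(cobs, c) - multiplicative_eps(cobs, c) ** 2) < 1e-12
```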

Is that what you were looking for?

Simon.

#13 Helmut Schütz (Advanced Member, Vienna, Austria)

Posted 24 January 2011 - 12:51 PM

Hi Simon!

Helmut - the Naive Pooled engine in the Phoenix model object uses maximum likelihood estimation; so it's not a least squares approach.

Yes, I know – but I find the differing estimates irritating.

Have a look at Marc's data (horse 1, 2-compartment micro; WNL classical w=1/Yhat², PHX Multiplicative CObs=C*(1+CEps); dose arbitrarily set to 1e+6):
Parameter   WNL      PHX
V1          45.79    45.56
K10         0.9546   0.9470
K12         0.2222   0.1792
K21         0.4537   0.4242

It's not possible to compare the fits based on the AIC (it is computed differently in the two engines), and the SSQ is not given in PHX's output. Which one is 'better'? WNL: 2.16e+8, PHX: 2.18e+8...
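One way to put the two fits on equal footing is to predict concentrations from each parameter set with the same equation and compute the same weighted SSQ against the raw observations. A sketch in Python (the raw horse data are only in the attachment, so just the prediction and objective steps are shown; standard biexponential bolus equation assumed):

```python
import math

def conc_2cmt_bolus(t, dose, V1, K10, K12, K21):
    """C(t) for a 2-compartment IV bolus, micro-constant parameterization."""
    s, p = K10 + K12 + K21, K10 * K21
    disc = math.sqrt(s * s - 4.0 * p)
    alpha, beta = (s + disc) / 2.0, (s - disc) / 2.0
    A = (alpha - K21) / (alpha - beta)   # coefficient of the alpha phase
    B = (K21 - beta) / (alpha - beta)    # coefficient of the beta phase
    return (dose / V1) * (A * math.exp(-alpha * t) + B * math.exp(-beta * t))

def weighted_ssq(obs, pred):
    """WNL-style 1/Yhat^2 weighted sum of squares over paired lists."""
    return sum((y - yh) ** 2 / yh ** 2 for y, yh in zip(obs, pred))
```

Applying `weighted_ssq` to the observed concentrations against `conc_2cmt_bolus` predictions from the WNL and PHX parameter sets gives two objective values computed the same way, so the comparison is apples to apples.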
 Best regards,
Helmut
https://forum.bebac.at/

#14 Simon Davis (Administrator)

Posted 24 January 2011 - 01:34 PM

Helmut, this may be a reminder for you; but I thought it useful to put it out for the general community;

Generally, the WNL Classic engine and the Phoenix Model Object Naïve Pooled engine should yield very similar results, that is, when the fits are good (the standard errors are small, or the confidence intervals around the estimates are narrow). They won't yield precisely the same values, but during informal testing they were within half a percent.

The PHX naive pooled results are true maximum likelihood estimates, whereas the WNL classic results are based on an iteratively reweighted least squares algorithm that usually comes close to the maximum likelihood solution when the fits are good, but may differ significantly for poor fits.

The model-fitting method for the PHX naive pooled engine (which is what Phoenix uses for single-subject models) is BFGS (Broyden–Fletcher–Goldfarb–Shanno), a quasi-Newton method that maximizes the log-likelihood function.

In contrast, the Gauss–Newton method in the WinNonlin Classic model is quite different: it solves a sequence of iteratively reweighted nonlinear least squares problems by Gauss–Newton, where the current iteration is computed using weights defined by the residuals of the previous iteration. Usually the results are quite similar to those of the log-likelihood optimization, but, depending on the residual error model, not necessarily exactly the same. One big disadvantage of iteratively reweighted least squares is that it cannot fit parameters such as an exponent in the residual error model or a mixing ratio in a proportional/additive error model, whereas the LL method has no problem with this.
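The iteratively reweighted Gauss–Newton scheme described above can be sketched in a few lines of Python/NumPy for a simple monoexponential model (an illustrative toy, not the WNL implementation; the 1/Yhat² weights are recomputed from the previous iteration's predictions):

```python
import numpy as np

def irls_monoexp(t, y, A=80.0, k=0.2, n_iter=100):
    """Fit y ~ A*exp(-k*t) by iteratively reweighted Gauss-Newton, w = 1/yhat^2."""
    for _ in range(n_iter):
        e = np.exp(-k * t)
        yhat = A * e
        w = 1.0 / yhat ** 2                    # weights from previous predictions
        J = np.column_stack([e, -A * t * e])   # Jacobian wrt (A, k)
        r = y - yhat
        # Weighted normal equations: (J' W J) step = J' W r
        JW = J * w[:, None]
        step = np.linalg.solve(JW.T @ J, JW.T @ r)
        A, k = A + step[0], k + step[1]
        if np.max(np.abs(step)) < 1e-10:       # converged
            break
    return A, k

# With exact (noise-free) data, the true parameters are recovered
# regardless of the weighting scheme.
t = np.linspace(0.25, 10.0, 20)
y = 100.0 * np.exp(-0.3 * t)
A_hat, k_hat = irls_monoexp(t, y)
```

With noisy data, the solution depends on the weights, which is exactly why the IRLS and maximum-likelihood answers can drift apart when the fit is poor.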

So in short - the theory is that Phoenix Model estimates should be 'better'; but if you want to have a more involved theory discussion on this then perhaps you can start a new thread on this specific topic and I can ask the authors of the improved Phoenix algorithm to present some more support for this summary if you'd like ?

Simon

#15 Marc Rumpler (Newbie)

Posted 25 January 2011 - 02:28 AM

Simon,

I was indeed using IE8.

Thank you,

Marc



#16 Helmut Schütz (Advanced Member, Vienna, Austria)

Posted 26 January 2011 - 11:41 AM

Hi Simon,

 

Thanks for the answer. I'm a little short of time with the workshop in Mumbai this week, but I will definitely come back to the topic later.


 Best regards,
Helmut
https://forum.bebac.at/

#17 raghava choudary (Advanced Member)

Posted 28 January 2011 - 09:06 AM

Dear Simon,

 

The following code, based on the one you posted, was used to fit a two-compartment model to IV and oral data simultaneously (Cl, C, F, Ka, V2 and Cl2).

 

The Ka value from the FO method was found to be 13, whereas the reference value is ~1 (NONMEM). Can you guide me in this regard?

Please find the attached Phoenix NLME project: [file name=IVPO_2_comp_.phxproj size=558022]http://pharsight.com/extranet/media/kunena/attachments/legacy/files/IVPO_2_comp_.phxproj[/file]

 

 

test(){
    deriv(A1 = - (Cl * C) + (Aa * Ka) - (Cl2 * (C - C2)))
    urinecpt(A0 = (Cl * C))
    deriv(Aa = - (Aa * Ka))
    deriv(A2 = (Cl2 * (C - C2)))
    C = A1 / V
    dosepoint(Aa, bioavail = (F))
    dosepoint(A1)
    C2 = A2 / V2
    error(CEps = 0.2)
    observe(CObs = C + CEps)
    stparm(V = (tvV) + nV)
    stparm(Cl = (tvCl) + nCl)
    stparm(Ka = (tvKa) + nKa)
    stparm(V2 = (tvV2) + nV2)
    stparm(Cl2 = (tvCl2) + nCl2)
    stparm(F = (tvF) + nF)
    fixef(tvF = c(, 1, ))
    fixef(tvV = c(, 1, ))
    fixef(tvCl = c(, 1, ))
    fixef(tvKa = c(, 1, ))
    fixef(tvV2 = c(, 1, ))
    fixef(tvCl2 = c(, 1, ))
    ranef(diag(nCl, nV2, nCl2, nV, nKa, nF) = c(1, 1, 1, 1, 1, 1))
}

 

Regards,

 

Raghav



#18 Simon Davis (Administrator)

Posted 28 January 2011 - 10:12 AM

Raghav,

Note that it is possible to edit your posts, so if you've forgotten something, please just correct the previous post rather than posting again. I retained your last post, but neither had the data attached. If you would like people to comment on this, I suggest you attach the PHXPROJ file instead, so they can see the mappings, ensure like is being compared with like, etc. Please try to attach the file to your previous post again and I'll check back in a few hours.

 

If you can't manage that, please just email it to me and/or support and we'll attach it on your behalf.

 

   Thanks, Simon



#19 raghava choudary (Advanced Member)

Posted 28 January 2011 - 01:58 PM

Simon,

 

Please find my post edited and this time with attachment.

 

Regards,

 

Raghav



#20 Simon Davis (Administrator)

Posted 28 January 2011 - 11:02 PM

Hi Raghav, thanks for providing your project – it really makes it much easier to track through the problems. Out of interest, have you run the NONMEM model you were comparing NLME to? There are a few things I can note in your project:

1) A minor data-handling issue: if you click the MDV option on the Input Options tab, you can set the pre-dose value from ZERO to missing, which I'd recommend – otherwise, e.g. for the IV dosing, you're trying to force the fit through ZERO at time 0.
2) With a relatively small amount of data I would probably start with Naive Pooled as a run option (tvKa 1.43715). I've built a sub-workflow where I first created a descriptive stats object to get a better estimate of the standard deviation (200 vs. 0.2 in your model).
3) This means your additive error model will now fit and gives an estimate of Ka of 1.16226.
4) Now we can review the fit, e.g. Pop_IWRES_vs_IPRED.jpg, and see that perhaps a multiplicative error model is better.

Lastly, I've added a model comparator so you can see how I've developed this model at each step. It would be interesting if you could confirm the NONMEM model you used as reference.

Simon. [file name=2935323_IVPOraghav.phxproj size=2205897]/extranet/media/kunena/attachments/legacy/files/2935323_IVPOraghav.phxproj[/file]





