
Different results between built-in PK model and Phoenix model

Tags: NLME, built-in


#1 LLLi


Posted 17 November 2016 - 06:21 PM

Hi All,

 

I wanted to translate three built-in PK models into their corresponding Phoenix models, but the results from the built-in models and the Phoenix models are different.

 

Background: 2-compartment model, oral dosing, dose= 23158.

 

In the project there are three PK library models: A and B use PK model 11, and C uses PK model 12. There are also three PHX models corresponding to the library PK models.

 

PK library model A: PK model 11 (no Tlag) and no weighting (uniform);

PHX model A: Micro, extravascular 2-ct, additive

Result: no big difference.

 

PK library model B: PK model 11 (no Tlag), with weighting 1/(Yhat*Yhat);

PHX model B: Micro, extravascular 2-ct, multiplicative (0.1)

Results:

  Library model B            PHX model B
  V1_F  73.404213            tvV    82.5805
  K01    2.9796176           tvKa    4.91756
  K10    0.73812706          tvKe    0.689437
  K12    0.155199            tvK12   0.133572
  K21    0.1120743           tvK21   0.107067
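As a side note on the mapping above: weighting by 1/(Yhat*Yhat) in weighted least squares amounts to minimizing squared relative residuals, which is why it corresponds to a multiplicative (proportional) error model in the Phoenix setup. A small numerical sketch (illustrative numbers, not from the project, and not Phoenix internals):

```python
import numpy as np

# Illustration: 1/Yhat^2 weighting in least squares equals minimizing
# squared *relative* residuals, i.e. a proportional-error objective.
y = np.array([10.0, 5.0, 2.0, 0.8])      # observed concentrations (made up)
yhat = np.array([9.5, 5.3, 1.9, 0.9])    # model predictions (made up)

wrss = np.sum((1.0 / yhat**2) * (y - yhat) ** 2)   # 1/(Yhat*Yhat) weighting
rel_rss = np.sum(((y - yhat) / yhat) ** 2)         # squared relative residuals

assert np.isclose(wrss, rel_rss)
```

The two objectives are algebraically identical, so the fitted parameters should agree up to algorithmic differences.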
 
 

PK library model C: PK model 12 (with Tlag), with weighting 1/(Yhat*Yhat);

PHX model C: Micro, extravascular 2-ct, tlag, multiplicative (0.1)

Results:

  Library model C            PHX model C
  V1_F  82.956782            tvV    102
  K01   10.024833            tvKa    23
  K10    0.66048075          tvKe     0.7
  K12    0.12688184          tvK12    0.1
  K21    0.10477455          tvK21    0.1
  Tlag   0.077915856         tvTlag   1
(It looks like the output of PHX model C is equal to the initial estimates.)
 
Would someone help me figure this out?

 

Furthermore, when I ran the built-in PK model I had to give initial estimates; otherwise there was an execution error, "Failed to fit model" (the last model in the project). Why did this happen?

 

Please see my project attached for more detail.

 

Thank you!

LLLi

Attached Files


Edited by LLLi, 18 November 2016 - 03:02 PM.


#2 mittyright


Posted 21 November 2016 - 01:20 PM

Dear LLLi,

 

PK library model B: PK model 11 (no Tlag), with weighting 1/(Yhat*Yhat);

PHX model B: Micro, extravascular 2-ct, multiplicative (0.1)

Results:

 

The Gauss-Newton algorithm (pseudo-Newton, if we are talking about the PHX model) is very sensitive to the initial estimates.

Please look at the Initial Estimates tabs; they are different. Moreover, because of the different algorithms (RLSM vs. ML) and the numerical instability of the proposed model, the final parameters will be close but not equal even with the same initials. The output residuals are more reasonable for the PHX model, but still not good enough. Please note that the CV is very high, especially for Ka, so one could suggest changing the input function (for example, adding a Tlag, as you did in model C).

 

(It looks like the output of PHX model C is equal to the initial estimates.)

I think that because the initial estimates are far from applicable values, the algorithm cannot produce a gradient. Please try the following for model C: select WNL-generated initial parameter values and WNL bounds. With its curve-stripping procedure, WNL will calculate applicable initial estimates (and the final estimates too). You can then use these initial estimates for the WNL or the PHX model; PHX and WNL converge very well.

In a PHX model it is not possible to generate initial estimates from scratch, but you can use the dedicated tab to set them graphically.

 

Furthermore, when I ran the built-in PK model I had to give initial estimates; otherwise there was an execution error, "Failed to fit model" (the last model in the project). Why did this happen?

 

You can find the answer in the core output:

  *** ERROR 10203 *** AN ERROR OCCURRED DURING CURVE STRIPPING.
  *** ERROR 10201 *** INITIAL ESTIMATES CANNOT BE DETERMINED FOR THIS MODEL.
For some reason WNL cannot use the curve-stripping method (as I showed above, the reason could be Ka). By the way, you can strip the curves yourself and get good initials ;-)
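To illustrate manual stripping, here is a minimal method-of-residuals sketch in Python for a 2-compartment oral profile (synthetic data and illustrative values, not the project's; `strip_phase` is just a helper name):

```python
import numpy as np

# Method of residuals ("curve stripping") for a 2-compartment oral profile,
# used here only to get rough initial estimates by hand.

def strip_phase(t, c):
    """Log-linear regression: returns (coefficient, rate) of c ~ A*exp(-k*t)."""
    slope, intercept = np.polyfit(t, np.log(c), 1)
    return np.exp(intercept), -slope

# synthetic oral 2-ct profile: C = A e^-alpha*t + B e^-beta*t - (A+B) e^-Ka*t
A, B, alpha, beta, ka = 40.0, 10.0, 0.9, 0.1, 3.0
t = np.array([0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, 16, 24])
c = A*np.exp(-alpha*t) + B*np.exp(-beta*t) - (A + B)*np.exp(-ka*t)

# 1) terminal points give B and beta
B_hat, beta_hat = strip_phase(t[-4:], c[-4:])

# 2) subtract the terminal line; mid-time residuals give A and alpha
r1 = c - B_hat*np.exp(-beta_hat*t)
mid = (t >= 1) & (t <= 4)
A_hat, alpha_hat = strip_phase(t[mid], r1[mid])

# 3) subtract again; early "feathered" residuals give Ka
r2 = A_hat*np.exp(-alpha_hat*t) + B_hat*np.exp(-beta_hat*t) - c
early = t <= 0.5
_, ka_hat = strip_phase(t[early], r2[early])
```

The recovered rates are approximate because each phase is contaminated by the others, but they are good enough as initial estimates for a fitter.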

 

Hope it helps,

Mittyright



#3 Simon Davis


Posted 21 November 2016 - 01:35 PM

LLLi, I just looked at this briefly (and in the meantime it looks like Mitty has added quite a lot of useful feedback);

 

Your first WNL classic model has awful precision for the parameter estimates, and the fit on a log scale is not good for the last 3 points.

 

Model B doesn't look too bad.

 

However, I would say model C is over-parameterised; tlag is very small and hard to determine from this data, so it might be better left out of the model, in my opinion.

 

However, if I start it at, say, 0.01 (remember, the concentration at 0.083 is already quantifiable, so 1 as a starting estimate is much too late), I get something that looks a lot more reasonable (and actually a lower AIC than model B, so maybe worth keeping after all):

 

Parameter   Estimate    Units   Stderr          CV%         2.5% CI      97.5% CI     Var. Inf. factor
tvKa        10.065              0.87545988      8.6980614   7.9948565    12.135143    370.07
tvV         82.9562             2.678899        3.2292933   76.62158     89.29082     3393.7
tvKe        0.660911            0.01486506      2.2491772   0.62576055   0.69606145   0.10923
tvK12       0.127295            0.0058613138    4.6045122   0.11343513   0.14115487   0.018092
tvK21       0.105038            0.0046011955    4.3805056   0.09415785   0.11591815   0.010311
tvTlag      0.0779299           0.00041126634   0.52773882  0.076957405  0.078902395  9.1863E-05
stdev0      0.0460949           0.0088175393    19.1291     0.025244634  0.066945166
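For readers following along, the curve behind this kind of fit can be sketched in Python. The micro-to-macro conversion and the tri-exponential oral form below are the standard textbook relationships; the numbers plugged in are the estimates from the table above (the function name is illustrative):

```python
import numpy as np

# Concentration-time function for a 2-compartment oral model with lag,
# written in terms of micro constants (Ka, Ke=k10, k12, k21, V).

def c_oral_2cmt(t, dose, v, ka, k10, k12, k21, tlag=0.0):
    # macro rates: alpha*beta = k10*k21, alpha + beta = k10 + k12 + k21
    s = k10 + k12 + k21
    alpha = (s + np.sqrt(s**2 - 4*k10*k21)) / 2
    beta = (s - np.sqrt(s**2 - 4*k10*k21)) / 2
    tp = np.maximum(np.asarray(t, float) - tlag, 0.0)   # shift by lag time
    coef = ka * dose / v
    # the three partial-fraction coefficients sum to zero, so C(tlag) = 0
    return coef * ((k21 - alpha)/((ka - alpha)*(beta - alpha)) * np.exp(-alpha*tp)
                 + (k21 - beta) /((ka - beta) *(alpha - beta)) * np.exp(-beta*tp)
                 + (k21 - ka)   /((alpha - ka)*(beta - ka))    * np.exp(-ka*tp))

t = np.linspace(0, 24, 97)
c = c_oral_2cmt(t, dose=23158, v=82.9562, ka=10.065,
                k10=0.660911, k12=0.127295, k21=0.105038, tlag=0.0779299)
```

With these estimates the profile rises quickly after the short lag (Ka is large) and then declines bi-exponentially, which is consistent with the small tlag being hard to see but still identifiable.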

Simon

Attached Thumbnails

  • model_1.jpg

Edited by Simon Davis, 21 November 2016 - 01:37 PM.


#4 mittyright


Posted 21 November 2016 - 02:06 PM

Hi Simon,

 

Tlag is really very low for model C; by the way, it looks like this parameter is key for this dataset. We can even start it from 0 (from NCA) and still get stable estimates.

Please also look at the eigenvalues in model B. I didn't find good estimates (with good precision) for model B; the CV is too high whatever I do.

 

Mittyright



#5 Simon Davis


Posted 21 November 2016 - 02:42 PM

Yes, I agree, Mittyright; I wasn't expecting it to be so critical to a good fit. Initially, looking at LLLi's results, I was going to propose discarding it entirely, but after fitting it myself I would keep it. Sorry if I didn't make that clear.

 

 Simon



#6 LLLi


Posted 21 November 2016 - 09:25 PM

Thank you Mittyright and Simon!

 

Mittyright,

For an IV 2-ct model I know how to do curve stripping graphically, but I don't know how to do it for an oral 2-ct model, especially the initial estimate of Ka. Would you please provide some information?

 

Thank you! 

LLLi



#7 mittyright


Posted 21 November 2016 - 11:14 PM

Dear LLLi,

 

Here you go (see attached)

 

Mittyright

Attached Files



#8 LLLi


Posted 01 December 2016 - 06:12 PM

Hi mittyright,

 

Thank you for your project! It is amazing!

 

I still have some questions about your project and hope that you can help me.

 

1) For calculating MinLambdaTime, you used (A1:A1). I did not find that function in the guide. Would you please provide more information?

 

2) In the "Parameters" of results, there is "Smoothing_parameter_delta". What is this parameter? Would you please explain more?

 

3) What is the purpose of calculating A1 = A/Dose and A2 = B/Dose?

 

Thank you!

LLLi



#9 mittyright


Posted 02 December 2016 - 02:02 PM

Hi LLLi,

 

1) For calculating MinLambdaTime, you used (A1:A1). I did not find that function in the guide. Would you please provide more information?

 

You are right, it is not described directly in the guide, but you can use the same cell addresses as in Excel.

Please also review the "Custom functions list" table in the Data Tools guide. You'll see there are some examples with ':', for instance:

average(A1:A10) - to get the average of the values in the first column, rows 1-10.

So here I am getting the very first time (column A, row 1) in the dataset.

 

2) In the "Parameters" of results, there is "Smoothing_parameter_delta". What is this parameter? Would you please explain more?

 

It is difficult to explain briefly. Please read about the "Deconvolution through convolution" methodology in the WinNonlin User's Guide:

The convolution operation acts essentially as a low pass filter with respect to the filtering of the input information.

So you can think of this value as a kind of smoothing-level filter.

 

3) What is the purpose of calculating A1 = A/Dose and A2 = B/Dose?

 

Please read the guide:

Because the UIR assumes the response from 1 dose unit, for PK model 1 (one compartment), take V (volume) from the 1-compartment IV model output and let A = 1/V. Make units corrections if needed (see below). For PK model 8 (two compartment), take the A and B from the output and divide by the stripping dose to get A1 and A2 for the UIR.
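In other words, the scaling the guide describes is simple arithmetic; a small sketch with illustrative macro coefficients (not values from the project):

```python
# UIR scaling per the quoted guide passage (all numbers illustrative).
stripping_dose = 23158.0

# one-compartment case: the UIR coefficient is A = 1/V
V = 82.9562
A_uir = 1.0 / V

# two-compartment case (PK model 8): divide the fitted A and B
# by the stripping dose to get coefficients per unit dose
A, B = 40.0, 10.0            # illustrative macro coefficients
A1 = A / stripping_dose      # UIR coefficient for a unit dose
A2 = B / stripping_dose
```

The division simply rescales the fitted bi-exponential so it represents the response to one dose unit, which is what the UIR convention expects.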

 

BR,

Mittyright



#10 Simon Davis


Posted 02 December 2016 - 02:42 PM

Be cautious with some of these functions, Mitty, since they come from a common programming library; however, in Phoenix sheets you can't perform operations over rows, only across columns. Cell references are generally not recognised, and your (A1:A1) may be a lucky exception.

 

 Simon


Edited by Simon Davis, 02 December 2016 - 02:42 PM.


#11 mittyright


Posted 02 December 2016 - 04:51 PM

Hi Simon!

 

Thank you for the pointer!

Yes, this is a significant Data Wizard drawback; I need to find some workarounds to operate over rows.

By the way, I'm extensively using something like count(A:A) without any problem.

There are a lot of example functions with cell references in the guide's list. I try to avoid them when possible, but I have never seen any related faults in practice.

 

BR,

Mittyright



#12 rajkumar2601


Posted 16 August 2017 - 12:20 PM

Hi,

Can anyone help with the following? I am using Phoenix64 for IVIVC.

1) How do I add initial parameter estimates in PK modeling?

2) I am not getting the A and alpha values (one-compartment) or the A, B, alpha, and beta values (two-compartment) from my modeling. How do I get those values?

3) How do I fit a two-compartment oral model with macro constants? When I try to fit it, Phoenix reports that parameters are missing, although I used the same parameters as for the two-compartment micro-constant model.

Please help me in this regard.
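For reference on question 2: the macro constants follow from the micro constants through the standard two-compartment relationships. A minimal sketch with illustrative numbers (not from a Phoenix run):

```python
import math

# Standard micro-to-macro conversion for a 2-compartment model.
V, k10, k12, k21 = 80.0, 0.7, 0.13, 0.11   # illustrative micro constants

s = k10 + k12 + k21                        # alpha + beta
p = k10 * k21                              # alpha * beta
alpha = (s + math.sqrt(s*s - 4*p)) / 2
beta  = (s - math.sqrt(s*s - 4*p)) / 2

# IV-bolus macro coefficients per unit dose; the oral (extravascular)
# forms additionally carry absorption factors such as ka/(ka - alpha).
A = (alpha - k21) / (V * (alpha - beta))
B = (k21 - beta)  / (V * (alpha - beta))
```

A useful sanity check is that A + B = 1/V for the IV-bolus case, which follows directly from the two expressions above.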

 

Regards,

Rajkumar



#13 Simon Davis


Posted 17 August 2017 - 06:13 PM

Hi Rajkumar, I am on holiday this week, but this sounds very similar to the questions you asked in support Case163429.

Please look at my response there again and let me know what was unclear.






