
Most Liked Content

#4133 Dose proportionality

Posted by Helmut Schütz on 12 November 2016 - 01:06 AM

Hi LLLi,


In your reply, you mentioned "Formulas: θL′ = 1 + ln(θL)/ln(r) and θU′ = 1 + ln(θU)/ln(r). From [0.8, 1.25] and r = 10 we get [0.9031, 1.0969], and from [0.5, 2] and r = 10 [0.6990, 1.3010]".
While Smith's paper says that "the CI of Rdnm completely outside (0.8, 1.25), indicating a disproportionate increase". I am confused about the difference between the reference intervals. Which reference interval should we use to evaluate whether DP holds or not?

You are quoting the footnote b below Table 2. I guess this is just a typo. Smith used a slightly different terminology (compared to Chow/Liu and Hummel et al.). He starts from 0.80 and 1.25 (ΘL and ΘU; page 1279, first paragraph). The transformed acceptance range (he calls it “the critical region” and later “the reference interval”) is derived in Eq. (4). That’s the same one I used above. Now look at page 1282, left column, second paragraph, which reads:

The corresponding 90% CI (0.679, 0.844) fell outside the reference interval (0.903, 1.097) defined by Eq. (4) for r = 10 and ΘU = 1/ΘL = 1.25, indicating a disproportionate change in Cmax across the dose range studied.

In other words, from the original range [0.80, 1.25] he gets the transformed one [0.903, 1.097], and this is what you should use (depending, of course, on the actual r in the study). q.e.d.
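As a quick numeric check, the transformation of Eq. (4) can be sketched in Python (the interval [0.8, 1.25] and r = 10 are the values from the post above):

```python
import math

def transformed_limits(theta_l, theta_u, r):
    """Transform the acceptance limits for the slope of the power model:
    theta' = 1 + ln(theta)/ln(r), where r is the highest/lowest dose ratio."""
    return (1 + math.log(theta_l) / math.log(r),
            1 + math.log(theta_u) / math.log(r))

lo, hi = transformed_limits(0.80, 1.25, 10)
print(round(lo, 4), round(hi, 4))  # 0.9031 1.0969
```

The same call with [0.5, 2] reproduces [0.6990, 1.3010] quoted above.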


Furthermore, the paper also said that the Rdnm value of 1 would denote ideal dose-proportionality.

Correct. Let β ≡ 1 and r ∈ ℝ. Then Rdnm = r^(β−1) = r^0 = 1. Less mathematically: if the slope is exactly 1, then for any possible ratio of dose levels Rdnm will be exactly 1.


Rdnm = PK/corresponding dose?

No. r = the ratio of the highest/lowest dose and Rdnm = r^(β−1). You do not dose-normalize in this model.


For power model of Rdnm, what is Y and what is x?

In my project I used the linearized power model, which is

ln(Yj) = α + β · ln(xj),

where Y is the respective PK metric (AUC, Cmax, …) and x the dose, both at level j. Most people prefer the linearized model over the original one, which is

Yj = α · xj^β

because the latter requires nonlinear fitting. If you have a Phoenix/NLME license, go ahead with the “pure” model. Anyhow, I would not recommend that, because in a regulatory setting the former is easier to assess than the latter.

However, there is a situation which demands nonlinear fitting: A power model with an intercept, i.e.,

Yj = α + λ · xj^β.

You would need this model when dosing an endogenous compound with measurable basal levels.
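A minimal sketch of the linearized fit and the resulting Rdnm in Python, with hypothetical dose and AUC values (a real analysis would use the study data and dedicated software):

```python
import math

def fit_loglog(doses, metric):
    """Ordinary least squares on ln(Y) = alpha + beta * ln(x)."""
    lx = [math.log(d) for d in doses]
    ly = [math.log(m) for m in metric]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
            / sum((x - mx) ** 2 for x in lx))
    alpha = my - beta * mx
    return alpha, beta

doses = [10, 25, 50, 100]           # hypothetical dose levels
auc   = [12.1, 30.5, 60.2, 119.0]   # hypothetical, roughly proportional AUCs
alpha, beta = fit_loglog(doses, auc)
r = max(doses) / min(doses)
rdnm = r ** (beta - 1)              # Rdnm = r^(beta - 1); 1 = ideal DP
```

In practice one compares the CI of β against the transformed interval (or, equivalently, the CI of Rdnm against [0.8, 1.25]), as in the Smith quote above.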

  • mittyright and LLLi like this

#3959 Sorting within the columns for Plots

Posted by Linda Hughes on 12 October 2016 - 07:12 PM

Sorry, it is currently not possible to do what you are requesting. This feature request is tracked in our system as QC 9942, 'Capability to order legends': "Clients would like to be able to sort the legend. They want to order the values within the variables that are mapped as group."

Thank you for your input.
Linda Hughes
Principal Software Engineer, Certara

  • Helmut Schütz and spic like this

#2869 Time Dependent Covariates

Posted by serge guzy on 17 March 2015 - 03:11 PM

Not sure if this has been touched on in the discussion boards, but I am curious how time-dependent covariate modeling is implemented in PML. Are there special considerations for the construction of the dataset, such as adding a dummy timepoint where the covariate value changes? Or does the covariate, regardless of its value, need to be specified at each observation (DV)?

Dear Lance

In PML you have two options: the covariate statement or the fcovariate statement.

With fcovariate the covariate value is carried forward in time, while with covariate it is carried backward.


Suppose you have a covariate like weight, which is 70 kg at t = 0.

The trial is long, and let's say that after 6 months the weight is checked again and is now 75 kg.



If you put 70 for weight at t = 0 and 75 at t = 6 months, then with the fcovariate statement the program will read a weight of 70 kg from t = 0 to t = 6 months, and at t = 6 months it will read 75 kg.

If you are using the covariate statement, then at t = 0 it will read 70, but it will immediately look at the next defined value for weight, which is 75, and will use 75 until t = 6 months.


I made the request to have fcovariate as the default interface option, which to me makes more sense.


You do not need to define the covariate value on each row, only when it changes; use the fcovariate statement to make sure it takes the new value at the new time (as just explained).
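The carry-forward vs. carry-backward behavior described above can be sketched in Python (a toy lookup for illustration, not Phoenix code; the weight values are from the example):

```python
def covariate_value(t, times, values, statement="fcovariate"):
    """Mimic PML covariate handling for illustration:
    'fcovariate' carries the last defined value forward (LOCF);
    'covariate' carries the next defined value backward (NOCB)."""
    if statement == "fcovariate":
        current = values[0]
        for ti, vi in zip(times, values):
            if ti <= t:
                current = vi
        return current
    # 'covariate': use the next defined value at or after t
    for ti, vi in zip(times, values):
        if ti >= t:
            return vi
    return values[-1]

times, weights = [0, 6], [70, 75]   # weight 70 kg at t=0, 75 kg at 6 months
covariate_value(3, times, weights, "fcovariate")  # 70
covariate_value(3, times, weights, "covariate")   # 75
```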

I hope it helps.

best Regards


  • Tao Liu and NMedlicott like this

#2696 Phoenix' new license model

Posted by Christian on 12 November 2014 - 07:39 AM

It is a shame that the Floating license model is being dropped by Certara.


The new license model is based on named-user management, requiring considerable administrative effort, particularly for organizations with a larger user community where certain users need the software only occasionally.


The new model is also very inflexible, as users who have left the organization can only be de-registered twice a year and therefore block a license for that period.


Furthermore, it costs considerably more without adding any obvious value.


I therefore plead with Certara to re-introduce a model similar to the past Floating licenses! This would greatly help to manage this software.




  • harbjo and hansonm like this

#6047 Phoenix 8.3 released June 2020, Live online training

Posted by Ana Henry on 25 June 2020 - 02:33 PM

In addition, Certara University has created a set of free videos on what is new in Phoenix version 8.3. You can watch them by registering for the free course.





Ana Henry

  • Simon Davis likes this

#6027 Steady state with additional multiple dosing

Posted by Ana Henry on 16 June 2020 - 07:52 PM

Hello Veenu,
It would be best to see your project, but from your output it appears that steady state starts at 24 hours. This is probably because your steady-state variable (we cannot see from the screenshots what it looks like) has the flag for steady state at 24 hours when it should probably be at 0 hours. This is just a guess based on your screenshots.


Ana Henry
  • veenu.bala likes this

#6017 Dissolution model in PK simulation

Posted by bwendt@certara.com on 10 June 2020 - 09:17 AM

Hi Martin,


If you want to simulate or fit your PK profile using estimates from a fit of a Weibull model to your dissolution data, you can try this PML code:


deriv(Aa = -WB * AaDose)
deriv(A1 = WB * AaDose - Cl * C - Cl2 * (C - C2))
deriv(A2 = Cl2 * (C - C2))
C = A1 / V
C2 = A2 / V2
deriv(Fa = WB)
error(CEps = 20.1559)
observe(CObs = C + CEps)
fixef(MDT(freeze) = c(, 4, ))
fixef(b(freeze) = c(, 1.5, ))
fixef(Ascale = c(, 1, ))
fixef(V = c(0, 15, ))
fixef(Cl = c(0, 5, ))
fixef(V2 = c(, 8, ))
fixef(Cl2 = c(, 2, ))

Let me know if this works for you.
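For orientation, the cumulative Weibull dissolution profile such a model is built around can be sketched in Python (one common parameterization, with mean dissolution time MDT and shape b; the default values are the frozen estimates from the PML code above):

```python
import math

def weibull_dissolved(t, mdt=4.0, b=1.5):
    """Cumulative fraction dissolved under a Weibull model:
    F(t) = 1 - exp(-(t/MDT)^b)."""
    return 1.0 - math.exp(-((t / mdt) ** b))
```

At t = MDT the fraction dissolved is 1 − e⁻¹ ≈ 0.632, a handy sanity check on the fitted MDT.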



  • mjons likes this

#5996 Reset ignored with sequential PKPD

Posted by cradhakr on 14 May 2020 - 02:22 AM

Attached File: Forum_seqPKPD example.phxproj (2.56 MB)

Hi,


For some reason your Concentration was not mapped in your workflow (not sure why this happened). I recreated the model (see under "Edited workflow") and it's working fine.



Chandramouli (Mouli)

  • cphipps likes this

#5923 Initial fcntheta1 NaN error in NLME

Posted by smouksassi1 on 07 February 2020 - 10:26 AM

You need to guard against the input expression being evaluated at time zero (it divides by t, giving NaN).


Option 1: apply the input only when time is above zero; otherwise set it to zero.

input = t == 0 ? 0 : ((MAT/(6.283*CV^2*t^3))^0.5) * exp(-((MAT-t)^2)/(2*CV^2*MAT*t))
Option 2: add a small offset to t:
P1 = (MAT/(2*3.14*CV*CV*(t+0.000001)**3))
P2 = P1**0.5
P3 = (MAT-(t+0.000001))*(MAT-(t+0.000001))/(2*CV*CV*MAT*(t+0.000001))
input = 1*P2*exp(-P3) # dose amount assumed 1
See the attachment for how you can also output the cumulative fraction absorbed.
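For a quick sanity check outside Phoenix, option 1's guard can be sketched in Python (dose amount assumed to be 1, as in option 2):

```python
import math

def ig_input(t, mat, cv):
    """Inverse-Gaussian input rate, guarded so t = 0 returns 0
    instead of producing NaN from the division by t."""
    if t <= 0:
        return 0.0
    p1 = (mat / (2 * math.pi * cv ** 2 * t ** 3)) ** 0.5
    p2 = (mat - t) ** 2 / (2 * cv ** 2 * mat * t)
    return p1 * math.exp(-p2)
```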


Attached Files

  • BP1968 likes this

#5890 Enterohepatic Recycling - clearance problem

Posted by Xiuqing Gao on 19 December 2019 - 09:39 PM

Hi Everyone


I am working on a small molecule with enterohepatic recycling in a pharmacokinetic study after extravascular administration. The model fit and CV% look good for all pharmacokinetic parameters except the central-compartment clearance, which always comes out negative!


I have also tried a transition model, but every model I tried gave me a negative value for the central-compartment clearance (the graphical fit and CV% are always great).


Can anyone explain this issue?


Please check my models and results; I am looking forward to your response! Thank you!




Best regards,

Xiuqing Gao

Attached Files

  • Xiuqing Gao likes this

#5871 NCA analysis of sparse data - how to?

Posted by smouksassi1 on 28 November 2019 - 10:53 AM

Hi Daniel,
Reformulating your problem: you have sparse data that does not allow an individual NCA, but you are interested in determining the PK parameters of interest (e.g., AUC).

AUC can be derived using pop PK, so don't worry about which method gets you the PK parameter you want.

A pop-PK approach is usually the go-to solution. If you are using simple models you can even derive Cmax and AUC using formulas, or of course you can simulate if your model does not allow closed-form solutions.
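As an example of "formulas for simple models": for a one-compartment model with first-order absorption (bioavailability folded into the parameters), AUC and Cmax have closed forms. The parameter values below are purely hypothetical:

```python
import math

dose, cl, v, ka = 100.0, 5.0, 50.0, 1.2   # hypothetical: mg, L/h, L, 1/h
ke = cl / v                               # elimination rate constant
auc = dose / cl                           # AUC(0-inf) = Dose / CL
tmax = math.log(ka / ke) / (ka - ke)      # time of the peak
cmax = (dose / v) * (ka / (ka - ke)) * (math.exp(-ke * tmax) - math.exp(-ka * tmax))
```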


You do not necessarily need to run an NCA; there is model-based integration that can get you AUC, for example.


When you request a table simulating at a grid of new timepoints, the simulated concentrations will be based on the individual EBEs, so you get that automatically: when you fit a model, the individual parameter outputs are all based on EBEs.


Your question
Which procedure is better - individual simulation + NCA or PK parameters based on EBEs?
is therefore not applicable.



  • Simon Davis likes this

#5870 NCA analysis of sparse data - how to?

Posted by MojDa01 on 27 November 2019 - 10:59 AM

Dear All,


I have come across a (maybe quite general) issue, namely how to best derive NCA parameters when I have only sparse data available.

For instance I have densely sampled drug concentrations available in adults and sparse data in pediatric patients.


1.) My initial idea was to build a pop-PK model using all available data from the adult and pediatric patients, and then to use individual simulations of the pediatric patients to derive the PK parameters via NCA from those simulations. To be consistent in the PK parameter derivation via NCA across adults and pediatrics, I could also use individual simulations for both alike.

- Does that procedure make sense to you?

- How do I get the individual simulations into the NCA analysis within Phoenix?


2.) I have now also come across a procedure for obtaining PK parameters from sparsely sampled PK data via empirical Bayes estimates (EBEs).

- How does the procedure using EBEs work?

- How do I perform the calculations using EBEs in Phoenix?

- Which procedure is better: individual simulation + NCA, or PK parameters based on EBEs?


Thank you very much in advance and

best regards,


  • Simon Davis likes this

#5864 “leave-one-out” geometric mean

Posted by Simon Davis on 19 November 2019 - 02:09 PM

Ded Moroz has come early....

The solution coded in the Data Wizard will work with NAs; what do you think, Helmut?


(Note: it is feasible that there could be more than one PK metric per subject (i.e., replicate designs); that's why the geometric mean is calculated per subject too, even if it does not seem to be necessary in this case.)
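A minimal sketch of the leave-one-out geometric mean in Python (None standing in for NA; the function name is mine, not a Phoenix one):

```python
import math

def loo_geomean(values):
    """For each position i, return the geometric mean of all
    other non-missing (non-None) values."""
    out = []
    for i in range(len(values)):
        rest = [v for j, v in enumerate(values) if j != i and v is not None]
        out.append(math.exp(sum(math.log(v) for v in rest) / len(rest)))
    return out
```

For example, `loo_geomean([2.0, 8.0, 4.0])` gives the geometric mean of the other two values at each position, and a None entry is simply skipped.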




Attached Files

  • bwendt@certara.com likes this

#5855 Analyzing results & output - AIC and BIC values

Posted by cradhakr on 15 November 2019 - 03:49 PM

After a model has been run/executed and the folder is refreshed, Pirana will show the main results of the run in the main overview. It will show the OFV, the difference in OFV from the reference model (if specified), the number of significant digits, and some information about the estimation, i.e.:


S           means a successful minimization (as reported by NONMEM)

R           means estimation ended with rounding errors

C           means a successful covariance step

M           means an unsuccessful covariance step due to matrix singularity

B           means a boundary problem was reported by NONMEM


Pirana can also show the AIC and BIC values for the model, although you will have to instruct Pirana explicitly to calculate them for finished runs.


Right click on the model (Colored yellow) >> Model >> Compute AIC & BIC



Attached File  Pirana_AIC.png   507.88KB   0 downloads


This will open Compute AIC & BIC >> hit Compute AIC/BIC


Attached File  AIC_BIC.jpg   76.17KB   0 downloads


Pirana will calculate these values, and you should see the computed results


Attached File  Final_AICBIC.png   181.59KB   0 downloads

  • Simon Davis likes this

#5837 PML School teams up with Certara University

Posted by bwendt@certara.com on 28 October 2019 - 08:25 AM

I would like to announce that PML School is joining forces with Certara University. Future PML School sessions will be part of their Phoenix Express Webinar series. This allows us to focus on preparing new topics on the basis of modeling issues reported to Support that are interesting to this community of Phoenix NLME users. We will continue to upload content and links of new webinar sessions to this forum and entertain any questions you may have.


  • Simon Davis likes this

#5835 Phoenix 8.2 is now certified for Windows Server 2019

Posted by bwendt@certara.com on 25 October 2019 - 10:14 AM

Validation of Phoenix 8.2 for use with Windows Server 2019 has been successfully completed, and Phoenix 8.2 is now certified for use with Windows Server 2019.

  • Simon Davis likes this

#5810 Inflating clearance %CV for monte carlo simulation of off-diagonal omega matrix

Posted by smouksassi1 on 28 September 2019 - 03:42 AM

You can keep the correlation matrix as is and inflate the standard-deviation part:


Use R or any other software to compute your "new" var-cov matrix based on:

inflationfactor <- 1.15

mu <- c(0, 0, 0)  # eta means

# this is your correlation matrix of omega
corMat <- matrix(c(1, 0.78, 0.23,
                   0.78, 1, 0.27,
                   0.23, 0.27, 1),
                 ncol = 3)

# your diagonal square roots (standard deviations), inflated
stddev <- c(1.23, 0.92, 1.32) * inflationfactor

covMat <- stddev %*% t(stddev) * corMat


Plug the new covMat into your PML code.
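The same computation sketched in Python, without numpy, for the 3×3 example above (outer product of the inflated SDs, scaled elementwise by the correlations):

```python
inflation = 1.15
cor = [[1.0, 0.78, 0.23],
       [0.78, 1.0, 0.27],
       [0.23, 0.27, 1.0]]
sd = [x * inflation for x in (1.23, 0.92, 1.32)]  # inflated SDs

# covMat[i][j] = sd[i] * sd[j] * cor[i][j]
cov = [[sd[i] * sd[j] * cor[i][j] for j in range(3)] for i in range(3)]
```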

  • Simon Davis likes this

#5790 how to add a column in data wizard

Posted by Simon Davis on 05 September 2019 - 04:53 AM

Shannen, it's still unclear to me what you want to do with this new column. Did you read the help file I pointed you to? That showed a transformation from a baseline.


I attach an example in v8.1 (although v8.2 was released in July, so I recommend being on the latest version if possible) where no reference is made to a prior column and two new columns are created. Is that what you are asking?


The attached project shows another option, the custom transformation, which allows you to create an empty column, one containing any text, or indeed anything you can write a formula for (see the function list on the right).



Attached Files

  • shannen likes this

#5783 pirana.exe starting from somewhere in the %TEMP% folder

Posted by evasive on 02 September 2019 - 09:48 AM



This is EXACTLY what I needed.





Best regards,


  • Simon Davis likes this

#5781 pirana.exe starting from somewhere in the %TEMP% folder

Posted by Keith Nieforth on 28 August 2019 - 04:25 PM



This can be accomplished by setting the environment variables PAR_GLOBAL_TEMP and/or PAR_TEMP to a folder of your preference.


Please see this thread: https://www.perlmonks.org/?node_id=709846. This seems to do exactly what you want (we also use PAR::Packer to compile the Pirana executable on Windows).


Please note that you can't set the PAR_GLOBAL_TEMP folder to the same folder as the one you are running Pirana from (probably C:\Program Files (x86)), since that would probably overwrite pirana.exe.


On a windows machine from the command line you can run:





This will create the specified folder (if it doesn't exist yet) and run Pirana from that folder instead of from the TEMP folder. Of course you can also set the environment variables from the system settings window and then run Pirana from a shortcut or the start menu; it doesn't have to be run from the console.


Best Regards,



  • Simon Davis likes this