# Most Liked Content

### #4133 Dose proportionality

Posted by on 12 November 2016 - 01:06 AM

Hi LLLi,

In your reply, you mentioned "Formulas: θ′L = 1 + ln(θL)/ln(r) and θ′U = 1 + ln(θU)/ln(r). From [0.8, 1.25] and r = 10 we get [0.9031, 1.0969], and from [0.5, 2] and r = 10 we get [0.6990, 1.3010]".
While Smith's paper says that "the CI of Rdnm completely outside (0.8, 1.25), indicating a disproportionate increase". I am confused about the difference between the reference intervals. Which reference interval should we use to evaluate whether DP holds or not?

You are quoting the footnote b below Table 2. I guess this is just a typo. Smith used a slightly different terminology (compared to Chow/Liu and Hummel et al.). He starts from 0.80 and 1.25 (ΘL and ΘU; page 1279, first paragraph). The transformed acceptance range (he calls it “the critical region” and later “the reference interval”) is derived in Eq. (4). That’s the same one I used above. Now look at page 1282, left column, second paragraph, which reads:

The corresponding 90% CI (0.679, 0.844) fell outside the reference interval (0.903, 1.097) defined by Eq. (4) for r = 10 and ΘU = 1/ΘL = 1.25, indicating a disproportionate change in Cmax across the dose range studied.

In other words, from the original range [0.80, 1.25] he derives the transformed one [0.903, 1.097], and this is what you should use (depending, of course, on the actual r in your study). q.e.d.
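The transformation of Eq. (4) is easy to verify numerically; a minimal Python sketch (the function name is mine):

```python
import math

def transformed_limits(theta_lo, theta_hi, r):
    """Transform the acceptance range for the slope-based dose-proportionality
    criterion: theta' = 1 + ln(theta) / ln(r), where r is the ratio of the
    highest to the lowest dose."""
    return (1 + math.log(theta_lo) / math.log(r),
            1 + math.log(theta_hi) / math.log(r))

lo, hi = transformed_limits(0.80, 1.25, r=10)
# For r = 10 this reproduces the interval (0.9031, 1.0969) quoted above.
```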

Furthermore, the paper also said that the Rdnm value of 1 would denote ideal dose-proportionality.

Correct. Let β ≡ 1 and r ∈ ℝ⁺. Then Rdnm = r^(β−1) = r^0 = 1. Less mathematically: if the slope is exactly 1, then for any possible ratio of dose levels Rdnm will be exactly 1.

Rdnm = PK/corresponding dose?

No. r is the ratio of the highest to the lowest dose and Rdnm = r^(β−1). You do not dose-normalize in this model.
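As a quick numeric illustration (the function name is mine):

```python
def rdnm(beta, r):
    """Dose-normalized ratio predicted by the power model: Rdnm = r**(beta - 1),
    where r is the ratio of the highest to the lowest dose."""
    return r ** (beta - 1)

rdnm(1.0, 10)   # slope exactly 1 -> Rdnm = 1.0 for any r
rdnm(0.9, 10)   # slope below 1 -> Rdnm below 1
```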

For the power model of Rdnm, what is Y and what is x?

In my project I used the linearized power model, which is

ln(Yj) = α + β · ln(xj),

where Y is the respective PK metric (AUC, Cmax, …) and x the dose, both at level j. Most people prefer the linearized model over the original one, which is

Yj = α · xj^β,

because the latter requires nonlinear fitting. If you have a Phoenix/NLME license, go ahead with the “pure” model. Anyhow, I would not recommend that, because in a regulatory setting the former is easier to assess than the latter.
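The linearized model reduces to ordinary linear regression on the log scale; a minimal sketch with made-up numbers:

```python
import numpy as np

# Hypothetical dose levels and a PK metric (e.g. AUC) following a power law
dose = np.array([10.0, 25.0, 50.0, 100.0])
auc = 2.0 * dose ** 0.95          # alpha = 2, beta = 0.95 (made-up values)

# Linearized power model: ln(Y) = alpha + beta * ln(x) -> ordinary linear fit
beta, alpha = np.polyfit(np.log(dose), np.log(auc), 1)
```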

However, there is a situation which demands nonlinear fitting: A power model with an intercept, i.e.,

Yj = α + λ · xj^β.

You would need this model when dosing an endogenous compound with measurable basal levels (the intercept α accounts for the baseline).
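This variant has no exact linearization, so it needs a nonlinear fit; a minimal SciPy sketch with made-up numbers (function and parameter names are mine):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_with_intercept(x, alpha, lam, beta):
    """Power model with an intercept: Y = alpha + lam * x**beta."""
    return alpha + lam * x ** beta

dose = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
y = power_with_intercept(dose, 3.0, 1.5, 0.9)   # made-up "observed" data

# Nonlinear least-squares fit; p0 gives rough starting values
params, _ = curve_fit(power_with_intercept, dose, y, p0=[1.0, 1.0, 1.0])
```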

• mittyright and LLLi like this

### #3959 Sorting within the columns for Plots

Posted by on 12 October 2016 - 07:12 PM

Hello,
Sorry, it is currently not possible to do what you are requesting.  This feature request is being tracked in our tracking system, QC 9942, 'Capability to order legends', "Clients would like to be able to sort the legend. They want to order the values within the variables that are mapped as group."

Regards,
Linda Hughes
Principal Software Engineer, Certara

• Helmut Schütz and spic like this

### #2869 Time Dependent Covariates

Posted by on 17 March 2015 - 03:11 PM

Not sure if this has been touched on the discussion boards, but I am curious how time-dependent covariate modeling is implemented in PML. Are there special considerations for the construction of the dataset, such as a dummy time point where the covariate value changes? Or does the covariate, regardless of its value, need to be specified at each observation (DV)?

Dear Lance

In PML you have two options: the covariate statement or the fcovariate statement.

fcovariate means the covariate value is carried forward, while with the covariate statement it is carried backward.

Suppose you have a covariate like weight, which at t = 0 is 70.

The trial is long, and say that after 6 months the weight is checked again and is now 75.

If you put 70 for weight at t = 0 and 75 at t = 6 months, then if you are using

fcovariate(weight)

the program will read a weight of 70 kg from t = 0 to t = 6 months, and then at t = 6 months it will read 75 kg.

If you are using the covariate statement

covariate(weight)

then at t = 0 it will read 70, but it will immediately look at the next defined value for weight, which is 75, and will use 75 until t = 6 months.

I made the request to have fcovariate as the default interface option, which to me makes more sense.

You do not need to define the covariate value on each row, only when it changes; use the fcovariate statement to make sure it takes the new value at the new time (as just explained).
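A toy illustration of the two carry conventions described above (plain Python mimicking the behavior, not PML):

```python
import bisect

def covariate_value(times, values, t, carry="forward"):
    """Illustrative lookup mimicking fcovariate (carry forward) versus
    covariate (carry backward) between recorded covariate times."""
    if carry == "forward":                        # fcovariate-like
        i = bisect.bisect_right(times, t) - 1
        return values[max(i, 0)]
    i = bisect.bisect_left(times, t)              # covariate-like
    return values[min(i, len(values) - 1)]

times, weights = [0.0, 6.0], [70.0, 75.0]         # months, kg
covariate_value(times, weights, 3.0, "forward")   # -> 70.0 (old value kept)
covariate_value(times, weights, 3.0, "backward")  # -> 75.0 (next value used)
```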

I hope it helps.

best Regards

Serge

• Tao Liu and NMedlicott like this

Posted by on 12 November 2014 - 07:39 AM

It is a shame that the Floating license model is being dropped by Certara.

The new license model is based on named-user management, requiring considerable administrative effort, particularly for organizations with a larger user community and where certain users need the software only occasionally.

The new model is also very inflexible, as users who have left the organization can only be de-registered twice a year and therefore block a license in the meantime.

Furthermore it costs considerably more without adding any obvious value.

I therefore plead with Certara to re-introduce a model similar to the past floating licenses! This would greatly help in managing this software.

Christian

Novartis

• harbjo and hansonm like this

### #6120 Add new column in simulation data for grouping

Posted by on 09 October 2020 - 04:06 AM

Hi Gwchoi,

Please see the attached image. While doing the simulation run you could add the additional data such as dose and covariates into the

Also, the link below may be of use to you in setting up the Simulation tab.

https://onlinehelp.c...lation&rhsyns=

Thanks

Chandramouli

#### Attached Files

• bwendt@certara.com likes this

### #6107 is there any bug for Phoenix software?

Posted by on 20 September 2020 - 05:28 AM

Thank you very much, Simon.

best wishes

• Simon Davis likes this

### #6099 No simulated conc vs time profile for oral administration

Posted by on 24 August 2020 - 06:44 AM

Hello,

I created a model to simultaneously predict the concentration time profile for a parent compound and metabolite for three different routes of administration (IV, PO, inhalation) using the NLME Pop PK in Phoenix.

The model with IV and inhalation works as intended; it is labeled "IV and Inh THC 11OHTHC FO" in the attached file.

When I add the oral route of administration "PO, IV and Inh THC 11OHTHC FO", there is no simulated profile for the parent or metabolite via the oral route.

Does anyone have any ideas why this is happening?

Thank you,

Issue was solved by changing the absorption equations.

• Simon Davis likes this

### #6047 Phoenix 8.3 released June 2020, Live online training

Posted by on 25 June 2020 - 02:33 PM

In addition, Certara University has created a set of free videos on what is new in Phoenix Version 8.3. You can watch them by registering for this free course:

https://certaraunive...om/store/831110

Best,

Ana Henry

• Simon Davis likes this

Posted by on 16 June 2020 - 07:52 PM

Hello Veenu,
It would be best to see your project, but judging from your output it appears that you are indicating that steady state starts at 24 hours. This is probably because your steady-state variable (we cannot see from the screenshots what it looks like) has a flag for steady state at 24 hours when it should probably be at 0 hours. This is just a guess based on your screenshots.

Best,

Ana Henry
• veenu.bala likes this

### #6017 Dissolution model in PK simulation

Posted by on 10 June 2020 - 09:17 AM

Hi Martin,

if you want to simulate or fit your PK profile using estimates from a fit of a Weibull model to your dissolution data you can try this PML code:

test(){
    deriv(Aa = -WB * AaDose)
    deriv(A1 = WB * AaDose - Cl * C - Cl2 * (C - C2))
    deriv(A2 = Cl2 * (C - C2))
    C = A1 / V
    C2 = A2 / V2
    # IVIVC
    bs = Ascale * b
    WB = (bs / MDT) * (t / MDT)^(bs - 1) * exp(-(t / MDT)^bs)
    deriv(Fa = WB)
    error(CEps = 20.1559)
    observe(CObs = C + CEps)
    fixef(MDT(freeze) = c(, 4, ))
    fixef(b(freeze) = c(, 1.5, ))
    fixef(Ascale = c(, 1, ))
    fixef(V = c(0, 15, ))
    fixef(Cl = c(0, 5, ))
    fixef(V2 = c(, 8, ))
    fixef(Cl2 = c(, 2, ))
}
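As a sanity check on the Weibull input (plain Python, using the frozen MDT = 4 and b = 1.5 from the fixefs, with Ascale = 1): numerically integrating WB should reproduce the closed-form cumulative fraction absorbed Fa(t) = 1 − exp(−(t/MDT)^b).

```python
import math

def weibull_rate(t, mdt, b):
    """Weibull input rate: WB(t) = (b/MDT) * (t/MDT)**(b-1) * exp(-(t/MDT)**b).
    Its integral is the cumulative fraction absorbed, Fa(t) = 1 - exp(-(t/MDT)**b)."""
    return (b / mdt) * (t / mdt) ** (b - 1) * math.exp(-((t / mdt) ** b))

# Midpoint-rule integration of WB over 0..12 h versus the closed form
mdt, b, dt = 4.0, 1.5, 1e-4
fa_numeric = sum(weibull_rate((i + 0.5) * dt, mdt, b) * dt
                 for i in range(int(12.0 / dt)))
fa_exact = 1 - math.exp(-((12.0 / mdt) ** b))
```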

Let me know if this works for you.

Bernd

• mjons likes this

### #5996 Reset ignored with sequential PKPD

Posted by on 14 May 2020 - 02:22 AM

For some reason your Concentration was not mapped in your workflow (not sure why this happened). I recreated the model (see under Edited workflow) and it's working fine.

Thanks

Chandramouli (Mouli)

• cphipps likes this

### #5923 Initial fcntheta1 NaN error in NLME

Posted by on 07 February 2020 - 10:26 AM

You need to guard against the input function being evaluated at time zero, where it divides by zero.

Option 1: only apply the input when time is above zero; otherwise set it to zero.

input =  t ==0 ? 0 : ((MAT/(6.283*CV^2*t^3))^0.5) *exp(-((MAT-t)^2)/(2*CV^2*MAT*t))

Option 2: add a small time offset to t:
P1 = (MAT/(2*3.14*CV*CV*(t+0.000001)**3))
P2 = P1**0.5
P3 = (MAT-(t+0.000001))*(MAT-(t+0.000001))/(2*CV*CV*MAT*(t+0.000001))
input = 1*P2*exp(-P3) # dose amount assumed 1
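The same guard, sketched in plain Python (the MAT, CV, and dose values are made up for illustration):

```python
import math

def ig_input(t, mat=1.0, cv=0.5, dose=1.0):
    """Inverse-Gaussian input rate with a guard at t = 0, mirroring option 1
    above; without the guard the expression divides by zero at t = 0."""
    if t == 0:
        return 0.0
    return dose * math.sqrt(mat / (2 * math.pi * cv ** 2 * t ** 3)) * \
        math.exp(-((mat - t) ** 2) / (2 * cv ** 2 * mat * t))

ig_input(0.0)   # -> 0.0, no division by zero
ig_input(1.0)   # positive, finite rate at t = MAT
```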

See the attachment for how you can also output the cumulative fraction absorbed.

#### Attached Files

• BP1968 likes this

### #5890 Enterohepatic Recycling - clearance problem

Posted by on 19 December 2019 - 09:39 PM

Hi Everyone

I am working on a small molecule with enterohepatic recycling in a pharmacokinetic study after extravascular administration. The model fit and CV% look good for all pharmacokinetic parameters except the central-compartment clearance, which always comes out negative!!

I have also tried the transition model, but every model I tried gave a negative central-compartment clearance (the graphical fit and CV% always look great).

Can anyone explain this issue?

Please check my models and results, I am looking forward to your response!!! Thank you!!!!

Best regards,

Xiuqing Gao

#### Attached Files

• Xiuqing Gao likes this

### #5871 NCA analysis of sparse data - how to?

Posted by on 28 November 2019 - 10:53 AM

Hi Daniel,
Reformulating your problem: you have sparse data that does not allow individual NCA, but you still want to determine the PK parameters of interest (e.g., AUC).

AUC can be derived using pop PK, so don't worry about which method gets you the PK parameter you want.

A pop-PK approach is usually the go-to solution. If you are using simple models you can even derive Cmax and AUC from formulas, or of course you can simulate if your model has no closed-form solution.
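For a simple linear model, for example, AUC(0-∞) follows directly from dose and clearance; a minimal sketch with hypothetical numbers:

```python
def auc_from_cl(dose, cl):
    """AUC(0-inf) = Dose / CL for a linear model after an IV dose
    (use Dose * F / CL for extravascular dosing)."""
    return dose / cl

# Hypothetical individual clearance estimates for three subjects, 100 mg dose:
aucs = [auc_from_cl(100.0, cl) for cl in (4.0, 5.0, 6.25)]
# -> [25.0, 20.0, 16.0]
```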

You do not necessarily need to run an NCA; there is model-based integration that can get you AUC, for example.

When you request a table simulating at a grid of new time points, the simulated concentrations are based on the individual EBEs, so you get that automatically: when you fit a model, the individual parameter outputs are all based on EBEs.

Your question
Which procedure is better - individual simulation + NCA or PK parameters based on EBEs?
is therefore not applicable.

Samer

• Simon Davis likes this

### #5870 NCA analysis of sparse data - how to?

Posted by on 27 November 2019 - 10:59 AM

Dear All,

I have come across a (maybe quite general) issue, namely how best to derive NCA parameters when only sparse data are available.

For instance I have densely sampled drug concentrations available in adults and sparse data in pediatric patients.

1.) My initial idea was to build a pop-PK model using all available data from the adults and pediatric patients, and then to use individual simulations of the young to derive the PK parameters via NCA from those simulations. To be consistent with the NCA-based PK parameter derivation for adults and pediatrics, I could also use individual simulations for adults and pediatrics alike.

- Does that procedure make sense to you?

- How do I get the individual simulations into the NCA analysis within Phoenix?

2.) I have now also come across a procedure for obtaining PK parameters of sparsely sampled PK data by empirical bayes estimates (EBE).

- How does the procedure using EBEs work?

- How do I perform the calculations using EBEs in Phoenix?

- Which procedure is better - individual simulation + NCA or PK parameters based on EBEs?

Thank you very much in advance and

best regards,

Daniel

• Simon Davis likes this

### #5864 “leave-one-out” geometric mean

Posted by on 19 November 2019 - 02:09 PM

Ded Moroz has come early....

This solution, coded in the Data Wizard, will work with NAs; what do you think, Helmut?

(Note: there could feasibly be more than one PK metric per subject (i.e., replicate designs); that’s why the geometric mean is calculated per subject as well, even if it does not seem necessary in this case.)
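For reference, a "leave-one-out" geometric mean that ignores missing values can be sketched in plain Python (this is only an illustration of the idea, not the Data Wizard code; None stands in for NA):

```python
import math

def loo_geomean(values):
    """Leave-one-out geometric means: for each position i, the geometric mean
    of all other non-missing (non-None) values."""
    out = []
    for i in range(len(values)):
        rest = [v for j, v in enumerate(values) if j != i and v is not None]
        out.append(math.exp(sum(math.log(v) for v in rest) / len(rest)))
    return out

loo_geomean([2.0, 8.0, None])  # ≈ [8.0, 2.0, 4.0]
```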

Simon.

#### Attached Files

• bwendt@certara.com likes this

### #5855 Analyzing results & output - AIC and BIC values

Posted by on 15 November 2019 - 03:49 PM

After a model has been run/executed and the folder is refreshed, Pirana will show the main results of the run in the main overview. It will show the OFV, the difference in OFV from the reference model (if specified), the number of significant digits, and some status flags:

S means a successful minimization (as reported by NONMEM)

R means estimation ended with rounding errors

C means a successful covariance step

M means an unsuccessful covariance step due to matrix singularity

B means a boundary problem was reported by NONMEM

Pirana can also show the AIC and BIC values for the model, although you will have to instruct Pirana explicitly to calculate them for finished runs:

Right-click on the model (colored yellow) >> Model >> Compute AIC & BIC

This will open Compute AIC & BIC >> hit Compute AIC/BIC

Pirana will calculate these values and you should see the results computed.
• Simon Davis likes this

### #5837 PML School teams up with Certara University

Posted by on 28 October 2019 - 08:25 AM

I would like to announce that PML School is joining forces with Certara University. Future PML School sessions will be part of their Phoenix Express Webinar series. This allows us to focus on preparing new topics on the basis of modeling issues reported to Support that are interesting to this community of Phoenix NLME users. We will continue to upload content and links of new webinar sessions to this forum and entertain any questions you may have.

• Simon Davis likes this

### #5835 Phoenix 8.2 is now certified for Windows Server 2019

Posted by on 25 October 2019 - 10:14 AM

The validation activities for Phoenix 8.2 for use with Windows Server 2019 have been successfully completed, and Phoenix 8.2 is now certified for use with Windows Server 2019.

• Simon Davis likes this

### #5810 Inflating clearance %CV for monte carlo simulation of off-diagonal omega matrix

Posted by on 28 September 2019 - 03:42 AM

You can keep the correlation matrix as is and inflate the standard-deviation part:
...

Use R or any other software to compute your "new" variance-covariance matrix based on:

inflationfactor <- 1.15

mu <- c(0, 0, 0)  # eta means

# this is your correlation matrix of omega
corMat <- matrix(c(1,    0.78, 0.23,
                   0.78, 1,    0.27,
                   0.23, 0.27, 1),
                 ncol = 3)

# the square roots of your diagonals (standard deviations), inflated
stddev <- c(1.23, 0.92, 1.32) * inflationfactor

covMat <- stddev %*% t(stddev) * corMat

Plug the new covMat into your PML code.
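The same computation in NumPy, handy for checking the result (the numbers are those from the R snippet above):

```python
import numpy as np

# Inflate the standard deviations, then rebuild the covariance matrix
# from the (unchanged) correlation matrix: cov = outer(std, std) * cor.
inflation = 1.15
cor = np.array([[1.00, 0.78, 0.23],
                [0.78, 1.00, 0.27],
                [0.23, 0.27, 1.00]])
std = np.array([1.23, 0.92, 1.32]) * inflation
cov = np.outer(std, std) * cor

# The diagonal of the new matrix holds the inflated variances (std**2),
# and the off-diagonal correlations are preserved.
```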

• Simon Davis likes this