Interpretation of subject-wise steady-state Assessment

Steady state PlanarCI_Upper

2 replies to this topic

#1 d_nil

    Newbie

  • Members
  • 6 posts

Posted 05 January 2022 - 06:09 AM

Dear Team,

 

I am working on a molecule for which steady-state attainment needs to be confirmed.

 

I have run a subject-wise linear regression for steady state; the final parameters show the intercept, slope, SD, and %CV, along with the upper and lower univariate and planar CLs.

 

How should steady-state attainment be interpreted from these data?

 

Also, what are the limits for the UpperCL values, and what is their significance? In my case the PlanarCI_Upper value is negative for more than 5 subjects.

 

Kindly revert.


Edited by d_nil, 05 January 2022 - 06:10 AM.

Regards,

d_nil


#2 Simon Davis

    Advanced Member

  • Administrators
  • 1,316 posts

Posted 05 January 2022 - 11:23 AM

Hi D_Nil,

 

       I think it would be clearer if we saw your project to better understand the linear regression you have performed.

 

Happy New Year!  Simon



#3 Helmut Schütz

    Advanced Member

  • Members
  • 316 posts
  • Location: Vienna, Austria

Posted 15 January 2022 - 08:16 AM

Hi d_nil,

 

I am working on a molecule for which steady-state attainment needs to be confirmed.

 

Needed by whom? What do you want to demonstrate exactly? Whether linear PK is applicable (superposition holds) or for comparative bioavailability in (pseudo-) steady state?

 

 

I have run a subject-wise linear regression for steady state; the final parameters show the intercept, slope, SD, and %CV, along with the upper and lower univariate and planar CLs.

 

How should steady-state attainment be interpreted from these data?

 

Though this approach was used in the past – I recommended it for decades myself – it is problematic.

 

 

Also, what are the limits for the UpperCL values, and what is their significance? In my case the PlanarCI_Upper value is negative for more than 5 subjects.

 

That shows exactly why this approach is problematic.

  1. If within-subject variability is low to intermediate, it is quite likely that some subjects show a slope which is statistically significantly different from zero (or, alternatively, a CI which does not include zero). You will – falsely – conclude that the respective subjects are not in steady state.
  2. If within-subject variability is high, you will find no significant difference of the slope from zero (i.e., the CI includes zero) – although some subjects are still in the saturation phase. You will – falsely – conclude that steady state is achieved.

In the first case you lose power, and in the second you keep subjects in the data set which are not in steady state. Recall that we can assess BA either after a single dose or in steady state – not in between.

Assessing the CI would only make sense if there were a priori defined acceptance limits (as in bioequivalence). Such limits have never been defined.

 

Hence, assessing the CI is not recommended in any guideline for BE. Instead:

  • Present tables of individual values together with the geometric mean / CV.
  • Show spaghetti and geometric mean plots.
  • No fixed rules – use common sense!
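For the tabular summary, a minimal sketch in base R may help. Everything here is illustrative and not from the original post: the data frame, its column names (subject, t, C), and all values are made up.

```r
# Sketch (assumptions: long-format data with columns subject, t, C;
# pre-dose concentrations at each dosing time, hypothetical values)
geo.mean <- function(x) exp(mean(log(x)))
geo.CV   <- function(x) sqrt(exp(var(log(x))) - 1) # lognormal CV

dta <- data.frame(subject = rep(1:4, each = 3),
                  t       = rep(c(0, 24, 48), 4),
                  C       = c(0, 0.61, 0.84,
                              0, 0.55, 0.79,
                              0, 0.70, 0.95,
                              0, 0.58, 0.88))
dta  <- dta[dta$C > 0, ]               # drop pre-first-dose zeros
# geometric mean / CV of pre-dose concentrations per time point
smry <- aggregate(C ~ t, data = dta,
                  FUN = function(x) c(gm = geo.mean(x), gCV = geo.CV(x)))
print(smry)
# spaghetti plot: one line per subject
with(dta, interaction.plot(t, subject, C, legend = FALSE,
                           xlab = "time (h)", ylab = "pre-dose C"))
```

Nothing beyond base R is needed; in practice you would plot the geometric means on top of the individual profiles.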

The above was agreed upon by members of the CHMP Pharmacokinetics Party at the EUFEPS-workshop (Bonn, June 2013) dealing with the EMA’s MR guideline. When I presented that last April in São Paulo, members of the ANVISA were fine with it.

 

If you have R, try this script:

round.up <- function(x, y) {
  # round x up to the next multiple of y
  return(as.integer(y * (x %/% y + as.logical(x %% y))))
}

est <- function(x, y) {
  x   <- as.data.frame(cbind(t = x, C = y))
  x   <- tail(x, 3)
  # linear regression of last three pre-dose concentrations
  m   <- lm(C ~ t, data = x)
  # 95% CI of slope
  ci  <- confint(m, "t", level = 0.95)
  # significant if the CI does not contain zero
  sig <- sign(ci[1]) == sign(ci[2])
  res <- list(intercept = coef(m)[[1]], slope = coef(m)[[2]],
              lwr = ci[[1]], upr = ci[[2]], sig = sig)
  return(res)
}

set.seed(123456)                        # for reproducibility
nsims  <- 1e5L                          # number of simulations
CV.lo  <- 0.05                          # drug with low variability
CV.hi  <- 0.40                          # drug with high variability
tau    <- 24                            # dosing interval
t.half <- 12                            # you name it...
t.ss   <- 7 * t.half                    # conservative approach
t.ss   <- round.up(t.ss, tau)           # time of last dose
t      <- seq(0, t.ss, tau)             # dosing times
d      <- length(t)                     # number of doses
C      <- 1 - exp(-log(2) / t.half * t) # 1st order
true   <- est(t, C)                     # model for comparison
# collect results with low and high variability
lo     <- hi <- data.frame(sim = 1:nsims, intercept = NA_real_,
                           slope = NA_real_, lwr = NA_real_,
                           upr = NA_real_, sig = NA)
pb     <- txtProgressBar(0, 1, 0, char = "\u2588",
                         width = NA, style = 3)
for (sim in 1:nsims) {
  # multiplicative error
  C.lo <- rlnorm(n = d, meanlog = log(C) - 0.5 * log(CV.lo^2 + 1),
                 sdlog = sqrt(log(CV.lo^2 + 1)))
  C.hi <- rlnorm(n = d, meanlog = log(C) - 0.5 * log(CV.hi^2 + 1),
                 sdlog = sqrt(log(CV.hi^2 + 1)))
  # get intercept, slope and its 95% CI, significance
  lo[sim, 2:6] <- est(t, C.lo)
  hi[sim, 2:6] <- est(t, C.hi)
  setTxtProgressBar(pb, sim / nsims)
} # simulations running...
close(pb)
f <- "CV = %4.1f%%, median of slopes = %+.5f (RE = %+.2f%%)"
f <- paste0(f, "\n significant slopes: %.2f%%\n") # cosmetics
cat(prettyNum(nsims, format = "d", big.mark =","),
    sprintf("simulations (%.2f%% of steady state,", 100 * tail(C, 1)),
    sprintf("slope = %+.5f)", true$slope), "\n",
    sprintf(f, 100 * CV.lo, median(lo$slope),
               100 * (median(lo$slope) - true$slope)/true$slope,
               100 * sum(lo$sig, na.rm = TRUE) / nsims),
    sprintf(f, 100 * CV.hi, median(hi$slope),
               100 * (median(hi$slope) - true$slope)/true$slope,
               100 * sum(hi$sig, na.rm = TRUE) / nsims), "\n")

You should get:

100,000 simulations (99.61% of steady state, slope = +0.00122)
 CV =  5.0%, median of slopes = +0.00122 (RE = -0.09%)
 significant slopes: 6.40%
 CV = 40.0%, median of slopes = +0.00114 (RE = -6.58%)
 significant slopes: 4.75%

 

Note that steady state would only be reached after an infinite number of administrations. In the example we are at ≈99.61% of the true steady state. Hence, the slope is slightly positive (≈0.00122). Only in true steady state would the slope be zero. Even then we would expect ≈5% significant results. That’s the false positive rate (FPR) of any test performed at level α = 0.05.

What we see here: with low variability we are above the FPR. That’s correct, because the true slope is positive. On the other hand, with high variability the test is not powerful enough to detect the positive slope, and we end up close to the nominal level of the test.
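The ≈5% FPR in true steady state can be checked directly with a stripped-down variant of the script above: simulate a truly flat profile (zero slope, true mean 1) and count how often the 95% CI of the slope excludes zero. All numbers below are arbitrary choices for illustration.

```r
# Sketch (assumption: true steady state, i.e., a flat profile with
# multiplicative lognormal error): under the null hypothesis the 95% CI
# of the slope should exclude zero in about 5% of simulations.
set.seed(7)
nsims <- 2000L
t     <- c(0, 24, 48)                  # last three pre-dose samples
CV    <- 0.20
hits  <- 0L
for (i in seq_len(nsims)) {
  C  <- rlnorm(3, meanlog = -0.5 * log(CV^2 + 1),
               sdlog = sqrt(log(CV^2 + 1)))   # true mean = 1, zero slope
  ci <- confint(lm(C ~ t), "t", level = 0.95)
  if (sign(ci[1]) == sign(ci[2])) hits <- hits + 1L
}
cat(sprintf("significant slopes: %.2f%%\n", 100 * hits / nsims))
```

With only three points per regression the residual has a single degree of freedom, so the CI is wide – yet the long-run rejection rate under the null still hovers around the nominal 5%.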

 

Background and more examples there.

 

Hope that helps.



 Best regards,
Helmut

https://forum.bebac.at/





