Hello,
I'm trying to build a model where I need to have ETAs generated
separately for the ID and another variable (MACH). What I have is a PD
experiment that was run on several different machines (MACH). Each
machine appears to have a different slope per day and a different
calibration. I st
model in the same way as
> >>>> IOV. In the
> >>>> case of intermachine-variability you would have to assume the
> >>>> variability
> >>>> between all machines is the same... Or would you rather assume
> >>>> in
Hi Batul,
The first method should work; I use it relatively often.
Thanks,
Bill
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Batul Parta
Sent: Wednesday, November 12, 2008 10:36 AM
To: nmusers@globomaxnm.com
Subject: [NMusers] Use of ADDL
Hi Jian,
I would look for a covariate effect on that parameter.
Thanks,
Bill
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Jian Xu
Sent: Thursday, November 13, 2008 10:16 AM
To: nmusers@globomaxnm.com
Subject: [NMusers] Very small P-Value fo
Hi Hussain,
The error, as it states, is that it cannot find g77. Where is g77 on your
C: drive (in what directory), and what does your compiler section look like?
Thanks,
Bill
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Mulla Hussain - Senior P
Dear All:
I'm writing to inform you that we continue to solicit CVs from candidates
interested in pursuing entry-level and senior pharmacokineticist
positions within the Department of Drug Metabolism at Merck Research
Laboratories at our West Point, Pennsylvania location -- in fact, we are
planning to exp
Hello,
When is F1 evaluated? I am working on a model with nonlinear
bioavailability and I need to know if F1 is evaluated during the entire
absorption process or if it is evaluated upon the dose entering the
dosing compartment.
Thanks,
Bill
Hi Susan,
The simplest solution would be to split the line similar to:
TVCL1=THETA(1)+THETA(7)*(VMLL-49827.76)+THETA(8)*(KML-96.226)+THETA(9)*(
VLRP-5.659)
TVCL=TVCL1+THETA(10)*(VLSP-83.615)+THETA(11)*(WT-76.638)+THETA(12)*(VLB-
6.778)+THETA(13)*(VLG-1.606)
Have a good day,
Bill
Hi Naren,
PsN (http://psn.sf.net) provides an interface to Sun grid engine. I've
not used it personally, but I do use it with Torque, and generally, it's
just a normal installation of NONMEM onto shared disk space and running
in a shared disk space. There are no specific tricks when using with
Hi Leonid,
This is how to do it for fixed effects in R with a regular nlme model
(I've not used nlmeODE). I don't have an example with generating new
random variables.
library(MASS)
## make a model named "model" here
coef <- model$coefficients$fixed
cov.matrix <- model$varFix
new.coef <- as.data
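The truncated snippet was likely headed toward drawing new coefficient vectors from a multivariate normal via mvrnorm. A minimal sketch of the same idea, here in Python with NumPy, using made-up numbers standing in for model$coefficients$fixed and model$varFix:

```python
import numpy as np

# Made-up stand-ins for model$coefficients$fixed and model$varFix
coef = np.array([1.2, 0.5, -0.3])
cov_matrix = np.array([
    [0.040, 0.001, 0.000],
    [0.001, 0.090, 0.002],
    [0.000, 0.002, 0.010],
])

rng = np.random.default_rng(42)
# Draw 1000 new fixed-effect vectors from the estimated sampling distribution
new_coef = rng.multivariate_normal(coef, cov_matrix, size=1000)
print(new_coef.shape)  # (1000, 3)
```

Each row of new_coef is one simulated set of fixed effects that respects the correlation structure in the covariance matrix.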
Hi Paul,
If you have a row in your data file at TIME == DUR, then something like
the following should work (you would need to have DUR on each row for
this; there are other ways to do it that you don't have to have DUR on
each row):
$ERROR
CMAX = 0
IPRE=F
EDV=EXP(DV)
W=1
IF(F.GT.0) W=F
IRES=F-E
Hi,
I have code where the ID is printed out in verbatim code for a model
that worked fine with NM6, but it crashes NM7:
...
$PK
...
" PRINT*,ID
...
The error is:
WARNINGS AND ERRORS (IF ANY) FOR PROBLEM1
(WARNING 2) NM-TRAN INFERS THAT THE DATA ARE POPULATION.
CREATING MUMODEL
Hello,
I have a model where I will likely need between 130 and 150 compartments
(it's a rich data set with many transit compartments, so it is probably
estimable). When I was looking in SIZES, it indicated that the maximum
number of compartments is 99; I was wondering if there is a way around
this
Hi,
I know that this topic has come up from time to time, but I've not found
a definitive word on using NONMEM in virtualized environments. If I've
missed an old post here, please point me to the link I should have
found.
Specifically I have a few questions for people who have built
virtualized
auer, Ph.D.
Vice President, Pharmacometrics
ICON Development Solutions
Tel: (215) 616-6428
Mob: (925) 286-0769
Email: robert.ba...@iconplc.com
Web: www.icondevsolutions.com
____________
From: Denney, William S. [mailto:william_den...@merck.com]
Sent: Tuesday, September 21, 2010
Hi Jeroen,
Jumping in a bit later, I agree generally with what has been said so far, but I
do disagree with one point. I think that the models we work with tend to have
local minima that cause us to find different "best models" depending on the
path taken to get there.
And, I brush after brea
Hi Santosh,
One thing you didn't mention was the compiler. If you have an Intel
compiler, things will typically run much faster.
Thanks,
Bill
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com]
On Behalf Of Santosh
Sent: Thursday, March 10, 2011 10:01 AM
To: nm
Hi Istvan,
The issue is that compartment 2 is your default observation (DEFOBS)
compartment. In your $ERROR block, you need something like
IF (CMT.EQ.2) IPRE=A(2)/V2
IF (CMT.EQ.4) IPRE=A(4)/V4
Y=IPRE*(1+ERR(1))+ERR(2)
Note that this will assume the same error structure between the two
compart
Hi Michael,
Another method to consider would be to fit the individuals who obviously
were correctly sampled and then estimate the true infusion rate for the
other individuals iteratively.
* Subset data to IDs with correct sampling
* Fit model to those individuals
* Subset points for incorrect sam
Hi Min,
For this data set, it's likely that there is not a large effect of a
60.9 hr dose on the 176.93 hr sample (assuming that the half-life is
relatively short, as would usually be suggested by BID dosing). It looks
like daylight savings time may have happened between the 48 and 60 hr
time poi
Hi Toufigh,
I typically think that data quality decreases with phase and with sampling
frequency. Given what you described below, I'd think that you're fighting data
quality in the sparse, phase 3 studies, and with the parameters you're
describing as having trouble, it seems to support that th
Hi Norman,
With what you're asking, is the real question, "How sensitive is parameter x to
changes in parameter y?"
If so, the best method would probably be bootstrapping. In PsN, that can be
done with the bootstrap command. An alternate could be likelihood profiling
(llp in PsN), though the
Hi Norman,
I don't know a simple way to do that analysis. Were I doing this, I'd code the
model into R and loop over the parameters.
LLP would probably get you closest within NONMEM tools that I'm aware of.
You'd need to include Cmax and AUC in your output files, and then you would
manually
Hi Yaming,
This simplifies things significantly. The first dose will (hopefully fully)
define the PK of drug 1. The dose of drug 2 will be modeled as the sum of the
two.
Since you have only drug 1 at the beginning, you should be able to model the
full time course with:
F=A(2)/V2 + A(3)/V3
Hi Orlando,
This is somewhat dependent on how exactly you want the fraction to be coded.
Most likely, you want to restrict the volume to be positive and not bounded
between 0 and 1 (i.e. it is ≥0). Given that, you can define V3 as:
V3=EXP(THETA(8))*V1
You would then just exponentiate THETA(8
Hi,
I've hit an issue with the .ext file generated by NM7.1.2. It appears that if
the estimation fails (message below), the .ext file doesn't get the header with
the parameter values:
Estimation failure message:
0PROGRAM TERMINATED BY OBJ
ERROR IN OBJ2 WITH INDIVIDUAL 1 ID= 1.0
Hi Norman,
I believe that you can use the standard ADVAN routines by putting the urine as
the excretion compartment (e.g. CMT = 3 for ADVAN3). What you would do is:
* Put an EVID=2 with CMT=3 at TIME 0 for each subject (to turn on the
output compartment),
* Put a row with EVI
Hi Ayyappa,
As Sven mentioned, it would make the most sense for it to be a significant
covariate on both central and peripheral volumes.
Just because the objective function value drops more doesn't mean that you
shouldn't include the effect of weight on central volume in your final model.
The
Hi Martin,
At an arbitrary level of precision, there will likely be differences [1].
These differences are unlikely to affect any model results to a notable degree.
When I've done testing previously with different compilers, processors, and
platforms, changing compilers (gcc vs ifort) made
Hi Khaled,
You can simply include age as a new column.
The complexity is that the solution will change at the time that the row
happens. So, if the change in age between rows in the table is small relative
to the total covariate effect (like age changes from 20 to 21 which makes a 1%
change i
Hi Markus,
Typically, AUCs are calculated using software like WinNonlin because it is both
simpler and more objective (the model is simple and specified a priori).
Generally, the calculation is just the linear-up, log-down trapezoidal rule
with personal (or corporate) preferences on handling
Hi Xavier and Kyun-Seop,
I was hoping to have a more satisfactory answer as well, but it's good to have
the real answer.
I have always looked at nsig as a general marker of stability while I think of
the actual number of significant digits in the final answer as related to the
RSE of the param
Hi Pascal,
In addition to Leonid's answer, if you have time-varying covariates and aren't
explicitly computing the current value in the $DES block and are interpolating
them (with something other than LOCF), that could explain the difference. The
reason would be that NONMEM only resets the val
Hi Siwei,
In a similar situation previously, I've found fixing the additive error to a
small value (~= 0.0001*LOQ) has provided a work-around for this. It usually
arises from a zero measurement needing to be nonzero for estimation purposes.
A better fix is to use the M2 method which should lo
Hi Yaming,
In a general sense, the two things that you're wanting to do are likely
straight-forward with NONMEM.
For the dosing, as long as you're dosing at the same time and it's the dose
amount that is changing, you can make F1 dependent on your event (e.g. to stop
dosing after the event, set
Hi István,
Because the parameter selected for the reference period will affect the fitting
in all periods, I think that the only way to do what you're wanting will be to
fit just the reference period and then merge the resulting Ka into your data
set and use that instead of fitting it.
A more
Hi Nele,
For the first point, if you have values that are rounded like that and you will
be mixing rounded and non-rounded values, then the simplest way to handle it
would be to have a different additive error term for measurements that were or
were not subject to the rounding.
A more complex
Hi Bernard,
Try using just IF statements instead of including the ELSE IF and END IF like
this:
IF (GENE.EQ.3) CL = THETA(1)*EXP(ETA(1)) ;GG
IF (GENE.EQ.2) CL = THETA(6)*EXP(ETA(3)) ;GT
IF (GENE.EQ.1) CL = THETA(7)*EXP(ETA(4)) ;TT
IF (GENE.EQ.4) CL = THETA(7)*EXP(ETA(4))
Hi Xinting,
In a few rare cases, I've seen this happen if the model is approaching
nonconvergence. In those cases, typically the RSE on one or more parameters
will increase and the ratio of max to min eigenvalues will increase
substantially. Are you seeing either of these?
Thanks,
Bill
On
Hi Siwei,
Biases can definitely come from multiple sources including model
mis-specification (as you noted with #1 below). There are multiple methods
that you can use to assess the improvement of the model which may include using
prior information (a prior statement for the parameters reported
Hi Siwei,
If you are using an algebraic model (i.e. no differential equations), then you
can simply include it in your equation:
e.g. assuming that SBP is systolic blood pressure in your original data set:
EFF=THETA(1)+SBP*THETA(2)
If you have a differential equation model and you want the tim
*EXP(ETA(4))
S2=V2/1000
S3=V3/1000
$ERROR
IPRE=F
IRES=DV-IPRE
W=F
IF(W.EQ.0) W = 1
IWRE = IRES/W
Y=F*(1+EPS(1))+EPS(2)
Best Regards
On 12 August 2013 20:50, Denney, William S.
mailto:william.s.den...@pfizer.com>> wrote:
Hi Xinting,
In a few rare cases, I've seen this happen i
From: Xinting Wang [mailto:wxinting1...@gmail.com]
Sent: Monday, August 26, 2013 9:27 AM
To: Leonid Gibiansky
Cc: nmusers@globomaxnm.com; Denney, William S.
Subject: Re: [NMusers] Reducing ETAs actually decreased OFV
Dear Bill,
Appreciate your reply a lot. The issue is from KA. Adding KA or no
Hi Markus,
I assume that by "capped" you mean that there is one value that shows up
repeatedly as the maximum value.
Given your model below without seeing the data, there are no covariates in the
model to allow for differences between individuals without etas (that come with
IPRED). If you ar
Hi Pavel,
Perhaps try a variable name other than "R". The error suggests that "R" is a
reserved name that cannot be reused.
Thanks,
Bill
On Mar 26, 2014, at 18:52, "Pavel Belo"
mailto:non...@optonline.net>> wrote:
Hello NONMEM73 Users,
I try to use call random:
$PK
IF (ICALL.EQ.4.AND.NEWIND
Hi Matthew,
It looks like you're going to some rather extreme efforts to achieve your goal,
and dependent on your assay, having negative CP values may not be wrong.
For the negative CP values, usually a PK assay has a component of both additive
and either proportional or exponential error from
Hi Xinting,
I’ve worked with these types of statements a good bit, and my personal
preference is to add a column to the data set that makes the selection simpler
(e.g. set it to 1 if (A == 1 or A == 2) and B < 100). Last I knew, it wasn’t
possible to do an “AND” in an ignore statement (and che
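The flag-column approach can be sketched as follows (plain Python, with hypothetical column names A and B taken from the example condition):

```python
# Hypothetical rows; A and B stand for the columns in the example condition
rows = [
    {"A": 1, "B": 50},
    {"A": 2, "B": 150},
    {"A": 3, "B": 80},
    {"A": 2, "B": 99},
]
for row in rows:
    # FLAG = 1 when (A == 1 or A == 2) and B < 100, else 0
    row["FLAG"] = 1 if row["A"] in (1, 2) and row["B"] < 100 else 0

print([row["FLAG"] for row in rows])  # [1, 0, 0, 1]
```

With FLAG written into the data file, the NONMEM record reduces to a single condition, e.g. IGNORE=(FLAG.EQ.1) or the corresponding ACCEPT.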
Hi SoJeong,
I agree with Leonid here on the value of the mixture model. With potentially
subtle changes, mixture models can be very difficult. One way that I've had
luck previously with a similar approach is to make "unknown genotype" a
separate category and then to fit a parameter that is fr
Hi Yuma,
IIV on residual error is effectively compound symmetry. What it is saying is
that "some subjects are more variable than others" without suggesting a reason
why. Your model below incorporates additive error, and you don't have IIV on
F1, so in your case, this ETA on EPS could be one o
Hi Ravi,
Does the file "C:\COMBINED_PK_PD_Feb3rd.CSV" exist? I'm guessing that you
didn't mean to load the file from the root directory of your C: drive. Check
the file location in your model file.
Thanks,
Bill
On Feb 6, 2015, at 11:55, "Singh,Ravi Shankar"
mailto:ravi.si...@ufl.edu>> wrot
Hi Xinting,
It looks like your data file has quotes in it instead of just character data.
You could confirm this in Linux with the head command (head data.csv). The fix
is to use quote=FALSE in your call to write.csv in R.
Thanks,
Bill
On Mar 9, 2015, at 0:04, "Xinting Wang"
mailto:wxintin
Join the ISoP New England Local Events Committee for an evening of learning and
networking. On April 30, 2015 the group will be hosting the ISoP New England
Greatest Hits Poster Event at Takeda Pharmaceuticals in Cambridge, MA. This
will be an excellent opportunity to show your work, and see w
Dear all,
There will be another ISoP social event at the Asgard (350 Mass Ave, Cambridge
MA) on Thursday June 11 at 5:30pm. This event will be informal and we'll each
be responsible for our own tab.
I also wanted to let you know that for organizational purposes, we've created a
few new tools.
Hi Brady,
Generally, you will just include the baseline measurement as time=0 and then
reference all other times from that time. So, if your baseline value was drawn
at 8AM on day 1 while the first dose was at 10AM, just set the baseline time as
0 and the first dose time as 2 (also assuming th
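The time-shift arithmetic above can be sketched as follows (clock times are hypothetical; only the 8AM baseline and 10AM first dose come from the example):

```python
from datetime import datetime

baseline = datetime(2015, 6, 1, 8, 0)    # baseline draw, becomes TIME = 0
events = [
    datetime(2015, 6, 1, 8, 0),          # baseline sample
    datetime(2015, 6, 1, 10, 0),         # first dose -> TIME = 2
    datetime(2015, 6, 1, 14, 30),        # a later sample
]
rel_hours = [(e - baseline).total_seconds() / 3600 for e in events]
print(rel_hours)  # [0.0, 2.0, 6.5]
```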
Hi Katrin,
$ERROR is executed once per data row. The time when $ERROR is run is the TIME
value (the discrete times of the measurement). For this specific example, you
can just use TIME. You will need to code your $DES block so that what you're
wanting to integrate as a function of time is in
Hi Ahmad,
I agree with Nick, you will want to weight your precision by the inverse
standard error.
More generally, you are doing a model-based meta-analysis. When I was first
learning about it, a book that I found very informative and readable was
"Introduction to Meta-Analysis" by Borenstein
Hi Pavel,
The easiest way that I know is to generate your data file with one set of rows
for estimation with M3 and another row just above or below with MDV=1. NONMEM
will then provide PRED and IPRED in the rows with MDV=1.
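Generating the duplicated rows is straightforward in any scripting language; a minimal Python sketch with hypothetical data values:

```python
# Hypothetical observation rows; each gets a duplicate with MDV=1 so that
# NONMEM reports PRED/IPRED on the duplicate while M3 handles the original
rows = [
    {"ID": 1, "TIME": 0.5, "DV": 1.20, "MDV": 0},
    {"ID": 1, "TIME": 1.0, "DV": 0.05, "MDV": 0},
]
out = []
for row in rows:
    out.append(row)
    dup = dict(row)   # shallow copy of the observation row
    dup["MDV"] = 1    # mark the duplicate as a non-observation
    out.append(dup)
print(len(out))  # 4
```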
Thanks,
Bill
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus..
ll be something like
$EST MAXEVALS= SIG=3 NOABORT PRINT=1 SORT CONSTRAIN=5
METHOD=SAEM NBURN=0 NITER=0 POSTHOC INTERACTION
LAPLACIAN GRD=TG(1-7):TS(8-9) CTYPE=3 CINTERVAL=10
I guess the best future way is modify something in NONMEM so there is an option
to provide only PRED in the PRED column (
Dear NMUSERS,
Next Thursday at 5:30pm, come enjoy the ISoP New England social at The Asgard
in Cambridge, MA! Come enjoy a drink, socialize with your pharmacometrics
peers, and chat about the best of 2015 and excitement for 2016.
The details are:
When: Thursday, December 10 at 5:30pm
Where: T
Hi Dennis,
For this, I’d bootstrap it, apply the function to the bootstrapped results, and
use that as your CI.
For more specific steps:
Assuming that:
THETA(1) = intercept
THETA(2) = slope
THETA(3) = value at saturation
LOW = THETA(1) + THETA(2)*Cp
HIGH= THETA(3)
IF (LOW.LT.HIGH) Y = LOW
IF (
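The bootstrap-then-apply step described above can be sketched as follows. All numbers here are hypothetical placeholders, not estimates from any real model; in practice the tuples would come from the bootstrapped THETA estimates:

```python
import random

def response(theta1, theta2, theta3, cp):
    # Y = min(intercept + slope*Cp, saturation), mirroring the LOW/HIGH logic
    return min(theta1 + theta2 * cp, theta3)

random.seed(1)
# Hypothetical bootstrap estimates: (intercept, slope, saturation) per run
boot_thetas = [
    (10 + random.gauss(0, 1), 2 + random.gauss(0, 0.2), 50 + random.gauss(0, 3))
    for _ in range(1000)
]

cp = 15.0
values = sorted(response(t1, t2, t3, cp) for t1, t2, t3 in boot_thetas)
ci_low = values[int(0.025 * len(values))]   # 2.5th percentile
ci_high = values[int(0.975 * len(values))]  # 97.5th percentile
```

The percentile interval on the derived quantity automatically carries the correlation between the THETAs, which is the point of bootstrapping rather than propagating standard errors separately.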
Hi Zheng,
I'll take an intermediate view between Joachim and Nick.
The rich data from Phase 1 provides the ability to define the structural model
and a few of the important covariates. The control of Phase 1 gives precision
that cannot be achieved in Phase 2 or 3 studies. But, there are usual
Hi Sven,
As Mats said, you need to account for correlation between parameters. Using
uncorrelated parameters, you will have all the issues discussed below (high
variability on the second population). For model building, you could do the
following to minimize that correlation and have the vari
Hi,
Adding to Suruchi, one issue I have encountered in the past when working with
complex or mathematically stiff models is that different integrators
occasionally give different results.
This will usually show up as instability of one of the integrators (big jumps
up and down on a percent bas
Hello,
The New England chapter of ISoP is hosting a gathering at The Asgard (350
Massachusetts Ave, Cambridge, MA 02139) on February 4 from 5-7pm. This event
was part of democracy in action, Andy Stein organized a Google Doodle poll to
help choose the location. We would love to see you ther
Hi Zheng,
My first guess is that you have a time-varying covariate in your data set, and
the way that NONMEM handles time varying covariates in the data set is that
they are kept fixed until an instantaneous change when the new record appears.
If your EVID=2 records change the interpretation o
Hi Laureen,
Large condition numbers (typically interpreted as >1000) indicate that two or
more parameters in the model are highly correlated in the covariance matrix and
that the model parameters are difficult to identify. Given your description below,
I would not suggest using the 3-compartment mo
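As a concrete illustration of how a large condition number arises from correlated parameters (using one common definition: the ratio of the largest to smallest eigenvalue of the correlation matrix of the estimates; the numbers here are made up):

```python
import numpy as np

# Correlation matrix of two nearly collinear parameter estimates
corr = np.array([[1.00, 0.99],
                 [0.99, 1.00]])
eig = np.linalg.eigvalsh(corr)   # eigenvalues: 0.01 and 1.99
cond = eig.max() / eig.min()     # condition number
print(round(cond))  # 199
```

Pushing the off-diagonal correlation from 0.99 toward 1.0 sends the smallest eigenvalue toward zero and the condition number toward infinity, which is why >1000 is read as near-unidentifiability.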
ISoP New England Tools and Poster Day
March 31, 2016
Come join us March 31 for three interesting updates on Pharmacometric/QSP
"Tools of the Trade" (yes, it's coming right up!) plus bring your recent
posters for an encore presentation of "r