Hi Silke,
If you are looking at model identifiability, whether in principle
or in practice, you absolutely must use a range of initial estimates.
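(A crude but simple way to do this in NM-TRAN is to rerun the identical
control stream with deliberately dispersed initial estimates and check
that the final estimates agree; the values below are purely hypothetical:
  ; run 1
  $THETA (0, 2)   ; CL
  $THETA (0, 50)  ; V
  ; run 2: same model, starts moved an order of magnitude
  ; $THETA (0, 20)  ; CL
  ; $THETA (0, 5)   ; V
)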
Minimization algorithms can converge to a minimum or a saddle point
close to the initial estimates. If your model always finds the 'right'
answ
My guess is that the low response is likely an effect of inexperienced and
intermediate users remaining silent to let the most informed users
decide the outcome of the poll. This was a consideration for me, even
though I eventually decided to participate as an intermediate user.
Regular reading of a
Can anyone explain what advantages encrypted source code might have?
For me at least, the NONMEM source code is already functionally encrypted
because of my poor understanding of Fortran. But it is unclear to me what
problem encrypted source code actually solves. Can anyone enlighten me?
Doug
onplc.com]
Sent: Thu 2-7-2009 15:15
To: Eleveld, DJ; nmusers@globomaxnm.com
CC: Krohn, Anthony
Subject: RE: [NMusers] NONMEM 7 Update
Dear All:
Icon Development Solutions has invested in the further
development of NONMEM. By mutual agreement with the University of
California at S
Susan,
The one I really liked was:
M O Karlsson and R M Savic, Diagnosing Model Diagnostics, Clinical
Pharmacology & Therapeutics (2007) 82, 17-20.
doi:10.1038/sj.clpt.6100241
Douglas Eleveld
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm
Hi Neil,
Well, if you compare a proportional+additive error model with a logarithmic
error model, then it shouldn't be surprising that they work differently and give
you different residual variance. A logarithmic error model presumes that the
accuracy of the observations, in absolute terms, becomes ve
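(To make the contrast concrete, here is a minimal $ERROR sketch of the two
models being compared; the EPS numbering is hypothetical:
  $ERROR
  IPRED = F
  ; proportional + additive: absolute error grows with the prediction
  ; but never falls below the additive floor
  Y = IPRED*(1 + EPS(1)) + EPS(2)
  ; logarithmic alternative, for log-transformed DV (constant CV):
  ; Y = LOG(IPRED) + EPS(1)
)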
Hi Pyry and Jacob,
If you exclude zero etas, then what happens to informative individuals who just
happen to have the population-typical values?
This approach would exclude those individuals when trying to indicate how
informative an estimation is about a parameter.
I know this is unlikely, but
Hi,
As others have suggested, making an additional record at the known desired time
is probably the most straightforward way to get what you want, so this
should probably be the preferred solution.
If for some reason you can't add a record (maybe you don't know the time
beforehand), I th
I'd like to interject a slightly different point of view on the distributional
assumption question here.
When I hear people speak in terms of the distributional assumptions of some
estimation method, I think it's easy for people to jump to the conclusion that
the normal distribution assumptio
Hi Dieter,
If you only observe the average of two quantities, you can never estimate the
individual contributions.
You must have some additional information to be able to do this.
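For example, if you only observe the average

  y = (A + B)/2

then for any shift d the pair (A + d, B - d) gives exactly the same y, so A and
B are individually unidentifiable from y alone.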
Maybe if the time constants are a priori known to be very different, i.e. fast
and slow, then it might be
possible
different tissues. This can be a priori or determined by some other
technique.
Doug Eleveld
-----Original Message-----
From: Dieter Menne [mailto:dieter.me...@menne-biomed.de]
Sent: July 30, 2010 8:05 AM
To: Eleveld, DJ; nmusers@globomaxnm.com
Subject: RE: [NMusers] How to model obs
Hi Jeroen,
If shrinkage induces correlations (which aren't "true") in the posthoc ETAs, then
the data isn't very informative for at least one of the parameters. If this
(misleading) correlation causes the researcher to test a model with
off-diagonal covariance, I would expect that they would not fi
Hello NONMEM users,
I am seeing compiler warnings during the recompiling stage of NONMEM 7.2.0 runs.
Is this expected? Or should I take this as a sign that something is wrong?
I am using Linux (Ubuntu) and G95.
best regards,
Douglas Eleveld
Recompiling certain components
/home/deleveld/nm72/pr/S
OK, thanks. I guess it's a coding style issue; it's probably very hard to make
automatically generated code that is always warning-free.
-----Original Message-----
From: Bill Bachman [mailto:bachm...@comcast.net]
Sent: Fri 5/27/2011 1:57 PM
To: Eleveld, DJ; 'nmusers'
Subject: RE
Hi Li,
Well, do you have rich data and a small number of subjects?
How much shrinkage exactly? A very small negative number might just be due to
(hopefully) unimportant numerical issues. It could also be due to early
termination of the estimation, not doing enough iterations, problems with
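(For reference, and as I understand it, NONMEM reports eta-shrinkage in the
SD form,

  shrinkage = 100% * (1 - SD(posthoc ETA)/omega)

so a small negative value just means the SD of the posthoc ETAs slightly
exceeds the estimated omega.)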
Hi Toufigh,
Just a suggestion that you may already be using: do you use the SORT option for
estimation?
This is, I think, helpful when the informativeness of individuals varies
considerably.
It might help stabilise the full data set.
Douglas Eleveld
From: o
Hi All,
I don't think it is contradictory to use the sample-based densities for
integration and then use the classical EBE for reporting individual values.
When integrating, you want to see the entire individual density so you can give
correct weight to large areas of low probability. But when y
Hi All,
The strange thing to me about the PPP&D method is that you generate two
different posthoc estimates for individual PK, one from the PK modelling alone
and another from the PPP&D step.
Does anyone know if one of these is inferior to the other? Which is the "right"
individual posthoc PK est
You mean you removed an eta and the objective function went down? I don't think
this can really happen in a straightforward way.
In NONMEM, the minimum of the objective function is found, and if setting all
etas to zero gives a lower objective function than some other eta values, then
barring converg
Hi Matt and Everyone,
Whether or not "just using weight and CLCR should be enough" depends on whether
you think that people who lost weight because of a drug (the formerly obese)
are physiologically the same (with respect to the drugs in question) as those
who were never obese. Are the formerly-
Hi Jules,
If the correlation between two ETAs is 1, then you should be able to remove one
ETA, replace it with a scaled version of the other ETA, and get the same
objective function value.
This model would not have a correlation problem (since there is only one ETA),
so your problem would be "solved" in a
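(A minimal sketch of what I mean, with hypothetical THETA/ETA numbering and
values:
  $PK
  CL = THETA(1)*EXP(ETA(1))
  V  = THETA(2)*EXP(THETA(3)*ETA(1)) ; ETA(2) replaced by scaled ETA(1)
  $OMEGA 0.1 ; only one ETA left, so no off-diagonal to estimate
)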
Hi Andreas,
You can't fix part of a block in NONMEM; you have to fix the whole block. So the
trick is to construct the covariance matrix structure you want out of smaller
blocks.
And when you fix an ETA on the diagonal to zero, the corresponding covariances
have to be zero as well. (i.e. the left-
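(A minimal sketch of building the structure from smaller blocks; the values
are hypothetical:
  $OMEGA BLOCK(2) ; ETA(1)-ETA(2), covariance estimated
   0.1
   0.05 0.1
  $OMEGA 0.2      ; ETA(3) in its own block, can be FIXed on its own
  $OMEGA 0 FIX    ; ETA(4) zero; its covariances are then zero too
)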
Hello everyone,
I have a curious problem with slow PRED calculations in tables. The estimations
are reasonably fast, 471 seconds for 23 iterations. If there is no PRED in any
tables, then NONMEM finishes a moment after the message about elapsed estimation
time. But if PRED is in a table, then it
there could be all sorts of practical considerations with
legacy code.
warm regards,
Douglas Eleveld
From: Bauer, Robert [mailto:robert.ba...@iconplc.com]
Sent: January 29, 2014 10:34 PM
To: Eleveld, DJ; nmusers@globomaxnm.com
Subject: RE: Slow PRED in tables
Hi Robert,
I can confirm that using WRESCHOL solves the problem. Thanks!
warm regards
Douglas Eleveld
From: Bauer, Robert [robert.ba...@iconplc.com]
Sent: Thursday, January 30, 2014 5:21 PM
To: Eleveld, DJ; nmusers@globomaxnm.com
Subject: RE: Slow PRED in
Hi Pavel,
My question is: Why is it desirable to fit a complete omega matrix if its
physical interpretation is unclear? ETAs are variation of unknown origin, i.e.
not explained by the structural model. A full omega matrix allows the unknown
variation of one parameter to have a (linear?) relations
; Eleveld, DJ
Subject: Re: [NMusers] OMEGA matrix
Dear Pavel, others,
The underlying technical difference is that SAEM is at its core a sampling
methodology. Off-diagonal elements (as explained by Bob Bauer) are available as
sample correlations and do not have to be separately computed, in contrast
Hi Yuma,
My experience is that some model modifications can greatly reduce objfn but
make prediction actually worse. I like to use repeated 2-fold cross-validation
since I am usually interested in accurate predictions for out-of-sample data.
This may or may not be what you want your model to do
It sounds like that covariate provides information for the parameter influenced
by eta.
You have taken something that was unexplained population variation and
explained part of it with the covariate.
This is usually a good thing.
If the covariate helps so much to predict the parameter that the et
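(A minimal sketch of the idea, with a hypothetical covariate and hypothetical
numbering:
  $PK
  ; before: CL = THETA(1)*EXP(ETA(1))
  CL = THETA(1)*(CLCR/100)**THETA(2)*EXP(ETA(1))
  ; part of what OMEGA(1,1) used to absorb is now carried by CLCR,
  ; so the estimated OMEGA(1,1) typically shrinks
)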
Hi Aziz,
Just some comments off the top of my head, in a quite informal way: I'm not
really sure that these are the same problem, because they don't start with the
same information in the form of parameter constraints. In model 1 you are
asking the optimizer for the unconstrained maximum likelihoo
Hi Andre,
Hopefully you can see that
(1) QCO = 15.87*(BW)**0.75
calculates very different values for QCO compared to
(2) QCO = 15.87*(WT/WTstd)**0.75
unless of course WTstd is 1 kg. In that case (WTstd is 1 kg) they are
exactly the same.
The easiest way to separate these situations is to
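(Concretely, for a 70 kg subject with WTstd = 70 kg:

  (1) 15.87 * 70**0.75      ~ 384
  (2) 15.87 * (70/70)**0.75 = 15.87

so in form (2) the estimated coefficient is directly the value for the
reference-weight subject.)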
Hi Jakob and Everyone,
In the no-covariate case, flip and flop 😊 represent equal likelihoods, i.e. two
local minima of equal depth.
I agree that distributional assumptions would likely be useful to discriminate
between two different parameter values that have equal likelihoods.
Depending on how
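(For a one-compartment model with first-order absorption,

  C(t) = F*D*KA/(V*(KA-KE)) * (EXP(-KE*t) - EXP(-KA*t))

swapping KA and KE while rescaling V to V*KE/KA reproduces exactly the same
curve, which is why the two minima are equally deep.)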
Hi All,
I am writing a paper on some C language code I have written that does FOCE.
I want to write a few sentences about the history of NONMEM, but I'm not 100%
sure it is correct.
Can someone knowledgeable give me some feedback on this?
"NONMEM software was developed at UCSF in the late 1970s a
To: Eleveld, DJ ; nonmem usersgroup
Subject: RE: Question about ETA on residual error
Hello Douglas:
The off-diagonals of ETA(4, XXX) should be zero, as the variance-covariance of
etas (phi) is the inverse of the information matrix that is calculated as:
Ey(2nd derivative partial
Hi All,
I am having trouble understanding an aspect of NONMEM FOCEI estimation, and I
didn't find anything in the documentation to get me started.
When I add an ETA() on residual error, everything estimates as I expect. I call
this a "composite error model"; I don't know if there is another terminolog