
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.



Re: st: Comparing risk scores


From   K Jensen <[email protected]>
To   [email protected]
Subject   Re: st: Comparing risk scores
Date   Tue, 18 Oct 2011 14:20:37 +0100

Thanks, Jesper.

This is sort of binary data, in that it is in-hospital mortality and
the length of stay doesn't vary that much by patient and is relatively
short.  We are treating it as binary.  Any tips on binary data?

Thank you

Karin

On 18 October 2011 13:52, Jesper Lindhardsen <[email protected]> wrote:
> Hi there,
>
> A popular, although perhaps rough, method is to use logistic models and pick the best model by AUC. However, in my view this seems to impose restrictions on the data you analyse regarding censoring and equal follow-up. Since (I gather) the data you use to build the risk scores are survival data, perhaps the approach by Newson could be applied (Newson RB. Comparing the predictive power of survival models using Harrell's c or Somers' D. Stata Journal - http://www.imperial.ac.uk/nhli/r.newson/papers/predsurv.pdf), where Harrell's c from Cox regressions is compared across scoring schemes.
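[Editorial aside: for a binary outcome, Harrell's c reduces to the ROC area - the probability that a randomly chosen case outscores a randomly chosen non-case, counting ties as one half. A minimal pure-Python sketch of that computation, for illustration only; in Stata one would use Newson's -somersd- tools from the cited paper.]

```python
def c_statistic(scores, outcomes):
    """Harrell's c for a binary outcome: the chance that a randomly
    chosen case (outcome 1) outscores a randomly chosen non-case,
    with ties counted as 1/2.  Equivalent to the ROC area (AUC)."""
    cases = [s for s, y in zip(scores, outcomes) if y == 1]
    controls = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                concordant += 1.0
            elif c == k:
                concordant += 0.5
    return concordant / (len(cases) * len(controls))
```

Somers' D for a binary outcome is then simply 2*c - 1.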
>
> Incidentally, I guess generalisability is always an issue, and the risk scores should be tried out in cohorts other than the one used to develop the model.
>
> HTH
>
> Jesper
>
> Jesper Lindhardsen
> MD, PhD candidate
> Department of Cardiovascular Research
> Copenhagen University Hospital, Gentofte
> Denmark
>
>
>
>
>
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of K Jensen
> Sent: 18 October 2011 14:36
> To: [email protected]
> Subject: Re: st: Comparing risk scores
>
> The clinicians I am working with are ADAMANT that they want a simple
> scale based on ticking boxes and adding the number of ticks, and
> nothing more complicated...
>
> If we work within these constraints, how is it best to compare the
> possible scores?
>
> Thanks
>
> Karin
>
> On 18 October 2011 13:22, Richard Goldstein <[email protected]> wrote:
>> Karin,
>>
>> I suggest you might want to read Sullivan, LM, et al. (2004),
>> "Presentation of multivariate data for clinical use: the Framingham
>> study risk score functions," _Statistics in Medicine_, 23: 1631-1660,
>> which describes how the Framingham people came up with their risk scores.
>>
>> Rich
>>
>> On 10/18/11 8:18 AM, K Jensen wrote:
>>> Hi Nick
>>>
>>> Thanks for your reply.  It's actually a bit more complicated than
>>> that.  We are trying to construct a "best" single score that would be
>>> simple and used clinically.  The elements that are summed to make the
>>> score (0,1,2,3 etc) are derived from various clinical measurements.
>>> They are dichotomised by choosing the cutpoint that maximises the sum
>>> of sensitivity+specificity.  Only those binary variables significant
>>> in a univariate logistic regression are proposed for the model.
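[Editorial aside: the "cutpoint maximising sensitivity + specificity" rule Karin describes is the Youden criterion. An illustrative pure-Python sketch, with hypothetical names:]

```python
def best_cutpoint(values, outcomes):
    """Dichotomise a continuous measurement at the cutpoint that
    maximises sensitivity + specificity (the Youden criterion).
    Assumes both outcome classes are present."""
    n_pos = sum(outcomes)
    n_neg = len(outcomes) - n_pos
    best_c, best_j = None, -1.0
    for c in sorted(set(values)):
        # classify value >= c as test-positive
        tp = sum(1 for v, y in zip(values, outcomes) if v >= c and y == 1)
        tn = sum(1 for v, y in zip(values, outcomes) if v < c and y == 0)
        j = tp / n_pos + tn / n_neg  # sensitivity + specificity
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j
```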
>>>
>>> I want to choose the "best" model, one that is useful for
>>> clinicians.  If we had 7 binary variables, say, I would look at all
>>> possibilities of choosing different combinations of the sums of them.
>>> E.g. 1, 2, 3, 4, 5, 6, 7,1+2,1+3,1+4,1+5,1+6,1+7, 2+3, 2+4,... up to
>>> 1+2+3+4+5+6+7.  I would like to use the optimal score based on this
>>> method, but don't know how to measure optimality.
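[Editorial aside: the exhaustive search Karin sketches - every non-empty subset of the items, so 2**7 - 1 = 127 candidate scores for 7 items - can be written down directly; `score_fn` stands for whatever optimality measure is chosen, e.g. a c statistic. Illustrative sketch, not from the thread:]

```python
from itertools import combinations

def enumerate_scores(X, y, score_fn):
    """Exhaustively try every non-empty subset of candidate binary
    items, form the unweighted sum-of-ticks score, and rank the
    subsets by score_fn.  X is a list of rows of 0/1 items."""
    p = len(X[0])
    results = []
    for r in range(1, p + 1):
        for subset in combinations(range(p), r):
            scores = [sum(row[j] for j in subset) for row in X]
            results.append((score_fn(scores, y), subset))
    results.sort(reverse=True)  # best-scoring subset first
    return results
```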
>>>
>>> Best wishes,
>>>
>>> Karin
>>>
>>> On 18 October 2011 12:36, Nick Cox <[email protected]> wrote:
>>>> I would recast this as a -logit- or -logistic- problem in which your
>>>> outcome is dead or alive. Depending on how you think about your
>>>> scores, they define predictors to be treated as they come or
>>>> predictors to be treated as a set of indicator variables (or in some
>>>> cases both).
>>>>
>>>>  I don't think you are restricted to using one score or the other as predictor.
>>>>
>>>> Nick
>>>>
>>>> On Tue, Oct 18, 2011 at 12:11 PM, K Jensen <[email protected]> wrote:
>>>>> Maybe this is more of a stats question than a Stata one, but there are
>>>>> such a lot of good brains here...
>>>>>
>>>>> We are constructing point scores to indicate severity of risk.  Death
>>>>> is the outcome. What is the best way of measuring the usefulness of
>>>>> the score?  The aim is to show a good gradient of risk.  Say the
>>>>> results for two different scores were:
>>>>>
>>>>> Score  Dead  Alive    %dead    Totals
>>>>> 0        12    136      9.9%      145
>>>>> 1        18    126     15.4%      144
>>>>> 2        18     62     26.2%       81
>>>>> 3        10      9     57.1%       20
>>>>> 4         2      0    100.0%        3
>>>>> -------------------------------------
>>>>> Total:   60    333                393
>>>>>
>>>>> Score  Dead  Alive    %dead    Totals
>>>>> 0         8    174      4.6%      182
>>>>> 1        21    143     12.8%      164
>>>>> 2        22     19     53.7%       41
>>>>> 3         5      1     83.3%        6
>>>>> -------------------------------------
>>>>> TOTAL:   60    333                393
>>>>>
>>>>> Which is the better score?  What is the best way to measure its
>>>>> predictive power?  I understand that ROC type analysis doesn't really
>>>>> apply here.  Some measure of R-squared?  AIC?
>>>>>
>>>>> Thank you
>>>>>
>>>>> Karin
>>>>>
>>>>> PS) I have made up the data, so the numbers don't quite add up.  It is
>>>>> meant to be two different, competing scores on the same people.
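[Editorial aside: despite the ROC caveat above, a c statistic (equivalent to the AUC for a binary outcome) can be computed directly from grouped counts like these and gives one defensible answer to "which score is better". An illustrative pure-Python sketch applied to the made-up tables:]

```python
def grouped_auc(table):
    """Harrell's c / AUC from grouped counts.  `table` lists
    (dead, alive) counts per score level in increasing score order;
    a dead/alive pair is concordant if the dead patient scored
    higher, and ties within a level count as 1/2."""
    concordant = 0.0
    pairs = 0
    for i, (dead_i, _) in enumerate(table):
        for j, (_, alive_j) in enumerate(table):
            n = dead_i * alive_j
            pairs += n
            if i > j:
                concordant += n
            elif i == j:
                concordant += 0.5 * n
    return concordant / pairs

# The two made-up tables from the question, as (dead, alive) per level
score1 = [(12, 136), (18, 126), (18, 62), (10, 9), (2, 0)]
score2 = [(8, 174), (21, 143), (22, 19), (5, 1)]
```

On these made-up counts the second score discriminates better, roughly c = 0.78 versus c = 0.68 for the first.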
>> *
>> *   For searches and help try:
>> *   http://www.stata.com/help.cgi?search
>> *   http://www.stata.com/support/statalist/faq
>> *   http://www.ats.ucla.edu/stat/stata/
>>
>


