Thanks for this. I should be clearer. The reasoning and figures tasks
are completely separate and are not being compared to each other. Each
task has two conditions. For example, in the reasoning task, I have two
conditions, Spatial and Nonspatial. I want to compare Spatial and
Nonspatial to each other and to baseline. I am trying to understand
whether (and how) the efficiency statistic that optseq provides (or perhaps
a vrf statistic) can be used to help determine a good amount of total
baseline to include (divided randomly into jitter by the optseq event
schedule). As I add more total baseline and iteratively re-run optseq, the
efficiency statistic increases with the amount of baseline. Is there a
threshold value for the efficiency statistic that would let me be confident
in the design? When I devote a third of the total time to baseline in each
of the two separate tasks (i.e., Reasoning and Embedded Figures), each of
which has two main conditions, optseq yields very different efficiency
values for the two tasks (about 0.048 for one task and about 0.145 for the
other). Should I be concerned about this difference?
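(For concreteness, here is my understanding of what the reported numbers mean. As far as I can tell, optseq2's efficiency and vrf correspond, up to normalization details, to the standard GLM quantities below. This is only a minimal numpy sketch: the toy design matrix, `n_tp`, and the contrast set are made-up stand-ins for the real convolved regressors, not the actual designs.)

```python
import numpy as np

rng = np.random.default_rng(0)

n_tp = 200  # number of time points (hypothetical)
# Toy stand-ins for the convolved Spatial and Nonspatial regressors,
# plus a constant; a real design would convolve event onsets with an HRF.
X = np.column_stack([
    (rng.random(n_tp) > 0.7).astype(float),
    (rng.random(n_tp) > 0.7).astype(float),
    np.ones(n_tp),
])

XtX_inv = np.linalg.inv(X.T @ X)

# Variance reduction factor per regressor: vrf_i = 1 / [(X'X)^-1]_ii
vrf = 1.0 / np.diag(XtX_inv)

# Efficiency for a set of contrasts C: eff = 1 / trace(C (X'X)^-1 C')
C = np.array([
    [1.0,  0.0, 0.0],   # Spatial vs baseline
    [0.0,  1.0, 0.0],   # Nonspatial vs baseline
    [1.0, -1.0, 0.0],   # Spatial vs Nonspatial
])
eff = 1.0 / np.trace(C @ XtX_inv @ C.T)
```

Since eff defined this way depends on the number of time points, the trial durations, and the scaling of the regressors, it seems to have no universal "good" threshold; it only ranks candidate schedules for the same design.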

Thanks very much for your help with this.

On Mon, Apr 20, 2015 at 10:49 PM, Douglas Greve <gr...@nmr.mgh.harvard.edu>
wrote:

>
>
> On 4/20/15 5:45 PM, Dan Goldman wrote:
>
>  Hello,
> I have a question re: calculations of efficiency and vrf in optseq2.
> We are running two tasks. One is a reasoning task, with trial lengths of
> 12 seconds. The other is an embedded figures task with trial lengths of 10
> seconds. For both tasks, we have set a minimum ISI of 4 seconds and a
> maximum ISI of 10 seconds. In both tasks, we have two conditions we want to
> compare and have therefore devoted total jitter time equal to 50% of the
> task time.
>
> I'm not sure of your design here. If you have two conditions that you want
> to compare to each other and to baseline, then you would give 1/3 of the
> total time to each. The baseline comparison is often of less importance, so
> the null could be given a smaller proportion.
>
>
>  After running optseq2 for each task, we got the following statistics:
>
>
>  Task 1 (reasoning):         efficiency = 0.0485815   vrfavg = 6.44628
>
>  Task 2 (embedded figures):  efficiency = 0.14506     vrfavg = 9.22134
>
>  Is it a problem that the efficiency and average vrf are so much lower
> for task 1 than for task 2?
>
> A problem in what way? If in terms of comparing reasoning vs figures, it
> will not produce false positives, but you will be losing some power by not
> balancing the design.
>
>
>  Thank you,
> Dan
>
>
>
>
>
> _______________________________________________
> Freesurfer mailing list
> Freesurfer@nmr.mgh.harvard.edu
> https://mail.nmr.mgh.harvard.edu/mailman/listinfo/freesurfer
>
>
> The information in this e-mail is intended only for the person to whom it
> is
> addressed. If you believe this e-mail was sent to you in error and the
> e-mail
> contains patient information, please contact the Partners Compliance
> HelpLine at
> http://www.partners.org/complianceline . If the e-mail was sent to you in
> error
> but does not contain patient information, please contact the sender and
> properly
> dispose of the e-mail.
>
>
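On the last point in the exchange above (the 0.048 vs 0.145 gap): efficiency values from two different designs are not on a common scale, so the difference by itself need not be alarming. A toy illustration, using hypothetical random regressors rather than the actual designs: simply running the same kind of design for more time points inflates the number, even though nothing about the per-trial schedule improved.

```python
import numpy as np

def efficiency(X, C):
    """1 / trace(C (X'X)^-1 C') for design matrix X and contrast set C."""
    return 1.0 / np.trace(C @ np.linalg.inv(X.T @ X) @ C.T)

def toy_design(n_tp, rng):
    # Two random boxcar-like regressors plus a constant (hypothetical
    # stand-ins for convolved task regressors).
    return np.column_stack([
        (rng.random(n_tp) > 0.7).astype(float),
        (rng.random(n_tp) > 0.7).astype(float),
        np.ones(n_tp),
    ])

rng = np.random.default_rng(1)
C = np.array([[1.0, -1.0, 0.0]])  # condition 1 vs condition 2

e_short = efficiency(toy_design(100, rng), C)
e_long = efficiency(toy_design(300, rng), C)
# e_long comes out larger simply because there is more data; comparing it
# to e_short says nothing about which schedule is better per unit time.
```

So comparing the reasoning task's efficiency to the embedded figures task's efficiency mixes in differences in trial length, run length, and regressor scaling; each task's number is only useful for choosing among candidate schedules for that same task.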
