Re: [Dhis2-devs] [Dhis2-users] Extra tick box in tracker capture for every data element

2016-09-11 Thread Archana Chillala
Hi Abyot,

What does "Allow provided elsewhere" mean? What are it's implications?


*Cheers*

Archana Chillala
Application Developer
Email archa...@thoughtworks.com
Telephone +91 9100960533


On Fri, Aug 26, 2016 at 1:58 PM, Abyot Asalefew Gizaw 
wrote:

> Hi Elmarie,
>
> Go to your program stage configuration and make sure the data elements
> do not have the "Allow provided elsewhere" property selected.
>
> --
> Abyot A. Gizaw.
> Senior Engineer, DHIS2
> University of Oslo
> http://www.dhis2.org
>
> On Fri, Aug 26, 2016 at 10:24 AM, Elmarie Claasen 
> wrote:
>
>> Hi all,
>>
>>
>>
>> I have set up a tracker program but there is an extra tick box next to
>> every data element in tracker capture. (see screenshot).
>>
>> Why are these there, and how does one remove them?
>>
>>
>>
>> Regards,
>>
>>
>>
>> *Elmarie Claasen*
>>
>>
>> Project Manager
>>
>> Health Information Systems Program
>>
>> Tel:  041-367 1027
>>
>> Cell: 082 374 2209
>>
>> E-mail: elma...@hisp.org
>>
>> Skype:  elmarie.claasen52
>>
>>
>>
>>
>>
>> This message and any attachments are subject to a disclaimer published at
>> http://www.hisp.org/policies.html#comms_disclaimer .   Please read the
>> disclaimer before opening any attachment or taking any other action in
>> terms of this electronic transmission.
>> If you cannot access the disclaimer, kindly send an email to 
>> disclai...@hisp.org
>> and a copy will be provided to you. By replying to this e-mail or opening
>> any attachment you agree to be bound by the provisions of the disclaimer.
>>
>>
___
Mailing list: https://launchpad.net/~dhis2-devs
Post to : dhis2-devs@lists.launchpad.net
Unsubscribe : https://launchpad.net/~dhis2-devs
More help   : https://help.launchpad.net/ListHelp
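
A side note on the setting itself: "Allow provided elsewhere" marks a data
element whose value may have been captured at another facility; the extra tick
box records that the value was provided elsewhere. The flag lives on each
program stage data element as the allowProvidedElsewhere boolean, so one way
to list which data elements have it set is the Web API - a sketch, where the
program stage UID AbCdEfGhIjK and the demo server/credentials are
placeholders:

  # list the data elements of a program stage together with the flag
  curl -g -u admin:district \
    "https://play.dhis2.org/demo/api/programStages/AbCdEfGhIjK.json?fields=programStageDataElements[dataElement[name],allowProvidedElsewhere]"

Unticking the property in the program stage configuration (as Abyot describes)
and saving removes the extra tick box from Tracker Capture.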


Re: [Dhis2-devs] Analytics and disk space

2016-09-11 Thread David Siang Fong Oh
+1 to Calle's idea of staggering analytics year by year

I also like Jason's suggestion of being able to configure the time period
for which analytics is regenerated. If the general use-case has data being
entered only for the current year, then is it perhaps unnecessary to
regenerate data for previous years?

Cheers,

-doh
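
For reference, a sizes listing like the one Jason quotes below can be
reproduced with a query along these lines (a sketch; it assumes shell access
to the database host and a database named dhis2):

  psql -d dhis2 -c "SELECT nspname || '.' || relname AS relation,
      pg_size_pretty(pg_relation_size(c.oid)) AS size
      FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace
      WHERE nspname = 'public'
      ORDER BY pg_relation_size(c.oid) DESC LIMIT 10;"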

On Tue, Jul 26, 2016 at 2:36 PM, Calle Hedberg 
wrote:

> Hi,
>
> One (presumably) simple solution is to stagger analytics on a year by year
> basis - i.e. run and complete 2009 before processing 2010. That would
> reduce temp disk space requirements significantly while (presumably) not
> changing the general design.
>
> Regards
> Calle
>
> On 26 July 2016 at 10:24, Jason Pickering 
> wrote:
>
>> Hi Devs,
>> I am seeking some advice on how to try and decrease the amount of disk
>> usage with DHIS2.
>>
>> Here is a list of the biggest tables in the system.
>>
>>  public.datavalue                 | 2316 MB
>>  public.datavalue_pkey            | 1230 MB
>>  public.in_datavalue_lastupdated  |  680 MB
>>
>>
>> There are a lot more tables, and all in all, the database occupies about
>> 5.4 GB without analytics.
>>
>> This represents about 30 million data rows, so not that big of a database
>> really. This server is being run off of a Digital Ocean virtual server with
>> 60 GB of disk space. The only things on the server are Linux,
>> PostgreSQL and Tomcat. Nothing else. Without analytics, and with everything
>> installed for the system, we have about 23% of that 60 GB free.
>>
>> When analytics runs, it maintains a copy of the main analytics tables (
>> analytics_) and creates temp tables like analytics_temp_2004. When
>> things are finished and the indexes are built, the tables are swapped. This
>> ensures that analytics resources are available while analytics is being
>> built, but the downside is that A LOT more disk space is required,
>> as we now effectively have two copies of the tables along with all their
>> indexes, which are quite large themselves (up to 60% of the size of the
>> table itself). Here's what happens when analytics is run:
>>
>>  public.analytics_temp_2015  | 1017 MB
>>  public.analytics_temp_2014  | 985 MB
>>  public.analytics_temp_2011  | 952 MB
>>  public.analytics_temp_2010  | 918 MB
>>  public.analytics_temp_2013  | 885 MB
>>  public.analytics_temp_2012  | 835 MB
>>  public.analytics_temp_2009  | 804 MB
>>
>> Now each analytics table is taking about 1 GB of space. In the end, it
>> adds up to more than 60 GB and analytics fails to complete.
>>
>> So, while I understand the need for this functionality, I am wondering if
>> we need a system option to allow the analytics tables to be dropped prior
>> to regenerating them, or to have more control over the order in which they
>> are generated (for instance to generate specific periods). I realize this
>> can be done from the API or the scheduler, but only for the past three
>> relative years.
>>
>> The reason I am asking is that it's a bit of a pain (at the
>> moment) when using Digital Ocean as a service provider, since their stock
>> disk storage is 60 GB. With other VPS providers (Amazon, Linode), it's a bit
>> easier, but Digital Ocean only supports block storage in two regions at the
>> moment. Regardless, it seems somewhat wasteful to need such a
>> large amount of disk space for such a relatively small database.
>>
>> Is this something we just need to plan for and maybe provide better
>> documentation on, or should we think about trying to offer better
>> functionality for people running smaller servers?
>>
>> Regards,
>> Jason
>>
>
>
> --
>
> ***
>
> Calle Hedberg
>
> 46D Alma Road, 7700 Rosebank, SOUTH AFRICA
>
> Tel/fax (home): +27-21-685-6472
>
> Cell: +27-82-853-5352
>
> Iridium SatPhone: +8816-315-19119
>
> Email: calle.hedb...@gmail.com
>
> Skype: calle_hedberg
>
> ***
>
>
___
Mailing list: https://launchpad.net/~dhis2-devs
Post to : dhis2-devs@lists.launchpad.net
Unsubscribe : https://launchpad.net/~dhis2-devs
More help   : https://help.launchpad.net/ListHelp


Re: [Dhis2-devs] Analytics and disk space

2016-09-11 Thread Calle Hedberg
Hi,

It's not only analytics that would benefit from segmented/staggered
processing: I exported around 100 million data values yesterday from a number
of instances, and found that the export process was (seemingly)
exponentially slower with an increasing number of records exported. Most of
the export files contained well under 10 million records, which was pretty
fast. In comparison, the largest export file, with around 30 million data
values, probably took 20 times as long as an 8 million value export. Based
on just keeping an eye on the "progress bar", it seemed like some kind of
cache staggering was taking place - the amount exported would increase
quickly by 2-3 MB, then "hang" for a good while, then increase quickly by
2-3 MB again.

Note also that there are several fundamental strategies one could use to
reduce heavy work processes like analytics, exports (and thus imports),
etc. (a sketch against the export API follows this list):
- to be able to specify a sub-period, as Jason suggests
- to be able to specify the "dirty" part of the instance by using e.g.
LastUpdated >= x
- to be able to specify a sub-OrgUnit-area
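
Something like this against the dataValueSets export endpoint, for instance
(a sketch - server, credentials and UIDs are placeholders, and exact
parameter support varies by DHIS2 version):

  # sub-period: export a single data set for 2015 only
  curl -u USER:PASS "https://dhis.example.org/api/dataValueSets.json?dataSet=DATASET_UID&orgUnit=ORGUNIT_UID&children=true&startDate=2015-01-01&endDate=2015-12-31"

  # "dirty" part + sub-OrgUnit-area: only values updated since a date,
  # limited to one branch of the hierarchy via orgUnit and children=true
  curl -u USER:PASS "https://dhis.example.org/api/dataValueSets.json?dataSet=DATASET_UID&orgUnit=ORGUNIT_UID&children=true&startDate=2009-01-01&endDate=2016-12-31&lastUpdated=2016-09-01"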

These partial strategies are of course mostly relevant for very large
instances, but such large instances are also the ones where you typically
only have changes made to a small segment of the total - like if you have
data for 30 years, 27 of those might be locked down and no longer available
for updates.

Regards
Calle

On 11 September 2016 at 15:47, David Siang Fong Oh 
wrote:

> +1 to Calle's idea of staggering analytics year by year
>
> I also like Jason's suggestion of being able to configure the time period
> for which analytics is regenerated. If the general use-case has data being
> entered only for the current year, then is it perhaps unnecessary to
> regenerate data for previous years?
>
> Cheers,
>
> -doh
>
> On Tue, Jul 26, 2016 at 2:36 PM, Calle Hedberg 
> wrote:
>
>> Hi,
>>
>> One (presumably) simple solution is to stagger analytics on a year by
>> year basis - i.e. run and complete 2009 before processing 2010. That would
>> reduce temp disk space requirements significantly while (presumably) not
>> changing the general design.
>>
>> Regards
>> Calle
>>
>> On 26 July 2016 at 10:24, Jason Pickering 
>> wrote:
>>
>>> Hi Devs,
>>> I am seeking some advice on how to try and decrease the amount of disk
>>> usage with DHIS2.
>>>
>>> Here is a list of the biggest tables in the system.
>>>
>>>  public.datavalue                 | 2316 MB
>>>  public.datavalue_pkey            | 1230 MB
>>>  public.in_datavalue_lastupdated  |  680 MB
>>>
>>>
>>> There are a lot more tables, and all in all, the database occupies about
>>> 5.4 GB without analytics.
>>>
>>> This represents about 30 million data rows, so not that big of a
>>> database really. This server is being run off of a Digital Ocean virtual
>>> server with 60 GB of disk space. The only things on the server are
>>> Linux, PostgreSQL and Tomcat. Nothing else. Without analytics, and with
>>> everything installed for the system, we have about 23% of that 60 GB free.
>>>
>>> When analytics runs, it maintains a copy of the main analytics tables (
>>> analytics_) and creates temp tables like analytics_temp_2004. When
>>> things are finished and the indexes are built, the tables are swapped. This
>>> ensures that analytics resources are available while analytics is being
>>> built, but the downside is that A LOT more disk space is required,
>>> as we now effectively have two copies of the tables along with all their
>>> indexes, which are quite large themselves (up to 60% of the size of the
>>> table itself). Here's what happens when analytics is run:
>>>
>>>  public.analytics_temp_2015  | 1017 MB
>>>  public.analytics_temp_2014  | 985 MB
>>>  public.analytics_temp_2011  | 952 MB
>>>  public.analytics_temp_2010  | 918 MB
>>>  public.analytics_temp_2013  | 885 MB
>>>  public.analytics_temp_2012  | 835 MB
>>>  public.analytics_temp_2009  | 804 MB
>>>
>>> Now each analytics table is taking about 1 GB of space. In the end, it
>>> adds up to more than 60 GB and analytics fails to complete.
>>>
>>> So, while I understand the need for this functionality, I am wondering
>>> if we need a system option to allow the analytics tables to be dropped
>>> prior to regenerating them, or to have more control over the order in which
>>> they are generated (for instance to generate specific periods). I realize
>>> this can be done from the API or the scheduler, but only for the past three
>>> relative years.
>>>
>>> The reason I am asking is that it's a bit of a pain (at the
>>> moment) when using Digital Ocean as a service provider, since their stock
>>> disk storage is 60 GB. With other VPS providers (Amazon, Linode), it's a bit
>>> easier, but Digital Ocean only supports block storage in two regions at the
>>> moment. Regardless, it seems somewhat wasteful to need such a
>>> large amount of disk space for such a relatively small database.

Re: [Dhis2-devs] Analytics and disk space

2016-09-11 Thread David Siang Fong Oh
I think Jason also pointed out that this could be achieved from the API,
but the question is whether it needs to be more user-friendly, i.e.
customisable using the web application as opposed to requiring a custom
script triggered by a cron job.
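
For reference, the cron route can be as simple as the sketch below, using the
lastYears endpoint Dan mentions (server URL and credentials are placeholders):

  # crontab entry: rebuild analytics for the last 2 years, nightly at 02:00
  0 2 * * * curl -s -X POST -u admin:PASSWORD "https://dhis.example.org/api/resourceTables/analytics?lastYears=2"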

Cheers,

-doh

On Sun, Sep 11, 2016 at 8:36 PM, Dan Cocos  wrote:

> Hi All,
>
> You could run this:
> /api/24/maintenance/analyticsTablesClear
> and possibly this:
> /api/24/maintenance/periodPruning
>
> I don't see it in the documentation, but we call
> /api/resourceTables/analytics?lastYears=2 quite often for clients with a
> lot of historical data.
>
> Good luck,
> Dan
>
> *Dan Cocos*
> Principal, BAO Systems
> dco...@baosystems.com  | http://www.baosystems.com
> |  2900 K Street, Suite 404, Washington D.C. 20007
>
>
>
>
>
> On Sep 11, 2016, at 10:05 AM, Calle Hedberg 
> wrote:
>
> Hi,
>
> It's not only analytics that would benefit from segmented/staggered
> processing: I exported around 100 million data values yesterday from a number
> of instances, and found that the export process was (seemingly)
> exponentially slower with an increasing number of records exported. Most of
> the export files contained well under 10 million records, which was pretty
> fast. In comparison, the largest export file, with around 30 million data
> values, probably took 20 times as long as an 8 million value export. Based
> on just keeping an eye on the "progress bar", it seemed like some kind of
> cache staggering was taking place - the amount exported would increase
> quickly by 2-3 MB, then "hang" for a good while, then increase quickly by
> 2-3 MB again.
>
> Note also that there are several fundamental strategies one could use to
> reduce heavy work processes like analytics, exports (and thus imports),
> etc:
> - to be able to specify a sub-period, as Jason suggests
> - to be able to specify the "dirty" part of the instance by using e.g.
> LastUpdated >= x
> - to be able to specify a sub-OrgUnit-area
>
> These partial strategies are of course mostly relevant for very large
> instances, but such large instances are also the ones where you typically
> only have changes made to a small segment of the total - like if you have
> data for 30 years, 27 of those might be locked down and no longer available
> for updates.
>
> Regards
> Calle
>
> On 11 September 2016 at 15:47, David Siang Fong Oh 
> wrote:
>
>> +1 to Calle's idea of staggering analytics year by year
>>
>> I also like Jason's suggestion of being able to configure the time period
>> for which analytics is regenerated. If the general use-case has data being
>> entered only for the current year, then is it perhaps unnecessary to
>> regenerate data for previous years?
>>
>> Cheers,
>>
>> -doh
>>
>> On Tue, Jul 26, 2016 at 2:36 PM, Calle Hedberg 
>> wrote:
>>
>>> Hi,
>>>
>>> One (presumably) simple solution is to stagger analytics on a year by
>>> year basis - i.e. run and complete 2009 before processing 2010. That would
>>> reduce temp disk space requirements significantly while (presumably) not
>>> changing the general design.
>>>
>>> Regards
>>> Calle
>>>
>>> On 26 July 2016 at 10:24, Jason Pickering 
>>> wrote:
>>>
 Hi Devs,
 I am seeking some advice on how to try and decrease the amount of disk
 usage with DHIS2.

 Here is a list of the biggest tables in the system.

 public.datavalue                 | 2316 MB
 public.datavalue_pkey            | 1230 MB
 public.in_datavalue_lastupdated  |  680 MB


 There are a lot more tables, and all in all, the database occupies
 about 5.4 GB without analytics.

 This represents about 30 million data rows, so not that big of a
 database really. This server is being run off of a Digital Ocean virtual
 server with 60 GB of disk space. The only things on the server are
 Linux, PostgreSQL and Tomcat. Nothing else. Without analytics, and with
 everything installed for the system, we have about 23% of that 60 GB free.

 When analytics runs, it maintains a copy of the main analytics tables (
 analytics_) and creates temp tables like analytics_temp_2004. When
 things are finished and the indexes are built, the tables are swapped. This
 ensures that analytics resources are available while analytics is being
 built, but the downside is that A LOT more disk space is required,
 as we now effectively have two copies of the tables along with all their
 indexes, which are quite large themselves (up to 60% of the size of the
 table itself). Here's what happens when analytics is run:

  public.analytics_temp_2015  | 1017 MB
  public.analytics_temp_2014  | 985 MB
  public.analytics_temp_2011  | 952 MB
  public.analytics_temp_2010  | 918 MB
  public.analytics_temp_2013  | 885 MB
  public.analytics_temp_2012  | 835 MB
 public.analytics_temp_2009  | 804 MB

[Dhis2-devs] [Bug 1622381] [NEW] Register Event button in wrong place

2016-09-11 Thread Timothy Harding
Public bug reported:

Using:
Version: 2.23
Build revision: 8794224

This seems to only affect 2.23, and not 2.24.

In 2.24 it seems that "col-sm-6 div-bottom ng-scope" has a new child div
with class "div-bottom"; can we get this fix backported to 2.23?

Uploaded pic of this problem in action.

** Affects: dhis2
 Importance: Undecided
 Status: New

** Attachment added: "Screen Shot 2016-09-11 at 4.26.40 PM.png"
   
https://bugs.launchpad.net/bugs/1622381/+attachment/4738834/+files/Screen%20Shot%202016-09-11%20at%204.26.40%20PM.png

-- 
You received this bug notification because you are a member of DHIS 2
developers, which is subscribed to DHIS.
https://bugs.launchpad.net/bugs/1622381

Title:
  Register Event button in wrong place

Status in DHIS:
  New

Bug description:
  Using:
  Version: 2.23
  Build revision: 8794224

  This seems to only affect 2.23, and not 2.24.

  In 2.24 it seems that "col-sm-6 div-bottom ng-scope" has a new child
  div with class "div-bottom"; can we get this fix backported to 2.23?

  Uploaded pic of this problem in action.

To manage notifications about this bug go to:
https://bugs.launchpad.net/dhis2/+bug/1622381/+subscriptions

___
Mailing list: https://launchpad.net/~dhis2-devs
Post to : dhis2-devs@lists.launchpad.net
Unsubscribe : https://launchpad.net/~dhis2-devs
More help   : https://help.launchpad.net/ListHelp


[Dhis2-devs] [Bug 1622387] [NEW] Event Capture Up Down selectors Firefox Only

2016-09-11 Thread Timothy Harding
Public bug reported:

In the Event Capture module, Firefox shows up/down spinner buttons on
numerical entry boxes. This affects at least the latest builds of 2.23 and
2.24. A screenshot from the play demo is included.

Left side is Firefox and right side is Chrome.

It may be good to remove them entirely for all browsers, as in Firefox
they also show up in the Lat/Long boxes, where their assistance is
dubious.

** Affects: dhis2
 Importance: Undecided
 Status: New

** Attachment added: "Screen-Shot-2016-09-11-at-5.03.10-PM.png"
   
https://bugs.launchpad.net/bugs/1622387/+attachment/4738840/+files/Screen-Shot-2016-09-11-at-5.03.10-PM.png

-- 
You received this bug notification because you are a member of DHIS 2
developers, which is subscribed to DHIS.
https://bugs.launchpad.net/bugs/1622387

Title:
  Event Capture Up Down selectors Firefox Only

Status in DHIS:
  New

Bug description:
  In the Event Capture module, Firefox shows up/down spinner buttons on
  numerical entry boxes. This affects at least the latest builds of 2.23
  and 2.24. A screenshot from the play demo is included.

  Left side is Firefox and right side is Chrome.

  It may be good to remove them entirely for all browsers, as in Firefox
  they also show up in the Lat/Long boxes, where their assistance is
  dubious.

To manage notifications about this bug go to:
https://bugs.launchpad.net/dhis2/+bug/1622387/+subscriptions

___
Mailing list: https://launchpad.net/~dhis2-devs
Post to : dhis2-devs@lists.launchpad.net
Unsubscribe : https://launchpad.net/~dhis2-devs
More help   : https://help.launchpad.net/ListHelp


[Dhis2-devs] Groups - Tables

2016-09-11 Thread Raminosoa Rabemanantsoa, Tantely
Dear Members,

I am looking for 3 database tables:

1- the table where all organization unit groups are stored, with the list
of their members
2- the table where the data sets assigned to organization units are stored
3- the table where the organization units assigned to users are stored

Does anyone know which tables I should query to select and update their
records? I am using DHIS2 2.19, and I connect to the DHIS2 database directly
with psql.

Thank you for your advice.

Regards,

Tantely.

-- 
*This message and its attachments are confidential and solely for the 
intended recipients. If received in error, please delete them and notify 
the sender via reply e-mail immediately.*
___
Mailing list: https://launchpad.net/~dhis2-devs
Post to : dhis2-devs@lists.launchpad.net
Unsubscribe : https://launchpad.net/~dhis2-devs
More help   : https://help.launchpad.net/ListHelp


Re: [Dhis2-devs] Groups - Tables

2016-09-11 Thread Jason Pickering
Hi Tantely,

1- the table where all organization unit groups are stored, with the list
of their members:

\d orgunitgroupmembers

2- the table where the data sets assigned to organization units are stored:

\d datasetsource

3- the table where the organization units assigned to users are stored:

\d usermembership

You should not really try to update these tables manually. It's much better
to use the API.
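
For read-only inspection, a sketch of both routes (server, credentials and
the database name dhis2 are placeholders; the SQL deliberately uses SELECT *
because exact column names vary by version - check with \d first):

  # database route
  psql -d dhis2 -c "SELECT * FROM orgunitgroupmembers LIMIT 10;"
  psql -d dhis2 -c "SELECT * FROM datasetsource LIMIT 10;"
  psql -d dhis2 -c "SELECT * FROM usermembership LIMIT 10;"

  # API route (preferred), e.g. org unit groups with their members
  curl -g -u USER:PASS "https://dhis.example.org/api/organisationUnitGroups.json?fields=name,organisationUnits[name]"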


Regards,
Jason



On Mon, Sep 12, 2016 at 7:52 AM, Raminosoa Rabemanantsoa, Tantely <
tramino...@mikolo.org> wrote:

> Dear Members,
>
> I am looking for 3 database tables:
>
> 1- the table where all organization unit groups are stored, with the list
> of their members
> 2- the table where the data sets assigned to organization units are stored
> 3- the table where the organization units assigned to users are stored
>
> Does anyone know which tables I should query to select and update their
> records? I am using DHIS2 2.19, and I connect to the DHIS2 database directly
> with psql.
>
> Thank you for your advice.
>
> Regards,
>
> Tantely.
>


-- 
Jason P. Pickering
email: jason.p.picker...@gmail.com
tel:+46764147049
___
Mailing list: https://launchpad.net/~dhis2-devs
Post to : dhis2-devs@lists.launchpad.net
Unsubscribe : https://launchpad.net/~dhis2-devs
More help   : https://help.launchpad.net/ListHelp


Re: [Dhis2-devs] Analytics and disk space

2016-09-11 Thread Lars Helge Ă˜verland
Hi there,

thanks for the feedback. Most of what's requested is available in the API.
It's on our list to rewrite the import-export app and write a better
scheduling manager for background tasks such as analytics generation.

In the meantime:

- Analytics tables generation for the last x years
- Data value export (lastUpdated, lastUpdatedDuration, orgUnit params)
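
For example, an incremental pull of recent values only (a sketch - the server,
credentials and UIDs are placeholders, and lastUpdatedDuration takes a
duration such as 1d or 12h):

  curl -u USER:PASS "https://dhis.example.org/api/dataValueSets.json?dataSet=DATASET_UID&orgUnit=ORGUNIT_UID&children=true&lastUpdatedDuration=1d"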


regards,

Lars



On Sun, Sep 11, 2016 at 5:20 PM, David Siang Fong Oh 
wrote:

> I think Jason also pointed out that this could be achieved from the API,
> but the question is whether it needs to be more user-friendly, i.e.
> customisable using the web application as opposed to requiring a custom
> script triggered by a cron job.
>
> Cheers,
>
> -doh
>
> On Sun, Sep 11, 2016 at 8:36 PM, Dan Cocos  wrote:
>
>> Hi All,
>>
>> You could run this:
>> /api/24/maintenance/analyticsTablesClear
>> and possibly this:
>> /api/24/maintenance/periodPruning
>>
>> I don't see it in the documentation, but we call
>> /api/resourceTables/analytics?lastYears=2 quite often for clients with
>> a lot of historical data.
>>
>> Good luck,
>> Dan
>>
>> *Dan Cocos*
>> Principal, BAO Systems
>> dco...@baosystems.com  | http://www.baosystems.com
>>  |  2900 K Street, Suite 404, Washington D.C. 20007
>>
>>
>>
>>
>>
>> On Sep 11, 2016, at 10:05 AM, Calle Hedberg 
>> wrote:
>>
>> Hi,
>>
>> It's not only analytics that would benefit from segmented/staggered
>> processing: I exported around 100 million data values yesterday from a number
>> of instances, and found that the export process was (seemingly)
>> exponentially slower with an increasing number of records exported. Most of
>> the export files contained well under 10 million records, which was pretty
>> fast. In comparison, the largest export file, with around 30 million data
>> values, probably took 20 times as long as an 8 million value export. Based
>> on just keeping an eye on the "progress bar", it seemed like some kind of
>> cache staggering was taking place - the amount exported would increase
>> quickly by 2-3 MB, then "hang" for a good while, then increase quickly by
>> 2-3 MB again.
>>
>> Note also that there are several fundamental strategies one could use to
>> reduce heavy work processes like analytics, exports (and thus imports),
>> etc:
>> - to be able to specify a sub-period, as Jason suggests
>> - to be able to specify the "dirty" part of the instance by using e.g.
>> LastUpdated >= x
>> - to be able to specify a sub-OrgUnit-area
>>
>> These partial strategies are of course mostly relevant for very large
>> instances, but such large instances are also the ones where you typically
>> only have changes made to a small segment of the total - like if you have
>> data for 30 years, 27 of those might be locked down and no longer available
>> for updates.
>>
>> Regards
>> Calle
>>
>> On 11 September 2016 at 15:47, David Siang Fong Oh 
>>  wrote:
>>
>>> +1 to Calle's idea of staggering analytics year by year
>>>
>>> I also like Jason's suggestion of being able to configure the time
>>> period for which analytics is regenerated. If the general use-case has data
>>> being entered only for the current year, then is it perhaps unnecessary to
>>> regenerate data for previous years?
>>>
>>> Cheers,
>>>
>>> -doh
>>>
>>> On Tue, Jul 26, 2016 at 2:36 PM, Calle Hedberg 
>>>  wrote:
>>>
 Hi,

 One (presumably) simple solution is to stagger analytics on a year by
 year basis - i.e. run and complete 2009 before processing 2010. That would
 reduce temp disk space requirements significantly while (presumably) not
 changing the general design.

 Regards
 Calle

 On 26 July 2016 at 10:24, Jason Pickering 
  wrote:

> Hi Devs,
> I am seeking some advice on how to try and decrease the amount of disk
> usage with DHIS2.
>
> Here is a list of the biggest tables in the system.
>
>  public.datavalue                 | 2316 MB
>  public.datavalue_pkey            | 1230 MB
>  public.in_datavalue_lastupdated  |  680 MB
>
>
> There are a lot more tables, and all in all, the database occupies
> about 5.4 GB without analytics.
>
> This represents about 30 million data rows, so not that big of a
> database really. This server is being run off of a Digital Ocean virtual
> server with 60 GB of disk space. The only things on the server are
> Linux, PostgreSQL and Tomcat. Nothing else. Without analytics, and with
> everything installed for the system, we have about 23% of that 60 GB free.
>
> When analytics runs, it maintains a copy of the main analytics tables
> ( analytics_) and creates temp tables like analytics_temp_2004. When
> things are finished and the indexes are built, the tables are swapped.