Re: [Dhis2-devs] [Dhis2-users] Creating Sync between Linode(External Server) and Local Server

2014-12-18 Thread Guy Ekani
Hello,
have a look at the features of this hosting provider: Why BAO Hosting? | BAO Systems (DHIS 2 hosting in the cloud, www.baosystems.com)


Sincerely,
EKANI Guy, Cameroon



  

On Thursday 18 December 2014 at 06:07, gerald thomas wrote:

 Dear All,
Sierra Leone finally wants to migrate to an online server (an external
server hosted outside the Ministry), but we would like to create a daily
backup of that server locally in case anything goes wrong.
My questions:

1. We need help with a script that can create a sync between the
external server and the local server (at least twice a day).

2. Is there anything we should know from past experience about
hosting servers in the cloud?

Please feel free to share anything; I will be grateful to learn new
things about dhis2.

-- 
Regards,

Gerald

___
Mailing list: https://launchpad.net/~dhis2-users
Post to    : dhis2-us...@lists.launchpad.net
Unsubscribe : https://launchpad.net/~dhis2-users
More help  : https://help.launchpad.net/ListHelp


   ___
Mailing list: https://launchpad.net/~dhis2-devs
Post to : dhis2-devs@lists.launchpad.net
Unsubscribe : https://launchpad.net/~dhis2-devs
More help   : https://help.launchpad.net/ListHelp


Re: [Dhis2-devs] Creating Sync between Linode(External Server) and Local Server

2014-12-18 Thread Bob Jolliffe
Hi Gerald

We tested this when I was in Sierra Leone and we were finding serious
problems with bandwidth getting the data back to Sierra Leone.

So you are going to have to think carefully about when and how often to
sync.  Currently your database files are very small as you don't have much
data on your cloud server, but they will soon grow.  I suspect "at least
twice a day" is unrealistic.

The way I typically do it is to first create an account on the backup
server.  Make sure that the account running your dhis instance can log in to
the backup server without a password by creating an ssh key pair and
installing the public key on the backup server account.  Then you can
simply rsync the backups directory (e.g. /var/lib/dhis2/dhis/backups) to
a directory on the backup server using cron.  In fact if you look in
/usr/bin/dhis2-backup you will see that the commands are already there to
do this, just commented out.  This would sync with the backup server after
taking the nightly backup.
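The push setup described above can be sketched roughly as follows. The backups path comes from this thread; the account name and host are assumptions, and the cron schedule is only an example:

```shell
# One-time setup, run as the account that runs the DHIS 2 instance.
# "dhisbackup@backup.example.org" is a hypothetical account/host.
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
ssh-copy-id dhisbackup@backup.example.org

# Cron entry (crontab -e): push the backups directory after the nightly
# backup has finished, here at 02:30.
# 30 2 * * * rsync -az /var/lib/dhis2/dhis/backups/ dhisbackup@backup.example.org:/srv/dhis2-backups/
```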

This simple (and slightly lazy) setup has worked fine, and continues to
work, in a number of places.  But there are a number of reasons you might
want to do something different.

(i) You might want to pull from the backup server rather than push to it,
particularly as the backup server might not be as reliably online as
the production server.  This would require a slightly different variation
on the above, but using the same principle of creating an ssh keypair and
letting rsync do the work.
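The pull variation might look like this sketch, run from cron on the backup server instead; host and destination path are assumptions:

```shell
# Cron entry on the backup server (crontab -e): fetch the backups
# directory from production over ssh, key-based auth assumed.
# 45 2 * * * rsync -az dhis@production.example.org:/var/lib/dhis2/dhis/backups/ /srv/dhis2-backups/
```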

(ii) rsync is a really great and simple tool, but it is sadly quite slow.
If you are bandwidth-stressed and your database is growing, it might not be
the best solution.  It works fine when bandwidth is not a critical issue.  The
trouble is it doesn't really take into account the incremental nature of
the data, i.e. you back up everything every time (besides the ephemeral tables
like analytics, aggregated etc.).  In which case you need to start thinking
smarter and maybe a little bit more complicated.  One approach I have been
considering (but not yet tried) is to make a copy of the metadata export
every night and then just pull all the datavalues with a lastupdated
greater than the last time you pulled.  That is going to reduce the size of
the backup quite considerably.  In theory this is probably even possible to
do through the api rather than directly through psql, which might be fine if
you choose the time of day/night carefully.  I'd probably do it with psql
at the back end.
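The untried incremental idea could be sketched like this. The datavalue table and its lastupdated column follow the DHIS 2 schema; the file paths, state file, and database name are assumptions:

```shell
# Hypothetical sketch: pull only datavalue rows changed since the last
# run, alongside a separate nightly metadata export.
set -euo pipefail
STATE=/var/backups/dhis2/last_pull
LAST=$(cat "$STATE" 2>/dev/null || echo '1970-01-01 00:00:00')
NOW=$(date +'%F %T')
psql -d dhis2 -c \
  "COPY (SELECT * FROM datavalue WHERE lastupdated > '$LAST') TO STDOUT WITH CSV HEADER" \
  | gzip > "/var/backups/dhis2/datavalues-$(date +%F).csv.gz"
echo "$NOW" > "$STATE"   # record the cutoff for the next incremental pull
```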

So there are a few options.  The first being the simplest and also the
crudest.  Any other thoughts?

Cheers
Bob

___
Mailing list: https://launchpad.net/~dhis2-devs
Post to : dhis2-devs@lists.launchpad.net
Unsubscribe : https://launchpad.net/~dhis2-devs
More help   : https://help.launchpad.net/ListHelp


[Dhis2-devs] [Branch ~dhis2-devs-core/dhis2/trunk] Rev 17726: move the User-group-management user interface from dashboard module to dhis-web-maintenance-user m...

2014-12-18 Thread noreply

revno: 17726
committer: Tran Chau
branch nick: dhis2
timestamp: Thu 2014-12-18 18:41:20 +0700
message:
  move the User-group-management user interface from dashboard module to 
dhis-web-maintenance-user module.
removed:
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/java/org/hisp/dhis/dashboard/usergroup/action/
renamed:
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/java/org/hisp/dhis/dashboard/usergroup/
 => 
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/java/org/hisp/dhis/user/action/usergroup/
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/java/org/hisp/dhis/dashboard/usergroup/action/AddUserGroupAction.java
 => 
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/java/org/hisp/dhis/user/action/usergroup/AddUserGroupAction.java
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/java/org/hisp/dhis/dashboard/usergroup/action/AddUserGroupFormAction.java
 => 
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/java/org/hisp/dhis/user/action/usergroup/AddUserGroupFormAction.java
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/java/org/hisp/dhis/dashboard/usergroup/action/EditUserGroupFormAction.java
 => 
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/java/org/hisp/dhis/user/action/usergroup/EditUserGroupFormAction.java
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/java/org/hisp/dhis/dashboard/usergroup/action/GetUserGroupAction.java
 => 
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/java/org/hisp/dhis/user/action/usergroup/GetUserGroupAction.java
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/java/org/hisp/dhis/dashboard/usergroup/action/GetUserGroupListAction.java
 => 
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/java/org/hisp/dhis/user/action/usergroup/GetUserGroupListAction.java
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/java/org/hisp/dhis/dashboard/usergroup/action/RemoveUserGroupAction.java
 => 
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/java/org/hisp/dhis/user/action/usergroup/RemoveUserGroupAction.java
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/java/org/hisp/dhis/dashboard/usergroup/action/UpdateUserGroupAction.java
 => 
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/java/org/hisp/dhis/user/action/usergroup/UpdateUserGroupAction.java
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/java/org/hisp/dhis/dashboard/usergroup/action/ValidateUserGroupAction.java
 => 
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/java/org/hisp/dhis/user/action/usergroup/ValidateUserGroupAction.java
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/webapp/dhis-web-dashboard-integration/addUserGroupForm.vm
 => 
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/webapp/dhis-web-maintenance-user/addUserGroupForm.vm
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/webapp/dhis-web-dashboard-integration/javascript/usergroup.js
 => 
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/webapp/dhis-web-maintenance-user/javascript/usergroup.js
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/webapp/dhis-web-dashboard-integration/updateUserGroupForm.vm
 => 
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/webapp/dhis-web-maintenance-user/updateUserGroupForm.vm
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/webapp/dhis-web-dashboard-integration/userGroupList.vm
 => 
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/webapp/dhis-web-maintenance-user/userGroupList.vm
modified:
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/resources/META-INF/dhis/beans.xml
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/resources/org/hisp/dhis/dashboard/i18n_module.properties
  dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/resources/struts.xml
  
dhis-2/dhis-web/dhis-web-dashboard-integration/src/main/webapp/dhis-web-dashboard-integration/dashboard.vm
  
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/resources/META-INF/dhis/beans.xml
  
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/resources/org/hisp/dhis/user/i18n_module.properties
  
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/resources/struts.xml
  
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/webapp/dhis-web-maintenance-user/index.vm
  
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/webapp/dhis-web-maintenance-user/menu.vm
  
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-user/src/main/java/org/hisp/dhis/user/action/usergroup/AddUserGroupAction.java
  
dhis-2/dhis-web/dhis-web-maintenance/dhis-web-maintenance-

[Dhis2-devs] [Branch ~dhis2-devs-core/dhis2/trunk] Rev 17727: typo fix in number validation directive; labels for ou tree and meta-data download progress indic...

2014-12-18 Thread noreply

revno: 17727
committer: Abyot Asalefew Gizaw 
branch nick: dhis2
timestamp: Thu 2014-12-18 12:50:32 +0100
message:
  typo fix in number validation directive; labels for ou tree and meta-data 
download progress indicator
modified:
  
dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/i18n/i18n_app.properties
  
dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/scripts/controllers.js
  
dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/styles/style.css
  
dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/views/home.html
  
dhis-2/dhis-web/dhis-web-commons-resources/src/main/webapp/dhis-web-commons/javascripts/angular/plugins/dhis2/directives.js
  
dhis-2/dhis-web/dhis-web-commons-resources/src/main/webapp/dhis-web-commons/javascripts/angular/plugins/dhis2/services.js


--
lp:dhis2
https://code.launchpad.net/~dhis2-devs-core/dhis2/trunk

Your team DHIS 2 developers is subscribed to branch lp:dhis2.
To unsubscribe from this branch go to 
https://code.launchpad.net/~dhis2-devs-core/dhis2/trunk/+edit-subscription
=== modified file 'dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/i18n/i18n_app.properties'
--- dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/i18n/i18n_app.properties	2014-12-09 23:24:16 +
+++ dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/i18n/i18n_app.properties	2014-12-18 11:50:32 +
@@ -118,5 +118,7 @@
 posInt=Positive Integer
 negInt=Negative Integer
 zeroPostitiveInt=Zero or Positive Integer
+loading_tree=Loading orgunit tree
+loading_metadata=Loading meta-data
 
 

=== modified file 'dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/scripts/controllers.js'
--- dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/scripts/controllers.js	2014-12-16 15:44:11 +
+++ dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/scripts/controllers.js	2014-12-18 11:50:32 +
@@ -23,6 +23,7 @@
 DialogService) {
 //selected org unit
 $scope.selectedOrgUnit = '';
+$scope.treeLoaded = false;
 
 $scope.calendarSetting = CalendarService.getSetting();
 

=== modified file 'dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/styles/style.css'
--- dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/styles/style.css	2014-12-17 14:21:45 +
+++ dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/styles/style.css	2014-12-18 11:50:32 +
@@ -1,3 +1,15 @@
+html, body { 
+height:100%;
+margin: 0;
+padding: 0;
+background-color: white;
+font-size: 10pt;
+}
+
+.not-for-screen {
+display: none;
+}
+
 /**/
 /* Form
 /**/

=== modified file 'dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/views/home.html'
--- dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/views/home.html	2014-12-08 16:37:23 +
+++ dhis-2/dhis-web/dhis-web-apps/src/main/webapp/dhis-web-event-capture/views/home.html	2014-12-18 11:50:32 +
@@ -16,11 +16,16 @@
 
 
 
+
+{{'loading_tree'| translate}}
+
 
 
 
+
 
-
+
+{{'loading_metadata'| translate}}
  
 
 

=== modified file 'dhis-2/dhis-web/dhis-web-commons-resources/src/main/webapp/dhis-web-commons/javascripts/angular/plugins/dhis2/directives.js'
--- dhis-2/dhis-web/dhis-web-commons-resources/src/main/webapp/dhis-web-commons/javascripts/angular/plugins/dhis2/directives.js	2014-12-17 14:17:16 +
+++ dhis-2/dhis-web/dhis-web-commons-resources/src/main/webapp/dhis-web-commons/javascripts/angular/plugins/dhis2/directives.js	2014-12-18 11:50:32 +
@@ -33,6 +33,11 @@
 //Disable ou selection until meta-data has downloaded
 $( "#orgUnitTree" ).addClass( "disable-clicks" );
 
+$timeout(function() {
+scope.treeLoaded = true;
+scope.$apply();
+});
+
 downloadMetaData();
 });
 });
@@ -66,6 +71,19 @@
 };
 })
 
+.directive('d2Enter', function () {
+return function (scope, element, attrs) {
+element.bind("keydown keypress", function (event) {
+if(event.which === 13) {
+scope.$apply(function (){
+scope.$eval(attrs.d2Enter);
+});
+event.preventDefault();
+}
+});
+};
+})
+
 .directive('d2NumberValidation', function() {
 
 return {
@@ -85,7 +103,7 @@
 case "negInt":
   

Re: [Dhis2-devs] Creating Sync between Linode(External Server) and Local Server

2014-12-18 Thread gerald thomas
Bob,
My suggestion:
All local servers must be on the 2.15 war file; then we create an SFTP
account on the cloud server, and we can use FileZilla from the local
server to download the backup from the cloud server.
I know it is crude, but it helps for now.
What is your take, Bob?



-- 
Regards,

Gerald



Re: [Dhis2-devs] Creating Sync between Linode(External Server) and Local Server

2014-12-18 Thread Bob Jolliffe
I wouldn't do it that way.  I think FileZilla is a GUI app.  You need to
have something automated if you are to rely on the offsite backup.

If you want to use a GUI now, you could already use WinSCP on Windows, for
example, or an ssh location in the Nautilus file browser on Linux, so no
need for sftp.



Re: [Dhis2-devs] Creating Sync between Linode(External Server) and Local Server

2014-12-18 Thread Steffen Tengesdal
We've set it up for clients where a DB move from one server to another is
scripted and entirely automated.

Typically we have scripts that do a dump without analytics (e.g. pg_dump
-T 'analytics*' -T 'completeness*' dhis2 | /usr/bin/gzip -c >
/tmp/dhis2.backup.gz), so the backup is much smaller to move (since
bandwidth is a consideration). We then transfer it securely using key pairs
to the new server, which drops the existing DB there (after backing it up) and
imports the new one.  We typically schedule this transfer as a cron job, run
from a bash script nightly or during off-peak hours, since analytics also
needs to be re-run on the local server once the DB is moved.

There are many ways to script this: rsync can work; pg_dump can also back up
on one machine and restore to another (but we highly recommend keys to keep
it secure); scp; etc.
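A minimal sketch of that nightly move, under assumed host names, paths, and database/user names (the -T exclusion patterns are quoted so the shell does not expand them):

```shell
# On the cloud server: dump without analytics/completeness tables and
# push the compressed backup over ssh (key-based auth assumed).
set -euo pipefail
DUMP=/tmp/dhis2.backup.gz
pg_dump -T 'analytics*' -T 'completeness*' dhis2 | gzip -c > "$DUMP"
scp "$DUMP" dhis@local-server.example.org:/tmp/

# On the local server (e.g. a later cron job): back up the current DB,
# replace it, and restore; analytics must then be re-run in DHIS 2.
# pg_dump dhis2 | gzip -c > /var/backups/dhis2-pre-restore.gz
# dropdb dhis2 && createdb -O dhis dhis2
# gunzip -c /tmp/dhis2.backup.gz | psql -d dhis2
```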

Steffen Tengesdal
BAO Systems



Re: [Dhis2-devs] Creating Sync between Linode(External Server) and Local Server

2014-12-18 Thread Bob Jolliffe
Hi Steffen

That makes sense and is pretty close to what I am suggesting as well.  Do
you have any thoughts about taking incremental backups of the datavalue
tables?  Even with the analytics and the like removed, some of these
databases start to get quite big.

Bob

On 18 December 2014 at 12:27, Steffen Tengesdal 
wrote:
>
> We've set it up for clients where we can script a DB moving automatically
> from one server to another and it is automated  entirely.
>
> Typically we have scripts that will do a dump without analytics (e.g.
> pg_dump -T analytics* -T completeness* dhis2 | /usr/bin/gzip -c >
> /tmp/dhis2.backup.gz), so it is much smaller of a backup to move (since
> bandwidth is a consideration). We then transfer that securely using key
> pairs to the new server and it drops the existing DB there (after backing
> it up) and imports the new one.  We typically schedule this transfer as a
> cron job to run nightly or during off-peak hours from a bash script, since
> analytics also need to be re-run on the local server once the DB is moved
> as well.
>
> There are many ways to script this and rsync can work, pg_dump can also
> backup on one machine and restore to another (but we highly recommend keys
> to keep it secure), scp, etc.
>
> Steffen Tengesdal
> BAO Systems
>
>
> On Dec 18, 2014, at 7:13 AM, gerald thomas  wrote:
>
> Bob,
> My Suggestion:
> All local servers must be on 2.15 war file then we create a SFTP
> account on cloud server then we can use filezilla from the local
> server to download the backup from the cloud server.
> I know it is crude but that help for now.
> What is your take Bob.
>
> On 12/18/14, Bob Jolliffe  wrote:
>
> Hi Gerald
>
> We tested this when I was in Sierra Leone and we were finding serious
> problems with bandwidth getting the data back to Sierra Leone.
>
> So you are going to have to think carefully about when and how often to
> synch.  Currently your database files are very small as you don't have much
> data on your cloud server, but it will soon grow.  I suspect "at least
> twice a day" sounds unrealistic.
>
> The way I typically do it is to first create an account on the backup
> server.  Make sure that the account running your dhis instance can login to
> the backup server without a password by creating an ssh key pair and
> installing the public key on the backup server account.  Then you can
> simply the rsync the backups directory (eg /var/lib/dhis2/dhis/backups) to
> a directory on the backup server using cron.   In fact if you look in
> /usr/bin/dhis2-backup you will see that the commands are already there to
> do this, just commented out.  This would synch with the backup server after
> taking the nightly backup.
>
> This simple (and slightly lazy) setup has worked fine, and continues to
> work, in a number of places.  But there are a number of reasons you might
> want to do something different.
>
> (i)  you might want to pull from the backup server rather than push to it.
> Particularly as the backup server might not be as reliably always online as
> the production server.  This would require a slightly different variation
> on the above, but using the same principle of creating an ssh keypair and
> letting rsync do the work.
>
> (ii) rsync is a really great and simple tool, but it is sadly quite slow.
> If you are bandwidth stressed and your database is growing it might not be
> the best solution.  Works fine when bandwidth is not a critical issue.  The
> trouble is it doesn't really take into account the incremental nature of
> the data ie. you backup everything every time (besides the ephemeral tables
> like analytics, aggregated etc).  In which case you need to start thinking
> smarter and maybe a little bit more complicated.  One approach I have been
> considering, (but not yet tried) is to make a copy of the metadata export
> every night and then just pull all the datavalues with a lastupdated
> greater than the last time you pulled.  That is going to reduce the size of
> the backup quite considerably.  In theory this is probably even possible to
> do through the api rather than directly through psql which might be fine if
> you choose the time of day/night carefully.  I'd probably do it with psql
> at the back end.
>
> So there are a few options.  The first being the simplest and also the
> crudest.  Any other thoughts?
>
> Cheers
> Bob
>
> On 18 December 2014 at 05:07, gerald thomas  wrote:
>
>
> Dear All,
> Sierra Leone wants to finally migrate to an online server (External
> server hosted outside the Ministry) but we will like to create a daily
> backup of that server locally in case anything goes wrong.
> My questions:
>
> 1.  We need a help with a script that can create a sync between the
> External Server and the Local Server (at least twice a day)
>
> 2. Is there something we should know from past experiences about
> hosting servers on the cloud
>
> Please feel free to share anything and I will be grateful to learn new
> things about dhis2
>

Re: [Dhis2-devs] Creating Sync between Linode(External Server) and Local Server

2014-12-18 Thread gerald thomas
Bob,
My suggestion: all local servers must be on the 2.15 war file; then we
create an SFTP account on the cloud server and use FileZilla from the
local server to download the backup from the cloud server.
I know it is crude, but it helps for now.
What is your take, Bob?

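
Bob's incremental idea — pulling only the datavalues whose lastupdated is newer than the last run — might look roughly like this with psql. The datavalue table and its lastupdated column are part of the DHIS2 schema; the database name, file paths and timestamp bookkeeping are assumptions:

```shell
#!/bin/sh
# Incremental pull sketch: export only datavalues changed since the last
# run, tracked via a timestamp file.  Database name and paths are made up.
STAMP=/var/lib/dhis2/last_sync
SINCE=$(cat "$STAMP" 2>/dev/null || echo '1970-01-01')
OUT=/var/lib/dhis2/incremental/datavalues-$(date +%Y%m%d).csv

SQL="\\copy (SELECT * FROM datavalue WHERE lastupdated > '$SINCE') TO '$OUT' CSV HEADER"

if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "psql -d dhis2 -c \"$SQL\""   # print only
else
    psql -d dhis2 -c "$SQL" && date '+%Y-%m-%d %H:%M:%S' > "$STAMP"
fi
```

The resulting CSV is far smaller than a full dump, at the cost of a more involved restore.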


-- 
Regards,

Gerald

___
Mailing list: https://launchpad.net/~dhis2-devs
Post to : dhis2-devs@lists.launchpad.net
Unsubscribe : https://launchpad.net/~dhis2-devs
More help   : https://help.launchpad.net/ListHelp


Re: [Dhis2-devs] Creating Sync between Linode(External Server) and Local Server

2014-12-18 Thread Steffen Tengesdal
Hi Gerald,

As Bob pointed out, FileZilla is a GUI tool and does not support scheduled 
downloads. Your local server should not have a GUI on it if it is a 
production system. Is your local host a Linux system? If so, you can create 
a simple bash script on it that uses the sftp or scp command line to 
connect and download a backup. Such a script would not be very complicated.
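
A minimal sketch of such a pull script, assuming an ssh key pair is already in place; the host, user, directories and backup file name are illustrative:

```shell
#!/bin/sh
# Pull-style fetch: the local server downloads the latest backup from the
# cloud server with scp.  Hostname, user, directories and the backup file
# naming convention are assumptions for illustration.
REMOTE=dhis@cloud.example.org
REMOTE_DIR=/var/lib/dhis2/dhis/backups
LOCAL_DIR=/srv/dhis2-backups
TODAY=$(date +%Y%m%d)

CMD="scp $REMOTE:$REMOTE_DIR/dhis2-backup-$TODAY.sql.gz $LOCAL_DIR/"

if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$CMD"        # print only; set DRY_RUN=0 to download for real
else
    mkdir -p "$LOCAL_DIR" && $CMD
fi
```

Run from cron, e.g. `0 6,18 * * * DRY_RUN=0 /usr/local/bin/pull-backup.sh` for twice-daily fetches.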

Steffen


Re: [Dhis2-devs] Creating Sync between Linode(External Server) and Local Server

2014-12-18 Thread Dan Cocos
Hi All,

I think there are two concerns being discussed here.

1) Making sure there is a reliable backup in case something goes wrong.

The first problem is pretty straightforward: create another instance in 
another region, with another provider, or locally, then schedule a regular 
backup to that server. I don’t recommend that the local server actively run 
DHIS 2, though, because any changes made to it will be lost on the next 
update from the cloud instance. Merging DBs is a difficult problem and 
causes more headache than it is worth.

Depending on how far back you’d like your backups to go this will start to 
consume a lot of disk space.

If the cloud server goes down you can be assured that your data is safe because 
you’ll have a copy of the database either on another cloud server or locally.

Incremental backups can be good for low bandwidth, but my concerns are 
restore time, and that a single corrupted increment can cause a lot of 
problems.

Some cloud providers also offer storage/backup solutions that can address this 
concern. 

2) Failover in the event the cloud server goes down.

This is a more complex problem. It can be addressed by having standby 
servers in different regions, which allows for failover in the event of an 
outage, but it has to be carefully planned and starts to get expensive, as 
you’ve essentially doubled or tripled the number of instances/servers you’d 
need available. It also requires careful planning to make sure there is a 
clear failover plan, in addition to a clear plan for restoring the initial 
setup. 

—
Executive summary 
1) Reliable backups are pretty straightforward and can be cost effective. 
2) Failover can be addressed, but it is a complex problem and starts to get 
expensive.

Lastly, and most importantly, test on a regular basis to make sure that you 
are able to restore from backups in the event of a failure.

Thanks,
Dan



Dan Cocos
BAO Systems
www.baosystems.com 
T: +1 202-352-2671 | skype: dancocos


Re: [Dhis2-devs] [Dhis2-users] Creating Sync between Linode(External Server) and Local Server

2014-12-18 Thread Bob Jolliffe
I think Steffen put his finger on it when he said that the backup should be
restored (and hence tested) as part of the same scripted operation.  But
you make a good point about not having a dhis2 instance running live
against that database as it would disturb the integrity of the backup.

It's also important to have a notion of generations of backup.  If you just
have the production database and the backup, then when things go bad on the
production server you don't want to overwrite your good backup with a bad
one.
You can't keep daily backups forever as you will rapidly run out of space
or budget.  My preference is to keep:
6 days of daily backups
6 weeks of weekly backups
some number of monthly backups
etc

This way as you roll into the future your disk usage doesn't grow too
rapidly.
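
A sketch of that rotation, assuming a daily/weekly/monthly directory layout and gzip-compressed dumps (both assumptions); find -mtime does the pruning:

```shell
#!/bin/sh
# Backup generations: roughly 6 daily, 6 weekly and 12 monthly copies.
# The directory layout, file naming and promotion rules (Sunday -> weekly,
# 1st of the month -> monthly) are assumptions for illustration.
BASE=/srv/dhis2-backups

# Delete *.sql.gz files under $1 older than $2 days.
prune() {
    find "$1" -name '*.sql.gz' -mtime "+$2" -delete
}

if [ "${DRY_RUN:-1}" = "0" ]; then
    today=$BASE/daily/dhis2-$(date +%Y%m%d).sql.gz
    [ "$(date +%u)" = 7 ]  && cp "$today" "$BASE/weekly/"   # Sunday -> weekly
    [ "$(date +%d)" = 01 ] && cp "$today" "$BASE/monthly/"  # 1st -> monthly
    prune "$BASE/daily"   6     # ~6 days of dailies
    prune "$BASE/weekly"  42    # ~6 weeks of weeklies
    prune "$BASE/monthly" 365   # ~12 months of monthlies
fi
```

Run daily from cron after the backup lands; disk usage then grows by roughly one backup per month rather than one per day.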


Re: [Dhis2-devs] [Dhis2-users] Creating Sync between Linode(External Server) and Local Server

2014-12-18 Thread Jason Pickering
One way of solving the problem of backups and disk space is to push your
backups to Amazon Glacier. That way, you can be sure that you have a
"secure" offsite backup someplace. Once it is on Glacier, you can download
the backup to your backup machine. From a security standpoint it might be
better as well, as you do not need direct interaction between the backup
server and the production cloud server. Of course, it is more cost, but you
solve the problem of having a secure backup away from both the production
and backup servers. Currently, at $0.01 per gigabyte per month, it is
likely much cheaper than what it would cost you in-house to worry about
this.
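
With the AWS CLI this can be as simple as the sketch below. The vault name and archive path are assumptions, `--account-id -` means the account of the configured credentials, and the CLI must already be set up with credentials:

```shell
#!/bin/sh
# Push the nightly dump to an Amazon Glacier vault with the AWS CLI.
# Vault name and archive path are assumptions; credentials must already
# be configured (aws configure).
ARCHIVE=/var/lib/dhis2/dhis/backups/dhis2-$(date +%Y%m%d).sql.gz

CMD="aws glacier upload-archive --vault-name dhis2-backups --account-id - --body $ARCHIVE"

if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$CMD"        # print only; set DRY_RUN=0 to upload for real
else
    $CMD
fi
```

Bear in mind that retrieval from Glacier takes hours, so it suits disaster recovery rather than routine restores.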

Regards,
Jason



Re: [Dhis2-devs] [Dhis2-users] Creating Sync betweenLinode(External Server) and Local Server

2014-12-18 Thread gerald thomas
Bob,
Sorry about the GUI application I recommended. I was only trying to make a
point to explain my idea, and was also thinking of those regional servers
(because they are running desktop Ubuntu).
Bob, your account is still there and you can ssh in.
Sorry all for my late responses, but I will be giving more input in an hour
or two from now.

Regards,
Gerald

[Dhis2-devs] [Branch ~dhis2-devs-core/dhis2/trunk] Rev 17728: Removed unused method

2014-12-18 Thread noreply

revno: 17728
committer: Lars Helge Overland 
branch nick: dhis2
timestamp: Thu 2014-12-18 18:04:15 +0100
message:
  Removed unused method
modified:
  dhis-2/dhis-api/src/main/java/org/hisp/dhis/datavalue/DataValue.java
  
dhis-2/dhis-web/dhis-web-api-mobile/src/main/java/org/hisp/dhis/api/mobile/support/DataStreamSerializer.java
  
dhis-2/dhis-web/dhis-web-api/src/main/java/org/hisp/dhis/webapi/controller/DimensionController.java
  
dhis-2/dhis-web/dhis-web-api/src/main/java/org/hisp/dhis/webapi/controller/dataelement/DataElementOperandController.java


--
lp:dhis2
https://code.launchpad.net/~dhis2-devs-core/dhis2/trunk

Your team DHIS 2 developers is subscribed to branch lp:dhis2.
To unsubscribe from this branch go to 
https://code.launchpad.net/~dhis2-devs-core/dhis2/trunk/+edit-subscription
=== modified file 'dhis-2/dhis-api/src/main/java/org/hisp/dhis/datavalue/DataValue.java'
--- dhis-2/dhis-api/src/main/java/org/hisp/dhis/datavalue/DataValue.java	2014-10-16 06:17:19 +
+++ dhis-2/dhis-api/src/main/java/org/hisp/dhis/datavalue/DataValue.java	2014-12-18 17:04:15 +
@@ -148,11 +148,6 @@
 // Dimension
 // -
 
-public String getMeasure()
-{
-return value;
-}
-
 @Override
 public String getName()
 {
@@ -328,7 +323,7 @@
 
 public void setValue( String value )
 {
-if( !auditValueIsSet )
+if ( !auditValueIsSet )
 {
 this.auditValue = valueIsSet ? this.value : value;
 auditValueIsSet = true;

=== modified file 'dhis-2/dhis-web/dhis-web-api-mobile/src/main/java/org/hisp/dhis/api/mobile/support/DataStreamSerializer.java'
--- dhis-2/dhis-web/dhis-web-api-mobile/src/main/java/org/hisp/dhis/api/mobile/support/DataStreamSerializer.java	2014-12-11 09:06:50 +
+++ dhis-2/dhis-web/dhis-web-api-mobile/src/main/java/org/hisp/dhis/api/mobile/support/DataStreamSerializer.java	2014-12-18 17:04:15 +
@@ -36,7 +36,6 @@
 import java.io.OutputStream;
 
 import org.hisp.dhis.api.mobile.model.DataStreamSerializable;
-import org.jfree.util.Log;
 
 import com.jcraft.jzlib.JZlib;
 import com.jcraft.jzlib.ZOutputStream;

=== modified file 'dhis-2/dhis-web/dhis-web-api/src/main/java/org/hisp/dhis/webapi/controller/DimensionController.java'
--- dhis-2/dhis-web/dhis-web-api/src/main/java/org/hisp/dhis/webapi/controller/DimensionController.java	2014-12-04 10:11:00 +
+++ dhis-2/dhis-web/dhis-web-api/src/main/java/org/hisp/dhis/webapi/controller/DimensionController.java	2014-12-18 17:04:15 +
@@ -28,7 +28,17 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
-import com.google.common.collect.Lists;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
 import org.hisp.dhis.common.DimensionService;
 import org.hisp.dhis.common.DimensionalObject;
 import org.hisp.dhis.common.IdentifiableObjectManager;
@@ -47,15 +57,7 @@
 import org.springframework.web.bind.annotation.RequestMethod;
 import org.springframework.web.bind.annotation.RequestParam;
 
-import javax.servlet.http.HttpServletRequest;
-import javax.servlet.http.HttpServletResponse;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
+import com.google.common.collect.Lists;
 
 @Controller
 @RequestMapping( value = DimensionController.RESOURCE_PATH )
@@ -133,7 +135,6 @@
 public void getItemsJson( @PathVariable String uid, @RequestParam Map parameters,
 Model model, HttpServletRequest request, HttpServletResponse response ) throws IOException
 {
-WebOptions options = new WebOptions( parameters );
 List items = dimensionService.getCanReadDimensionItems( uid );
 
 if ( parameters.containsKey( "filter" ) )

=== modified file 'dhis-2/dhis-web/dhis-web-api/src/main/java/org/hisp/dhis/webapi/controller/dataelement/DataElementOperandController.java'
--- dhis-2/dhis-web/dhis-web-api/src/main/java/org/hisp/dhis/webapi/controller/dataelement/DataElementOperandController.java	2014-11-30 10:30:06 +
+++ dhis-2/dhis-web/dhis-web-api/src/main/java/org/hisp/dhis/webapi/controller/dataelement/DataElementOperandController.java	2014-12-18 17:04:15 +
@@ -28,14 +28,16 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
-import com.google.common.collect.Lists;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
 import org.hisp.dhis.common.Pager;
 import org.hisp.dhis.common.PagerUtils;
 import org.hisp.dhis.dataelement.DataElement;
 import org.hisp.dhis.dataelement.DataEleme

[Dhis2-devs] [Branch ~dhis2-devs-core/dhis2/trunk] Rev 17729: Deprecation fix, changed from Objects to MoreObjects

2014-12-18 Thread noreply

revno: 17729
committer: Lars Helge Overland 
branch nick: dhis2
timestamp: Thu 2014-12-18 18:08:59 +0100
message:
  Deprecation fix, changed from Objects to MoreObjects
modified:
  dhis-2/dhis-api/src/main/java/org/hisp/dhis/calendar/DateTimeUnit.java
  
dhis-2/dhis-api/src/main/java/org/hisp/dhis/dataset/CompleteDataSetRegistration.java
  
dhis-2/dhis-api/src/main/java/org/hisp/dhis/dataset/CompleteDataSetRegistrations.java
  dhis-2/dhis-api/src/main/java/org/hisp/dhis/node/AbstractNode.java
  dhis-2/dhis-api/src/main/java/org/hisp/dhis/schema/Property.java
  dhis-2/dhis-api/src/main/java/org/hisp/dhis/schema/Schema.java
  dhis-2/dhis-api/src/main/java/org/hisp/dhis/user/UserCredentials.java
  
dhis-2/dhis-services/dhis-service-dxf2/src/main/java/org/hisp/dhis/dxf2/events/event/csv/CsvEventDataValue.java
  
dhis-2/dhis-services/dhis-service-dxf2/src/main/java/org/hisp/dhis/dxf2/fieldfilter/FieldMap.java
  
dhis-2/dhis-services/dhis-service-dxf2/src/main/java/org/hisp/dhis/dxf2/webmessage/WebMessage.java
  
dhis-2/dhis-web/dhis-web-api/src/main/java/org/hisp/dhis/webapi/webdomain/sharing/SharingUserGroupAccess.java


=== modified file 'dhis-2/dhis-api/src/main/java/org/hisp/dhis/calendar/DateTimeUnit.java'
--- dhis-2/dhis-api/src/main/java/org/hisp/dhis/calendar/DateTimeUnit.java	2014-09-21 08:45:17 +
+++ dhis-2/dhis-api/src/main/java/org/hisp/dhis/calendar/DateTimeUnit.java	2014-12-18 17:08:59 +
@@ -28,7 +28,11 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
-import com.google.common.base.Objects;
+import java.util.Date;
+import java.util.TimeZone;
+
+import javax.validation.constraints.NotNull;
+
 import org.joda.time.Chronology;
 import org.joda.time.DateTime;
 import org.joda.time.DateTimeZone;
@@ -36,9 +40,7 @@
 import org.joda.time.LocalDateTime;
 import org.joda.time.chrono.ISOChronology;
 
-import javax.validation.constraints.NotNull;
-import java.util.Date;
-import java.util.TimeZone;
+import com.google.common.base.MoreObjects;
 
 /**
  * Class representing a specific calendar date.
@@ -421,7 +423,7 @@
 @Override
 public String toString()
 {
-return Objects.toStringHelper( this )
+return MoreObjects.toStringHelper( this )
 .add( "year", year )
 .add( "month", month )
 .add( "day", day )

=== modified file 'dhis-2/dhis-api/src/main/java/org/hisp/dhis/dataset/CompleteDataSetRegistration.java'
--- dhis-2/dhis-api/src/main/java/org/hisp/dhis/dataset/CompleteDataSetRegistration.java	2014-11-10 17:02:36 +
+++ dhis-2/dhis-api/src/main/java/org/hisp/dhis/dataset/CompleteDataSetRegistration.java	2014-12-18 17:08:59 +
@@ -28,11 +28,9 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
-import com.fasterxml.jackson.annotation.JsonProperty;
-import com.fasterxml.jackson.databind.annotation.JsonSerialize;
-import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
-import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlRootElement;
-import com.google.common.base.Objects;
+import java.io.Serializable;
+import java.util.Date;
+
 import org.hisp.dhis.common.BaseIdentifiableObject;
 import org.hisp.dhis.common.DxfNamespaces;
 import org.hisp.dhis.common.ImportableObject;
@@ -40,8 +38,11 @@
 import org.hisp.dhis.organisationunit.OrganisationUnit;
 import org.hisp.dhis.period.Period;
 
-import java.io.Serializable;
-import java.util.Date;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.databind.annotation.JsonSerialize;
+import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
+import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlRootElement;
+import com.google.common.base.MoreObjects;
 
 /**
  * @author Lars Helge Overland
@@ -277,7 +278,7 @@
 @Override
 public String toString()
 {
-return Objects.toStringHelper( this )
+return MoreObjects.toStringHelper( this )
 .add( "dataSet", dataSet )
 .add( "period", period )
 .add( "source", source )

=== modified file 'dhis-2/dhis-api/src/main/java/org/hisp/dhis/dataset/CompleteDataSetRegistrations.java'
--- dhis-2/dhis-api/src/main/java/org/hisp/dhis/dataset/CompleteDataSetRegistrations.java	2014-10-13 05:01:18 +
+++ dhis-2/dhis-api/src/main/java/org/hisp/dhis/dataset/CompleteDataSetRegistrations.java	2014-12-18 17:08:59 +
@@ -28,15 +28,16 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+import java.util.ArrayList;
+import java.util.List;
+
+import org.hisp.dhis.common.DxfNamespaces;
+
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.faste

[Dhis2-devs] [Branch ~dhis2-devs-core/dhis2/trunk] Rev 17730: Removed code for detecting whether password had changed for user credentials that simply did not ...

2014-12-18 Thread noreply

revno: 17730
committer: Lars Helge Overland 
branch nick: dhis2
timestamp: Thu 2014-12-18 20:29:39 +0100
message:
  Removed code for detecting whether password had changed for user credentials 
that simply did not work. When comparing a persisted argument instance with 
another instance retrieved from hibernate with the same id, one will be 
referring to the same instance from the hibernate session. Must be fixed 
properly.
modified:
  
dhis-2/dhis-services/dhis-service-core/src/main/java/org/hisp/dhis/user/hibernate/HibernateUserCredentialsStore.java


=== modified file 'dhis-2/dhis-services/dhis-service-core/src/main/java/org/hisp/dhis/user/hibernate/HibernateUserCredentialsStore.java'
--- dhis-2/dhis-services/dhis-service-core/src/main/java/org/hisp/dhis/user/hibernate/HibernateUserCredentialsStore.java	2014-10-16 06:17:19 +
+++ dhis-2/dhis-services/dhis-service-core/src/main/java/org/hisp/dhis/user/hibernate/HibernateUserCredentialsStore.java	2014-12-18 19:29:39 +
@@ -86,19 +86,9 @@
 @Override
 public void updateUserCredentials( UserCredentials userCredentials )
 {
-Session session = sessionFactory.getCurrentSession();
-
-User persistedUser = userService.getUser( userCredentials.getUser().getUid() );
-
-if ( persistedUser != null && persistedUser.getUserCredentials() != null
-&& persistedUser.getUserCredentials().getPassword() != null
-&& userCredentials.getPassword() != null
-&& !persistedUser.getUserCredentials().getPassword().equals( userCredentials.getPassword() ) )
-{
-userCredentials.setPasswordLastUpdated( new Date() );
-}
-
-session.update( userCredentials );
+userCredentials.setPasswordLastUpdated( new Date() ); //TODO only update when password changed
+
+sessionFactory.getCurrentSession().update( userCredentials );
 }
 
 @Override

___
Mailing list: https://launchpad.net/~dhis2-devs
Post to : dhis2-devs@lists.launchpad.net
Unsubscribe : https://launchpad.net/~dhis2-devs
More help   : https://help.launchpad.net/ListHelp
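The session-identity pitfall described in the rev 17730 commit message can be seen with a plain identity map. The sketch below is a simulation, not real Hibernate code: once an entity is managed by the session, loading it again by id hands back the very same object, so comparing the "persisted" copy against the argument just compares the object with itself.

```java
import java.util.HashMap;
import java.util.Map;

// Simulation of a Hibernate session's first-level cache (an identity map).
// Illustrative only: it shows why the removed password-change check could
// never observe a difference between "persisted" and "argument" state.
public class IdentityMapDemo {
    static class UserCredentials {
        String password;
        UserCredentials(String password) { this.password = password; }
    }

    // The "session": at most one managed instance per id.
    static final Map<String, UserCredentials> SESSION = new HashMap<>();

    static UserCredentials load(String id) {
        return SESSION.get(id); // hands back the managed instance itself
    }

    public static void main(String[] args) {
        UserCredentials managed = new UserCredentials("old");
        SESSION.put("u1", managed);

        // The caller mutates the managed instance, then calls update(...).
        managed.password = "new";

        // "persisted" is the same object reference, so the old password is
        // already gone and the equality check is always true.
        UserCredentials persisted = load("u1");
        System.out.println(persisted == managed);                        // true
        System.out.println(persisted.password.equals(managed.password)); // true
    }
}
```

This is why the check was removed in rev 17730 and a different mechanism (updating the timestamp in the setter, rev 17731) was tried instead.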


[Dhis2-devs] [Branch ~dhis2-devs-core/dhis2/trunk] Rev 17731: update passwordLastUpdated when password is set, not ideal (since password might be the same), bu...

2014-12-18 Thread noreply

revno: 17731
committer: Morten Olav Hansen 
branch nick: dhis2
timestamp: Thu 2014-12-18 20:55:42 +0100
message:
  update passwordLastUpdated when password is set, not ideal (since password 
might be the same), but keeps password expiry working
modified:
  dhis-2/dhis-api/src/main/java/org/hisp/dhis/user/UserCredentials.java


=== modified file 'dhis-2/dhis-api/src/main/java/org/hisp/dhis/user/UserCredentials.java'
--- dhis-2/dhis-api/src/main/java/org/hisp/dhis/user/UserCredentials.java	2014-12-18 17:08:59 +
+++ dhis-2/dhis-api/src/main/java/org/hisp/dhis/user/UserCredentials.java	2014-12-18 19:55:42 +
@@ -28,11 +28,12 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
-import java.util.Collection;
-import java.util.Date;
-import java.util.HashSet;
-import java.util.Set;
-
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonView;
+import com.fasterxml.jackson.databind.annotation.JsonSerialize;
+import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlElementWrapper;
+import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
+import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlRootElement;
 import org.hisp.dhis.common.BaseIdentifiableObject;
 import org.hisp.dhis.common.DimensionType;
 import org.hisp.dhis.common.DimensionalObject;
@@ -47,12 +48,10 @@
 import org.hisp.dhis.dataset.DataSet;
 import org.springframework.util.StringUtils;
 
-import com.fasterxml.jackson.annotation.JsonProperty;
-import com.fasterxml.jackson.annotation.JsonView;
-import com.fasterxml.jackson.databind.annotation.JsonSerialize;
-import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlElementWrapper;
-import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
-import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlRootElement;
+import java.util.Collection;
+import java.util.Date;
+import java.util.HashSet;
+import java.util.Set;
 
 /**
  * @author Nguyen Hong Duc
@@ -455,6 +454,7 @@
 public void setPassword( String password )
 {
 this.password = password;
+this.passwordLastUpdated = new Date();
 }
 
 @JsonProperty



[Dhis2-devs] [Branch ~dhis2-devs-core/dhis2/trunk] Rev 17732: User credentials, removed tmp fix

2014-12-18 Thread noreply

revno: 17732
committer: Lars Helge Overland 
branch nick: dhis2
timestamp: Thu 2014-12-18 21:01:47 +0100
message:
  User credentials, removed tmp fix
modified:
  
dhis-2/dhis-services/dhis-service-core/src/main/java/org/hisp/dhis/user/hibernate/HibernateUserCredentialsStore.java


=== modified file 'dhis-2/dhis-services/dhis-service-core/src/main/java/org/hisp/dhis/user/hibernate/HibernateUserCredentialsStore.java'
--- dhis-2/dhis-services/dhis-service-core/src/main/java/org/hisp/dhis/user/hibernate/HibernateUserCredentialsStore.java	2014-12-18 19:29:39 +
+++ dhis-2/dhis-services/dhis-service-core/src/main/java/org/hisp/dhis/user/hibernate/HibernateUserCredentialsStore.java	2014-12-18 20:01:47 +
@@ -85,9 +85,7 @@
 
 @Override
 public void updateUserCredentials( UserCredentials userCredentials )
-{
-userCredentials.setPasswordLastUpdated( new Date() ); //TODO only update when password changed
-
+{
 sessionFactory.getCurrentSession().update( userCredentials );
 }
 



[Dhis2-devs] [Branch ~dhis2-devs-core/dhis2/trunk] Rev 17733: Fixed bug related to user invite and validation. Removed cascading from the usercredentials > use...

2014-12-18 Thread noreply

revno: 17733
committer: Lars Helge Overland 
branch nick: dhis2
timestamp: Thu 2014-12-18 21:59:57 +0100
message:
  Fixed bug related to user invite and validation. Removed cascading from the 
usercredentials > userroles association.
modified:
  
dhis-2/dhis-services/dhis-service-core/src/main/resources/org/hisp/dhis/user/hibernate/UserCredentials.hbm.xml
  
dhis-2/dhis-web/dhis-web-api/src/main/java/org/hisp/dhis/webapi/controller/user/UserController.java


=== modified file 'dhis-2/dhis-services/dhis-service-core/src/main/resources/org/hisp/dhis/user/hibernate/UserCredentials.hbm.xml'
--- dhis-2/dhis-services/dhis-service-core/src/main/resources/org/hisp/dhis/user/hibernate/UserCredentials.hbm.xml	2014-12-05 15:27:18 +
+++ dhis-2/dhis-services/dhis-service-core/src/main/resources/org/hisp/dhis/user/hibernate/UserCredentials.hbm.xml	2014-12-18 20:59:57 +
@@ -24,7 +24,7 @@


Re: [Dhis2-devs] Creating Sync between Linode(External Server) and Local Server

2014-12-18 Thread gerald thomas
Sorry about the previous mail

Dear Bob,
I want to set up a test server so that we can test the various
scenarios highlighted and see which one will work best for Sierra
Leone. Basically, the test server will act as our central server
for this test case. I will send you all the information once
the setup has been completed.
It is better to test something than to do nothing.

Thanks in advance for your cooperation.

On 12/18/14, gerald thomas  wrote:
> Dear All,
> Sierra Leone wants to finally migrate to an online server (External
> server hosted outside the Ministry) but we would like to create a daily
> backup of that server locally in case anything goes wrong.
> My questions:
>
> 1. We need help with a script that can create a sync between the
> External Server and the Local Server (at least twice a day)
>
> 2. Is there something we should know from past experiences about
> hosting servers on the cloud
>
> Please feel free to share anything and I will be grateful to learn new
> things about dhis2
>
> --
> Regards,
>
> Gerald
>


-- 
Regards,

Gerald
