Review Request 20907: [PATCH] CLOUDSTACK-6552 Cloudstack-Management install package creates log directory that is never used

2014-04-30 Thread David Bierce

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/20907/
---

Review request for cloudstack.


Summary (updated)
-

[PATCH] CLOUDSTACK-6552 Cloudstack-Management install package creates log 
directory that is never used


Bugs: CLOUDSTACK-6552
https://issues.apache.org/jira/browse/CLOUDSTACK-6552


Repository: cloudstack-git


Description (updated)
---

The RPM build spec cloud.spec creates a directory that is never used by the 
CloudStack management server; it is just left as an empty directory named 
cloudstack-management in /var/log.


Diffs (updated)
-

  packaging/centos63/cloud.spec 83c598b 

Diff: https://reviews.apache.org/r/20907/diff/


Testing (updated)
---

Built RPMs on CentOS 6.4 and 6.5 using master:

mvn clean install -Dnonoss -Dnoredist
package.sh -p noredist

The resulting packages installed on CentOS 6.5 without the unused directory. 


Thanks,

David Bierce



Review Request 20921: [PATCH] CLOUDSTACK-6552 Cloudstack-Management install package creates log directory that is never used

2014-04-30 Thread David Bierce

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/20921/
---

Review request for cloudstack.


Bugs: CLOUDSTACK-6552
https://issues.apache.org/jira/browse/CLOUDSTACK-6552


Repository: cloudstack-git


Description
---

The RPM build spec cloud.spec creates a directory that is never used by the 
CloudStack management server; it is just left as an empty directory named 
cloudstack-management in /var/log.


Diffs
-

  packaging/centos63/cloud.spec 83c598b 

Diff: https://reviews.apache.org/r/20921/diff/


Testing
---

Built RPMs on CentOS 6.4 and 6.5 using master:

mvn clean install -Dnonoss -Dnoredist
package.sh -p noredist

The resulting packages installed on CentOS 6.5 without the unused directory. 


Thanks,

David Bierce



Re: Heart Bleed - Apache Cloud Stack - open ssl

2014-05-06 Thread David Bierce
That blog post mentions new templates being created for previous versions of 
CloudStack.  The post implied a few days, and it has now been a month.  Have 
new templates been created and the blog post simply not updated, or are those 
still pending release?




On May 6, 2014, at 8:18 AM, David Nalley  wrote:

> Hi Prakash:
> 
> You might want to check out:
> https://blogs.apache.org/cloudstack/entry/how_to_mitigate_openssl_heartbleed
> 
> 
> On Mon, May 5, 2014 at 1:48 PM, Prakash Rao Banuka wrote:
> 
>> Hi:
>> 
>> It is learnt that the openssl library is being called when the console of
>> system VMs (secondary storage, console proxy VM, and router VM) is launched.
>> 
>> Is it mandatory to update the openssl library in Apache CloudStack? If yes,
>> is there any official document?
>> 
>> Thank you
>> Prakash
>> 
>> 
>> 



Review Request 24779: [CLOUDSTACK-6254] Template disappears when download cleanup

2014-08-16 Thread David Bierce

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24779/
---

Review request for cloudstack.


Bugs: CLOUDSTACK-6254
https://issues.apache.org/jira/browse/CLOUDSTACK-6254


Repository: cloudstack-git


Description
---

[PATCH] This is a quick stab at fixing a data-loss bug.  The ultimate 
solution is to refactor UploadManager to not use any deprecated code.  It 
appears there is still code left over that uses the UploadVO/Dao, which no 
longer contains data about URL transfers.  This method was hardcoded to 
always pass Upload.Type.VOLUME as part of cleanup, which was causing 
templates to be removed entirely from secondary storage, not just the 
symlink on secondary storage.

Rather than try to refactor all of it out, this adds logic to determine 
whether the cleanup task is for a volume or a template by doing a lookup on 
the URL.  It duplicates the same logic from the calling method, but it is a 
very minimal code change until the larger problem is fixed.
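
Roughly, the workaround has this shape (a sketch only -- the enum, dao, and 
finder names below are illustrative stand-ins, not the actual patch):

public class CleanupTypeSketch {
    // Stand-ins for com.cloud.storage.Upload.Type and a template-store dao;
    // both names here are made up for illustration.
    enum UploadType { VOLUME, TEMPLATE }

    interface TemplateStoreDao {
        Object findByExtractUrl(String url);
    }

    // The old code hardcoded VOLUME here, so cleaning up a template's
    // download URL deleted the template itself rather than just the symlink.
    static UploadType typeForUrl(TemplateStoreDao dao, String url) {
        return dao.findByExtractUrl(url) != null ? UploadType.TEMPLATE
                                                 : UploadType.VOLUME;
    }
}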


Diffs
-

  
plugins/storage/image/default/src/org/apache/cloudstack/storage/datastore/driver/CloudStackImageStoreDriverImpl.java
 4796653 

Diff: https://reviews.apache.org/r/24779/diff/


Testing
---

On CloudStack 4.2 and 4.3:
Set cleanupurl to 30 seconds.  Downloaded a template; cleanup removed it from 
the database but didn't remove the template.
Downloaded a volume; the volume was cleaned up from secondary storage and the 
database.


Thanks,

David Bierce



Review Request for CLOUDSTACK-6254

2014-08-25 Thread David Bierce
Hello —

I was wondering if someone could take a look at 
https://issues.apache.org/jira/browse/CLOUDSTACK-6254.  I’ve submitted a patch 
to work around the issue in 4.2 and 4.3.  It appears fixed in 4.4+, but this is 
a bug that does cause data loss in the form of templates being deleted.  Unless 
there is a duplicate I didn’t find, this issue seems to be neglected.


Thanks,

David Bierce

Re: Review Request 24779: [CLOUDSTACK-6254] Template disappears when download cleanup

2014-08-25 Thread David Bierce

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24779/#review51442
---


- David Bierce


On Aug. 17, 2014, 3:02 a.m., David Bierce wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24779/
> ---
> 
> (Updated Aug. 17, 2014, 3:02 a.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Bugs: CLOUDSTACK-6254
> https://issues.apache.org/jira/browse/CLOUDSTACK-6254
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> [PATCH] This is a quick stab at fixing a data-loss bug.  The ultimate 
> solution is to refactor UploadManager to not use any deprecated code.  It 
> appears there is still code left over that uses the UploadVO/Dao, which no 
> longer contains data about URL transfers.  This method was hardcoded to 
> always pass Upload.Type.VOLUME as part of cleanup, which was causing 
> templates to be removed entirely from secondary storage, not just the 
> symlink on secondary storage.
> 
> Rather than try to refactor all of it out, this adds logic to determine 
> whether the cleanup task is for a volume or a template by doing a lookup on 
> the URL.  It duplicates the same logic from the calling method, but it is a 
> very minimal code change until the larger problem is fixed.
> 
> 
> Diffs
> -
> 
>   
> plugins/storage/image/default/src/org/apache/cloudstack/storage/datastore/driver/CloudStackImageStoreDriverImpl.java
>  4796653 
> 
> Diff: https://reviews.apache.org/r/24779/diff/
> 
> 
> Testing
> ---
> 
> On CloudStack 4.2 and 4.3:
> Set cleanupurl to 30 seconds.  Downloaded a template; cleanup removed it from 
> the database but didn't remove the template.
> Downloaded a volume; the volume was cleaned up from secondary storage and the 
> database.
> 
> 
> Thanks,
> 
> David Bierce
> 
>



Re: Review Request 24779: [CLOUDSTACK-6254] Template disappears when download cleanup

2014-08-25 Thread David Bierce


> On Aug. 25, 2014, 8:56 p.m., Nitin Mehta wrote:
> > plugins/storage/image/default/src/org/apache/cloudstack/storage/datastore/driver/CloudStackImageStoreDriverImpl.java,
> >  line 95
> > <https://reviews.apache.org/r/24779/diff/1/?file=662277#file662277line95>
> >
> > Thanks for working on this patch.
> > Any reason why you can't take the same approach as in master right now - 
> > that of adding an entityType param to the method? Can you please look at 
> > that code and fix it the same way, keeping it consistent and more elegant?

Because I'm new to Java and didn't know about the entityType approach :)  I'll 
clean it up and resubmit.  While that fix is more elegant, it still feels like 
the correct fix is to remove all references to the UploadVO, since the table is 
deprecated and should never have new data in it.
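
For anyone following along, my reading of the master approach is roughly the 
following shape (a paraphrase, not the exact signature):

// Paraphrased sketch: the caller, which already knows what it is deleting,
// passes the entity type down instead of the driver hardcoding VOLUME.
public interface ImageStoreDriverSketch {
    enum UploadType { VOLUME, TEMPLATE }  // stand-in for com.cloud.storage.Upload.Type

    void deleteEntityExtractUrl(String installPath, String downloadUrl,
                                UploadType entityType);
}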


- David


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24779/#review51439
-------


On Aug. 17, 2014, 3:02 a.m., David Bierce wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24779/
> ---
> 
> (Updated Aug. 17, 2014, 3:02 a.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Bugs: CLOUDSTACK-6254
> https://issues.apache.org/jira/browse/CLOUDSTACK-6254
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> [PATCH] This is a quick stab at fixing a data-loss bug.  The ultimate 
> solution is to refactor UploadManager to not use any deprecated code.  It 
> appears there is still code left over that uses the UploadVO/Dao, which no 
> longer contains data about URL transfers.  This method was hardcoded to 
> always pass Upload.Type.VOLUME as part of cleanup, which was causing 
> templates to be removed entirely from secondary storage, not just the 
> symlink on secondary storage.
> 
> Rather than try to refactor all of it out, this adds logic to determine 
> whether the cleanup task is for a volume or a template by doing a lookup on 
> the URL.  It duplicates the same logic from the calling method, but it is a 
> very minimal code change until the larger problem is fixed.
> 
> 
> Diffs
> -
> 
>   
> plugins/storage/image/default/src/org/apache/cloudstack/storage/datastore/driver/CloudStackImageStoreDriverImpl.java
>  4796653 
> 
> Diff: https://reviews.apache.org/r/24779/diff/
> 
> 
> Testing
> ---
> 
> On CloudStack 4.2 and 4.3:
> Set cleanupurl to 30 seconds.  Downloaded a template; cleanup removed it from 
> the database but didn't remove the template.
> Downloaded a volume; the volume was cleaned up from secondary storage and the 
> database.
> 
> 
> Thanks,
> 
> David Bierce
> 
>



Re: Review Request 24779: [CLOUDSTACK-6254] Template disappears when download cleanup

2014-08-27 Thread David Bierce

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24779/
---

(Updated Aug. 27, 2014, 3:46 p.m.)


Review request for cloudstack.


Changes
---

Fixes the cleanup process to only remove the template symlink instead of 
deleting the template from secondary storage.  Changed to use the method Nitin 
suggested.  This patch is tested on 4.2 using the same method as previously 
described.  Will be testing on 4.3 today.


Bugs: CLOUDSTACK-6254
https://issues.apache.org/jira/browse/CLOUDSTACK-6254


Repository: cloudstack-git


Description
---

[PATCH] This is a quick stab at fixing a data-loss bug.  The ultimate 
solution is to refactor UploadManager to not use any deprecated code.  It 
appears there is still code left over that uses the UploadVO/Dao, which no 
longer contains data about URL transfers.  This method was hardcoded to 
always pass Upload.Type.VOLUME as part of cleanup, which was causing 
templates to be removed entirely from secondary storage, not just the 
symlink on secondary storage.

Rather than try to refactor all of it out, this adds logic to determine 
whether the cleanup task is for a volume or a template by doing a lookup on 
the URL.  It duplicates the same logic from the calling method, but it is a 
very minimal code change until the larger problem is fixed.


Diffs (updated)
-

  
engine/api/src/org/apache/cloudstack/storage/image/datastore/ImageStoreEntity.java
 7ebfd0d 
  
engine/storage/image/src/org/apache/cloudstack/storage/image/store/ImageStoreImpl.java
 7bbe324 
  
engine/storage/src/org/apache/cloudstack/storage/image/BaseImageStoreDriverImpl.java
 2905f08 
  engine/storage/src/org/apache/cloudstack/storage/image/ImageStoreDriver.java 
444a6c7 
  
plugins/storage/image/default/src/org/apache/cloudstack/storage/datastore/driver/CloudStackImageStoreDriverImpl.java
 4796653 
  server/src/com/cloud/storage/StorageManagerImpl.java 2a79b0c 

Diff: https://reviews.apache.org/r/24779/diff/


Testing
---

On CloudStack 4.2 and 4.3:
Set cleanupurl to 30 seconds.  Downloaded a template; cleanup removed it from 
the database but didn't remove the template.
Downloaded a volume; the volume was cleaned up from secondary storage and the 
database.


Thanks,

David Bierce



Re: Review Request 24779: [CLOUDSTACK-6254] Template disappears when download cleanup

2014-08-28 Thread David Bierce


> On Aug. 25, 2014, 9:12 p.m., David Bierce wrote:
> >
> 
> Sebastien Goasguen wrote:
> Let me know if your patch should be applied to 4.3 as well, as I am 
> preparing 4.3.1

It appears to be fixed by a refactor from Nitin Mehta in 4.3, so no merge to 
4.3 is required.


- David


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24779/#review51442
---


On Aug. 27, 2014, 3:46 p.m., David Bierce wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24779/
> ---
> 
> (Updated Aug. 27, 2014, 3:46 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Bugs: CLOUDSTACK-6254
> https://issues.apache.org/jira/browse/CLOUDSTACK-6254
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> [PATCH] This is a quick stab at fixing a data-loss bug.  The ultimate 
> solution is to refactor UploadManager to not use any deprecated code.  It 
> appears there is still code left over that uses the UploadVO/Dao, which no 
> longer contains data about URL transfers.  This method was hardcoded to 
> always pass Upload.Type.VOLUME as part of cleanup, which was causing 
> templates to be removed entirely from secondary storage, not just the 
> symlink on secondary storage.
> 
> Rather than try to refactor all of it out, this adds logic to determine 
> whether the cleanup task is for a volume or a template by doing a lookup on 
> the URL.  It duplicates the same logic from the calling method, but it is a 
> very minimal code change until the larger problem is fixed.
> 
> 
> Diffs
> -
> 
>   
> engine/api/src/org/apache/cloudstack/storage/image/datastore/ImageStoreEntity.java
>  7ebfd0d 
>   
> engine/storage/image/src/org/apache/cloudstack/storage/image/store/ImageStoreImpl.java
>  7bbe324 
>   
> engine/storage/src/org/apache/cloudstack/storage/image/BaseImageStoreDriverImpl.java
>  2905f08 
>   
> engine/storage/src/org/apache/cloudstack/storage/image/ImageStoreDriver.java 
> 444a6c7 
>   
> plugins/storage/image/default/src/org/apache/cloudstack/storage/datastore/driver/CloudStackImageStoreDriverImpl.java
>  4796653 
>   server/src/com/cloud/storage/StorageManagerImpl.java 2a79b0c 
> 
> Diff: https://reviews.apache.org/r/24779/diff/
> 
> 
> Testing
> ---
> 
> On CloudStack 4.2 and 4.3:
> Set cleanupurl to 30 seconds.  Downloaded a template; cleanup removed it from 
> the database but didn't remove the template.
> Downloaded a volume; the volume was cleaned up from secondary storage and the 
> database.
> 
> 
> Thanks,
> 
> David Bierce
> 
>



Shell commands vs Native Java file handling

2014-09-02 Thread David Bierce
While investigating an issue with secondary storage templates, I noticed that 
all the file handling I came across shells out and executes commands like 
mkdir, rm, etc.  With the migration to Java 7, is there still a reason to 
continue that method of file handling, or could/should file operations be 
handled natively by Java in the future?
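
As a concrete example, the common shell-outs map directly onto java.nio.file 
in Java 7.  A minimal sketch (the paths are made up):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class NioFileOps {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("/tmp/template-cache");   // illustrative path
        Files.createDirectories(dir);                  // replaces: mkdir -p

        Path link = Paths.get("/tmp/template-link");
        Files.deleteIfExists(link);                    // replaces: rm -f
        Files.createSymbolicLink(link, dir);           // replaces: ln -s (new in Java 7)
    }
}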


Thanks,
David Bierce




Re: Shell commands vs Native Java file handling

2014-09-02 Thread David Bierce
Me too, but I’m new to this code base and to Java, so I didn’t know whether 
there was a rationale behind it or not.  :)

There are a few places in the SSVM where it handles symlinks, for which 
support wasn’t added until Java 7 (according to the docs); that was the only 
reason I could think of.

Thanks,
David Bierce
On Sep 2, 2014, at 12:06 PM, Mike Tutkowski  
wrote:

> If we could keep code like that all in Java, that would be my personal
> preference.
> 
> 
> On Tue, Sep 2, 2014 at 10:42 AM, David Bierce 
> wrote:
> 
>> While investigating an issue with secondary storage templates, I noticed
>> that all the file handling I came across shells out and executes commands
>> like mkdir, rm, etc.  With the migration to Java 7, is there still a reason
>> to continue that method of file handling, or could/should file operations
>> be handled natively by Java in the future?
>> 
>> 
>> Thanks,
>> David Bierce
>> 
>> 
>> 
> 
> 
> -- 
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the cloud
> <http://solidfire.com/solution/overview/?video=play>*™*



Review Request 25585: [CLOUDSTACK-2823] SystemVMs start fail on CentOS 6.4

2014-09-12 Thread David Bierce

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25585/
---

Review request for cloudstack.


Repository: cloudstack-git


Description
---

SystemVMs fail to start on CentOS 6


Diffs
-

  systemvm/patches/debian/config/etc/init.d/cloud-early-config 9152df2 

Diff: https://reviews.apache.org/r/25585/diff/


Testing
---

Code patch applied to systemvm.iso and cloud-early-config inside the systemvm 
template.  Routers start consistently after being patched.

Tested against CloudStack 4.2.1
CentOS 6.4

The patch is against 4.3, since 4.2+ is affected by the issue.


Thanks,

David Bierce



Re: CentOS KVM systemvm issue

2014-09-12 Thread David Bierce
John —

I’ve submitted our patch to work around the issue to Review Board and tied it 
to the original ticket we found.  I submitted it against 4.3, but I know 
you’ve been testing the patch on 4.2.  If someone could take a look at it for 
a sanity check, please do.  It looks like it would be an issue in all versions 
of CloudStack using CentOS/KVM as a hypervisor.

Review Request 25585: [CLOUDSTACK-2823] SystemVMs start fail on CentOS 6.4




Thanks,
David Bierce
Senior System Administrator  | Appcore

Office +1.800.735.7104
Direct +1.515.612.7801 
www.appcore.com

On Sep 12, 2014, at 1:23 PM, John Skinner  wrote:

> Actually, I believe the kernel is the problem. The hosts are running CentOS 
> 6, the systemvm is stock template, Debian 7. This does not seem to be an 
> issue on Ubuntu KVM hypervisors.
> 
> The fact that you are rebuilding systemvms on reboot is exactly why you are 
> not seeing this issue. New system VMs are usually successful, it’s when you 
> reboot them or start a stopped one where this issue shows up.
> 
> The serial port is loading, but I think the behavior is different after 
> initial boot: if you access the system VM after you reboot it, you do not 
> have anything on /proc/cmdline, and in /var/cache/cloud/cmdline the file is 
> old and does not contain the new control network IP address. However, I am 
> able to netcat the serial port between the hypervisor and the systemvm after 
> it comes up - but CloudStack will eventually force stop the VM; since it 
> doesn’t get the new control network IP address, it assumes it never started.
> 
> Which is why when we wrap that while loop to check for an empty string on 
> $cmd it works every time after that.
> 
> Change that global setting from true to false, and try to reboot a few 
> routers. I guarantee you will see this issue.
> 
> John Skinner
> Appcore
> 
> On Sep 12, 2014, at 10:48 AM, Marcus  wrote:
> 
>> You may also want to investigate on whether you are seeing a race condition
>> with /dev/vport0p1 coming on line and cloud-early-config running. It will
>> be indicated by a log line in the systemvm /var/log/cloud.log:
>> 
>> log_it "/dev/vport0p1 not loaded, perhaps guest kernel is too old."
>> 
>> Actually, if it has anything to do with the virtio-serial socket that would
>> probably be logged. Can you open a bug in Jira and provide the logs?
>> 
>> On Fri, Sep 12, 2014 at 9:36 AM, Marcus  wrote:
>> 
>>> Can you provide more info? Is the host running CentOS 6.x, or is your
>>> systemvm? What is rebooted, the host or the router, and how is it rebooted?
>>> We have what sounds like the same config (CentOS 6.x hosts, stock
>>> community provided systemvm), and are running thousands of virtual routers,
>>> rebooted regularly with no issue (both hosts and virtual routers).  One
>>> setting we may have that you may not is that our system vms are rebuilt
>>> from scratch on every reboot (recreate.systemvm.enabled=true in global
>>> settings), not that I expect this to be the problem, but might be something
>>> to look at.
>>> 
>>> On Fri, Sep 12, 2014 at 8:49 AM, John Skinner 
>>> wrote:
>>> 
>>>> I have found that on CloudStack 4.2 + (when we changed to using the
>>>> virtio-socket to send data to the systemvm) when running CentOS 6.X
>>>> cloud-early-config fails. On new systemvm creation there is a high chance
>>>> for success, but still a chance for failure. After the systemvm has been
>>>> created a simple reboot will cause start to fail every time. This has been
>>>> confirmed on 2 separate CloudStack 4.2 environments; 1 running CentOS 6.3
>>>> KVM, and another running CentOS 6.2 KVM. This can be fixed with a simple
>>>> modification to the get_boot_params function in the cloud-early-config
>>>> script. If you wrap the while read line inside of another while that checks
>>>> if $cmd returns an empty string it fixes the issue.
>>>> 
>>>> This is a pretty nasty issue for anyone running CloudStack 4.2 + on
>>>> CentOS 6.X
>>>> 
>>>> John Skinner
>>>> Appcore
>>> 
>>> 
>>> 
> 



Re: Review Request 25585: [CLOUDSTACK-2823] SystemVMs start fail on CentOS 6.4

2014-09-12 Thread David Bierce

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/25585/
---

(Updated Sept. 13, 2014, 1:01 a.m.)


Review request for cloudstack.


Changes
---

Updated the patch to include a sleep while looping on the serial interface 
waiting for messages.


Repository: cloudstack-git


Description
---

SystemVMs fail to start on CentOS 6


Diffs
-

  systemvm/patches/debian/config/etc/init.d/cloud-early-config 9152df2 

Diff: https://reviews.apache.org/r/25585/diff/


Testing
---

Code patch applied to systemvm.iso and cloud-early-config inside the systemvm 
template.  Routers start consistently after being patched.

Tested against CloudStack 4.2.1
CentOS 6.4

The patch is against 4.3, since 4.2+ is affected by the issue.


File Attachments (updated)


Updated Patch which includes a sleep
  
https://reviews.apache.org/media/uploaded/files/2014/09/13/c612e257-c83e-4a8e-8970-44d078838db6__patch.diff


Thanks,

David Bierce



Re: Review Request 25585: [CLOUDSTACK-2823] SystemVMs start fail on CentOS 6.4

2014-09-15 Thread David Bierce
Good idea.  I’ve submitted an update with a little wait in the while loop so 
as to not stress the CPU while waiting.  It looks like it missed the 4.3.1 
release; should I also submit it for 4.4.1?



Thanks,
David Bierce
Senior System Administrator  | Appcore

Office +1.800.735.7104
Direct +1.515.612.7801 
www.appcore.com

On Sep 12, 2014, at 8:01 PM, David Bierce  wrote:

> Looks good, and I was going to suggest waiting like that if it turned out
> to be a race condition. The only thing I would suggest is maybe a sleep so
> it doesn't kill CPU if for some reason the system vm never gets cmdline.
> On Sep 12, 2014 1:28 PM, "David Bierce"  wrote:
> 
>> John —
>> 
>> I’ve submitted our patch to work around the issue to Review Board and tied
>> it to the original ticket we found.  I submitted it against 4.3, but I know
>> you’ve been testing the patch on 4.2.  If someone could take a look at it
>> for a sanity check, please do.  It looks like it would be an issue in all
>> versions of CloudStack using CentOS/KVM as a hypervisor.
>> 
>> Review Request 25585: [CLOUDSTACK-2823] SystemVMs start fail on CentOS 6.4
>> 
>> 
>> 
>> 
>> Thanks,
>> David Bierce
>> Senior System Administrator  | Appcore
>> 
>> Office +1.800.735.7104
>> Direct +1.515.612.7801
>> www.appcore.com
>> 
>> On Sep 12, 2014, at 1:23 PM, John Skinner 
>> wrote:
>> 
>>> Actually, I believe the kernel is the problem. The hosts are running
>> CentOS 6, the systemvm is stock template, Debian 7. This does not seem to
>> be an issue on Ubuntu KVM hypervisors.
>>> 
>>> The fact that you are rebuilding systemvms on reboot is exactly why you
>> are not seeing this issue. New system VMs are usually successful, it’s when
>> you reboot them or start a stopped one where this issue shows up.
>>> 
>>> The serial port is loading, but I think the behavior is different after
>> initial boot: if you access the system VM after you reboot it, you do not
>> have anything on /proc/cmdline, and in /var/cache/cloud/cmdline the file is
>> old and does not contain the new control network IP address. However, I am
>> able to netcat the serial port between the hypervisor and the systemvm
>> after it comes up - but CloudStack will eventually force stop the VM; since
>> it doesn’t get the new control network IP address, it assumes it never
>> started.
>>> 
>>> Which is why when we wrap that while loop to check for an empty string
>> on $cmd it works every time after that.
>>> 
>>> Change that global setting from true to false, and try to reboot a few
>> routers. I guarantee you will see this issue.
>>> 
>>> John Skinner
>>> Appcore
>>> 
>>> On Sep 12, 2014, at 10:48 AM, Marcus  wrote:
>>> 
>>>> You may also want to investigate on whether you are seeing a race
>> condition
>>>> with /dev/vport0p1 coming on line and cloud-early-config running. It
>> will
>>>> be indicated by a log line in the systemvm /var/log/cloud.log:
>>>> 
>>>> log_it "/dev/vport0p1 not loaded, perhaps guest kernel is too old."
>>>> 
>>>> Actually, if it has anything to do with the virtio-serial socket that
>> would
>>>> probably be logged. Can you open a bug in Jira and provide the logs?
>>>> 
>>>> On Fri, Sep 12, 2014 at 9:36 AM, Marcus  wrote:
>>>> 
>>>>> Can you provide more info? Is the host running CentOS 6.x, or is your
>>>>> systemvm? What is rebooted, the host or the router, and how is it
>> rebooted?
>>>>> We have what sounds like the same config (CentOS 6.x hosts, stock
>>>>> community provided systemvm), and are running thousands of virtual
>> routers,
>>>>> rebooted regularly with no issue (both hosts and virtual routers).  One
>>>>> setting we may have that you may not is that our system vms are rebuilt
>>>>> from scratch on every reboot (recreate.systemvm.enabled=true in global
>>>>> settings), not that I expect this to be the problem, but might be
>> something
>>>>> to look at.
>>>>> 
>>>>> On Fri, Sep 12, 2014 at 8:49 AM, John Skinner <
>> john.skin...@appcore.com>
>>>>> wrote:
>>>>> 
>>>>>> I have found that on CloudStack 4.2 + (when we changed to using the
>>>>>> virtio-socket to send data to the systemvm) when running CentOS 6.X
>>>>>> cloud-early-config fails. On new systemvm creation there is a high
>> chance
>>>>>> for success, but still a chance for failure. After the systemvm has
>> been
>>>>>> created a simple reboot will cause start to fail every time. This has
>> been
>>>>>> confirmed on 2 separate CloudStack 4.2 environments; 1 running CentOS
>> 6.3
>>>>>> KVM, and another running CentOS 6.2 KVM. This can be fixed with a
>> simple
>>>>>> modification to the get_boot_params function in the cloud-early-config
>>>>>> script. If you wrap the while read line inside of another while that
>> checks
>>>>>> if $cmd returns an empty string it fixes the issue.
>>>>>> 
>>>>>> This is a pretty nasty issue for anyone running CloudStack 4.2 + on
>>>>>> CentOS 6.X
>>>>>> 
>>>>>> John Skinner
>>>>>> Appcore
>>>>> 
>>>>> 
>>>>> 
>>> 
>> 
>> 



Sorry for the Spam.

2014-09-23 Thread David Bierce
Sorry for the spam.  Apple Mail decided the Apache dev list was the 
destination for our internal mailing list.  It has been fixed.


Thanks,
David Bierce
Senior System Administrator  | Appcore

Office +1.800.735.7104
Direct +1.515.612.7801 
www.appcore.com



Cloudstack Mirror

2014-09-23 Thread David Bierce
We now have a local mirror of CloudStack:

http://mirror.appcore.com/cloudstack/

It contains both the signed CloudStack RPMs and DEBs:

/rhel for the RPM repo for YUM
/ubuntu for the DEB repo for APT

It also mirrors the system VM templates for all versions, in /systemvms.



Thanks,
David Bierce
Senior System Administrator  | Appcore

Office +1.800.735.7104
Direct +1.515.612.7801 
www.appcore.com



Pending Reviews

2014-11-20 Thread David Bierce
Ello —

I had a few patches that were posted to Review Board.

https://reviews.apache.org/r/25585/
https://reviews.apache.org/r/24779/

These patches received some comments and were updated but have otherwise been 
untouched.  Is there a normal process to accept or reject patches rather than 
letting them sit in limbo for a few months?


Thanks,
David Bierce






Re: Review Request 24779: [CLOUDSTACK-6254] Template disappears when download cleanup

2014-11-20 Thread David Bierce


> On Nov. 20, 2014, 6:33 p.m., edison su wrote:
> > seems it's already checked into master and 4.5 by Nitin? see commit: 
> > 5393387bbd4e2d3659fb0c7171e6ff347ad6a07b

This was a backport to 4.2 and 4.3.


- David


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24779/#review62374
---


On Aug. 27, 2014, 3:46 p.m., David Bierce wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24779/
> ---
> 
> (Updated Aug. 27, 2014, 3:46 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Bugs: CLOUDSTACK-6254
> https://issues.apache.org/jira/browse/CLOUDSTACK-6254
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> ---
> 
> [PATCH] This is a quick stab at fixing a data-loss bug.  The ultimate 
> solution is to refactor UploadManager to not use any deprecated code.  It 
> appears there is still code left over that uses the UploadVO/Dao, which no 
> longer contains data about URL transfers.  This method was hardcoded to 
> always pass Upload.Type.VOLUME as part of cleanup, which was causing 
> templates to be removed entirely from secondary storage, not just the 
> symlink on secondary storage.
> 
> Rather than try to refactor all of it out, this adds logic to determine 
> whether the cleanup task is for a volume or a template by doing a lookup on 
> the URL.  It duplicates the same logic from the calling method, but it is a 
> very minimal code change until the larger problem is fixed.
> 
> 
> Diffs
> -
> 
>   
> engine/api/src/org/apache/cloudstack/storage/image/datastore/ImageStoreEntity.java
>  7ebfd0d 
>   
> engine/storage/image/src/org/apache/cloudstack/storage/image/store/ImageStoreImpl.java
>  7bbe324 
>   
> engine/storage/src/org/apache/cloudstack/storage/image/BaseImageStoreDriverImpl.java
>  2905f08 
>   
> engine/storage/src/org/apache/cloudstack/storage/image/ImageStoreDriver.java 
> 444a6c7 
>   
> plugins/storage/image/default/src/org/apache/cloudstack/storage/datastore/driver/CloudStackImageStoreDriverImpl.java
>  4796653 
>   server/src/com/cloud/storage/StorageManagerImpl.java 2a79b0c 
> 
> Diff: https://reviews.apache.org/r/24779/diff/
> 
> 
> Testing
> ---
> 
> On CloudStack 4.2 and 4.3:
> Set cleanupurl to 30 seconds.  Downloaded a template; cleanup removed it from 
> the database but didn't remove the template.
> Downloaded a volume; the volume was cleaned up from secondary storage and the 
> database.
> 
> 
> Thanks,
> 
> David Bierce
> 
>



Re: pnfs support?

2015-01-15 Thread David Bierce
pNFS is NFS v4.1 with the ability to have the head end parallelized and HA, but 
you’re still reliant on the backend for any sort of replication.  Adding 
support the CloudStack way would be trivial if it just verified that the client 
on the hypervisor supported it, and even then the deployments I’ve seen would 
still allow failover to NFS v3.

For the most part, people seem to be waiting for CephFS to become primetime, 
but if you’re using the big NFS hardware solutions, I think most support pNFS 
now.

I’ve personally used pNFS with https://github.com/nfs-ganesha/ as the NFS 
server and Ceph as the backend, but that felt kind of dirty and will be less 
useful for most when CephFS goes primetime.

 
> On Jan 14, 2015, at 9:38 PM, John Kinsella  wrote:
> 
> Somebody in the silicon valley meetup just asked about pNFS [1] - I’d never 
> heard of it, but sounds interesting and in theory would negate a lot of the 
> ugliness of NFS.
> 
> Curious if anybody else is familiar with it, or if there’s a general interest 
> in having support in ACS?
> 
> John
> 1: http://www.pnfs.com/



Re: pnfs support?

2015-01-15 Thread David Bierce
>> 
> 
> Be aware that CephFS should not be used for hosting virtual machines.
> RBD, the block device is much better for that job and already support
> with KVM since CloudStack 4.0

Today, yes, don’t use CephFS for virtual machines.  We did try running CephFS 
for virtualization and it was not fun.  The setup for pNFS with Ceph wasn’t 
CephFS on the backend but an unholy union of iSCSI/CLVM/RBD; it was only for 
testing.  RBD is excellent for KVM/libvirt deployments, especially with the 
CloudStack integration.

> 
> Wido
> 
>> I’ve personally used pNFS with https://github.com/nfs-ganesha/
>>  as the NFS server and Ceph as the
>> backend, but that felt kind of dirty and will be less useful for
>> most when CephFS goes primetime.
>> 
>> 
>>> On Jan 14, 2015, at 9:38 PM, John Kinsella 
>>> wrote:
>>> 
>>> Somebody in the silicon valley meetup just asked about pNFS [1] -
>>> I’d never heard of it, but sounds interesting and in theory would
>>> negate a lot of the ugliness of NFS.
>>> 
>>> Curious if anybody else is familiar with it, or if there’s a
>>> general interest in having support in ACS?
>>> 
>>> John 1: http://www.pnfs.com/
>> 
>> 



Console Proxy Change

2015-01-25 Thread David Bierce
Ello --

I’ve been looking into different ways to use the console proxy and wanted to 
get other people’s input before diving in.  Talking with other CloudStack 
users, they scrape CloudStack logs and VNC directly to the hypervisor.

The major change I was looking at would be to add to the console, or at least 
somewhere the console could link to, a web client like noVNC or Guacamole, 
then modify the console proxy to create a WebSocket for the console client to 
use, instead of displaying the VNC console itself.

The approach would be similar to how OpenStack does console access.  Their 
novnc-proxy daemon could even be a mostly drop-in enhancement with some 
advanced serial console features, but the agent could also be extended to 
handle the authentication and proxy/websockifying.
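
To make the websockify piece concrete, here is a skeletal sketch assuming a 
JSR-356 (javax.websocket) runtime; the endpoint path, the hardcoded VNC 
address, and the missing authentication are all placeholders, not a real 
design:

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.nio.ByteBuffer;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/console")
public class VncWebSocketProxy {
    private Socket vnc;

    @OnOpen
    public void onOpen(final Session ws) throws IOException {
        // Placeholder: real code would validate a console ticket and look up
        // the VM's hypervisor host/port instead of hardcoding them.
        vnc = new Socket("hv01.example.com", 5900);
        new Thread(new Runnable() {
            public void run() {
                byte[] buf = new byte[8192];
                try {
                    InputStream in = vnc.getInputStream();
                    int n;
                    while ((n = in.read(buf)) != -1) {  // pump VNC -> browser
                        ws.getBasicRemote().sendBinary(ByteBuffer.wrap(buf, 0, n));
                    }
                } catch (IOException ignored) {
                }
            }
        }).start();
    }

    @OnMessage
    public void onMessage(ByteBuffer msg, Session ws) throws IOException {
        byte[] data = new byte[msg.remaining()];        // pump browser -> VNC
        msg.get(data);
        vnc.getOutputStream().write(data);
    }

    @OnClose
    public void onClose(Session ws) throws IOException {
        vnc.close();
    }
}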

Is this a horrible, awful idea?

David Bierce

Re: Console Proxy Change

2015-01-26 Thread David Bierce
Another part of the grand plan is that it would allow the removal of the 
DNS/wildcard SSL configuration required to use a console, or at least make the 
SSL portion a bit more flexible.


> On Jan 25, 2015, at 11:26 PM, Marcus  wrote:
> 
> I've had some experience with this, in particular with KVM. For
> administrators we distributed a simple script that would call virt-viewer,
> which is more or less a VNC client that is distributed along with tools
> like virsh. It knows how to connect to a VM console by the name defined in
> libvirt, and can do so over ssh using your admin credentials for the
> hypervisor. The script could use api access to know which hypervisor and VM
> name to pass to virt-viewer, but what we did was use a custom URL handler
> in our browsers and then tweaked the console button to fire that off instead
> of opening a console proxy window. We also had an AppleScript that would
> launch "chicken of the VNC" in a similar manner by creating an ssh tunnel
> and connecting CotVNC locally. Sounds tricky but was relatively easy for
> the admins to script up without developer help.
> 
> For customers, we did much along the lines of what you're talking about,
> since it distributes the VNC work to individual browsers and scales.
> There's a websocket proxy often used with novnc. We had to first make some
> modifications to CloudStack in the form of an api call that would return
> hypervisor IP and VNC port given a VM id. Then we could feed that to novnc,
> and we additionally had to modify the proxy for authentication. I honestly
> don't remember exactly how it was all put together, I just remember the api
> call and some minor changes to the proxy and novnc itself. The new api
> call would be unnecessary if the proxy were integrated.
> 
> I think it would be great if the console proxy were to get revamped to host
> novnc+websocket proxy. It would be faster and more featureful. Even just a
> websocket proxy would be nice, as people would be free to integrate their
> own web VNC with it.
> On Jan 25, 2015 7:27 PM, "David Bierce"  wrote:
> 
>> Ello --
>> 
>> I’ve been looking into different ways to use the console proxy and wanted
>> to get other people’s input before diving in.  Talking with other
>> CloudStack users, they scrape CloudStack logs and VNC directly to the
>> hypervisor.
>> 
>> The major change I was looking at would be to add to the console, or at
>> least somewhere the console could link to, a web client like noVNC or
>> Guacamole, then modify the console proxy to create a WebSocket for the
>> console client to use, instead of displaying the VNC console itself.
>> 
>> The approach would be similar to how OpenStack does console access.  Their
>> novnc-proxy daemon could even be a mostly drop-in enhancement with some
>> advanced serial console features, but the agent could also be extended to
>> handle the authentication and proxy/websockifying.
>> 
>> Is this a horrible, awful idea?
>> 
>> David Bierce