Min,

Why are objects being compressed before being sent to S3?

Thanks,
-John

On Jun 17, 2013, at 12:24 PM, Min Chen <min.c...@citrix.com> wrote:

> Hi Tom,
> 
>       Thanks for your testing. Glad to hear that multipart is working fine
> with Cloudian. Regarding your question about the .gz template, that behavior
> is expected: we upload the template to S3 in its .gz format, and only when
> the template is used and downloaded to primary storage do we decompress it
> in the staging area.
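>
> As a quick sanity check from outside CloudStack, something like the sketch
> below can confirm that the object sitting in S3 is still gzip-compressed.
> (The endpoint, credentials, bucket and key are only placeholders, not values
> taken from the branch.)
>
>     from boto.s3.connection import S3Connection, OrdinaryCallingFormat
>
>     # Placeholder endpoint/credentials for an S3-compatible store (e.g. Cloudian)
>     conn = S3Connection('ACCESS_KEY', 'SECRET_KEY',
>                         host='s3.example.com', is_secure=False,
>                         calling_format=OrdinaryCallingFormat())
>     key = conn.get_bucket('cloudstack-secondary').get_key(
>         'template/tmpl/2/201/example-template.vhd.gz')
>
>     data = key.get_contents_as_string()
>     # gzip streams start with the magic bytes 0x1f 0x8b
>     print 'still gzipped:', data[:2] == '\x1f\x8b'
>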
>       We will look at the bugs you filed and update them accordingly.
> 
>       -min
> 
> On 6/17/13 12:31 AM, "Thomas O'Dowd" <tpod...@cloudian.com> wrote:
> 
>> Thanks Min - I filed 3 small issues today. I've a couple more, but I want
>> to try to reproduce them before filing them, and I've no time right now.
>> Please let me know if you need any further detail on any of these.
>> 
>> https://issues.apache.org/jira/browse/CLOUDSTACK-3027
>> https://issues.apache.org/jira/browse/CLOUDSTACK-3028
>> https://issues.apache.org/jira/browse/CLOUDSTACK-3030
>> 
>> An example of the other issues I'm running into is that when I upload
>> a .gz template on regular NFS storage, it is automatically decompressed
>> for me, whereas with S3 the template remains a .gz file. Is this
>> correct or not? Also, perhaps related: after successfully uploading
>> the template to S3 and then trying to start an instance using it, I can
>> select it and go all the way to the last screen, where I think the action
>> button says "launch instance" or something, and it fails with a resource
>> unreachable error. I'll have to dig up the error later and file the bug,
>> as my machine got rebooted over the weekend.
>> 
>> The multipart upload looks like it is working correctly though, and I can
>> verify that the checksums etc. match what they should be.
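>>
>> (One simple way to do that kind of check is roughly the sketch below - just
>> comparing the MD5 of the local file with the MD5 of the downloaded object;
>> bucket, key and file names are made up. The ETag of a multipart upload is
>> not a plain MD5, so comparing downloaded content is easier.)
>>
>>     import hashlib
>>     from boto.s3.connection import S3Connection, OrdinaryCallingFormat
>>
>>     conn = S3Connection('ACCESS_KEY', 'SECRET_KEY',
>>                         host='s3.example.com', is_secure=False,
>>                         calling_format=OrdinaryCallingFormat())
>>     key = conn.get_bucket('cloudstack-secondary').get_key('big-template.vhd.gz')
>>
>>     # MD5 of the local copy vs. MD5 of the object as it comes back from the store
>>     local_md5 = hashlib.md5(open('big-template.vhd.gz', 'rb').read()).hexdigest()
>>     remote_md5 = hashlib.md5(key.get_contents_as_string()).hexdigest()
>>     print 'checksums match:', local_md5 == remote_md5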
>> 
>> Tom.
>> 
>> On Fri, 2013-06-14 at 16:55 +0000, Min Chen wrote:
>>> Hi Tom,
>>> 
>>>     You can file a JIRA ticket for the object_store branch by prefixing your
>>> bug with "Object_Store_Refactor" and mentioning that it uses a build from
>>> object_store. Here is an example bug filed by Sangeetha against an
>>> object_store branch build:
>>> https://issues.apache.org/jira/browse/CLOUDSTACK-2528.
>>>     If you use devcloud for testing, you may run into an issue where the SSVM
>>> cannot access a public URL when you register a template, so template
>>> registration will fail. You may have to set up an internal web server inside
>>> devcloud and post the template to be registered there, so that you have a
>>> URL devcloud can access. We mainly used devcloud to run our TestNG
>>> automation tests earlier, and then switched to a real hypervisor for real
>>> testing.
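>>>
>>> For the internal web server, a throwaway HTTP server inside devcloud is
>>> enough; for example (port is arbitrary, run it from the directory holding
>>> the template):
>>>
>>>     # serves the current working directory over HTTP
>>>     import SimpleHTTPServer
>>>     import SocketServer
>>>
>>>     httpd = SocketServer.TCPServer(('0.0.0.0', 8000),
>>>                                    SimpleHTTPServer.SimpleHTTPRequestHandler)
>>>     httpd.serve_forever()
>>>
>>> Then register the template with a URL along the lines of
>>> http://<devcloud-ip>:8000/example-template.vhd.gz (placeholder name).
>>>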
>>>     Thanks
>>>     -min
>>> 
>>> On 6/14/13 1:46 AM, "Thomas O'Dowd" <tpod...@cloudian.com> wrote:
>>> 
>>>> Edison,
>>>> 
>>>> I've got devcloud running along with the object_store branch and I've
>>>> finally been able to test a bit today.
>>>> 
>>>> I found some issues (or things that I think are bugs) and would like to
>>>> file them. I know where the bug database is and I have an account, but
>>>> what is the best way to file bugs against this particular branch? I guess
>>>> I can select "Future" as the version? How else are feature branches
>>>> usually identified in issues? Perhaps in the subject? Please let me know
>>>> the preference.
>>>> 
>>>> Also, can you describe (or point me at a document) what the best way to
>>>> test against the object_store branch is? So far I have been doing the
>>>> following, but I'm not sure it is the best approach:
>>>> 
>>>> a) setup devcloud.
>>>> b) stop any instances on devcloud from previous runs
>>>>     xe vm-shutdown --multiple
>>>> c) check out and update the object_store branch.
>>>> d) clean build as described in devcloud doc (ADIDD for short)
>>>> e) deploydb (ADIDD)
>>>> f) start management console (ADIDD) and wait for it.
>>>> g) deploysvr (ADIDD) in another shell.
>>>> h) on devcloud machine use xentop to wait for 2 vms to launch.
>>>>   (I'm not sure what the nfs vm is used for here??)
>>>> i) login on gui -> infra -> secondary and remove nfs secondary storage
>>>> j) add s3 secondary storage (using cache of old secondary storage?)
>>>> 
>>>> Then the rest of the testing starts from here... (and also perhaps in step j)
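>>>>
>>>> For step j, a quick sanity check that the S3 endpoint, credentials and
>>>> bucket actually respond (before filling in the add-storage form) looks
>>>> roughly like this - again, every name below is a placeholder:
>>>>
>>>>     from boto.s3.connection import S3Connection, OrdinaryCallingFormat
>>>>
>>>>     conn = S3Connection('ACCESS_KEY', 'SECRET_KEY',
>>>>                         host='s3.example.com', is_secure=False,
>>>>                         calling_format=OrdinaryCallingFormat())
>>>>     # listing buckets and fetching the target bucket proves the endpoint,
>>>>     # credentials and bucket name are what the add-storage form expects
>>>>     print [b.name for b in conn.get_all_buckets()]
>>>>     print conn.get_bucket('cloudstack-secondary').name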
>>>> 
>>>> Thanks,
>>>> 
>>>> Tom.
>>>> -- 
>>>> Cloudian KK - http://www.cloudian.com/get-started.html
>>>> Fancy 100TB of full featured S3 Storage?
>>>> Checkout the Cloudian® Community Edition!
>>>> 
>>> 
>> 
>> -- 
>> Cloudian KK - http://www.cloudian.com/get-started.html
>> Fancy 100TB of full featured S3 Storage?
>> Checkout the Cloudian® Community Edition!
>> 
> 
