Min,
Cool. I just wanted to make sure we weren't compressing the template and
template.properties …
Thanks for the clarification,
-John
On Jun 17, 2013, at 12:49 PM, Min Chen wrote:
> John,
>
> Let me clarify, we didn't do extra compression before sending to S3.
> Only
> when user pr
John,
Let me clarify: we didn't do extra compression before sending to S3. Only
when the user provides a URL pointing to a compressed template during
registration, we will just download that template to S3 without the
decompression step we currently perform for NFS. If the register
url pro
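The behavior Min describes above (a compressed template goes to S3 as-is, and decompression happens only when the template is later staged to primary storage) can be sketched as follows. This is a minimal illustration, not CloudStack code; the function names and the in-memory payload are hypothetical:

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"  # first two bytes of any gzip stream

def is_gzip_compressed(data: bytes) -> bool:
    """Detect gzip by its magic number rather than by file extension."""
    return data[:2] == GZIP_MAGIC

def stage_to_primary(data: bytes) -> bytes:
    """Decompress only at staging time, mirroring the described flow:
    the object stored in S3 keeps its original .gz form."""
    if is_gzip_compressed(data):
        return gzip.decompress(data)
    return data

# A gzipped template passes through S3 untouched and is only
# decompressed when staged to primary storage.
template = gzip.compress(b"fake template payload")
assert is_gzip_compressed(template)
assert stage_to_primary(template) == b"fake template payload"
```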
Min,
Why are objects being compressed before being sent to S3?
Thanks,
-John
On Jun 17, 2013, at 12:24 PM, Min Chen wrote:
> Hi Tom,
>
> Thanks for your testing. Glad to hear that multipart is working fine by
> using Cloudian. Regarding your questions about .gz template, that behavior
>
Hi Tom,
Thanks for your testing. Glad to hear that multipart is working fine by
using Cloudian. Regarding your question about the .gz template, that
behavior is as expected. We will upload it to S3 in its .gz format. Only
when the template is used and downloaded to primary storage, we will us
Thanks Min - I filed 3 small issues today. I've a couple more but I want
to try and repeat them again before I file them and I've no time right
now. Please let me know if you need any further detail on any of these.
https://issues.apache.org/jira/browse/CLOUDSTACK-3027
https://issues.apache.org/ji
Hi Tom,
You can file a JIRA ticket for the object_store branch by prefixing your
bug summary with "Object_Store_Refactor" and mentioning that it is using a
build from object_store. Here is an example bug filed by Sangeetha against
the object_store branch build:
https://issues.apache.org/jira/browse/CLOUDSTA
Edison,
I've got devcloud running along with the object_store branch and I've
finally been able to test a bit today.
I found some issues (or things that I think are bugs) and would like to
file a few issues. I know where the bug database is and I have an
account but what is the best way to file b
>> Sent: Friday, June 07, 2013 7:54 AM
>> To: dev@cloudstack.apache.org
>> Cc: Kelly McLaughlin
>> Subject: Re: Object based Secondary storage.
>>
>> Thomas,
>>
>> The AWS API explicitly states the ETag is not guaranteed to be an integrity
>> ha
> -----Original Message-----
> From: John Burwell [mailto:jburw...@basho.com]
> Sent: Friday, June 07, 2013 7:54 AM
> To: dev@cloudstack.apache.org
> Cc: Kelly McLaughlin
> Subject: Re: Object based Secondary storage.
>
> Thomas,
>
> The AWS API explicitly states
>> The ETag computed by Amazon S3 is:
>> "70e1860be687d43c039873adef4280f2-3"
>>
>> DEBUG: Sending request method_string='POST',
>> uri='/fixes/icecake/systdfdfdfemvm.iso1?uploadId=vdkPSAtaA7g.fdfdfdfdf..iaKRNW_8QGz.bXdfdfdfdfdfkFXwUwL
>>
>> DEBUG: Response: {'status': 200, 'headers': {'server': 'AmazonS3',
>> 'transfer-encoding': 'chunked', 'connection': 'Keep-Alive',
>> 'x-amz-request-id': '8DFF5D8025E58E99', 'cache-control': 'proxy-revalidate',
>> 'date': 'Thu, 06 Jun 2013 22:39:47 GMT', 'content-type': 'application/xml'},
>> 'reason': 'OK', 'data': '<?xml version="1.0" encoding="UTF-8"?>\n
>> <CompleteMultipartUploadResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
>> ...<Key>fixes/icecake/systemvm.iso1</Key>...'}
Hi John,
I wasn't actually calculating the MD5 explicitly. I traced the code to the
ServiceUtils.downloadObjectToFile method from the Amazon S3 SDK; my
invocation of S3Utils.getObject failed at the following code in ServiceUtils:
byte[] clientSideHash = null;
byte[] serverSideHash = null;
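The hash comparison that fails here is rooted in how S3 forms ETags for multipart uploads: instead of the object's MD5, S3 returns the MD5 of the concatenated per-part MD5 digests, suffixed with the part count, which matches the "-3" suffix in the ETag quoted earlier in this thread. A small sketch of that computation (part size and payload are made up for illustration):

```python
import hashlib

def multipart_etag(data: bytes, part_size: int) -> str:
    """Compute the ETag S3 reports for a multipart upload:
    MD5 over the concatenated binary MD5s of each part, plus '-<parts>'."""
    part_md5s = [
        hashlib.md5(data[i:i + part_size]).digest()
        for i in range(0, len(data), part_size)
    ]
    return hashlib.md5(b"".join(part_md5s)).hexdigest() + f"-{len(part_md5s)}"

data = b"x" * (10 * 1024)
plain_md5 = hashlib.md5(data).hexdigest()
mp_etag = multipart_etag(data, part_size=4 * 1024)  # uploaded as 3 parts

assert mp_etag.endswith("-3")
# The multipart ETag is NOT the object's MD5, so a client-side
# MD5-vs-ETag comparison will always mismatch for such objects.
assert mp_etag.split("-")[0] != plain_md5
```

This is why a client that blindly compares its locally computed MD5 against the ETag reports corruption on perfectly good multipart-uploaded objects.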
Min,
Are you calculating the MD5 or letting the Amazon client do it?
Thanks,
-John
On Jun 6, 2013, at 4:54 PM, Min Chen wrote:
> Thanks Tom. Indeed I have an S3 question that needs some advice from some
> S3 experts. To support uploading objects > 5G, I have used
> TransferManager.upload to upload o
Thanks Tom. Indeed I have an S3 question that needs some advice from some
S3 experts. To support uploading objects > 5G, I have used
TransferManager.upload to upload objects to S3; the upload went fine and
objects are successfully put to S3. However, later on when I am using
"s3cmd get " to retrieve this objec
Thanks Min. I've printed out the material and am reading new threads.
Can't comment much yet until I understand things a bit more.
Meanwhile, feel free to hit me up with any S3 questions you have. I'm
looking forward to playing with the object_store branch and testing it
out.
Tom.
On Wed, 2013-0
Welcome Tom. You can check out this FS
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Backup+Object+Store+Plugin+Framework
for secondary storage architectural work done in the object_store branch.
You may also check out the following recent threads regarding 3 major
technical questions
Hi all,
I'm new here. I'm interested in Cloudstack Secondary storage using S3
object stores. I checked out and built cloudstack today and found the
object_store branch (not built it yet). I haven't done Java since 2004
(mostly erlang/C++/python) so I'm rusty but I know the finer parts of
S3 :-)
A