[ceph-users] ceph firefly 0.80.4 unable to use rbd map and ceph fs mount

2014-07-21 Thread 漆晓芳
Hi all,
I'm doing tests with firefly 0.80.4. I want to test performance with
tools such as fio and iozone. When I decided to test RBD storage performance
with fio, I ran the following commands on a client node:


rbd create img1 --size 1024 --pool data    (this command completed without problems)


rbd map img1 --pool data --id admin --keyring /etc/ceph/ceph.client.admin.keyring
Then something unexpected happened: the client node crashed, and the screen
showed messages such as:
[81077ae0] ? flush_kthread_worker
and many other messages like that.
I had to power the node off and restart it to make it work again.


A similar thing happened when I tried to mount CephFS with the kernel driver.
I ran the following commands:
mkdir /mnt/test
mount -t ceph 192.168.50.191:/ /mnt/test -o
name=admin,secret=AQATSKdNGBcwLhAAnNDKnH65FmVKpXZJVasUeQ==
The node also crashed and couldn't work any more, so I had to restart it.
I'm puzzled about this problem and wonder whether it lies in my Linux kernel
or somewhere else. Thanks for any help!


My cluster is made up of one monitor, six OSDs and a client.
OS: Ubuntu 12.04 LTS
Ceph version: firefly 0.80.4




yours sincerely,
  ifstillfly









___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph firefly 0.80.4 unable to use rbd map and ceph fs mount

2014-07-21 Thread Wido den Hollander

On 07/21/2014 11:32 AM, 漆晓芳 wrote:

Hi all,
 I'm doing tests with firefly 0.80.4. I want to test performance
with tools such as fio and iozone. When I decided to test RBD storage
performance with fio, I ran the following commands on a client node:



Which kernel on the client? Can you try the trusty 3.13 kernel on the 
12.04 client?
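On a 12.04 client that would be the precise HWE stack; a rough, untested
sketch of pulling it in (assuming the linux-generic-lts-trusty package is
available for your release):

  sudo apt-get install --install-recommends linux-generic-lts-trusty
  sudo reboot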


Wido


rbd create img1 --size 1024 --pool data    (this command completed without problems)

rbd map img1 --pool data --id admin --keyring
/etc/ceph/ceph.client.admin.keyring
Then something unexpected happened: the client node crashed, and the screen
showed messages such as:
[81077ae0] ? flush_kthread_worker
and many other messages like that.
I had to power the node off and restart it to make it work again.

A similar thing happened when I tried to mount CephFS with the kernel driver.
I ran the following commands:
mkdir /mnt/test
mount -t ceph 192.168.50.191:/ /mnt/test -o
name=admin,secret=AQATSKdNGBcwLhAAnNDKnH65FmVKpXZJVasUeQ==
The node also crashed and couldn't work any more, so I had to restart it.
I'm puzzled about this problem and wonder whether it lies in my Linux
kernel or somewhere else. Thanks for any help!

My cluster is made up of one monitor, six OSDs and a client.
OS: Ubuntu 12.04 LTS
Ceph version: firefly 0.80.4


yours sincerely,
   ifstillfly









___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph firefly 0.80.4 unable to use rbd map and ceph fs mount

2014-07-21 Thread Ilya Dryomov
On Mon, Jul 21, 2014 at 1:58 PM, Wido den Hollander  wrote:
> On 07/21/2014 11:32 AM, 漆晓芳 wrote:
>>
>> Hi,all:
>>  I 'm dong tests with firefly 0.80.4,I want to test the performance
>> with tools such as FIO,iozone,when I decided to test the rbd storage
>> performance with fio,I ran commands on a client node as follows:
>>
>
> Which kernel on the client? Can you try the trusty 3.13 kernel on the 12.04
> client?
>
> Wido
>
>> rbd create img1 --size 1024 --pool data(this command went on well)
>>
>> rbd map img1 --pool data -id admin --keyring
>> /etc/ceph/ceph.client.admin.keyring
>> then unexpected thing happened,the client node crashed ,the screen
>> showed messages that with:
>> [81077ae0]?flush_kthread _worker
>> and many other messages like that.
>> I have to stop the node and restart to make it work again.
>>
>> similary thing happed when I tried to mount ceph FS with kernel driver .
>> I ran command as follows:
>> mkdir /mnt/test
>> mount -t ceph 192.168.50.191:/ /mnt/test -o
>> name=admin,secret=AQATSKdNGBcwLhAAnNDKnH65FmVKpXZJVasUeQ==
>> the node also crashed and can't work any more ,then I had to restart the
>> node .
>> I 'm puzzled about the problem,I wonder if the problem lies in my linux
>> kernal or any other issues.Thanks for any help!
>>
>> my cluster are made up of one monitor ,six osds and a client.
>> os:ubuntu 12.04 LTS
>> ceph version:firefly 0.80.4

It looks like the 12.04 LTS kernel can be as old as 3.2.  This is most
probably a known bug in kernels older than 3.8 (I think) which
manifests as a crash on 'rbd map' or a cephfs mount if the kernel is missing
required feature bits, which it of course is in this case, because you
are running the latest firefly point release against a kernel that is a
couple of years old.  I'll see if the fix can be cleanly backported.

A word of advice: when you think the problem may lie in your kernel
(and even when you don't), specify the kernel version you are running,
not just the "os".
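For the record, that is just the output of:

  uname -r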

Thanks,

Ilya
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Is possible to use one SSD journal hard disk for 3 OSD ?

2014-07-21 Thread 不坏阿峰
I have only one SSD and want to improve Ceph performance.
Is it possible to use one SSD as the journal disk for 3 OSDs?

If it is possible, how do I configure it?
Many thanks
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Is possible to use one SSD journal hard disk for 3 OSD ?

2014-07-21 Thread Iban Cabrillo
Dear,
  I am not an expert, but yes, this is possible.
  I have a RAID1 SAS disk holding the journals for 3 SATA OSDs (maybe this is
not the smartest solution).

  When you prepare the OSDs, for example:

  ceph-deploy --verbose osd prepare cephosd01:/dev/"sdd_device":"path_to
journal_ssddisk_X"

  path_to journal_ssddisk_X must exist (mkdir -p /var/ceph/osd1; touch
/var/ceph/osd1/journal)
  for example:

  ceph-deploy --verbose osd prepare
cephosd01:/dev/sdg:/var/ceph/osd1/journal
  ceph-deploy --verbose osd prepare
cephosd01:/dev/sdf:/var/ceph/osd2/journal
  ceph-deploy --verbose osd prepare
cephosd01:/dev/sdh:/var/ceph/osd3/journal

Then activate the OSDs...

  ceph-deploy --verbose osd activate
cephosd01:/dev/sdg1:/var/ceph/osd1/journal
  ceph-deploy --verbose osd activate
cephosd01:/dev/sdf1:/var/ceph/osd2/journal
  ceph-deploy --verbose osd activate
cephosd01:/dev/sdh1:/var/ceph/osd3/journal
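A variant (untested sketch; device names here are only examples): if you want
the journals on the raw SSD rather than in a file, you can carve the SSD into
one partition per OSD and pass the partitions to ceph-deploy directly, e.g.
with /dev/sdk as the SSD:

  parted -s /dev/sdk mklabel gpt
  parted -s /dev/sdk mkpart primary 0% 33%
  parted -s /dev/sdk mkpart primary 33% 66%
  parted -s /dev/sdk mkpart primary 66% 100%
  ceph-deploy --verbose osd prepare cephosd01:/dev/sdg:/dev/sdk1
  ceph-deploy --verbose osd prepare cephosd01:/dev/sdf:/dev/sdk2
  ceph-deploy --verbose osd prepare cephosd01:/dev/sdh:/dev/sdk3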

regards, I


2014-07-21 12:30 GMT+02:00 不坏阿峰 :

> i have only one SSD want to improve Ceph perfermnace.
> Is possible to use one SSD journal hard disk for 3 OSD ?
>
> if it is possible ,how to config it ?
> many thanks
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 

Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
PGP PUBLIC KEY:
http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC

Bertrand Russell:
*"El problema con el mundo es que los estúpidos están seguros de todo y los
inteligentes están llenos de dudas*"
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-extras for rhel7

2014-07-21 Thread Simon Ironside

Hi,

Is there going to be ceph-extras repos for rhel7?

Unless I'm very much mistaken I think the RHEL 7.0 release qemu-kvm 
packages don't support RBD.
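A rough way to check (using qemu-img as a proxy for which block drivers were
built in) is something like:

  qemu-img --help | grep rbd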


Cheers,
Simon.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] osd crashed with assert at add_log_entry

2014-07-21 Thread Sahana Lokeshappa
Hi All,

I have a Ceph cluster with 3 monitors and 3 OSD nodes (3 OSDs in each node).

While I/O was going on, I rebooted one OSD node, which hosts osd.6, osd.7 and
osd.8.

osd.0 and osd.2 crashed with assert(e.version > info.last_update) in
PG::add_log_entry:

2014-07-17 17:54:14.893962 7f91f3660700 -1 osd/PG.cc: In function 'void 
PG::add_log_entry(pg_log_entry_t&, ceph::bufferlist&)' thread 7f91f3660700 time 
2014-07-17 17:54:13.252064
osd/PG.cc: 2619: FAILED assert(e.version > info.last_update)
ceph version andisk-sprint-2-drop-3-390-g2dbd85c 
(2dbd85c94cf27a1ff0419c5ea9359af7fe30e9b6)
1: (PG::add_log_entry(pg_log_entry_t&, ceph::buffer::list&)+0x481) [0x733a61]
2: (PG::append_log(std::vector >&, eversion_t, ObjectStore::Transaction&, 
bool)+0xdf) [0x74483f]
3: 
(ReplicatedBackend::sub_op_modify(std::tr1::shared_ptr)+0xcfe) 
[0x8193be]
4: 
(ReplicatedBackend::handle_message(std::tr1::shared_ptr)+0x4a6)
 [0x904586]
5: (ReplicatedPG::do_request(std::tr1::shared_ptr, 
ThreadPool::TPHandle&)+0x2db) [0x7aedcb]
6: (OSD::dequeue_op(boost::intrusive_ptr, 
std::tr1::shared_ptr, ThreadPool::TPHandle&)+0x459) [0x635719]
7: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x346) 
[0x635ce6]
8: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x8ce) [0xa4a1ce]
9: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0xa4c420]
10: (()+0x8182) [0x7f920f579182]
11: (clone()+0x6d) [0x7f920d91a30d]


Raised tracker: http://tracker.ceph.com/issues/8887

Logs are attached to the tracker.



Thanks
Sahana Lokeshappa
Test Development Engineer I
3rd Floor, Bagmane Laurel, Bagmane Tech Park
C V Raman nagar, Bangalore 560093
T: +918042422283
sahana.lokesha...@sandisk.com




PLEASE NOTE: The information contained in this electronic mail message is 
intended only for the use of the designated recipient(s) named above. If the 
reader of this message is not the intended recipient, you are hereby notified 
that you have received this message in error and that any review, 
dissemination, distribution, or copying of this message is strictly prohibited. 
If you have received this communication in error, please notify the sender by 
telephone or e-mail (as shown above) immediately and destroy any and all copies 
of this message in your possession (whether hard copies or electronically 
stored copies).

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] MDS crash when running a standby one

2014-07-21 Thread John Spray
For the question of OSD failures causing MDS crashes, there are many
places where the MDS asserts that OSD operations succeeded (grep the
code for "assert(r == 0)") -- we could probably do a better job of
handling these, e.g. log the OSD error and respawn rather than
assert'ing.
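For reference, something like the following against a Ceph source checkout
turns up those call sites:

  grep -rn 'assert(r == 0)' src/mds/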

John

On Sat, Jul 19, 2014 at 11:13 AM, Florent B  wrote:
> Hi,
>
> Is it a known issue ? Has it been fixed in recent Firefly releases ?
>
> On 07/09/2014 03:21 PM, Yan, Zheng wrote:
>> there is memory leak bug in standby replay code, your issue is likely
>> caused by it.
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] radosgw-agent failed to parse

2014-07-21 Thread Peter

Hello again,

I couldn't find 'http://us-secondary.example.comhttp://us-secondary.example.com/'
in any zone or region config files. How could it be getting the URL from
someplace else if I am specifying it as a command line option after
radosgw-agent?
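The invocation follows the usual shape, roughly like the following (keys
elided, hostnames as in the configs below):

  radosgw-agent -v \
      --src-access-key KEY --src-secret-key SECRET \
      --source http://us-master.example.com:80 \
      --dest-access-key KEY --dest-secret-key SECRET \
      http://us-secondary.example.com:80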



Here is region config:


{ "name": "us",
  "api_name": "us",
  "is_master": "True",
  "endpoints": [
"http:\/\/us-master.example.com:80\/"],
  "master_zone": "us-master",
  "zones": [
{ "name": "us-master",
  "endpoints": [
"http:\/\/us-master.example.com:80\/"],
  "log_meta": "true",
  "log_data": "true"},
{ "name": "us-secondary",
  "endpoints": [
"http:\/\/us-master.example.com:80\/"],
  "log_meta": "true",
  "log_data": "true"}
],
  "placement_targets": [
   {
 "name": "default-placement",
 "tags": []
   }
  ],
  "default_placement": "default-placement"}


I also get the above when I navigate to
http://us-master.example.com/admin/config and
http://us-secondary.example.com/admin/config.


us-master zone looks like this:


{ "domain_root": ".us-master.domain.rgw",
  "control_pool": ".us-master.rgw.control",
  "gc_pool": ".us-master.rgw.gc",
  "log_pool": ".us-master.log",
  "intent_log_pool": ".us-master.intent-log",
  "usage_log_pool": ".us-master.usage",
  "user_keys_pool": ".us-master.users",
  "user_email_pool": ".us-master.users.email",
  "user_swift_pool": ".us-master.users.swift",
  "user_uid_pool": ".us-master.users.uid",
  "system_key": { "access_key": "EA02UO07DA8JJJX7ZIPJ", "secret_key": 
"InmPlbQhsj7dqjdNabqkZaqR8ShWC6fS0XVo"},

  "placement_pools": [
{ "key": "default-placement",
  "val": { "index_pool": ".us-master.rgw.buckets.index",
   "data_pool": ".us-master.rgw.buckets"}
}
  ]
}


us-secondary zone:


{ "domain_root": ".us-secondary.domain.rgw",
  "control_pool": ".us-secondary.rgw.control",
  "gc_pool": ".us-secondary.rgw.gc",
  "log_pool": ".us-secondary.log",
  "intent_log_pool": ".us-secondary.intent-log",
  "usage_log_pool": ".us-secondary.usage",
  "user_keys_pool": ".us-secondary.users",
  "user_email_pool": ".us-secondary.users.email",
  "user_swift_pool": ".us-secondary.users.swift",
  "user_uid_pool": ".us-secondary.users.uid",
  "system_key": { "access_key": "EA02UO07DA8JJJX7ZIPJ", "secret_key": 
"InmPlbQhsj7dqjdNabqkZaqR8ShWC6fS0XVo"},

  "placement_pools": [
{ "key": "default-placement",
  "val": { "index_pool": ".us-secondary.rgw.buckets.index",
   "data_pool": ".us-secondary.rgw.buckets"}
}
  ]
}


The us-master user exists on the us-master cluster gateway, and the
us-secondary user exists on the us-secondary cluster gateway. Both gateway
users have the same access and secret key. Should the us-master and
us-secondary users exist on both clusters?


I can resolve us-master.example.com and us-secondary.example.com from
both gateways.



Thanks

On 09/07/14 22:20, Craig Lewis wrote:

Just to ask a couple obvious questions...

You didn't accidentally
put 'http://us-secondary.example.comhttp://us-secondary.example.com/' in any
of your region or zone configuration files?  The fact that it's missing the
:80 makes me think it's getting that URL from someplace that isn't the
command line.


You do have both system users on both clusters, with the same access 
and secret keys?


You can resolve us-secondary.example.com from this host?



I tested URLs of the form http://us-secondary.example.com/ and 
http://us-secondary.example.com:80 in my setup, and both work fine.




On Wed, Jul 9, 2014 at 3:56 AM, Peter wrote:


thank you for your reply. I am running ceph 0.80.1, radosgw-agent
1.2 on Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic x86_64) . I
also ran into this same issue with ubuntu 12.04 previously.
There are no special characters in the access or secret key (ive
had issues with this before so i make sure of this).

here is the output python interpreter:

Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more
information.

>>> import urlparse
>>> result =
urlparse.urlparse('http://us-secondary.example.com:80')
>>> print result.hostname, result.port
us-secondary.example.com  80


that looks ok to me.



On 07/07/14 22:57, Josh Durgin wrote:

On 07/04/2014 08:36 AM, Peter wrote:

i am having issues running radosgw-agent to sync data
between two
radosgw zones. As far as i can tell both zones are running
correctly.

My issue is when i run the radosgw-agent command:


radosgw-agent -v --src-access-key 
--s

Re: [ceph-users] radosgw-agent failed to parse

2014-07-21 Thread Peter

typo, should read:

{ "name": "us-secondary",
  "endpoints": [
"http:\/\/us-secondary.example.com:80\/"],
  "log_meta": "true",
  "log_data": "true"}

in region config below


On 21/07/14 15:13, Peter wrote:

hello again,

i couldn't find  
'http://us-secondary.example.comhttp://us-secondary.example.com/ 
' in any zone or regions config 
files. How could it be getting the URL from someplace else if i am 
specifying as command line option after radosgw-agent ?



Here is region config:


{ "name": "us",
  "api_name": "us",
  "is_master": "True",
  "endpoints": [
"http:\/\/us-master.example.com:80\/"],
  "master_zone": "us-master",
  "zones": [
{ "name": "us-master",
  "endpoints": [
"http:\/\/us-master.example.com:80\/"],
  "log_meta": "true",
  "log_data": "true"},
{ "name": "us-secondary",
  "endpoints": [
"http:\/\/us-master.example.com:80\/"],
  "log_meta": "true",
  "log_data": "true"}
],
  "placement_targets": [
   {
 "name": "default-placement",
 "tags": []
   }
  ],
  "default_placement": "default-placement"}


I also get the above when i navigate to 
http://us-master.example.com/admin/config and 
http://us-secondary.example.com/admin/config .


us-master zone looks like this:


{ "domain_root": ".us-master.domain.rgw",
  "control_pool": ".us-master.rgw.control",
  "gc_pool": ".us-master.rgw.gc",
  "log_pool": ".us-master.log",
  "intent_log_pool": ".us-master.intent-log",
  "usage_log_pool": ".us-master.usage",
  "user_keys_pool": ".us-master.users",
  "user_email_pool": ".us-master.users.email",
  "user_swift_pool": ".us-master.users.swift",
  "user_uid_pool": ".us-master.users.uid",
  "system_key": { "access_key": "EA02UO07DA8JJJX7ZIPJ", "secret_key": 
"InmPlbQhsj7dqjdNabqkZaqR8ShWC6fS0XVo"},

  "placement_pools": [
{ "key": "default-placement",
  "val": { "index_pool": ".us-master.rgw.buckets.index",
   "data_pool": ".us-master.rgw.buckets"}
}
  ]
}


us-secondary zone:


{ "domain_root": ".us-secondary.domain.rgw",
  "control_pool": ".us-secondary.rgw.control",
  "gc_pool": ".us-secondary.rgw.gc",
  "log_pool": ".us-secondary.log",
  "intent_log_pool": ".us-secondary.intent-log",
  "usage_log_pool": ".us-secondary.usage",
  "user_keys_pool": ".us-secondary.users",
  "user_email_pool": ".us-secondary.users.email",
  "user_swift_pool": ".us-secondary.users.swift",
  "user_uid_pool": ".us-secondary.users.uid",
  "system_key": { "access_key": "EA02UO07DA8JJJX7ZIPJ", "secret_key": 
"InmPlbQhsj7dqjdNabqkZaqR8ShWC6fS0XVo"},

  "placement_pools": [
{ "key": "default-placement",
  "val": { "index_pool": ".us-secondary.rgw.buckets.index",
   "data_pool": ".us-secondary.rgw.buckets"}
}
  ]
}


us-master user exists on us-master cluster gateway, us-secondary user 
exists on us-secondary cluster gateway. both us-master and 
us-secondary gateway users have same access and secret key. should 
us-master and us-secondary users exist on both clusters?


i can resolve us-master.example.com and us-secondary.example.com from 
both gateways.



Thanks

On 09/07/14 22:20, Craig Lewis wrote:

Just to ask a couple obvious questions...

You didn't accidentally 
put 'http://us-secondary.example.comhttp://us-secondary.example.com/ 
' in any of your region or zone 
configuration files?  The fact that it's missing the :80 makes me 
think it's getting that URL from someplace that isn't the command line.


You do have both system users on both clusters, with the same access 
and secret keys?


You can resolve us-secondary.example.com 
. from this host?



I tested URLs of the form http://us-secondary.example.com/ and 
http://us-secondary.example.com:80 in my setup, and both work fine.




On Wed, Jul 9, 2014 at 3:56 AM, Peter > wrote:


thank you for your reply. I am running ceph 0.80.1, radosgw-agent
1.2 on Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic x86_64) . I
also ran into this same issue with ubuntu 12.04 previously.
There are no special characters in the access or secret key (ive
had issues with this before so i make sure of this).

here is the output python interpreter:

Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more
information.

>>> import urlparse
>>> result =
urlparse.urlparse('http://us-secondary.example.com:80')
>>> print result.hostname, result.port
us-secondary.example.com  80


that looks ok to me.



On 07/07/14 22:57, Josh Durgin wrote:

On 07/04/2014 08:36 AM, Peter wrote:

i am having issues running radosgw-agent to sync data
between two
radosgw zones. As far as 

Re: [ceph-users] Is possible to use one SSD journal hard disk for 3 OSD ?

2014-07-21 Thread 不坏阿峰
Thanks for your reply.

In your case you deploy 3 OSDs on one server; in my case the 3 OSDs are on 3
servers.
How do I do that?


2014-07-21 17:59 GMT+07:00 Iban Cabrillo :

> Dear,
>   I am not an expert, but Yes This is possible.
>   I have RAID1 SAS disk journal for 3 journal SATA osds (maybe this is not
> the smartest solution)
>
>   When you preparere the OSDs for example:
>
>   ceph-deploy --verbose osd prepare cephosd01:/dev/"sdd_device":"path_to
> journal_ssddisk_X"
>
>   path_to journal_ssddisk_X must exists (mkdir -p /var/ceph/osd1; touch
> /var/ceph/osd1/journal)
>   for example:
>
>   ceph-deploy --verbose osd prepare
> cephosd01:/dev/sdg:/var/ceph/osd1/journal
>   ceph-deploy --verbose osd prepare
> cephosd01:/dev/sdf:/var/ceph/osd2/journal
>   ceph-deploy --verbose osd prepare
> cephosd01:/dev/sdh:/var/ceph/osd3/journal
>
> Then activate the OSDs...
>
>   ceph-deploy --verbose osd activate
> cephosd01:/dev/sdg1:/var/ceph/osd1/journal
>   ceph-deploy --verbose osd activate
> cephosd01:/dev/sdf1:/var/ceph/osd2/journal
>   ceph-deploy --verbose osd activate
> cephosd01:/dev/sdh1:/var/ceph/osd3/journal
>
> regards, I
>
>
> 2014-07-21 12:30 GMT+02:00 不坏阿峰 :
>
>> i have only one SSD want to improve Ceph perfermnace.
>> Is possible to use one SSD journal hard disk for 3 OSD ?
>>
>> if it is possible ,how to config it ?
>> many thanks
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
>
> --
>
> 
> Iban Cabrillo Bartolome
> Instituto de Fisica de Cantabria (IFCA)
> Santander, Spain
> Tel: +34942200969
> PGP PUBLIC KEY:
> http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
>
> 
> Bertrand Russell:
> *"El problema con el mundo es que los estúpidos están seguros de todo y
> los inteligentes están llenos de dudas*"
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Is possible to use one SSD journal hard disk for 3 OSD ?

2014-07-21 Thread Indra Pramana
AFAIK, it's not possible. A journal should be on the same server as the OSD
it serves. CMIIW.

Thank you.


On Mon, Jul 21, 2014 at 10:34 PM, 不坏阿峰  wrote:

> thanks for ur reply.
>
> in ur case, u deploy 3 osds in one server.  my case is that 3 osds in 3
> server.
> how to do ?
>
>
> 2014-07-21 17:59 GMT+07:00 Iban Cabrillo :
>
> Dear,
>>   I am not an expert, but Yes This is possible.
>>   I have RAID1 SAS disk journal for 3 journal SATA osds (maybe this is
>> not the smartest solution)
>>
>>   When you preparere the OSDs for example:
>>
>>   ceph-deploy --verbose osd prepare cephosd01:/dev/"sdd_device":"path_to
>> journal_ssddisk_X"
>>
>>   path_to journal_ssddisk_X must exists (mkdir -p /var/ceph/osd1; touch
>> /var/ceph/osd1/journal)
>>   for example:
>>
>>   ceph-deploy --verbose osd prepare
>> cephosd01:/dev/sdg:/var/ceph/osd1/journal
>>   ceph-deploy --verbose osd prepare
>> cephosd01:/dev/sdf:/var/ceph/osd2/journal
>>   ceph-deploy --verbose osd prepare
>> cephosd01:/dev/sdh:/var/ceph/osd3/journal
>>
>> Then activate the OSDs...
>>
>>   ceph-deploy --verbose osd activate
>> cephosd01:/dev/sdg1:/var/ceph/osd1/journal
>>   ceph-deploy --verbose osd activate
>> cephosd01:/dev/sdf1:/var/ceph/osd2/journal
>>   ceph-deploy --verbose osd activate
>> cephosd01:/dev/sdh1:/var/ceph/osd3/journal
>>
>> regards, I
>>
>>
>> 2014-07-21 12:30 GMT+02:00 不坏阿峰 :
>>
>>> i have only one SSD want to improve Ceph perfermnace.
>>> Is possible to use one SSD journal hard disk for 3 OSD ?
>>>
>>> if it is possible ,how to config it ?
>>> many thanks
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>
>>
>> --
>>
>> 
>> Iban Cabrillo Bartolome
>> Instituto de Fisica de Cantabria (IFCA)
>> Santander, Spain
>> Tel: +34942200969
>> PGP PUBLIC KEY:
>> http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
>>
>> 
>> Bertrand Russell:
>> *"El problema con el mundo es que los estúpidos están seguros de todo y
>> los inteligentes están llenos de dudas*"
>>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Is possible to use one SSD journal hard disk for 3 OSD ?

2014-07-21 Thread Iban Cabrillo
Yes, Indra is right. An OSD and its journal must be on the same server.
Regards, I
El 21/07/2014 16:38, "Indra Pramana"  escribió:

> AFAIK, it's not possible. A journal should be on the same server as the
> OSD it serves. CMIIW.
>
> Thank you.
>
>
> On Mon, Jul 21, 2014 at 10:34 PM, 不坏阿峰  wrote:
>
>> thanks for ur reply.
>>
>> in ur case, u deploy 3 osds in one server.  my case is that 3 osds in 3
>> server.
>> how to do ?
>>
>>
>> 2014-07-21 17:59 GMT+07:00 Iban Cabrillo :
>>
>> Dear,
>>>   I am not an expert, but Yes This is possible.
>>>   I have RAID1 SAS disk journal for 3 journal SATA osds (maybe this is
>>> not the smartest solution)
>>>
>>>   When you preparere the OSDs for example:
>>>
>>>   ceph-deploy --verbose osd prepare cephosd01:/dev/"sdd_device":"path_to
>>> journal_ssddisk_X"
>>>
>>>   path_to journal_ssddisk_X must exists (mkdir -p /var/ceph/osd1; touch
>>> /var/ceph/osd1/journal)
>>>   for example:
>>>
>>>   ceph-deploy --verbose osd prepare
>>> cephosd01:/dev/sdg:/var/ceph/osd1/journal
>>>   ceph-deploy --verbose osd prepare
>>> cephosd01:/dev/sdf:/var/ceph/osd2/journal
>>>   ceph-deploy --verbose osd prepare
>>> cephosd01:/dev/sdh:/var/ceph/osd3/journal
>>>
>>> Then activate the OSDs...
>>>
>>>   ceph-deploy --verbose osd activate
>>> cephosd01:/dev/sdg1:/var/ceph/osd1/journal
>>>   ceph-deploy --verbose osd activate
>>> cephosd01:/dev/sdf1:/var/ceph/osd2/journal
>>>   ceph-deploy --verbose osd activate
>>> cephosd01:/dev/sdh1:/var/ceph/osd3/journal
>>>
>>> regards, I
>>>
>>>
>>> 2014-07-21 12:30 GMT+02:00 不坏阿峰 :
>>>
 i have only one SSD want to improve Ceph perfermnace.
 Is possible to use one SSD journal hard disk for 3 OSD ?

 if it is possible ,how to config it ?
 many thanks

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


>>>
>>>
>>> --
>>>
>>> 
>>> Iban Cabrillo Bartolome
>>> Instituto de Fisica de Cantabria (IFCA)
>>> Santander, Spain
>>> Tel: +34942200969
>>> PGP PUBLIC KEY:
>>> http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC
>>>
>>> 
>>> Bertrand Russell:
>>> *"El problema con el mundo es que los estúpidos están seguros de todo y
>>> los inteligentes están llenos de dudas*"
>>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Strange radosgw error

2014-07-21 Thread Fabrizio G. Ventola
Hello everyone,

I'm having a weird issue with radosgw that previously was working perfectly.

With sudo /usr/bin/radosgw -d -c /etc/ceph/ceph.conf --debug_ms 1,  I
obtain (IPs obfuscated):

2014-07-21 17:24:01.034677 7fc5e0a4f700  1 -- :0/1002111 <==
osd.10 :6800/1246 3  osd_op_reply(5 zone_info.default
[getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6
 226+0+0 (1843907961 0 0) 0x7fc5b8000c40 con 0x1dbd160
2014-07-21 17:24:01.035078 7fc5eab0f780  1 -- :0/1002111 mark_down
0x1dbd770 -- 0x1dbf010
2014-07-21 17:24:01.035356 7fc5eab0f780  1 -- :0/1002111 mark_down
0x1dbd160 -- 0x1dbcef0
2014-07-21 17:24:01.035845 7fc5eab0f780  1 -- :0/1002111 mark_down
0x1dbb520 -- 0x1dbb2b0
2014-07-21 17:24:01.036024 7fc5eab0f780  1 -- :0/1002111 mark_down_all
2014-07-21 17:24:01.036761 7fc5eab0f780  1 -- :0/1002111 shutdown complete.
2014-07-21 17:24:01.036942 7fc5eab0f780 -1 Couldn't init storage
provider (RADOS)

The cluster status is OK and every OSD is in and up. I've also tried to run
radosgw with the --name parameter. It's quite strange because other
radosgw instances that use the same cluster are working. Any
ideas?
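If it helps to diagnose: the failing read above is for the zone_info.default
object, which on a default setup lives in the .rgw.root pool, so its contents
can be listed with, for example:

  rados -p .rgw.root ls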

Cheers,
Fabrizio
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] recover ceph journal disk

2014-07-21 Thread Cristian Falcas
Hello,

We have a test project where we are using ceph+openstack.

Today we had some problems with this setup and we had to force-reboot the
server. After that, the partition where we keep the ceph journal could not
be mounted.

When we checked it, we got this:

btrfsck /dev/mapper/vg_ssd-ceph_ssd
Checking filesystem on /dev/mapper/vg_ssd-ceph_ssd
UUID: 7121568d-3f6b-46b2-afaa-b2e543f31ba4
checking extents
checking fs roots
root 5 inode 257 errors 80
Segmentation fault


Considering that we are using btrfs under Ceph, could we reformat the journal
and continue our work? Or will this kill our entire node? We don't care
very much about the data from the last few minutes before the crash.

Best regards,
Cristian Falcas
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Is possible to use one SSD journal hard disk for 3 OSD ?

2014-07-21 Thread 不坏阿峰
Thanks a lot for helping me confirm that an SSD speeds up the OSD journal
and improves performance. Each server needs its own separate SSD.


2014-07-21 21:50 GMT+07:00 Iban Cabrillo :

> Yes, Indra is right. Osds and journal must be on the same server.
> Regards, I
> El 21/07/2014 16:38, "Indra Pramana"  escribió:
>
> AFAIK, it's not possible. A journal should be on the same server as the
>> OSD it serves. CMIIW.
>>
>> Thank you.
>>
>>
>> On Mon, Jul 21, 2014 at 10:34 PM, 不坏阿峰  wrote:
>>
>>> thanks for ur reply.
>>>
>>> in ur case, u deploy 3 osds in one server.  my case is that 3 osds in 3
>>> server.
>>> how to do ?
>>>
>>>
>>> 2014-07-21 17:59 GMT+07:00 Iban Cabrillo :
>>>
>>> Dear,
   I am not an expert, but Yes This is possible.
   I have RAID1 SAS disk journal for 3 journal SATA osds (maybe this is
 not the smartest solution)

   When you preparere the OSDs for example:

   ceph-deploy --verbose osd prepare
 cephosd01:/dev/"sdd_device":"path_to journal_ssddisk_X"

   path_to journal_ssddisk_X must exists (mkdir -p /var/ceph/osd1; touch
 /var/ceph/osd1/journal)
   for example:

   ceph-deploy --verbose osd prepare
 cephosd01:/dev/sdg:/var/ceph/osd1/journal
   ceph-deploy --verbose osd prepare
 cephosd01:/dev/sdf:/var/ceph/osd2/journal
   ceph-deploy --verbose osd prepare
 cephosd01:/dev/sdh:/var/ceph/osd3/journal

 Then activate the OSDs...

   ceph-deploy --verbose osd activate
 cephosd01:/dev/sdg1:/var/ceph/osd1/journal
   ceph-deploy --verbose osd activate
 cephosd01:/dev/sdf1:/var/ceph/osd2/journal
   ceph-deploy --verbose osd activate
 cephosd01:/dev/sdh1:/var/ceph/osd3/journal

 regards, I


 2014-07-21 12:30 GMT+02:00 不坏阿峰 :

> i have only one SSD want to improve Ceph perfermnace.
> Is possible to use one SSD journal hard disk for 3 OSD ?
>
> if it is possible ,how to config it ?
> many thanks
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


 --

 
 Iban Cabrillo Bartolome
 Instituto de Fisica de Cantabria (IFCA)
 Santander, Spain
 Tel: +34942200969
 PGP PUBLIC KEY:
 http://pgp.mit.edu/pks/lookup?op=get&search=0xD9DF0B3D6C8C08AC

 
 Bertrand Russell:
 *"El problema con el mundo es que los estúpidos están seguros de todo y
 los inteligentes están llenos de dudas*"

>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Is OSDs based on VFS?

2014-07-21 Thread Jaemyoun Lee
Hi all,

I wonder whether OSDs use the system calls of the Virtual File System (i.e.
open, read, write, etc.) when they access disks.

I mean: could I monitor the I/O commands an OSD issues to its disks by
monitoring the VFS layer?

- Jae

-- 
  이재면 Jaemyoun Lee

  CPS Lab. ( Cyber-Physical Systems Laboratory in Hanyang University)
  E-mail : jm...@cpslab.hanyang.ac.kr
  Homepage : http://cpslab.hanyang.ac.kr
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Is OSDs based on VFS?

2014-07-21 Thread Kyle Bader
> I wonder that OSDs use system calls of Virtual File System (i.e. open, read,
> write, etc) when they access disks.
>
> I mean ... Could I monitor I/O command requested by OSD to disks if I
> monitor VFS?

Ceph OSDs run on top of a traditional filesystem, so long as it
supports xattrs (XFS by default). As such you can use kernel
instrumentation to view what is going on "under" the Ceph OSDs.
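For example (a rough sketch; adjust the syscall list to taste), attaching
strace to a running OSD shows the VFS-level calls it makes:

  strace -f -e trace=open,read,write,fsync,fdatasync -p <osd-pid>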

-- 

Kyle
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] recover ceph journal disk

2014-07-21 Thread Gregory Farnum
On Monday, July 21, 2014, Cristian Falcas  wrote:

> Hello,
>
> We have a test project where we are using ceph+openstack.
>
> Today we had some problems with this setup and we had to force reboot the
> server. After that, the partition where we keep the ceph journal could not
> mount.
>
> When we checked it, we got this:
>
> btrfsck /dev/mapper/vg_ssd-ceph_ssd
> Checking filesystem on /dev/mapper/vg_ssd-ceph_ssd
> UUID: 7121568d-3f6b-46b2-afaa-b2e543f31ba4
> checking extents
> checking fs roots
> root 5 inode 257 errors 80
> Segmentation fault
>
>
> Considering that we are using btrfs on ceph, could we format the journal
> and continue our work? Or will this kill our entire node? We don't care
> very much about the data from the last minutes before the crash.
>
> Best regards,
> Cristian Falcas
>

Usually this is very unsafe, but with btrfs it should be fine (it takes
periodic snapshots and will roll back to the latest one to get a consistent
view). You can find help on reformatting the journals in the doc or help
text. :)
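A minimal sketch of that, assuming the journal for osd.N lived on the dead
partition and a freshly formatted journal file/partition is back in place:

  service ceph stop osd.N
  ceph-osd -i N --mkjournal
  service ceph start osd.N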
-Greg

-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Is OSDs based on VFS?

2014-07-21 Thread Gregory Farnum
On Monday, July 21, 2014, Jaemyoun Lee  wrote:

> Hi all,
>
> I wonder that OSDs use system calls of Virtual File System (i.e. open,
> read, write, etc) when they access disks.
>
> I mean ... Could I monitor I/O command requested by OSD to disks if I
> monitor VFS?
>

Yes. The default configuration stores data in a local filesystem and
accesses it like any other sophisticated filesystem consumer.
-Greg


>
> - Jae
>
> --
>   이재면 Jaemyoun Lee
>
>   CPS Lab. ( Cyber-Physical Systems Laboratory in Hanyang University)
>   E-mail : jm...@cpslab.hanyang.ac.kr
> 
>   Homepage : http://cpslab.hanyang.ac.kr
>


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Is OSDs based on VFS?

2014-07-21 Thread Jaemyoun Lee
Thanks for your rapid reply

- Jae


On Tue, Jul 22, 2014 at 1:29 AM, Gregory Farnum  wrote:

> On Monday, July 21, 2014, Jaemyoun Lee  wrote:
>
>> Hi all,
>>
>> I wonder that OSDs use system calls of Virtual File System (i.e. open,
>> read, write, etc) when they access disks.
>>
>> I mean ... Could I monitor I/O command requested by OSD to disks if I
>> monitor VFS?
>>
>
> Yes. The default configuration stores data in a local filesystem and
> accesses it like any other sophisticated filesystem consumer.
> -Greg
>
>
>>
>> - Jae
>>
>> --
>>   이재면 Jaemyoun Lee
>>
>>   CPS Lab. ( Cyber-Physical Systems Laboratory in Hanyang University)
>>   E-mail : jm...@cpslab.hanyang.ac.kr
>>   Homepage : http://cpslab.hanyang.ac.kr
>>
>
>
> --
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>



-- 
  이재면 Jaemyoun Lee

  CPS Lab. ( Cyber-Physical Systems Laboratory in Hanyang University)
  E-mail : jm...@cpslab.hanyang.ac.kr
  Homepage : http://cpslab.hanyang.ac.kr
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Is OSDs based on VFS?

2014-07-21 Thread Jaemyoun Lee
Thanks for your rapid reply

- Jae


On Tue, Jul 22, 2014 at 1:28 AM, Kyle Bader  wrote:

> > I wonder that OSDs use system calls of Virtual File System (i.e. open,
> read,
> > write, etc) when they access disks.
> >
> > I mean ... Could I monitor I/O command requested by OSD to disks if I
> > monitor VFS?
>
> Ceph OSDs run on top of a traditional filesystem, so long as they
> support xattrs - xfs by default. As such you can use kernel
> instrumentation to view what is going on "under" the Ceph OSDs.
>
> --
>
> Kyle
>
>


-- 
  이재면 Jaemyoun Lee

  CPS Lab. ( Cyber-Physical Systems Laboratory in Hanyang University)
  E-mail : jm...@cpslab.hanyang.ac.kr
  Homepage : http://cpslab.hanyang.ac.kr
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph Turns 10 Twitter Photo Contest

2014-07-21 Thread Patrick McGarry
Hey cephers,

Just wanted to let you guys know that we are launching a Twitter photo
contest as a part of OSCON that will run through the end of the month.
If you tweet a photo of how you are celebrating Ceph's 10th birthday
to @ceph w/ #cephturns10, you could win a desktop Ceph cluster built
by our very own Mark Nelson.  Check out the links below for details:

Blog:
http://ceph.com/uncategorized/ceph-turns-10-twitter-photo-contest/

Contest Page:
https://wiki.ceph.com/Community/Contests

Contest Details:
https://wiki.ceph.com/Community/Contests/Ceph_Turns_10_Twitter_Photo_Contest

Official Contest Rules:
https://wiki.ceph.com/@api/deki/files/31/Ceph10thBirthdayTwitterPhotoContest--FINAL.pdf

Additionally, if you are at OSCON stop by booth P2 to say hi and enjoy
cupcakes (Tues only) and special edition t-shirts.  Happy birthday to
Ceph!



Best Regards,

Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com  ||  http://community.redhat.com
@scuttlemonkey || @ceph
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Possible to schedule deep scrub to nights?

2014-07-21 Thread Gregory Farnum
On Sun, Jul 20, 2014 at 2:05 PM, David  wrote:
> Thanks!
>
> Found this thread, guess I’ll do something like this then.
> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg09984.html
>
> Question though - will it still obey the scrubbing variables? Say I’ll
> schedule 1000 PGs during night, will it still just do 1 OSD at a time
> (default max scrub)?

max scrub is a per-OSD setting, not a cluster-wide setting. But yes,
it will respect those config options.
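One crude cron-based sketch (untested; flag names as in recent releases) is
to block deep scrubs during the day and allow them again at night:

  0 7 * * *  root  ceph osd set nodeep-scrub
  0 22 * * * root  ceph osd unset nodeep-scrub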
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com

>
> Kind Regards,
> David
>
>
> 18 jul 2014 kl. 20:04 skrev Gregory Farnum :
>
> There's nothing built in to the system but I think some people have
> had success with scripts that set nobackfill during the day, and then
> trigger them regularly at night. Try searching the list archives. :)
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Fri, Jul 18, 2014 at 12:56 AM, David  wrote:
>
> Is there any known workarounds to schedule deep scrubs to run nightly?
> Latency does go up a little bit when it runs so I’d rather that it didn’t
> affect our daily activities.
>
> Kind Regards,
> David
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Toshiba / Sandisk ssds

2014-07-21 Thread Stefan Priebe - Profihost AG
Hi all,

Has anybody already used any Toshiba or SanDisk SSDs for Ceph? We're evaluating
alternatives to our current consumer SSD cluster and I would be happy to get
some feedback on those drives.
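For comparing candidates on small synchronous writes (the pattern the OSD
journal produces), a hedged fio one-liner I'd run is something like this
(note: /dev/sdX is a placeholder and the test overwrites data on it):

  fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based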

Greets,
Stefan

Excuse my typo, sent from my mobile phone.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Issues with federated gateway sync

2014-07-21 Thread Justice London
Hello, I am having issues getting FG working between east/west data-center
test configurations. I have the sync default.conf configured like this:

source: "http://10.20.2.39:80";
src_zone: "us-west-1"
src_access_key: 
src_secret_key: http://10.30.3.178:80";
dest_zone: "us-east-1"
dest_access_key: 
dest_secret_key: 
10.30.3.178:6800/3700 -- osd_op(client.7160.0:450
testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin [call
version.read,getxattrs,stat] 6.44385098 ack+read e66) v4 -- ?+0
0x7fc57c01cdc0 con 0x20dba80
2014-07-21 15:01:13.348006 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
osd.0 10.30.3.178:6800/3700 99  osd_op_reply(450
testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin
[call,getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6
 309+0+0 (375136675 0 0) 0x7fc5f4005b90 con 0x20dba80
2014-07-21 15:01:13.348299 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
10.30.3.178:6800/3700 -- osd_op(client.7160.0:451 testfolder [call
version.read,getxattrs,stat] 6.62cce9f7 ack+read e66) v4 -- ?+0
0x7fc57c01cc10 con 0x20dba80
2014-07-21 15:01:13.349174 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
osd.0 10.30.3.178:6800/3700 100  osd_op_reply(451 testfolder
[call,getxattrs,stat] v0'0 uv1 ondisk = 0) v6  261+0+139 (3119832768 0
2317765080) 0x7fc5f4005a00 con 0x20dba80
2014-07-21 15:01:13.349324 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
10.30.3.178:6800/3700 -- osd_op(client.7160.0:452 testfolder [call
version.check_conds,call version.read,read 0~524288] 6.62cce9f7 ack+read
e66) v4 -- ?+0 0x7fc57c01cc10 con 0x20dba80
2014-07-21 15:01:13.350009 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
osd.0 10.30.3.178:6800/3700 101  osd_op_reply(452 testfolder
[call,call,read 0~140] v0'0 uv1 ondisk = 0) v6  261+0+188 (1382517052 0
1901701781) 0x7fc5f4000fd0 con 0x20dba80
2014-07-21 15:01:13.350122 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
10.30.3.178:6800/3700 -- osd_op(client.7160.0:453
.bucket.meta.testfolder:us-west.20011.1 [call version.read,getxattrs,stat]
6.1851d0ad ack+read e66) v4 -- ?+0 0x7fc57c01d780 con 0x20dba80
2014-07-21 15:01:13.350914 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
osd.0 10.30.3.178:6800/3700 102  osd_op_reply(453
.bucket.meta.testfolder:us-west.20011.1 [call,getxattrs,stat] v0'0 uv1
ondisk = 0) v6  290+0+344 (1757888169 0 2994068559) 0x7fc5f4000fd0 con
0x20dba80
2014-07-21 15:01:13.351131 7fc5deffd700  0 WARNING: couldn't find acl
header for bucket, generating default
2014-07-21 15:01:13.351177 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
10.30.0.22:6800/12749 -- osd_op(client.7160.0:454 admin [getxattrs,stat]
8.8cee537f ack+read e66) v4 -- ?+0 0x7fc57c023a10 con 0x20e4010
2014-07-21 15:01:13.352755 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
osd.1 10.30.0.22:6800/12749 150  osd_op_reply(454 admin
[getxattrs,stat] v0'0 uv1 ondisk = 0) v6  214+0+91 (3932713703 0
605478480) 0x7fc5fc001130 con 0x20e4010
2014-07-21 15:01:13.352843 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
10.30.0.22:6800/12749 -- osd_op(client.7160.0:455 admin [read 0~524288]
8.8cee537f ack+read e66) v4 -- ?+0 0x7fc57c023810 con 0x20e4010
2014-07-21 15:01:13.353679 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
osd.1 10.30.0.22:6800/12749 151  osd_op_reply(455 admin [read 0~313]
v0'0 uv1 ondisk = 0) v6  172+0+313 (855218883 0 3348830508)
0x7fc5fc001130 con 0x20e4010
2014-07-21 15:01:13.354106 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
10.30.0.23:6800/28001 -- osd_op(client.7160.0:456 statelog.obj_opstate.57
[call statelog.add] 10.bb49d85f ondisk+write e66) v4 -- ?+0 0x7fc57c02b090
con 0x20e0a70
2014-07-21 15:01:13.363690 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
osd.2 10.30.0.23:6800/28001 103  osd_op_reply(456
statelog.obj_opstate.57 [call] v66'47 uv47 ondisk = 0) v6  190+0+0
(4198807369 0 0) 0x7fc604005300 con 0x20e0a70
2014-07-21 15:01:13.363928 7fc5deffd700  0 > HTTP_DATE -> Mon Jul 21
20:01:13 2014
2014-07-21 15:01:13.363947 7fc5deffd700  0 > HTTP_X_AMZ_COPY_SOURCE ->
testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin
2014-07-21 15:01:13.520133 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
10.30.0.23:6800/28001 -- osd_op(client.7160.0:457 statelog.obj_opstate.57
[call statelog.add] 10.bb49d85f ondisk+write e66) v4 -- ?+0 0x7fc57c023870
con 0x20e0a70
2014-07-21 15:01:13.524531 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
osd.2 10.30.0.23:6800/28001 104  osd_op_reply(457
statelog.obj_opstate.57 [call] v66'48 uv48 ondisk = 0) v6  190+0+0
(518743807 0 0) 0x7fc6040072d0 con 0x20e0a70
2014-07-21 15:01:13.524723 7fc5deffd700  1 == req done
req=0x7fc5e000fcf0 http_status=403 ==
2014-07-21 15:01:13.673430 7fc62d95e700  1 -- 10.30.3.178:0/1028990 -->
10.30.0.24:6800/15997 -- ping v1 -- ?+0 0x7fc637e0 con 0x20df800
2014-07-21 15:01:13.673499 7fc62d95e700  1 -- 10.30.3.178:0/1028990 -->
10.30.3.178:6800/3700 -- ping v1 -- ?+0 0x7fc6a340 con 0x20dba80
2014-07-21 15:01:13.673523 7fc62d95e700  1 -- 10.30.3.178:0/10

Re: [ceph-users] Issues with federated gateway sync

2014-07-21 Thread Yehuda Sadeh
On Mon, Jul 21, 2014 at 1:07 PM, Justice London
 wrote:
> Hello, I am having issues getting FG working between east/west data-center
> test configurations. I have the sync default.conf configured like this:
>
> source: "http://10.20.2.39:80";
> src_zone: "us-west-1"
> src_access_key: 
> src_secret_key:  destination: "http://10.30.3.178:80";
> dest_zone: "us-east-1"
> dest_access_key: 
> dest_secret_key:  log_file: /var/log/radosgw/radosgw-sync-us-east-west.log
>
> No real errors are logged on the agent end, but I see the following on the
> remote radosgw end:
> 2014-07-21 15:01:13.346569 7fc5deffd700  1 == starting new request
> req=0x7fc5e000fcf0 =
> 2014-07-21 15:01:13.346947 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> 10.30.3.178:6800/3700 -- osd_op(client.7160.0:450
> testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin [call
> version.read,getxattrs,stat] 6.44385098 ack+read e66) v4 -- ?+0
> 0x7fc57c01cdc0 con 0x20dba80
> 2014-07-21 15:01:13.348006 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> osd.0 10.30.3.178:6800/3700 99  osd_op_reply(450
> testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin
> [call,getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6
>  309+0+0 (375136675 0 0) 0x7fc5f4005b90 con 0x20dba80
> 2014-07-21 15:01:13.348299 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> 10.30.3.178:6800/3700 -- osd_op(client.7160.0:451 testfolder [call
> version.read,getxattrs,stat] 6.62cce9f7 ack+read e66) v4 -- ?+0
> 0x7fc57c01cc10 con 0x20dba80
> 2014-07-21 15:01:13.349174 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> osd.0 10.30.3.178:6800/3700 100  osd_op_reply(451 testfolder
> [call,getxattrs,stat] v0'0 uv1 ondisk = 0) v6  261+0+139 (3119832768 0
> 2317765080) 0x7fc5f4005a00 con 0x20dba80
> 2014-07-21 15:01:13.349324 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> 10.30.3.178:6800/3700 -- osd_op(client.7160.0:452 testfolder [call
> version.check_conds,call version.read,read 0~524288] 6.62cce9f7 ack+read
> e66) v4 -- ?+0 0x7fc57c01cc10 con 0x20dba80
> 2014-07-21 15:01:13.350009 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> osd.0 10.30.3.178:6800/3700 101  osd_op_reply(452 testfolder
> [call,call,read 0~140] v0'0 uv1 ondisk = 0) v6  261+0+188 (1382517052 0
> 1901701781) 0x7fc5f4000fd0 con 0x20dba80
> 2014-07-21 15:01:13.350122 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> 10.30.3.178:6800/3700 -- osd_op(client.7160.0:453
> .bucket.meta.testfolder:us-west.20011.1 [call version.read,getxattrs,stat]
> 6.1851d0ad ack+read e66) v4 -- ?+0 0x7fc57c01d780 con 0x20dba80
> 2014-07-21 15:01:13.350914 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> osd.0 10.30.3.178:6800/3700 102  osd_op_reply(453
> .bucket.meta.testfolder:us-west.20011.1 [call,getxattrs,stat] v0'0 uv1
> ondisk = 0) v6  290+0+344 (1757888169 0 2994068559) 0x7fc5f4000fd0 con
> 0x20dba80
> 2014-07-21 15:01:13.351131 7fc5deffd700  0 WARNING: couldn't find acl header
> for bucket, generating default
> 2014-07-21 15:01:13.351177 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> 10.30.0.22:6800/12749 -- osd_op(client.7160.0:454 admin [getxattrs,stat]
> 8.8cee537f ack+read e66) v4 -- ?+0 0x7fc57c023a10 con 0x20e4010
> 2014-07-21 15:01:13.352755 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> osd.1 10.30.0.22:6800/12749 150  osd_op_reply(454 admin [getxattrs,stat]
> v0'0 uv1 ondisk = 0) v6  214+0+91 (3932713703 0 605478480)
> 0x7fc5fc001130 con 0x20e4010
> 2014-07-21 15:01:13.352843 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> 10.30.0.22:6800/12749 -- osd_op(client.7160.0:455 admin [read 0~524288]
> 8.8cee537f ack+read e66) v4 -- ?+0 0x7fc57c023810 con 0x20e4010
> 2014-07-21 15:01:13.353679 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> osd.1 10.30.0.22:6800/12749 151  osd_op_reply(455 admin [read 0~313]
> v0'0 uv1 ondisk = 0) v6  172+0+313 (855218883 0 3348830508)
> 0x7fc5fc001130 con 0x20e4010
> 2014-07-21 15:01:13.354106 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> 10.30.0.23:6800/28001 -- osd_op(client.7160.0:456 statelog.obj_opstate.57
> [call statelog.add] 10.bb49d85f ondisk+write e66) v4 -- ?+0 0x7fc57c02b090
> con 0x20e0a70
> 2014-07-21 15:01:13.363690 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> osd.2 10.30.0.23:6800/28001 103  osd_op_reply(456
> statelog.obj_opstate.57 [call] v66'47 uv47 ondisk = 0) v6  190+0+0
> (4198807369 0 0) 0x7fc604005300 con 0x20e0a70
> 2014-07-21 15:01:13.363928 7fc5deffd700  0 > HTTP_DATE -> Mon Jul 21
> 20:01:13 2014
> 2014-07-21 15:01:13.363947 7fc5deffd700  0 > HTTP_X_AMZ_COPY_SOURCE ->
> testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin
> 2014-07-21 15:01:13.520133 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> 10.30.0.23:6800/28001 -- osd_op(client.7160.0:457 statelog.obj_opstate.57
> [call statelog.add] 10.bb49d85f ondisk+write e66) v4 -- ?+0 0x7fc57c023870
> con 0x20e0a70
> 2014-07-21 15:01:13.524531 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> osd.2 10.30.0.23:6800/28001 104  osd_op_reply(457
> s

Re: [ceph-users] Issues with federated gateway sync

2014-07-21 Thread Justice London
I did. It was created as such on the east/west location (per the example FG
configuration):

radosgw-admin user create --uid="us-east" --display-name="Region-US
Zone-East" --name client.radosgw.us-east-1 --system
radosgw-admin user create --uid="us-west" --display-name="Region-US
Zone-West" --name client.radosgw.us-west-1 --system

Also, sorry, the zone names in the default.conf are us-west and us-east.

This is also logged on the radosgw-agent log:

Mon, 21 Jul 2014 20:26:20 GMT
x-amz-copy-source:testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin
/testfolder/ArcherC7v1_en_3_13_34_up_boot%28140402%29.bin
2014-07-21T15:26:20.598 24627:DEBUG:boto:url =
'http://10.30.3.178/testfolder/ArcherC7v1_en_3_13_34_up_boot%28140402%29.bin'
params={'rgwx-op-id': 'storage1:24575:1', 'rgwx-source-zone':
u'us-west', 'rgwx-client-id': 'radosgw-agent'}
headers={'Content-Length': '0', 'User-Agent': 'Boto/2.2.2 (linux2)',
'x-amz-copy-source':
'testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin', 'Date':
'Mon, 21 Jul 2014 20:26:20 GMT', 'Content-Type': 'application/json;
charset=UTF-8', 'Authorization': 'AWS :
data=None
2014-07-21T15:26:20.599 24627:INFO:urllib3.connectionpool:Starting new
HTTP connection (1): 10.30.3.178
2014-07-21T15:26:20.925 24627:DEBUG:urllib3.connectionpool:"PUT
/testfolder/ArcherC7v1_en_3_13_34_up_boot%28140402%29.bin?rgwx-op-id=storage1%3A24575%3A1&rgwx-source-zone=us-west&rgwx-client-id=radosgw-agent
HTTP/1.1" 403 78
2014-07-21T15:26:20.925 24627:DEBUG:radosgw_agent.worker:exception
during sync: Http error code 403 content AccessDenied
2014-07-21T15:26:20.926 24627:DEBUG:boto:StringToSign:
GET


Justice




On Mon, Jul 21, 2014 at 1:28 PM, Yehuda Sadeh  wrote:

> On Mon, Jul 21, 2014 at 1:07 PM, Justice London
>  wrote:
> > Hello, I am having issues getting FG working between east/west
> data-center
> > test configurations. I have the sync default.conf configured like this:
> >
> > source: "http://10.20.2.39:80";
> > src_zone: "us-west-1"
> > src_access_key: 
> > src_secret_key:  > destination: "http://10.30.3.178:80";
> > dest_zone: "us-east-1"
> > dest_access_key: 
> > dest_secret_key:  > log_file: /var/log/radosgw/radosgw-sync-us-east-west.log
> >
> > No real errors are logged on the agent end, but I see the following in
> the
> > remove radosgw end:
> > 2014-07-21 15:01:13.346569 7fc5deffd700  1 == starting new request
> > req=0x7fc5e000fcf0 =
> > 2014-07-21 15:01:13.346947 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.3.178:6800/3700 -- osd_op(client.7160.0:450
> > testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin [call
> > version.read,getxattrs,stat] 6.44385098 ack+read e66) v4 -- ?+0
> > 0x7fc57c01cdc0 con 0x20dba80
> > 2014-07-21 15:01:13.348006 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> > osd.0 10.30.3.178:6800/3700 99  osd_op_reply(450
> > testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin
> > [call,getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory))
> v6
> >  309+0+0 (375136675 0 0) 0x7fc5f4005b90 con 0x20dba80
> > 2014-07-21 15:01:13.348299 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.3.178:6800/3700 -- osd_op(client.7160.0:451 testfolder [call
> > version.read,getxattrs,stat] 6.62cce9f7 ack+read e66) v4 -- ?+0
> > 0x7fc57c01cc10 con 0x20dba80
> > 2014-07-21 15:01:13.349174 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> > osd.0 10.30.3.178:6800/3700 100  osd_op_reply(451 testfolder
> > [call,getxattrs,stat] v0'0 uv1 ondisk = 0) v6  261+0+139 (3119832768
> 0
> > 2317765080) 0x7fc5f4005a00 con 0x20dba80
> > 2014-07-21 15:01:13.349324 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.3.178:6800/3700 -- osd_op(client.7160.0:452 testfolder [call
> > version.check_conds,call version.read,read 0~524288] 6.62cce9f7 ack+read
> > e66) v4 -- ?+0 0x7fc57c01cc10 con 0x20dba80
> > 2014-07-21 15:01:13.350009 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> > osd.0 10.30.3.178:6800/3700 101  osd_op_reply(452 testfolder
> > [call,call,read 0~140] v0'0 uv1 ondisk = 0) v6  261+0+188
> (1382517052 0
> > 1901701781) 0x7fc5f4000fd0 con 0x20dba80
> > 2014-07-21 15:01:13.350122 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.3.178:6800/3700 -- osd_op(client.7160.0:453
> > .bucket.meta.testfolder:us-west.20011.1 [call
> version.read,getxattrs,stat]
> > 6.1851d0ad ack+read e66) v4 -- ?+0 0x7fc57c01d780 con 0x20dba80
> > 2014-07-21 15:01:13.350914 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> > osd.0 10.30.3.178:6800/3700 102  osd_op_reply(453
> > .bucket.meta.testfolder:us-west.20011.1 [call,getxattrs,stat] v0'0 uv1
> > ondisk = 0) v6  290+0+344 (1757888169 0 2994068559) 0x7fc5f4000fd0
> con
> > 0x20dba80
> > 2014-07-21 15:01:13.351131 7fc5deffd700  0 WARNING: couldn't find acl
> header
> > for bucket, generating default
> > 2014-07-21 15:01:13.351177 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.0.22:6800/12749 -- osd_op(client.7160.0:454 admin [getxattrs,stat]
> > 8.8cee537f ack+read 

Re: [ceph-users] radosgw-agent failed to parse

2014-07-21 Thread Craig Lewis
I was hoping for some easy fixes :-P

I created two system users, in both zones.  Each user has a different access
key and secret, but I copied the access and secret from the primary to the
secondary.  I can't imagine that this would cause the problem you're
seeing, but it is something different from the examples.

Sorry, I'm out of ideas.



On Mon, Jul 21, 2014 at 7:13 AM, Peter  wrote:

>  hello again,
>
> i couldn't find  'http://us-secondary.example.comhttp://
> us-secondary.example.com/' in any zone or regions config files. How could
> it be getting the URL from someplace else if i am specifying as command
> line option after radosgw-agent ?
>
>
> Here is region config:
>
> { "name": "us",
>   "api_name": "us",
>   "is_master": "True",
>   "endpoints": [
> "http:\/\/us-master.example.com:80\/"
> ],
>   "master_zone": "us-master",
>   "zones": [
> { "name": "us-master",
>   "endpoints": [
> "http:\/\/us-master.example.com:80\/"
> ],
>   "log_meta": "true",
>   "log_data": "true"},
> { "name": "us-secondary",
>   "endpoints": [
> "http:\/\/us-master.example.com:80\/"
> ],
>   "log_meta": "true",
>   "log_data": "true"}
> ],
>   "placement_targets": [
>{
>  "name": "default-placement",
>  "tags": []
>}
>   ],
>   "default_placement": "default-placement"}
>
>
> I also get the above when I navigate to
> http://us-master.example.com/admin/config  and
> http://us-secondary.example.com/admin/config .
>
> us-master zone looks like this:
>
> { "domain_root": ".us-master.domain.rgw",
>   "control_pool": ".us-master.rgw.control",
>   "gc_pool": ".us-master.rgw.gc",
>   "log_pool": ".us-master.log",
>   "intent_log_pool": ".us-master.intent-log",
>   "usage_log_pool": ".us-master.usage",
>   "user_keys_pool": ".us-master.users",
>   "user_email_pool": ".us-master.users.email",
>   "user_swift_pool": ".us-master.users.swift",
>   "user_uid_pool": ".us-master.users.uid",
>   "system_key": { "access_key": "EA02UO07DA8JJJX7ZIPJ", "secret_key":
> "InmPlbQhsj7dqjdNabqkZaqR8ShWC6fS0XVo"},
>   "placement_pools": [
> { "key": "default-placement",
>   "val": { "index_pool": ".us-master.rgw.buckets.index",
>"data_pool": ".us-master.rgw.buckets"}
> }
>   ]
> }
>
>
> us-secondary zone:
>
> { "domain_root": ".us-secondary.domain.rgw",
>   "control_pool": ".us-secondary.rgw.control",
>   "gc_pool": ".us-secondary.rgw.gc",
>   "log_pool": ".us-secondary.log",
>   "intent_log_pool": ".us-secondary.intent-log",
>   "usage_log_pool": ".us-secondary.usage",
>   "user_keys_pool": ".us-secondary.users",
>   "user_email_pool": ".us-secondary.users.email",
>   "user_swift_pool": ".us-secondary.users.swift",
>   "user_uid_pool": ".us-secondary.users.uid",
>   "system_key": { "access_key": "EA02UO07DA8JJJX7ZIPJ", "secret_key":
> "InmPlbQhsj7dqjdNabqkZaqR8ShWC6fS0XVo"},
>   "placement_pools": [
> { "key": "default-placement",
>   "val": { "index_pool": ".us-secondary.rgw.buckets.index",
>"data_pool": ".us-secondary.rgw.buckets"}
> }
>   ]
> }
>
>
> The us-master user exists on the us-master cluster gateway, and the
> us-secondary user exists on the us-secondary cluster gateway. Both gateway
> users have the same access and secret key. Should the us-master and
> us-secondary users exist on both clusters?
>
> I can resolve us-master.example.com and us-secondary.example.com from
> both gateways.
>
>
> Thanks
>
>
> On 09/07/14 22:20, Craig Lewis wrote:
>
>  Just to ask a couple obvious questions...
>
>  You didn't accidentally put 'http://us-secondary.example.comhttp://
> us-secondary.example.com/' in any of your region or zone configuration
> files?  The fact that it's missing the :80 makes me think it's getting that
> URL from someplace that isn't the command line.
>
>  You do have both system users on both clusters, with the same access and
> secret keys?
>
>  You can resolve us-secondary.example.com. from this host?
>
>
>  I tested URLs of the form http://us-secondary.example.com/ and
> http://us-secondary.example.com:80 in my setup, and both work fine.
>
>
>
> On Wed, Jul 9, 2014 at 3:56 AM, Peter  wrote:
>
>> Thank you for your reply. I am running ceph 0.80.1, radosgw-agent 1.2 on
>> Ubuntu 14.04 LTS (GNU/Linux 3.13.0-24-generic x86_64). I also ran into
>> this same issue with Ubuntu 12.04 previously.
>> There are no special characters in the access or secret key (I've had
>> issues with this before, so I made sure of this).
>>
>> Here is the output from the Python interpreter:
>>
>>  Python 2.7.6 (default, Mar 22 2014, 22:59:56)
>>> [GCC 4.8.2] on linux2
>>> Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> >>> import urlparse
>>> >>> result = urlparse.urlparse('http://us-secondary.example.com:80')
>>> >>> print result.hostname, result.port
>>>  us-secondary.example.com 80
>>>
>>
>> That looks OK to me.
>>
>>
>>
>> On 07/07/14 22:57, Jo

Re: [ceph-users] problem in ceph installation

2014-07-21 Thread pragya jain
Please, can somebody help me with installing ceph? I am installing it on an Ubuntu 
14.04 desktop VM.

Currently, I am using the link 
http://eu.ceph.com/docs/wip-6919/start/quick-start/
But it failed and I got the following error:
W: Failed to fetch 
bzip2:/var/lib/apt/lists/partial/in.archive.ubuntu.com_ubuntu_dists_trusty-updates_main_binary-amd64_Packages
  Hash Sum mismatch


Please help to resolve it.
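
(The "Hash Sum mismatch" error is an apt package-index problem with the Ubuntu
mirror rather than a ceph problem; a commonly suggested workaround, sketched
here with standard apt commands, is to clear the cached package lists and
retry the update:)

    sudo rm -rf /var/lib/apt/lists/*
    sudo apt-get clean
    sudo apt-get update
    sudo apt-get install ceph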

Regards 
Pragya Jain



On Thursday, 17 July 2014 12:40 PM, pragya jain  wrote:
 

>
>
>Hi all,
>
>
>I am installing ceph on an Ubuntu 14.04 desktop 64-bit VM using the link 
>http://eu.ceph.com/docs/wip-6919/start/quick-start/
>
>
>But I got the following error while installing ceph:
>
>
>-
>root@prag2648-VirtualBox:~# sudo apt-get update && sudo apt-get install ceph
>Ign http://security.ubuntu.com trusty-security InRelease  
>Ign http://in.archive.ubuntu.com trusty InRelease
>Hit http://security.ubuntu.com trusty-security Release.gpg    
>Ign http://in.archive.ubuntu.com trusty-updates InRelease 
>Hit http://security.ubuntu.com trusty-security Release    
>Ign http://in.archive.ubuntu.com trusty-backports InRelease   
>Hit http://in.archive.ubuntu.com trusty Release.gpg   
>Hit http://security.ubuntu.com trusty-security/main Sources   
>Hit http://in.archive.ubuntu.com trusty-updates Release.gpg   
>Hit http://security.ubuntu.com trusty-security/restricted Sources
>Ign http://extras.ubuntu.com trusty InRelease
>Hit http://in.archive.ubuntu.com trusty-backports Release.gpg 
>Hit http://security.ubuntu.com trusty-security/universe Sources
>Hit http://extras.ubuntu.com trusty Release.gpg   
>Hit http://in.archive.ubuntu.com trusty Release   
>Hit http://security.ubuntu.com trusty-security/multiverse Sources
>Hit http://extras.ubuntu.com trusty Release   
>Hit http://in.archive.ubuntu.com trusty-updates Release   
>Hit http://ceph.com trusty InRelease  
>Hit http://extras.ubuntu.com trusty/main Sources  
>Hit http://security.ubuntu.com trusty-security/main amd64 Packages
>Hit http://in.archive.ubuntu.com trusty-backports Release 
>Hit http://extras.ubuntu.com trusty/main amd64 Packages   
>Hit http://security.ubuntu.com trusty-security/restricted amd64 Packages
>Hit http://ceph.com trusty/main amd64 Packages    
>Hit http://extras.ubuntu.com trusty/main i386 Packages
>Hit http://in.archive.ubuntu.com trusty/main Sources  
>Hit http://security.ubuntu.com trusty-security/universe amd64 Packages
>Hit http://ceph.com trusty/main i386 Packages 
>Hit http://in.archive.ubuntu.com trusty/restricted Sources    
>Hit http://security.ubuntu.com trusty-security/multiverse amd64 Packages
>Hit http://in.archive.ubuntu.com trusty/universe Sources  
>Hit http://security.ubuntu.com trusty-security/main i386 Packages
>Hit http://in.archive.ubuntu.com trusty/multiverse Sources    
>Hit http://security.ubuntu.com trusty-security/restricted i386 Packages
>Hit http://in.archive.ubuntu.com trusty/main amd64 Packages   
>Hit http://security.ubuntu.com trusty-security/universe i386 Packages
>Hit http://in.archive.ubuntu.com trusty/restricted amd64 Packages
>Hit http://security.ubuntu.com trusty-security/multiverse i386 Packages
>Ign http://extras.ubuntu.com trusty/main Translation-en_IN    
>Hit http://in.archive.ubuntu.com trusty/universe amd64 Packages
>Ign http://extras.ubuntu.com trusty/main Translation-en   
>Hit http://in.archive.ubuntu.com trusty/multiverse amd64 Packages
>Hit http://security.ubuntu.com trusty-security/main Translation-en
>Hit http://in.archive.ubuntu.com trusty/main i386 Packages
>Hit http://in.archive.ubuntu.com trusty/restricted i386 Packages
>Hit http://in.archive.ubuntu.com trusty/universe i386 Packages
>Hit http://in.archive.ubuntu.com trusty/multiverse i386 Packages
>Hit http://security.ubuntu.com trusty-security/restricted Translation-en
>Hit http://in.archive.ubuntu.com trusty/main Translation-en   
>Hit http://security.ubuntu.com trusty-security/universe Translation-en
>Hit http://in.archive.ubuntu.com trusty/multiverse Translation-en
>Hit http://in.archive.ubuntu.com trusty/restricted Translation-en
>Hit http://in.archive.ubuntu.com trusty/universe Translation-en
>Hit http://in.archive.ubuntu.com trusty-updates/main Sources  
>Hit http://in.archive.ubuntu.com trusty-updates/restricted Sources
>Ign http://ceph.com trusty/main Translation-en_IN 
>Hit http://in.archive.ubuntu.com trusty-updates/universe Sources
>Hit http://in.archive.ubuntu.com trusty-updates/multiverse Sources
>Ign http://ceph.com trusty/main Translation-en    
>Get:1 http://in.archive.ubuntu.com trusty-updates/main amd64 Packages [218 kB]
>Hit http://security.ubuntu.com trusty-security/multiverse Translation-en
>Ign http://security.ubuntu.com trusty-security/main Translation-en_IN
>Ign http://security.ubunt

Re: [ceph-users] osd crashed with assert at add_log_entry

2014-07-21 Thread Gregory Farnum
I'll see what I can do with this tomorrow, but it can be difficult to deal
with commits from an out-of-tree build, or even with commits that got
merged in following other changes (which is what happened with this
commit). I didn't see any obviously relevant commits in the git history, so
I want to track it down.
-Greg

Software Engineer #42 @ http://inktank.com | http://ceph.com
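
(A minimal sketch of that kind of check, assuming an upstream ceph.git clone;
the commit id from the report may simply not exist in the upstream tree:)

    # does the commit from the crash report exist upstream?
    git cat-file -e 2dbd85c94cf27a1ff0419c5ea9359af7fe30e9b6 \
        && echo "commit is upstream" || echo "out-of-tree commit"

    # recent upstream changes touching the file with the failing assert
    git log --oneline v0.80.4 -- src/osd/PG.cc | head -20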


On Mon, Jul 21, 2014 at 6:35 AM, Sahana Lokeshappa <
sahana.lokesha...@sandisk.com> wrote:

>  Hi All,
>
>
>
> I have a ceph cluster with 3 monitors and 3 OSD nodes (3 OSDs in each node).
>
>
>
> While IO was going on, I rebooted an OSD node which includes osd.6,
> osd.7, and osd.8.
>
>
>
> osd.0 and osd.2 crashed with assert(e.version > info.last_update) in
> PG::add_log_entry:
>
>
>
> 2014-07-17 17:54:14.893962 7f91f3660700 -1 osd/PG.cc: In function 'void
> PG::add_log_entry(pg_log_entry_t&, ceph::bufferlist&)' thread 7f91f3660700
> time 2014-07-17 17:54:13.252064
>
> osd/PG.cc: 2619: FAILED assert(e.version > info.last_update)
>
> ceph version andisk-sprint-2-drop-3-390-g2dbd85c
> (2dbd85c94cf27a1ff0419c5ea9359af7fe30e9b6)
>
> 1: (PG::add_log_entry(pg_log_entry_t&, ceph::buffer::list&)+0x481)
> [0x733a61]
>
> 2: (PG::append_log(std::vector std::allocator >&, eversion_t,
> ObjectStore::Transaction&, bool)+0xdf) [0x74483f]
>
> 3:
> (ReplicatedBackend::sub_op_modify(std::tr1::shared_ptr)+0xcfe)
> [0x8193be]
>
> 4:
> (ReplicatedBackend::handle_message(std::tr1::shared_ptr)+0x4a6)
> [0x904586]
>
> 5: (ReplicatedPG::do_request(std::tr1::shared_ptr,
> ThreadPool::TPHandle&)+0x2db) [0x7aedcb]
>
> 6: (OSD::dequeue_op(boost::intrusive_ptr,
> std::tr1::shared_ptr, ThreadPool::TPHandle&)+0x459)
> [0x635719]
>
> 7: (OSD::ShardedOpWQ::_process(unsigned int,
> ceph::heartbeat_handle_d*)+0x346) [0x635ce6]
>
> 8: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x8ce)
> [0xa4a1ce]
>
> 9: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0xa4c420]
>
> 10: (()+0x8182) [0x7f920f579182]
>
> 11: (clone()+0x6d) [0x7f920d91a30d]
>
>
>
>
>
> Raised tracker : http://tracker.ceph.com/issues/8887
>
>
>
> Logs are attached to tracker.
>
>
>
>
>
>
>
> Thanks
>
> *Sahana Lokeshappa*
>
> * Test Development Engineer I *
> 3rd Floor, Bagmane Laurel, Bagmane Tech Park
>
> C V Raman nagar, Bangalore 560093
> T: +918042422283
>
> sahana.lokesha...@sandisk.com
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com