Re: [ceph-users] osd crashed with assert at add_log_entry

2014-07-21 Thread Gregory Farnum
I'll see what I can do with this tomorrow, but it can be difficult to deal with commits from an out-of-tree build, or even with commits that got merged in following other changes (which is what happened with this commit). I didn't see any obviously relevant commits in the git history, so I want to

Re: [ceph-users] problem in ceph installation

2014-07-21 Thread pragya jain
Please, somebody help me with installing Ceph. I am installing it on an Ubuntu 14.04 desktop VM. Currently, I am using the link  http://eu.ceph.com/docs/wip-6919/start/quick-start/ But it failed and I got the following error W: Failed to fetch bzip2:/var/lib/apt/lists/partial/in.archive.ubuntu.com_ub

Re: [ceph-users] radosgw-agent failed to parse

2014-07-21 Thread Craig Lewis
I was hoping for some easy fixes :-P I created two system users, in both zones. Each user has a different access and secret, but I copied the access and secret from the primary to the secondary. I can't imagine that this would cause the problem you're seeing, but it is something different from the

Re: [ceph-users] Issues with federated gateway sync

2014-07-21 Thread Justice London
I did. It was created as such on the east/west location (per the example FG configuration): radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" --name client.radosgw.us-east-1 --system radosgw-admin user create --uid="us-west" --display-name="Region-US Zone-West" --name
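The user-creation commands quoted above can be sketched in full as below. This is a hedged reconstruction: the preview cuts off after the second `--name`, so the `client.radosgw.us-west-1` instance name is an assumption inferred from the east-side command, not confirmed by the message.

```shell
# System users for a federated gateway, one per zone, as described above.
# The us-west instance name is assumed by symmetry with the us-east command.
radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" \
    --name client.radosgw.us-east-1 --system
radosgw-admin user create --uid="us-west" --display-name="Region-US Zone-West" \
    --name client.radosgw.us-west-1 --system
```

For metadata/data sync to work, the same access and secret keys must then exist for the system users on both sides.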

Re: [ceph-users] Issues with federated gateway sync

2014-07-21 Thread Yehuda Sadeh
On Mon, Jul 21, 2014 at 1:07 PM, Justice London wrote: > Hello, I am having issues getting FG working between east/west data-center > test configurations. I have the sync default.conf configured like this: > > source: "http://10.20.2.39:80" > src_zone: "us-west-1" > src_access_key: > src_secret_

[ceph-users] Issues with federated gateway sync

2014-07-21 Thread Justice London
Hello, I am having issues getting FG working between east/west data-center test configurations. I have the sync default.conf configured like this: source: "http://10.20.2.39:80" src_zone: "us-west-1" src_access_key: src_secret_key: destination: "http://10.30.3.178:80" dest_zone: "us-east-1" dest_access_key:
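Laid out as a file, the radosgw-agent configuration described above would look roughly like the sketch below. The endpoints and zone names are taken from the message; the key values are placeholders, since the original preview leaves them blank.

```yaml
# Hedged sketch of a radosgw-agent sync config (west -> east), per the
# message above. Replace the placeholder keys with the system users' keys.
src_zone: us-west-1
source: http://10.20.2.39:80
src_access_key: <west-system-user-access-key>
src_secret_key: <west-system-user-secret-key>
dest_zone: us-east-1
destination: http://10.30.3.178:80
dest_access_key: <east-system-user-access-key>
dest_secret_key: <east-system-user-secret-key>
log_file: /var/log/radosgw/radosgw-sync.log
```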

[ceph-users] Toshiba / Sandisk ssds

2014-07-21 Thread Stefan Priebe - Profihost AG
Hi all, has anybody already used any Toshiba or SanDisk SSDs for Ceph? We're evaluating alternatives to our current consumer SSD cluster and I would be happy to get some feedback on those drives. Greets, Stefan. Excuse my typo, sent from my mobile phone.

Re: [ceph-users] Possible to schedule deep scrub to nights?

2014-07-21 Thread Gregory Farnum
On Sun, Jul 20, 2014 at 2:05 PM, David wrote: > Thanks! > > Found this thread, guess I’ll do something like this then. > https://www.mail-archive.com/ceph-users@lists.ceph.com/msg09984.html > > Question though - will it still obey the scrubbing variables? Say I’ll > schedule 1000 PGs during night,
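A minimal sketch of the night-scheduling approach discussed in this thread: trigger deep scrubs from cron so they start during off-hours. This assumes you simply want every OSD deep-scrubbed; the scrub throttles (e.g. `osd max scrubs`) still apply when the scrubs actually run.

```shell
# Hedged sketch: kick off deep scrubs for all OSDs at night via cron,
# e.g. with a crontab entry like:
#   0 1 * * *  /usr/local/bin/deep-scrub-all.sh
# The loop below is the body of that script.
for osd in $(ceph osd ls); do
    ceph osd deep-scrub "$osd"
done
```

This only *initiates* the scrubs at night; how many run concurrently is still governed by the cluster's scrub settings, which is exactly the question raised above.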

[ceph-users] Ceph Turns 10 Twitter Photo Contest

2014-07-21 Thread Patrick McGarry
Hey cephers, Just wanted to let you guys know that we are launching a Twitter photo contest as a part of OSCON that will run through the end of the month. If you tweet a photo of how you are celebrating Ceph's 10th birthday to @ceph w/ #cephturns10, you could win a desktop Ceph cluster built by ou

Re: [ceph-users] Are OSDs based on VFS?

2014-07-21 Thread Jaemyoun Lee
Thanks for your rapid reply - Jae On Tue, Jul 22, 2014 at 1:28 AM, Kyle Bader wrote: > > I wonder that OSDs use system calls of Virtual File System (i.e. open, > read, > > write, etc) when they access disks. > > > > I mean ... Could I monitor I/O command requested by OSD to disks if I > > moni

Re: [ceph-users] Are OSDs based on VFS?

2014-07-21 Thread Jaemyoun Lee
Thanks for your rapid reply - Jae On Tue, Jul 22, 2014 at 1:29 AM, Gregory Farnum wrote: > On Monday, July 21, 2014, Jaemyoun Lee wrote: > >> Hi all, >> >> I wonder that OSDs use system calls of Virtual File System (i.e. open, >> read, write, etc) when they access disks. >> >> I mean ... Coul

Re: [ceph-users] Are OSDs based on VFS?

2014-07-21 Thread Gregory Farnum
On Monday, July 21, 2014, Jaemyoun Lee wrote: > Hi all, > > I wonder that OSDs use system calls of Virtual File System (i.e. open, > read, write, etc) when they access disks. > > I mean ... Could I monitor I/O command requested by OSD to disks if I > monitor VFS? > Yes. The default configuration

Re: [ceph-users] recover ceph journal disk

2014-07-21 Thread Gregory Farnum
On Monday, July 21, 2014, Cristian Falcas wrote: > Hello, > > We have a test project where we are using ceph+openstack. > > Today we had some problems with this setup and we had to force reboot the > server. After that, the partition where we keep the ceph journal could not > mount. > > When we c

Re: [ceph-users] Are OSDs based on VFS?

2014-07-21 Thread Kyle Bader
> I wonder that OSDs use system calls of Virtual File System (i.e. open, read, > write, etc) when they access disks. > > I mean ... Could I monitor I/O command requested by OSD to disks if I > monitor VFS? Ceph OSDs run on top of a traditional filesystem, so long as they support xattrs - xfs by de
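Since the answer above is that a FileStore OSD sits on a normal filesystem (xfs by default), its disk access is visible as ordinary VFS-level system calls. A hedged sketch of how one could watch them, assuming a single `ceph-osd` daemon on the host (with several daemons, pick the specific PID instead of `pidof`):

```shell
# Attach strace to a running ceph-osd and watch its filesystem syscalls.
# -f follows threads; the trace set covers the calls asked about above.
strace -f -e trace=open,read,write,fsync -p "$(pidof ceph-osd)"
```

For block-level rather than VFS-level observation, `blktrace` on the OSD's data device is the usual alternative.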

[ceph-users] Are OSDs based on VFS?

2014-07-21 Thread Jaemyoun Lee
Hi all, I wonder whether OSDs use the system calls of the Virtual File System (i.e. open, read, write, etc.) when they access disks. I mean: could I monitor the I/O commands requested by an OSD to its disks if I monitor the VFS? - Jae -- 이재면 Jaemyoun Lee CPS Lab. ( Cyber-Physical Systems Laboratory in Hanyang Uni

Re: [ceph-users] Is it possible to use one SSD journal hard disk for 3 OSDs?

2014-07-21 Thread 不坏阿峰
Thanks a lot for helping me confirm that an SSD speeds up the OSD's journal to improve performance. Each server needs its own separate SSD. 2014-07-21 21:50 GMT+07:00 Iban Cabrillo : > Yes, Indra is right. OSDs and journal must be on the same server. > Regards, I > On 21/07/2014 16:38, "Indra Pramana" wrote: > > AF

[ceph-users] recover ceph journal disk

2014-07-21 Thread Cristian Falcas
Hello, We have a test project where we are using ceph+openstack. Today we had some problems with this setup and we had to force-reboot the server. After that, the partition where we keep the ceph journal could not be mounted. When we checked it, we got this: btrfsck /dev/mapper/vg_ssd-ceph_ssd Check
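When only the journal device is damaged, a FileStore OSD journal can usually be recreated rather than recovered. A hedged sketch, assuming osd.0 and a sysvinit-era setup (the flush step only works while the old journal is still readable; if it is not, the OSD may need to recover from its peers after restart):

```shell
# Stop the OSD before touching its journal.
service ceph stop osd.0
# Flush any pending journal entries into the object store (skip if the
# journal device is unreadable).
ceph-osd -i 0 --flush-journal
# Create a fresh journal on the repaired or replaced device.
ceph-osd -i 0 --mkjournal
service ceph start osd.0
```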

[ceph-users] Strange radosgw error

2014-07-21 Thread Fabrizio G. Ventola
Hello everyone, I'm having a weird issue with radosgw that previously was working perfectly. With sudo /usr/bin/radosgw -d -c /etc/ceph/ceph.conf --debug_ms 1, I obtain (IPs obfuscated): 2014-07-21 17:24:01.034677 7fc5e0a4f700 1 -- :0/1002111 <== osd.10 :6800/1246 3 osd_op_reply(5 zone_in

Re: [ceph-users] Is it possible to use one SSD journal hard disk for 3 OSDs?

2014-07-21 Thread Iban Cabrillo
Yes, Indra is right. OSDs and journal must be on the same server. Regards, I On 21/07/2014 16:38, "Indra Pramana" wrote: > AFAIK, it's not possible. A journal should be on the same server as the > OSD it serves. CMIIW. > > Thank you. > > > On Mon, Jul 21, 2014 at 10:34 PM, 不坏阿峰 wrote: > >> th

Re: [ceph-users] Is it possible to use one SSD journal hard disk for 3 OSDs?

2014-07-21 Thread Indra Pramana
AFAIK, it's not possible. A journal should be on the same server as the OSD it serves. CMIIW. Thank you. On Mon, Jul 21, 2014 at 10:34 PM, 不坏阿峰 wrote: > thanks for ur reply. > > in ur case, u deploy 3 osds in one server. my case is that 3 osds in 3 > server. > how to do ? > > > 2014-07-21 17:

Re: [ceph-users] Is it possible to use one SSD journal hard disk for 3 OSDs?

2014-07-21 Thread 不坏阿峰
Thanks for your reply. In your case, you deploy 3 OSDs in one server. In my case, the 3 OSDs are in 3 servers. How do I do that? 2014-07-21 17:59 GMT+07:00 Iban Cabrillo : > Dear, > I am not an expert, but yes, this is possible. > I have a RAID1 SAS disk journal for 3 journal SATA osds (maybe this is not > th

Re: [ceph-users] radosgw-agent failed to parse

2014-07-21 Thread Peter
Typo, it should read: { "name": "us-secondary", "endpoints": [ "http:\/\/us-secondary.example.com:80\/"], "log_meta": "true", "log_data": "true"} in the region config below. On 21/07/14 15:13, Peter wrote: Hello again, I couldn't find 'http://us-secondary.example.comh
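Reflowed for readability, the corrected zone entry from the message reads as below (the escaped slashes are how radosgw-admin emits URLs in its JSON dumps; keeping or dropping them does not change the parsed value):

```json
{
  "name": "us-secondary",
  "endpoints": ["http:\/\/us-secondary.example.com:80\/"],
  "log_meta": "true",
  "log_data": "true"
}
```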

Re: [ceph-users] radosgw-agent failed to parse

2014-07-21 Thread Peter
Hello again, I couldn't find 'http://us-secondary.example.comhttp://us-secondary.example.com/ ' in any zone or region config files. How could it be getting the URL from someplace else if I am specifying it as a command line option after radosgw-agent? Here i
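The doubled URL in the error above is a classic symptom of two endpoint strings being concatenated somewhere in the region/zone config. A hedged sketch of a standalone sanity check one could run over candidate endpoint values before feeding them to the agent (`check_endpoint` is a hypothetical helper, not part of radosgw-agent):

```python
from urllib.parse import urlparse

def check_endpoint(url):
    """Return True if url looks like a single well-formed http(s) endpoint.

    A doubled URL such as 'http://hosthttp://host/' fails this check
    because a second scheme reappears after the first 'scheme://'.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        return False
    rest = url[len(parsed.scheme) + 3:]  # strip 'scheme://'
    return "http://" not in rest and "https://" not in rest

# The concatenated URL from the error is rejected; a normal endpoint passes:
print(check_endpoint("http://us-secondary.example.comhttp://us-secondary.example.com/"))  # False
print(check_endpoint("http://us-secondary.example.com/"))  # True
```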

Re: [ceph-users] MDS crash when running a standby one

2014-07-21 Thread John Spray
For the question of OSD failures causing MDS crashes, there are many places where the MDS asserts that OSD operations succeeded (grep the code for "assert(r == 0)") -- we could probably do a better job of handling these, e.g. log the OSD error and respawn rather than assert'ing. John On Sat, Jul

[ceph-users] osd crashed with assert at add_log_entry

2014-07-21 Thread Sahana Lokeshappa
Hi All, I have a ceph cluster with 3 monitors and 3 OSD nodes (3 OSDs in each node). While IO was going on, I rebooted an OSD node which hosts osd.6, osd.7 and osd.8. osd.0 and osd.2 crashed with assert(e.version > info.last_update): PG:add_log_entry 2014-07-17 17:54:14.893962 7f91f3660700 -1 osd

[ceph-users] ceph-extras for rhel7

2014-07-21 Thread Simon Ironside
Hi, Is there going to be a ceph-extras repo for RHEL 7? Unless I'm very much mistaken, the RHEL 7.0 release qemu-kvm packages don't support RBD. Cheers, Simon.

Re: [ceph-users] Is it possible to use one SSD journal hard disk for 3 OSDs?

2014-07-21 Thread Iban Cabrillo
Dear, I am not an expert, but yes, this is possible. I have a RAID1 SAS disk journal for 3 journal SATA osds (maybe this is not the smartest solution). When you prepare the OSDs, for example: ceph-deploy --verbose osd prepare cephosd01:/dev/"sdd_device":"path_to journal_ssddisk_X" path_to
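The prepare invocation above can be made concrete for the one-SSD, three-OSD case: partition the SSD and give each OSD its own journal partition. A hedged sketch; the hostname and device names (`/dev/sdb`-`/dev/sdd` for data, `/dev/sdf` for the SSD) are assumptions to be adjusted to the actual hardware.

```shell
# One SSD (/dev/sdf) serving the journals of three OSDs on the same host.
# First carve out one journal partition per OSD on the SSD (sizes are
# illustrative), then point each data disk at its own partition:
ceph-deploy --verbose osd prepare cephosd01:/dev/sdb:/dev/sdf1
ceph-deploy --verbose osd prepare cephosd01:/dev/sdc:/dev/sdf2
ceph-deploy --verbose osd prepare cephosd01:/dev/sdd:/dev/sdf3
```

Note that, as stated elsewhere in this thread, this only works for OSDs on the *same* host as the SSD; a journal cannot live on a different server than its OSD.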

[ceph-users] Is it possible to use one SSD journal hard disk for 3 OSDs?

2014-07-21 Thread 不坏阿峰
I have only one SSD and want to improve Ceph performance. Is it possible to use one SSD journal hard disk for 3 OSDs? If it is possible, how do I configure it? Many thanks.

Re: [ceph-users] ceph firefly 0.80.4 unable to use rbd map and ceph fs mount

2014-07-21 Thread Ilya Dryomov
On Mon, Jul 21, 2014 at 1:58 PM, Wido den Hollander wrote: > On 07/21/2014 11:32 AM, 漆晓芳 wrote: >> >> Hi,all: >> I 'm dong tests with firefly 0.80.4,I want to test the performance >> with tools such as FIO,iozone,when I decided to test the rbd storage >> performance with fio,I ran commands on

Re: [ceph-users] ceph firefly 0.80.4 unable to use rbd map and ceph fs mount

2014-07-21 Thread Wido den Hollander
On 07/21/2014 11:32 AM, 漆晓芳 wrote: Hi all: I'm doing tests with firefly 0.80.4. I want to test the performance with tools such as fio and iozone. When I decided to test the rbd storage performance with fio, I ran commands on a client node as follows: Which kernel on the client? Can you try the t

[ceph-users] ceph firefly 0.80.4 unable to use rbd map and ceph fs mount

2014-07-21 Thread 漆晓芳
Hi all: I'm doing tests with firefly 0.80.4. I want to test the performance with tools such as fio and iozone. When I decided to test the rbd storage performance with fio, I ran commands on a client node as follows: rbd create img1 --size 1024 --pool data (this command went well) rbd map i
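A hedged sketch of the test sequence described above, plus the usual firefly-era workaround asked about in the replies: the default CRUSH tunables in 0.80.x can be newer than what an older kernel RBD client understands, which makes `rbd map` fail. Image name, size and pool are taken from the message.

```shell
# Create and map a test image, as in the message above.
rbd create img1 --size 1024 --pool data
rbd map img1 --pool data
# If map fails with an old kernel client (feature-set mismatch in dmesg),
# one common workaround is to fall back to legacy CRUSH tunables:
#   ceph osd crush tunables legacy
# After a successful map, the device appears as /dev/rbd0 and can be
# benchmarked with fio.
```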