[ceph-users] recover ceph journal disk

2014-07-21 Thread Cristian Falcas
node? We don't care very much about the data from the last minutes before the crash. Best regards, Cristian Falcas

[ceph-users] fs as btrfs and ceph journal

2014-07-25 Thread Cristian Falcas
Hello, I'm using btrfs for the OSDs and want to know if it still helps to have the journal on a faster drive. From what I've read, I'm under the impression that with btrfs's own journaling, the OSD journal doesn't do much work anymore. Best regards,
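For reference, moving an OSD's journal to a faster device is normally just a ceph.conf setting; a minimal sketch, assuming osd.0 and an illustrative SSD partition path:

    [osd.0]
    # point the journal at a partition on the faster device (device path is an assumption)
    osd journal = /dev/disk/by-id/ata-EXAMPLE-SSD-part1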

Re: [ceph-users] ceph.com centos7 repository ?

2014-07-26 Thread Cristian Falcas
Just to let you know that the qemu packages from CentOS don't have rbd support compiled in. You will need to build your own packages from the -ev version from Red Hat for this. On Thu, Jul 10, 2014 at 4:58 PM, Erik Logtenberg wrote: > Hi, > > RHEL7 repository works just as well. CentOS 7 is effectively a
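A quick way to check whether a given qemu build includes rbd support is to look at the formats qemu-img reports; a small sketch (the grep on the help output is an assumption, since the layout varies between versions):

    # list the formats this qemu-img build supports and look for rbd
    qemu-img --help | grep -i 'supported formats' | grep -qw rbd \
        && echo "rbd support present" \
        || echo "rbd support missing"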

[ceph-users] snapshoting on btrfs vs xfs

2015-02-04 Thread Cristian Falcas
and reading all the horror stories here and on the btrfs mailing list. Is the snapshotting performed by ceph or by the fs? Can we switch to xfs and keep the same capabilities: instant snapshot + instant boot from snapshot? Best regards, Cristian Falcas
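For context, RBD snapshots and clones (which is what an OpenStack + Ceph setup typically uses for snapshot and boot-from-snapshot) are handled at the RADOS/RBD layer rather than by the OSD filesystem, so they are available regardless of btrfs or xfs underneath; a minimal sketch with assumed pool and image names:

    # snapshot an image, protect it, and clone it (pool/image names are illustrative)
    rbd snap create images/vm-disk@snap1
    rbd snap protect images/vm-disk@snap1
    rbd clone images/vm-disk@snap1 volumes/vm-disk-clone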

Re: [ceph-users] snapshoting on btrfs vs xfs

2015-02-04 Thread Cristian Falcas
eed. On Wed, Feb 4, 2015 at 11:22 PM, Sage Weil wrote: > On Wed, 4 Feb 2015, Cristian Falcas wrote: >> Hi, >> >> We have an openstack installation that uses ceph as the storage backend. >> >> We use mainly snapshot and boot from snapshot from an original >>

Re: [ceph-users] snapshoting on btrfs vs xfs

2015-02-04 Thread Cristian Falcas
We want to use this script as a service for start/stop (but it wasn't tested yet):

#!/bin/bash
# chkconfig: - 50 90
# description: make a journal for osd.0 in ram

start () {
    # create the in-ram journal only if it does not exist yet
    [ -f /dev/shm/osd.0.journal ] || ceph-osd -i 0 --mkjournal
}

stop () {
    # stop the daemon first, then flush the in-ram journal back to the store
    service ceph stop osd.0 && ceph-osd -i 0 --flush-journal
}
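To actually run as a chkconfig-style init script it would also need a small dispatcher on $1; a minimal, untested sketch building on the functions above:

    case "$1" in
        start) start ;;
        stop)  stop ;;
        *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
    esac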

[ceph-users] osd crashing

2015-06-08 Thread Cristian Falcas
without losing everything? Best regards, Cristian Falcas

[ceph-users] osd.1 marked down after no pg stats for ~900 seconds

2015-06-21 Thread Cristian Falcas
.94.2-0.el7.centos.x86_64 python-cephfs-0.94.2-0.el7.centos.x86_64 I don't know if that matters, but the physical machine is a ceph+openstack all-in-one installation. Thank you, Cristian Falcas

Re: [ceph-users] osd.1 marked down after no pg stats for ~900 seconds

2015-06-21 Thread Cristian Falcas
clean, 384 active+clean; 502 GB data, 183 GB used, 2279 GB / 2469 GB avail; 0 B/s rd, 24171 B/s wr, 4 On Sun, Jun 21, 2015 at 6:19 PM, Cristian Falcas wrote: > Hello, > > When doing a fio test on a vm, after some time the osd goes down with this > error: > > osd.1 marke
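The exact fio job that triggered this isn't preserved in the snippet; a random-write run of roughly the following shape is a reasonable assumption for the kind of load being generated (all parameters and the file path are illustrative):

    fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
        --bs=4k --iodepth=32 --size=4G --runtime=300 --time_based \
        --filename=/root/fio-testfile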

[ceph-users] error when executing ceph osd pool set foo-hot cache-mode writeback

2014-10-28 Thread Cristian Falcas
ceph osd pool set ssd_cache cache_target_dirty_ratio .4
ceph osd pool set ssd_cache cache_target_full_ratio .8

Best regards, Cristian Falcas

Re: [ceph-users] error when executing ceph osd pool set foo-hot cache-mode writeback

2014-10-28 Thread Cristian Falcas
It's from here: https://ceph.com/docs/v0.79/dev/cache-pool/#cache-mode That page shows both commands. On Tue, Oct 28, 2014 at 6:03 PM, Gregory Farnum wrote: > On Tue, Oct 28, 2014 at 3:24 AM, Cristian Falcas > wrote: >> Hello, >> >> In the documentation abou
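For reference, the cache mode can also be set through the tier subcommands described on that same cache-pool page; a sketch of the usual setup sequence, assuming a base pool foo and a cache pool foo-hot:

    ceph osd tier add foo foo-hot
    ceph osd tier cache-mode foo-hot writeback
    ceph osd tier set-overlay foo foo-hot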

[ceph-users] journal on entire ssd device

2014-10-29 Thread Cristian Falcas
Hello, Will there be any benefit in making the journal the size of an entire SSD disk? I was also thinking of increasing "journal max write entries" and "journal queue max ops". But will it matter, or will it have the same effect as a 4 GB journal on the same SSD? Thank you,
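A hedged sketch of what tuning those settings in ceph.conf could look like; the values below are purely illustrative assumptions, not recommendations from this thread:

    [osd]
    osd journal size = 10240          # journal size in MB
    journal max write entries = 1000
    journal queue max ops = 3000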

[ceph-users] osd 100% cpu, very slow writes

2014-10-30 Thread Cristian Falcas
journal = /dev/shm/osd.1.journal
journal dio = false

Test performed with dd:

sync
dd bs=4M count=512 if=/home/user/backup_2014_10_27.raw of=/var/lib/ceph/osd/ceph-1/backup_2014_10_27.raw conv=fdatasync
512+0 records in
512+0 records out
2147483648 bytes (2.1 GB) copied, 16.3971 s, 131 MB/s

Thank you,

Re: [ceph-users] osd 100% cpu, very slow writes

2014-10-30 Thread Cristian Falcas
13 PM, Cristian Falcas wrote: > Hello, > > I have a one-node ceph installation and when trying to import an > image using qemu, it works fine for some time, and after that the osd > process starts using ~100% of CPU and the number of op/s increases and > the writes decrease dramat

Re: [ceph-users] osd 100% cpu, very slow writes

2014-11-05 Thread Cristian Falcas
, On Wed, Nov 5, 2014 at 7:51 PM, Gregory Farnum wrote: > On Thu, Oct 30, 2014 at 8:13 AM, Cristian Falcas > wrote: >> Hello, >> >> I have a one-node ceph installation and when trying to import an >> image using qemu, it works fine for some time and after that the os

[ceph-users] how to set up disks in the same host

2013-12-06 Thread Cristian Falcas
then independent OSDs? Best regards, Cristian Falcas

Re: [ceph-users] how to set up disks in the same host

2013-12-07 Thread Cristian Falcas
ander wrote: > On 12/06/2013 11:00 AM, Cristian Falcas wrote: >> >> Hi all, >> >> What will be the fastest disk setup between these 2: >> - 1 OSD built from 6 disks in raid 10 and one ssd for journal >> - 3 OSDs, each with 2 disks in raid 1 and a common

Re: [ceph-users] how to set up disks in the same host

2013-12-07 Thread Cristian Falcas
t > > - Original Message - >> From: "Cristian Falcas" >> To: "Wido den Hollander" >> Cc: ceph-users@lists.ceph.com >> Sent: Saturday, 7 December 2013 15:44:08 >> Subject: Re: [ceph-users] how to set up disks in the same host >> >>

Re: [ceph-users] [openstack-community] Create VM (8 core and 8GB memory)

2013-12-21 Thread Cristian Falcas
anning to use that machine for anything, I would say that you can have a VM with a maximum of 2 cores and 3 GB of RAM. Best regards, Cristian Falcas On Sat, Dec 21, 2013 at 1:52 PM, Vikas Parashar wrote: > Thanks Loic > > > On Sat, Dec 21, 2013 at 2:40 PM, Loic Dachary wrote: >> >> H

[ceph-users] first installation, ceph never goes to health ok

2014-01-31 Thread Cristian Falcas
o a clean state. Is this expected with one host only? Best regards, Cristian Falcas

Re: [ceph-users] first installation, ceph never goes to health ok

2014-01-31 Thread Cristian Falcas
ceph.conf file: > > osd crush chooseleaf type = 0 > > Then, follow the rest of the procedure. > > > On Fri, Jan 31, 2014 at 2:41 PM, Cristian Falcas > wrote: >> >> Hi list, >> >> I'm trying to play with ceph, but I can't get the machine to
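For context, a minimal sketch of the relevant ceph.conf lines for a single-host test cluster; only the chooseleaf line comes from the thread, the pool-size values are illustrative assumptions:

    [global]
    # replicate across OSDs instead of hosts, so a one-host cluster can go active+clean
    osd crush chooseleaf type = 0
    osd pool default size = 2
    osd pool default min size = 1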

Re: [ceph-users] client: centos6.4 no rbd.ko

2014-05-14 Thread Cristian Falcas
Why don't you want to update to one of the elrepo kernels? If you already went to the OpenStack kernel, you are using an unsupported kernel. I don't think anybody from Red Hat has bothered to backport the ceph client code to a 2.6.32 kernel. Cristian Falcas On Wed, May 14, 2014 a
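For completeness, installing a newer mainline kernel from elrepo is usually a one-liner once the repository is set up; a hedged sketch (package names may differ between elrepo branches):

    # assumes the elrepo-release package is already installed
    yum --enablerepo=elrepo-kernel install kernel-ml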