node? We don't care
very much about the data from the last minutes before the crash.
Best regards,
Cristian Falcas
Hello,
I'm using btrfs for OSDs and want to know if it still helps to have the
journal on a faster drive. From what I've read, I'm under the impression
that with btrfs the OSD journal doesn't do much work anymore.
Best regards,
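For reference, pointing an OSD journal at a separate, faster device is normally done in ceph.conf along these lines; the device path and size below are placeholders, not something from this thread:
[osd.0]
osd journal = /dev/sdb1    # example: a partition on the faster drive
osd journal size = 0       # with a block device, 0 means use the whole partition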
Just to let you know that the qemu packages from CentOS don't have rbd compiled
in. You will need to compile your own packages using the -ev version from
Red Hat for this.
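In case it helps anyone checking an existing build, two quick ways (untested here) to see whether rbd support is compiled in:
qemu-img --help | grep rbd           # rbd should show up in the supported formats list
ldd $(which qemu-img) | grep librbd  # or check that the binary links against librbd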
On Thu, Jul 10, 2014 at 4:58 PM, Erik Logtenberg wrote:
> Hi,
>
> RHEL7 repository works just as well. CentOS 7 is effectively a
and reading all the horror stories here
and on the btrfs mailing list.
Is the snapshotting performed by ceph or by the fs? Can we switch to
xfs and have the same capabilities: instant snapshot + instant boot
from snapshot?
Best regards,
Cristian Falcas
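For what it's worth, the snapshot and clone operations OpenStack relies on happen at the RBD layer rather than in the OSD's backing filesystem, so the same capability is available on xfs-backed OSDs. A rough sketch, with placeholder pool and image names:
rbd snap create volumes/base-image@snap1
rbd snap protect volumes/base-image@snap1
rbd clone volumes/base-image@snap1 volumes/instance-disk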
eed.
On Wed, Feb 4, 2015 at 11:22 PM, Sage Weil wrote:
> On Wed, 4 Feb 2015, Cristian Falcas wrote:
>> Hi,
>>
>> We have an openstack installation that uses ceph as the storage backend.
>>
>> We use mainly snapshot and boot from snapshot from an original
>>
We want to use this script as a start/stop service (it hasn't been
tested yet):
#!/bin/bash
# chkconfig: - 50 90
# description: make a journal for osd.0 in ram
start () {
    # create the journal in tmpfs if it is not already there
    test -f /dev/shm/osd.0.journal || ceph-osd -i 0 --mkjournal
}
stop () {
    # flush the in-RAM journal back to the store before shutdown
    service ceph stop osd.0 && ceph-osd -i 0 --flush-journal
}
case "$1" in
    start) start ;;
    stop)  stop ;;
esac
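Assuming the script gets saved as /etc/init.d/osd0-ram-journal (the name is just an example), it would be hooked in the usual SysV way:
chmod +x /etc/init.d/osd0-ram-journal
chkconfig --add osd0-ram-journal
service osd0-ram-journal start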
without losing everything?
Best regards,
Cristian Falcas
.94.2-0.el7.centos.x86_64
python-cephfs-0.94.2-0.el7.centos.x86_64
I don't know if that matters, but the physical machine is an all-in-one
ceph+openstack installation.
Thank you,
Cristian Falcas
clean, 384 active+clean; 502 GB data, 183 GB used, 2279 GB /
2469 GB avail; 0 B/s rd, 24171 B/s wr, 4
On Sun, Jun 21, 2015 at 6:19 PM, Cristian Falcas
wrote:
> Hello,
>
> When doing a fio test on a vm, after some time the osd goes down with this
> error:
>
> osd.1 marke
ceph osd pool set ssd_cache cache_target_dirty_ratio .4
ceph osd pool set ssd_cache cache_target_full_ratio .8
Best regards,
Cristian Falcas
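For context, those ratios only take effect once the pool is attached as a cache tier; a minimal sketch of that setup, with "base_pool" as a placeholder name for the backing pool:
ceph osd tier add base_pool ssd_cache
ceph osd tier cache-mode ssd_cache writeback
ceph osd tier set-overlay base_pool ssd_cache
ceph osd pool set ssd_cache hit_set_type bloom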
It's from here: https://ceph.com/docs/v0.79/dev/cache-pool/#cache-mode
Both commands are on that page.
On Tue, Oct 28, 2014 at 6:03 PM, Gregory Farnum wrote:
> On Tue, Oct 28, 2014 at 3:24 AM, Cristian Falcas
> wrote:
>> Hello,
>>
>> In the documentation abou
Hello,
Will there be any benefit in making the journal the size of an entire ssd disk?
I was also thinking of increasing "journal max write entries" and
"journal queue max ops".
But will it matter, or will it have the same effect as a 4gb journal
on the same ssd?
Thank you,
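For reference, a sketch of how those options would sit in ceph.conf; the numbers are placeholders, not recommendations:
[osd]
osd journal size = 10240            # in MB
journal max write entries = 1000    # example value only
journal queue max ops = 3000        # example value only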
journal = /dev/shm/osd.1.journal
journal dio = false
Test performed with dd:
sync
dd bs=4M count=512 if=/home/user/backup_2014_10_27.raw
of=/var/lib/ceph/osd/ceph-1/backup_2014_10_27.raw conv=fdatasync
512+0 records in
512+0 records out
2147483648 bytes (2.1 GB) copied, 16.3971 s, 131 MB/s
Thank you,
13 PM, Cristian Falcas
wrote:
> Hello,
>
> I have a one-node ceph installation and when trying to import an
> image using qemu, it works fine for some time and after that the osd
> process starts using ~100% of cpu and the number of op/s increases and
> the writes decrease dramatically.
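For reference, the kind of import described above would typically be a single qemu-img call along these lines (pool and file names are placeholders):
qemu-img convert -O raw /path/to/image.qcow2 rbd:volumes/test-image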
,
On Wed, Nov 5, 2014 at 7:51 PM, Gregory Farnum wrote:
> On Thu, Oct 30, 2014 at 8:13 AM, Cristian Falcas
> wrote:
>> Hello,
>>
>> I have a one-node ceph installation and when trying to import an
>> image using qemu, it works fine for some time and after that the os
then independent OSDs?
Best regards,
Cristian Falcas
ander wrote:
> On 12/06/2013 11:00 AM, Cristian Falcas wrote:
>>
>> Hi all,
>>
>> What would be the fastest disk setup between these two:
>> - 1 OSD built from 6 disks in raid 10 and one ssd for the journal
>> - 3 OSDs, each with 2 disks in raid 1 and a common
t
>
> - Original Message -
>> From: "Cristian Falcas"
>> To: "Wido den Hollander"
>> Cc: ceph-users@lists.ceph.com
>> Sent: Samstag, 7. Dezember 2013 15:44:08
>> Subject: Re: [ceph-users] how to set up disks in the same host
>>
>>
anning to use that machine for anything, I would say
that you can have a VM with at most 2 cores and 3GB of ram.
Best regards,
Cristian Falcas
On Sat, Dec 21, 2013 at 1:52 PM, Vikas Parashar wrote:
> Thanks Loic
>
>
> On Sat, Dec 21, 2013 at 2:40 PM, Loic Dachary wrote:
>>
>> H
o a clean state.
Is this expected with one host only?
Best regards,
Cristian Falcas
ceph.conf file:
>
> osd crush chooseleaf type = 0
>
> Then, follow the rest of the procedure.
>
>
> On Fri, Jan 31, 2014 at 2:41 PM, Cristian Falcas
> wrote:
>>
>> Hi list,
>>
>> I'm trying to play with ceph, but I can't get the machine to
Why don't you want to update to one of the elrepo kernels? If you
already went to the openstack kernel, you are using an unsupported
kernel.
I don't think anybody from redhat bothered to backport the ceph client
code to a 2.6.32 kernel.
Cristian Falcas
On Wed, May 14, 2014 a