On Sat, 8 Feb 2014, Christian Balzer wrote:
> On Fri, 7 Feb 2014 19:22:54 -0800 (PST) Sage Weil wrote:
>
> > On Sat, 8 Feb 2014, Christian Balzer wrote:
> > > On Fri, 07 Feb 2014 18:46:31 +0100 Christian Kauhaus wrote:
> > >
> > > > On 07.02.2014 14:42, Mark Nelson wrote:
> > > > > Ok, so the reason I was wondering about the use case is if you were
On 02/07/2014 03:11 AM, Alexandre DERUMIER wrote:
>>> This page reads "If you set rbd_cache=true, you must set cache=writeback or
>>> risk data loss." ...
> if you enable writeback, the guest sends flush requests. If the host crashes,
> you'll lose data but it won't corrupt the guest filesystem.
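To make that concrete, the combination the documentation recommends would look roughly like this on the qemu command line (the pool and image names here are only examples):

qemu -m 1024 -drive format=raw,file=rbd:rbd/vm-disk1:rbd_cache=true,cache=writeback,if=virtio

The point is that qemu only passes the guest's flush requests down to librbd in writeback mode, so enabling rbd_cache without cache=writeback leaves a write-back cache whose contents are never flushed on guest barriers.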
On Fri, 7 Feb 2014 19:22:54 -0800 (PST) Sage Weil wrote:
> On Sat, 8 Feb 2014, Christian Balzer wrote:
> > On Fri, 07 Feb 2014 18:46:31 +0100 Christian Kauhaus wrote:
> >
> > > On 07.02.2014 14:42, Mark Nelson wrote:
> > > > Ok, so the reason I was wondering about the use case is if you were
>
On Sat, 8 Feb 2014, Christian Balzer wrote:
> On Fri, 07 Feb 2014 18:46:31 +0100 Christian Kauhaus wrote:
>
> > On 07.02.2014 14:42, Mark Nelson wrote:
> > > Ok, so the reason I was wondering about the use case is if you were
> > > doing RBD specifically. Fragmentation has been something we've
On Fri, 07 Feb 2014 18:46:31 +0100 Christian Kauhaus wrote:
> On 07.02.2014 14:42, Mark Nelson wrote:
> > Ok, so the reason I was wondering about the use case is if you were
> > doing RBD specifically. Fragmentation has been something we've
> > periodically kind of battled with but still see in some cases.
Hi All,
There is a new release of ceph-deploy, the easy deployment tool for Ceph.
Although this is primarily a bug-fix release, the library that ceph-deploy
uses to connect to remote hosts (execnet) was updated to its latest stable
release.
A full list of changes can be found in the changelog:
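If ceph-deploy was installed with pip, upgrading should be as simple as the following (a sketch, assuming a pip-based install):

pip install --upgrade ceph-deploy

Package-based installs should pick the new version up through a normal apt-get or yum update of the ceph-deploy package.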
I have confirmed this in production, with the default max-entries.
I have a bucket that I'm no longer writing to. Radosgw-agent had
stopped replicating this bucket. radosgw-admin bucket stats shows that
the slave is missing ~600k objects.
I uploaded a 1 byte file to the bucket. On the nex
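For anyone reproducing this, the counts I'm comparing come from running something like this against each zone (the bucket name is just an example) and diffing the num_objects it reports on the master and the slave:

radosgw-admin bucket stats --bucket=my-bucket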
On 07.02.2014 14:42, Mark Nelson wrote:
> Ok, so the reason I was wondering about the use case is if you were doing RBD
> specifically. Fragmentation has been something we've periodically kind of
> battled with but still see in some cases. BTRFS especially can get pretty
> spectacularly fragmented.
Thanks!
--
Regards
Dominik
2014-02-06 14:18 GMT+01:00 Dan van der Ster :
> Hi,
> Our three radosgw's are OpenStack VMs. Seems to work for our (limited)
> testing, and I don't see a reason why it shouldn't work.
> Cheers, Dan
>
> -- Dan van der Ster || Data & Storage Services || CERN IT Department
On 02/06/2014 01:41 PM, Christian Kauhaus wrote:
On 06.02.2014 16:24, Mark Nelson wrote:
Hi Christian, can you tell me a little bit about how you are using Ceph and
what kind of IO you are doing?
Sure. We're using it almost exclusively for serving VM images that are
accessed from Qemu's built-in RBD driver.
Hi,
Does anyone know what the issue is with this?
Thanks
*Graeme*
On 06/02/14 13:21, Graeme Lambert wrote:
Hi all,
Can anyone advise what the problem below is with rbd-fuse? From
http://mail.blameitonlove.com/lists/ceph-devel/msg14723.html it looks
like this has happened before but shoul
> I'm sorry, but I did not understand you :)
Sorry (-: My finger touched the RETURN key too fast...
Try setting a bigger value for the read-ahead cache, maybe 256 MB?
echo "262144">/sys/block/vda/queue/read_ahead_kb
Try also "fio" performance tool - it will show more detailed information.
I'm sorry, but I did not understand you :)
2014-02-07 Daniel Schwager :
> set up a bigger value for read_ahead_kb? I tested with 256 MB read ahead
> cache (
>
> *From:* ceph-users-boun...@lists.ceph.com [mailto:
> ceph-users-boun...@lists.ceph.com] *On Behalf Of* Ирек Фасихов
> *Sen
set up a bigger value for read_ahead_kb? I tested with 256 MB read ahead cache (
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ирек Фасихов
Sent: Friday, February 07, 2014 10:55 AM
To: Konrad Gutkowski
Cc: ceph-users@lists.ceph.com
Subject: Re:
echo "noop">/sys/block/vda/queue/scheduler
echo "1000" >/sys/block/vda/queue/nr_requests
echo "8192">/sys/block/vda/queue/read_ahead_kb
[root@nfs tmp]# dd if=test of=/dev/null
39062500+0 records in
39062500+0 records out
20000000000 bytes (20 GB) copied, 244.024 s, 82.0 MB/s
Changing these parameters
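For comparison, it may also be worth repeating the dd with a larger block size, since the default 512-byte blocks add per-request overhead (4M here is just an example):

dd if=test of=/dev/null bs=4M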
Hi,
I have some questions about the coming cache pool feature.
Is it only a cache? (Is the data kept on both the cache pool and the main pool?)
Or is the data migrated from the main pool to the cache pool?
Do we need to enable replication on the cache pool?
What happens if we lose OSDs from the cache pool?
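As far as I can tell from the current development branch, the setup is shaped roughly like this (pool names are placeholders and the interface may still change before release):

# create a small fast pool to act as the cache tier
ceph osd pool create hot-pool 128
# put it in front of the existing data pool and enable writeback caching
ceph osd tier add cold-pool hot-pool
ceph osd tier cache-mode hot-pool writeback
# send client I/O for cold-pool through the cache tier
ceph osd tier set-overlay cold-pool hot-pool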
On 06.02.2014 16:24, Mark Nelson wrote:
> Hi Christian, can you tell me a little bit about how you are using Ceph and
> what kind of IO you are doing?
Just forgot to mention: we're running Ceph 0.72.2 on Linux 3.10 (both storage
servers and inside VMs) and Qemu-KVM 1.5.3.
Regards
Christian
--
>>This page reads "If you set rbd_cache=true, you must set cache=writeback or
>>risk data loss." ...
Because if you don't set writeback in qemu, qemu doesn't send flush requests.
And if you enable the rbd cache and your host crashes, you'll lose data and
possibly corrupt the filesystem.
if yo
Hi,
On 07.02.2014 at 08:14, Ирек Фасихов wrote:
[...]
Why might the sequential read speed be so low? Any ideas on this issue?
IIRC you need to set your readahead for the device higher (inside the VM) to
compensate for network RTT.
blockdev --setra x /dev/vda
Thanks.
--
Best regards, Фасихов Ирек Нургаязов