Amazing improvements to performance in the preview now.. I wonder
will there be a filestore --> bluestore upgrade path...
On Wed, Aug 31, 2016 at 6:32 AM, Alexandre DERUMIER wrote:
> Hi,
>
> Here are the slides of the Ceph BlueStore presentation
>
> http://events.linuxfoundation.org/sites/events/f
From: w...@globe.de [mailto:w...@globe.de]
Sent: 30 August 2016 18:40
To: n...@fisk.me.uk; 'Alex Gorbachev'
Cc: 'Horace Ng'
Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance
Hi Nick,
here are my answers and questions...
On 30.08.16 at 19:05, Nick Fisk wrote:
> On 31 August 2016 at 9:51, "Brian ::" wrote:
>
>
> Amazing improvements to performance in the preview now.. I wonder
Indeed, great work!
> will there be a filestore --> bluestore upgrade path...
>
Yes and no. Since the OSD API doesn't change you can 'simply' (see the sketch below):
1. Shut down OSD
2. Wi
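A minimal per-OSD sketch of what that replacement could look like, assuming a Jewel-era
cluster, a ceph-disk with BlueStore support, and placeholder names osd.12 / /dev/sdc
(not the author's exact steps):

systemctl stop ceph-osd@12
ceph osd out 12
# wait until backfill has drained the data off osd.12, then remove it
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm 12
# re-provision the same device with a BlueStore backend
# (BlueStore is still experimental in Jewel and has to be enabled in ceph.conf first)
ceph-disk prepare --bluestore /dev/sdc
ceph-disk activate /dev/sdc1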
From: w...@globe.de [mailto:w...@globe.de]
Sent: 31 August 2016 08:56
To: n...@fisk.me.uk; 'Alex Gorbachev' ; 'Horace Ng'
Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance
Nick,
what do you think about InfiniBand?
I have read that with InfiniBand the latency is around 1.2 µs
Hi all,
I just want to confirm that the patch works in our environment.
Thanks!
On 08/30/2016 02:04 PM, Dennis Kramer (DBS) wrote:
> Awesome Goncalo, that is very helpful.
>
> Cheers.
>
> On 08/30/2016 01:21 PM, Goncalo Borges wrote:
>> Hi Dennis.
>>
>> That is the first issue we saw and has no
Hi kefu,
A quick question about the incremental osdmap.
I used ceph-objectstore-tool with the op get-inc-osdmap to get a specific
incremental osdmap, but how do I view its content? It seems
osdmaptool --print incremental-osdmap can't output the content.
Thanks
Zhongyan
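One approach that may work here (untested against your exact version; the data path
and epoch below are placeholders): extract the incremental map from a stopped OSD with
ceph-objectstore-tool and decode it with ceph-dencoder instead of osdmaptool:

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op get-inc-osdmap --epoch 1234 --file inc.1234
ceph-dencoder type OSDMap::Incremental import inc.1234 decode dump_json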
It seems using the 'sync' mount option on the server uploader01 is also a
valid workaround.
Is it a problem that the metadata is visible to other CephFS clients
before the file contents have been flushed by the client doing the write?
I think having an invalid page cache of zeros is a problem b
Hi,
I have a CephFS filesystem which is re-exported through NFS Ganesha (v2.3.0)
with Ceph 10.2.2
The export works fine, but when calling a chgrp on a file the UID is set to
root.
Example list of commands:
$ chown www-data:www-data myfile
That works, file is now owned by www-data/www-data
$
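To illustrate the failure mode being reported, the follow-up would presumably look
something like this (hypothetical commands and output, not taken from the original report):

$ chgrp www-data myfile
$ ls -l myfile
-rw-r--r-- 1 root www-data 0 Aug 31 12:00 myfile    <- group changed, but UID unexpectedly reset to root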
On Wed, Aug 31, 2016 at 11:23 AM, Wido den Hollander wrote:
> Hi,
>
> I have a CephFS filesystem which is re-exported through NFS Ganesha (v2.3.0)
> with Ceph 10.2.2
>
> The export works fine, but when calling a chgrp on a file the UID is set to
> root.
>
> Example list of commands:
>
> $ chown
> On 31 August 2016 at 12:42, John Spray wrote:
>
>
> On Wed, Aug 31, 2016 at 11:23 AM, Wido den Hollander wrote:
> > Hi,
> >
> > I have a CephFS filesystem which is re-exported through NFS Ganesha
> > (v2.3.0) with Ceph 10.2.2
> >
> > The export works fine, but when calling a chgrp on a f
I believe this is a Ganesha bug, as discussed on the Ganesha list.
Daniel
On 08/31/2016 06:55 AM, Wido den Hollander wrote:
On 31 August 2016 at 12:42, John Spray wrote:
On Wed, Aug 31, 2016 at 11:23 AM, Wido den Hollander wrote:
Hi,
I have a CephFS filesystem which is re-exported th
On Wed, Aug 31, 2016 at 12:49 AM, Sean Redmond wrote:
> Hi,
>
> I have been able to pick through the process a little further and replicate
> it via the command line. The flow looks like this:
>
> 1) The user uploads an image to the web server 'uploader01'; it gets
> written to a path such
>>> Loris Cuoghi wrote on Tuesday, 30 August 2016 at 16:34:
> Hello,
>
Hi Loris,
thank you for your answer.
> On 30/08/2016 at 14:08, Steffen Weißgerber wrote:
>> Hello,
>>
>> after correcting the configuration for different qemu VMs with rbd disks
>> (we removed the cache=writethrou
>> Meanwhile I tried to update the viostor driver within the VM (a W2k8),
>> but that results in a bluescreen.
>> When booting via the recovery console and loading the new driver from a
>> current qemu driver iso, the disks are all in writeback mode.
>> So maybe the cache mode depends on the io driver with
Hi Brad,
After exploring Ceph, I found that sometimes when I run the *make* command
without removing the build folder, it ends with this error:
/home/agung/project/samc/ceph/build/bin/init-ceph: ceph conf ./ceph.conf
not found; system is not configured.
rm -f core*
ip 127.0.0.1
port 6789
NOTE: hostna
On 08/18/2016 12:42 AM, Brad Hubbard wrote:
> On Thu, Aug 18, 2016 at 1:12 AM, agung Laksono
> wrote:
>>
>> Is there a way to make the compiling process be faster? something
>> like only compile a particular code that I change.
>
> Sure, just use the same build directory and run "make" again af
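In case it helps, an incremental-build workflow on a cmake-based tree looks roughly
like this (target names are the usual ones, but double-check with 'make help'):

cd ceph/build
make -j$(nproc) ceph-osd     # rebuild only the ceph-osd binary and its dependencies
make -j$(nproc) vstart       # or just the subset needed for a vstart.sh dev cluster
../src/vstart.sh -n -d       # then spin up a fresh local cluster from the build directory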
>>> Alexandre DERUMIER wrote on Wednesday, 31 August 2016 at 16:10:
>>> Meanwhile I tried to update the viostor driver within the VM (a W2k8),
>>> but that results in a bluescreen.
>
>>> When booting via the recovery console and loading the new driver from a
>>> current qemu driver iso the disk
Hi,
I'm looking for the pros and cons of mounting /var/lib/mysql on CephFS or RBD
to get the best performance. MySQL stores its data as files in most
configurations, but the I/O is effectively block access because the files stay
open for as long as MySQL is running. That gives us both options for storing
the data files. For RBD
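For reference, the two layouts being compared would look roughly like this (pool,
image and monitor names are made-up placeholders):

# RBD: a dedicated block device with a local filesystem on top
rbd create rbd/mysql-data --size 102400
rbd map rbd/mysql-data            # e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /var/lib/mysql

# CephFS: the directory lives directly on the shared filesystem
mount -t ceph mon1:6789:/mysql /var/lib/mysql -o name=admin,secretfile=/etc/ceph/admin.secret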
I have updated the tracker with some log extracts as I seem to be hitting
this or a very similar issue.
I was unsure of the correct syntax for the ceph-objectstore-tool command to
try and extract that information.
On Wed, Aug 31, 2016 at 5:56 AM, Brad Hubbard wrote:
>
> On Wed, Aug 31, 2016 at
In my testing, using RBD-NBD is faster than using RBD or CephFS.
For a MySQL/sysbench OLTP test with 25 threads, over a 40G network
between the client and Ceph, here are some of my results:
Using ceph-rbd: transactions per sec: 8620
Using ceph rbd-nbd: transactions per sec: 9359
Using c
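For anyone wanting to reproduce numbers of this kind, the sysbench invocation would be
roughly of this shape (legacy sysbench OLTP syntax; host, credentials and table size are
placeholders):

sysbench --test=oltp --mysql-host=db01 --mysql-db=sbtest --mysql-user=sbtest --mysql-password=secret --oltp-table-size=1000000 prepare
sysbench --test=oltp --mysql-host=db01 --mysql-db=sbtest --mysql-user=sbtest --mysql-password=secret --oltp-table-size=1000000 --num-threads=25 --max-time=300 --max-requests=0 run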
Hi,
Thank you for your opinion. I don't know whether RBD-NBD is supported by
OpenStack, since my environment is OpenStack. Which file system did you use in
your tests for RBD and RBD-NBD?
Best regards,
On Wed, Aug 31, 2016 at 10:11 PM, RDS wrote:
> In my testing, using RBD-NBD is faster than using RBD
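For completeness, mapping an image through rbd-nbd instead of the kernel client only
changes the map step; rbd-nbd goes through librbd in userspace, so it also picks up the
librbd cache settings (image name is a placeholder):

rbd-nbd map rbd/mysql-data        # returns e.g. /dev/nbd0
mkfs.xfs /dev/nbd0
mount /dev/nbd0 /var/lib/mysql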
I am not sure how to tell.
Server1 and Server2 mount the Ceph file system using kernel client 4.7.2,
and I can replicate the problem using '/usr/bin/sum' to read the file or an
HTTP GET request via a web server (apache).
On Wed, Aug 31, 2016 at 2:38 PM, Yan, Zheng wrote:
> On Wed, Aug 31, 2016 at
> On 31 August 2016 at 15:28, Daniel Gryniewicz wrote:
>
>
> I believe this is a Ganesha bug, as discussed on the Ganesha list.
>
Ah, thanks. Do you maybe have a link or subject so I can chime in?
Wido
> Daniel
>
> On 08/31/2016 06:55 AM, Wido den Hollander wrote:
> >
> >> Op 31 augustu
On 08/31/2016 02:15 PM, Wido den Hollander wrote:
On 31 August 2016 at 15:28, Daniel Gryniewicz wrote:
I believe this is a Ganesha bug, as discussed on the Ganesha list.
Ah, thanks. Do you maybe have a link or subject so I can chime in?
Wido
https://sourceforge.net/p/nfs-ganesha/mai
On Tue, Aug 30, 2016 at 2:17 AM, Andrei Mikhailovsky wrote:
> Hello
>
> I've got a small cluster of 3 osd servers and 30 osds between them running
> Jewel 10.2.2 on Ubuntu 16.04 LTS with stock kernel version 4.4.0-34-generic.
>
> I am experiencing rather frequent osd crashes, which tend to happen
xfs
> On Aug 31, 2016, at 12:14 PM, Lazuardi Nasution
> wrote:
>
> Hi,
>
> Thank you for your opinion. I don't know if RBD-NBD is supported by OpenStack
> since my environment is OpenStack. What file system do you use in your test
> for RBD and RBD-NBD?
>
> Best regards,
>
> On Wed, Aug 31,
After a power failure left our jewel cluster crippled, I have hit a sticking
point in attempted recovery.
Out of 8 OSDs, we likely lost 5-6; we are trying to salvage what we can.
In addition to rados pools, we were also using CephFS, and the cephfs.metadata
and cephfs.data pools likely lost plenty of
> On 31 August 2016 at 22:14, Gregory Farnum wrote:
>
>
> On Tue, Aug 30, 2016 at 2:17 AM, Andrei Mikhailovsky
> wrote:
> > Hello
> >
> > I've got a small cluster of 3 osd servers and 30 osds between them running
> > Jewel 10.2.2 on Ubuntu 16.04 LTS with stock kernel version 4.4.0-34-gener
> On 31 August 2016 at 22:56, Reed Dier wrote:
>
>
> After a power failure left our jewel cluster crippled, I have hit a sticking
> point in attempted recovery.
>
> Out of 8 OSDs, we likely lost 5-6; we are trying to salvage what we can.
>
That's probably too much. What do you mean by 'lost'? Is XFS
Multiple XFS corruptions, multiple leveldb issues. It looked to be the result
of write cache settings, which have been adjusted now.
You'll see below that there are tons of PGs in bad states; it was slowly
but surely bringing the number of bad PGs down, but it seems to have hit a
brick wall with t
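If it is useful to anyone in a similar state: when the only surviving copy of a PG sits
on an OSD that will no longer start, it can sometimes be salvaged with
ceph-objectstore-tool's export/import ops (paths and pgid below are placeholders; both
OSDs must be stopped, and FileStore OSDs may also need --journal-path):

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 --op export --pgid 1.2f --file /root/pg.1.2f.export
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 --op import --file /root/pg.1.2f.export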
On Thu, Sep 1, 2016 at 1:08 AM, Sean Redmond wrote:
> I have updated the tracker with some log extracts as I seem to be hitting
> this or a very similar issue.
I've updated the tracker asking for "rados list-inconsistent-obj" from
the relevant pgs.
That should give us a better idea of the nature
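For reference, the commands in question are roughly (pool name and pgid are placeholders):

rados list-inconsistent-pg <pool>                        # lists the inconsistent PGs in a pool
rados list-inconsistent-obj <pgid> --format=json-pretty  # per-object detail for one of them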
Hi,
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/ states
>
> Note A larger hit_set_count results in more RAM consumed by the ceph-osd
> process.
By how much? What order of magnitude: KB? MB? GB?
After some spelunking I found osd_hit_set_max_size; is it fair to make
the following a
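For a rough order of magnitude (my own back-of-envelope, assuming the default
bloom-filter hit sets, a false-positive rate around 0.05, and a working figure of
~100,000 entries per set, which is an assumption rather than a quoted default):

bits per hit set ~= n * ln(1/p) / (ln 2)^2 ~= 100,000 * 3.0 / 0.48 ~= 625,000 bits ~= 76 KB
per PG with hit_set_count = 32: 32 * 76 KB ~= 2.4 MB

So think tens of kilobytes per hit set and low megabytes per PG, multiplied by the
number of cache-pool PGs the OSD hosts.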
Hi Kenneth, All
Just an update for completeness on this topic.
We have been hit again by this issue.
I have been discussing it with Brad (RH staff) in another ML thread, and
I have opened a tracker issue: http://tracker.ceph.com/issues/17177
I believe this is a bug since there are other peop
Hi Cephers!
The reweight value of an OSD is always 1 when we create and activate an OSD
daemon.
I use the ceph-deploy tool whenever I deploy a Ceph cluster.
Is there a default reweight value in the ceph-deploy tool?
Can we adjust the reweight value when we activate an OSD daemon?
ID WEIGHT TYPE NAME
Hey,
it is normal for the reweight value to be 1. You can decrease it (with "ceph
osd reweight OSDNUM newweight") or use "ceph osd reweight-by-utilization" to
move some PGs off that OSD.
The thing that usually differs, and depends on disk size, is the "weight"
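To make the distinction concrete (OSD id and values are just examples):

ceph osd crush reweight osd.3 1.81818    # CRUSH "weight": reflects disk size, conventionally in TiB
ceph osd reweight 3 0.85                 # "reweight": a 0..1 override to push some PGs off osd.3
ceph osd reweight-by-utilization 120     # or let Ceph pick OSDs above 120% of the mean utilization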
On 16-08-31 22:06, 한승진 wrote:
Hi Cephers!
The