Hi All,
I'm using tgt (1.0.55) + librbd (Hammer 0.94.5) for an iSCSI service. Recently I
encountered a problem: tgtd crashes even when it is under no load. The exception
information is as follows: "kernel: tgtd[52067]: segfault at 0 ip
7f424cb0d76a sp 7f4228fe0b90 error 4 in
librbd.so.1.0.0[7f424c9b900
Hi all
Even when using ceph fuse, quotas are only enabled once you mount with the
--client-quota option.
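For example, something along these lines (the monitor address and mount point are
placeholders, adjust to your setup; the option can also go into the [client]
section of ceph.conf as "client quota = true"):
ceph-fuse -m mon1:6789 --client-quota=true /mnt/cephfs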
Cheers
Goncalo
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of gjprabu
[gjpr...@zohocorp.com]
Sent: 16 December 2016 18:18
To: gjprab
Hey cephers,
Just wanted to put this out there in case there were any
package-maintenance wizards itching to contribute. The CentOS storage
SIG has worked hard to make sure that Ceph builds make it through
their own build system and have published two releases based on Jewel
and Hammer.
The curre
Given that you are all SSD, I would do exactly what Wido said:
gracefully remove the OSD and gracefully bring up the OSD on the new
SSD.
Let Ceph do what it's designed to do. The rsync idea looks great on
paper; not sure what issues you will run into in practice.
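For reference, a rough sketch of the graceful path (osd id 123 and /dev/sdX are
placeholders; this is just the standard add/remove procedure, nothing specific to
your cluster):
ceph osd out 123              # let data drain off the old SSD
# wait for the cluster to return to HEALTH_OK, then stop the daemon:
stop ceph-osd id=123          # upstart on trusty; or: systemctl stop ceph-osd@123
ceph osd crush remove osd.123
ceph auth del osd.123
ceph osd rm 123
# swap the SSD, then bring the replacement in, e.g.:
ceph-disk prepare /dev/sdX && ceph-disk activate /dev/sdX1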
On Fri, Dec 16, 2016 at 12:38
Thanks Burkhard, JiaJia..
I was able to resolve the issue by using
"--typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106" for the journal and
"--typecode=2:4fbd7e29-9d25-41b8-afd0-062c0ceff05d" for the data
partition while creating the partitions with sgdisk!
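For anyone hitting the same thing, the calls looked roughly like this (device and
partition numbers follow the earlier /dev/sdb example and are placeholders):
sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb   # mark partition 1 as a ceph journal
sgdisk --typecode=2:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb   # mark partition 2 as ceph data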
Thanks
Sandeep
On Fri, Dec 16, 2016 at 3:01
Hi Daniel,
If you deploy your cluster by the manual method, you can specify the OSD number as
you wish.
Here are the steps of manual deployment:
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-osds
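A very rough sketch of the first relevant steps (osd id 341 and the uuid are
placeholders; see the link above for the full procedure):
uuid=$(uuidgen)
ceph osd create $uuid 341               # request a specific id (the docs note this is generally not recommended)
mkdir -p /var/lib/ceph/osd/ceph-341     # mount the OSD's filesystem here first if it lives on its own disk
ceph-osd -i 341 --mkfs --mkkey --osd-uuid $uuid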
Sincerely,
Craig Chi
On 2016-12-16 21:51, Daniel Corley wrote:
>
> Is there
Is there a way to specify an OSD number on creation? We run into
situations where, if the OSDs on a node are not created sequentially
following the sda, sdb naming convention, the numbers are less than
easy to correlate to the hardware. In the example shown below we know
OSD #341
Hi Matthew,
On Fri, 16 Dec 2016 12:30:06 +, Matthew Vernon wrote:
> Hello,
> On 15/12/16 10:25, David Disseldorp wrote:
>
> > Are you using the Linux kernel CephFS client (mount.ceph), or the
> > userspace ceph-fuse back end? Quota enforcement is performed by the
> > client, and is currently
2016-12-16 10:19 GMT+01:00 Wido den Hollander :
>
> > Op 16 december 2016 om 9:49 schreef Alessandro Brega <
> alessandro.bre...@gmail.com>:
> >
> >
> > 2016-12-16 9:33 GMT+01:00 Wido den Hollander :
> >
> > >
> > > > Op 16 december 2016 om 9:26 schreef Alessandro Brega <
> > > alessandro.bre...@g
On Fri, 16 Dec 2016 12:48:39 +0530, gjprabu wrote:
> Now we have mounted the client using ceph-fuse and it still allows me to put
> data above the limit (100MB). Below are the quota details.
>
>
>
> getfattr -n ceph.quota.max_bytes test
>
> # file: test
>
> ceph.quota.max_bytes="1"
>
>
>
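For reference, ceph.quota.max_bytes takes a byte count, so a 100MB limit would be
set along these lines (directory name as in the quote above):
setfattr -n ceph.quota.max_bytes -v 104857600 test    # 100 MB
getfattr -n ceph.quota.max_bytes test                 # verify the value that is actually set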
Hello,
On 15/12/16 10:25, David Disseldorp wrote:
> Are you using the Linux kernel CephFS client (mount.ceph), or the
> userspace ceph-fuse back end? Quota enforcement is performed by the
> client, and is currently only supported by ceph-fuse.
Is server enforcement of quotas planned?
Regards,
M
Hello JiaJia,
I tried with the below directives
enable experimental unrecoverable data corrupting features =
"bluestore,rocksdb"
&
enable experimental unrecoverable data corrupting features = *
The warning below is still showing in the ceph -s output
~~~
WARNING: the following dangerous
Hi,
1 - rados or rbd bug ? We're using rados bench.
2 - This is not bandwidth related. If it were, it would happen almost
instantly and not 15 minutes after I start writing to the pool.
Once it has happened on the pool, I can then reproduce it with fewer
--concurrent-ios, like 12 or even 1.
T
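For context, the write load is generated roughly like this (pool name and duration
are placeholders):
rados bench -p testpool 900 write --concurrent-ios=12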
Hi,
The manual method is good if you have a small number of OSDs, but with more
than 200 OSDs it becomes a very time-consuming task to create them that
way.
Also, I used ceph-ansible to set up my cluster with 2 OSDs per SSD and
my cluster was up and running, but I encountered the auto mount p
In your scenario, don't use ceph-disk;
follow http://docs.ceph.com/docs/jewel/rados/operations/add-or-rm-osds/
-- Original --
From: "Burkhard Linke";
Date: Fri, Dec 16, 2016 05:09 PM
To: "CEPH list";
Subject: Re: [ceph-users] 2 OSD's per drive , unable to st
Hi Burkhard,
How can I achieve that, so that all the OSDs will auto-start at boot time?
Regards,
Sandeep
On Fri, Dec 16, 2016 at 2:39 PM, Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:
> Hi,
>
> On 12/16/2016 09:22 AM, sandeep.cool...@gmail.com wrote:
>
> Hi,
>
> I was tryi
Hi,
On 12/16/2016 09:22 AM, sandeep.cool...@gmail.com wrote:
Hi,
I was trying the scenario where I have partitioned my drive (/dev/sdb)
into 4 (sdb1, sdb2, sdb3, sdb4) using the sgdisk utility:
# sgdisk -z /dev/sdb
# sgdisk -n 1:0:+1024 /dev/sdb -c 1:"ceph journal"
# sgdisk -n 2:0:+1024 /d
Hi,
you need to flush all caches before starting read tests. With fio you
can probably do this if you keep the files that it creates.
As root, on all clients and all OSD nodes, run:
echo 3 > /proc/sys/vm/drop_caches
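One way to do that across nodes (the hostnames are placeholders):
for h in client1 osd1 osd2 osd3; do ssh root@$h 'sync; echo 3 > /proc/sys/vm/drop_caches'; done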
But fio is a little problematic for ceph because of the caches in the
clients
Hi skinjo,
I forgot to ask: is it necessary to disconnect all the clients before doing
set-overlay? We didn't sweep the clients out while setting the overlay.
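For reference, the command in question (pool names are placeholders):
ceph osd tier set-overlay base-pool cache-pool    # base (storage) pool first, then the cache pool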
-- Original --
From: "JiaJia Zhong";
Date: Wed, Dec 14, 2016 11:24 AM
To: "Shinobu Kinjo";
Cc: "CEPH
On 15.12.2016 16:49, Bjoern Laessig wrote:
> What does your Cluster do? Where is your data. What happens now?
You could configure the interfaces between the nodes as pointopoint
links and run OSPF on them. The cluster nodes then would have their node
IP on a dummy interface. OSPF would sort out t
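A minimal sketch of the dummy-interface part (the address is a placeholder; the
OSPF daemon configuration itself is left out):
ip link add dummy0 type dummy
ip addr add 192.0.2.10/32 dev dummy0    # the node's stable Ceph IP, advertised to peers via OSPF
ip link set dummy0 up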
Ok, I understand.
And the same configuration has worked on your NVMe servers? If yes, it’s
strange, but I think the Ceph developers can tell you why better than I can
for this part :-)
Regards,
___
PSA Groupe
Loïc Devulder (loic.devul
2016-12-16 9:33 GMT+01:00 Wido den Hollander :
>
> > Op 16 december 2016 om 9:26 schreef Alessandro Brega <
> alessandro.bre...@gmail.com>:
> >
> >
> > Hi guys,
> >
> > I'm running a ceph cluster using 0.94.9-1trusty release on XFS for RBD
> > only. I'd like to replace some SSDs because they are c
Hi,
I’m not sure that having multiple OSDs on one drive is supported.
And also: why do you want this? It’s not good for performance and, more
importantly, for data redundancy.
Regards,
___
PSA Groupe
Loïc Devulder (loic.devul...@mpsa.com<
> Op 16 december 2016 om 9:26 schreef Alessandro Brega
> :
>
>
> Hi guys,
>
> I'm running a ceph cluster using 0.94.9-1trusty release on XFS for RBD
> only. I'd like to replace some SSDs because they are close to their TBW.
>
> I know I can simply shutdown the OSD, replace the SSD, restart th
Hi guys,
I'm running a ceph cluster using 0.94.9-1trusty release on XFS for RBD
only. I'd like to replace some SSDs because they are close to their TBW.
I know I can simply shutdown the OSD, replace the SSD, restart the OSD and
ceph will take care of the rest. However I don't want to do it this w
Hi,
I was trying the scenario where I have partitioned my drive (/dev/sdb) into
4 (sdb1, sdb2, sdb3, sdb4) using the sgdisk utility:
# sgdisk -z /dev/sdb
# sgdisk -n 1:0:+1024 /dev/sdb -c 1:"ceph journal"
# sgdisk -n 2:0:+1024 /dev/sdb -c 2:"ceph journal"
# sgdisk -n 3:0:+4096 /dev/sdb -c 3:"cep