Hi!
We are running a Ceph 12.2.7 cluster and use it for RBDs.
We now have a few new servers installed with Ubuntu 18.
The default kernel version is v4.15.0.
When we create a new RBD and map/xfs-format/mount it, everything looks fine.
But if we want to map/mount an RBD that already has data in i
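For reference, the map/format/mount sequence in question is roughly the following (pool and image names are only placeholders):
rbd create --size 100G rbd/testimage
rbd map rbd/testimage              # returns a device such as /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt/testimage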
Hi,
I want to pin to an older version of Ceph Luminous (12.2.4) and I've
noticed that https://download.ceph.com/debian-luminous/ does not support
this via apt install:
apt install ceph works for 12.2.7 but
apt install ceph=12.2.4-1xenial does not work
The deb files are there, they're just not
What is the correct procedure for re-creating an incomplete placement group
that belongs to an erasure coded pool?
I'm facing a situation where too many shards of 3 PGs were lost during OSD
crashes, and taking the data loss was decided, but I can't force Ceph to
recreate those PGs. The query output sh
I have an idle test cluster (centos7.5, Linux c04
3.10.0-862.9.1.el7.x86_64), and a client kernel mount cephfs.
I tested reading a few files on this cephfs mount and get very low
results compared to the rados bench. What could be the issue here?
[@client folder]# dd if=5GB.img of=/dev/null st
Hi Marc,
In general dd isn't the best choice for benchmarking.
In your case there are at least 3 differences from rados bench:
1) If I haven't missed something, then you're comparing reads vs. writes
2) Block size is different (512 bytes for dd vs. 4M for rados bench)
3) Just a single dd in
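A read test closer to the rados bench defaults would be a rough sketch like this (reusing the file name from your mail; drop the page cache first so dd really reads from cephfs):
echo 3 > /proc/sys/vm/drop_caches
dd if=5GB.img of=/dev/null bs=4M   # 4M blocks, matching the rados bench object size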
No need to delete it; that situation should be mostly salvageable by
setting osd_find_best_info_ignore_history_les temporarily on the
affected OSDs.
That should cause you to "just" lose some writes, resulting in inconsistent data.
Paul
2018-08-28 11:08 GMT+02:00 Maks Kowalik :
> What is the correc
I was not trying to compare the test results; I know they are different.
I am showing that reading is slow on cephfs (I am doing an rsync to
cephfs and I assumed that rsync reads the file in a similar way).
And the cluster is in more or less the same OK state.
Meanwhile I did a similar test with ce
Was there not some issue a while ago that was related to a kernel
setting? Because I remember doing some tests where ceph-fuse was
always slower than the kernel module.
-Original Message-
From: Marc Roos
Sent: dinsdag 28 augustus 2018 12:37
To: ceph-users; ifedotov
Subject: Re: [
kernel:
c01,c02,c03:/backup /home/backup ceph name=cephfs.backup,secretfile=/root/client.cephfs.backup.key,_netdev 0 0
fuse:
c01,c02,c03:/backup /home/backup2 fuse.ceph ceph.id=cephfs.backup,_netdev 0 0
Mounts root cephfs:
c01,c02,c03:/backup /home/backup2
Thank you for answering.
Where is this option documented?
Do I set it in the config file, or using "tell osd.number" or admin-daemon?
Do I set it on the primary OSD of the up set, on all OSDs of the up set, or
maybe on all historical peers holding the shards of a particular group?
Is this option da
Scrubs discovered the following inconsistency:
2018-08-23 17:21:07.933458 osd.62 osd.62 10.122.0.140:6805/77767 6 :
cluster [ERR] 9.3cd shard 113: soid
9:b3cd8d89:::.dir.default.153398310.112:head omap_digest 0xea4ba012 !=
omap_digest 0xc5acebfd from shard 62, omap_digest 0xea4ba012 != omap_digest
I don't think it's documented.
It won't affect PGs that are active+clean.
It takes effect during peering; the easiest approach is to set it in ceph.conf and
restart *all* OSDs that you want to rescue.
It is important not to forget to unset it afterwards.
Paul
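A minimal sketch of that, assuming the affected OSDs are osd.1 and osd.2 (placeholder IDs):
# ceph.conf on the nodes hosting the affected OSDs
[osd.1]
osd find best info ignore history les = true
[osd.2]
osd find best info ignore history les = true
# then restart them, e.g. systemctl restart ceph-osd@1 ceph-osd@2
# and remove the setting + restart again once the PGs have peered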
2018-08-28 13:21 GMT+02:00 Maks Kowalik :
> Thank
On Mon, Aug 27, 2018 at 11:19 PM đức phạm xuân wrote:
>
> Hello Jason Dillaman,
>
> I'm working with Ceph Object Storage Multi-Site v2, ceph's version is mimic.
> Now I want to delay replicating data from a master site to a slave site. I
> don't know whether Ceph supports this mechanism?
T
Hi,
Just to let you know that I finally resolved my problem a few weeks ago, but I
wanted to make sure that it was solved permanently.
I set the timeout of the OSDs to a larger number of seconds and set the noout and
nodown flags on the cluster.
Basically I just waited until the “clean” ended, but I notic
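For the record, those flags are set and cleared like this (just the generic commands, not my exact session):
ceph osd set noout
ceph osd set nodown
# ... wait for the cluster to settle ...
ceph osd unset nodown
ceph osd unset noout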
It's a bug. Search for the thread "Poor CentOS 7.5 client performance" in ceph-users.
On Tue, Aug 28, 2018 at 2:50 AM Marc Roos wrote:
>
>
> I have a idle test cluster (centos7.5, Linux c04
> 3.10.0-862.9.1.el7.x86_64), and a client kernel mount cephfs.
>
> I tested reading a few files on this cephfs moun
Thanks!!!
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46212.html
echo 8192 >/sys/devices/virtual/bdi/ceph-1/read_ahead_kb
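If I understand it correctly, the same can probably be made persistent for the kernel client with the rasize mount option (in bytes, so 8192 KB would be rasize=8388608), e.g.:
c01,c02,c03:/backup /home/backup ceph name=cephfs.backup,secretfile=/root/client.cephfs.backup.key,rasize=8388608,_netdev 0 0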
-Original Message-
From: Yan, Zheng [mailto:uker...@gmail.com]
Sent: dinsdag 28 augustus 2018 15:44
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-
I don't want to "rescue" any OSDs. I want to clean the incomplete PGs to
make Ceph proceed with PG re-creation and make those groups active again.
In my case, on which OSDs should I start with the
"osd_find_best_info_ignore_history_les" option?
This is part of the query output from one of the groups
That is the expected behavior of the Ceph repo. In the past when I needed a
specific version, I would download the packages for that version to a folder,
and you can create a repo file that reads from a local directory. That's
how I would re-install my test lab after testing an upgrade procedure to
tr
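Roughly like this on a Debian/Ubuntu box (a sketch; the local path is a placeholder and the debs have to be fetched into it first):
mkdir -p /opt/ceph-12.2.4 && cd /opt/ceph-12.2.4
# download the 12.2.4 *.deb files from download.ceph.com/debian-luminous/pool/ into this folder
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
echo "deb [trusted=yes] file:/opt/ceph-12.2.4 ./" > /etc/apt/sources.list.d/ceph-local.list
apt update && apt install ceph=12.2.4-1xenial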
There is a tracker [1] open for this issue. There are 2 steps that should
get a PG to scrub/repair when it is just issuing the scrub but not running
it. First, increase osd_max_scrubs on the OSDs involved in the PG. If
that doesn't fix it, then try increasing your osd_deep_scrub_interval on
all
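A sketch of bumping those at runtime on the OSDs from the scrub error above (values are only examples):
ceph tell osd.62 injectargs '--osd_max_scrubs 2'
ceph tell osd.113 injectargs '--osd_max_scrubs 2'
# if that is not enough, widen the deep-scrub interval (seconds) and re-issue the repair
ceph tell osd.* injectargs '--osd_deep_scrub_interval 1209600'
ceph pg repair 9.3cd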
Hi,
Would you be able to recommend an erasure code plugin?
The default is jerasure, but lrc appears to be more efficient.
I'll appreciate any hints and/or pointers to resources / best practices.
Thanks
Steven
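For comparison, the two plugins are set up roughly like this (k/m/l values and names are only examples, not a recommendation):
ceph osd erasure-code-profile set ec-jerasure k=4 m=2 plugin=jerasure technique=reed_sol_van
ceph osd erasure-code-profile set ec-lrc k=4 m=2 l=3 plugin=lrc
ceph osd pool create ecpool-test 64 64 erasure ec-jerasure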
This is what we have for our fstab to mount a specific subfolder using
ceph-fuse
id=cephfs-backup,client_mountpoint=/backup /home/backup2 fuse.ceph _netdev,noatime,rw 0 0
On Tue, Aug 28, 2018 at 4:04 AM Marc Roos wrote:
>
> kernel
> c01,c02,c03:/backup /home/backupceph
Try to update to kernel-3.10.0-862.11.6.el7.x86_64.rpm; that should solve the
problem.
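That is, something like (assuming that build is already available in the configured CentOS repos):
yum install kernel-3.10.0-862.11.6.el7.x86_64
reboot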
Best
Dietmar
On 28 August 2018 11:50:31 CEST, Marc Roos wrote:
>
>I have a idle test cluster (centos7.5, Linux c04
>3.10.0-862.9.1.el7.x86_64), and a client kernel mount cephfs.
>
>I tested reading a few
On 08/28/2018 09:24 AM, Jason Dillaman wrote:
On Mon, Aug 27, 2018 at 11:19 PM đức phạm xuân wrote:
Hello Jason Dillaman,
I'm working with Ceph Object Storage Multi-Site v2, ceph's version is mimic.
Now I want to delay replicate data from a master site to a slave site. I don't
know whether
Dear cephers,
I am new to the storage domain.
Trying to get my head around an enterprise, production-ready setup.
The following article helps a lot here: (Yahoo ceph implementation)
https://yahooeng.tumblr.com/tagged/object-storage
But a couple of questions:
What HDD would they have used here
James, well for a start, don't use a SAN. I speak as someone who managed a
SAN with Brocade switches and multipathing for an F1 team. Ceph is Software
Defined Storage. You want discrete storage servers with a high-bandwidth
Ethernet (or maybe InfiniBand) fabric.
Fibre Channel still has its place here
Hi James,
I can see where some of the confusion has arisen; hopefully I can put at least
some of it to rest. In the Tumblr post from Yahoo, the keyword to look out for
is “nodes”, which is distinct from individual hard drives, which in Ceph are
OSDs in most cases. So you would have multiple
James, you also use the words "enterprise" and "production ready".
Is Red Hat support important to you?
On Tue, 28 Aug 2018 at 23:56, John Hearns wrote:
> James, well for a start don't use a SAN. I speak as someone who managed a
> SAN with Brocade switches and multipathing for an F1 team. CEPH is
On 28.08.18 at 07:14, Yan, Zheng wrote:
> On Mon, Aug 27, 2018 at 10:53 AM Oliver Freyermuth
> wrote:
>>
>> Thanks for the replies.
>>
>> On 27.08.18 at 19:25, Patrick Donnelly wrote:
>>> On Mon, Aug 27, 2018 at 12:51 AM, Oliver Freyermuth
>>> wrote:
These features are critical for us, so
After moving back to tcmalloc, my random crash issues have been resolved.
I would advise disabling support for jemalloc on bluestore since it's not
stable or safe... seems risky to allow this?
_
*Tyler Bishop*
EST 2007
O: 513-299-7108 x1000
M: 513-646-5
Hi everyone,
Please help me welcome Mike Perez, the new Ceph community manager!
Mike has a long history with Ceph: he started at DreamHost working on
OpenStack and Ceph back in the early days, including work on the original
RBD integration. He went on to work in several roles in the OpenStack
On 08/28/2018 06:13 PM, Sage Weil wrote:
> Hi everyone,
>
> Please help me welcome Mike Perez, the new Ceph community manager!
>
> Mike has a long history with Ceph: he started at DreamHost working on
> OpenStack and Ceph back in the early days, including work on the original
> RBD integration.
Wherever I go, there you are ;). Glad to have you back again!
Cheers,
Erik
On Tue, Aug 28, 2018, 10:25 PM Dan Mick wrote:
> On 08/28/2018 06:13 PM, Sage Weil wrote:
> > Hi everyone,
> >
> > Please help me welcome Mike Perez, the new Ceph community manager!
> >
> > Mike has a long history with C
Welcome!
At 2018-08-29 09:13:24, "Sage Weil" wrote:
>Hi everyone,
>
>Please help me welcome Mike Perez, the new Ceph community manager!
>
>Mike has a long history with Ceph: he started at DreamHost working on
>OpenStack and Ceph back in the early days, including work on the original
>RBD
Welcome Mike!
On Tue, Aug 28, 2018 at 10:19 PM, linghucongsong
wrote:
>
>
>
>
> Welcome!
>
>
>
> At 2018-08-29 09:13:24, "Sage Weil" wrote:
> >Hi everyone,
> >
> >Please help me welcome Mike Perez, the new Ceph community manager!
> >
> >Mike has a long history with Ceph: he started at DreamHost
Great! Welcome Mike!
On 29 August 2018 05:36:20 CEST, Alvaro Soto wrote:
>Welcome Mike!
>
>On Tue, Aug 28, 2018 at 10:19 PM, linghucongsong
>
>wrote:
>
>>
>>
>>
>>
>> Welcome!
>>
>>
>>
>> At 2018-08-29 09:13:24, "Sage Weil" wrote:
>> >Hi everyone,
>> >
>> >Please help me welcome Mike Perez, t
Hi,
because there aren't any replies on the Proxmox mailing lists, I'll give it
a try here. Has anyone experience with the following circumstance?
Any hints are welcome:
for backup testing purposes we run a ceph cluster with radosgw (S3) and
nfs-ganesha to export S3 via NFS. The cluster is running o
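For context, the export is configured roughly along these lines in ganesha.conf (a sketch with placeholder user and keys, not the exact config):
EXPORT {
    Export_ID = 1;
    Path = "/";
    Pseudo = "/s3";
    Access_Type = RW;
    Protocols = 4;
    Transports = TCP;
    FSAL {
        Name = RGW;
        User_Id = "backupuser";
        Access_Key_Id = "<access-key>";
        Secret_Access_Key = "<secret-key>";
    }
}
RGW {
    ceph_conf = "/etc/ceph/ceph.conf";
    name = "client.rgw.gateway";
    cluster = "ceph";
}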
Hi David,
Thanks for your reply. That's how I'm currently handling it.
Kind regards,
Tom
On Tue, Aug 28, 2018 at 4:36 PM David Turner wrote:
> That is the expected behavior of the ceph repo. In the past when I needed
> a specific version I would download the packages for the version to a
> fol