Hi all:
I found that when I set a bucket expiration rule and then upload a new
object after the expiration date has passed, the new object gets
deleted. I found the related code, which looks like the following:
if (prefix_iter->second.expiration_date != boost::none) {
    //we have checked it before
Why should this be true?
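For context, this is roughly how such a rule can be set via the S3 API
(the bucket name, endpoint and date below are only placeholders):

  aws --endpoint-url http://rgw.example.com:7480 s3api put-bucket-lifecycle-configuration \
      --bucket mybucket \
      --lifecycle-configuration '{"Rules":[{"ID":"expire-by-date","Status":"Enabled","Prefix":"","Expiration":{"Date":"2019-01-01T00:00:00Z"}}]}'

I believe the usual S3 semantics are that once the rule's Date is in the
past, every object matching the rule becomes eligible for expiration,
including objects uploaded afterwards. If RGW follows the same semantics
that would explain the deletions, but I'd like to confirm it is intended.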
Hello everyone,
We're getting ready for the next round of Google Summer of Code and Outreachy.
Ali Maredia and I will help organize the Ceph project for each program.
We are looking to have project ideas for Google Summer of Code by
February 4th, as our project application is due by Febr
Hello everyone,
Just a reminder that the deadline for Cephalocon Barcelona 2019 CFP is
February 1 at 11:59 pm PST. Please get your proposed sessions in as soon
as possible for our selection committee to review. Thanks!
https://ceph.com/cephalocon/barcelona-2019/
https://linuxfoundation.smapply.io/pr
Hello, Ceph users,
TL;DR: radosgw fails on me with the following message:
2019-01-17 09:34:45.247721 7f52722b3dc0 0 rgw_init_ioctx ERROR:
librados::Rados::pool_create returned (34) Numerical result out of range (this
can be due to a pool or placement group misconfiguration, e.g. pg_num
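If this is the per-OSD PG cap that Luminous enforces, something along
these lines might help narrow it down (the mon name and the numbers are
just examples):

  ceph -s | grep pgs                                   # how many PGs already exist
  ceph daemon mon.mon01 config get mon_max_pg_per_osd  # the enforced cap
  # to make rgw's auto-created pools request fewer PGs, in ceph.conf:
  #   [global]
  #   osd pool default pg num = 8
  #   osd pool default pgp num = 8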
On Wed, Jan 16, 2019 at 11:17 PM Patrick Donnelly wrote:
>
> On Wed, Jan 16, 2019 at 1:21 AM Marvin Zhang wrote:
> > Hi CephFS experts,
> > From the documentation, I know multi-fs within a cluster is still an experimental feature.
> > 1. Is there any estimation about stability and performance for this feature?
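For reference, a second filesystem first has to be enabled explicitly;
roughly like this (pool names and PG counts are placeholders):

  ceph fs flag set enable_multiple true --yes-i-really-mean-it
  ceph osd pool create cephfs2_metadata 64
  ceph osd pool create cephfs2_data 64
  ceph fs new cephfs2 cephfs2_metadata cephfs2_data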
Hi,
I have a sad ceph cluster.
All my osds complain about failed reply on heartbeat, like so:
osd.10 635 heartbeat_check: no reply from 192.168.160.237:6810 osd.42
ever on either front or back, first ping sent 2019-01-16
22:26:07.724336 (cutoff 2019-01-16 22:26:08.225353)
.. I've checked the net
Are you sure no service like firewalld is running?
Did you check that all machines have the same MTU and jumbo frames are
enabled if needed?
I had this problem when I first started with ceph and forgot to
disable firewalld.
Replication worked perfectly fine but the OSD was kicked out every few se
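If it helps, these are the kinds of checks I would start with (assuming
a systemd-based distro, adjust for yours):

  systemctl status firewalld        # any firewall service still active?
  ip link show | grep mtu           # is the MTU consistent on every node?
  ss -tlnp | grep ceph-osd          # are the OSDs listening on the expected ports?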
How / where can I monitor the ios on cephfs mount / client?
Should I not be able to increase the IOs by splitting the data writes
over e.g. 2 cephfs mounts? I am still getting similar overall
performance. Is it even possible to increase performance by using
multiple mounts?
Using 2 kernel mounts on CentOS 7.6
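For what it's worth, this is roughly how I am testing it (the mount
points are mine, adjust to yours):

  dd if=/dev/zero of=/mnt/cephfs1/testfile1 bs=1M count=4096 oflag=direct &
  dd if=/dev/zero of=/mnt/cephfs2/testfile2 bs=1M count=4096 oflag=direct &
  wait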
Thank you for responding!
First thing: I disabled the firewall on all the nodes.
More specifically not firewalld, but the NixOS firewall, since I run NixOS.
I can netcat both udp and tcp traffic on all ports between all nodes
without problems.
Next, I tried raising the MTU to 9000 on the NICs wh
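The connectivity test was basically probing the OSD port range from
every node, along these lines (address and ports are just examples;
port ranges need the OpenBSD flavour of nc):

  nc -zv 192.168.160.237 6800-7300     # tcp
  nc -zuv 192.168.160.237 6810         # udp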
I would like to know the simplest and surest way to set up an RGW instance
with an EC pool for storing a large quantity of data.
1. I am currently trying to do this on a cluster that is not yet open to
users. (i.e. I can mess around with it and, in the worst case, start all
over.)
2. I deployed RGW
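For concreteness, the kind of setup I have in mind is roughly the
following (profile name, k/m values and PG counts are just examples):

  ceph osd erasure-code-profile set rgw-ec k=4 m=2 crush-failure-domain=host
  ceph osd pool create default.rgw.buckets.data 128 128 erasure rgw-ec
  ceph osd pool application enable default.rgw.buckets.data rgw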
You can do that either straight from your client, or by querying the
perf dump if you're using ceph-fuse.
Mohamad
On 1/17/19 6:19 AM, Marc Roos wrote:
>
> How / where can I monitor the ios on cephfs mount / client?
>
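For the ceph-fuse case, a rough example of reading the client counters
from the admin socket (the socket path varies with your client name):

  ceph daemon /var/run/ceph/ceph-client.admin.asok perf dump

For a kernel mount, the in-flight requests can be seen under debugfs, e.g.:

  cat /sys/kernel/debug/ceph/*/osdc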
This is sort of related to my email yesterday, but has anyone ever rebuilt a
bucket index using the objects themselves?
It seems that it would be possible, since the bucket_id is contained
within the rados object name:
# rados -p .rgw.buckets.index listomapkeys .dir.default.56630221.139618
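As far as I know, the supported route for repairing an index is
radosgw-admin's own consistency check (the bucket name below is a
placeholder), though I'm not sure it covers a fully lost index:

  radosgw-admin bucket check --bucket=mybucket --check-objects --fix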
On Mon, Jan 14, 2019 at 8:52 PM Yanko Davila wrote:
>
>
> Hello
>
> My name is Yanko Davila, I'm new to ceph, so please pardon my ignorance. I
> have a question about Bluestore and SPDK.
>
> I'm currently running ceph version:
>
> ceph version 12.2.10 (177915764b752804194937482a39e95e0ca3de94) lum
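For what it's worth, my understanding of the Luminous docs is that
BlueStore is pointed at an SPDK device through bluestore_block_path,
along these lines (the selector below is the example value from the
docs, not a real device):

  [osd]
  bluestore_block_path = spdk:55cd2e404bd73932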
Since you're using jumbo frames, make sure everything between the nodes
properly supports them (nics & switches). I've tested this in the past by
using the size option in ping (you need to use a payload size of 8972 instead
of 9000 to account for the 28 byte header):
ping -s 8972 192.168.160.
Hi,
The default limit for buckets per user in ceph is 1000, but it is
adjustable via radosgw-admin user modify --max-buckets
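e.g. (the uid is a placeholder):

  radosgw-admin user modify --uid=someuser --max-buckets=100000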
One of our users is asking for a significant increase (they're mooting
100,000), and I worry about the impact on RGW performance since, I
think, there's only one objec
On Tue, Jan 15, 2019 at 4:42 AM Yanko Davila wrote:
>
> Hello
>
> I was able to find the device selector. Now I have an issue understanding
> the steps to activate the OSD. Once I set up SPDK the device disappears from
> lsblk as expected. So the ceph manual is not very helpful after SPDK is
>
On Fri, Jan 18, 2019 at 12:53 AM kefu chai wrote:
>
> On Tue, Jan 15, 2019 at 4:42 AM Yanko Davila wrote:
> >
> > Hello
> >
> > I was able to find the device selector. Now I have an issue understanding
> > the steps to activate the OSD. Once I set up SPDK the device disappears from
> > lsblk as
On Thu, Jan 17, 2019 at 4:42 AM Johan Thomsen wrote:
> Thank you for responding!
>
> First thing: I disabled the firewall on all the nodes.
> More specifically not firewalld, but the NixOS firewall, since I run NixOS.
> I can netcat both udp and tcp traffic on all ports between all nodes
> witho
Hi,
We are trying to use Ceph in our products to address some of our use cases.
We think the Ceph block device is a good fit for us. One of the use cases is that we have a
number of jobs running in containers that need to have Read-Only access to
shared data. The data is written once and is consumed multiple times.
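A rough sketch of what we have in mind (pool and image names are just
placeholders): write the image once from a single producer, then map it
read-only in every consumer container:

  rbd create shared/dataset --size 102400        # size in MB
  # ... populate the image from the single writer ...
  rbd map shared/dataset --read-only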
Hi,
first off: I'm probably not the expert you are waiting for, but we are using
CephFS for HPC / HTC (storing datafiles), and make use of containers for all
jobs (up to ~2000 running in parallel).
We also use RBD, but for our virtualization infrastructure.
While I'm always one of the first to
Hi Stefan,
I'm taking a stab at reproducing this in-house. Any details you can
give me that might help would be much appreciated. I'll let you know
what I find.
Thanks,
Mark
On 1/16/19 1:56 PM, Stefan Priebe - Profihost AG wrote:
i reverted the whole cluster back to 12.2.8 - recovery
Hey,
I recall reading about this somewhere but I can't find it in the docs or list
archive and confirmation from a dev or someone who knows for sure would be
nice. What I recall is that bluestore has a max 4GB file size limit based on
the design of bluestore, not the osd_max_object_size setting.
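The setting itself can at least be inspected on a live OSD, e.g. (osd.0
is a placeholder):

  ceph daemon osd.0 config get osd_max_object_size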
I want to bring my cluster back to a HEALTHY state because right now I have
no access to the data.
I have a 3+2 EC pool on a 5 node cluster. 3 nodes were lost, all data
wiped. They were reinstalled and added to the cluster again.
The "ceph health detail" command says to reduce the min_size number to a va
When you use 3+2 EC that means you have 3 data chunks and 2 erasure chunks for
your data. So you can handle two failures, but not three. The min_size
setting is preventing you from going below 3 because that's the number of data
chunks you specified for the pool. I'm sorry to say this, but si
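If it helps to double-check what the pool actually requires, the profile
and the current min_size can be inspected (names below are placeholders):

  ceph osd pool get mypool min_size
  ceph osd erasure-code-profile get myprofile    # shows k and m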
I've run my home cluster with drives ranging in size from 500GB to 8TB before
and the biggest issue you run into is that the bigger drives will get a
proportionally larger number of PGs, which will increase the memory requirements
on them. Typically you want around 100 PGs/OSD, but if you mix 4TB an
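The per-OSD spread is easy to eyeball, e.g.:

  ceph osd df tree    # the PGS column shows how many PGs each OSD carries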
Hello Mark,
for whatever reason I didn't get your mails - most probably you kicked
me out of CC/TO and only sent to the ML? I've only subscribed to a daily
digest. (changed that for now)
So I'm very sorry for answering so late.
My messages might sound a bit confused as it isn't easily reproduced and we
Hello Mark,
after reading
http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
again I'm really confused about what exactly the memory behaviour is
under 12.2.8 and under 12.2.10.
Also I stumbled upon "When tcmalloc and cache autotuning is enabled," -
we're compiling against an
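The two knobs I have been staring at can at least be read back from a
running OSD, e.g. (osd.0 is a placeholder, and I am assuming the options
exist in your build):

  ceph daemon osd.0 config get bluestore_cache_autotune
  ceph daemon osd.0 config get osd_memory_target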
On 01/17/2019 04:46 AM, Mike Perez wrote:
> Hey everyone,
>
> We're getting close to the release of Ceph Nautilus, and I wanted to
> start the discussion of our next shirt!
>
> It looks like in the past we've used common works from Wikipedia pages.
>
> https://en.wikipedia.org/wiki/Nautilus
>
>
On 1/17/19 4:06 PM, Stefan Priebe - Profihost AG wrote:
Hello Mark,
after reading
http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
again I'm really confused about what exactly the memory behaviour is
under 12.2.8 and under 12.2.10.
Also I stumbled upon "When tcmalloc and
>> Lenz has provided this image that is currently being used for the 404
>> page of the dashboard:
>>
>> https://github.com/ceph/ceph/blob/master/src/pybind/mgr/dashboard/frontend/src/assets/1280px-Nautilus_Octopus.jpg
>
> Nautilus *shells* are somewhat iconic/well known/distinctive. Maybe a
> v
On Thu, Jan 17, 2019 at 3:23 AM Marc Roos wrote:
> Should I not be able to increase the IOs by splitting the data writes
> over e.g. 2 cephfs mounts? I am still getting similar overall
> performance. Is it even possible to increase performance by using
> multiple mounts?
>
> Using 2 kernel mounts
On Thu, Jan 17, 2019 at 2:44 AM Dan van der Ster wrote:
>
> On Wed, Jan 16, 2019 at 11:17 PM Patrick Donnelly wrote:
> >
> > On Wed, Jan 16, 2019 at 1:21 AM Marvin Zhang wrote:
> > > Hi CephFS experts,
> > > From the documentation, I know multi-fs within a cluster is still an experimental
> > > feature.
> >
Hi:
Recently, I was trying to find a way to map an rbd device that can
talk to the backend over RDMA. There are three ways to export an rbd
device: krbd, nbd, and iscsi. It seems that only iscsi may give a
chance. Has anyone tried to configure this and can give some advice?
Hello,
I ran into an error [1] while using OpenStack-Ansible to deploy Ceph
(using ceph-ansible 3.1).
My configuration was to use a non-collocated scenario with one SSD
(/dev/sdb) and two HDDs (/dev/sdc, /dev/sdd) on every host. The Ceph OSD
configuration can be found here [2].
[1] https://pasted
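The relevant part, reconstructed here from the description above rather
than pasted from [2], is roughly:

  osd_scenario: non-collocated
  devices:
    - /dev/sdc
    - /dev/sdd
  dedicated_devices:
    - /dev/sdb
    - /dev/sdb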
Ok, lesson learned the hard way. Thank goodness it was a test cluster.
Thanks a lot Bryan!
On Thu, Jan 17, 2019 at 21:46, Bryan Stillwell ()
wrote:
> When you use 3+2 EC that means you have 3 data chunks and 2 erasure chunks
> for your data. So you can handle two failures, but not three.