Thank you, Jason.
Are RBD volume consistency groups supported in Jewel? Can we take
consistent snapshots of a volume consistency group?
The OpenStack implementation is described here:
http://docs.openstack.org/admin-guide/blockstorage-consistency-groups.html
Thanks again.
Yair Magnezi
Storage &
S3 is originally Amazon's protocol, so the details can be found in Amazon's
documentation. If you want to know the details of the RESTful API, see
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTcors.html
Sree has now been updated with a bucket action to help set up bucket CORS.
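For what it's worth, any S3 client that implements that call can also push a CORS
configuration to an RGW bucket. A minimal sketch with the AWS CLI (the bucket name,
endpoint, and rules below are placeholders, not taken from Sree):

$ cat > cors.json <<'EOF'
{"CORSRules": [{"AllowedOrigins": ["*"],
                "AllowedMethods": ["GET", "PUT"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000}]}
EOF
$ aws s3api put-bucket-cors --bucket mybucket \
    --cors-configuration file://cors.json --endpoint-url http://rgw.example.com
$ aws s3api get-bucket-cors --bucket mybucket --endpoint-url http://rgw.example.com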
Thanks.
Warm Regards.
From: Tu Holmes [mailto:tu.hol...@gmail.com]
Sent: Tuesday, May 03, 2016 1:38 AM
To: ceph-users@lists.ceph.com; Michael Ferguson
Subject: Re: [ceph-users] Lab Newbie Here: Where do I start?
I would start here.
https://www.redhat.com/en/resources/red-hat-ceph
Hi Peter,
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Peter Kerdisle
> Sent: 02 May 2016 08:17
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] Erasure pool performance expectations
>
> Hi guys,
>
> I am currently testing the
Hey Nick,
Thanks for taking the time to answer my questions. Some in-line comments.
On Tue, May 3, 2016 at 10:51 AM, Nick Fisk wrote:
> Hi Peter,
>
>
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Peter Kerdisle
> > Sent: 02 May 2
On Tue, May 3, 2016 at 3:20 AM, Yair Magnezi wrote:
> Are RBD volume consistency groups supported in Jewel? Can we take
> consistent snapshots of a volume consistency group?
No, this feature is being actively worked on for the Kraken release of
Ceph (the next major release after Jewel).
--
J
Hi Cephers,
I am running a very small cluster of 3 storage nodes and 2 monitor nodes.
After I kill one OSD daemon, the cluster never recovers fully: 9 PGs
remain undersized for no apparent reason.
After I restart that one OSD daemon, the cluster recovers in no time.
The size of all pools is 3 and min_size is 2.
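For reference, a few commands that can help show why those 9 PGs stay undersized
(a sketch; the PG id is a placeholder):

$ ceph health detail              # which PGs are undersized/degraded and why
$ ceph pg dump_stuck unclean      # stuck PGs with their up/acting OSD sets
$ ceph osd tree                   # confirm the killed OSD is marked down/out
$ ceph pg <pgid> query            # peering/recovery state of one affected PG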
> -Original Message-
> From: Peter Kerdisle [mailto:peter.kerdi...@gmail.com]
> Sent: 03 May 2016 12:15
> To: n...@fisk.me.uk
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Erasure pool performance expectations
>
> Hey Nick,
>
> Thanks for taking the time to answer my quest
The degraded PGs are mapped to the down OSD and have not been remapped to a
new OSD. Removing the OSD would likely result in a full recovery.
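If you do go the removal route, the usual sequence is roughly the following
(osd.12 is a placeholder id):

$ ceph osd out 12                  # mark it out so PGs start remapping
$ ceph osd crush remove osd.12     # remove it from the CRUSH map
$ ceph auth del osd.12             # remove its cephx key
$ ceph osd rm 12                   # remove it from the OSD map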
As a note, having two monitors (or any even number of monitors) is not
recommended. If either monitor goes down you will lose quorum. The
recommended number of mon
Hello,
I am planning to make some changes to our Ceph cluster and would like to ask
the community for the best route to take.
Our existing cluster is made of 3 OSD servers (two of which are also mon
servers) and a total of 3 mon servers. The cluster is currently running on
Ubuntu 14.04.x LT
Thanks, Tupper, for replying.
Shouldn't the PGs be remapped to other OSDs?
Yes, removing the OSD from the cluster results in a full recovery.
But that should not be needed, right?
On Tue, May 3, 2016 at 6:31 PM, Tupper Cole wrote:
> The degraded pgs are mapped to the down OSD and have not m
Hi,
I am currently trying to make a more or less informed decision about which HDDs
will be used for the cold storage behind the SSD cache tier.
As I see it, there have lately been different drive variants available:
512N (512 bytes native sector size)
512E (512 bytes emulated on a 4k sector size)
4kN (4k native secto
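For reference, the logical vs. physical sector size a drive reports can be checked
from Linux, which tells the three variants apart (the device name is a placeholder):

$ lsblk -o NAME,LOG-SEC,PHY-SEC    # 512/512 = 512N, 512/4096 = 512E, 4096/4096 = 4kN
$ smartctl -i /dev/sdX | grep -i sector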
Yes, the PG *should* get remapped, but that is not always the case. For
discussion on this, check out the tracker below. Your particular
circumstances may be a little different, but the idea is the same.
http://tracker.ceph.com/issues/3806
On Tue, May 3, 2016 at 9:16 AM, Gaurav Bafna wrote:
> T
Also, the old PGs are not mapped to the down OSD, as seen from the
ceph health detail output:
pg 5.72 is active+undersized+degraded, acting [16,49]
pg 5.4e is active+undersized+degraded, acting [16,38]
pg 5.32 is active+undersized+degraded, acting [39,19]
pg 5.37 is active+undersized+degraded, acting [43,
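For one of those PGs it may be worth dumping the full state to see why only two
OSDs are acting, e.g. using the first PG from the list above:

$ ceph pg 5.72 query    # look at the "up", "acting" and "recovery_state" sections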
PGs are degraded because they don't have enough copies of the data. What
is your replication size?
You can refer to
http://docs.ceph.com/docs/master/rados/operations/pg-states/ for PG states.
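The current values can be checked per pool with something like this (the pool
name is a placeholder):

$ ceph osd pool get <pool> size        # replication size
$ ceph osd pool get <pool> min_size    # minimum copies needed to serve I/O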
Varada
On Tuesday 03 May 2016 06:56 PM, Gaurav Bafna wrote:
> Also , the old PGs are not mapped to the
Thank you, I will attempt to play around with these settings and see if I
can achieve better read performance.
Appreciate your insights.
Peter
On Tue, May 3, 2016 at 3:00 PM, Nick Fisk wrote:
>
>
> > -Original Message-
> > From: Peter Kerdisle [mailto:peter.kerdi...@gmail.com]
> > Sent
The replication size is 3 and min_size is 2. Yes, they don't have
enough copies. Ceph by itself should recover from this state to ensure
durability.
@Tupper: In that bug, each node is hosting only three OSDs. In my
setup, every node has 23 OSDs. So this should not be the issue.
On Tue, M
In addition to what Nick said, it's really valuable to watch your cache
tier write behavior during heavy IO. One thing I noticed is you said
you have 2 SSDs for journals and 7 SSDs for data. If they are all of
the same type, you're likely bottlenecked by the journal SSDs for
writes, which com
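One quick way to see whether the journal SSDs are the write bottleneck is to
watch device utilisation during a heavy write test, for example (device names
are placeholders):

$ iostat -x sdb sdc 1    # journal SSDs: compare %util and w_await against the data SSDs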
Mark,
Thanks for pointing out the throttles; they completely slipped my
mind. But then it got me thinking: why weren't they kicking in and preventing
too many promotions in the OP's case?
I had a quick look at my current OSD settings:
sudo ceph --admin-daemon /var/run/ceph/ceph
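(For anyone following along, the full command is something like the one below;
the OSD id and socket path are placeholders for whatever exists on your host:

$ sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep promote

which should list the osd_tier_promote_* throttles and their current values.)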
aha! I blame sage and take no responsibility. :D
https://github.com/ceph/ceph/commit/49c3521b05c33fff68a926d404d5216d1b078955
On 05/03/2016 09:24 AM, Nick Fisk wrote:
https://github.com/ceph/ceph/search?utf8=%E2%9C%93&q=osd_tier_promote_max_bytes_sec
Hi,
I have a test Ceph cluster in my lab which will be a storage backend for one of
my projects.
This cluster is my first experience with CentOS 7, but recently I have had a use
case on Ubuntu 14.04 too.
Actually, everything works fine and the cluster functions well,
but the main p
Hi,
we have a number of legacy applications that do not cope well with the
POSIX locking semantics in CephFS due to missing locking support (e.g.
flock syscalls). We are able to fix some of these applications, but
others are binary only.
Is it possible to disable POSIX locking completely in
On Tue, May 3, 2016 at 9:30 AM, Burkhard Linke
wrote:
> Hi,
>
> we have a number of legacy applications that do not cope well with the POSIX
> locking semantics in CephFS due to missing locking support (e.g. flock
> syscalls). We are able to fix some of these applications, but others are
> binary
Hi Roozbeh,
There isn't nearly enough information here regarding your benchmark and
test parameters to be able to tell why you are seeing performance
swings. It could be anything from network hiccups, to throttling in the
ceph stack, to unlucky randomness in object distribution, to vibrations
On Sun, May 1, 2016 at 5:52 PM, Andrus, Brian Contractor
wrote:
> All,
>
>
>
> I thought there was a way to mount CephFS using the kernel driver and be
> able to honor selinux labeling.
>
> Right now, if I do 'ls -lZ' on a mounted cephfs, I get question marks
> instead of any contexts.
>
> When I
Hi Oliver,
Thanks for your reply.
The problem could have been caused by crashing/flapping OSDs. The cluster
is stable now, but lots of PG problems remain.
$ ceph health
HEALTH_ERR 4 pgs degraded; 158 pgs inconsistent; 4 pgs stuck degraded; 1
pgs stuck inactive; 10 pgs stuck unclean; 4 pgs stuck
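If it helps, the inconsistent PGs can be enumerated and repaired one at a time
(the PG id is a placeholder; it is worth understanding the scrub errors before
repairing):

$ ceph health detail | grep inconsistent    # list the inconsistent PGs
$ ceph pg repair <pgid>                     # ask the primary OSD to repair that PG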
Hi Blade,
if you don't see anything in the logs, then you should raise the debug
level/frequency.
You must at least see that the repair command has been issued (started).
Also, I am wondering about the [6] from your output.
That means that there is only one copy of it (on osd.6).
What is yo
My crush map keeps putting some OSDs on the wrong node. Restarting them
fixes it temporarily, but they eventually hop back to the other node that
they aren't really on.
Is there anything that can cause this that I should look for?
Ceph 9.2.1
-Ben
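One thing worth checking (just a guess without more detail): whether the OSDs are
updating their own CRUSH location at startup and resolving to the wrong host. A
sketch of how to pin them, with placeholder names:

$ ceph osd tree    # note which host each OSD currently claims

and then, in ceph.conf on the affected hosts, either pin the location explicitly:

[osd.12]
osd crush location = host=nodeA root=default

or stop OSDs from relocating themselves at startup:

[osd]
osd crush update on start = false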
Hi,
On 03.05.2016 18:39, Gregory Farnum wrote:
On Tue, May 3, 2016 at 9:30 AM, Burkhard Linke
wrote:
Hi,
we have a number of legacy applications that do not cope well with the POSIX
locking semantics in CephFS due to missing locking support (e.g. flock
syscalls). We are able to fix some of th
ceph-disk can prepare a disk, a partition, or a directory to be used as a device.
What are the implications and limits of using a directory?
Can it be used both for journal and storage?
What file system should the directory exist on?
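As far as I know, pointing ceph-disk at a directory looks roughly like this; treat
it as a sketch, since the paths are placeholders and the flags vary between releases:

$ mkdir -p /srv/ceph/osd0
$ ceph-disk prepare /srv/ceph/osd0     # directory used for data; the journal becomes a file inside it
$ ceph-disk activate /srv/ceph/osd0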
Vincenzo Pii | TERALYTICS
DevOps Engineer
Teralytics AG | Zollst
https://github.com/ceph/ceph-docker
Is anyone using ceph-docker in production, or is the project meant more for
development and experimentation?
Vincenzo Pii | TERALYTICS
DevOps Engineer
Teralytics AG | Zollstrasse 62 | 8005 Zurich | Switzerland
phone: +41 (0) 79 191 11 08
email: vincenzo@t
The Hammer .93 to .94 notes said:
If upgrading from v0.93, set osd enable degraded writes = false on all
osds prior to upgrading. The degraded writes feature has been reverted due
to 11155.
Our cluster is now on Infernalis 9.2.1 and we still have this setting set.
Can we get rid of it? Was this r
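In the meantime you can at least check what the running OSDs think the value is,
assuming the option still exists in your release (the OSD id is a placeholder):

$ ceph daemon osd.0 config get osd_enable_degraded_writes
$ grep -i 'degraded writes' /etc/ceph/ceph.conf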
On Tue, May 3, 2016 at 4:10 PM, Ben Hines wrote:
> The Hammer .93 to .94 notes said:
> If upgrading from v0.93, set osd enable degraded writes = false on all osds
> prior to upgrading. The degraded writes feature has been reverted due to
> 11155.
>
> Our cluster is now on Infernalis 9.2.1 and we
I mistakenly created a cache pool with way too few PGs. It's attached
as a write-back cache to an erasure-coded pool, has data in it, etc.;
cluster's using Infernalis. Normally, I can increase pg_num live, but
when I try in this case I get:
# ceph osd pool set cephfs_data_cache pg_num 256
Error
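If the (truncated) error is the guard against splitting cache pools, then as far
as I remember the split has to be forced and followed by scrubs; a sketch under
that assumption:

$ ceph osd pool set cephfs_data_cache pg_num 256 --yes-i-really-mean-it
$ ceph osd pool set cephfs_data_cache pgp_num 256

(and scrub the cache pool's PGs before relying on it again)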
Hi Ben,
What OS + version?
Best Regards,
Wade
On Tue, May 3, 2016 at 2:44 PM Ben Hines wrote:
> My crush map keeps putting some OSDs on the wrong node. Restarting them
> fixes it temporarily, but they eventually hop back to the other node that
> they aren't really on.
>
> Is there anything tha
Hey cephers,
Sorry for the late notice here, but due to an unavoidable conflict it
seems we’ll have to move this month’s CDM to next week. I’m leaving
the URL for blueprints the same in case there are bookmarks or other
links still floating around out there, but please submit at least a
couple of