Hi,
On 01/08/2018 05:40 PM, Alessandro De Salvo wrote:
Thanks Lincoln,
indeed, as I said the cluster is recovering, so there are pending ops:
pgs: 21.034% pgs not active
1692310/24980804 objects degraded (6.774%)
5612149/24980804 objects misplaced (22.466%)
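A quick way to see exactly which PGs are still inactive while recovery runs (standard ceph CLI, nothing cluster-specific assumed):

    $ ceph health detail | grep -i inactive
    $ ceph pg dump_stuck inactive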
On Tue, Jan 02, 2018 at 04:54:55PM +, John Spray wrote:
On Tue, Jan 2, 2018 at 10:43 AM, Jan Fajerski wrote:
Hi lists,
Currently the ceph status output formats all numbers with binary unit
prefixes, i.e. 1MB equals 1048576 bytes and an object count of 1M equals
1048576 objects. I received
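For the record, the two conventions differ by about 4.9% at the mega scale; GNU coreutils' numfmt (if installed) illustrates the ambiguity:

    $ numfmt --from=iec 1M    # binary prefix: 2^20
    1048576
    $ numfmt --from=si 1M     # decimal prefix: 10^6
    1000000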
Hello ceph users, I have a question for you.
I have checked ceph documentation and a number of online conversations in
order to find some details and real-life experience about RBD image hosted
on EC-encoded pools. I understand that it is essential to use Bluestore
storage and create separate repl
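For anyone searching later, the usual layout is an EC data pool plus a small replicated pool that holds the image metadata; a minimal sketch with hypothetical pool names and placeholder PG counts:

    $ ceph osd pool create rbd_meta 64 64 replicated
    $ ceph osd pool application enable rbd_meta rbd
    $ ceph osd pool create rbd_data 64 64 erasure
    $ ceph osd pool set rbd_data allow_ec_overwrites true   # requires BlueStore OSDs
    $ rbd create --size 100G --data-pool rbd_data rbd_meta/myimage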
On Tue, 9 Jan 2018, Jan Fajerski wrote:
> On Tue, Jan 02, 2018 at 04:54:55PM +, John Spray wrote:
> > On Tue, Jan 2, 2018 at 10:43 AM, Jan Fajerski wrote:
> > > Hi lists,
> > > Currently the ceph status output formats all numbers with binary unit
> > > prefixes, i.e. 1MB equals 1048576 bytes a
On Tue, Jan 9, 2018 at 6:14 AM, Sage Weil wrote:
> On Mon, 8 Jan 2018, Adam C. Emerson wrote:
>> Good day,
>>
>> I've just merged some changes into master that set us up to compile
>> with C++17. This will require a reasonably new compiler to build
>> master.
>
> Yay!
>
>> Due to a change in how 'n
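If your distribution's default compiler is older, something along these lines should work for building master; treating "GCC 7 or a similarly recent clang" as an assumption here rather than a stated requirement:

    $ g++ --version | head -n1
    # do_cmake.sh normally passes extra arguments through to cmake (an
    # assumption about the script; verify before relying on it)
    $ ./do_cmake.sh -DCMAKE_C_COMPILER=gcc-7 -DCMAKE_CXX_COMPILER=g++-7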
Hello John,
Thank you for the clarification. I am using Google cloud platform for this
setup and I don't think I can assign a public ip directly to an interface
there. Hence the question.
Thanks
On Jan 8, 2018 1:51 PM, "John Petrini" wrote:
> ceph will always bind to the local IP. It can't bin
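In practice that means pointing Ceph at the VPC-internal network rather than the NATed external address; a minimal ceph.conf sketch with a hypothetical GCP subnet:

    [global]
        # daemons bind to an address in this network; the external IP never
        # appears on the instance's interface, so it cannot be bound to
        public network = 10.128.0.0/20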
Hello.
My real-life experience tells me that this kind of setup will use far more
hardware resources and will show lower benchmark numbers than the recommended
replicated pools on the same hardware.
Writes to EC pools are, in some cases, better than to replicated pools.
http://en.community.dell.com/cfs-file/__
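The simplest way to check this on your own hardware is a pair of rados bench runs against each pool type (pool names here are hypothetical):

    $ rados bench -p ecpool 60 write --no-cleanup
    $ rados bench -p replpool 60 write --no-cleanup
    $ rados -p ecpool cleanup && rados -p replpool cleanup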
On Mon, Jan 8, 2018 at 8:02 PM, Marc Roos wrote:
>
> I guess the mds cache holds files, attributes etc but how many files
> will the default "mds_cache_memory_limit": "1073741824" hold?
We always used to get asked how much memory a given mds_cache_size (in
inodes) would require, I guess it was on
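There is no exact files-per-byte figure, but assuming roughly 2-3 KB of cache per inode/dentry (a rule of thumb, not a measured number), the 1 GiB default holds on the order of a few hundred thousand files. Raising it is just a ceph.conf change on the MDS hosts:

    [mds]
        # default is 1073741824 (1 GiB); 4 GiB shown as an example
        mds cache memory limit = 4294967296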
The script has not been adapted for this - at the end
http://download.ceph.com/nfs-ganesha/rpm-V2.5-stable/luminous/x86_64/
nfs-ganesha-rgw-2.5.4-.el7.x86_64.rpm
^
-Original Message-
From: Marc Roos
Sent: Tuesday, 29 August 2017 12:10
To: amare...@redhat.c
This was fixed on next (for 2.6, currently in -rc1) but not backported
to 2.5.
Daniel
On 01/09/2018 12:41 PM, Marc Roos wrote:
The script has not been adapted for this - at the end
http://download.ceph.com/nfs-ganesha/rpm-V2.5-stable/luminous/x86_64/
nfs-ganesha-rgw-2.5.4-.el7.x86_64.rp
Hi,
I've recently upgraded from Jewel to Luminous and I'm therefore new to
using the Dashboard. I noted this section in the documentation:
http://docs.ceph.com/docs/master/mgr/dashboard/#load-balancer
"Please note that the dashboard will only start on the
manager which is active
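Since only the active manager serves the dashboard, the usual approach is a health-checked proxy in front of all the mgrs so that standbys simply fail the check; a minimal haproxy sketch with hypothetical addresses (Luminous dashboard default port 7000):

    frontend ceph_dashboard
        mode http
        bind *:7000
        default_backend mgr_dashboard

    backend mgr_dashboard
        mode http
        option httpchk GET /
        server mgr-a 192.0.2.11:7000 check
        server mgr-b 192.0.2.12:7000 check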
I would just like to mirror what Dan van der Ster’s sentiments are.
As someone attempting to move an OSD to bluestore, with limited/no LVM
experience, it is a completely different beast and complexity level compared to
the ceph-disk/filestore days.
ceph-deploy was a very simple tool that did ex
2018-01-09 19:34 GMT+01:00 Tim Bishop :
> Hi,
>
> I've recently upgraded from Jewel to Luminous and I'm therefore new to
> using the Dashboard. I noted this section in the documentation:
>
> http://docs.ceph.com/docs/master/mgr/dashboard/#load-balancer
>
> "Please note that the dashboard w
Hello,
We have a user "testuser" with below permissions :
$ ceph auth get client.testuser
exported keyring for client.testuser
[client.testuser]
key = ==
caps mon = "profile rbd"
caps osd = "profile rbd pool=ecpool, profile rbd pool=cv, profile
rbd-read-only pool=templ
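For reference, caps like these are applied (or re-applied) with ceph auth caps; this sketch reuses only the pool names that are fully visible above, with the truncated read-only pool omitted:

    $ ceph auth caps client.testuser \
          mon 'profile rbd' \
          osd 'profile rbd pool=ecpool, profile rbd pool=cv'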
Hi Brent,
Brent Kennedy wrote to the mailing list:
Unfortunately, I don't see that setting documented anywhere other
than the release notes. It's hard to find guidance for questions in
that case, but luckily you noted it in your blog post. I wish I
knew what setting to put that at. I did
On Tue, Jan 9, 2018 at 1:35 PM, Reed Dier wrote:
> I would just like to mirror what Dan van der Ster’s sentiments are.
>
> As someone attempting to move an OSD to bluestore, with limited/no LVM
> experience, it is a completely different beast and complexity level compared
> to the ceph-disk/filest
Hi ceph-users,
Hoping that this is something small that I am overlooking, but could use the
group mind to help.
Ceph 12.2.2, Ubuntu 16.04 environment.
OSD (0) is an 8TB spinner (/dev/sda) and I am moving from a filestore journal
to a block.db and WAL device on an NVMe partition (/dev/nvme0n1p5
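For context, the migration command being used here is roughly the following (device paths taken from the message above):

    $ ceph-volume lvm create --bluestore \
          --data /dev/sda \
          --block.db /dev/nvme0n1p5
    # adding --osd-id 0 to reuse the old ID is what hit
    # http://tracker.ceph.com/issues/22642 further down this thread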
On Tue, Jan 9, 2018 at 2:19 PM, Reed Dier wrote:
> Hi ceph-users,
>
> Hoping that this is something small that I am overlooking, but could use the
> group mind to help.
>
> Ceph 12.2.2, Ubuntu 16.04 environment.
> OSD (0) is an 8TB spinner (/dev/sda) and I am moving from a filestore
> journal to a
> -2       21.81000     host node24
>  0  hdd   7.26999         osd.0    destroyed        0  1.00000
>  8  hdd   7.26999         osd.8           up  1.00000  1.00000
> 16  hdd   7.26999         osd.16          up  1.00000  1.00000
After removing the --osd-id flag, everything came up normally.
> -2       21.82448     host node24
>  0  hdd   7.28450         osd.0           up  1.00000  1.00000
>  8  hdd   7.26999         osd.8           up  1.00000  1.00000
> 16  hdd   7.26999
On Tue, Jan 9, 2018 at 3:27 PM, Reed Dier wrote:
> After removing the --osd-id flag, everything came up normally.
We just verified this is a bug when using --osd-id and that ID is no
longer available in the cluster. I've created
http://tracker.ceph.com/issues/22642 to get this fixed properly.
>
On Tue, Jan 9, 2018 at 6:34 PM, Tim Bishop wrote:
> Hi,
>
> I've recently upgraded from Jewel to Luminous and I'm therefore new to
> using the Dashboard. I noted this section in the documentation:
>
> http://docs.ceph.com/docs/master/mgr/dashboard/#load-balancer
>
> "Please note that the d
Hi,
While upgrading a server with a CephFS mount tonight, it stalled on installing
a new kernel, because it was waiting for `sync`.
I'm pretty sure it has something to do with the CephFS filesystem which caused
some issues last week. I think the kernel still has a reference to the
probably la
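A way to confirm it really is the kernel CephFS client holding up sync is to look at its in-flight requests via debugfs (requires debugfs mounted at /sys/kernel/debug):

    $ cat /sys/kernel/debug/ceph/*/mdsc   # requests still waiting on the MDS
    $ cat /sys/kernel/debug/ceph/*/osdc   # requests still waiting on OSDs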
Hi All
I have a Ceph host (12.2.2) which has 14 OSDs which seem to go down then
up; what should I look at to try to identify the issue?
The system has three LSI SAS9201-8i cards, which are currently connected
to 14 drives (with the option of 24 drives).
I have three of these chassis but only one is runn
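With that many drives behind LSI HBAs, the kernel log and drive health are worth checking first; a rough sketch (driver names assume the mpt2sas/mpt3sas driver these cards normally use):

    $ dmesg -T | egrep -i 'mpt2sas|mpt3sas|reset|i/o error|timeout'
    $ smartctl -a /dev/sdX        # repeat per suspect drive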
Have you checked your firewall?
From: ceph-users on behalf of Mike O'Connor
Sent: Wednesday, 10 January 2018 3:40:30 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] OSDs going down/up at random
Hi All
I have a ceph host (12.2.2) which has 14 OSDs which
On 10/01/2018 3:52 PM, Linh Vu wrote:
>
> Have you checked your firewall?
>
There are no iptables rules at this time, but connection tracking is
enabled. I would expect errors about running out of table space if that
were an issue.
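For completeness, the conntrack table fill level is easy to check directly, and the kernel logs drops explicitly when it overflows:

    $ sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
    $ dmesg -T | grep -i 'table full'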
Thanks
Mike
Hi Mike,
Could you show the system log from the moment the OSD goes down and comes back up?
On Jan 10, 2018 12:52, "Mike O'Connor" wrote:
> On 10/01/2018 3:52 PM, Linh Vu wrote:
> >
> > Have you checked your firewall?
> >
> There are no ip tables rules at this time but connection tracking is
> enable. I would expect errors
On 10/01/2018 4:24 PM, Sam Huracan wrote:
> Hi Mike,
>
> Could you show system log at moment osd down and up?
OK, so I have no idea how I missed this each time I looked, but the syslog
does show a problem.
I've created the dump file mentioned in the log; it's 29M compressed, so
anyone who wants it, I'l
On Tue, Jan 09, 2018 at 02:14:51PM -0500, Alfredo Deza wrote:
> On Tue, Jan 9, 2018 at 1:35 PM, Reed Dier wrote:
> > I would just like to mirror what Dan van der Ster’s sentiments are.
> >
> > As someone attempting to move an OSD to bluestore, with limited/no LVM
> > experience, it is a completely
As per a previous thread, my PG count is set too high. I tried adjusting
"mon max pg per osd" higher and higher, which did clear the
error (restarting monitors and managers each time), but it seems that data
simply won't move around the cluster. If I stop the primary OSD of an
incomplete pg, the
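For reference, the relevant knobs live in ceph.conf on the mons/mgrs; note that the OSDs also enforce a hard limit derived from the same option, which may be what is stalling the data movement (a sketch; the exact default ratio is from memory):

    [global]
        mon max pg per osd = 400
        # OSDs refuse to accept new PGs above
        # mon_max_pg_per_osd * osd_max_pg_per_osd_hard_ratio (default ratio ~2)
        osd max pg per osd hard ratio = 4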