> On 14 Apr 2016, at 00:09, Gregory Farnum wrote:
>
> On Wed, Apr 13, 2016 at 3:02 PM, Sage Weil wrote:
>> Hi everyone,
>>
>> The third (and likely final) Jewel release candidate is out. We have a
>> very small number of remaining blocker issues and a bit of final polish
>> before we publish
>>> Christian Balzer wrote on Tuesday, 12 April 2016 at
>>> 01:39:
> Hello,
>
Hi,
> I'm officially only allowed to do (preventative) maintenance during weekend
> nights on our main production cluster.
> That would mean 13 ruined weekends at the realistic rate of 1 OSD per
> night, so y
The developer mailing list (ceph-devel) address on the official website is wrong.
Can anyone give me the correct address to subscribe? Thanks!
Hi,
Does anybody know what auth capabilities are required to run commands such
as:
ceph daemon osd.0 perf dump
Even with the client.admin user, I can't get it to work:
$ ceph daemon osd.0 perf dump --name client.admin
--keyring=/etc/ceph/ceph.client.admin.keyring
{}
$ ceph auth get client.admi
On Thu, Apr 14, 2016 at 8:31 AM, Vincenzo Pii
wrote:
>
> On 14 Apr 2016, at 00:09, Gregory Farnum wrote:
>
> On Wed, Apr 13, 2016 at 3:02 PM, Sage Weil wrote:
>
> Hi everyone,
>
> The third (and likely final) Jewel release candidate is out. We have a
> very small number of remaining blocker iss
On Thu, Apr 14, 2016 at 7:32 PM, John Spray wrote:
> On Thu, Apr 14, 2016 at 8:31 AM, Vincenzo Pii
> wrote:
>>
>> On 14 Apr 2016, at 00:09, Gregory Farnum wrote:
>>
>> On Wed, Apr 13, 2016 at 3:02 PM, Sage Weil wrote:
>>
>> Hi everyone,
>>
>> The third (and likely final) Jewel release candidate
On Thu, Apr 14, 2016 at 11:17 AM, Sergio A. de Carvalho Jr.
wrote:
> Hi,
>
> Does anybody know what auth capabilities are required to run commands such
> as:
When you're doing "ceph daemon", no ceph authentication is happening:
this is a local connection to a UNIX socket in /var/run/ceph. So thi
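For illustration, a minimal sketch of the two equivalent ways to query the
admin socket (assuming the default socket path and that you run this on the
host where osd.0 lives):

# must be run locally on the node hosting osd.0; cephx keyrings play no role here
ceph daemon osd.0 perf dump
# same thing, but pointing at the socket file explicitly
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump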
Hello,
I tried to configure ceph logging to a remote syslog host based on
Sébastien Han's blog
(http://www.sebastien-han.fr/blog/2013/01/07/logging-in-ceph/):
ceph.conf
[global]
...
log_file = none
log_to_syslog = true
err_to_syslog = true
[mon]
mon_cluster_log_to_syslog = true
mon_cluster_log
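The ceph.conf options above only hand the log lines to the local syslog
daemon; shipping them to the remote host is then a plain rsyslog job. A
minimal sketch, assuming rsyslog and a hypothetical collector named
loghost.example.com (drop it in /etc/rsyslog.d/ceph.conf and restart
rsyslog):

# forward all local syslog traffic (including Ceph's) to the remote
# collector over UDP; use @@ instead of @ for TCP
*.* @loghost.example.com:514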
Good morning,
We've been running a medium-sized (88 OSDs - all SSD) ceph cluster for the past
20 months. We're very happy with our experience with the platform so far.
Shortly, we will be embarking on an initiative to replace all 88 OSDs with new
drives (Planned maintenance and lifecycle replac
If you have empty drive slots in your OSD hosts, I'd be tempted to
insert the new drive in a slot, set noout, shut down one OSD, unmount the OSD
directory, dd the old drive to the new one, remove the old drive, and restart
the OSD. No rebalancing and minimal data movement when the OSD rejoins.
-K.
On 04/14/2016 04:29 P
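A rough shell sketch of that clone-in-place approach (hypothetical names:
osd.12 living on /dev/sdc, new drive on /dev/sdd; adjust the service command
to whatever init system your release uses):

ceph osd set noout                  # keep the cluster from re-replicating while the OSD is down
systemctl stop ceph-osd@12          # or: service ceph stop osd.12 on pre-systemd installs
umount /var/lib/ceph/osd/ceph-12
dd if=/dev/sdc of=/dev/sdd bs=4M    # block-level copy, old drive -> new drive
# pull the old drive before bringing the copy up, so the duplicated GPT GUIDs never coexist
systemctl start ceph-osd@12
ceph osd unset noout                # allow normal recovery handling again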
ceph-devel is hosted at vger.kernel.org rather than ceph.com. This is
unlike the other mailing lists, but all the addresses related to it on
the site look correct, e.g.
http://vger.kernel.org/vger-lists.html#ceph-devel
-Greg
On Thu, Apr 14, 2016 at 2:56 AM, wrote:
> Official website of the develop
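For reference, vger-hosted lists are subscribed to via majordomo rather than
a web form; a sketch of the usual incantation (assuming the standard vger
majordomo address and a working local mail command):

# send "subscribe ceph-devel" in the message body to the vger list manager
echo "subscribe ceph-devel" | mail -s "subscribe" majordomo@vger.kernel.org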
Hi,
that's how I did it for my OSDs 25 to 30 (you can include as many OSD
numbers as you like, as long as you have free space).
First you can reweight the OSDs to 0 to move their copies to other
OSDs:
for i in {25..30};
do
ceph osd crush reweight osd.$i 0
done
and then you have to wait until it's done (when c
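Once the cluster is healthy again and those OSDs hold no data, the usual
continuation is to take them out and remove them; a sketch with the same OSD
ids as above (the service command depends on your init system and must be
run on the host that carries the OSD):

for i in {25..30}; do
  ceph osd out osd.$i            # mark it out (a no-op data-wise after the reweight to 0)
  systemctl stop ceph-osd@$i     # stop the daemon on its host
  ceph osd crush remove osd.$i   # drop it from the CRUSH map
  ceph auth del osd.$i           # delete its cephx key
  ceph osd rm osd.$i             # remove it from the OSD map
done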
> On 14 April 2016 at 15:29, Stephen Mercier
> wrote:
>
>
> Good morning,
>
> We've been running a medium-sized (88 OSDs - all SSD) ceph cluster for the
> past 20 months. We're very happy with our experience with the platform so far.
>
> Shortly, we will be embarking on an initiative to repl
Sadly, this is not an option. Not only are there no free slots on the hosts,
but we're downgrading in size for each OSD because we decided to sacrifice
space to make a significant jump in drive quality.
We're not really too concerned about the rebalancing, as we monitor the cluster
closely and
> On 14 April 2016 at 14:46, Steffen Weißgerber wrote:
>
>
> Hello,
>
> I tried to configure ceph logging to a remote syslog host based on
> Sébastien Han's blog
> (http://www.sebastien-han.fr/blog/2013/01/07/logging-in-ceph/):
>
> ceph.conf
>
> [global]
> ...
> log_file = none
> log_to_sys
Thank you for the advice.
Our CRUSH map is actually set up with replication set to 3, and at least one
copy in each cabinet, ensuring no one host is a single point of failure. We
fully intended to perform this maintenance over the course of many weeks, one
host at a time. We felt that the stag
Hello,
I upgraded from 10.1.0 to 10.1.2 with ceph-deploy and my cluster is down
now. I am getting the errors below:
ceph -s
2016-04-14 17:04:58.909894 7f14686e4700 0 -- :/2590574876 >>
10.10.200.4:6789/0 pipe(0x7f146405adf0 sd=3 :0 s=1 pgs=0 cs=0 l=1
c=0x7f146405c0b0).fault
2016-04-14 17:05:01.909949 7f14
On 2016-04-14 16:05, Lomayani S. Laizer wrote:
Hello,
I upgraded from 10.1.0 to 10.1.2 with ceph-deploy and my cluster is
down now. I am getting the errors below:
ceph -s
2016-04-14 17:04:58.909894 7f14686e4700 0 -- :/2590574876 >>
10.10.200.4:6789/0 pipe(0x7f146405adf0 sd=3 :0 s=1 pgs=0 cs=0 l=1
On Thu, Apr 14, 2016 at 6:32 AM, John Spray wrote:
> On Thu, Apr 14, 2016 at 8:31 AM, Vincenzo Pii
> wrote:
>>
>> On 14 Apr 2016, at 00:09, Gregory Farnum wrote:
>>
>> On Wed, Apr 13, 2016 at 3:02 PM, Sage Weil wrote:
>>
>> Hi everyone,
>>
>> The third (and likely final) Jewel release candidate
Hello,
[reduced to ceph-users]
On Thu, 14 Apr 2016 11:43:07 +0200 Steffen Weißgerber wrote:
>
>
> >>> Christian Balzer wrote on Tuesday, 12 April 2016
> >>> at 01:39:
>
> > Hello,
> >
>
> Hi,
>
> > I'm officially only allowed to do (preventative) maintenance during
> > weekend nights
On Thu, Apr 14, 2016 at 7:05 AM, Lomayani S. Laizer wrote:
> Hello,
> I upgraded from 10.1.0 to 10.1.2 with ceph-deploy and my cluster is down
> now. I am getting the errors below:
>
> ceph -s
>
> 2016-04-14 17:04:58.909894 7f14686e4700 0 -- :/2590574876 >>
> 10.10.200.4:6789/0 pipe(0x7f146405adf0 sd=3 :0
Hi!
A fresh install of 10.1.2 on CentOS 7.2.1511 fails when adding OSDs:
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v
prepare --cluster ceph --fs-type xfs -- /dev/sdm /dev/sdi
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
The reason seems to be a failing partprobe c
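A quick way to confirm it is partprobe itself (and not ceph-deploy) that
falls over is to rerun the failing step by hand on the OSD node; a sketch
using the same devices as in the error above:

ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdm /dev/sdi
# when it stops at the partition-table re-read, poke the kernel directly:
partprobe /dev/sdm
partx -u /dev/sdm    # alternative re-read that often succeeds where partprobe fails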
Hi,
On 14.04.2016 at 03:32, Christian Balzer wrote:
> On Wed, 13 Apr 2016 14:51:58 +0200 Michael Metz-Martini | SpeedPartner GmbH
> wrote:
>> On 13.04.2016 at 04:29, Christian Balzer wrote:
>>> On Tue, 12 Apr 2016 09:00:19 +0200 Michael Metz-Martini | SpeedPartner GmbH
>>> wrote:
On 11.04
Hello Gregory,
Thanks for your reply. I think I am hitting the same bug. Below is the link
to the log from just after the upgrade:
https://justpaste.it/ta16
--
Lomayani
On Thu, Apr 14, 2016 at 6:24 PM, Gregory Farnum wrote:
> On Thu, Apr 14, 2016 at 7:05 AM, Lomayani S. Laizer
> wrote:
> > Hello,
> > I u
Yep! This is fixed in the jewel and master branches now, but we're
going to wait until the next rc (or final release!) to push official
packages for it.
In the meantime, you can install those from our gitbuilders following
the instructions at
http://docs.ceph.com/docs/master/install/get-packages/#
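A sketch of what that typically boiled down to on a Debian/Ubuntu node; the
gitbuilder URL pattern below is an assumption based on the docs of that era,
so follow the linked page for the authoritative form:

# hypothetical repo line for a development branch build
echo deb http://gitbuilder.ceph.com/ceph-deb-$(lsb_release -sc)-x86_64-basic/ref/jewel $(lsb_release -sc) main \
  | sudo tee /etc/apt/sources.list.d/ceph-dev.list
sudo apt-get update && sudo apt-get install ceph
ceph --version    # confirm the node now runs the fixed build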
It doesn't seem like it would be wise to run such systems on top of rbd.
-Sam
On Thu, Apr 14, 2016 at 11:05 AM, Jianjian Huo wrote:
> On Wed, Apr 13, 2016 at 6:06 AM, Sage Weil wrote:
>> On Tue, 12 Apr 2016, Jan Schermer wrote:
>>> Who needs to have exactly the same data in two separate objects
Hi Michael,
The partprobe issue was resolved for me by updating parted to the
package from Fedora 22: parted-3.2-16.fc22.x86_64. It shouldn't
require any other dependency updates to install on EL7 variants.
http://tracker.ceph.com/issues/15176
regards,
Ben
On Thu, Apr 14, 2016 at 12:35 PM,
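A minimal sketch of applying that workaround on an EL7 box, assuming you
have already fetched parted-3.2-16.fc22.x86_64.rpm into the current
directory (the download location is deliberately left out here):

yum localinstall ./parted-3.2-16.fc22.x86_64.rpm   # replaces the stock EL7 parted
parted --version                                   # should now report parted 3.2
rpm -q parted                                      # confirm the fc22 build is installed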
Hello,
I upgraded the cluster but am still seeing the same issue. Is the cluster not
recoverable?
ceph --version
ceph version 10.1.2-64-ge657ecf (e657ecf8e437047b827aa89fb9c10be82643300c)
root@mon-b:~# ceph -w
2016-04-14 22:17:56.766169 7f5da3fff700 0 -- 10.10.200.3:0/1828342317 >>
10.10.200.3:6789/0
Hi Ben!
Thanks for the information - I will try that (although I am not happy to
leave the CentOS / Red Hat path)...
Regards
Michael
On 2016-04-14 20:44, Benjeman Meekhof wrote:
> Hi Michael,
>
> The partprobe issue was resolved for me by updating parted to the
> package from Fedora 22: parted-
On Thu, Apr 14, 2016 at 12:19 PM, Lomayani S. Laizer
wrote:
> Hello,
> I upgraded the cluster but am still seeing the same issue. Is the cluster not
> recoverable?
>
> ceph --version
> ceph version 10.1.2-64-ge657ecf (e657ecf8e437047b827aa89fb9c10be82643300c)
>
> root@mon-b:~# ceph -w
> 2016-04-14 22:1
On Thu, 14 Apr 2016 19:39:01 +0200 Michael Metz-Martini | SpeedPartner
GmbH wrote:
> Hi,
>
> Am 14.04.2016 um 03:32 schrieb Christian Balzer:
[massive snip]
Thanks for that tree/du output, it matches what I expected.
You'd think XFS wouldn't be that intimidated by directories of that size.
>
>
Hi,
On 15.04.2016 at 03:07, Christian Balzer wrote:
>> We thought this was a good idea so that we can change the replication
>> size differently for doc_root and raw-data if we like. Seems this was a
>> bad idea for all objects.
> I'm not sure how you managed to get into that state or if it's a bug
Hello,
On Fri, 15 Apr 2016 07:02:13 +0200 Michael Metz-Martini | SpeedPartner
GmbH wrote:
> Hi,
>
> On 15.04.2016 at 03:07, Christian Balzer wrote:
> >> We thought this was a good idea so that we can change the replication
> >> size differently for doc_root and raw-data if we like. Seems this wa
Hi,
On 15.04.2016 at 07:43, Christian Balzer wrote:
> On Fri, 15 Apr 2016 07:02:13 +0200 Michael Metz-Martini | SpeedPartner
> GmbH wrote:
>> On 15.04.2016 at 03:07, Christian Balzer wrote:
We thought this was a good idea so that we can change the replication
size differently for doc_roo
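For context, the per-directory split discussed here is usually wired up with
CephFS file layouts; a sketch with hypothetical pool and mount names (the pg
count and size are placeholders, not recommendations):

ceph osd pool create raw-data 128      # extra data pool for the bulk objects
ceph mds add_data_pool raw-data        # let CephFS use it (newer releases: ceph fs add_data_pool <fsname> raw-data)
ceph osd pool set raw-data size 2      # different replication than the default data pool
setfattr -n ceph.dir.layout.pool -v raw-data /mnt/cephfs/raw-data   # new files under this dir land in raw-data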