Hi Team,
We are installing a new Ceph setup (version Jewel) and while activating the OSD it
throws the error: RuntimeError: Failed to execute command: /usr/sbin/ceph-disk
-v activate --mark-init systemd --mount /home/data/osd1. We tried reinstalling
the OSD machine and still get the same error.
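A rough way to dig further (a sketch only; the path is the one from the error
above, and the ownership fix assumes the Jewel change to running OSDs as the
"ceph" user is the cause, which is not confirmed here):

# re-run the failing step by hand to capture the full verbose output
/usr/sbin/ceph-disk -v activate --mark-init systemd --mount /home/data/osd1
# Jewel runs ceph-osd as the "ceph" user, so the data directory must be owned by it
chown -R ceph:ceph /home/data/osd1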
Hi,
We tried to use a host name instead of the IP address, but the mounted partition
shows the IP address only. How can we show the host name instead of the IP address?
On Wed, 01 Mar 2017 07:43:17 +0530 superdebu...@gmail.com wrote
We did try using DNS to hide the IPs and achieve a kind of HA, but it failed.
Hi Robert,
As per my understanding, whichever partition it has will be replicated from the
base machine to the Docker container. My only concern is how to show the DNS
name instead of the IP.
Regards
Prabu
On Tue, 28 Feb 2017 13:44:30 +0530 r.san...@heinlein-support.de wrote
On 28.02.2017 07
Wido den Hollander wrote:
> On 27 February 2017 at 15:59, Jan Kasprzak wrote:
> > Here are some statistics from our biggest instance of the object storage:
> >
> > objects stored: 100_000_000
> > < 1024 bytes:   10_000_000
> > 1k-64k bytes:   80_000_
Hello,
I have a Ceph cluster (10.2.5-1trusty) and I use it in several ways:
- Block
- Object
- CephFS
root@ih-par1-cld1-ceph-01:~# cat /etc/ceph/ceph.conf
[]
mon_host = 10.4.0.1, 10.4.0.3, 10.4.0.5
[]
public_network = 10.4.0.0/24
cluster_network = 192.168.33.0/24
[]
I ha
On 01.03.2017 10:54, gjprabu wrote:
> Hi,
>
> We tried to use a host name instead of the IP address, but the mounted
> partition shows the IP address only. How can we show the host name instead of the IP address?
What is the security gain you are trying to achieve by hiding the IPs?
Regards
--
Robert Sander
Heinlein Support
On Wed, Mar 1, 2017 at 10:15 AM, Jimmy Goffaux wrote:
>
> Hello,
>
> I have a Ceph cluster (10.2.5-1trusty) and I use it in several ways:
>
> - Block
> - Object
> - CephFS
>
> root@ih-par1-cld1-ceph-01:~# cat /etc/ceph/ceph.conf
> []
> mon_host = 10.4.0.1, 10.4.0.3, 10.4.0.5
>
Hi Robert,
This container host will be provided to end users and we don't want to expose
its IP to them.
Regards
Prabu GJ
On Wed, 01 Mar 2017 16:03:49 +0530 Robert Sander
wrote
On 01.03.2017 10:54, gjprabu wrote:
> Hi,
>
> We try
In my case the version will be identical. But I might have to take this node-by-node
approach if I can't stabilize the more general shutdown/bring-up approach.
There are 192 OSDs in my cluster, so going node by node will unfortunately take
a while.
-Chris
> On Mar 1, 2017, at 2:50 AM, Steffen
On 02/28/17 18:55, Heller, Chris wrote:
> Quick update. So I'm trying out the procedure as documented here.
>
> So far I've:
>
> 1. Stopped ceph-mds
> 2. set noout, norecover, norebalance, nobackfill
> 3. Stopped all ceph-osd
> 4. Stopped ceph-mon
> 5. Installed new OS
> 6. Started ceph-mon
> 7. St
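(For reference, step 2 above maps to commands along these lines; a sketch only,
and the flags need to be unset again once everything is back up:)

ceph osd set noout
ceph osd set norecover
ceph osd set norebalance
ceph osd set nobackfill
# ...and later, once the cluster is healthy again:
ceph osd unset noout
ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset nobackfill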
Hi All,
Has anybody faced a similar issue, and is there any solution for this?
Regards
Prabu GJ
On Wed, 01 Mar 2017 14:21:14 +0530 gjprabu
wrote
Hi Team,
We are installing a new Ceph setup (version Jewel) and while activating the OSD it
That is a good question, and I'm not sure how to answer. The journal is on its
own volume, and is not a symlink. Also how does one flush the journal? That
seems like an important step when bringing down a cluster safely.
-Chris
> On Mar 1, 2017, at 8:37 AM, Peter Maloney
> wrote:
>
> On 02/2
Hi,
Are you sure ceph-disk is installed on the target machine?
Regards, I
On Wed, 1 Mar 2017 14:38, gjprabu wrote:
> Hi All,
>
> Has anybody faced a similar issue, and is there any solution for this?
>
> Regards
> Prabu GJ
>
>
> On Wed, 01 Mar 2017 14:21:14 +0530 gjprabu wrote -
Hi Iban,
Sure, it is there. ceph-disk prepare was working properly and activate is
throwing the error.
root@cephnode1:~# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/vda2 ext4 7.6G 2.2G 5.4G 29% /
devtmpfs d
On 03/01/17 14:41, Heller, Chris wrote:
> That is a good question, and I'm not sure how to answer. The journal
> is on its own volume, and is not a symlink. Also how does one flush
> the journal? That seems like an important step when bringing down a
> cluster safely.
>
You only need to flush the j
I see. My journal is specified in ceph.conf. I'm not removing it from the OSD,
so it sounds like flushing isn't needed in my case.
-Chris
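(If the journal ever does need to move, the flush is done with the OSD daemon
stopped, roughly like this; a sketch only, and 12 is a placeholder osd id:)

ceph-osd -i 12 --flush-journal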
> On Mar 1, 2017, at 9:31 AM, Peter Maloney
> wrote:
>
> On 03/01/17 14:41, Heller, Chris wrote:
>> That is a good question, and I'm not sure how to answer. The
On 03/01/17 15:36, Heller, Chris wrote:
> I see. My journal is specified in ceph.conf. I'm not removing it from
> the OSD, so it sounds like flushing isn't needed in my case.
>
Okay, but it seems something isn't right if it's saying it's a non-block
journal (meaning a file, not a block device).
Double check
Well, I think the argument here is not really about security gain; it is just
NOT user friendly to let "df" show the 7 IPs of the monitors. Much
better if they see something like "mycephfs.mydomain.com".
And using DNS gives you the flexibility of changing your monitor quorum
members, without not
> On 1 March 2017 at 15:40, Xiaoxi Chen wrote:
>
> Well, I think the argument here is not really about security gain; it is just
> NOT user friendly to let "df" show the 7 IPs of the monitors. Much
> better if they see something like "mycephfs.mydomain.com".
>
mount / df simply prints th
On Wed, 1 Mar 2017, Wido den Hollander wrote:
> > On 1 March 2017 at 15:40, Xiaoxi Chen wrote:
> >
> > Well, I think the argument here is not really about security gain; it is just
> > NOT user friendly to let "df" show the 7 IPs of the monitors. Much
> > better if they see something like
> On 1 March 2017 at 16:57, Sage Weil wrote:
>
> On Wed, 1 Mar 2017, Wido den Hollander wrote:
> > > On 1 March 2017 at 15:40, Xiaoxi Chen wrote:
> > >
> > > Well, I think the argument here is not really about security gain; it is just
> > > NOT user friendly to let "df" show
Hi,
I have 5 data nodes (bluestore, kraken), each with 24 OSDs.
I enabled the optimal crush tunables.
I'd like to try to "really" use EC pools, but until now I've faced cluster
lockups when I was using 3+2 EC pools with a host failure domain.
When a host was down for instance ;)
Since I'd like t
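(A likely factor behind those lockups: with k=3, m=2 and a host failure domain,
every PG needs a chunk on each of the 5 hosts, so when one host is down there is
nowhere left to backfill the missing chunks and the PGs stay degraded until the
host returns. A sketch of such a profile, assuming the pre-Luminous option name
ruleset-failure-domain; profile name, pool name and PG counts are placeholders:)

ceph osd erasure-code-profile set ec32 k=3 m=2 ruleset-failure-domain=host
ceph osd pool create ecpool 256 256 erasure ec32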
Dear all,
I use the rbd-nbd connector.
Is there a way to reclaim free space from an rbd image using this component
or not?
Thanks,
Max
Hi all-
We use Amazon S3 quite a bit at $WORK but are evaluating Ceph+radosgw as an
alternative for some things. We have an "S3 smoke test" written using the AWS
Java SDK that we use to validate a number of operations. On my Kraken cluster,
multi-part uploads work fine for s3cmd. Our smoke test
I had similar issues when I created all the rbd-related pools with
erasure-coding instead of replication. -Roger
On Wed, Mar 1, 2017 at 11:47 AM John Nielsen wrote:
> Hi all-
>
> We use Amazon S3 quite a bit at $WORK but are evaluating Ceph+radosgw as
> an alternative for some things. We have a
You should be able to issue an fstrim against the filesystem on top of
the nbd device or run blkdiscard against the raw device if you don't
have a filesystem.
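Roughly (the mount point and device name are placeholders):

# filesystem on top of the nbd device, mounted somewhere:
fstrim -v /mnt/rbd
# no filesystem, so trim the raw device instead:
blkdiscard /dev/nbd0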
On Wed, Mar 1, 2017 at 1:26 PM, Massimiliano Cuttini wrote:
> Dear all,
>
> I use the rbd-nbd connector.
> Is there a way to reclaim free
This sounds like this bug:
http://tracker.ceph.com/issues/17076
Will be fixed in 10.2.6. It's triggered by aws4 auth, so a workaround
would be to use aws2 instead.
Yehuda
On Wed, Mar 1, 2017 at 10:46 AM, John Nielsen wrote:
> Hi all-
>
> We use Amazon S3 quite a bit at $WORK but are evaluating
Thanks! Changing to V2 auth does indeed work around the issue with the newer
SDK.
> On Mar 1, 2017, at 12:33 PM, Yehuda Sadeh-Weinraub wrote:
>
> This sounds like this bug:
> http://tracker.ceph.com/issues/17076
>
> Will be fixed in 10.2.6. It's triggered by aws4 auth, so a workaround
> would
> Still applies. Just create a Round Robin DNS record. The clients will
> obtain a new monmap while they are connected to the cluster.
It works to some extent, but it causes issues for "mount -a". We have such a
deployment nowadays: a GTM (a kind of DNS) record created with all the MDS IPs, and
it works fine in t
> mount / df simply prints the monmap. It doesn't print what you added when
you mounted the filesystem.
>
> Totally normal behavior.
Not true again,
df only shows what IP or IPs you added when mounting. Also, mount
10.189.11.138:6789:/sharefs_prod/8c285b3b59a843b6aab623314288ee36 2.8P
108T 2.7
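(For reference, the Round Robin record suggested above is just multiple A
records under one name; a sketch with placeholder name, TTL and addresses:)

cephmon.example.com.  300  IN  A  10.4.0.1
cephmon.example.com.  300  IN  A  10.4.0.3
cephmon.example.com.  300  IN  A  10.4.0.5

mount -t ceph cephmon.example.com:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

The clients resolve the name at mount time and then learn the actual monitor
addresses from the monmap, as noted above.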
On Thu, 2 Mar 2017, Xiaoxi Chen wrote:
> > Still applies. Just create a Round Robin DNS record. The clients will
> > obtain a new monmap while they are connected to the cluster.
> It works to some extent, but it causes issues for "mount -a". We have such a
> deployment nowadays: a GTM (a kind of DNS) record c
Thank you for your response. :)
The version was Jewel, 10.2.2. And yes, I did restart the monitors, with no
change in results.
For the record, here's the problem. It was a multi-pool cluster, and
the CRUSH rules had an inappropriately large number for the step
chooseleaf line. I won't get i
Hi! Maybe you should check your network. Is the network between your mons to
each other OK? Then, do you only run Ceph on the mon host? Is there any other
program running and wasting the CPU of your host? If that is all OK, you can try
extending the expiry time for each mon's lease timeout. This may re
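(The lease timing mentioned here is a mon option in ceph.conf; a sketch only,
the value shown is an arbitrary example and the real default is lower:)

[mon]
mon lease = 10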
Hi,
I am using Ceph Jewel, version 10.2.2, and trying to map an RBD image
which has stripe parameters set, to test performance. However, when I try to
mount it, I get "rbd: map failed: (22) Invalid argument". Please
confirm if we cannot use the stripe parameters with this version as well
and need to go
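(When a map fails with EINVAL like that, the kernel log usually states the
specific reason, and rbd info shows whether non-default striping is set; a
sketch, with rbd/testimage as a placeholder image spec:)

rbd info rbd/testimage     # check stripe unit / stripe count and the feature list
rbd map rbd/testimage
dmesg | tail               # krbd normally logs why the map was rejected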
On Wed, Mar 1, 2017 at 8:23 PM, Daleep Singh Bais wrote:
> Hi,
>
> I am using Ceph Jewel, version 10.2.2, and trying to map an RBD image
> which has stripe parameters set, to test performance. However, when I try to
> mount it, I get "rbd: map failed: (22) Invalid argument". Please
> confirm if we canno
Hello everyone,
The Hammer 0.94.10 update was announced on the blog a week ago. However, there
are no packages available for either version of Red Hat. Can someone tell
me what is going on?
Hello, I'm chasing down a situation where there are periodic slow requests
occurring. While the specific version in this case is 0.80.7 Firefly, I
think this log format is the same in newer versions. I can verify.
There's a host of symptoms going on, but one strange anomaly I found that I
wasn't
On Wed, 1 Mar 2017, Stephen Blinick wrote:
> Hello, I'm chasing down a situation where there are periodic slow requests
> occurring. While the specific version in this case is 0.80.7 Firefly, I
> think this log format is the same in newer versions. I can verify.
> There's a host of symptoms going
On Wed, Mar 1, 2017 at 11:34 PM, Sage Weil wrote:
> On Wed, 1 Mar 2017, Stephen Blinick wrote:
> > Hello, I'm chasing down a situation where there are periodic slow requests
> > occurring. While the specific version in this case is 0.80.7 Firefly, I
> > think this log format is the same in newer
Thanks Greg for the info.
As per our testing, we fixed this warning problem by disabling the ceph-mgr
service on all the Ceph nodes. If the warning still persists, we go to the
last Ceph node of the cluster and try starting and stopping the ceph-mgr
service, as this operation solved the issue.
Do you have