On 22 May 2015 at 00:10, gjprabu wrote:
> Hi All,
>
> We are using RBD and have mapped the same RBD image to an RBD device on
> two different clients, but I can't see the data until I umount and mount -a the
> partition again. Kindly share a solution for this issue.
>
What's the image used for? If it's a f
Can you paste dmesg and system logs? I am using a 3-node OCFS2 setup on RBD
and have had no problems.
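For anyone following the thread: sharing one RBD image between several clients
generally requires a cluster-aware filesystem such as OCFS2; an ordinary local
filesystem mounted on two nodes will not see each other's writes without a
remount. A rough sketch of the OCFS2-on-RBD setup (image name, mount point and
node count are assumptions, and a working o2cb cluster configuration is presumed
to already be in place):

# on each client node
rbd map rbd/shared-image          # appears as e.g. /dev/rbd0

# once, from a single node
mkfs.ocfs2 -N 3 -L shared /dev/rbd0

# on each client node
mount -t ocfs2 /dev/rbd0 /mnt/shared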
On 15-10-23 08:40, gjprabu wrote:
Hi Frederic,
Can you suggest a solution? We have been spending a lot of time trying to
solve this issue.
Regards
Prabu
On Thu, 15 Oct 2015 17:14:13 +0530, Tyler
Hi Henrik,
Thanks for your reply. We are still facing the same issue. We found the dmesg logs
below, but they are expected, because we took node1 down and brought it back up
ourselves; that is what shows up in the logs, and apart from that we did not find
any error messages. We also have a problem while unmounting: the umount process goes
On Thu, Oct 22, 2015 at 10:59 PM, Allen Liao wrote:
> Does ceph guarantee image consistency if an rbd image is unmapped on one
> machine then immediately mapped on another machine? If so, does the same
> apply to issuing a snapshot command on machine B as soon as the unmap
> command finishes on m
After upgrading to a git build plus 802cf861352d3c77800488d812009cbbc7184c73.patch, I got
a repeated-restart problem on one host: 2 of 3 OSDs hit "osd/ReplicatedPG.cc: 387: FAILED
assert(needs_recovery)" after a repair was started. After a series of restarts, with the
repair percentage progressing, the cluster reached HEALTH_OK. Is there anything here I should report mo
Hello,
We have two separate networks in our Ceph cluster design:
10.197.5.0/24 - The "front end" network, a "skinny pipe", all 1GbE, intended to
be a management or control-plane network
10.174.1.0/24 - The "back end" network, a "fat pipe"; all OSD nodes use 2x bonded
10GbE, intended to be the
On 23-10-15 14:58, Jon Heese wrote:
> Hello,
>
> We have two separate networks in our Ceph cluster design:
>
> 10.197.5.0/24 - The "front end" network, "skinny pipe", all 1Gbe,
> intended to be a management or control plane network
>
> 10.174.1.0/24 - The "back end" network, "fat
The "public" network is where all storage access from other systems or
clients occurs. When you map RBDs to other hosts, access object storage
through the RGW, or use CephFS, you access the data through the
"public" network. The "cluster" network is where all internal replication
> After reading and understanding your mail, I moved on to do some experiments
> regarding deep flatten. Some questions showed up;
> here is my experiment:
> ceph version I used (ceph -v output):
> ceph version 9.1.0-299-g89b2b9b
> 1. create a separate pool for test:
> rados mkpool pool100
> 2. cr
Bill,
Thanks for the explanation – that helps a lot. In that case, I actually want
the 10.174.1.0/24 network to be both my cluster and my public network, because
I want all “heavy” data traffic to be on that network. And by “heavy”, I mean
large volumes of data, both normal Ceph client traffi
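For what Jon is describing (all heavy traffic on the 10GbE subnet), the ceph.conf
side of it would look roughly like this (a sketch; only the subnet is taken from
the thread, everything else is illustrative):

[global]
public network  = 10.174.1.0/24   # client traffic: RBD, RGW, CephFS
cluster network = 10.174.1.0/24   # OSD replication and recovery

Equivalently, one can define only the public network and omit the cluster
network, in which case replication traffic uses the public network as well.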
Hi,
On 10/14/2015 06:32 AM, Gregory Farnum wrote:
On Mon, Oct 12, 2015 at 12:50 AM, Burkhard Linke
wrote:
*snipsnap*
Thanks, that did the trick. I was able to locate the host blocking the file
handles and remove the objects from the EC pool.
Well, all except one:
# ceph df
...
ec_
Hello.
Some strange things have been happening with my Ceph installation since I moved the
journal to an SSD disk.
OS: Ubuntu 15.04 with ceph version 0.94.2-0ubuntu0.15.04.1
Server: Dell R510 with PERC H700 Integrated 512MB RAID cache
My cluster has:
1 monitor node
2 OSD nodes with 6 OSD daemons on each server
The drive you have is not suitable at all for a journal. Horrible, actually.
"test with fio (qd=32,128,256, bs=4k) show very good performance of SSD disk
(10-30k write io)."
This is not realistic. Try:
fio --sync=1 --fsync=1 --direct=1 --iodepth=1 --ioengine=aio
Jan
On 23 Oct 2015, at 16:3
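For reference, a fuller form of the test Jan is suggesting, measuring synchronous
4k write latency at queue depth 1 the way an OSD journal uses the device, might
look something like this (job name, target path and runtime are assumptions;
pointing it at a raw device is destructive, so use a spare partition or test file):

fio --name=journal-test --filename=/dev/sdX --rw=write --bs=4k \
    --sync=1 --fsync=1 --direct=1 --iodepth=1 --ioengine=libaio \
    --runtime=60 --time_based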
I'm currently working on deploying a new VM cluster using KVM + RBD. I've
noticed through the list that the latest "Hammer" (0.94.4) release can
cause issues with librbd and caching.
We've worked around this issue in our existing clusters by only upgrading
the OSD & MON hosts, while leaving the h
Nevermind. I'm apparently blind. Ignore me.
Thank You,
Logan Barfield
Tranquil Hosting
On Fri, Oct 23, 2015 at 11:08 AM, Logan Barfield
wrote:
> I'm currently working on deploying a new VM cluster using KVM + RBD. I've
> noticed through the list that the latest "Hammer" (0.94.4) release ca
Hi
I have been looking for information about "osd pool default size" and the
reason it is 3 by default.
I see it was changed from 2 to 3 in v0.82.
Here it is 2:
http://docs.ceph.com/docs/v0.81/rados/configuration/pool-pg-config-ref/
and in v0.82 it is 3:
http://docs.ceph.com/docs/v0.82/rados/configuratio
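For context, this setting is the replica count applied to newly created pools; it
can be set in ceph.conf or changed per pool afterwards. A minimal sketch, using the
default "rbd" pool as the example:

[global]
osd pool default size = 3        # replicas for newly created pools
osd pool default min size = 2    # minimum replicas required to accept I/O

ceph osd pool get rbd size       # check an existing pool
ceph osd pool set rbd size 3     # change an existing pool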
I understand that my SSD is not suitable for the journal. I want to test ceph using
existing components before buying a more expensive SSD (such as an Intel DC S3700).
I ran fio with these options:
[global]
ioengine=libaio
invalidate=1
ramp_time=5
iodepth=1
runtime=300
time_based
direct=1
bs=4k
size=1m
file
Hi,
When upgrading to the next release, is it necessary to first upgrade to
the most recent point release of the prior release or can one upgrade
from the initial release of the named version? The release notes don't
appear to indicate it is necessary
(http://docs.ceph.com/docs/master/release-not
Trying to figure out if ceph supports inotify, or some form of notification, I
see this issue from 4 years ago:
http://tracker.ceph.com/issues/1296
And the corresponding discussion thread
http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/3355
Basically no information there, and what i
It depends on the releases. To go from Hammer to Infernalis you do, for
example, but I don't think there is any such requirement for Firefly to
Hammer. It is always a good idea to go through the latest point
release just to be safe.
Robert
Yes, that's correct.
We use the public/cluster networks exclusively, so in the configuration we
specify the MON addresses on the public network and define both the public and
cluster network subnets. I've not tested it, but I wonder if it's possible to
have the MON addresses on a 1GbE network and then def
On Fri, Oct 23, 2015 at 7:08 AM, Burkhard Linke
wrote:
> Hi,
>
> On 10/14/2015 06:32 AM, Gregory Farnum wrote:
>>
>> On Mon, Oct 12, 2015 at 12:50 AM, Burkhard Linke
>> wrote:
>>>
>>>
> *snipsnap*
>>>
>>> Thanks, that did the trick. I was able to locate the host blocking the
>>> file
>>> handles
On Fri, Oct 23, 2015 at 8:17 AM, Stefan Eriksson wrote:
> Hi
>
> I have been looking for info about "osd pool default size" and the reason
> its 3 as default.
>
> I see it got changed in v0.82 from 2 to 3,
>
> Here its 2.
> http://docs.ceph.com/docs/v0.81/rados/configuration/pool-pg-config-ref/
>
On Fri, Oct 23, 2015 at 10:14 AM, Edward Ned Harvey (ceph)
wrote:
> Trying to figure out if ceph supports inotify, or some form of notification,
> I see this issue from 4 years ago:
>
> http://tracker.ceph.com/issues/1296
>
>
>
> And the corresponding discussion thread
>
> http://comments.gmane.or
I am trying to add a filestore OSD node to my cluster and got this
during ceph-deploy activate.
The message still appears when "ceph-disk activate" is run as root. Is
this functionality broken in 9.1.0, or is something misconfigured on
my box? /var/lib/ceph is already chown'ed to ceph:ceph.
[WARNING
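For anyone hitting the same warning, the steps being described amount to roughly
the following (the device path is an assumption):

chown -R ceph:ceph /var/lib/ceph     # ownership needed since daemons run as the ceph user in 9.x
ceph-disk activate /dev/sdb1         # run as root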
On 23.10.2015 at 20:53, Gregory Farnum wrote:
> On Fri, Oct 23, 2015 at 8:17 AM, Stefan Eriksson wrote:
>
> Nothing changed to make two copies less secure. 3 copies is just so
> much more secure and is the number that all the companies providing
> support recommend, so we changed the default.
> (
Hi, I'm wondering, when using a cache pool tier, whether there's an upper bound
on how long something written to the cache can sit before it is flushed back to the
backing pool. Something like a cache_max_flush_age setting? Basically I'm
wondering, if I have the unfortunate case of all of the SSD replicas for
a cache pool
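For what it's worth, the flush/evict controls I'm aware of on a cache pool are
ratio thresholds and minimum ages rather than a maximum age; they are set per
pool, roughly like this (pool name and values are assumptions):

ceph osd pool set hot-pool cache_target_dirty_ratio 0.4   # start flushing dirty objects at 40% of target size
ceph osd pool set hot-pool cache_target_full_ratio 0.8    # start evicting clean objects at 80% of target size
ceph osd pool set hot-pool cache_min_flush_age 600        # seconds a dirty object must age before flushing
ceph osd pool set hot-pool cache_min_evict_age 1800       # seconds a clean object must age before eviction
ceph osd pool set hot-pool target_max_bytes 1099511627776 # the target size the ratios apply to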
@John-Paul Robinson:
I've also experienced NFS blocking when serving RBD devices (XFS filesystem).
In my scenario I had an RBD device mapped on an OSD host and exported over NFS (lab
scenario). Log entries below. Running CentOS 7 with
3.10.0-229.14.1.el7.x86_64. The next step for me is to compile 3.1
I am trying to pass deep-flatten during clone creation and got this:
rbd clone --image-feature deep-flatten d0@s0 d1
rbd: image format can only be set when creating or importing an image
On Fri, Oct 23, 2015 at 6:27 AM, Jason Dillaman wrote:
>> After reading and understanding your mail, i moved
Looks like it is a bug:
Features are parsed and set here:
https://github.com/ceph/ceph/blob/master/src/rbd.cc#L3235
format_specified is forced to true here:
https://github.com/ceph/ceph/blob/master/src/rbd.cc#L3268
Error is produced here:
https://github.com/ceph/ceph/blob/master/src/rbd.cc#L3449