Hi, Gregory.
On the other hand, I checked, and the fix 63e44e32974c9bae17bb1bfd4261dcb024ad845c
should be the one that we need. However, I noticed that this fix has only been
backported down to v11.0.0. Can we simply apply it to our Hammer
version (0.94.5)?
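For reference, a minimal sketch of what a local backport could look like, assuming you build Hammer from the upstream git tree (the branch name and build steps are illustrative only):

$ git clone https://github.com/ceph/ceph.git && cd ceph
$ git checkout -b hammer-backport v0.94.5
$ git cherry-pick -x 63e44e32974c9bae17bb1bfd4261dcb024ad845c
# resolve any conflicts by hand, then rebuild and test before rolling
# patched packages out to the cluster

Whether the patch applies cleanly to 0.94.5 depends on how much the surrounding code has changed since Hammer, so it would still need review by someone familiar with the fix.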
-----Original Message-----
From: 许雪寒
Sent: March 17, 2017 1
So I've tested this procedure locally and it works successfully for me.
$ ./ceph -v
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
ceph version 0.94.10 (b1e0532418e4631af01acbc0cedd426f1905f4af)
$ ./ceph-objectstore-tool import-rados rbd 0.3.export
Importing from pgid 0.3
Wr
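For context, the export file fed to import-rados is typically produced on an OSD that still holds the PG, roughly like this (a sketch; the OSD id, data path and PG id are placeholders, and the OSD daemon should be stopped first):

$ sudo ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-2 \
      --journal-path /var/lib/ceph/osd/ceph-2/journal \
      --pgid 0.3 --op export --file 0.3.export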
Hello,
On Fri, 17 Mar 2017 02:51:48 +0000 Rich Rocque wrote:
> Hi,
>
>
> I talked with the person in charge about your initial feedback and questions.
> The thought is to switch to a new setup and I was asked to pass it on and ask
> for thoughts on whether this would be sufficient or not.
>
I ended up using a newer version of ceph-deploy and things went more smoothly
after that.
Thanks again to everyone for all the help!
Shain
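If anyone hits the same thing, one way to move to a newer ceph-deploy is via pip (a sketch; ceph-deploy is also packaged in the ceph.com apt repositories, so use whichever channel you normally install from):

$ sudo pip install --upgrade ceph-deploy
$ ceph-deploy --version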
> On Mar 16, 2017, at 10:29 AM, Shain Miley wrote:
>
Hi,
Thanks for the link.
I unset the nodown config option and things did seem to improve, although we
did still get a few reports from users about issues related to filesystem (rbd)
access, even after that action was taken.
Thanks again,
Shain
> On Mar 13, 2017, at 2:43 AM, Alexandre DERUMIER
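For anyone searching the archives later, the flag is toggled cluster-wide like this (the final command just confirms the currently set flags):

$ ceph osd set nodown     # stop OSDs from being marked down
$ ceph osd unset nodown   # return to normal failure detection
$ ceph osd dump | grep flags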
Thanks all for the help,
I was able to reinstall Ubuntu and reinstall Ceph, and after a server reboot the
OSDs are once again part of the cluster.
Thanks again,
Shain
> On Mar 10, 2017, at 2:55 PM, Lincoln Bryant wrote:
>
> Hi Shain,
>
> As long as you don’t nuke the OSDs or the journals, you sho
Hi,
I talked with the person in charge about your initial feedback and questions.
The thought is to switch to a new setup and I was asked to pass it on and ask
for thoughts on whether this would be sufficient or not.
Use case:
Overview: Need to provide shared storage/high-availability for (us
Not sure if this is still true with Jewel CephFS, i.e. that CephFS does not
support any type of quota and df always reports the entire cluster size.
https://www.spinics.net/lists/ceph-users/msg05623.html
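As a side note, Jewel does have client-enforced directory quotas via extended attributes, although as far as I recall only ceph-fuse honours them, not the kernel client. A sketch (the mount path and size are placeholders):

$ setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/mydir   # 100 GB
$ getfattr -n ceph.quota.max_bytes /mnt/cephfs/mydir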
--
Deepak
From: Deepak Naidu
Sent: Thursday, March 16, 2017 6:19 PM
To: 'ceph-users'
Subject: CephFS mou
I got it. Thanks very much:-)
From: Gregory Farnum [mailto:gfar...@redhat.com]
Sent: March 17, 2017 2:10
To: 许雪寒
Cc: ceph-users@lists.ceph.com; jiajia zhong
Subject: Re: Re: [ceph-users] Re: Re: Pipe "deadlock" in Hammer, 0.94.5
On Thu, Mar 16, 2017 at 3:36 AM 许雪寒 wrote:
Hi, Gregory, is it possible to
Greetings,
I am trying to build a CephFS system. Currently I have created my CRUSH map,
which uses only certain OSDs, and I have pools created from them. But when I
mount CephFS, the mount size is my entire Ceph cluster size. How is that?
Ceph cluster & pools
[ceph-admin@storageAdmin ~]$ c
Any chance you have two or more instance of rbd-mirror daemon running
against the same cluster (zone2 in this instance)? The error message
is stating that there is another process that owns the exclusive lock
to the image and it is refusing to release it. The fact that the
status ping-pongs back-an
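A quick way to check each host for stray daemons (a sketch; the systemd unit name assumes the Jewel packaging and the instance name is a placeholder):

$ ps aux | grep '[r]bd-mirror'
$ systemctl status 'ceph-rbd-mirror@*'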
On Thu, Mar 16, 2017 at 3:24 PM, Adam Carheden wrote:
> On Thu, Mar 16, 2017 at 11:55 AM, Jason Dillaman wrote:
>> On Thu, Mar 16, 2017 at 1:02 PM, Adam Carheden wrote:
>>> Ceph can mirror data between clusters
>>> (http://docs.ceph.com/docs/master/rbd/rbd-mirroring/), but can it
>>> mirror data
Hi!,
I'm having a problem with a new ceph deployment using rbd mirroring and
it's just in case someone can help me out or point me in the right
direction.
I have a ceph jewel install, with 2 clusters(zone1,zone2), rbd is working
fine, but the rbd mirroring between sites is not working correctly.
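In case it helps anyone reproduce the state, the mirroring status can be inspected roughly like this (a sketch; the pool and image names are placeholders):

$ rbd --cluster zone2 mirror pool status rbd --verbose
$ rbd --cluster zone2 mirror image status rbd/myimage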
This might be a dumb question, but I'm not at all sure what the "global
quotas" in the radosgw region map actually do.
Is it like a default quota which is applied to all users or buckets,
without having to set them individually, or is it a blanket/aggregate
quota applied across all users and b
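For comparison, per-user quotas are set individually like this (a sketch; the uid and limits are placeholders), which is what I'd hope the region-map setting lets me avoid if it really is a default:

$ radosgw-admin quota set --quota-scope=user --uid=johndoe --max-size-kb=10485760
$ radosgw-admin quota enable --quota-scope=user --uid=johndoe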
On Thu, Mar 16, 2017 at 11:55 AM, Jason Dillaman wrote:
> On Thu, Mar 16, 2017 at 1:02 PM, Adam Carheden wrote:
>> Ceph can mirror data between clusters
>> (http://docs.ceph.com/docs/master/rbd/rbd-mirroring/), but can it
>> mirror data between pools in the same cluster?
>
> Unfortunately, that's
On Thu, Mar 16, 2017 at 5:50 PM, Chad William Seys
wrote:
> Hi All,
> After upgrading to 10.2.6 on Debian Jessie, the MDS server fails to start.
> Below is what is written to the log file from attempted start to failure:
> Any ideas? I'll probably try rolling back to 10.2.5 in the meantime.
On Thu, Mar 16, 2017 at 3:36 AM 许雪寒 wrote:
> Hi, Gregory, is it possible to unlock Connection::lock in
> Pipe::read_message before tcp_read_nonblocking is called? I checked the
> code again, it seems that the code in tcp_read_nonblocking doesn't need to
> be locked by Connection::lock.
Unfortun
On Thu, Mar 16, 2017 at 1:02 PM, Adam Carheden wrote:
> Ceph can mirror data between clusters
> (http://docs.ceph.com/docs/master/rbd/rbd-mirroring/), but can it
> mirror data between pools in the same cluster?
Unfortunately, that's a negative. The rbd-mirror daemon currently
assumes that the loc
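For anyone following along, the supported setup is between two clusters, roughly like this (a sketch; pool, cluster and client names are placeholders, and each side needs the other's config and keyring available):

$ rbd --cluster site-a mirror pool enable rbd pool
$ rbd --cluster site-b mirror pool enable rbd pool
$ rbd --cluster site-a mirror pool peer add rbd client.admin@site-b
$ rbd --cluster site-b mirror pool peer add rbd client.admin@site-a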
Hi All,
After upgrading to 10.2.6 on Debian Jessie, the MDS server fails to
start. Below is what is written to the log file from attempted start to
failure:
Any ideas? I'll probably try rolling back to 10.2.5 in the meantime.
Thanks!
C.
On 03/16/2017 12:48 PM, r...@mds01.hep.wisc.edu wr
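(For the archives: the usual next step for this kind of failure is to capture the startup with more verbose MDS logging, e.g. in ceph.conf on the MDS host, then restart the daemon and attach the resulting log. A sketch; how you restart the MDS depends on your init system.)

[mds]
    debug mds = 20
    debug ms = 1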
On 16.03.2017 08:26, Youssef Eldakar wrote:
Thanks for the reply, Anthony, and I am sorry my question did not give
sufficient background.
This is the cluster behind archive.bibalex.org. Storage nodes keep archived
webpages as multi-member GZIP files on the disks, which are formatted using XFS
Ceph can mirror data between clusters
(http://docs.ceph.com/docs/master/rbd/rbd-mirroring/), but can it
mirror data between pools in the same cluster?
My use case is DR in the event of a room failure. I have a single Ceph
cluster that spans multiple rooms. The two rooms have separate power
and cool
I found this
http://ceph.com/geen-categorie/ceph-osd-where-is-my-data/
which leads me to think I can perhaps directly process the files on the OSD by
going to the /var/lib/ceph/osd directory.
Would that make sense?
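For reference, that article's approach boils down to asking the cluster where an object lives and then looking in the PG directory on the OSD (a sketch for FileStore OSDs; the pool, object name and OSD id are placeholders):

$ ceph osd map mypool myobject
# -> reports the PG id and the acting OSDs
$ ls /var/lib/ceph/osd/ceph-12/current/<pgid>_head/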
Youssef Eldakar
Bibliotheca Alexandrina
From:
It looks like things are working a bit better today; however, now I am getting
the following error:
[hqosd6][DEBUG ] detect platform information from remote host
[hqosd6][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Ubuntu 14.04 trusty
[hqosd6][INFO ] installing ceph on h
On Thu, 16 Mar 2017, Brad Hubbard wrote:
> On Thu, Mar 16, 2017 at 4:33 PM, nokia ceph wrote:
> > Hello Brad,
> >
> > I meant this parameter, bdev_aio_max_queue_depth; Sage suggested trying
> > different values: 128, 1024, 4096. So my doubt is how this calculation happens. Is
> > this related to memory
Hello,
We use a Ceph cluster with 10 nodes/servers, with 15 OSDs per node.
Here, I wanted to use 10 OSDs for block storage (i.e. the volumes pool) and 5
OSDs for object storage (i.e. the rgw pool), and plan to use the "replicated"
type for the block and object pools.
Please advise whether the above is a good setup or whether there are any bottlenecks
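If it helps, the usual way to pin pools to distinct sets of OSDs is separate CRUSH roots and rules, then creating each pool against its rule. A rough sketch (bucket, rule and pool names plus PG counts are placeholders):

$ ceph osd crush add-bucket blockroot root
$ ceph osd crush add-bucket objroot root
# place the intended OSDs/hosts under each root (e.g. with 'ceph osd crush set'
# or by editing the CRUSH map), then:
$ ceph osd crush rule create-simple block-rule blockroot host
$ ceph osd crush rule create-simple obj-rule objroot host
$ ceph osd pool create volumes 1024 1024 replicated block-rule
$ ceph osd pool create rgwdata 256 256 replicated obj-rule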
Regarding OpenNebula, it is working, but we find the network functionality less
than flexible. We would prefer the orchestration layer to allow each primary group
to create a network infrastructure internally to meet their needs and then
automatically provide NAT from one or more public IP addresses
On Thu, Mar 16, 2017 at 11:12 AM, TYLin wrote:
> Hi all,
>
> We have a CephFS whose metadata pool and data pool share the same set of
> OSDs. According to the PGs calculation:
>
> (100*num_osds) / num_replica
That guideline tells you roughly how many PGs you want in total -- when
you have multipl
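To make the arithmetic concrete (assuming a replica count of 3, which isn't stated above): (100 * 56) / 3 is roughly 1866 PGs in total for the whole cluster, which you would normally round to a nearby power of two (2048) and then divide among the pools, with the data pool getting the bulk of it and the metadata pool a much smaller share, rather than 5120 per pool.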
Hi all,
We have a CephFS whose metadata pool and data pool share the same set of OSDs.
According to the PGs calculation:
(100*num_osds) / num_replica
If we have 56 OSDs, we should set 5120 PGs for each pool to make the data evenly
distributed across all the OSDs. However, if we set the metadata pool an
Hi, Gregory, is it possible to unlock Connection::lock in Pipe::read_message
before tcp_read_nonblocking is called? I checked the code again, and it seems
that the code in tcp_read_nonblocking doesn't need to be locked by Connection::lock.
-----Original Message-----
From: Gregory Farnum [mailto:gfar...@redhat.co
On Tue, Mar 14, 2017 at 5:55 PM, John Spray wrote:
> On Tue, Mar 14, 2017 at 2:10 PM, Andras Pataki
> wrote:
>> Hi John,
>>
>> I've checked the MDS session list, and the fuse client does appear on that
>> with 'state' as 'open'. So both the fuse client and the MDS agree on an
>> open connection.
Hello,
On Thu, 16 Mar 2017 02:44:29 + Robin H. Johnson wrote:
> On Thu, Mar 16, 2017 at 02:22:08AM +0000, Rich Rocque wrote:
> > Has anyone else run into this or have any suggestions on how to remedy it?
> We need a LOT more info.
>
Indeed.
> > After a couple months of almost no issues,
Thanks for the reply, Anthony, and I am sorry my question did not give
sufficient background.
This is the cluster behind archive.bibalex.org. Storage nodes keep archived
webpages as multi-member GZIP files on the disks, which are formatted using XFS
as standalone file systems. The access system
My mistake, I ran it on the wrong system ...
I've attached the terminal output.
I've run this on a test system where I was getting the same segfault when
trying import-rados.
Kind regards,
Laszlo
On 16.03.2017 07:41, Laszlo Budai wrote:
[root@storage2 ~]# gdb -ex 'r' -ex 't a a bt full' -e
Sounds good :). Brad, many thanks for the explanation.
On Thu, Mar 16, 2017 at 12:42 PM, Brad Hubbard wrote:
> On Thu, Mar 16, 2017 at 4:33 PM, nokia ceph
> wrote:
> > Hello Brad,
> >
> > I meant this parameter, bdev_aio_max_queue_depth; Sage suggested trying
> > different values: 128, 1024, 4096.
On Thu, Mar 16, 2017 at 4:33 PM, nokia ceph wrote:
> Hello Brad,
>
> I meant this parameter, bdev_aio_max_queue_depth; Sage suggested trying
> different values: 128, 1024, 4096. So my doubt is how this calculation happens. Is
> this related to memory?
The bdev_aio_max_queue_depth parameter represents
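For reference, one way to try the different values Sage suggested is via ceph.conf on the OSD nodes followed by an OSD restart (a sketch; I'm not sure the option can be changed at runtime, and the OSD id is a placeholder):

[osd]
    bdev_aio_max_queue_depth = 1024

$ ceph daemon osd.0 config get bdev_aio_max_queue_depth   # run on the OSD host to confirm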