Hi Marcus,
On 18.11.2013 at 23:51, m...@linuxbox.com wrote:
On Mon, Nov 18, 2013 at 02:38:42PM +0100, Stefan Priebe - Profihost AG wrote:
You may actually be doing O_SYNC - recent kernels implement O_DSYNC,
but glibc maps O_DSYNC into O_SYNC. But since you're writing to the
block device this won
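(A minimal sketch of the point above, not from the original thread: the device path /dev/rbd0 is just an example, and the compile-time check shows whether the headers in use define O_DSYNC as O_SYNC, which is the mapping described here.)

/* Open a block device with O_DSYNC and report whether these headers
 * simply define O_DSYNC as O_SYNC. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* /dev/rbd0 is only an example device path */
    int fd = open("/dev/rbd0", O_WRONLY | O_DSYNC);
    if (fd < 0) {
        perror("open");
        return 1;
    }
#if defined(O_DSYNC) && defined(O_SYNC)
    if (O_DSYNC == O_SYNC)
        printf("O_DSYNC is mapped to O_SYNC by these headers\n");
#endif
    close(fd);
    return 0;
}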
Hello David and Chris,
Thank you for your replies in this thread.
>> The automatic repair should handle getting an EIO during read of the
>> object replica.
I think when the OSD tries to read an object from the primary disk that is inside a
bad sector, the controller does not respond with EIO but something el
2) Can't grow once you reach the hard limit of 14TB, and if you have
multiple such machines, then fragmentation becomes a problem
3) might have the risk of a 14TB partition corruption wiping out all
your shares
Is the 14TB limit due to an EXT3/4 recommendation (or implementation limit)?
Hi all,
I would also like to see CephFS stable, especially with the snapshot functionality.
I tried to figure out the roadmap but couldn't get a clear picture.
Is there a target date for production-ready snapshot functionality?
Until then, a possible alternative (sorry, without Ceph :-/)
is using glust
On Tue, Nov 19, 2013 at 1:29 AM, Dnsbed Ops wrote:
> Hi,
>
> When an osd node server restarted, I found the osd daemon doesn't get
> started.
>
> I must run these two commands from the deploy node to restart them:
>
> ceph-deploy osd prepare ceph3.anycast.net:/tmp/osd2
> ceph-deploy osd activate c
On Tue, Nov 19, 2013 at 1:33 AM, Dnsbed Ops wrote:
> Hello,
>
> I deployed one monitor daemon on a separate server successfully.
> But I can't deploy it together with the OSD node.
> I run the deployment command and got:
>
> ceph@172-17-6-65:~/my-cluster$ ceph-deploy mon create ceph3.geocast.net
>
Hi,
when using the librados C library, the documentation of the different functions
just says that they return a negative error code on failure,
e.g. the rados_read function
(http://ceph.com/docs/master/rados/api/librados/#rados_read).
Is there any further documentation anywhere on which error cod
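(Not part of the original mail, but as a hedged sketch: librados error codes are negative errno values, so -ret can be passed to strerror(). The pool "data" and object "myobject" below are placeholders.)

#include <rados/librados.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    char buf[4096];
    int ret;

    if (rados_create(&cluster, NULL) < 0)
        return 1;
    rados_conf_read_file(cluster, NULL);      /* use the default ceph.conf */
    if ((ret = rados_connect(cluster)) < 0) {
        fprintf(stderr, "connect failed: %s\n", strerror(-ret));
        rados_shutdown(cluster);
        return 1;
    }
    if ((ret = rados_ioctx_create(cluster, "data", &io)) < 0) {
        fprintf(stderr, "ioctx failed: %s\n", strerror(-ret));
        rados_shutdown(cluster);
        return 1;
    }

    ret = rados_read(io, "myobject", buf, sizeof(buf), 0);
    if (ret < 0) {
        /* e.g. -ENOENT if the object does not exist */
        fprintf(stderr, "rados_read failed: %s (%d)\n", strerror(-ret), ret);
    } else {
        printf("read %d bytes\n", ret);
    }

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}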
Hi everyone,
In the course of playing with RBD, I noticed a few things:
* RBD images are thin-provisioned, so you can create arbitrarily
large ones.
On my 0.72.1 freshly-installed empty 200TB cluster, I was able to
create a 1PB image:
$ rbd create --image-format 2 --size 1073741824
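(Side note, not in the original mail: the rbd CLI takes --size in megabytes, so 1073741824 MB is 1 PB. The same thin-provisioned creation can be done through librbd; the sketch below is hedged, pool "rbd" and image name "huge" are placeholders, and only metadata is written at creation time.)

#include <rados/librados.h>
#include <rbd/librbd.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    int order = 22;                 /* 2^22 = 4MB objects, the default */
    uint64_t size = 1ULL << 50;     /* 1 PB, in bytes */
    int ret;

    rados_create(&cluster, NULL);
    rados_conf_read_file(cluster, NULL);
    rados_connect(cluster);
    rados_ioctx_create(cluster, "rbd", &io);

    /* requesting the layering feature implies a format 2 image;
     * no data objects are allocated until something is written */
    ret = rbd_create2(io, "huge", size, RBD_FEATURE_LAYERING, &order);
    if (ret < 0)
        fprintf(stderr, "rbd_create2 failed: %s\n", strerror(-ret));
    else
        printf("created 1PB thin-provisioned image\n");

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}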
Hi,
I've setup a new cluster as follows:
3 x MON nodes
3 x OSD nodes with dual SSDs for journals and 10 x 1TB disks
2 x 1Gb Ethernet, two networks with cluster & public configured
When running the rados bench (from a MON or OSD node) I've noticed the OSDs
marking themselves down.
They do seem to reconnec
EDIT: sorry about the "No such file" error
Now, it seems this is a separate issue: the system I was using was
apparently unable to map devices to images in format 2. I will be
investigating that further before mentioning it again.
I would still appreciate answers about the 1PB image and the t
Ok, probably hitting this:
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/
flapping OSD part...
Cheers,
Robert
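(For reference, a hedged ceph.conf sketch of related knobs; the subnets below are placeholders and the heartbeat grace value is only an illustration, not a recommendation.)

[global]
    ; example subnets, substitute your own front-end and back-end networks
    public network  = 192.168.1.0/24
    cluster network = 192.168.2.0/24

[osd]
    ; give peers longer to answer heartbeats before they get reported down
    osd heartbeat grace = 30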
Hi Nicolas
just fyi
rbd format 2 is not supported yet by the linux kernel (module)
it can only be used as a target for virtual machines using librbd
see: man rbd --> --image-format
shrinking time: the same happened to me,
an rbd (v1) device
took about a week to shrink from 1PB to 10TB
the good news: I h
On 11/19/13, 1:29 AM, "Dnsbed Ops" wrote:
>Hi,
>
>When an osd node server restarted, I found the osd daemon doesn't get
>started.
>
>I must run these two commands from the deploy node to restart them:
>
>ceph-deploy osd prepare ceph3.anycast.net:/tmp/osd2
>ceph-deploy osd activate ceph3.anycast.n
Thank you! I am studying and testing Ceph. I think it will be very good
for my needs.
On 18-11-2013 20:31, Timofey root wrote:
On 09 Nov 2013, at 1:46, Gregory Farnum wrote:
On Fri, Nov 8, 2013 at 8:49 AM, Listas wrote:
Hi !
I have clusters (IMAP service) with 2 members configured wit
On 11/18/2013 01:19 AM, YIP Wai Peng wrote:
> Hi Dima,
>
> Benchmark FYI.
>
> $ /usr/sbin/bonnie++ -s 0 -n 5:1m:4k
> Version 1.97       --Sequential Create-- Random Create
> altair             -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> files:max:min
On Nov 19, 2013, at 3:47 PM, Bernhard Glomm wrote:
> Hi Nicolas
> just fyi
> rbd format 2 is not supported yet by the linux kernel (module)
I believe this is wrong. I think Linux has supported rbd format 2 images since 3.10.
wogri
On 19.11.2013 06:56, Robert van Leeuwen wrote:
> Hi,
>
> ...
> It looks like it is just using /dev/sdX for this instead of the
> /dev/disk/by-id /by-path given by ceph-deploy.
>
> ...
Hi Robert,
I'm using the disk-label:
fstab:
LABEL=osd.0 /var/lib/ceph/osd/ceph-0 xfs noatime,nodiratime
On 11/19/13, 2:10 PM, "Wolfgang Hennerbichler" wrote:
>
>On Nov 19, 2013, at 3:47 PM, Bernhard Glomm
>wrote:
>
>> Hi Nicolas
>> just fyi
>> rbd format 2 is not supported yet by the linux kernel (module)
>
>I believe this is wrong. I think Linux has supported rbd format 2 images since
>3.10.
One mor
So is there any size limit on RBD images? I had a failure this morning
mounting a 1TB RBD image. Deleting it now (why does it take so long to delete if it
was never even mapped, much less written to?) and will retry with smaller images.
See output below. This is 0.72 on Ubuntu 13.04 with a 3.12 kernel.
>-Original Message-
>From: Gruher, Joseph R
>Sent: Tuesday, November 19, 2013 12:24 PM
>To: 'Wolfgang Hennerbichler'; Bernhard Glomm
>Cc: ceph-users@lists.ceph.com
>Subject: RE: [ceph-users] Size of RBD images
>
>So is there any size limit on RBD images? I had a failure this morning
>mo
Sorry for the delay, I'm still catching up since the OpenStack
conference.
Does the system user for the destination zone exist with the same
access key and secret in the source zone?
If you enable debug rgw = 30 on the destination you can see why the
copy_obj from the source zone is failing.
Jo
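(A sketch of where that debug setting would go in ceph.conf on the destination gateway; the section name below is hypothetical, match it to whatever id your radosgw instance actually runs under.)

; hypothetical instance name for the destination-zone gateway
[client.radosgw.us-east-1]
    debug rgw = 30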
On 11/13/2013 09:06 PM, lixuehui wrote:
And on the slave zone gateway instance, the info is like this:
2013-11-14 12:54:24.516840 7f51e7fef700 1 == starting new
request req=0xb1e3b0 =
2013-11-14 12:54:24.526640 7f51e7fef700 1 == req done
req=0xb1e3b0 http
On 11/14/2013 09:54 AM, Dmitry Borodaenko wrote:
On Thu, Nov 14, 2013 at 6:00 AM, Haomai Wang wrote:
We are using the nova fork by Josh Durgin
https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd - are there
more patches that need to be integrated?
I hope I can release or push commits
Hi Yip,
Thanks for the code. With respect to "can't grow", I think I can (with some
difficulty perhaps?) resize the VM if I needed to, but I'm really just
trying to buy myself time until CephFS is production ready. Point #3
scares me, so I'll have to think about that one. Most likely I'd use a
c
On 11/19/2013 05:28 AM, Behar Veliqi wrote:
Hi,
when using the librados C library, the documentation of the different functions
just says that they return a negative error code on failure,
e.g. the rados_read function
(http://ceph.com/docs/master/rados/api/librados/#rados_read).
Is there anyw
Thank you Alfredo, now I get it.
On 2013-11-19 20:44, Alfredo Deza wrote:
That really doesn't restart them, it creates them. I think ceph-deploy
is not destroying anything
here, so it appears to restart them when in fact it is re-doing the whole
process again.
1a) The Ceph documentation on OpenStack integration makes a big (and
valuable) point that cloning images should be instantaneous/quick due to
the copy-on-write functionality. See "Boot from volume" at the bottom of
http://ceph.com/docs/master/rbd/rbd-openstack/. Here's the excerpt:
When Glance and Cind
1a. I believe it's dependent on format 2 images, not btrfs.
1b. Snapshots work independently of the backing file system.
2. All data goes through the journals.
4a. RBD image objects are not striped; they come in 4MB chunks by default,
so consecutive sectors will come from the same object and OSD. I don't
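(Hedged sketch of the layering path referred to in 1a, i.e. snapshot, protect, clone on a format 2 image; pool and image names are placeholders and error handling is left out for brevity.)

#include <rados/librados.h>
#include <rbd/librbd.h>
#include <stdio.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    rbd_image_t parent;
    int order = 0;

    rados_create(&cluster, NULL);
    rados_conf_read_file(cluster, NULL);
    rados_connect(cluster);
    rados_ioctx_create(cluster, "rbd", &io);

    /* snapshot the (format 2) golden image and protect the snapshot */
    rbd_open(io, "golden-image", &parent, NULL);
    rbd_snap_create(parent, "base");
    rbd_snap_protect(parent, "base");
    rbd_close(parent);

    /* the clone shares all unmodified data with the parent snapshot,
     * so it is created almost instantly regardless of image size */
    rbd_clone(io, "golden-image", "base", io, "vm-disk-1",
              RBD_FEATURE_LAYERING, &order);
    printf("clone created\n");

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}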
On Wednesday, 20 November 2013, Gautam Saxena wrote:
> Hi Yip,
>
> Thanks for the code. With respect to "can't grow", I think I can (with
> some difficulty perhaps?) resize the VM if I needed to, but I'm really just
> trying to buy myself time until CephFS is production ready. Point #3
> scares
On Wednesday, 20 November 2013, Dimitri Maziuk wrote:
> On 11/18/2013 01:19 AM, YIP Wai Peng wrote:
> > Hi Dima,
> >
> > Benchmark FYI.
> >
> > $ /usr/sbin/bonnie++ -s 0 -n 5:1m:4k
> > Version 1.97 --Sequential Create-- Random Create
> > altair -Create-
Thanks Michael.
So, a quick correction based on Michael's response. In question 4, I should
not have made any reference to Ceph objects, since objects are not striped
(per Michael's response). Instead, I should simply have used the words
"Ceph VM Image" instead of "Ceph objects". A Ceph VM image woul
> So, a quick correction based on Michael's response. In question 4, I should
> not have made any reference to Ceph objects, since objects are not striped
> (per Michael's response). Instead, I should simply have used the words "Ceph
> VM Image" instead of "Ceph objects". A Ceph VM image would constit
Hello,
I follow the doc there:
http://ceph.com/docs/master/start/quick-ceph-deploy/
Just installed three mon nodes, but one failed.
The command and output:
ceph@ceph1:~/my-cluster$ ceph-deploy --overwrite-conf mon create
ceph3.geocast.net
[ceph_deploy.cli][INFO ] Invoked (1.3.2): /usr/bi
And this is the ceph.conf:
[global]
fsid = 0615ddc1-abff-4fe2-8919-68448b9f6faa
mon_initial_members = ceph2, ceph3, ceph4
mon_host = 172.17.6.66,172.17.6.67,172.17.6.68
auth_supported = cephx
osd_journal_size = 1024
filestore_xattr_use_omap = true
Thanks.
On 2013-11-20 12:47, Dnsbed Ops replied:
He
Hello,
we are going to set up a 36-40TB (gross) test setup in our company for
disk-to-disk-to-tape backup. Now we have to decide whether to go the high- or low-
density Ceph way.
--- The big node (only one necessary):
1 x Supermicro, System 6027R-E1R12T with 2 x CPU E5-2620v2 (6 core (Hyper
thread