I got these exact same error messages on my RHEL build. It did, however, seem
that my OSDs were correctly built and brought online. Never got to the
bottom of these errors though.
There were others on this forum who had similar issues, and theirs too seemed
to be working successfully despit
Hello,
Has anyone ever tested simultaneous downloads from RGW, especially with
multiple clients that use download manager applications?
In my test the result is not OK: RGW stops responding to other clients while
it is serving a download request.
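For what it's worth, a minimal way to reproduce this with plain curl might look
like the sketch below (the endpoint, bucket and object names are only
placeholders, and it assumes the object is publicly readable, otherwise the
requests would need to be signed):

# start 8 ranged downloads of one large object in parallel, then time a
# small unrelated request to see whether RGW stalls while serving them
for i in $(seq 0 7); do
  curl -s -o /dev/null -r $((i*134217728))-$(((i+1)*134217728-1)) \
    http://rgw.example.com/bucket/bigfile.iso &
done
time curl -s -o /dev/null http://rgw.example.com/bucket/smallfile.txt
wait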
Kind regards
You are right AW, I also faced this very same error message; apparently everything just works fine till now, no issues reported yet.
Regards
Karan Singh
From: "alistair whittle"
To: "Narendra Trivedi", ceph-users@lists.ceph.com
Sent: Thursday, 31 October, 2013 12:06:52 PM
Subject: Re: [ceph-users] Act
Bryan,
We are setting up a cluster using xfs and have been a bit concerned about
running into similar issues to the ones you described below.
I am just wondering if you came across any potential downsides to using a 2K
block size with xfs on your OSDs.
Thanks,
Shain
Shain Miley | Manager of
On Thu, Oct 31, 2013 at 8:07 AM, Karan Singh wrote:
> You are right AW, I also faced this very same error message; apparently
> everything just works fine till now :) no issues reported
> yet.
>
> Regards
> Karan Singh
>
> --
> *From: *"alistair whittle"
Hi,
I have a strange radosgw error:
==
2013-10-26 21:18:29.844676 7f637beaf700 0 setting object
tag=_ZPeVs7d6W8GjU8qKr4dsilbGeo6NOgw
2013-10-26 21:18:30.049588 7f637beaf700 0 WARNING: set_req_state_err
err_no=125 resorting to 500
2013-10-26 21:18:30.049738 7f637beaf700 2 req 61655:0.224186:s3
Hi,
Please, is this a good setup for a production environment test of Ceph? My focus
is on the SSD... should it be partitioned (sdf1, 2, 3, 4) and shared by the four
OSDs on a host? Or is it a better configuration for the SSD to be just one
partition (sdf1) with all OSDs using that one partition? My set
Hello to all,
Here is my ceph osd tree output :
# id weight type name up/down reweight
-1 20 root example
-12 20 drive ssd
-22 20 datacenter ssd-dc1
-10410 room ssd-dc1-A
-50210
On Thu, Oct 31, 2013 at 2:29 PM, Alexis GÜNST HORN
wrote:
> step take example
> step emit
This is the problem, AFAICT. Just omit those two lines in both rules
and it should work.
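In case it helps, the edit itself is usually done by pulling the map out,
decompiling it, deleting those lines, and injecting it back; a rough sketch
(file names are arbitrary):

ceph osd getcrushmap -o crush.bin       # grab the compiled map
crushtool -d crush.bin -o crush.txt     # decompile to text
# edit crush.txt: drop the stray "step take example" / "step emit" pair
# from both rules, keeping the real take/chooseleaf/emit steps
crushtool -c crush.txt -o crush.new     # recompile
ceph osd setcrushmap -i crush.new       # inject the fixed map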
Cheers, dan
On Thu, Oct 31, 2013 at 2:29 PM, Alexis GÜNST HORN
wrote:
> -11 0 drive hdd
> -21 0 datacenter hdd-dc1
> -1020 room hdd-dc1-A
> -5030 host A-ceph-osd-2
> 20 0
https://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
See "rule ssd-primary"
On 31 Oct 2013 at 17:29, "Alexis GÜNST HORN" <
alexis.gunsth...@outscale.com> wrote:
> Hello to all,
>
> Here is my ceph osd tree output :
>
>
> # id weight t
Thanks Alfredo for clarifying this.
Regards
karan
- Original Message -
From: "Alfredo Deza"
To: "Karan Singh"
Cc: "alistair whittle" , "Narendra Trivedi"
, ceph-users@lists.ceph.com
Sent: Thursday, 31 October, 2013 3:18:37 PM
Subject: Re: [ceph-users] Activating OSDs takes fore
Hello Charles,
I need some more clarification about your setup. Did you mean:
1) There is 1 SSD (60 GB) on each server, i.e. 6 SSDs across all 6 servers?
2) Your osd.3, osd.4, osd.5 use the same journal (/dev/sdf2)?
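For reference, with ceph-deploy a dedicated journal partition per OSD is
usually written as HOST:DISK:JOURNAL; a sketch with placeholder host and
device names:

# four OSDs on one host, each with its own SSD journal partition
ceph-deploy osd prepare server1:sdb:/dev/sdf1 server1:sdc:/dev/sdf2
ceph-deploy osd prepare server1:sdd:/dev/sdf3 server1:sde:/dev/sdf4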
Regards
Karan Singh
- Original Message -
From: "charles L"
To: "cep
Yep ! Thanks !
That was it :)
Thanks a lot.
Best Regards - Cordialement
Alexis GÜNST HORN,
Tel : 0826.206.307 (poste )
Fax : +33.1.83.62.92.89
IMPORTANT: The information contained in this message may be privileged
and confidential and protected from disclosure. If the reader of this
message is
AW/KS,
So, what is the solution? Activating OSDs seems to be hung in an infinite loop.
Do I just press Ctrl+C and assume everything will be fine? How do I
upgrade all the nodes? Maybe that will help. My Ceph version is 0.67.4, FYI.
Thanks!
Narendra
From: Karan Singh [mailto:ksi...@csc.fi
On Thu, Oct 31, 2013 at 6:22 AM, Dominik Mostowiec
wrote:
> Hi,
> I have a strange radosgw error:
>
> ==
> 2013-10-26 21:18:29.844676 7f637beaf700 0 setting object
> tag=_ZPeVs7d6W8GjU8qKr4dsilbGeo6NOgw
> 2013-10-26 21:18:30.049588 7f637beaf700 0 WARNING: set_req_state_err
> err_no=125 resorti
After activating the OSDs, I tried to use ceph-deploy to copy the configuration
file and admin key to my admin node and my Ceph nodes so that I can use the ceph
CLI without having to specify the monitor address and ceph.client.admin.keyring
each time I execute a command, but I got the following error:
[
On Thu, Oct 31, 2013 at 12:01 PM, Trivedi, Narendra
wrote:
> After activating the OSDs, I tried to use ceph-deploy to copy the configuration
> file and admin key to my admin node and my Ceph nodes so that I can use the ceph
> CLI without having to specify the monitor address and
> ceph.client.admin.keyrin
I'm considering using rados callbacks in my application, but the
documentation doesn't include enough information about their context.
If I schedule a callback with rados_aio_create_completion(), from what
context will my callback be called? Will it be called from a signal
handler? Will it be cal
On Thu, 31 Oct 2013, asom...@gmail.com wrote:
> I'm considering using rados callbacks in my application, but the
> documentation doesn't include enough information about their context.
> If I schedule a callback with rados_aio_create_completion(), from what
> context will my callback be called? Wi
Hmm, I have a pretty much default install with ceph-deploy; the crushmap is
untouched.
Btw, I get similar error messages when trying to mount CephFS:
mount -t ceph ceph01:6789:/ /mnt/backupCephfs -o
name=admin,secretfile=admin.secret
tail syslog
Oct 31 17:14:27 ceph00 kernel: [103642.162813] libceph
Yeah, depending on what version of Ceph you deployed that could be it
exactly. We were a little more aggressive than we should have been in
pushing them out. See:
http://ceph.com/docs/master/rados/operations/crush-map/#tunables
-Greg
On Thursday, October 31, 2013, Fuchs, Andreas (SwissTXT) wrote:
I tested the osd performance from a single node. For this purpose I deployed a
new cluster (using ceph-deploy, as before) and on fresh/repartitioned drives. I
created a single pool, 1800 pgs. I ran the rados bench both on the osd server
and on a remote one. Cluster configuration stayed "default
OK, I halfway understand this.
So I can either upgrade to kernel version 3.9 or later,
or
change the crushmap to a new profile with ceph osd crush tunables {PROFILE}.
But which profile do I have to change to so that my Ubuntu client is supported?
Andi
From: Gregory Farnum [mailto:g...@inktank
Shain,
After getting the segfaults when running 'xfs_db -r "-c freesp -s"' on
a couple partitions, I'm concerned that 2K block sizes aren't nearly
as well tested as 4K block sizes. This could just be a problem with
RHEL/CentOS 6.4 though, so if you're using a newer kernel the problem
might alread
Well, looking at that doc page, with kernel 3.8 you can't use the
newest set so you should set your profile to "argonaut" (the old
ones). Or if you're feeling ambitious I believe you can turn on
CRUSH_TUNABLES without CRUSH_TUNABLES2 by manually editing and
injecting the CRUSH map (I haven't done t
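For the simple route (the profile switch), the command form from the docs page
linked earlier would presumably be the following; show-tunables is only there to
check the result, if your version supports it:

ceph osd crush show-tunables        # inspect the current tunables
ceph osd crush tunables argonaut    # the old profile, usable by pre-3.9 kernel clients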
Mark,
Thanks for the update.
Just an FYI, I ran into an issue using the script when it turned out that the
last part of the file was exactly 0 bytes in length.
For example:
begin upload of root.img
size 10737418240, 11 parts
upload part 1 size 1073741824
upload part 2 size 10737418
PS... I tested the cancel script; it worked like a charm (the script found quite
a few partial files that needed to be removed).
Thanks again,
Shain
Shain Miley | Manager of Systems and Infrastructure, Digital Media |
smi...@npr.org | 202.513.3649
From:
Shain,
I investigated the segfault a little more since I sent this message
and found this email thread:
http://oss.sgi.com/archives/xfs/2012-06/msg00066.html
After reading that I did the following:
[root@den2ceph001 ~]# xfs_db -r "-c freesp -s" /dev/sdb1
Segmentation fault (core dumped)
[root@d
On Thu, Oct 31, 2013 at 10:11 AM, Sage Weil wrote:
> On Thu, 31 Oct 2013, asom...@gmail.com wrote:
>> I'm considering using rados callbacks in my application, but the
>> documentation doesn't include enough information about their context.
>> If I schedule a callback with rados_aio_create_completi
Attached is the request-createbucket-log.txt for the create bucket request,
and the details of the user making the request are in radogwadmin-user.txt.
Please advise: what am I doing incorrectly?
I am using Node.js as the "client" for the Ceph S3 API, and thus I need to
build my requests "manually".
Your
Hello Alfredo,
Thanks for your response. The documentation is a little confusing. Where do I
issue 'mkdir /etc/ceph'? Also, should 'ceph health' be issued on the MON node?
Thanks!
Narendra
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Thursday, October 31,
Please ignore my e-mail... I found the issue: the RequestBody was not empty.
-- Forwarded message --
From: Abhijeet Nakhwa
Date: Thu, Oct 31, 2013 at 3:40 PM
Subject: Unable to create bucket using S3 API
To: ceph-us...@ceph.com
Attached is the request-createbucket-log.txt for th
Okay, I mkdir'ed /etc/ceph on the admin node and issued the command
again and now it seems to go through:
[ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph-deploy admin
ceph-admin-node-centos-6-4 ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4
ceph-node3-osd1-centos-6-4
[ceph_deplo
On Thu, Oct 31, 2013 at 4:01 PM, Trivedi, Narendra
wrote:
> Okay, I mkdir'ed /etc/ceph on the admin node and issued the command
> again and now it seems to go through:
>
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph-deploy admin
> ceph-admin-node-centos-6-4 ceph-node1-mon-centos-6-4
Blast - I must have some shoddy arithmetic around the bit where I work
out the final piece size. I'll experiment...
Cheers
Mark
On 01/11/13 06:35, Shain Miley wrote:
PS... I tested the cancel script; it worked like a charm (the script found quite
a few partial files that needed to be removed).
Hello!
Can anybody provide instructions on how to create and run a Ceph MDS
server manually? I already have a Ceph cluster with monitors and OSDs
running, but I need to access the cluster from a POSIX client and
without installing Ceph's packages (without using ceph-deploy).
Any help will b
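For reference, a rough sketch of the manual steps, assuming the node already
has the ceph binaries, a ceph.conf and an admin keyring; the capability strings
below are indicative only and should be checked against the docs for your release:

# create a data dir and a keyring for the new MDS (id "a")
mkdir -p /var/lib/ceph/mds/ceph-a
ceph auth get-or-create mds.a mds 'allow' osd 'allow rwx' mon 'allow rwx' \
  -o /var/lib/ceph/mds/ceph-a/keyring

# run the daemon in the foreground first to watch it join the MDS map
ceph-mds -i a -f

# a POSIX client can then mount CephFS with the kernel driver, no Ceph
# packages needed on the client (monitor address and paths are placeholders)
mount -t ceph mon-host:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret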
Nothing has been recorded as far as I know.
However I’ve seen some guys from Scality recording sessions with a cam.
Scality? Are you there? :)
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Add
Yep,
I was blindly adding one extra part to cope with the effects of integer
division. I guess I could cast to floats etc and then take the ceil of
the result, but just checking if the file size is an exact multiple of
the part size seems the least error prone and clearest (diff attached
for
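A sketch of that check in shell, using the sizes from the earlier example
(10 GiB object, 1 GiB parts), where the remainder is zero and the count
correctly stays at 10 rather than 11:

size=10737418240                                  # total file size in bytes
part=1073741824                                   # part size in bytes
parts=$(( size / part ))                          # integer division
(( size % part != 0 )) && parts=$(( parts + 1 ))  # extra part only if there is a remainder
echo "$parts parts"                               # -> 10 parts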
My Ceph cluster health checkup tells me the following. Should I be concerned?
What's the remedy? What is missing? I issued this command from the monitor
node. Please correct me if I am wrong, but I think the admin node's job is done
after the installation unless I want to add additional OSDs/MONs.
Narendra,
This is an issue. You really want your cluster to be HEALTH_OK with all
PGs active+clean. Some exceptions apply (like scrub / deep-scrub).
What do 'ceph health detail' and 'ceph osd tree' show?
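That is, from any node that has the client.admin keyring:

ceph health detail   # lists the PGs that are not active+clean and why
ceph osd tree        # shows which OSDs are up/in and how CRUSH sees them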
Thanks,
Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 7
Hi,
Please can you help with the Ceph install guide?
Do we need to install the Ceph server or the client?
Regards,
Raghavendra Lad