Things you can check:
* Is the RGW node able to resolve bucket-2.ostore.athome.priv? Try:
  ping bucket-2.ostore.athome.priv
* Is "# s3cmd ls" working or throwing errors?
Are you sure the entries below are correct? Generally, host_base and
host_bucket should point to the RGW FQDN, in your case ceph-rado
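For reference, a minimal sketch of the relevant ~/.s3cfg lines, assuming
the gateway answers on ostore.athome.priv (substitute your own RGW FQDN):

  host_base = ostore.athome.priv
  host_bucket = %(bucket)s.ostore.athome.priv
  use_https = False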
Sorry, I am not sure whether it looks OK in your production environment.
Maybe you could use the command: ceph tell osd.0 injectargs
'--osd_scrub_sleep 0.5'. This command affects only one OSD.
If it works fine for some days, you could set it for all OSDs.
This is just a suggestion.
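If the single-OSD test looks good, the same setting can be pushed to every
OSD at runtime and made persistent in ceph.conf; a rough sketch:

  # apply to all OSDs at runtime (lost on restart)
  ceph tell osd.* injectargs '--osd_scrub_sleep 0.5'
  # to make it persistent, add under [osd] in ceph.conf:
  #   osd scrub sleep = 0.5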
Hi all,
I'm testing the Ceph cache tier (0.80.9). The IOPS are really good with
the cache tier, but deleting an RBD image is very slow (even an empty one).
It seems as if the cache pool marks every object in the image for deletion,
even objects that do not exist.
Is this a problem with RBD?
How c
On 04/13/2015 02:25 AM, Christian Balzer wrote:
> On Sun, 12 Apr 2015 14:37:56 -0700 Gregory Farnum wrote:
>
>> On Sun, Apr 12, 2015 at 1:58 PM, Francois Lafont
>> wrote:
>>> Somnath Roy wrote:
>>>
Interesting scenario :-)... IMHO, I don't think the cluster will be in a
healthy state here if t
Dear ceph users,
we are planning a Ceph storage cluster from scratch. It might be up to 1 PB
within the next 3 years, multiple buildings, new network infrastructure
for the cluster, etc.
I had some excellent training on Ceph, so the essential fundamentals
are familiar to me, and I know our goals/drea
>>So what would you suggest, what are your experiences?
Hi, you can have a look at the Mellanox SX1012, for example:
http://www.mellanox.com/page/products_dyn?product_family=163
12 ports of 40GbE for around €4000.
You can use breakout cables to get 4x12 10GbE ports.
They can be stacked with MLAG and LACP.
Hi Alexandre,
thanks for that suggestion. Mellanox might already be on our shopping list,
but what about the overall redundancy design from your point of view?
/Götz
On 13.04.15 at 11:08, Alexandre DERUMIER wrote:
>>> So what would you suggest, what are your experiences?
>
> Hi, you can have
Hello,
On Mon, 13 Apr 2015 11:03:24 +0200 Götz Reinicke - IT Koordinator wrote:
> Dear ceph users,
>
> we are planning a Ceph storage cluster from scratch. It might be up to 1 PB
> within the next 3 years, multiple buildings, new network infrastructure
> for the cluster, etc.
>
> I had some excelle
Hello,
Thanks for all the replies! The Banana Pi could work. The built-in SATA
power connector on the Banana Pi can power a 2.5" SATA disk. Cool. (Not a
3.5" SATA disk, since those seem to require 12 V too.)
I found this post from Vess Bakalov about the same subject:
http://millibit.blogspot.se/2015/01/ceph-pi-adding-
Karan Singh wrote:
> Things you can check
>
> * Is RGW node able to resolve bucket-2.ostore.athome.priv , try ping
> bucket-2.ostore.athome.priv
Yes, my DNS configuration is OK. In fact, I test s3cmd directly
on my radosgw (its hostname is "ceph-radosgw1" but its FQDN is
"ostore.athome.priv").
You can give the Swift API a try as well.
Karan Singh
Systems Specialist , Storage Platforms
CSC - IT Center for Science,
Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland
mobile: +358 503 812758
tel. +358 9 4572001
fax +358 9
Also, what version of s3cmd are you using?
To me, the error “S3 error: 403 (SignatureDoesNotMatch)” seems to come from
the s3cmd side rather than from RGW.
But let's diagnose.
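As a starting point (just a sketch, not specific to your setup): newer
s3cmd releases default to v4 request signing, which older RGW versions
reject with SignatureDoesNotMatch, so checking the version and forcing v2
signing is worth a try:

  s3cmd --version
  s3cmd --signature-v2 --debug ls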
Karan Singh
Systems Specialist , Storage Platforms
CSC - IT Center fo
Hi all,
It works with:
  ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.fr-rennes-radosgw1.asok \
    config set debug_rgw 20
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of
ghislain.cheval...@orange.com
Sent: Wednesday, 25 February 2015 15:06
To: Ceph Users
Subject: [ceph-us
Can you add "debug rbd = 20" to your config, re-run the rbd command, and paste
a link to the generated client log file?
The rbd_children and rbd_directory objects store state as omap key/values, not
as actual binary data within the object. You can use "rados -p rbd
listomapvals rbd_directory/r
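For anyone following along, a sketch of inspecting those omap entries
directly (object names as found in the default 'rbd' pool):

  rados -p rbd listomapvals rbd_directory
  rados -p rbd listomapvals rbd_children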
Hi Everyone
I am using the ceph-disk command to prepare disks for an OSD.
The command is:
ceph-disk prepare --zap-disk --cluster $CLUSTERNAME --cluster-uuid $CLUSTERUUID
--fs-type xfs /dev/${1}
and this consistently raises the following error on RHEL7.1 and Ceph Hammer viz:
partx: specified ra
Hi all,
Coming back to that issue.
I successfully used Keystone users with the RADOS Gateway and the Swift API,
but I still don't understand how it can work with the S3 API, i.e. with S3
users (AccessKey/SecretKey).
I found the swift3 initiative, but I think it's only compliant in a pure
OpenStack Swift
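In case it helps, a rough sketch of the ceph.conf knobs involved (the
section name and URL are placeholders, and availability of the S3/Keystone
option depends on your RGW version):

  [client.radosgw.gateway]
    rgw keystone url = http://keystone.example.com:35357
    rgw keystone admin token = SECRET_TOKEN
    rgw s3 auth use keystone = true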
Thank you very much, Xiaoxi!
Your answer is very clear. I need to read more about CRUSH first, but your
answer basically helps me a lot. In practice, OSDs checking OSDs should be
OK. There is a possibility of a 'cross product' of heartbeat connections
among OSDs, but by designing rules, that can
Hi,
On 13/04/2015 16:15, HEWLETT, Paul (Paul)** CTR ** wrote:
> Hi Everyone
>
> I am using the ceph-disk command to prepare disks for an OSD.
> The command is:
>
> *ceph-disk prepare --zap-disk --cluster $CLUSTERNAME --cluster-uuid
> $CLUSTERUUID --fs-type xfs /dev/${1}*
>
> and this consiste
Redundancy is a means to an end, not an end itself.
If you can afford to lose component X, manually replace it, and then return
everything impacted to service, then there's no point in making X redundant.
If you can afford to lose a single disk (which Ceph certainly can), then
there's no point in
This bug fix release fixes a few critical issues with CRUSH. The most
important addresses a bug in feature bit enforcement that may prevent
pre-hammer clients from communicating with the cluster during an upgrade.
This only manifests in some cases (for example, when the 'rack' type is in
use
- Original Message -
> From: "Francois Lafont"
> To: ceph-users@lists.ceph.com
> Sent: Sunday, April 12, 2015 8:47:40 PM
> Subject: [ceph-users] Radosgw: upgrade Firefly to Hammer, impossible to
> create bucket
>
> Hi,
>
> On a testing cluster, I have a radosgw on Firefly and the
I really like this proposal.
On Mon, Apr 13, 2015 at 2:33 AM, Joao Eduardo Luis wrote:
> On 04/13/2015 02:25 AM, Christian Balzer wrote:
>> On Sun, 12 Apr 2015 14:37:56 -0700 Gregory Farnum wrote:
>>
>>> On Sun, Apr 12, 2015 at 1:58 PM, Francois Lafont
>>> wrote:
Somnath Roy wrote:
>>>
Hi all,
I've got a Ceph cluster which serves volumes to a Cinder installation. It
runs Emperor.
I'd like to be able to replace some of the disks with OPAL disks and create
a new pool which uses exclusively the latter kind of disk. I'd like to have
a "traditional" pool and a "secure" one coexisting
We are getting ready to put the Quantas into production. We looked at
the Supermicro Atoms (we have 6 of them); the rails were crap (they
exploded the first time you pulled the server out, and they stick out of
the back of the cabinet about 8 inches, and these boxes are already very
deep). We also ran out
I haven't really used the S3 stuff much, but the credentials should be in
Keystone already. If you're in Horizon, you can download them under Access
and Security -> API Access. Using the CLI, you can use the openstack client
like "openstack credential " or the keystone client
like "keystone ec2-c
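A short sketch with the unified openstack client (exact subcommands may
vary with the client version installed):

  openstack ec2 credentials create
  openstack ec2 credentials list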
I went for something similar to the Quantas boxes but 4 stacked in 1x 4U box
http://www.supermicro.nl/products/system/4U/F617/SYS-F617H6-FTPT_.cfm
When you do the maths, even something like a Banana Pi + disk starts
costing a similar amount, and you get so much more for your money in terms
of proc
For us, using two 40Gb ports with VLANs is redundancy enough. We are
doing LACP over two different switches.
On Mon, Apr 13, 2015 at 3:03 AM, Götz Reinicke - IT Koordinator
wrote:
> Dear ceph users,
>
> we are planning a Ceph storage cluster from scratch. It might be up to 1 PB
> within the next 3 ye
We got one of those as well. I think the cabling on the front and the
limited I/O options deterred us; otherwise, I really liked that box too.
On Mon, Apr 13, 2015 at 10:34 AM, Nick Fisk wrote:
> I went for something similar to the Quantas boxes but 4 stacked in 1x 4U box
>
> http://www.supermicro.n
We have the single-socket version of this chassis with 4 nodes in our
test lab. E3-1240v2 CPU with 10 spinners for OSDs, 2 DC S3700s, a 250GB
spinner for OS, 10GbE, and a SAS2308 HBA + on-board SATA. They work
well but were oddly a little slow for sequential reads from what I
remember. Overa
Hi Mark,
We added the 2x PCI drive-slot converter, so we managed to squeeze 12 OSDs
+ 2 journals into each tray.
We did look at the E3-based nodes, but as our first adventure into Ceph we
were unsure whether the single CPU would have enough grunt. Going forward,
now that we have some performance data, we might
On 04/13/2015 07:51 AM, Jason Dillaman wrote:
> Can you add "debug rbd = 20" to your config, re-run the rbd command, and
> paste a link to the generated client log file?
>
I set both the rbd and rados debug levels to 20
VOL=volume-61241645-e20d-4fe8-9ce3-c161c3d34d55
SNAP="$VOL"@snapshot-d535f359-503a-4eaf-9
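For reference, a sketch of the [client] section used to capture such a log
(the log path is only illustrative):

  [client]
    debug rbd = 20
    debug rados = 20
    log file = /var/log/ceph/client.$name.$pid.log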
Here's a vmcore, along with log files from Xen's crash dump utility.
https://drive.google.com/file/d/0Bz8b7ZiWX00AeHRhMjNvdVNLdDQ/view?usp=sharing
Let me know if we can help more.
On Fri, Apr 10, 2015 at 1:04 PM Ilya Dryomov wrote:
> On Fri, Apr 10, 2015 at 8:03 PM, Shawn Edwards
> wrote:
> >
I'm looking for documentation about what exactly each of these does, and
I can't find it. Can someone point me in the right direction?
The names seem too ambiguous to come to any conclusion about what
exactly they do.
Thanks,
Robert
Yes, when you flatten an image, the snapshots will remain associated with
the original parent. This is a side-effect of how librbd handles CoW with
clones. There is an open RBD feature request to add support for flattening
snapshots as well.
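A small sketch of the behaviour being described, with made-up image names:

  rbd snap create rbd/base@gold
  rbd snap protect rbd/base@gold
  rbd clone rbd/base@gold rbd/child
  rbd snap create rbd/child@before-flatten
  rbd flatten rbd/child
  # rbd/child itself no longer depends on the parent, but
  # rbd/child@before-flatten still references rbd/base@gold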
--
Jason Dillaman
Red Hat
dilla...@redhat.co
On Mon, Apr 13, 2015 at 10:18 PM, Shawn Edwards wrote:
> Here's a vmcore, along with log files from Xen's crash dump utility.
>
> https://drive.google.com/file/d/0Bz8b7ZiWX00AeHRhMjNvdVNLdDQ/view?usp=sharing
>
> Let me know if we can help more.
>
> On Fri, Apr 10, 2015 at 1:04 PM Ilya Dryomov wro
After doing some testing, I'm even more confused.
What I'm trying to achieve is minimal data movement when I have to service
a node to replace a failed drive. Since these nodes don't have hot-swap
bays, I'll need to power down the box to replace the failed drive. I don't
want Ceph to shuffle
On 04/13/2015 03:17 PM, Jason Dillaman wrote:
> Yes, when you flatten an image, the snapshots will remain associated with
> the original parent. This is a side-effect of how librbd handles CoW with
> clones. There is an open RBD feature request to add support for flattening
> snapshots as well
Hi,
Yehuda Sadeh-Weinraub wrote:
> You're not missing anything. The script was only needed when we used
> the process manager of the fastcgi module, but it has been very long
> since we stopped using it.
Just to be sure: if I understand correctly, these parts of the documentation:
1.
http://
- Original Message -
> From: "Francois Lafont"
> To: ceph-users@lists.ceph.com
> Sent: Monday, April 13, 2015 5:17:47 PM
> Subject: Re: [ceph-users] Purpose of the s3gw.fcgi script?
>
> Hi,
>
> Yehuda Sadeh-Weinraub wrote:
>
> > You're not missing anything. The script was only needed
Hi,
Robert LeBlanc wrote:
> What I'm trying to achieve is minimal data movement when I have to service
> a node to replace a failed drive. [...]
I will perhaps say something stupid, but it seems to me that this is
exactly the goal of the "noout" flag, isn't it?
1. ceph osd set noout
2. an old OSD disk fails
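In other words, the basic maintenance pattern looks roughly like this
(the OSD id and the sysvinit-style commands are only examples):

  ceph osd set noout
  service ceph stop osd.12    # then power off the node and do the hardware work
  # ... bring the node back up, the OSDs rejoin ...
  ceph osd unset noout

With noout set, down OSDs are not marked out, so Ceph does not start
re-replicating their data during the maintenance window (a replaced disk
still has to be re-prepared as a new OSD afterwards).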
Hi,
Yehuda Sadeh-Weinraub wrote:
> The 405 in this case usually means that rgw failed to translate the http
> hostname header into
> a bucket name. Do you have 'rgw dns name' set correctly?
Ah, I have found it, and indeed it concerned "rgw dns name", as Karan also
thought. ;)
But it's a little cur
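For the record, a sketch of the setting in question (the section name is
just an example; use your own RGW instance name and restart radosgw
afterwards):

  [client.radosgw.gateway]
    rgw dns name = ostore.athome.priv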
Hi all!
I am testing RBD performance with the kernel rbd driver. When I compared
the results for kernel 3.13.6 with 3.18.11, I got really confused.
Look at the results: they drop to roughly a third or less.
                 3.13.6 IOPS    3.18.11 IOPS
4KB seq read     97169          23714
4KB seq write    10110          3177
4KB rand read    7589
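The post doesn't say which tool produced these numbers; for reference, one
common way to run such a 4KB test against a mapped krbd device (device path
is only an example) is fio:

  fio --name=seqread --filename=/dev/rbd0 --rw=read --bs=4k \
      --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --group_reporting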
Joao Eduardo wrote:
> To be more precise, it's the lowest IP:PORT combination:
>
> 10.0.1.2:6789 = rank 0
> 10.0.1.2:6790 = rank 1
> 10.0.1.3:6789 = rank 3
>
> and so on.
OK, so if there are 2 possible quorums, the quorum with the
lowest IP:PORT will be chosen. But what happens if, in the
2 possi
UUID strings in the file /etc/fstab refer to devices. You can retrieve a
device's UUID using the blkid command, for instance:
# blkid /dev/sda1
/dev/sda1: UUID="1bcb1cbb-abd2-4cfa-bf89-1726ea6cd2fa" TYPE="xfs"
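A matching /etc/fstab line would then look roughly like this (the mount
point is just an example):

  UUID=1bcb1cbb-abd2-4cfa-bf89-1726ea6cd2fa  /var/lib/ceph/osd/ceph-0  xfs  defaults,noatime  0 0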
oyym...@gmail.com
From: Jesus Chavez (jeschave)
Date: 2015-04-14 07:03
To: oyym...@gmail.com
CC