Well, seems like they are on satellite :)
On 6 May 2015 at 02:58, Matthew Monaco wrote:
> On 05/05/2015 08:55 AM, Andrija Panic wrote:
> > Hi,
> >
> > small update:
> >
> > in 3 months - we lost 5 out of 6 Samsung 128GB 850 PROs (just a few days
> > between each SSD death) - can't believe it
Thanks Marc & Nick, that makes things much more clear!
/Götz
Am 05.05.15 um 11:36 schrieb Nick Fisk:
> Just to add, the cache tier promotes/demotes whole objects, so if you have
> lots of small random IOs you will need a lot more cache compared to the
> actual amount of hot data. Reduci
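Not from the original mail, but as a rough sketch of how that cache size is
usually capped; "hot-pool" and the numbers are placeholders, not anyone's real
setup:

# cap the cache tier at ~1TB and start flushing/evicting well before it fills
ceph osd pool set hot-pool target_max_bytes 1099511627776
ceph osd pool set hot-pool cache_target_dirty_ratio 0.4
ceph osd pool set hot-pool cache_target_full_ratio 0.8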
Hi,
Coming back to that issue.
My endpoint wasn't set up correctly.
I changed it to myrgw:myport (rgwow:8080) in the CloudBerry profile or in the
curl request, and I got a 403 error due to what looks like a bad role returned by
keystone.
In the radosgw log, I got
2015-05-05 14:58:23.895961 7fb9f4fe9700
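For anyone hitting the same 403: the rgw/keystone settings involved are roughly
the following ceph.conf block. The section name and values here are
placeholders, not the poster's actual config.

[client.radosgw.gateway]
rgw keystone url = http://keystone-host:35357
rgw keystone admin token = ADMIN_TOKEN
rgw keystone accepted roles = Member, admin
rgw s3 auth use keystone = true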
On 05/05/2015 08:54 PM, Steffen W Sørensen wrote:
>
>> On 05/05/2015, at 18.52, Sage Weil wrote:
>>
>> On Tue, 5 May 2015, Tony Harris wrote:
>>> So with this, will even numbers then be LTS? Since 9.0.0 is following
>>> 0.94.x/Hammer, and every other release is normally LTS, I'm guessing 10.x.x,
Hi folks,
Besides hardware, performance and failover design: how do you manage
to back up hundreds or thousands of TB :) ?
Any suggestions? Best practices?
A second Ceph cluster at a different location? "Bigger archive" disks in
good boxes? Or tape libraries?
What kind of backup software can handle s
If anyone here is interested in what became of my problem with
dreadfully bad performance with ceph, I'd like to offer this follow up.
The problem, as it turns out, is a regression that exists only in
version 3.18 of the kernel. Upgrading to 4.0 solved the problem, and
performance is now norma
For the moment, you can use snapshots for backup:
https://ceph.com/community/blog/tag/backup/
I think async mirroring is on the roadmap:
https://wiki.ceph.com/Planning/Blueprints/Hammer/RBD%3A_Mirroring
If you use qemu, you can also do qemu full backups. (qemu incremental backup is
coming for qemu 2
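A minimal sketch of the snapshot approach, with made-up pool and image names:

# take a point-in-time snapshot of an RBD image and dump it to a file
rbd snap create rbd/myimage@backup-20150506
rbd export rbd/myimage@backup-20150506 /backup/myimage-20150506.img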
Addendum
In the keystone log, I got
2015-05-06 11:42:24.594 10435 INFO eventlet.wsgi.server [-] 10.193.108.238 - -
[06/May/2015 11:42:24] "POST /v2.0/s3tokens HTTP/1.1" 404 247 0.003872
Something is missing
This is my new quest…
From: CHEVALIER Ghislain IMT/OLPS
Sent: Wednesday, 6 May 2015 10
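Not part of the original mail, but in OpenStack releases of that era the
/v2.0/s3tokens endpoint only exists when the s3 extension is enabled in
keystone-paste.ini, along these lines (the exact module path may differ per
release, so treat this as an assumption):

[filter:s3_extension]
paste.filter_factory = keystone.contrib.s3:S3Extension.factory

with s3_extension then added to the public_api pipeline.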
For me personally I would always feel more comfortable with backups on a
completely different storage technology.
Whilst there are many things you can do with snapshots and replication, there
is always a small risk that whatever causes data loss on your primary system
may affect/replicate to yo
A snapshot on the same storage cluster should definitely NOT be treated as
a backup.
A snapshot as a source for backup, however, can be a pretty good solution for
some cases, but not every case.
For example, if using Ceph to serve static web files, I'd rather have the
possibility to restore a given file from a given pat
My access_key and secret key were generated beforehand with the radosgw-admin
tool using the --gen-secret and --gen-access-key options. I wrote down the keys
and assigned them in step 5
-- --
??: "Karan Singh";;
: 2015??5??4??(??) 1:50
??:
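For reference, a typical way to pre-generate such keys; the uid and display
name below are placeholders, not the poster's:

radosgw-admin user create --uid=testuser --display-name="Test User"
radosgw-admin key create --uid=testuser --key-type=s3 --gen-access-key --gen-secret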
Hi again,
I found that a keystone extension is required for s3 to interact with keystone,
and it's possible to get the list of the installed extensions.
When I POST to http://10.194.167.23:5000/v2.0/extension, I got this in the
response body:
http://docs.openstack.org/identity/api/v2.0">
h
Hi,
While creating a Ceph user with a pre-generated key stored in a keyring
file, "ceph auth get-or-create" doesn't seem to take the keyring file into
account:
# cat /tmp/user1.keyring
[client.user1]
key = AQAuJEpVgLQmJxAAQmFS9a3R7w6EHAOAIU2uVw==
# ceph auth get-or-create -i /tmp/user1.keyring c
Hello,
I am trying to install the Rados gateway. I already have a running cluster, but I
changed the default cluster name "ceph" to cluster1. When trying to run the
radosgw (using /etc/init.d/radosgw start) it looks for the ceph.conf file by
default. Can you please advise me whether the radosgw can operate
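Not a definitive answer, but radosgw accepts the same --cluster / -c options as
the other Ceph daemons, so something like this should point it at the renamed
config (paths and instance name assumed):

# read /etc/ceph/cluster1.conf instead of the default ceph.conf
radosgw --cluster cluster1 -n client.radosgw.gateway
# or point at the config file explicitly
radosgw -c /etc/ceph/cluster1.conf -n client.radosgw.gateway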
According to the help for get-or-create, it looks like it should take an
input file. I've only ever used ceph auth import in this regard. I would
file a bug report on get-or-create.
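A quick sketch of that import workaround, using the same keyring file as in the
original mail:

# import the pre-generated key, then read it back to confirm
ceph auth import -i /tmp/user1.keyring
ceph auth get client.user1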
On Wed, May 6, 2015 at 8:36 AM, Sergio A. de Carvalho Jr. <
scarvalh...@gmail.com> wrote:
> Hi,
>
> While creating
As a point to
* someone accidentally removed a thing, and now they need a thing back
MooseFS has an interesting feature that I thought would be good for CephFS and
maybe others.
Basically a timed trash bin:
"Deleted files are retained for a configurable period of time (a file
system level
Case in point, here's a little story as to why backup outside ceph is
necessary:
I was working on modifying journal locations for a running test ceph
cluster when, after bringing back a few OSD nodes, two PGs started being
marked as incomplete. That made all operations on the pool hang as, for
On 05/05/15 02:24, Lionel Bouton wrote:
> On 05/04/15 01:34, Sage Weil wrote:
>> On Mon, 4 May 2015, Lionel Bouton wrote:
>>> Hi,
>>>
>>> we began testing one Btrfs OSD volume last week and for this first test
>>> we disabled autodefrag and began to launch manual btrfs fi defrag.
>>> [...]
>> Cool.
On 05/06/2015 12:51 PM, Lionel Bouton wrote:
On 05/05/15 02:24, Lionel Bouton wrote:
On 05/04/15 01:34, Sage Weil wrote:
On Mon, 4 May 2015, Lionel Bouton wrote:
Hi,
we began testing one Btrfs OSD volume last week and for this first test
we disabled autodefrag and began to launch manual btrfs
2015-05-06 20:51 GMT+03:00 Lionel Bouton :
> On 05/05/15 02:24, Lionel Bouton wrote:
>> On 05/04/15 01:34, Sage Weil wrote:
>>> On Mon, 4 May 2015, Lionel Bouton wrote:
Hi,
we began testing one Btrfs OSD volume last week and for this first test
we disabled autodefrag and began t
Hi,
On 05/06/15 20:04, Mark Nelson wrote:
> [...]
> Out of curiosity, do you see excessive memory usage during
> defragmentation? Last time I spoke to josef it sounded like it wasn't
> particularly safe yet and could make the machine go OOM, especially if
> there are lots of snapshots.
>
We have
Hi,
On 05/06/15 20:07, Timofey Titovets wrote:
> 2015-05-06 20:51 GMT+03:00 Lionel Bouton :
>> Is there something that would explain why initially Btrfs creates the
>> 4MB files with 128k extents (32 extents / file) ? Is it a bad thing for
>> performance ?
> This kind of behaviour is a reason why
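If anyone wants to reproduce the extent counts mentioned above, filefrag is the
usual tool; the OSD path below is just the default location and the PG
directory name is made up:

# extent counts for the object files of one PG directory
filefrag /var/lib/ceph/osd/ceph-0/current/3.2f_head/* | tail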
We have a separate primary and backup cluster running in two distinct
physical locations serving rbd images (totaling ~12TB at the moment) to
CIFS/NFS/iSCSI reshare hosts, serving clients. I do daily snapshots on
the primary cluster and then export-diff/import-diff on the backup
cluster, and then
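For anyone curious, the daily cycle described above looks roughly like this;
image, snapshot and host names are placeholders, not the actual setup:

# on the primary cluster: take today's snapshot, then ship the delta since yesterday
rbd snap create rbd/vm1@2015-05-07
rbd export-diff --from-snap 2015-05-06 rbd/vm1@2015-05-07 - | \
    ssh backup-host rbd import-diff - rbd/vm1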
Hi all,
http://ceph.com/docs/master/rados/operations/crush-map/#warning-when-tunables-are-non-optimal
says
"
The ceph-osd and ceph-mon daemons will start requiring the feature bits
of new connections as soon as they get the updated map. However,
already-connected clients are effectively grand
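Not in the original mail, but the commands involved in checking and changing
this are along these lines:

# see which tunables profile the cluster currently uses
ceph osd crush show-tunables
# switching profiles is what starts requiring the new feature bits
ceph osd crush tunables optimal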
Hello everyone,
I'm building a new infrastructure which will serve the S3 protocol, and I'd like
your help to estimate a hardware configuration for the radosgw servers. I found a
lot of information at http://ceph.com/docs/master/start/hardware-recommendations/
but nothing about the radosgw daemon.
Regar
Hi folks,
Calling on the collective Ceph knowledge here. Since upgrading to Hammer,
we're now seeing:
health HEALTH_WARN
too many PGs per OSD (1536 > max 300)
We have 3 OSDs, so we used a pg_num of 128 based on the suggestion
here: http://ceph.com/docs/master/rados/operat
Hi,
You have too many PGs for too few OSDs.
As the docs you linked say:
When using multiple data pools for storing objects, you need to ensure
that you balance the number of placement groups per pool with the number
of placement groups per OSD so that you arrive at a reasonable total
number of placem
Thanks for the feedback. That language is confusing to me, then, since the
first paragraph seems to suggest using a pg_num of 128 in cases where we
have less than 5 OSDs, as we do here.
The warning below that is: "As the number of OSDs increases, choosing the
right value for pg_num becomes more imp
Hi,
I am setting up ceph from the git master branch (git@github.com:ceph/ceph.git)
and followed the steps listed at
http://docs.ceph.com/docs/master/install/build-ceph/
The build was successful on my RHEL6 host and I used "make install" to
install the packages as described here:
http://docs.ceph.com/
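For completeness, the build steps on that page boil down to roughly the
following for the autotools-based tree of that time:

./autogen.sh
./configure
make
sudo make install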
I took a bit of time to get a feel for how different the straw2 mappings are vs
straw1 mappings. For a bucket in which all weights are the same, I saw no
changed mappings, which is as expected. However, on a map with 3 hosts each of
which has 4 osds with weights 1,2,3, and 4 (crush-different-w
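Not the exact procedure used above, but one way to compare mappings is to run
crushtool against the original map and a copy whose buckets were switched from
straw to straw2; rule id, replica count and file names below are assumptions:

crushtool -d crushmap.bin -o crushmap.txt           # decompile, edit "alg straw" -> "alg straw2"
crushtool -c crushmap.txt -o crushmap-straw2.bin    # recompile the edited map
crushtool -i crushmap.bin --test --show-mappings --rule 0 --num-rep 3 --max-x 10000 > before.txt
crushtool -i crushmap-straw2.bin --test --show-mappings --rule 0 --num-rep 3 --max-x 10000 > after.txt
diff before.txt after.txt | grep -c '^>'            # count changed mappings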
Hello,
I don't get it. You lost just 6 OSDs out of 145 and your cluster is not
able to recover?
What is the status of ceph -s ?
Saverio
2015-05-04 9:00 GMT+02:00 Yujian Peng :
> Hi,
> I'm encountering a data disaster. I have a ceph cluster with 145 OSDs. The
> data center had a power problem ye
Here's a little more information on our use case:
https://github.com/deis/deis/issues/3638
On Wed, May 6, 2015 at 2:53 PM, Chris Armstrong
wrote:
> Thanks for the feedback. That language is confusing to me, then, since the
> first paragraph seems to suggest using a pg_num of 128 in cases where w
On 07/05/15 07:53, Chris Armstrong wrote:
> Thanks for the feedback. That language is confusing to me, then, since
> the first paragraph seems to suggest using a pg_num of 128 in cases
> where we have less than 5 OSDs, as we do here.
>
> The warning below that is: "As the number of OSDs increases,
Just checking, are you aware of this?
http://ceph.com/pgcalc/
FYI, the warning is given based on the following logic.
int per = sum_pg_up / num_in;   // total PG placements across OSDs / number of "in" OSDs
if (per > g_conf->mon_pg_warn_max_per_osd) {
  // raise the "too many PGs per OSD" health warning
}
This does not take any resources into account. It depends solely on
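As a rough worked example, assuming size 3 pools since the actual pool layout
isn't shown in the thread: a warning of "1536 > max 300" on 3 OSDs means
sum_pg_up was 1536 * 3 = 4608 PG placements in total, which is what you would
get from e.g. 12 pools * 128 PGs * 3 replicas. Fewer pools, fewer PGs per pool,
or more OSDs all bring the ratio down.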
RadosGW is pretty light compared to the rest of Ceph, but it depends on
your use case.
RadosGW just needs network bandwidth and a bit of CPU. It doesn't access
the cluster network, just the public network. If you have some spare
public network bandwidth, you can run on existing nodes. If you p
This is an older post of mine on this topic:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-April/038484.html.
The only thing that's changed since then is that Hammer now supports
RadosGW object versioning. A combination of RadosGW replication,
versioning, and access control meets my ne
System users are the only ones that need to be created in both zones.
Non-system users (and their sub-users) should be created in the primary
zone. radosgw-agent will replicate them to the secondary zone. I didn't
create sub-users for my system users, but I don't think it matters.
I can read my
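For reference, creating such a system user usually looks something like this;
uid, keys and display name are placeholders:

# run against each zone so both ends share the same credentials
radosgw-admin user create --uid=sync-user --display-name="Zone sync user" \
    --access-key=SYNCACCESSKEY --secret=SYNCSECRETKEY --system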
Hi team,
Is it necessary to list in ceph.conf all the OSDs that we have in the
cluster?
Today we rebooted a cluster (5 nodes, RHEL 6.5) and some OSDs seem to have
changed IDs, so the crush map no longer matches reality.
Thanks
*Florent Monthel*
Hello,
On Thu, 7 May 2015 00:34:58 +0200 Saverio Proto wrote:
> Hello,
>
> I dont get it. You lost just 6 osds out of 145 and your cluster is not
> able to recover ?
>
He lost 6 OSDs at the same time.
With 145 OSDs and standard replication of 3, losing 3 OSDs makes data loss
already extremely
Hi there,
I am using Ubuntu 14.04 with Ceph version 0.80.9-1trusty: 2 OSD + mon nodes and 1
radosgw node. I followed the instructions in the docs:
http://docs.ceph.com/docs/master/radosgw/config/ but could not upload a file
using the swift interface. Command: $ swift -A http://client/auth/2.0 -U
storage
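Not the poster's actual command (it is cut off above), but a generic upload
against radosgw's swift API usually looks something like:

swift -A http://radosgw-host/auth/1.0 -U testuser:swift -K SWIFT_SECRET_KEY upload mycontainer myfile.txt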
We don't have OSD entries in our Ceph config. They are not needed if you
don't have specific configs for different OSDs.
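To illustrate, a minimal ceph.conf along these lines (values invented) works
with no [osd.N] sections at all, since OSDs find their data by id under
/var/lib/ceph/osd/:

[global]
fsid = 11111111-2222-3333-4444-555555555555
mon initial members = mon1, mon2, mon3
mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3
auth cluster required = cephx
auth service required = cephx
auth client required = cephx

[osd]
# only settings shared by all OSDs go here; no per-OSD sections are required
osd journal size = 5120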
Robert LeBlanc
Sent from a mobile device please excuse any typos.
On May 6, 2015 7:18 PM, "Florent MONTHEL"
wrote:
> Hi teqm,
>
> Is it necessary to indicate in ceph.conf all
Hello Craig,
Your answer was more or less what I thought...
For RadosGW, I'm thinking of 2 physical servers with 16GB RAM and 2 processors
with 4 cores each, a gigabit network for the internet side and a 10-gigabit
network to talk with the Ceph cluster, and in front of these servers I'll have a
load balancer to giv
- Original Message -
> From: "Sean"
> To: "Yehuda Sadeh-Weinraub"
> Cc: ceph-users@lists.ceph.com
> Sent: Tuesday, May 5, 2015 12:14:19 PM
> Subject: Re: [ceph-users] Civet RadosGW S3 not storing complete obects;
> civetweb logs stop after rotation
>
>
>
> Hello Yehuda and the rest
OK, I see the problem. Thanks for the explanation.
However, he talks about 4 hosts, so with the default CRUSH map losing 1
or more OSDs on the same host is irrelevant.
The real problem is that he lost 4 OSDs on different hosts with pools of size
3, so he lost the PGs that were mapped to 3 failing drives.
So h
Why don't you use AWS S3 directly, then?
Saverio
2015-04-24 17:14 GMT+02:00 Mike Travis :
> To those interested in a tricky problem,
>
> We have a Ceph cluster running at one of our data centers. One of our
> client's requirements is to have them hosted at AWS. My question is: How do
> we effecti