On 07-07-15 19:45, Eino Tuominen wrote:
> Hello,
>
> I tried to find documentation about version dependencies. I understand that a
> newer client (librados) should always be able to talk to an older server, but
> how about the other way round?
>
For example a Firefly (0.80.X) client can talk
Anyone have any data on optimal # of shards for a radosgw bucket index?
We've had issues with bucket index contention with a few million+
objects in a single bucket, so I'm testing out the sharding.
Perhaps at least one shard per OSD? Or less? More?
I noticed some discussion here regarding slow
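For anyone else experimenting with index sharding: the knob is a gateway-side
config option and it only applies to buckets created after the gateway is
restarted with it. A minimal ceph.conf sketch, assuming a Hammer-era radosgw
(section name and shard count are placeholders, not recommendations):

    [client.radosgw.gateway]
        rgw override bucket index max shards = 16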
Hey Christian,
Thanks, I haven't caught up with my ceph-users backlog from last week
yet so hadn't noticed that thread (SMR drives are something I was
thinking about for a DR cluster and long term archival pool behind
rgw). But note that the He8 drives are not SMR.
Cheers,
On 8 July 2015 at 11:0
Hi Mika,
Feature request created:
https://bugzilla.redhat.com/show_bug.cgi?id=1240888
On Mon, Jul 6, 2015 at 4:21 PM, Vickie ch wrote:
> Dear Cephers,
> When a bucket is created, the default quota setting is unlimited. Is
> there any setting that can change this? That is, so the admin has no need to change bu
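Until a configurable default exists, a per-user workaround sketch with
radosgw-admin (the uid and the limit are placeholders):

    radosgw-admin quota set --quota-scope=bucket --uid=user1 --max-objects=1024
    radosgw-admin quota enable --quota-scope=bucket --uid=user1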
Here are some technical references:
https://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
http://docs.aws.amazon.com/AmazonSimpleDB/latest/DeveloperGuide/HMACAuth.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html
You also might choose to use s3curl (w
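For reference, a minimal sketch of building the V2 signature by hand, assuming
a GET on /admin/usage and a user that has the matching admin caps; the host
and keys are placeholders:

    access_key="XXXXXXXXXXXX"
    secret_key="YYYYYYYYYYYY"
    resource="/admin/usage"
    httpdate=$(date -u '+%a, %d %b %Y %H:%M:%S GMT')
    # string-to-sign: VERB \n Content-MD5 \n Content-Type \n Date \n resource
    string_to_sign="GET\n\n\n${httpdate}\n${resource}"
    signature=$(printf "${string_to_sign}" | openssl sha1 -hmac "${secret_key}" -binary | base64)
    curl -H "Date: ${httpdate}" \
         -H "Authorization: AWS ${access_key}:${signature}" \
         "http://rgw.example.com${resource}?format=json"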
Re-added list.
On Wed, 8 Jul 2015 11:12:51 +1000 Nigel Williams wrote:
> On Wed, Jul 8, 2015 at 11:01 AM, Christian Balzer wrote:
> > In short SMR HDDs seem to be a bad match for Ceph or any random I/O.
>
> The He8 isn't shingled, though; it is a PMR drive like the He6.
>
Argh!
That's what you
On Wed, 8 Jul 2015 10:28:17 +1000 Blair Bethwaite wrote:
> Hi folks,
>
> Does anyone have any experience with the newish HGST He8 8TB Helium
> filled HDDs? Storagereview looked at them here:
> http://www.storagereview.com/hgst_ultrastar_helium_he8_8tb_enterprise_hard_drive_review.
> I'm torn as t
Hi folks,
Does anyone have any experience with the newish HGST He8 8TB Helium
filled HDDs? Storagereview looked at them here:
http://www.storagereview.com/hgst_ultrastar_helium_he8_8tb_enterprise_hard_drive_review.
I'm torn over the lower read performance shown there compared to e.g. the
He6 or Seagate
Hello,
I realize that one of the buckets in my cluster has some strange stats, and I see
that an issue like that was previously resolved in issue #3127
(http://tracker.ceph.com/issues/3127), so I'd like to know how I can identify
whether my case is the one described in that issue.
See the stats:
{
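For reference, the index stats for a bucket can be inspected and, if they are
out of sync, rebuilt with radosgw-admin (the bucket name is a placeholder):

    radosgw-admin bucket stats --bucket=mybucket
    radosgw-admin bucket check --bucket=mybucket
    radosgw-admin bucket check --bucket=mybucket --fix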
That's not something that CephFS supports yet; raw RADOS doesn't have any
kind of immutability support either. :(
-Greg
On Tue, Jul 7, 2015 at 5:28 PM Peter Tiernan wrote:
> Hi,
>
> i have a use case for CephFS whereby files can be added but not modified
> or deleted. Is this possible? Perhaps w
The errors keep coming back; eventually the status changes to OK, then
flips back into errors.
I thought it looked like a connectivity issue as well with the
"wrongly marked me down", but firewall rules are allowing all traffic
on the cluster network.
Syslog is being flooded with messages like:
Jul 7 10:
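A few checks that usually help narrow flapping down (the OSD id is a placeholder):

    ceph health detail                               # which OSDs are down and which PGs are affected
    ceph osd tree                                    # up/down state per host
    ceph daemon osd.0 config get osd_heartbeat_grace # run on the OSD's own host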
Run 'ceph-osd -i 0 -f' in a console and see what the output is.
Thanks & Regards
Somnath
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Fredy
Neeser
Sent: Tuesday, July 07, 2015 9:15 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] C
Hello,
I tried to find documentation about version dependencies. I understand that a
newer client (librados) should always be able to talk to an older server, but
how about the other way round?
--
Eino Tuominen
Unfortunately, it seems that CephFS currently doesn't support Hadoop 2.*.
The next step will be to try Tachyon on top of Ceph.
Has anybody tried such a combination already?
-Original Message-
From: Lionel Bouton [mailto:lionel+c...@bouton.name]
Sent: Tuesday, July 07, 2015 7:49 PM
To: Dmitr
Steve Dainard writes:
> Hello,
>
> Ceph 0.94.1
> 2 hosts, Centos 7
>
> I have two hosts, one which ran out of / disk space which crashed all
> the osd daemons. After cleaning up the OS disk storage and restarting
> ceph on that node, I'm seeing multiple errors, then health OK, then
> back into th
Hello,
Ceph 0.94.1
2 hosts, Centos 7
I have two hosts, one of which ran out of / disk space, which crashed all
the OSD daemons. After cleaning up the OS disk storage and restarting
Ceph on that node, I'm seeing multiple errors, then health OK, then
back into the errors:
# ceph -w
http://pastebin.com
On 07/07/15 18:20, Dmitry Meytin wrote:
> Exactly because of that issue I've reduced the number of Ceph replications to
> 2 and the number of HDFS copies is also 2 (so we're talking about 4 copies).
> I want (but didn't tried yet) to change Ceph replication to 1 and change HDFS
> back to 3.
You
Hi,
I have a use case for CephFS whereby files can be added but not modified
or deleted. Is this possible? Perhaps with cephFS layout or cephx
capabilities.
thanks in advance
Exactly because of that issue, I've reduced the number of Ceph replicas to 2,
and the number of HDFS copies is also 2 (so we're talking about 4 copies).
I want (but haven't tried it yet) to change Ceph replication to 1 and change HDFS
back to 3.
-Original Message-
From: Lionel Bouton [mai
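For reference, the replication change being considered here is a per-pool
setting (the pool name is a placeholder):

    ceph osd pool set data size 1
    ceph osd pool set data min_size 1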
Hi,
I had a working Ceph Hammer test setup with 3 OSDs and 1 MON (running on
VMs), and RBD was working fine.
The setup was not touched for two weeks (also no I/O activity), and when I
looked again, the cluster was in a bad state:
On the MON node (sto-vm20):
$ ceph health
HEALTH_WARN 72 pgs stal
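Stale PGs generally mean the OSDs serving them have stopped reporting in; a
quick way to confirm which ones before digging further:

    ceph osd stat
    ceph pg dump_stuck stale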
On 07/07/15 17:41, Dmitry Meytin wrote:
> Hi Lionel,
> Thanks for the answer.
> The missing info:
> 1) Ceph 0.80.9 "Firefly"
> 2) map-reduce makes sequential reads of blocks of 64MB (or 128 MB)
> 3) HDFS which is running on top of Ceph is replicating data for 3 times
> between VMs which could be l
On 2015-07-03 01:31:35 +, Johannes Formann said:
Hi,
When rebooting one of the nodes (e.g. for a kernel upgrade), the OSDs
do not seem to shut down correctly. Clients hang, and ceph osd tree shows
the OSDs of that node still up. Repeated runs of ceph osd tree show
them going down after a whi
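As an aside, the usual pattern for planned node reboots (it does not change how
quickly the OSDs are marked down, but it avoids unnecessary rebalancing while
the node is away):

    ceph osd set noout
    # reboot the node, wait for its OSDs to rejoin
    ceph osd unset noout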
Hi Lionel,
Thanks for the answer.
The missing info:
1) Ceph 0.80.9 "Firefly"
2) map-reduce makes sequential reads of blocks of 64MB (or 128 MB)
3) HDFS, which is running on top of Ceph, is replicating data 3 times between
VMs, which could be located on the same physical host or on different hosts
4)
Hello,
I have a test cluster of 12 OSDs. I deleted all pools, then I set six of them
out. After I created a pool of 100 PGs, I have the PGs stuck in a creating or
degraded state. Can you please advise? Does the CRUSH algorithm still take the
OSDs marked as down into consideration? Even if I have data sh
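For reference, these show which OSDs CRUSH is actually mapping a stuck PG to
(the PG id is a placeholder):

    ceph pg dump_stuck inactive
    ceph pg map 7.1a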
On Tue, Jul 7, 2015 at 4:02 PM, Dan van der Ster wrote:
> Hi Greg,
>
> On Tue, Jul 7, 2015 at 4:25 PM, Gregory Farnum wrote:
>>> 4. "mds cache size = 500" is going to use a lot of memory! We have
>>> an MDS with just 8GB of RAM and it goes OOM after delegating around 1
>>> million caps. (thi
Hi Dmitry,
On 07/07/15 14:42, Dmitry Meytin wrote:
> Hi Christian,
> Thanks for the thorough explanation.
> My case is Elastic Map Reduce on top of OpenStack with Ceph backend for
> everything (block, object, images).
> With default configuration, performance is 300% worse than bare metal.
> I di
Hi Greg,
On Tue, Jul 7, 2015 at 4:25 PM, Gregory Farnum wrote:
>> 4. "mds cache size = 500" is going to use a lot of memory! We have
>> an MDS with just 8GB of RAM and it goes OOM after delegating around 1
>> million caps. (this is with mds cache size = 10, btw)
>
> Hmm. We do have some
On Fri, Jul 3, 2015 at 10:34 AM, Dan van der Ster wrote:
> Hi,
>
> We're looking at similar issues here and I was composing a mail just
> as you sent this. I'm just a user -- hopefully a dev will correct me
> where I'm wrong.
>
> 1. A CephFS cap is a way to delegate permission for a client to do I
On Thu, Jul 2, 2015 at 11:38 AM, Matteo Dacrema wrote:
> Hi all,
>
> I'm using CephFS on Hammer and I have 1.5 million files, 2 metadata servers
> in active/standby configuration with 8 GB of RAM, 20 clients with 2 GB of
> RAM each, and 2 OSD nodes with four 80 GB OSDs and 4 GB of RAM.
> I've noticed tha
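For reference, the knob being discussed lives in ceph.conf on the MDS and is a
count of cached inodes, not bytes; the value shown is just the usual default,
not a recommendation:

    [mds]
        mds cache size = 100000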
Further clarification, 12:1 with SATA spinners as the OSD data drives.
On Tue, Jul 7, 2015 at 9:11 AM, David Burley
wrote:
> There is at least one benefit, you can go more dense. In our testing of
> real workloads, you can get a 12:1 OSD to Journal drive ratio (or even
> higher) using the P3700.
There is at least one benefit: you can go denser. In our testing of
real workloads, you can get a 12:1 OSD-to-journal drive ratio (or even
higher) using the P3700. This assumes you are willing to accept the impact
of losing 12 OSDs when a journal croaks.
On Tue, Jul 7, 2015 at 8:33 AM, Andrew
I'm trying to add an extra monitor with ceph-deploy;
the current/first monitor was installed by hand.
When I do
ceph-deploy mon add HOST
the new monitor seems to assimilate the old monitor,
so the old/first monitor is now in the same state as the new monitor
and is not aware of anything.
I needed t
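For what it's worth, 'ceph-deploy mon add' expects the ceph.conf it pushes to
already describe the existing monitor and the public network; a hedged sketch
(names and addresses are placeholders):

    # in the ceph.conf that ceph-deploy manages:
    [global]
        mon initial members = firstmon
        mon host = 192.168.1.10
        public network = 192.168.1.0/24

    # then push it and add the new monitor:
    ceph-deploy --overwrite-conf config push newmon
    ceph-deploy mon add newmon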
On 07-07-15 14:42, Dmitry Meytin wrote:
> Hi Christian,
> Thanks for the thorough explanation.
> My case is Elastic Map Reduce on top of OpenStack with Ceph backend for
> everything (block, object, images).
> With default configuration, performance is 300% worse than bare metal.
> I did a few ch
Thank you Christian,
That reassures me about what I was thinking regarding the MONs; I will resize them,
though, according to your advice and Paul's.
Regards,
Adrien
On Tue, Jul 7, 2015 at 6:18 AM, Christian Balzer wrote:
>
> Hello,
>
> On Sun, 5 Jul 2015 16:17:20 + Paul Evans wrote:
>
> > On Jul
Hi Christian,
Thanks for the thorough explanation.
My case is Elastic Map Reduce on top of OpenStack with Ceph backend for
everything (block, object, images).
With default configuration, performance is 300% worse than bare metal.
I made a few changes:
1) replication settings 2
2) read ahead size 20
Hello,
On Wed, 8 Jul 2015 00:33:59 +1200 Andrew Thrift wrote:
> We are running NVMe Intel P3700's as journals for about 8 months now.
> 1x P3700 per 6x OSD.
>
> So far they have been reliable.
>
> We are using S3700, S3710 and P3700 as journals and there is _currently_
> no real benefit of the
We are running NVMe Intel P3700's as journals for about 8 months now. 1x
P3700 per 6x OSD.
So far they have been reliable.
We are using S3700, S3710 and P3700 as journals and there is _currently_ no
real benefit of the P3700 over the SATA units as journals for Ceph.
Regards,
Andrew
On Tu
Hello,
On Tue, 7 Jul 2015 11:45:11 + Dmitry Meytin wrote:
> I think it's essential for huge data clusters to deal with data locality.
> Even very expensive network stack (100Gb/s) will not mitigate the
> problem if you need to move petabytes of data many times a day. Maybe
> there is some wo
Hi, I'm trying to use the admin ops API through curl, but I don't know where to get
"Authorization: AWS {access-key}:{hash-of-header-and-secret}".
Can anyone help me with how to compute the hash of the header and secret?
My test user info is:
{
"user_id": "user1",
"display_name": "user1",
"email": "",
Nope, I did not make any changes; it just worked fine when executed.
Regards
Teclus Dsouza
From: MOSTAFA Ali (INTERN) [mailto:ali.mostafa.int...@3ds.com]
Sent: Tuesday, July 07, 2015 5:25 PM
To: Teclus Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco);
ceph-users@lists.ceph.com
Subject: RE: [ceph-user
So the test succeeded. Did you make any changes, or did it work right away?
Regards,
ALi
From: Teclus Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco)
[mailto:tec...@cisco.com]
Sent: mardi 7 juillet 2015 13:46
To: MOSTAFA Ali (INTERN); ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Ceph Rados-G
Hi Ali,
I have used this command and it worked fine for me. Can you be specific about
what you want to see from this output?
Regards
Teclus Dsouza
From: MOSTAFA Ali (INTERN) [mailto:ali.mostafa.int...@3ds.com]
Sent: Tuesday, July 07, 2015 4:57 PM
To: Teclus Dsouza -X (teclus - TECH MAHINDRA LIM
I think it's essential for huge data clusters to deal with data locality.
Even a very expensive network stack (100 Gb/s) will not mitigate the problem if
you need to move petabytes of data many times a day.
Is there perhaps some workaround to the problem?
From: Van Leeuwen, Robert [mailto:rovanleeu
Since you are using Hammer, can you please test this method and send us your
feedback:
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/#add-an-rgw-instance
I didn't test it, as I don't really have time, but I would like to see the result.
Regards,
Ali
From: MOSTAFA Ali (INTERN)
Sent: mardi 7 juill
Hello,
You have the same problem I faced. To solve it, I moved my RGW to Ubuntu 15.04,
installed Apache 2.4.10, and used the Unix socket. The documentation is
missing some commands: after you create your rgw configuration file in the
conf-available folder of Apache, you have to enable it and
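For the Debian/Ubuntu Apache 2.4 layout described above, enabling a file
dropped into conf-available is typically (the file name is a placeholder):

    sudo a2enconf rgw
    sudo service apache2 reload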
Hello,
On Tue, 7 Jul 2015 09:51:56 + Van Leeuwen, Robert wrote:
> > I'm wondering if anyone is using NVME SSDs for journals?
> > Intel 750 series 400GB NVME SSD offers good performance and price in
> > comparison to let say Intel S3700 400GB.
> > http://ark.intel.com/compare/71915,86740 My c
> I'm wondering if anyone is using NVME SSDs for journals?
> Intel 750 series 400GB NVME SSD offers good performance and price in
> comparison to let say Intel S3700 400GB.
> http://ark.intel.com/compare/71915,86740
> My concern would be MTBF / TBW which is only 1.2M hours and 70GB per day for
>
CC'ing ceph-users, where you're likely to get a proper response.
Ceph-community is for community-related matters.
Cheers!
-Joao
On 07/07/2015 09:16 AM, Cristian Cristelotti wrote:
> Hi all,
>
> I'm facing an issue with a centralized Keystone and I can't create containers,
> with it returning er
Hi,
I'm wondering if anyone is using NVMe SSDs for journals.
The Intel 750 series 400GB NVMe SSD offers good performance and price in
comparison to, let's say, the Intel S3700 400GB.
http://ark.intel.com/compare/71915,86740
My concern would be MTBF / TBW, which is only 1.2M hours and 70GB per day
for 5yrs o
I am trying it on Ubuntu 14.04 using the Hammer release. I seem to have
everything set up, but I am not sure what the best alternative method to
test it is.
Regards
Teclus Dsouza
From: MOSTAFA Ali (INTERN) [mailto:ali.mostafa.int...@3ds.com]
Sent: Tuesday, July 07, 2015 2:36 PM
To: Teclus Dsouz
Hi,
Which OS are you using? I installed it on Ubuntu Vivid and it gave me a hard
time getting it to work; I didn't manage to make it work on Ubuntu Trusty. For Ubuntu
there are some missing commands. Since the Hammer release and the newest
ceph-deploy, you can install the RGW with a single command, but I d
Hi Florent,
Yes, this makes sense now.
Thanks a lot
V.
On 01/07/15 20:19 , Florent MONTHEL wrote:
Hi Valery,
With the old account, did you try to give FULL access to the new user ID?
The process should be:
From the OLD account, add FULL access to the NEW account (an S3 ACL, with CloudBerry for
example)
Hello Everyone,
I was trying to configure the Ceph Object Gateway and am running into
connectivity issues in the final boto script. I am following the link
http://docs.ceph.com/docs/master/radosgw/config/ for this.
I was able to get Apache and FastCGI configured, but in the Section f
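Before digging into the boto side, a quick way to check that the gateway
itself answers (the hostname is a placeholder); an anonymous request should
come back with an empty ListAllMyBucketsResult XML document:

    curl -i http://gateway.example.com/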