Look at this:
https://github.com/ncw/rclone/issues/47
Because this is a JSON dump, it encodes the / as \/.
It was a source of confusion for me too.
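For anyone else tripped up by this: "\/" is simply the escaped form of "/" and decodes back to the same string. A minimal Python sketch (the bucket value is made up for illustration):

import json

# A JSON encoder may legally emit "/" as "\/"; both decode to the same string.
escaped = '{"bucket": "my-bucket\\/prefix\\/object"}'  # illustrative radosgw-style output
plain = '{"bucket": "my-bucket/prefix/object"}'

assert json.loads(escaped) == json.loads(plain)
print(json.loads(escaped)["bucket"])  # -> my-bucket/prefix/object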
Best regards
Saverio
2015-08-24 16:58 GMT+02:00 Luis Periquito :
> When I create a new user using radosgw-admin most of the time the secret
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Wang, Zhiqiang
> Sent: 01 September 2015 02:48
> To: Nick Fisk ; 'Samuel Just'
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] any recommendation of using EnhanceIO?
>
> > ---
Hi Greg, Zheng,
Is this fixed in a later version of the kernel client? Or would it be wise for
us to start using the fuse client?
Cheers,
Simon
> -Original Message-
> From: Gregory Farnum [mailto:gfar...@redhat.com]
> Sent: 31 August 2015 13:02
> To: Yan, Zheng
> Cc: Simon Hallam; Zhen
Hi, I've started the bucket --check --fix on Friday evening and it's
still running. 'ceph -s' shows the cluster health as OK; I don't know if
there is anything else I could check. Is there a way of finding out if
it's actually doing something?
We only have this issue on the one bucket with versioni
> -Original Message-
> From: Nick Fisk [mailto:n...@fisk.me.uk]
> Sent: Tuesday, September 1, 2015 3:55 PM
> To: Wang, Zhiqiang; 'Nick Fisk'; 'Samuel Just'
> Cc: ceph-users@lists.ceph.com
> Subject: RE: [ceph-users] any recommendation of using EnhanceIO?
>
>
>
>
>
> > -Original Mes
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Wang, Zhiqiang
> Sent: 01 September 2015 09:18
> To: Nick Fisk ; 'Samuel Just'
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] any recommendation of using EnhanceIO?
>
> > ---
> -Original Message-
> From: Nick Fisk [mailto:n...@fisk.me.uk]
> Sent: Tuesday, September 1, 2015 4:37 PM
> To: Wang, Zhiqiang; 'Samuel Just'
> Cc: ceph-users@lists.ceph.com
> Subject: RE: [ceph-users] any recommendation of using EnhanceIO?
>
>
>
>
>
> > -Original Message-
> >
Hi,
Like Shylesh said: you need to obey alignment constraints. See
rados_ioctx_pool_requires_alignment in
http://ceph.com/docs/hammer/rados/api/librados/
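For illustration, a hedged sketch of what honoring that constraint can look like in Python (the 4096 value and the buffering helper are assumptions, not part of librados; the real alignment has to be queried from the pool via the call above):

# Sketch only: buffer appends and hand back whole aligned chunks, assuming the
# pool requires writes in multiples of `alignment` (e.g. an erasure-coded pool).
# 4096 is an assumed example value; query the pool for the real one.
class AlignedAppender:
    def __init__(self, alignment=4096):
        self.alignment = alignment
        self.buffer = b""

    def append(self, data):
        """Collect data and return the largest aligned chunk ready to write."""
        self.buffer += data
        ready = len(self.buffer) - (len(self.buffer) % self.alignment)
        chunk, self.buffer = self.buffer[:ready], self.buffer[ready:]
        return chunk  # pass this to the actual rados append call

appender = AlignedAppender()
print(len(appender.append(b"x" * 5000)))  # 4096 -> safe to write
print(len(appender.buffer))               # 904 bytes held back for the next call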
Cheers
On 01/09/2015 08:49, shylesh kumar wrote:
> I think this could be misaligned writes.
> Is it a multiple of 4k? It's just a wild guess.
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Wang, Zhiqiang
> Sent: 01 September 2015 09:48
> To: Nick Fisk ; 'Samuel Just'
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] any recommendation of using EnhanceIO?
>
> > ---
> On Sep 1, 2015, at 16:13, Simon Hallam wrote:
>
> Hi Greg, Zheng,
>
> Is this fixed in a later version of the kernel client? Or would it be wise
> for us to start using the fuse client?
>
> Cheers,
I just wrote a fix
https://github.com/ceph/ceph-client/commit/33b68dde7f27927a7cb1a7691e3c5
Data lives in another container attached to the OSD container as a Docker volume.
According to `deis ps -a`, this volume was created two weeks ago, though
all files in `current` are very recent. I suspect that something removed
files in the data volume after the reboot. As the reboot was caused by a CoreOS
update,
Hi Robert,
We are going to use Ceph with OCFS2 in production. My question: the RBD will be
mounted on 12 clients using OCFS2 clustering, and the network for both server and
client will be 1 Gig. Is the throughput performance OK for this setup?
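A rough ceiling check in Python, with assumed efficiency only (arithmetic, not a measurement of this setup):

# Back-of-the-envelope ceiling for a 1 GbE link; the 90% efficiency is an assumption.
link_gbps = 1.0
usable_mb_s = link_gbps * 1000 / 8 * 0.9  # ~112 MB/s after protocol overhead
clients = 12
print(f"per-link ceiling: ~{usable_mb_s:.0f} MB/s")
print(f"if all {clients} clients are active at once: ~{usable_mb_s / clients:.0f} MB/s each")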
Regards
Prabu
On Thu, 20 Aug 2015 02:15:53 +
Hi Greg,
Thanks for the update.
I think the documentation on Ceph should be reworded.
--snip--
http://ceph.com/docs/master/rados/operations/placement-groups/#choosing-the-number-of-placement-groups
* Less than 5 OSDs set pg_num to 128
* Between 5 and 10 OSDs set pg_num to 512
* Between 10 and
Hi,
we're in the process of swapping 480G drives for 1200G drives, which should cut
the number of OSDs I have to roughly 1/3.
My largest "volumes" pool for OpenStack volumes has 16384 PGs at the moment, and
I have 36K PGs in total. That equals ~180 PGs/OSD and would become ~500 PGs/OSD.
I k
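The scaling behind those numbers, as a quick sketch (only the figures quoted above are used):

# PGs per OSD scale inversely with the OSD count, so shrinking to ~1/3 of the
# OSDs roughly triples the ratio. Figures are the ones quoted above.
pgs_per_osd_now = 180
shrink_factor = 1 / 3  # 480G -> 1200G drives, about a third as many OSDs
print(round(pgs_per_osd_now / shrink_factor))  # ~540, matching the ~500 PGs/OSD estimate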
Thanks for the awesome advice folks. Until I can go larger scale (50+ SATA
disks), I’m thinking my best option here is to just swap out these 1TB SATA
disks with 1TB SSDs. Am I oversimplifying the short term solution?
Thanks,
--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Hello,
I have two large buckets in my RGW and I think the performance is
being impacted by the bucket index. One bucket contains 9 million
objects and the other one has 22 million. I'd like to shard the bucket
index and also change the ruleset of the .rgw.buckets.index pool to put
it on our
Hi all,
Recently I did some experiments on OSD data distribution.
We set up a cluster with 72 OSDs, all 2TB SATA disks;
the Ceph version is v0.94.3, the Linux kernel version is 3.18,
and we set "ceph osd crush tunables optimal".
There are 3 pools:
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset
Hi Jason,
I have a coredump that is 1200M compressed.
Where shall I put the dump?
I think the crashes are often triggered when I do a snapshot backup of the
VM images.
Then something happens with locking which causes the VM to crash.
Thanks
Christoph
On Mon, Aug 31, 2015 at 09
Can you bump up debug (debug rgw = 20, debug ms = 1), and see if the
operations (bucket listing and bucket check) go into some kind of
infinite loop?
Yehuda
On Tue, Sep 1, 2015 at 1:16 AM, Sam Wouters wrote:
> Hi, I've started the bucket --check --fix on friday evening and it's
> still running.
Hi!
open( ... O_APPEND) works fine in a single system. If many processes write to
the same file, their output will never overwrite each other.
On NFS overwriting is possible, as appending is only emulated: each write is
preceded by a seek to the current file size, and a race condition may occur.
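A minimal sketch of the local-filesystem case being described (the path and record are illustrative):

import os

# With O_APPEND on a local filesystem the kernel positions each write at EOF
# atomically, so concurrent appenders never overwrite one another. Over NFS the
# append is emulated (seek to EOF, then write), so two writers can race.
fd = os.open("/tmp/shared.log", os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
os.write(fd, ("record from pid %d\n" % os.getpid()).encode())
os.close(fd)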
Hi guys,
I am totally new to ceph-deploy. I have successfully installed a Ceph
cluster on an admin node and was able to activate it with one monitor and two OSDs.
After creating the Ceph cluster I checked the Ceph health status and the
output was OK.
With that success I started to move to the next stage for R
We are in a situation where we need to decrease the PG count for a pool as well.
One thought is to live-migrate with a block copy to a new pool with the
right number of PGs and then, once they are all moved, delete the old
pool. We don't have a lot of data in that
Hi Jan,
I am building two new clusters for testing. I've been reading your messages
on the mailing list for a while now and want to thank you for your support.
I can redo all the numbers, but is your question to run all the tests
again with [hdparm -W 1 /dev/sdc]? Please tell me what else you would
l
On Sep 1, 2015 4:41 PM, "Janusz Borkowski"
wrote:
>
> Hi!
>
> open( ... O_APPEND) works fine in a single system. If many processes
write to the same file, their output will never overwrite each other.
>
> On NFS overwriting is possible, as appending is only emulated - each
write is preceded by a s
Just swapping out spindles for SSD will not give you orders of
magnitude performance gains as it does in regular cases. This is
because Ceph has a lot of overhead for each I/O which limits the
performance of the SSDs. In my testing, two Intel S3500 S
Unfortunately we are not in control of the VMs using this pool, so something
like "sync -> stop VM -> incremental sync -> start VM on new pool" would be
extremely complicated. I _think_ it's possible to misuse a cache tier to do
this (add a cache tier, remove the underlying tier, add a new pool
Hi cephers,
I would like to know the production-readiness status of Accelio & Ceph.
Does anyone have a home-made procedure implemented with Ubuntu?
Recommendations, comments?
Thanks in advance,
Best regards,
*German*
I added sharding to our busiest RGW sites, but it will not shard existing
bucket indexes; it only applies to new buckets. Even with that change, I'm still
considering moving the index pool to SSD, the main factor being the rate of
writes. We are looking at a project that will have extremely high wr
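For anyone new to the feature, the idea is simply that index entries get spread over N RADOS objects instead of one; a toy Python sketch of the effect (this is not RGW's actual hashing scheme):

# Toy illustration only -- NOT RGW's real hash -- of why sharding flattens the
# index hot spot: entries spread roughly evenly over the shards.
import hashlib

def shard_for(object_name, num_shards=64):
    return int(hashlib.md5(object_name.encode()).hexdigest(), 16) % num_shards

counts = [0] * 64
for i in range(100000):
    counts[shard_for("photos/img_%07d.jpg" % i)] += 1
print(min(counts), max(counts))  # each shard holds roughly 1/64 of the entries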
I'm not convinced that a backing pool can be removed from a caching
tier. I just haven't been able to get around to trying it.
-
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Tue, Sep 1, 2015
Accelio and Ceph are still in heavy development and not ready for production.
-
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Tue, Sep 1, 2015 at 10:31 AM, German Anders wrote:
Hi cephers,
Got it — I’ll keep that in mind. That may just be what I need to “get by” for
now. Ultimately, we’re looking to buy at least three nodes of servers that can
hold 40+ OSDs backed by 2TB+ SATA disks,
Thanks,
--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled Vete
You will be the one best equipped to answer the performance question.
You will have to figure out what minimal performance your application
will need. Then you have to match the disks to that: (disk random IOPS
* # disks) / replicas will get you in th
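Plugging illustrative numbers into that formula (every value below is an assumption, just to show the shape of the estimate):

# (disk random IOPS * number of disks) / replicas -- rough ceiling on client write IOPS.
iops_per_disk = 100  # assumed for a 7.2k SATA spindle
disks = 36           # assumed cluster size
replicas = 3
print("~%d client write IOPS before journal and latency overhead" % (iops_per_disk * disks // replicas))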
I would caution against large OSD nodes. You can really get into a
pinch with CPU and RAM during recovery periods. I know a few people
have it working well, but it requires a lot of tuning to get it right.
Personally, 20 disks in a box are too much f
Be selective with the SSDs you choose. I personally have tried Micron M500DC,
Intel S3500, and some PCIE cards that would all suffice. There are MANY that do
not work well at all. A shockingly large list, in fact.
Intel 3500/3700 are the gold standards.
Warren
From: ceph-users [mailto:ceph-use
It looks like it; this is what shows up in the logs after bumping the debug
level and requesting a bucket list.
2015-09-01 17:14:53.008620 7fccb17ca700 10 cls_bucket_list
aws-cmis-prod(@{i=.be-east.rgw.buckets.index}.be-east.rgw.buckets[be-east.5436.1])
start
abc_econtract/data/6shflrwbwwcm6dsemrpjit2li3v9
Thanks a lot for the quick response, Robert. Any idea when it's going to be
ready for production? Is there any alternative solution with similar performance?
Best regards,
*German *
2015-09-01 13:42 GMT-03:00 Robert LeBlanc :
>
> Accelio and Ceph are st
Not sure where I can find the logs for the bucket check; I can't really
filter them out in the radosgw log.
-Sam
On 01-09-15 19:25, Sam Wouters wrote:
> It looks like it, this is what shows in the logs after bumping the debug
> and requesting a bucket list.
>
> 2015-09-01 17:14:53.008620 7fccb17c
Hi German,
We are working to make it production-ready ASAP. As you know, RDMA is very
resource-constrained but at the same time will outperform TCP. There
will be a definite tradeoff between cost and performance.
We are lacking ideas on how big an RDMA deployment could be and it w
I assume you filtered the log by thread? I don't see the response
messages. For the bucket check you can run radosgw-admin with
--log-to-stderr.
Can you also set 'debug objclass = 20' on the osds? You can do it by:
$ ceph tell osd.\* injectargs --debug-objclass 20
Also, it'd be interesting to ge
Hi Roy,
I understand. We are looking to use Accelio with a small starting
cluster of 3 MON and 8 OSD servers:
3x MON servers
2x Intel Xeon E5-2630v3 @2.40GHz (32C with HT)
24x 16GB DIMM DDR3 1333MHz (384GB)
2x 120GB Intel SSD DC S3500 (RAID-1 for OS)
1x ConnectX-3 VPI FDR 56Gb/
Hi,
see inline
On 01-09-15 20:14, Yehuda Sadeh-Weinraub wrote:
> I assume you filtered the log by thread? I don't see the response
> messages. For the bucket check you can run radosgw-admin with
> --log-to-stderr.
nothing is logged to the console when I do that
>
> Can you also set 'debug objclas
Thanks!
I think you should try installing from the Ceph mainline. There are some bug
fixes that went in after Hammer (not sure if they are backported).
I would say try with 1 drive -> 1 OSD first, since presently we have seen some
stability issues (mainly due to resource constraints) with more OSDs in
Thanks Roy, we're planning to grow this cluster if we can get the performance
that we need. The idea is to run non-relational databases here,
so it would be very I/O-intensive. We are talking growth of about
40-50 OSD servers with no more than 6 OSD daemons per server. If you have
some hints o
Thanks !
6 OSD daemons per server should be good.
Vu,
Could you please send out the doc you are maintaining ?
Regards
Somnath
From: German Anders [mailto:gand...@despegar.com]
Sent: Tuesday, September 01, 2015 11:36 AM
To: Somnath Roy
Cc: Robert LeBlanc; ceph-users
Subject: Re: [ceph-users] Acce
Sorry, forgot to mention:
- yes, filtered by thread
- the "is not valid" line occurred when performing the bucket --check
- when doing a bucket listing, I also get an "is not valid", but on a
different object:
7fe4f1d5b700 20 cls/rgw/cls_rgw.cc:460: entry
abc_econtract/data/6scbrrlo4vttk72melewiz
Thanks a lot guys, I'll configure the cluster and send you some feedback
once we test it
Best regards,
*German*
2015-09-01 15:38 GMT-03:00 Somnath Roy :
> Thanks !
>
> 6 OSD daemons per server should be good.
>
> Vu,
>
> Could you please send out the doc you are maintaining ?
>
> Regard
Hi,
I tried to set up read-only permission for a client, but it always appears
writable.
I did the following:
==Server end==
[client.cephfs_data_ro]
key = AQxx==
caps mon = "allow r"
caps osd = "allow r pool=cephfs_data, allow r pool=cephfs_metadata"
==Cl
Nick,
I've been trying to replicate your results without success. Can you
help me understand what I'm doing that is not the same as your test?
My setup is two boxes, one is a client and the other is a server. The
server has Intel(R) Atom(TM) CPU C
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Robert LeBlanc
> Sent: 01 September 2015 21:48
> To: Nick Fisk
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph SSD CPU Frequency Benchmarks
>
Hi German,
You can try this small wiki to set up ceph/accelio:
https://community.mellanox.com/docs/DOC-2141
thanks,
-vu
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of German
Anders
Sent: Tuesday, September 01, 2015 12:00 PM
To: Somnath Roy
Cc: ceph-users
Subject: Re: [
We also run RGW buckets with many millions of objects and had to shard
our existing buckets. We did have to delete the old ones first,
unfortunately.
I haven't tried moving the index pool to an SSD ruleset - would also
be interested in folks' experiences with this.
Thanks for the information on s
We have an application built on top of librados that has barely acceptable
performance and is in need of optimization. Since the code is functionally
correct, we have a hard time freeing up the resources to fully investigate
where the bottlenecks occur and fix them. We would like to hire a consult
- Original Message -
> From: "Aakanksha Pudipeddi-SSI"
> To: "Brad Hubbard"
> Sent: Wednesday, 2 September, 2015 6:25:49 AM
> Subject: RE: [ceph-users] Rados: Undefined symbol error
>
> Hello Brad,
>
> I wanted to clarify the "make install" part of building a cluster. I finished
> build
Hi ceph-users,
Hoping to get some help with a tricky problem. I have a rhel7.1 VM guest
(host machine also rhel7.1) with root disk presented from ceph 0.94.2-0
(rbd) using libvirt.
The VM also has a second rbd for storage presented from the same ceph
cluster, also using libvirt.
The VM boots fin
Hi Vu,
Thanks a lot for the link
Best regards,
*German*
2015-09-01 19:02 GMT-03:00 Vu Pham :
> Hi German,
>
> You can try this small wiki to setup ceph/accelio
> https://community.mellanox.com/docs/DOC-2141
>
> thanks,
> -vu
>
> *From:* ceph-users [mailto:ceph-users-b
Hi, ceph users:
I have a Ceph cluster for the RGW service in production, which was set up
according to the simple configuration tutorial, with only one default
region and one default zone. Even worse, I enabled neither the metadata
logging nor the data logging in the master zone.
Now I want to add a
Hello,
On Tue, 1 Sep 2015 11:50:07 -0500 Kenneth Van Alstyne wrote:
> Got it — I’ll keep that in mind. That may just be what I need to “get
> by” for now. Ultimately, we’re looking to buy at least three nodes of
> servers that can hold 40+ OSDs backed by 2TB+ SATA disks,
>
As mentioned, pick d