Hi everyone,
What would give the best performance in most cases, civetweb or
apache?
Thank you
Hi Edgaras...
Just quoting a previous statement from Yan:
" To use ACL, you need to add "--fuse_default_permissions=0
--client_acl_type=posix_acl" options to ceph-fuse. The
'--fuse_default_permissions=0' option disables kernel
file permission check and let ceph-fuse do the check."
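A full mount command using those options might look like this (just a sketch; the mount point is a placeholder):
$ ceph-fuse --fuse_default_permissions=0 --client_acl_type=posix_acl /mnt/cephfs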
Cheers
To enable quota, you need to pass "--client-quota" option to ceph-fuse
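For example (again a sketch; the mount point is a placeholder):
$ ceph-fuse --client-quota /mnt/cephfs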
Yan, Zheng
On Mon, May 23, 2016 at 3:18 PM, Goncalo Borges
wrote:
> Hi Edgaras...
>
>
> Just quoting a previous statement from Yan:
>
> " To use ACL, you need to add "--fuse_default_permissions=0
> --client_acl_type=posix_ac
Hi group,
A couple of weeks ago I ran into some issues with an incomplete
placement group.
Luckily, the data inside that PG was not relevant at all, so we decided to simply
remove it completely (using the flag osd_find_best_info_ignore_history_les).
However, now I cannot remove the bucke
Hi All
I'm doing some testing with OpenSUSE Leap 42.1; it ships with kernel 4.1.12,
but I've also tested with 4.1.24.
When I map an image with the kernel RBD client, max_sectors_kb = 127. I'm
unable to increase it:
# echo 4096 > /sys/block/rbd0/queue/max_sectors_kb
-bash: echo: write error: Invalid a
A while back I attempted to create an RBD volume manually, intending it to be
exactly the size of another LUN of around 100G. The command line instead
interpreted the number with the default MB unit for size, so I ended up with a
102400 TB volume. Deletion was painfully slow (I never used the volume, i
Hey,
Sadly I'm still battling this issue. I did notice one interesting thing.
I changed the cache settings for my cache tier to add redundancy to the
pool, which means a lot of recovery activity on the cache. During all this
there were absolutely no slow requests reported. Is there anything I can
c
Check the states of the PGs using "ceph pg dump" and, for every PG that is not
"active+clean", issue "ceph pg map <pg-id>" to get the mapped OSDs. Check
the state of those OSDs by looking at their logs under /var/log/ceph/.
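Something along these lines (a sketch; the PG id and OSD number below are just examples):
$ ceph pg dump | grep -v 'active+clean'
$ ceph pg map 1.2f
$ less /var/log/ceph/ceph-osd.3.log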
Regards,
Anand
On Mon, May 23, 2016 at 6:53 AM, Ken Peng wrote:
> Hi,
>
> # ceph -s
>
For performance, civetweb is better, as the fastcgi module used with
apache is single-threaded. Apache does have fancy features which
civetweb lacks, but if you are looking just for performance, go for
civetweb.
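For reference, enabling civetweb is a one-line frontend setting in ceph.conf (a sketch; the section name and port are assumptions):
[client.rgw.gateway]
rgw frontends = civetweb port=7480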
Regards,
Anand
On Mon, May 23, 2016 at 12:43 PM, fridifree wrote:
> Hi ev
Any plans to support quotas in CephFS kernel client?
-Mykola
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Heath Albritton
> Sent: 23 May 2016 01:24
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] NVRAM cards as OSD journals
>
> I'm contemplating the same thing as well. Or rather, I'
See here:
http://cephnotes.ksperis.com/blog/2014/07/04/remove-big-rbd-image
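Roughly, the approach described there is to delete the image's RADOS objects directly and then remove the (now empty) image. A sketch, where the pool, image name and object prefix are placeholders (check "rbd info" for the real block_name_prefix):
$ rbd info bigimage | grep block_name_prefix
$ rados -p rbd ls | grep '^rb.0.1234' | xargs -n 1 rados -p rbd rm
$ rbd rm bigimage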
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Adrian Saul
> Sent: 23 May 2016 09:37
> To: 'ceph-users@lists.ceph.com'
> Subject: [ceph-users] RBD removal issue
Hi,
we ran into the same rgw problem when updating from infernalis to jewel
(version 10.2.1). Now I would like to run the script from Yehuda, but I am a
bit scared by
>I can create and get new buckets and objects but I've "lost" all my
> old buckets.
As I understand it so far, we do not need the
Hi there,
I was updating Ceph to 0.94.7 and now I am getting segmentation faults.
When getting the status via "ceph -s" or "ceph health detail" I get a
"Segmentation fault" error.
I have only two monitor daemons, but I haven't had any problems with
that yet. Maybe the maintenance time was too
Hi Christian,
Please share your suggestion.
Regards
Prabu GJ
On Sat, 21 May 2016 17:33:33 +0530 gjprabu
wrote
Hi Christian,
There was a typo in my previous mail.
Thanks for your reply. It would be very helpful if we could get the details on OSDs per
serv
Hello All,
There is a problem mapping RBD images on Ubuntu
16.04 (kernel 4.4.0-22-generic).
The whole Ceph setup is based on Ubuntu 16.04 (deploy, monitors, OSDs
and clients).
#
#
Here is some output from my config:
$ ceph sta
Hello
I've recently updated my Hammer ceph cluster running on Ubuntu 14.04 LTS
servers and noticed a few issues during the upgrade. Just wanted to share my
experience.
I've installed the latest Jewel release. In my opinion, some of the issues I
came across relate to poor upgrade documentatio
Hello!
On Mon, May 23, 2016 at 11:26:38AM +0100, andrei wrote:
> 1. Ceph journals - After performing the upgrade the ceph-osd processes are
> not starting. I've followed the instructions and chowned /var/lib/ceph (also
> see point 2 below). The issue relates to the journal partitions, which ar
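One possible interim workaround is to chown the journal partitions as well (a sketch; the device name is a placeholder, and a plain chown may not survive reboots or device re-enumeration):
# chown ceph:ceph /dev/sdb1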
No plan so far. The current quota design requires the client to do a
bottom-to-top path walk, which is unfriendly for the kernel client (due to
the kernel's lock design).
On Mon, May 23, 2016 at 4:55 PM, Mykola Dvornik
wrote:
> Any plans to support quotas in CephFS kernel client?
>
> -Mykola
>
>
Thanks for a quick reply.
On Mon, 2016-05-23 at 20:08 +0800, Yan, Zheng wrote:
> No plan so far. The current quota design requires the client to do a
> bottom-to-top path walk, which is unfriendly for the kernel client (due to
> the kernel's lock design).
>
> On Mon, May 23, 2016 at 4:55 PM, Mykola Dvornik
> w
Parallel VFS lookups should improve this: https://lwn.net/Articles/685108/
On 23/05/16 15:08, Yan, Zheng wrote:
No plan so far. The current quota design requires the client to do a
bottom-to-top path walk, which is unfriendly for the kernel client (due to
the kernel's lock design).
On Mon, May 23, 2016 at 4:55
On Mon, May 23, 2016 at 10:12 AM, David wrote:
> Hi All
>
> I'm doing some testing with OpenSUSE Leap 42.1, it ships with kernel 4.1.12
> but I've also tested with 4.1.24
>
> When I map an image with the kernel RBD client, max_sectors_kb = 127. I'm
> unable to increase:
>
> # echo 4096 > /sys/bloc
On Mon, May 23, 2016 at 12:27 PM, Albert Archer
wrote:
> Hello All,
> There is a problem to mapping RBD images to ubuntu 16.04(Kernel
> 4.4.0-22-generic).
> All of the ceph solution is based on ubunut 16.04(deploy, monitors, OSDs and
> Clients).
> #
> ##
Thanks.
But how do I use these features?
So there is no way to use them with the Ubuntu 16.04 kernel (4.4.0)?
That's strange! 🤔
On Mon, May 23, 2016 at 5:28 PM, Albert Archer
wrote:
> Thanks.
> but how to use these features ???
> so,there is no way to implement them on ubuntu 16.04 kernel (
On Mon, May 23, 2016 at 2:59 PM, Albert Archer wrote:
>
> Thanks.
> but how to use these features ???
> so,there is no way to implement them on ubuntu 16.04 kernel (4.4.0) ???
> it's strange !!!
What is your use case? If you are using the kernel client, create your
images with
$ rbd create
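A minimal sketch of such a create, limited to the layering feature (the image name and size are placeholders), plus disabling the unsupported features on an existing image (the feature list assumes the Jewel defaults):
$ rbd create --size 10240 --image-feature layering myimage
$ rbd feature disable existingimage deep-flatten fast-diff object-map exclusive-lock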
Thanks.
The problem is, why does kernel 4.4.x support only the layering feature?
It never supports any of the other features!
On Mon, May 23, 2016 at 5:39 PM, Ilya Dryomov wrote:
> On Mon, May 23, 2016 at 2:59 PM, Albert Archer
> wrote:
> >
> > Thanks.
> > but how to use these features ???
> > so,there is n
I've been running some tests with Jewel and wanted to enable jemalloc.
I noticed that the new Jewel release now properly loads
/etc/default/ceph and has an option to use jemalloc.
I've installed jemalloc and enabled the LD_PRELOAD option; however, from
some tests it seems that it's still using tcmal
You need to build the Ceph code base with jemalloc to use it for the OSDs; LD_PRELOAD won't
work.
Thanks & regards
Somnath
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Luis
Periquito
Sent: Monday, May 23, 2016 7:30 AM
To: Ceph Users
Subject: [ceph-users]
Thanks Somnath, I expected as much. But given the hint in the config
files, do you know if the packages are built to use jemalloc? It seems not...
On Mon, May 23, 2016 at 3:34 PM, Somnath Roy wrote:
> You need to build ceph code base to use jemalloc for OSDs..LD_PRELOAD won't
> work..
>
> Thanks & rega
Yes, if you are using do_autogen, use the -J option. If you are running configure
directly, use --with-jemalloc.
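For example (a sketch, assuming the script is do_autogen.sh in the source tree):
$ ./do_autogen.sh -J
or, when running configure directly:
$ ./configure --with-jemalloc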
-Original Message-
From: Luis Periquito [mailto:periqu...@gmail.com]
Sent: Monday, May 23, 2016 7:44 AM
To: Somnath Roy
Cc: Ceph Users
Subject: Re: [ceph-users] using jemalloc i
Check out the Weighted Priority Queue option in Jewel, this really
helped reduce the impact of recovery and backfill on client traffic
with my testing. I think it really addresses a lot of the pain points
you mention.
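For reference, it can be selected in ceph.conf (a sketch; the default queue remains "prio"):
[osd]
osd op queue = wpq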
Hello,
I'm a fairly new user and I am trying to bring up radosgw.
I am following this page:
http://docs.ceph.com/docs/master/install/install-ceph-gateway/
I have Jewel 10.2.1 installed with a co-located admin/mon host and a separate
osd host
First a question: Can I run radosgw on a co-locate
I for one am terrified of upgrading due to these messages (and indications
that the problem still may not be resolved even in 10.2.1) - holding off
until a clean upgrade is possible without running any hacky scripts.
-Ben
On Mon, May 23, 2016 at 2:23 AM, nick wrote:
> Hi,
> we ran into the same
Re:
> 2. Inefficient chown documentation - The documentation states that one should
> "chown -R ceph:ceph /var/lib/ceph" if one is looking to have ceph-osd ran as
> user ceph and not as root. Now, this command would run a chown process one
> osd at a time. I am considering my cluster to be a
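A parallel variant is possible (a sketch, assuming the OSD data directories live under /var/lib/ceph/osd; journals on separate partitions need chowning too):
$ for d in /var/lib/ceph/osd/ceph-*; do chown -R ceph:ceph "$d" & done; wait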
TLDR;
Has anybody deployed a Ceph cluster using a single 40 gig nic? This is
discouraged in
http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/
"One NIC OSD in a Two Network Cluster:
Generally, we do not recommend deploying an OSD host with a single NIC in a
cluster with two n
For a 4MB block size EC-object use case (mostly reads, not so much
writes) we saw some benefit from separating the public/cluster networks at 40GbE. We
didn't use two NICs though; we configured two ports on one NIC.
Both networks can deliver up to 48Gb/s, but with a Mellanox card/Mellanox switch
combin
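For reference, splitting public and cluster traffic is just a ceph.conf setting (a minimal sketch; the subnets are placeholders):
[global]
public network = 10.10.0.0/24
cluster network = 10.10.1.0/24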
> On 23 May 2016 at 21:53, Brady Deetz wrote:
>
>
> TLDR;
> Has anybody deployed a Ceph cluster using a single 40 gig nic? This is
> discouraged in
> http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/
>
> "One NIC OSD in a Two Network Cluster:
> Generally, we do not reco
Hi,
I upgraded to 10.2.1 and noticed that lttng is a dependency for the RHEL
packages in that version. Since I have no intention of doing traces on ceph I
find myself wondering why ceph is now requiring these libraries to be
installed. Since the lttng packages are not included in RHEL/CentOS 7
Hi,
keeping it simple would, in my opinion, mean dividing different tasks across
different servers and networks.
The more stuff running on one device, the higher the chance that
things will influence each other and make debugging harder.
Our first setups had everything (mon, osd, mds) on one server
To be clear for future responders: separate MDS and MON servers are in the
design. Everything is the same as the OSD hardware except the chassis, and
there aren't 24 HDDs in there.
On May 23, 2016 4:27 PM, "Oliver Dzombic" wrote:
> Hi,
>
> keep it simple, would be in my opinion devide different ta
Hello Guys
Is several million objects with Ceph (for the RGW use case) still an issue,
or has it been fixed?
Thnx
Vickey
On Thu, Jan 28, 2016 at 12:55 AM, Krzysztof Księżyk
wrote:
> Stefan Rogge writes:
>
> >
> >
> > Hi,
> > we are using the Ceph with RadosGW and S3 setting.
> > With more and m
Thanks - all sorted.
> -Original Message-
> From: Nick Fisk [mailto:n...@fisk.me.uk]
> Sent: Monday, 23 May 2016 6:58 PM
> To: Adrian Saul; ceph-users@lists.ceph.com
> Subject: RE: RBD removal issue
>
> See here:
>
> http://cephnotes.ksperis.com/blog/2014/07/04/remove-big-rbd-image
>
>
>
We (my customer) are trying to test on Jewel now, but I can say that the
above behavior was also observed by my customer on Infernalis. After 300
million or so objects in a single bucket, the cluster basically fell down as
described above. There are a few hundred OSDs in this cluster. We are concerned that
thi
Hello,
On Mon, 23 May 2016 10:45:41 +0200 Peter Kerdisle wrote:
> Hey,
>
> Sadly I'm still battling this issue. I did notice one interesting thing.
>
> I changed the cache settings for my cache tier to add redundancy to the
> pool which means a lot of recover activity on the cache. During all
Hello,
separate public/private networks make sense for clusters that:
a) have vastly more storage bandwidth than a single link can handle or
b) are extremely read heavy on the client side, so replication reads can
be separated from the client ones.
I posit that neither is the case in your scena
Hello,
On Fri, 20 May 2016 15:52:45 + EP Komarla wrote:
> Hi,
>
> I am contemplating using a NVRAM card for OSD journals in place of SSD
> drives in our ceph cluster.
>
> Configuration:
>
> * 4 Ceph servers
>
> * Each server has 24 OSDs (each OSD is a 1TB SAS drive)
>
>
Hello,
On Fri, 20 May 2016 10:57:10 -0700 Anthony D'Atri wrote:
> [ too much to quote ]
>
> Dense nodes often work better for object-focused workloads than
> block-focused, the impact of delayed operations is simply speed vs. a
> tenant VM crashing.
>
Especially if they don't have SSD journ
Hello!
I have a cluster of 2 nodes with 3 OSDs each. The cluster is about 80% full.
df -H
/dev/sdc1  27G  24G  3.9G  86%  /var/lib/ceph/osd/ceph-1
/dev/sdd1  27G  20G  6.9G  75%  /var/lib/ceph/osd/ceph-2
/dev/sdb1  27G  24G  3.5G  88%  /var/lib/ceph/osd/ceph-0
When I switch off one
What does apache give that civetweb does not?
Thank you
On May 23, 2016 11:49 AM, "Anand Bhat" wrote:
> For performance, civetweb is better as fastcgi module associated with
> apache is single threaded. But Apache does have fancy features which
> civetweb lacks. If you are looking for just the performance
Hello,
On Tue, 24 May 2016 10:28:02 +0700 Никитенко Виталий wrote:
> Hello!
> I have a cluster of 2 nodes with 3 OSD each. The cluster full about 80%.
>
According to your CRUSH map that's not quite true, namely the ceph1-node2
entry.
And while that, again according to your CRUSH map, isn't in the de
I ran into this problem too.
Add the following entry to ceph.conf:
[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
When you map multiple images, there will be the same number of *.asok files under
/var/run/ceph/.
You can retrieve statistics for multiple images using those *.aso
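A socket can then be queried like this (a sketch; the socket file name is hypothetical):
$ ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.94123456789.asok perf dump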
Hey Christian,
I honestly haven't seen any replies to my earlier message. I will go through
my email and make sure I find them, my apologies.
I am graphing everything with collectd and graphite; this is what makes it
so frustrating, since I am not seeing any obvious pain points.
I am basically using t