Hi Sage,
Is there any timeline for the switch, so that we can plan ahead for
testing?
We are running apache + mod-fastcgi in production at scale (540 OSDs, 9 RGW
hosts) and it looks good so far, although at the beginning we came across a
problem with a large volume of 500 errors, which trac
there are release notes online for this version:
http://ceph.com/docs/master/release-notes/#v0-87-1-giant
Seems that someone just forgot to announce it on the ML.
On Thu, Feb 26, 2015 at 7:12 AM, Alexandre DERUMIER
wrote:
> Hi,
>
> I known that Loic Dachary was currently working on backporting new
Hi Alexandre,
https://github.com/ceph/ceph/commits/giant
does not have the release notes but
https://github.com/ceph/ceph/commit/1c68264928cbc87b2848161be98779c9b1adb66d
was committed to master by Sage a few hours ago and has them. An announcement will
most likely be posted in the next few days.
Hi Sage,
We switched from apache+fastcgi to civetweb (+haproxy) around one
month ago and so far it is working quite well. Just like GuangYang, we
had seen many 500 errors with fastcgi, but we never investigated it
deeply. After moving to civetweb we don't get any errors at all no
matter what load
I fully support Wido. We also have no problems.
OS: CentOS7
[root@s3backup etc]# ceph -v
ceph version 0.80.8 (69eaad7f8308f21573c604f121956e64679a52a7)
2015-02-26 13:22 GMT+03:00 Dan van der Ster :
> Hi Sage,
>
> We switched from apache+fastcgi to civetweb (+haproxy) around one
> month ago and
Sage,
I also support CivetWeb over Apache+FastCGI. I tried HAProxy with
multiple CivetWeb+RGW instances, and it performs very well. It is easy to
configure and gives better response time.
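For anyone curious, here is a minimal haproxy.cfg sketch of that layout; the
hostnames and ports are placeholders I made up, not values from our setup:

frontend rgw_front
    bind *:80
    mode http
    default_backend rgw_back

backend rgw_back
    mode http
    balance roundrobin
    option httpchk GET /
    # one "server" line per civetweb/radosgw instance
    server rgw1 rgw1.example.com:7480 check
    server rgw2 rgw2.example.com:7480 check

The httpchk probe lets haproxy drop an instance from rotation if its radosgw
stops answering.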
On Thu, Feb 26, 2015 at 4:09 PM, Irek Fasikhov wrote:
> I fully support Wido. We also have no problems.
>
> OS:
On 25-02-15 20:31, Sage Weil wrote:
Hey,
We are considering switching to civetweb (the embedded/standalone rgw web
server) as the primary supported RGW frontend instead of the current
apache + mod-fastcgi or mod-proxy-fcgi approach. "Supported" here means
both the primary platform the upstrea
On Thu, 26 Feb 2015, Luis Periquito wrote:
> there are release notes online for this
> version: http://ceph.com/docs/master/release-notes/#v0-87-1-giant
> Seems that someone just forgot to announce it on the ML.
That someone would be me, sorry! I've just sent them out now.
sage
This is the first (and possibly final) point release for Giant. Our focus
on stability fixes will be directed towards Hammer and Firefly.
We recommend that all v0.87 Giant users upgrade to this release.
Upgrading
---------
* Due to a change in the Linux kernel version 3.18 and the limits of th
I'd also like to set this up. I'm not sure where to begin. When you say
enabled by default, where is it enabled?
Many thanks,
Mike
On 2/25/15, 1:49 PM, "Sage Weil" wrote:
>On Wed, 25 Feb 2015, Robert LeBlanc wrote:
>> We tried to get radosgw working with Apache + mod_fastcgi, but due to
>> t
Thanks Sage for the quick reply!
-=Mike
On 2/26/15, 8:05 AM, "Sage Weil" wrote:
>On Thu, 26 Feb 2015, Michael Kuriger wrote:
>> I'd also like to set this up. I'm not sure where to begin. When you say
>> enabled by default, where is it enabled?
>
>The civetweb frontend is built into the rad
On Thu, 26 Feb 2015, Michael Kuriger wrote:
> I'd also like to set this up. I'm not sure where to begin. When you say
> enabled by default, where is it enabled?
The civetweb frontend is built into the radosgw process, so for the most
part you just have to get radosgw started and configured. It
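A minimal ceph.conf sketch to get that going might look like this (the
section name, host, and port here are illustrative, not prescriptive):

[client.radosgw.gateway]
host = gateway-host
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw frontends = "civetweb port=7480"

After that, starting the radosgw daemon (e.g. via its init script) should
bring civetweb up on the configured port.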
Hi Sage
On 25/02/15 19:31, Sage Weil wrote:
> We are considering switching to civetweb (the embedded/standalone
> rgw web server) as the primary supported RGW frontend instead of
> the current apache + mod-fastcgi or mod-proxy-fcgi approach.
> "Supp
On Thu, 26 Feb 2015, Wido den Hollander wrote:
> On 25-02-15 20:31, Sage Weil wrote:
> > Hey,
> >
> > We are considering switching to civetweb (the embedded/standalone rgw web
> > server) as the primary supported RGW frontend instead of the current
> > apache + mod-fastcgi or mod-proxy-fcgi approa
Yehuda Sadeh-Weinraub writes:
>
>
> - Original Message -
> > From: "Gregory Farnum"
> > To: "Tom Deneau"
> > Cc: ceph-users@...
> > Sent: Wednesday, February 25, 2015 3:20:07 PM
> > Subject: Re: [ceph-users] mixed ceph versions
> >
> > On Wed, Feb 25, 2015 at 3:11 PM, Deneau, Tom wr
Dear All,
The configuration of the MDS and the CephFS client is the same:
OS: CentOS 7.0.1406
ceph-0.87
Linux 3.10.0-123.20.1.el7.centos.plus.x86_64
dmesg: libceph: loaded (mon/osd proto 15/24)
dmesg: ceph: loaded (mds proto 32)
Using kernel ceph module, fstab mount options:
defaults,_netdev,ro,noatime,name=a
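For reference, a complete fstab entry of that shape would look roughly like
the line below; the monitor address, mount point, and credentials are
placeholders, not taken from the report:

mon1.example.com:6789:/  /mnt/cephfs  ceph  defaults,_netdev,ro,noatime,name=admin,secretfile=/etc/ceph/admin.secret  0  0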
Hi All,
I've been provided with this hardware:
4x HP server G8
18x 1TB HD per server (72 HDs in total)
3x SSD per server (12 in total)
I now have 2 questions:
1. How would it look to have 1 SSD serving the journals for 6 OSDs each?
Is it feasible? What would you do?
2. I thought it would not be a
Thanks, we were able to get it up and running very quickly. If it
performs well, I don't see any reason to use Apache+fast_cgi. I don't
have any problems just focusing on civetweb.
On Wed, Feb 25, 2015 at 2:49 PM, Sage Weil wrote:
> On Wed, 25 Feb 2015, Robert LeBlanc wrote:
>> We tried to get ra
For everybody else's reference, this is addressed in
http://tracker.ceph.com/issues/10944. That kernel has several known
bugs.
-Greg
On Tue, Feb 24, 2015 at 12:02 PM, Ilja Slepnev wrote:
> Dear All,
>
> Configuration of MDS and CephFS client is the same:
> OS: CentOS 7.0.1406
> ceph-0.87
> Linux
Robert --
We are still having trouble with this.
Can you share your [client.radosgw.gateway] section of ceph.conf and
were there any other special things to be aware of?
-- Tom
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf
[client.radosgw.gateway]
host = radosgw1
keyring = /etc/ceph/ceph.client.radosgw.keyring
# fastcgi socket shared with the apache mod_fastcgi frontend
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/radosgw/client.radosgw.gateway.log
# mod_fastcgi cannot handle 100-Continue, so disable it
rgw print continue = false
rgw enable ops log = false
rgw ops log rados = false
rgw ops l
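For completeness, the Apache side that pairs with that socket, as a rough
sketch based on the standard mod_fastcgi setup documented at the time (the
server name and paths are placeholders):

<VirtualHost *:80>
    ServerName gateway.example.com
    DocumentRoot /var/www

    RewriteEngine On
    # pass everything to the radosgw fastcgi wrapper, preserving the
    # Authorization header that S3 clients send
    RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]

    # external fastcgi server: radosgw listening on the socket from ceph.conf
    FastCgiExternalServer /var/www/s3gw.fcgi -socket /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
</VirtualHost>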
> On 26 Feb 2015, at 18:22, Sage Weil wrote:
>
>> On Thu, 26 Feb 2015, Wido den Hollander wrote:
>>> On 25-02-15 20:31, Sage Weil wrote:
>>> Hey,
>>>
>>> We are considering switching to civetweb (the embedded/standalone rgw web
>>> server) as the primary supported RGW
+1 for proxy. Keep the civetweb lean and mean and if people need
"extras" let the proxy handle this. Proxies are easy to set-up and a
simple example could be included in the documentation.
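As a rough illustration of what such an example might look like (an untested
sketch; the names, port, and certificate paths are placeholders), an nginx
proxy doing TLS in front of civetweb could be as small as:

server {
    listen 443 ssl;
    server_name rgw.example.com;

    ssl_certificate     /etc/nginx/rgw.crt;
    ssl_certificate_key /etc/nginx/rgw.key;

    location / {
        # hand everything to the local civetweb/radosgw instance;
        # the proxy layer takes care of TLS, filtering, logging, etc.
        proxy_pass http://127.0.0.1:7480;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}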
On Thu, Feb 26, 2015 at 11:43 AM, Wido den Hollander wrote:
>
>
>> On 26 Feb 2015, at 18:22, Sage Weil
Hi Axel,
On Thu, 26 Feb 2015, Axel Dunkel wrote:
> Sage,
>
> we use apache as a filter for security and additional functionality
> reasons. I do like the idea, but we'd need some kind of interface to
> filter/modify/process requests.
Civetweb has some basic functionality here:
https://g
Hello,
I have a problem. I am trying to make a symbolic link for a file, but it returns the
message: ln: failed to create symbolic link
‘./M_S8_L001_R1-2_001.fastq.gz_sylvio.sam_fixed.bam’: File exists
When I run the ls command, the result is
l? ? ? ? ?
Tackling this on a more piecemeal basis, I've stopped all OSDs, and
started just the three which exist on the first host.
osd.0 comes up without complaint: "osd.0 63675 done with init, starting
boot process"
osd.3 comes up without complaint: "osd.3 63675 done with init, starting
boot process"
osd
If you turn up "debug osd = 20" or something it'll apply a good bit
more disk load but give you more debugging logs about what's going on.
It could be that you're in enough of a mess now that it's stuck trying
to calculate past intervals for a bunch of PGs across so many maps
that it's swapping thi
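For reference, a sketch of how to turn that up on a live cluster without
restarting anything (the exact levels here are just examples):

# inject higher debug levels into every running OSD
ceph tell osd.* injectargs '--debug-osd 20 --debug-ms 1'

Or persistently, in the [osd] section of ceph.conf before restarting:

[osd]
debug osd = 20
debug ms = 1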
Thanks for the notes, Sage.
On 27 February 2015 at 00:46, Sage Weil wrote:
> We recommend that all v0.87 Giant users upgrade to this release.
>
When upgrading from 0.87 to 0.87.1, is there any special procedure that
needs to be followed? Or is it sufficient to upgrade each node and restart
ceph servi
On 2/26/2015 9:46 AM, Sage Weil wrote:
This is the first (and possibly final) point release for Giant. Our focus
on stability fixes will be directed towards Hammer and Firefly.
Is this something that was decided beforehand? Can we tell if a major
version is going to be maintained or not, bef
On Thu, 26 Feb 2015, Brian Rak wrote:
> On 2/26/2015 9:46 AM, Sage Weil wrote:
> > This is the first (and possibly final) point release for Giant. Our focus
> > on stability fixes will be directed towards Hammer and Firefly.
> >
> Is this something that was decided beforehand? Can we tell if a m
I just upgraded my Debian giant cluster:
1) on each node:
----------
apt-get update
apt-get dist-upgrade
2) on each node:
----------
/etc/init.d/ceph restart mon
#ceph -w ---> verify that HEALTH is ok before doing another node
3) on each node:
----------
/etc/init.d/ceph restart osd
#ce
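A quick sanity check after each step, as a sketch (assuming tell wildcards
work on your version):

ceph tell osd.* version   # every OSD should now report 0.87.1
ceph -s                   # health should be back to HEALTH_OK before moving on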
Thanks Mark for the results,
the default values seem to be quite reasonable indeed.
I also wonder if CPU frequency can have an impact on latency or not.
I'm going to benchmark on dual Xeon 10-core 3.1GHz nodes in the coming weeks;
I'll try to replay your benchmark to compare.
----- Original Message -----
From:
On 27 February 2015 at 16:01, Alexandre DERUMIER
wrote:
> I just upgraded my debian giant cluster,
>
> 1)on each node:
>
Just done that too, all looking good.
Thanks all.
--
Lindsay