On 20-Mar-16 23:23, Schlacta, Christ wrote:
What do you use as an interconnect between your osds, and your clients?
Two dual-port Mellanox 10Gb SFP NICs = 4 x 10Gbit/s ports on each
server.
On the servers each pair of ports is bonded, so we have two bonds: one for the
cluster net and one for the storage net.
Clients servers
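For reference, the two-network split described above maps onto ceph.conf roughly as follows; a minimal sketch, where the subnets are placeholders (the actual addressing is not given in the thread):

  # append to /etc/ceph/ceph.conf on each node; subnets are placeholders
  cat >> /etc/ceph/ceph.conf <<'EOF'
  [global]
  public network  = 192.168.10.0/24   # client/storage-facing bond
  cluster network = 192.168.20.0/24   # replication and backfill bond
  EOF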
Hello!
I have a 3-node cluster running ceph version 0.94.6
(e832001feaf8c176593e0325c8298e3f16dfb403)
on Ubuntu 14.04. When scrubbing I get this error:
-9> 2016-03-21 17:36:09.047029 7f253a4f6700 5 -- op tracker -- seq: 48045,
time: 2016-03-21 17:36:09.046984, event: all_read, op: osd_sub_op(unkn
Hi Cephers,
The setting "rgw frontends =
access_log_file=/var/log/civetweb/access.log
error_log_file=/var/log/civetweb/error.log" works in Firefly and
Giant.
But on Infernalis and Jewel the setting has no effect and the logs stay empty.
Does anyone know how to set up civetweb logging correctly in newer Ceph?
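For comparison, a sketch of an rgw section with civetweb logging enabled; the section name and port are assumptions, not taken from the message. As the follow-up just below notes, on Infernalis/Jewel the daemons run as the ceph user, so the log directory also needs to be writable by that user:

  # ceph.conf fragment; section name and port are placeholders
  cat >> /etc/ceph/ceph.conf <<'EOF'
  [client.rgw.gateway]
  rgw frontends = civetweb port=7480 access_log_file=/var/log/civetweb/access.log error_log_file=/var/log/civetweb/error.log
  EOF
  # Infernalis/Jewel run the radosgw daemon as the ceph user:
  mkdir -p /var/log/civetweb
  chown ceph:ceph /var/log/civetweb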
Hi Cephers,
I didn't notice that the user had already been changed from root to ceph. After
changing the directory permissions, the problem is fixed. Thank you all.
Best wishes,
Mika
2016-03-22 16:50 GMT+08:00 Mika c :
> Hi Cephers,
> Setting of "rgw frontends =
> access_log_file=/var/log/civetweb/access
Hi all,
I have 20 OSDs and 1 pool, and, as recommended by the doc(
http://docs.ceph.com/docs/master/rados/operations/placement-groups/), I
configured pg_num and pgp_num to 4096, size 2, min size 1.
But ceph -s shows:
HEALTH_WARN
534 pgs degraded
551 pgs stuck unclean
534 pgs undersized
too many
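The replies below ask for more detail; a minimal set of commands to gather it (the pool name 'rbd' is just a placeholder):

  ceph -s
  ceph health detail
  ceph osd tree                  # are all 20 OSDs up and in?
  ceph osd pool get rbd pg_num   # placeholder pool name
  ceph osd pool get rbd size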
Hi,
Can you please share the "ceph health detail" output?
Thanks
Swami
On Tue, Mar 22, 2016 at 3:32 PM, Zhang Qiang wrote:
> Hi all,
>
> I have 20 OSDs and 1 pool, and, as recommended by the
> doc(http://docs.ceph.com/docs/master/rados/operations/placement-groups/), I
> configured pg_num and pgp
Hi Zhang,
are you sure that all your 20 OSDs are up and in?
Please provide the complete output of ceph -s or, better, with the detail flag.
Thank you :-)
--
With kind regards / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Address:
IP Interactive UG ( haft
Hi Reddy,
It's over a thousand lines, so I pasted it on gist:
https://gist.github.com/dotSlashLu/22623b4cefa06a46e0d4
On Tue, 22 Mar 2016 at 18:15 M Ranga Swami Reddy
wrote:
> Hi,
> Can you please share the "ceph health detail" output?
>
> Thanks
> Swami
>
> On Tue, Mar 22, 2016 at 3:32 PM, Zhang Q
Hi desmond,
this seems like a lot of work for 90 OSDs, and it invites typing mistakes.
Every disk change needs extra editing too.
This weighting was done automatically in former versions.
Do you know why and where this changed, or did I do something wrong at some
point?
Markus
On 21.03.2016 at 13:28
Hi Matt & Cephers,
I am looking for advice on setting up a file system based on Ceph. As CephFS is
not yet production ready (or have I missed some breakthroughs?), the new NFS on
RadosGW looks like a promising alternative, especially for large files, which is
what we are most interested in. However, a
Hi,
I've been looking on the internet regarding two settings which might influence
performance with librbd.
When attaching a disk with Qemu you can set a few things:
- cache
- aio
The default for libvirt (in both CloudStack and OpenStack) for 'cache' is
'none'. Is that still the recommended value
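For illustration, roughly how the two knobs surface on the qemu command line; pool and image names are placeholders. Note that aio=native requires O_DIRECT, so it is normally paired with cache=none rather than writeback:

  # librbd cache on (qemu cache=writeback), thread-based aio
  qemu-system-x86_64 \
    -drive file=rbd:rbd/vm-disk:conf=/etc/ceph/ceph.conf,format=raw,if=virtio,cache=writeback,aio=threads

  # cache=none disables the librbd cache; native aio needs O_DIRECT
  qemu-system-x86_64 \
    -drive file=rbd:rbd/vm-disk:conf=/etc/ceph/ceph.conf,format=raw,if=virtio,cache=none,aio=native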
Hi Wido,
On 22/03/2016 13:52, Wido den Hollander wrote:
Hi,
I've been looking on the internet regarding two settings which might influence
performance with librbd.
When attaching a disk with Qemu you can set a few things:
- cache
- aio
The default for libvirt (in both CloudStack and OpenSt
> > I've been looking on the internet regarding two settings which might
> > influence
> > performance with librbd.
> >
> > When attaching a disk with Qemu you can set a few things:
> > - cache
> > - aio
> >
> > The default for libvirt (in both CloudStack and OpenStack) for 'cache' is
> > 'none'. I
Hi Xusangdi,
NFS on RGW is not intended as an alternative to CephFS. The basic idea is to
expose the S3 namespace using Amazon's prefix+delimiter convention (delimiter
currently limited to '/'). We use opens for atomicity, which implies NFSv4 (or
4.1). In addition to limitations by design, t
Hello All,
I have experience using Lustre but I am new to the Ceph world, I have some
questions to the Ceph users out there.
I am thinking about deploying a Ceph storage cluster that lives in multiple
location "Building A" and "Building B”, this cluster will be comprised of two
dell servers w
Hi Jason,
On 22/03/2016 14:12, Jason Dillaman wrote:
We actually recommend that OpenStack be configured to use writeback cache [1].
If the guest OS is properly issuing flush requests, the cache will still
provide crash-consistency. By default, the cache will automatically start up
in wr
> Hi Jason,
>
> On 22/03/2016 14:12, Jason Dillaman wrote:
> >
> > We actually recommend that OpenStack be configured to use writeback cache
> > [1]. If the guest OS is properly issuing flush requests, the cache will
> > still provide crash-consistency. By default, the cache will automaticall
On Tue, Mar 22, 2016 at 4:48 PM, Jason Dillaman wrote:
>> Hi Jason,
>>
>> On 22/03/2016 14:12, Jason Dillaman wrote:
>> >
>> > We actually recommend that OpenStack be configured to use writeback cache
>> > [1]. If the guest OS is properly issuing flush requests, the cache will
>> > still provi
On Tue, Mar 22, 2016 at 2:37 PM, Ben Archuleta wrote:
> Hello All,
>
> I have experience using Lustre but I am new to the Ceph world, I have some
> questions to the Ceph users out there.
>
> I am thinking about deploying a Ceph storage cluster that lives in multiple
> location "Building A" and "
On Tue, Mar 22, 2016 at 1:12 PM, Xusangdi wrote:
> Hi Matt & Cephers,
>
> I am looking for advise on setting up a file system based on Ceph. As CephFS
> is not yet productive ready(or I missed some breakthroughs?), the new NFS on
> RadosGW should be a promising alternative, especially for large
Hello All,
I’m experiencing some issues installing Teuthology on CentOS 6.5.
I’ve tried installing it in a number of ways:
* Within a python virtual environment
* Using "pip install teuthology" directly
The installation fails in both cases.
a) In a python virtual environment (using pip
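For reference, a sketch of what (a) usually amounts to; the git URL and the requirements step are assumptions based on the upstream repository layout, not taken from the message:

  # CentOS 6.5: python virtualenv install from the teuthology source tree
  virtualenv ./venv
  . ./venv/bin/activate
  git clone https://github.com/ceph/teuthology.git
  cd teuthology
  pip install -r requirements.txt   # this is where the failures show up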
Hi Zhang,
Yes, I saw your answer already.
First of all, you should make sure that there is no clock skew; this can cause
some side effects.
According to
http://docs.ceph.com/docs/master/rados/operations/placement-groups/
you have to calculate:
Total PGs = (OSDs * 100) / pool size
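Spelled out for the cluster in this thread (20 OSDs, size 2), a minimal sketch; the pool name 'rbd' is a placeholder:

  osds=20 replicas=2
  echo $(( osds * 100 / replicas ))   # 1000 -> round up to the next power of two: 1024
  ceph osd pool set rbd pg_num 1024
  ceph osd pool set rbd pgp_num 1024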
Hey guys,
I'm trying to wrap my head around Ceph cache tiering to figure out whether what
I want is achievable.
My cluster consists of 6 OSD nodes with normal HDDs and one cache tier of SSDs.
What I would love is for Ceph to flush and evict data as soon as a file hasn't
been requested by a client in
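The closest built-in knobs are the age minimums plus the usual size/ratio targets on the cache pool; a sketch with a placeholder pool name and values, assuming a writeback cache tier is already attached:

  # minimum ages before dirty objects are flushed / clean objects evicted
  ceph osd pool set ssd-cache cache_min_flush_age 600    # seconds
  ceph osd pool set ssd-cache cache_min_evict_age 1800
  # flushing and eviction are still driven by the size/ratio targets
  ceph osd pool set ssd-cache target_max_bytes 1099511627776   # 1 TiB
  ceph osd pool set ssd-cache cache_target_dirty_ratio 0.4
  ceph osd pool set ssd-cache cache_target_full_ratio 0.8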
On Tue, Mar 22, 2016 at 1:19 AM, Max A. Krasilnikov wrote:
> Hello!
>
> I have 3-node cluster running ceph version 0.94.6
> (e832001feaf8c176593e0325c8298e3f16dfb403)
> on Ubuntu 14.04. When scrubbing I get error:
>
> -9> 2016-03-21 17:36:09.047029 7f253a4f6700 5 -- op tracker -- seq:
> 480
I got it: the suggested pg_num is the total; I need to divide it by the number
of replicas.
Thanks Oliver, your answer is very thorough and helpful!
On 23 March 2016 at 02:19, Oliver Dzombic wrote:
> Hi Zhang,
>
> yeah i saw your answer already.
>
> At very first, you should make sure that
Hello!
On Tue, Mar 22, 2016 at 11:40:39AM -0700, gfarnum wrote:
> On Tue, Mar 22, 2016 at 1:19 AM, Max A. Krasilnikov
> wrote:
>>
>> -1> 2016-03-21 17:36:09.048201 7f253f912700 -1 log_channel(cluster) log
>> [ERR] : 5.ca recorded data digest 0xb284fef9 != on disk 0x43d61c5d on
>> 6134ccca
On Tue, Mar 22, 2016 at 9:37 AM, John Spray wrote:
> On Tue, Mar 22, 2016 at 2:37 PM, Ben Archuleta wrote:
>> Hello All,
>>
>> I have experience using Lustre but I am new to the Ceph world, I have some
>> questions to the Ceph users out there.
>>
>> I am thinking about deploying a Ceph storage c
I was able to get this back to HEALTH_OK by doing the following:
1. Allow ceph-objectstore-tool to run over a weekend attempting to export the
PG. Looking at timestamps, it took approximately 6 hours to complete
successfully.
2. Import the PG into an unused OSD and start it up+out (rough commands
sketched below).
3. Allow the cluster
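A rough sketch of the export/import from steps 1 and 2; the OSD ids, paths and pgid are placeholders, and ceph-objectstore-tool must run with the OSD stopped:

  # step 1: export the PG from the stopped source OSD
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
      --journal-path /var/lib/ceph/osd/ceph-12/journal \
      --op export --pgid 1.2f --file /root/1.2f.export

  # step 2: import it into another stopped, otherwise unused OSD, then start it
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-30 \
      --journal-path /var/lib/ceph/osd/ceph-30/journal \
      --op import --file /root/1.2f.export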
Hi Stable Release Team for v0.94,
Let's try again... Any news on a release of v0.94.6 for debian wheezy (bpo70)?
Cheers,
Chris
On Thu, Mar 17, 2016 at 12:43:15PM +1100, Chris Dunlop wrote:
> Hi Chen,
>
> On Thu, Mar 17, 2016 at 12:40:28AM +, Chen, Xiaoxi wrote:
>> It’s already there, in
>
Hi Zhang...
If I can add some more info: changing the number of PGs is a heavy operation,
and as far as I know, you should NEVER decrease PGs. From the notes in pgcalc
(http://ceph.com/pgcalc/):
"It's also important to know that the PG count can be increased, but NEVER
decreased without destroying / re
On 22/03/2016 23:49, Chris Dunlop wrote:
> Hi Stable Release Team for v0.94,
>
> Let's try again... Any news on a release of v0.94.6 for debian wheezy (bpo70)?
I don't think publishing a debian wheezy backport for v0.94.6 is planned. Maybe
it's a good opportunity to initiate a community effort
Hi Loïc,
On Wed, Mar 23, 2016 at 12:14:27AM +0100, Loic Dachary wrote:
> On 22/03/2016 23:49, Chris Dunlop wrote:
>> Hi Stable Release Team for v0.94,
>>
>> Let's try again... Any news on a release of v0.94.6 for debian wheezy
>> (bpo70)?
>
> I don't think publishing a debian wheezy backport fo
Hi Chris,
On 23/03/2016 00:39, Chris Dunlop wrote:
> Hi Loïc,
>
> On Wed, Mar 23, 2016 at 12:14:27AM +0100, Loic Dachary wrote:
>> On 22/03/2016 23:49, Chris Dunlop wrote:
>>> Hi Stable Release Team for v0.94,
>>>
>>> Let's try again... Any news on a release of v0.94.6 for debian wheezy
>>> (bpo
Hi Loïc,
On Wed, Mar 23, 2016 at 01:03:06AM +0100, Loic Dachary wrote:
> On 23/03/2016 00:39, Chris Dunlop wrote:
>> "The old OS'es" that were being supported up to v0.94.5 includes debian
>> wheezy. It would be quite surprising and unexpected to drop support for an
>> OS in the middle of a stable
On 23/03/2016 01:12, Chris Dunlop wrote:
> Hi Loïc,
>
> On Wed, Mar 23, 2016 at 01:03:06AM +0100, Loic Dachary wrote:
>> On 23/03/2016 00:39, Chris Dunlop wrote:
>>> "The old OS'es" that were being supported up to v0.94.5 includes debian
>>> wheezy. It would be quite surprising and unexpected to
Hello,
On Tue, 22 Mar 2016 12:28:22 -0400 Maran wrote:
> Hey guys,
>
> I'm trying to wrap my head about the Ceph Cache Tiering to discover if
> what I want is achievable.
>
> My cluster exists of 6 OSD nodes with normal HDD and one cache tier of
> SSDs.
>
One cache tier being what, one node?
Hi Zhang,
From the ceph health detail output, I suggest the NTP servers be calibrated.
Can you share the crush map output?
2016-03-22 18:28 GMT+08:00 Zhang Qiang :
> Hi Reddy,
> It's over a thousand lines, I pasted it on gist:
> https://gist.github.com/dotSlashLu/22623b4cefa06a46e0d4
>
> On Tue,
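The two checks being asked for here, as commands; a minimal sketch assuming ntpd is in use:

  ntpq -p                                # clock offsets on each node
  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt    # decompile to readable text
  cat crush.txt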
Hello Gonçalo,
Thanks for the reminder. I was just setting up the cluster for testing, so don't
worry, I can just remove the pool. And I've learnt that since the replica count
and the number of pools are related to pg_num, I'll consider them carefully
before deploying any data.
> On Mar 23, 2016, at
Hi Matt,
Thank you for the explanation and good luck on the NFS project!
Regards,
---Sandy
> -----Original Message-----
> From: Matt Benjamin [mailto:mbenja...@redhat.com]
> Sent: Tuesday, March 22, 2016 10:12 PM
> To: xusangdi 11976 (RD)
> Cc: ceph-us...@ceph.com; ceph-de...@vger.kernel.org
> S
> -----Original Message-----
> From: Ilya Dryomov [mailto:idryo...@gmail.com]
> Sent: Wednesday, March 23, 2016 1:04 AM
> To: xusangdi 11976 (RD)
> Cc: mbenja...@redhat.com; ceph-us...@ceph.com; ceph-de...@vger.kernel.org
> Subject: Re: [ceph-users] About the NFS on RGW
>
> On Tue, Mar 22, 2016 at
Hi Markus,
I am not quite sure where the problem is, but yes, it should weight all
the OSDs automatically.
I found something in your first post:
...
bd-2:/dev/sdaf:/dev/sdaf2
ceph-deploy disk zap bd-2:/dev/sdaf
...
You used 'ceph-deploy osd create --zap-disk bd-2:/dev/sdaf:/dev/sdaf2', right?
It d
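If the crush weight really did not get set automatically, it can be checked and set by hand as a workaround (not what ceph-deploy should be doing); the OSD id and the weight, in the crush map's TiB units, are placeholders:

  ceph osd tree                        # inspect the current crush weights
  ceph osd crush reweight osd.12 3.64  # placeholder id and weight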
Hi everyone,
In my ceph cluster, I first deployed ceph using ceph-deploy as the root user,
and I didn't set anything up after the deployment.
To my surprise, the cluster auto-starts after my host reboots: everything is
fine, the mon is running, and the OSD devices are mounted and running properly
by themselves.
On Wed, Mar 23, 2016 at 01:22:45AM +0100, Loic Dachary wrote:
> On 23/03/2016 01:12, Chris Dunlop wrote:
>> On Wed, Mar 23, 2016 at 01:03:06AM +0100, Loic Dachary wrote:
>>> On 23/03/2016 00:39, Chris Dunlop wrote:
"The old OS'es" that were being supported up to v0.94.5 includes debian
wh
On Wed, 23 Mar 2016, Loic Dachary wrote:
> On 23/03/2016 01:12, Chris Dunlop wrote:
> > Hi Loïc,
> >
> > On Wed, Mar 23, 2016 at 01:03:06AM +0100, Loic Dachary wrote:
> >> On 23/03/2016 00:39, Chris Dunlop wrote:
> >>> "The old OS'es" that were being supported up to v0.94.5 includes debian
> >>> w