Some tips:
1. If you enabled auth_cluster_required, you may want to check the keyring.
2. Can you reach the monitors from your admin node via ssh without a password?
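For example, both points can be checked quickly from the admin node (a
rough sketch; the monitor hostname is only a placeholder):

# passwordless ssh to a monitor host
ssh ceph-mon1 true && echo "ssh ok"
# make sure the admin keyring is present and matches what the cluster has
ls -l /etc/ceph/ceph.client.admin.keyring
ceph auth get client.admin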
2016-04-16 18:16 GMT+08:00 AJ NOURI :
> Followed the preflight and quick start
> http://docs.ceph.com/docs/master/start/quick-ceph-depl
Hi,
RBD mount
ceph v0.94.5
6 OSD with 9 HDD each
10 GBit/s public and private networks
3 MON nodes 1Gbit/s network
An RBD mounted with a btrfs filesystem performs really badly when
reading. I tried readahead in all combinations, but that does not help
in any way.
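(For reference, readahead on a mapped RBD device is usually adjusted along
these lines; the device name and values are only examples:)

# readahead in 512-byte sectors via blockdev
blockdev --getra /dev/rbd0
blockdev --setra 4096 /dev/rbd0
# or via the sysfs queue parameter, in KB
echo 2048 > /sys/block/rbd0/queue/read_ahead_kb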
Write rates are very good i
I have read SK's performance tuning work too; it's a good job,
especially the analysis of write/read latency on the OSD.
I want to ask a question about the 'Long logging time' optimization:
what is meant by 'split logging into another thread and do it later'?
AFAIK, ceph does logging async by
Dear friends,
Hello, I have a small problem when I use Ceph. My cluster has three
monitors and I want to remove one.
[root@node01 ~]# ceph -s
cluster b0d8bd0d-6269-4ce7-a10b-9adc7ee2c4c8
health HEALTH_WARN
too many PGs per OSD (682 > max 300)
monmap e23: 3 mons at
{n
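(In general a monitor is taken out of the cluster with "ceph mon remove"
and its daemon is then stopped on that host; the monitor name below is
only a placeholder:)

# remove the monitor from the monmap
ceph mon remove node03
# then stop the ceph-mon daemon on that host and drop it from ceph.conf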
On Tue, Apr 19, 2016 at 5:28 AM, min fang wrote:
> I am confused on ceph/ceph-qa-suite and ceph/teuthology. Which one should I
> use? thanks.
The ceph-qa-suite repository contains the test snippets; teuthology is the
test framework that knows how to run them. It will pull the appropriate
branch of c
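(As a rough illustration, suites are normally scheduled with the
teuthology-suite command; the branch and suite names here are only
examples:)

teuthology-suite --ceph jewel --suite rbd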
On Mon, Apr 18, 2016 at 11:58 AM, Tim Bishop wrote:
> I had the same issue when testing on Ubuntu xenial beta. That has 4.4,
> so should be fine? I had to create images without the new RBD features
> to make it work.
None of the "new" features are currently supported by krbd. 4.7 will
support e
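For what it's worth, the usual workarounds are to create the image with
only the layering feature, or to strip the extra features from an existing
image (pool/image names and size are placeholders):

# create an image krbd can map (layering only), size in MB
rbd create rbd/testimg --size 10240 --image-feature layering
# or disable the Jewel-default features on an existing image
rbd feature disable rbd/testimg deep-flatten fast-diff object-map exclusive-lock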
All,
I was called in to assist in a failed Ceph environment with the cluster
in an inoperable state. No rbd volumes are mountable/exportable due to
missing PGs.
The previous operator was using a replica count of 2. The cluster
suffered a power outage and various non-catastrophic hardware iss
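(In a situation like this the first step is usually to enumerate the
affected placement groups; a sketch of commonly used queries, not a
recovery procedure, and the PG id is a placeholder:)

ceph health detail
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean
# inspect one PG in detail
ceph pg 2.1f query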
Hello,
At my workplace we have a production cephfs cluster (334 TB on 60 OSDs) which
was recently upgraded from Infernalis 9.2.0 to Infernalis 9.2.1 on Ubuntu
14.04.3 (linux 3.19.0-33).
It seems that cephfs still doesn't free up space at all, or at least
that's what the df command tells us.
Is th
Hi all,
I just installed 3 monitors, using ceph-deploy, on CentOS 7.2. Ceph is 10.1.2.
My ceph-mon processes do not come up after reboot. This is what ceph-deploy
create-initial did:
[ams1-ceph01-mon01][INFO ] Running command: sudo systemctl enable ceph.target
[ams1-ceph01-mon01][WARNIN] Creat
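One thing worth checking (an assumption about the cause, not a confirmed
diagnosis) is whether only ceph.target was enabled while the per-daemon
unit was not; on a systemd install the monitor unit can be enabled by
hand, with the mon id assumed to be the short hostname:

sudo systemctl enable ceph-mon@ams1-ceph01-mon01
sudo systemctl start ceph-mon@ams1-ceph01-mon01
sudo systemctl status ceph-mon@ams1-ceph01-mon01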
On Tue, Apr 19, 2016 at 2:40 PM, Simion Rad wrote:
> Hello,
>
>
> At my workplace we have a production cephfs cluster (334 TB on 60 OSDs)
> which was recently upgraded from Infernalis 9.2.0 to Infernalis 9.2.1 on
> Ubuntu 14.04.3 (linux 3.19.0-33).
>
> It seems that cephfs still doesn't free up sp
Mounting and unmounting doesn't change anything.
The used space reported by the df command is nearly the same as the
values returned by the ceph -s command.
Example 1, df output:
ceph-fuse 334T 134T 200T 41% /cephfs
Example 2, ceph -s output:
health HEALTH_WARN
mds0: Many clients (22
I have a setup using some Intel P3700 devices as a cache tier, and 33 SATA
drives hosting the pool behind them. I set up the cache tier in writeback
mode and gave it a size and max object count, etc.:
ceph osd pool set target_max_bytes 5000
ceph osd pool set nvme target_max_bytes 5000
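(For comparison, a typical writeback cache-tier setup looks roughly like
this; the backing pool name and the limits are placeholders, and note that
target_max_bytes/target_max_objects have to be set on the cache pool
itself:)

ceph osd tier add sata-pool nvme
ceph osd tier cache-mode nvme writeback
ceph osd tier set-overlay sata-pool nvme
# flushing/eviction only starts once these targets are set on the cache pool
ceph osd pool set nvme target_max_bytes 500000000000
ceph osd pool set nvme target_max_objects 1000000
ceph osd pool set nvme cache_target_dirty_ratio 0.4
ceph osd pool set nvme cache_target_full_ratio 0.8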
Have you ever used a fancy (non-default) file layout?
See http://tracker.ceph.com/issues/15050
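(CephFS file layouts are set through virtual xattrs; a small illustration,
with the paths and value as placeholders -- layouts can only be changed on
empty files:)

getfattr -n ceph.file.layout /cephfs/somefile
setfattr -n ceph.file.layout.stripe_count -v 4 /cephfs/newfile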
On Wed, Apr 20, 2016 at 3:17 AM, Simion Rad wrote:
> Mounting and unmounting doesn't change anything.
> The used space reported by the df command is nearly the same as the
> values returned by the ceph -s command.
>
> Example 1, df
Hello,
On Tue, 19 Apr 2016 20:21:39 +0000 Stephen Lord wrote:
>
>
> I Have a setup using some Intel P3700 devices as a cache tier, and 33
> sata drives hosting the pool behind them.
A bit more detail about the setup would be nice, as in how many nodes,
interconnect, replication size of the
join the users
Sent from Mail for Windows 10
OK, you asked ;-)
This is all via RBD. I am running a single filesystem on top of 8 RBD
devices in an effort to get data striping across more OSDs; I had been
using that setup before adding the cache tier.
3 nodes with 11 x 6 TB SATA drives each for the base RBD pool, set up with
replica
As soon as I create a snapshot on the root of my test cephfs deployment
with a single file within the root, my mds server kernel panics. I
understand that snapshots are not recommended. Is it beneficial to
developers for me to leave my cluster in its present state and provide
whatever debugging inf
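(For context, CephFS snapshots in this era have to be enabled explicitly
and are created through the hidden .snap directory; the mount path and
snapshot name below are placeholders:)

ceph mds set allow_new_snaps true --yes-i-really-mean-it
# a snapshot is then just a mkdir inside the magic .snap directory
mkdir /mnt/cephfs/.snap/before-test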
Hello,
On Wed, 20 Apr 2016 03:42:00 +0000 Stephen Lord wrote:
>
> OK, you asked ;-)
>
I certainly did. ^o^
> This is all via RBD, I am running a single filesystem on top of 8 RBD
> devices in an effort to get data striping across more OSDs, I had been
> using that setup before adding the cac
Hi,
response inline
On 20 Apr 2016 7:45 a.m., "Christian Balzer" wrote:
>
>
> Hello,
>
> On Wed, 20 Apr 2016 03:42:00 +0000 Stephen Lord wrote:
>
> >
> > OK, you asked ;-)
> >
>
> I certainly did. ^o^
>
> > This is all via RBD, I am running a single filesystem on top of 8 RBD
> > devices in an
Hi Mike,
I don't have experience with RBD mounts, but I see the same effect with RBD.
You can do some tuning to get better results (disable debug and so on).
As a hint, some values from a ceph.conf:
[osd]
debug asok = 0/0
debug auth = 0/0
debug buffer = 0/0
debug client = 0/0
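(The list continues in the same pattern; a fuller sketch of the kind of
[osd] debug settings that are commonly zeroed out for benchmarking, not a
complete or authoritative list:)

[osd]
debug osd = 0/0
debug filestore = 0/0
debug journal = 0/0
debug ms = 0/0
debug monc = 0/0
debug rados = 0/0
debug rbd = 0/0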