Hmm. Apparently download.ceph.com = us-west.ceph.com
And there is no repomd.xml on us-east.ceph.com
This seems to happen a little too often for something that is stable and
released. Makes it seem like the old BBS days of “I want to play DOOM, so I’m
shutting the services down”
Brian Andrus
IT
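For what it's worth, a sketch of pinning the repo directly at download.ceph.com instead of a mirror, assuming a CentOS 7 node and the Jewel release (adjust release and distro to match your setup):

# /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages
baseurl=https://download.ceph.com/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc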
From: David Turner [mailto:david.tur...@storagecraft.com]
Sent: Monday, October 10, 2016 10:33 AM
To: Andrus, Brian Contractor ; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] too many PGs per OSD (326 > max 300) warning when ALL PGs are 256
You have 11 pools with 256 pgs, 1 pool with 128 and 1 pool with
Ok, this is an odd one to me...
I have several pools, ALL of them are set with pg_num and pgp_num = 256. Yet,
the warning about too many PGs per OSD is showing up.
Here are my pools:
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins
pg_num 256 pgp_num 256 last_change
____
From: Andrus, Brian Contractor [bdand...@nps.edu]
Sent: Thursday, September 22, 2016 10:41 AM
To: David Turner; ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
Subject: RE: too man
Thursday, September 22, 2016 8:57 AM
To: Andrus, Brian Contractor ; ceph-users@lists.ceph.com
Subject: RE: too many PGs per OSD when pg_num = 256??
Forgot the + for the regex.
ceph -s | grep -Eo '[0-9]+ pgs'
All,
I am getting a warning:
     health HEALTH_WARN
            too many PGs per OSD (377 > max 300)
            pool cephfs_data has many more objects per pg than average (too few pgs?)
yet, when I check the settings:
# ceph osd pool get rbd pg_num
pg_num: 256
# ceph osd pool get rbd pgp_num
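For context on why 256-PG pools can still trip the warning: the check counts PG replicas across all pools, roughly the sum of pg_num x size for every pool divided by the number of OSDs, so each additional pool adds to the per-OSD total even though no single pool looks large. The inputs can be gathered like this (the arithmetic line uses hypothetical numbers):

# every 'pool' line shows its size (replica count) and pg_num
ceph osd dump | grep ^pool
# number of OSDs in the cluster
ceph osd stat
# e.g. 13 pools x 256 PGs x 3 replicas / 30 OSDs ~ 333 PGs per OSD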
All,
I have been making some progress on troubleshooting this.
I am seeing that when rgw is configured for LDAP, I am getting an error in my
slapd log:
Sep 14 06:56:21 mgmt1 slapd[23696]: conn=1762 op=0 RESULT tag=97 err=2 text=historical protocol version requested, use LDAPv3 instead
Am I corr
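That slapd message means the bind arrived as LDAPv2 rather than LDAPv3. A minimal sketch of the two places I'd look, assuming the Jewel-era option names from the radosgw LDAP docs (the client section name, hostnames and DNs below are examples):

# ceph.conf on the rgw host
[client.rgw.gateway]
rgw_s3_auth_use_ldap = true
rgw_ldap_uri = ldaps://ldap.example.com
rgw_ldap_binddn = uid=ceph,ou=services,dc=example,dc=com
rgw_ldap_searchdn = ou=people,dc=example,dc=com
rgw_ldap_dnattr = uid
rgw_ldap_secret = /etc/ceph/ldap.secret

# slapd side, if you just want it to accept v2 binds while debugging:
#   slapd.conf:  allow bind_v2
#   cn=config:   olcAllows: bind_v2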
All,
I am working on getting RADOSGW to work with LDAP. Everything seems to be configured correctly, but I suspect there are certain attributes that need to exist on the LDAP user for it to work.
If I create a user using "radosgw-admin user create", I am able to use that
access/secret key successfully, bu
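A hedged sketch of the two paths as I read the Jewel LDAP docs: a local user created with radosgw-admin gets its own keys, while an LDAP user is not created in radosgw at all; the client instead presents a token encoded from the LDAP credentials (names below are examples):

# local user (the case that already works):
radosgw-admin user create --uid=testuser --display-name="Test User"

# LDAP case: encode the LDAP credentials into a token and use that token
# as the S3 access key (the secret is not what is checked, as I understand it):
export RGW_ACCESS_KEY_ID="ldapuser"
export RGW_SECRET_ACCESS_KEY="ldappassword"
radosgw-token --encode --ttype=ldap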
All,
I'm having trouble using ceph-deploy to create a rados gateway.
I initially did it and it worked, but my default pg_num was too large so it was
complaining about that.
To remedy, I stopped the ceph-radosgw service and deleted the pools that were
created.
default.rgw.log
default.rgw.gc
defau
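If it helps, the usual reset is to delete the auto-created pools and lower the pool defaults before letting ceph-deploy recreate the gateway; a sketch (pool names from the message, pg counts are examples):

ceph osd pool delete default.rgw.log default.rgw.log --yes-i-really-really-mean-it
ceph osd pool delete default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
# ...repeat for the remaining default.rgw.* pools, then in ceph.conf:
[global]
osd pool default pg num = 32
osd pool default pgp num = 32
# ...and re-run: ceph-deploy rgw create <host>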
All,
I have found an issue with Ceph OSDs that are on a SAN and multipathed. The multipathing may not actually matter, but that is how the setup where I found the issue is configured.
Our setup has an InfiniBand network which uses SRP to present block devices on a DDN.
Every LUN can be seen by every
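For reference, the commands I would use on each node to confirm which LUNs and paths it sees, and which device each OSD actually mounted (nothing is assumed beyond device-mapper-multipath being in use):

multipath -ll          # maps, WWIDs, and underlying /dev/sd* paths on this node
lsblk                  # what is partitioned/mounted on top of them
df -h | grep ceph      # which device backs each /var/lib/ceph/osd mount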
p...@redhat.com]
Sent: Monday, May 16, 2016 7:36 AM
To: Andrus, Brian Contractor
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] failing to respond to cache pressure
On Mon, May 16, 2016 at 3:11 PM, Andrus, Brian Contractor
wrote:
> Both client and server are Jewel 10.2.0
So the fuse
ailto:jsp...@redhat.com]
Sent: Monday, May 16, 2016 2:28 AM
To: Andrus, Brian Contractor
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] failing to respond to cache pressure
On Mon, May 16, 2016 at 5:42 AM, Andrus, Brian Contractor
wrote:
> So this ‘production ready’ CephFS f
So this 'production ready' CephFS for jewel seems a little not quite
Currently I have a single system mounting CephFS and merely scp-ing data to it.
The CephFS mount has 168 TB used, 345 TB / 514 TB avail.
Every so often, I get a HEALTH_WARN message of mds0: Client failing to respond
to cach
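A hedged sketch of the usual Jewel-era checks for that warning, assuming the MDS daemon is named after the short hostname; the cache value below is an example, not a recommendation:

# how many inodes/caps each client session is holding (run on the active MDS host):
ceph daemon mds.$(hostname -s) session ls

# Jewel sizes the MDS cache in inodes (default 100000); raising it in ceph.conf
# on the MDS host and restarting the MDS is the common workaround:
[mds]
mds cache size = 1000000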
So I see that support for RHEL6 and derivatives was dropped in Jewel
(http://ceph.com/releases/v10-2-0-jewel-released/)
But is there backward compatibility to mount it using Hammer on a node? It doesn't seem so, and that makes some sense, but how can I mount CephFS from a CentOS7-Jewel server to
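For the mount itself, the two client paths look like this (monitor name and secret file are placeholders); the kernel client depends on the node's kernel rather than on the Hammer userspace, while ceph-fuse should generally track the cluster version:

# kernel client
mount -t ceph mon1.example.com:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# FUSE client
ceph-fuse -m mon1.example.com:6789 /mnt/cephfs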
All,
I am trying to add another OSD to our cluster using ceph-deploy. This is
running Jewel.
I previously set up the other 12 OSDs on a fresh install using the command:
ceph-deploy osd create :/dev/mapper/mpath:/dev/sda
Those are all up and happy. On these systems, /dev/sda is an SSD which I have
All,
I thought there was a way to mount CephFS using the kernel driver and be able
to honor selinux labeling.
Right now, if I do 'ls -lZ' on a mounted cephfs, I get question marks instead
of any contexts.
When I mount it, I see in dmesg:
[858946.554719] SELinux: initialized (dev ceph, type ceph
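One heavily hedged idea: the cephfs kernel client of that era does not store SELinux labels, which is why ls -lZ shows question marks. The generic workaround is a context= mount, which stamps a single label on the whole mount without needing xattr support; whether mount.ceph and the kernel in use accept the option, and which type is appropriate, is site-specific (TYPE_T below is a placeholder):

mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret,context="system_u:object_r:TYPE_T:s0"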
...@uchicago.edu]
Sent: Thursday, April 28, 2016 12:56 PM
To: Andrus, Brian Contractor
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Troubleshoot blocked OSDs
OK, a few more questions.
What does the load look like on the OSDs with ‘iostat’ during the rsync?
What version of Ceph? Are you
Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238
From: Lincoln Bryant [mailto:linco...@uchicago.edu]
Sent: Thursday, April 28, 2016 12:31 PM
To: Andrus, Brian Contractor
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Troubleshoot blocked OSDs
Hi Brian,
The
All,
I have a small ceph cluster with 4 OSDs and 3 MONs on 4 systems.
I was rsyncing about 50TB of files and things got very slow, to the point that I stopped the rsync. But even with everything stopped, I see:
health HEALTH_WARN
80 requests are blocked > 32 sec
The number was as high as
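A short sketch of how I would narrow that down (osd.12 is a placeholder for whichever OSD the health output names):

ceph health detail | grep blocked        # which OSDs hold the blocked requests
# then on the node hosting that OSD, via the admin socket:
ceph daemon osd.12 dump_ops_in_flight
ceph daemon osd.12 dump_historic_ops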
9:36 PM
To: Andrus, Brian Contractor
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Multiple MDSes
On Fri, Apr 22, 2016 at 9:59 PM, Andrus, Brian Contractor
wrote:
> All,
>
> Ok, I understand Jewel is considered stable for CephFS with a single
> active MDS.
>
> B
All,
Ok, I understand Jewel is considered stable for CephFS with a single active MDS.
But how do I add a standby MDS? The documentation I can find is a bit confusing.
I ran
ceph-deploy create mds systemA
ceph-deploy create mds systemB
Then I create a ceph filesystem, but it appears systemB is the
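For what it's worth, the ceph-deploy subcommand order I believe is "mds create", and any MDS beyond the one holding rank 0 simply reports as a standby once the filesystem exists; a sketch using the hostnames from the message and example pool names:

ceph-deploy mds create systemA
ceph-deploy mds create systemB
ceph osd pool create cephfs_data 256
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data
ceph mds stat   # expect something like: 1/1/1 up {0=systemA=up:active}, 1 up:standby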
I recently implemented a 171TB CephFS using Infernalis (it is set up so I can grow it to just under 2PB).
I tried using Jewel, but it gave me grief, so I will wait on that.
I am migrating data from a Lustre filesystem and so far it seems OK. I have not
put it into production yet, but will be tes
All,
I have 4 nodes each with 5 OSDs.
I recently upgraded to infernalis via ceph-deploy. It went mostly ok but one of
my nodes cannot mount any OSDs.
When I look at the status of the service, I see:
Apr 07 12:22:06 borg02 ceph-osd[3868]: 9: (ceph::__ceph_assert_fail(char
const*, char const*,
Mohd,
IIRC, disk prepare does not activate it, osd prepare does.
Brian Andrus
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
zai...@nocser.net
Sent: Monday, April 04, 2016 7:58 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] About Ceph
Hi,
What is different be
All,
So I am testing an install of jewel using ceph-deploy.
I do a fresh install of CentOS7 and then install ceph and create 3 monitors.
I then reboot one of them to see how things behave.
It seems that the monitor daemon is not starting on boot. It is enabled but I
have to manually start it be
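The first thing I would check is which units are actually enabled; with the Jewel systemd packaging the per-daemon unit is ceph-mon@<id> (usually the short hostname), and I would make sure both it and the targets are enabled so the monitor comes back on boot:

systemctl status ceph-mon@$(hostname -s)
systemctl enable ceph-mon@$(hostname -s)
systemctl enable ceph-mon.target ceph.target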
All,
I am trying to use ceph-deploy to create an OSD on a multipath device but put
the journal partition on the SSD the system boots from.
I have created a partition on the SSD (/dev/sda5) but ceph-deploy does not seem
to like it.
I am trying:
ceph-deploy osd create ceph01:/dev/mapper/mpathb:/d
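A sketch of how I would approach it, using the device names from the message: with an existing journal partition I would run prepare separately and make sure the partition is unused and owned by the ceph user on the OSD host:

ceph-deploy osd prepare ceph01:/dev/mapper/mpathb:/dev/sda5
# if ceph-disk rejects or cannot open the journal partition, check on ceph01:
ls -l /dev/sda5
chown ceph:ceph /dev/sda5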
Does anyone know if there will be any representation of ceph at the Lustre
Users' Group in Portland this year?
If not, is there any event in the US that brings the ceph community together?
Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238
Andrus, Brian Contractor
Sent: Thursday, February 11, 2016 2:36 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Multipath devices with infernalis
All,
I have a set of hardware with a few systems connected via IB along with a DDN
SFA12K.
There are 4 IB/SRP paths to each block device. Those show
All,
I have a set of hardware with a few systems connected via IB along with a DDN
SFA12K.
There are 4 IB/SRP paths to each block device. Those show up as
/dev/mapper/mpath[b-d]
I am trying to do an initial install/setup of ceph on 3 nodes. Each will be a
monitor as well as host a single OSD.
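A minimal ceph-deploy bootstrap for that layout might look like the following, assuming node1-3 are the hostnames and each node's OSD goes on one of the mpath devices (the pairing below is an assumption):

ceph-deploy new node1 node2 node3
ceph-deploy install node1 node2 node3
ceph-deploy mon create-initial
ceph-deploy osd create node1:/dev/mapper/mpathb node2:/dev/mapper/mpathc node3:/dev/mapper/mpathd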