Hi,
we are running two MDS servers in an active/standby-replay setup. Recently
we had to disconnect the active MDS server, and failover to the standby
worked as expected.
The filesystem currently contains over 5 million files, so reading all
the metadata information from the data pool took too long, s
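For context, standby-replay is configured per daemon in ceph.conf; a minimal
sketch, assuming Jewel-era option names and a hypothetical daemon name mds-b
for the standby:

[mds.mds-b]
    mds standby replay = true
    mds standby for rank = 0

The failover itself can be followed with "ceph mds stat"; the standby should
move through up:replay to up:active.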
just to add to what Pawel said: /etc/logrotate.d/ceph.logrotate
On 17-01-26 09:21, Torsten Casselt wrote:
Hi,
that makes sense. Thanks for the fast answer!
On 26.01.2017 08:04, Paweł Sadowski wrote:
Hi,
6:25 points to the daily cron job; it's probably logrotate trying to force
Ceph to reopen its log files.
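For anyone else chasing this: the stock /etc/logrotate.d/ceph.logrotate looks
roughly like the sketch below on a Jewel-era install (exact contents vary by
release); the SIGHUP in postrotate is what makes the daemons reopen their
logs when cron fires:

/var/log/ceph/*.log {
    rotate 7
    daily
    compress
    sharedscripts
    postrotate
        killall -q -1 ceph-mon ceph-mds ceph-osd ceph-fuse radosgw || true
    endscript
    missingok
    notifempty
}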
Nice, there it is. Thanks a lot!
On 26.01.2017 09:43, Henrik Korkuc wrote:
> just to add to what Pawel said: /etc/logrotate.d/ceph.logrotate
>
> On 17-01-26 09:21, Torsten Casselt wrote:
>> Hi,
>>
>> that makes sense. Thanks for the fast answer!
>>
>> On 26.01.2017 08:04, Paweł Sadowski wrote:
>>
Oh, it says Coming soon somewhere? (Thanks... and I found it now at
http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-mds/ )
I wrote some instructions and tested them (it was very difficult...
piecing together incomplete docs, old mailing list threads, etc., and
tinkering), and couldn't
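For what it's worth, the short version I ended up with (a sketch, assuming a
hypothetical host name node1 and small default pg counts) was:

ceph-deploy mds create node1
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
ceph fs new cephfs cephfs_metadata cephfs_data
ceph mds stat   # should eventually report the daemon as up:active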
Hello,
We have had some problems with 1 pg since this morning; this is what we found
so far...
# ceph --version
ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
# ceph -s
cluster 2bf80721-fceb-4b63-89ee-1a5faa278493
health HEALTH_ERR
1 pgs inconsistent
I had a similar issue recently, where I had a replication size of 2 (I
changed that to 3 after the recovery).
ceph health detail
HEALTH_ERR 16 pgs inconsistent; 261 scrub errors
pg 1.bb1 is active+clean+inconsistent, acting [15,5]
zgrep 1.bb1 /var/log/ceph/ceph.log*
[...] cluster [INF] 1.bb1 d
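On Jewel you can usually see which replica is bad before repairing; a sketch,
using the pg from above:

rados list-inconsistent-obj 1.bb1 --format=json-pretty   # which shard, and why
ceph pg deep-scrub 1.bb1                                 # optional: re-verify first
ceph pg repair 1.bb1                                     # ask the primary to repair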
On 13-1-2017 12:45, Willem Jan Withagen wrote:
> On 13-1-2017 09:07, Christian Balzer wrote:
>>
>> Hello,
>>
>> Something I came across a while ago, but the recent discussion here
>> jolted my memory.
>>
>> If you have a cluster configured with just a "public network" and that
>> network being in
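For reference, the two networks are set in the [global] section of ceph.conf;
a sketch with example subnets:

[global]
    public network = 192.168.0.0/24
    cluster network = 192.168.1.0/24

With only "public network" defined, client, replication, and heartbeat
traffic all share that one network.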
Hello,
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
> Behalf Of Eugen Block
> I had a similar issue recently, where I had a replication size of 2 (I
> changed that to 3 after the recovery).
Yes, we have replication size of 2 also...
> ceph health detail
> HEALTH_ERR 16 pgs i
Yes, we have replication size of 2 also
From what I understand, with a rep size of 2 the cluster can't decide
which object is intact if one is broken, so the repair fails. If you
had a size of 3, the cluster would see 2 intact objects and repair the
broken one (I guess). At least we didn't
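If you do switch, it is one command per pool (the pool name 'rbd' below is
just an example), and expect backfill traffic while the third replicas are
created:

ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2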
> From: Eugen Block [mailto:ebl...@nde.ag]
>
> From what I understand, with a rep size of 2 the cluster can't decide
> which object is intact if one is broken, so the repair fails. If you
> had a size of 3, the cluster would see 2 intact objects and repair the
> broken one (I guess). At least we d
Glad I could help! :-)
Quoting Mio Vlahović:
From: Eugen Block [mailto:ebl...@nde.ag]
From what I understand, with a rep size of 2 the cluster can't decide
which object is intact if one is broken, so the repair fails. If you
had a size of 3, the cluster would see 2 intact objects and repai
On 01/26/2017 06:16 AM, Florent B wrote:
On 01/24/2017 07:26 PM, Mark Nelson wrote:
My first thought is that PGs are splitting. You only appear to have
168 PGs for 9 OSDs; that's not nearly enough. Beyond the poor data
distribution and associated performance imbalance, your PGs will split
very
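The usual rule of thumb is roughly 100 PGs per OSD divided by the replica
count, rounded up to a power of two; 9 OSDs with size 3 suggests 512 rather
than 168. A sketch, assuming a hypothetical pool named 'data' (note pg_num
can only ever be increased):

ceph osd pool get data pg_num
ceph osd pool set data pg_num 512
ceph osd pool set data pgp_num 512   # must follow pg_num before data rebalances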
Hello Everyone!
We just created a new tutorial for installing Ceph Jewel on Proxmox VE.
The Ceph Server integration in Proxmox VE has been available for three
years now and is a widely used component for smaller deployments to get a
real open source hyper-converged virtualization and storage setu
On 01/24/2017 03:57 AM, Mike Lovell wrote:
I was just testing an upgrade of some monitors in a test cluster from hammer
(0.94.7) to jewel (10.2.5). After upgrading each of the first two monitors, I
stopped and restarted a single OSD to cause changes in the maps. The same
error messages showed up in
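When reproducing something like this, it helps to confirm what each monitor
is actually running mid-upgrade; a sketch, run on each mon node via the admin
socket:

ceph daemon mon.$(hostname -s) version
ceph quorum_status --format json-pretty   # check that all mons are still in quorum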
On Thu, Jan 26, 2017 at 8:18 AM, Burkhard Linke wrote:
> Hi,
>
>
> we are running two MDS servers in an active/standby-replay setup. Recently
> we had to disconnect the active MDS server, and failover to the standby
> worked as expected.
>
>
> The filesystem currently contains over 5 million files, so reading
Is there a guide on how to add Proxmox to an existing Ceph deployment? I
haven't quite gotten it to the point where Proxmox can manage Ceph, only
look at it and access it.
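What I do have working for plain access is an external RBD storage entry; a
sketch, assuming a hypothetical storage ID 'ext-ceph' and the client.admin
keyring (substitute your own monitors and pool):

# /etc/pve/storage.cfg
rbd: ext-ceph
    monhost 192.168.0.10 192.168.0.11 192.168.0.12
    pool rbd
    username admin
    content images

# Proxmox expects the keyring to be named after the storage ID:
mkdir -p /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ext-ceph.keyring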
On Thu, Jan 26, 2017, 6:08 AM Martin Maurer wrote:
> Hello Everyone!
>
> We just created a new tutorial for installing Ceph Jewel on P
Hey cephers,
Just a reminder that the 'Getting Started with Ceph Development' Ceph
Tech Talk [0] starts in about 2 hours. Sage is going to walk through
the process from start to finish, so if you have coworkers, friends,
or anyone that might be interested in getting started with Ceph,
please sen
Just an update. I think the real goal with the sleep configs in general
was to reduce the number of concurrent snap trims happening. To that end,
I've put together a branch which adds an AsyncReserver (as with backfill)
for snap trims to each OSD. Before actually starting to do trim work, the
pr
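For comparison, the knob being replaced here is the per-OSD sleep, which can
be injected at runtime; a sketch, assuming the Jewel-era option name:

ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'

The sleep happens inside the thread doing the trimming, which is part of the
motivation for a reservation-based limit instead.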
Hi Mohammed,
Thanks for the hint. I think I remember seeing this when Jewel came out, but
I assumed it must be a mistake, or a mere recommendation rather than a
mandatory requirement, because I always upgraded the OSDs last.
Today I upgraded my OSD nodes in the test environment to Jewel and rega
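For reference, the documented Hammer-to-Jewel order is monitors first, then
OSDs, then MDS/RGW. A rough sketch of the per-node sequence, assuming
systemd-managed Jewel daemons:

# on each monitor node, one at a time, after installing the new packages:
systemctl restart ceph-mon@$(hostname -s)
# only once all monitors are on Jewel, on each OSD node:
systemctl restart ceph-osd@<id>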