[ceph-users] [ERR] OSD_SCRUB_ERRORS: 2 scrub errors

2021-04-01 Thread Szabo, Istvan (Agoda)
Hi,

I’m continuously getting scrub errors in my index pool and log pool that I
always need to repair.
HEALTH_ERR 2 scrub errors; Possible data damage: 1 pg inconsistent
[ERR] OSD_SCRUB_ERRORS: 2 scrub errors
[ERR] PG_DAMAGED: Possible data damage: 1 pg inconsistent
pg 20.19 is active+clean+inconsistent, acting [39,41,37]

Why is this?
I have no clue at all, no log entry, nothing ☹




[ceph-users] Re: [ERR] OSD_SCRUB_ERRORS: 2 scrub errors

2021-04-01 Thread Szabo, Istvan (Agoda)
I forgot the bare minimum log entries from after the scrub was done:

2021-04-01T11:37:43.559539+0700 osd.39 (osd.39) 50 : cluster [DBG] 20.19 repair starts
2021-04-01T11:37:43.889909+0700 osd.39 (osd.39) 51 : cluster [ERR] 20.19 soid 20:990258ea:::.dir.9213182a-14ba-48ad-bde9-289a1c0c0de8.17263260.1.237:head : omap_digest 0x775cd866 != omap_digest 0xda11ecd0 from shard 39
2021-04-01T11:37:43.950318+0700 osd.39 (osd.39) 52 : cluster [ERR] 20.19 soid 20:994159a0:::.dir.9213182a-14ba-48ad-bde9-289a1c0c0de8.17263260.1.35:head : omap_digest 0xb61affda != omap_digest 0xb3467a38 from shard 39
2021-04-01T11:37:45.397338+0700 mgr.sg-cephmon-6s01 (mgr.25028786) 81795 : cluster [DBG] pgmap v81983: 225 pgs: 1 active+clean+scrubbing+deep+inconsistent+repair, 224 active+clean; 4.8 TiB data, 25 TiB used, 506 TiB / 531 TiB avail; 14 MiB/s rd, 5.0 MiB/s wr, 14.28k op/s
2021-04-01T11:37:45.690930+0700 osd.39 (osd.39) 53 : cluster [ERR] 20.19 repair 0 missing, 2 inconsistent objects
2021-04-01T11:37:45.690951+0700 osd.39 (osd.39) 54 : cluster [ERR] 20.19 repair 2 errors, 0 fixed
2021-04-01T11:37:45.762565+0700 osd.39 (osd.39) 55 : cluster [DBG] 20.19 deep-scrub starts
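
For reference, the usual way to dig into an inconsistent PG such as 20.19 is
sketched below (untested here; the PG id is taken from the output above, and
the exact JSON fields vary by release):

  # Show which objects and shards the deep scrub flagged, and why
  # (omap_digest mismatch, data_digest mismatch, size mismatch, ...).
  rados list-inconsistent-obj 20.19 --format=json-pretty

  # Ask the primary OSD to repair the PG; progress appears in the cluster
  # log as "repair starts" / "... fixed".
  ceph pg repair 20.19

  # Re-run a deep scrub afterwards and confirm the cluster is clean again.
  ceph pg deep-scrub 20.19
  ceph health detail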

Istvan Szabo
Senior Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan.sz...@agoda.com
---



[ceph-users] v16.2.0 Pacific released

2021-04-01 Thread David Galloway
We're glad to announce the first release of the Pacific v16.2.0 stable
series. There have been a lot of changes across components from the
previous Ceph releases, and we advise everyone to go through the release
and upgrade notes carefully.

Major Changes from Octopus
--

General
~~~

* Cephadm can automatically upgrade an Octopus cluster to Pacific with a single
  command to start the process (sketched after this list).

* Cephadm has improved significantly over the past year, with improved
  support for RGW (standalone and multisite), and new support for NFS
  and iSCSI.  Most of these changes have already been backported to
  recent Octopus point releases, but with the Pacific release we will
  switch to backporting bug fixes only.

* Packages are built for the following distributions:

  - CentOS 8
  - Ubuntu 20.04 (Focal)
  - Ubuntu 18.04 (Bionic)
  - Debian Buster
  - Container image (based on CentOS 8)

  With the exception of Debian Buster, packages and containers are
  built for both x86_64 and aarch64 (arm64) architectures.

  Note that cephadm clusters may work on many other distributions,
  provided Python 3 and a recent version of Docker or Podman is
  available to manage containers.  For more information, see
  `cephadm-host-requirements`.
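
A minimal sketch of that single-command upgrade, assuming a cephadm-managed
cluster and the version announced here (treat it as an outline, not a
procedure, and read the upgrade notes first):

  # Optional pre-flight check of which daemons would be upgraded.
  ceph orch upgrade check --ceph-version 16.2.0

  # Start the managed, rolling upgrade and follow its progress.
  ceph orch upgrade start --ceph-version 16.2.0
  ceph orch upgrade status

  # Pause / resume if needed.
  ceph orch upgrade pause
  ceph orch upgrade resume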


Dashboard
~

The `mgr-dashboard` brings improvements in the following management areas:

* Orchestrator/Cephadm:

  - Host management: maintenance mode, labels.
  - Services: display placement specification.
  - OSD: disk replacement, display status of ongoing deletion, and improved
health/SMART diagnostics reporting.

* Official `mgr ceph api`:

  - OpenAPI v3 compliant.
  - Stability commitment starting from Pacific release.
  - Versioned via HTTP `Accept` header (starting with v1.0).
  - Thoroughly tested (>90% coverage and per Pull Request validation).
  - Fully documented.

* RGW:

  - Multi-site synchronization monitoring.
  - Management of multiple RGW daemons and their resources (buckets and users).
  - Bucket and user quota usage visualization.
  - Improved configuration of S3 tenanted users.

* Security (multiple enhancements and fixes resulting from a pen test
  conducted by IBM):

  - Account lock-out after a configurable number of failed log-in attempts.
  - Improved cookie policies to mitigate XSS/CSRF attacks.
  - Reviewed and improved security in HTTP headers.
  - Sensitive information reviewed and removed from logs and error messages.
  - TLS 1.0 and 1.1 support disabled.
  - Debug mode when enabled triggers HEALTH_WARN.

* Pools:

  - Improved visualization of replication and erasure coding modes.
  - CLAY erasure code plugin supported.

* Alerts and notifications:

  - Alert triggered on MTU mismatches in the cluster network.
  - Favicon changes according to cluster status.

* Other:

  - Landing page: improved charts and visualization.
  - Telemetry configuration wizard.
  - OSDs: management of individual OSD flags.
  - RBD: per-RBD image Grafana dashboards.
  - CephFS: Dirs and Caps displayed.
  - NFS: v4 support only (v3 backward compatibility planned).
  - Front-end: Angular 10 update.


RADOS
~

* Pacific introduces RocksDB sharding, which reduces disk space requirements.

* Ceph now provides QoS between client I/O and background operations via the
  mclock scheduler.

* The balancer is now on by default in upmap mode to improve distribution of
  PGs across OSDs (see the sketch after this list).

* The output of `ceph -s` has been improved to show recovery progress in
  one progress bar. More detailed progress bars are visible via the
  `ceph progress` command.
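
As an illustration of the balancer and mclock items above, the relevant
commands look roughly like this (option names follow the Pacific docs; verify
them against your release before relying on this):

  # The balancer is on by default in upmap mode; verify or re-apply it.
  ceph balancer status
  ceph balancer mode upmap
  ceph balancer on

  # mclock-based QoS is selected via the OSD op queue plus a profile
  # (balanced, high_client_ops or high_recovery_ops).
  ceph config set osd osd_op_queue mclock_scheduler
  ceph config set osd osd_mclock_profile high_client_ops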


RBD block storage
~

* Image live-migration feature has been extended to support external data
  sources.  Images can now be instantly imported from local files, remote
  files served over HTTP(S) or remote S3 buckets in `raw` (`rbd export v1`)
  or basic `qcow` and `qcow2` formats.  Support for `rbd export v2`
  format, advanced QCOW features and `rbd export-diff` snapshot differentials
  is expected in future releases.  A sketch of an import-only migration
  follows this list.

* Initial support for client-side encryption has been added.  This is based
  on LUKS and in future releases will allow using per-image encryption keys
  while maintaining snapshot and clone functionality -- so that parent image
  and potentially multiple clone images can be encrypted with different keys.

* A new persistent write-back cache is available.  The cache operates in
  a log-structured manner, providing full point-in-time consistency for the
  backing image.  It should be particularly suitable for PMEM devices.

* A Windows client is now available in the form of `librbd.dll` and
  `rbd-wnbd` (Windows Network Block Device) daemon.  It allows mapping,
  unmapping and manipulating images similar to `rbd-nbd`.

* librbd API now offers quiesce/unquiesce hooks, allowing for coordinated
  snapshot creation.
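
A rough sketch of the import-only live migration mentioned above, using a
local raw file as the source (the image name and file path are placeholders,
and the exact --source-spec keys should be checked against the Pacific rbd
documentation):

  # Prepare an import-only migration whose source is a local raw image file.
  rbd migration prepare --import-only \
      --source-spec '{"type": "raw", "stream": {"type": "file", "file_path": "/mnt/image.raw"}}' \
      rbd/newimage

  # Copy the data in the background, then make the migration permanent.
  rbd migration execute rbd/newimage
  rbd migration commit rbd/newimage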


RGW object storage
~~

* Initial support for S3 Select. See `s3-select-feature-table` for supported
  queries (an example request is sketched below).

* Bucket notification ...
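
A hedged example of an S3 Select query issued with the AWS CLI against an RGW
endpoint (bucket, object and endpoint names are placeholders; whether a
particular expression is accepted depends on the feature table above):

  aws --endpoint-url http://rgw.example.com:8080 s3api select-object-content \
      --bucket mybucket --key data.csv \
      --expression "SELECT s._1, s._2 FROM S3Object s WHERE s._3 > '100'" \
      --expression-type SQL \
      --input-serialization '{"CSV": {"FileHeaderInfo": "NONE"}, "CompressionType": "NONE"}' \
      --output-serialization '{"CSV": {}}' \
      /dev/stdout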

[ceph-users] x-amz-request-id logging with beast + rgw (ceph 15.2.10/containerized)?

2021-04-01 Thread David Orman
Hi,

Is there any way to log the x-amz-request-id along with the request in
the rgw logs? We're using beast and don't see an option in the
configuration documentation to add headers to the request lines. We
use centralized logging and would like to be able to search all layers
of the request path (edge, LBs, ceph, etc.) with an x-amz-request-id.

Right now, all we see is this:

debug 2021-04-01T15:55:31.105+ 7f54e599b700  1 beast:
0x7f5604c806b0: x.x.x.x - - [2021-04-01T15:55:31.105455+] "PUT
/path/object HTTP/1.1" 200 556 - "aws-sdk-go/1.36.15 (go1.15.3; linux;
amd64)" -

We've also tried this:

ceph config set global rgw_enable_ops_log true
ceph config set global rgw_ops_log_socket_path /tmp/testlog

After doing this, inside the rgw container, we can run
"socat - UNIX-CONNECT:/tmp/testlog" and see the log entries we want being
recorded, but there has to be a better way to do this, where the logs are
emitted like the beast request logs above so that we can handle them with
journald. If there's an alternative that would accomplish the same thing,
we're very open to suggestions.

Thank you,
David


[ceph-users] Re: x-amz-request-id logging with beast + rgw (ceph 15.2.10/containerized)?

2021-04-01 Thread Yuval Lifshitz
Hi David,
I don't have a good option for Octopus (other than the ops log), but in
Pacific you can do that (and more) using Lua scripting on the RGW:
https://docs.ceph.com/en/pacific/radosgw/lua-scripting/

Yuval
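
A minimal, untested sketch of what that could look like on Pacific (the
Request fields and the RGWDebugLog helper are as documented on that page;
treat the whole thing as an assumption to verify):

# Write a tiny Lua script that logs the request id after each request,
# then install it into the RGW's postRequest context.
cat > reqid.lua <<'EOF'
-- Output goes to the RGW debug log, so the rgw debug level may need raising.
RGWDebugLog("request id=" .. Request.Id ..
            " txid=" .. Request.TransactionId ..
            " status=" .. Request.Response.HTTPStatusCode)
EOF

radosgw-admin script put --infile=./reqid.lua --context=postRequest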



[ceph-users] Re: v14.2.19 Nautilus released

2021-04-01 Thread David Galloway



On 3/31/21 9:44 AM, David Galloway wrote:
> 
> On 3/31/21 5:24 AM, Stefan Kooman wrote:
>> On 3/30/21 10:28 PM, David Galloway wrote:
>>> This is the 19th update to the Ceph Nautilus release series. This is a
>>> hotfix release to prevent daemons from binding to loopback network
>>> interfaces. All nautilus users are advised to upgrade to this release.
>>
>> Are Ceph Nautilus 14.2.19 AMD64 packages for Ubuntu Xenial still being
>> built? I only see Arm64 packages in the repository.
>>
>> Gr. Stefan
>>
> 
> They will be built and pushed hopefully today.  We had a bug in our CI
> after updating our builders to Ubuntu Focal.
> 

Just pushed.


[ceph-users] cephadm/podman :: upgrade to pacific stuck

2021-04-01 Thread Adrian Sevcenco

Hi! I have a single-machine Ceph installation, and after trying to update to
Pacific the upgrade is stuck with:

ceph -s
cluster:
id: d9f4c810-8270-11eb-97a7-faa3b09dcf67
health: HEALTH_WARN
Upgrade: Need standby mgr daemon

services:
mon: 1 daemons, quorum sev.spacescience.ro (age 3w)
mgr: sev.spacescience.ro.wpozds(active, since 2w)
mds: sev-ceph:1 {0=sev-ceph.sev.vmvwrm=up:active}
osd: 2 osds: 2 up (since 2w), 2 in (since 2w)

data:
pools:   4 pools, 194 pgs
objects: 32 objects, 8.4 KiB
usage:   2.0 GiB used, 930 GiB / 932 GiB avail
pgs: 194 active+clean

progress:
Upgrade to docker.io/ceph/ceph:v16.2.0 (0s)
[]

How can I put the mgr on standby? So far I did not find anything relevant...

Thanks a lot!
Adrian





[ceph-users] Re: cephadm/podman :: upgrade to pacific stuck

2021-04-01 Thread Anthony D'Atri
I think what it’s saying is that it wants more than one mgr daemon to be
provisioned, so that it can fail over when the primary is restarted.  I suspect
you would then run into the same thing with the mon.  All sorts of things tend
to crop up on a cluster this minimal.
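
If the host can accommodate it, something along these lines should get a
second mgr deployed before retrying the upgrade (a sketch only; as the
follow-up in this thread notes, a second daemon on a single host may still
clash on ports):

ceph orch ps --daemon-type mgr   # see which mgr daemons cephadm runs now
ceph orch apply mgr 2            # ask for two mgrs: one active, one standby
ceph orch upgrade status         # then re-check the upgrade progress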




[ceph-users] Re: x-amz-request-id logging with beast + rgw (ceph 15.2.10/containerized)?

2021-04-01 Thread Matt Benjamin
Hi Folks,

A Red Hat SA (Mustafa Aydin) suggested a while back a concise formula for
relaying the ops-log to syslog, basically a script executing

socat unix-connect:/var/run/ceph/opslog,reuseaddr UNIX-CLIENT:/dev/log &

I haven't experimented with it.

Matt
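
For anyone who wants to keep that relay running permanently, a rough,
untested wrapper could look like this (the socket path is whatever
rgw_ops_log_socket_path points at; inside a container the path will differ):

#!/bin/sh
# Relay the RGW ops-log unix socket to syslog; restart socat if it exits.
while true; do
    socat unix-connect:/var/run/ceph/opslog,reuseaddr UNIX-CLIENT:/dev/log
    sleep 5
done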



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309


[ceph-users] Re: cephadm/podman :: upgrade to pacific stuck

2021-04-01 Thread Adrian Sevcenco

On 4/1/21 8:19 PM, Anthony D'Atri wrote:
> I think what it's saying is that it wants more than one mgr daemon to be
> provisioned, so that it can fail over

Unfortunately that is not possible here, as the port usage clashes ...
I found the name of the daemon by grepping the ps output (a "ceph orch daemon
ls" would be nice) and stopped it, but then the message was:

cluster:
id: d9f4c810-8270-11eb-97a7-faa3b09dcf67
health: HEALTH_WARN
no active mgr
Upgrade: Need standby mgr daemon

So it seems there is a specific requirement for an mgr daemon in the "standby"
state.

Then I tried to start it again with:
ceph orch daemon start 

but the command is stuck ...

I tried to pull the ceph:v16.2 image and ran:
ceph orch daemon redeploy mgr ceph:v16.2.0

but that is also stuck.

So, what can I do? Is there anything besides deleting everything and starting
from scratch?

Thank you!
Adrian
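
One avenue to try before wiping anything, roughly (commands as per the cephadm
docs, untested on a one-node cluster):

ceph orch upgrade stop             # stop the stuck upgrade first
ceph orch ps --daemon-type mgr     # check what mgr daemons exist
ceph orch apply mgr 2              # ask cephadm for an active + standby mgr
ceph orch upgrade start --ceph-version 16.2.0   # then start the upgrade again
ceph orch upgrade status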





[ceph-users] Re: [Ceph-maintainers] v16.2.0 Pacific released

2021-04-01 Thread Martin Verges
Hello,

thanks for a very interesting new Ceph Release.

Are there any plans to build for Debian bullseye as well? It has been in
"hard freeze" since 2021-03-12, and at the moment it ships with a Nautilus
release that will be EOL by the time bullseye becomes official stable. That
will be a pain for Debian users, and if it's still possible we should try to
avoid it. Is there something we could do to help make it happen?

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


[ceph-users] Re: v14.2.19 Nautilus released

2021-04-01 Thread Stefan Kooman

On 4/1/21 6:56 PM, David Galloway wrote:
>> They will be built and pushed hopefully today.  We had a bug in our CI
>> after updating our builders to Ubuntu Focal.
>
> Just pushed.

Great, thanks for the heads up!

Gr. Stefan


[ceph-users] Re: Ceph User Survey Working Group - Next Steps

2021-04-01 Thread Stefan Kooman

On 3/30/21 12:48 PM, Mike Perez wrote:
> Hi everyone,
>
> I didn't get enough responses on the previous Doodle to schedule a
> meeting. I'm wondering if people are OK with the previous PDF I
> released or if there's interest in the community to develop better
> survey results?
>
> https://ceph.io/community/ceph-user-survey-2019/


It would be nice if we could order the graphs by percentage (increasing),
so that there are no "jumping" bars in the middle of some of the graphs.
Same for ordering the tables, if at all possible. I would not want to spend
too much time on it if that's not easily configurable.

We might also want to write a summary of the most interesting results, i.e.
why people choose Ceph, which Ceph interfaces are used most, etc.

And on top of that we could evaluate all the free-form answers to see which
features or improvements the participants most want to see in a future
release. That might be useful for the Quincy road map as well.


Gr. Stefan


[ceph-users] Re: [Ceph-maintainers] v16.2.0 Pacific released

2021-04-01 Thread Victor Hooi
Hi,

This is awesome news! =).

I did hear mention before about Crimson and Pacific - does anybody know
what the current state of things is?

I see there's a doc page for it here -
https://docs.ceph.com/en/latest/dev/crimson/crimson/

Are we able to use Crimson yet in Pacific? (As in, do we need to rebuild
from source, or is it available in the packages just announced?) Are we
likely to see any improvements with U.2/NVMe drives yet?

Regards,
Victor
