[Bug 1906496] Re: mgr can be very slow in a large ceph cluster

2020-12-02 Thread Ponnuvel Palaniyappan
** Changed in: ceph (Ubuntu)
 Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906496

Title:
  mgr can be very slow in a large ceph cluster

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1906496/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906496] Re: mgr can be very slow in a large ceph cluster

2020-12-03 Thread Ponnuvel Palaniyappan
** Tags added: sts


[Bug 1911900] Re: [SRU] Active scrub blocks upmap balancer

2021-02-22 Thread Ponnuvel Palaniyappan
Focal packages are installed and verified for ceph 15.2.8. Attached the
steps used.


** Attachment added: "sru_focal_ceph15.2.8.txt"
   https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1911900/+attachment/5466037/+files/sru_focal_ceph15.2.8.txt

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal


[Bug 1911900] Re: [SRU] Active scrub blocks upmap balancer

2021-02-22 Thread Ponnuvel Palaniyappan
Ussuri packages are installed and verified
(15.2.8-0ubuntu0.20.04.1~cloud0).

In tests for focal and ussuri, rados bench was used to drive some I/O,
and randomly chosen PGs were repeatedly deep-scrubbed in a loop (ceph pg
deep-scrub <pgid>) to keep the scrubbing process running at all times.
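
The deep-scrub loop described above could be sketched roughly as follows. This is a hypothetical driver script, not the one attached to the bug; the PG ids and batch size are made up, and only the `ceph pg deep-scrub <pgid>` command comes from the test notes:

```python
# Rough sketch of the verification loop: keep randomly chosen PGs
# deep-scrubbing at all times while rados bench drives I/O elsewhere.
import random
import subprocess

def pick_random_pgs(pg_ids, count):
    """Pick up to `count` distinct PG ids to scrub next."""
    return random.sample(pg_ids, min(count, len(pg_ids)))

def deep_scrub_loop(pg_ids, batch=4):
    """Issue `ceph pg deep-scrub <pgid>` for random PGs, forever."""
    while True:
        for pgid in pick_random_pgs(pg_ids, batch):
            # check=False: a scrub request failing shouldn't stop the loop
            subprocess.run(["ceph", "pg", "deep-scrub", pgid], check=False)
```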

** Attachment added: "sru_ceph15.2.8_ussuri.txt"
   https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1911900/+attachment/5466042/+files/sru_ceph15.2.8_ussuri.txt

** Tags removed: verification-ussuri-needed
** Tags added: verification-ussuri-done


[Bug 1909162] Re: cluster log slow request spam

2021-02-22 Thread Ponnuvel Palaniyappan
Ussuri packages are installed and verified
(15.2.8-0ubuntu0.20.04.1~cloud0).

In tests for focal and ussuri, rados bench was used to drive some I/O,
and randomly chosen PGs were repeatedly deep-scrubbed in a loop (ceph pg
deep-scrub <pgid>) to keep the scrubbing process running at all times.


** Attachment added: "sru_ceph15.2.8_ussuri.txt"
   https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1909162/+attachment/5466041/+files/sru_ceph15.2.8_ussuri.txt


[Bug 1909162] Re: cluster log slow request spam

2021-02-22 Thread Ponnuvel Palaniyappan
Please ignore my comment #18. Posted accidentally in the wrong bug.


[Bug 1911900] Re: [SRU] Active scrub blocks upmap balancer

2021-02-23 Thread Ponnuvel Palaniyappan
Same steps followed for installing & testing the groovy ceph packages
(15.2.8-0ubuntu0.20.10.1).

** Attachment added: "sru_groovy.txt"
   https://bugs.launchpad.net/cloud-archive/+bug/1911900/+attachment/5466340/+files/sru_groovy.txt

** Tags removed: verification-needed-groovy
** Tags added: verification-done-groovy

** Tags removed: verification-needed
** Tags added: verification-done


[Bug 1913284] Re: [plugin] [bcache] add a new bcache plugin

2021-02-26 Thread Ponnuvel Palaniyappan
VERIFICATION DONE

And FAILED for Focal.

1. Installed Focal
2. Ran wipefs on two devices: one cache and one backing
3. Set up a cache device on an NVMe drive (/dev/nvme0n1p1)
4. Set up a backing device on an HDD partition (/dev/sdb1)
5. Created the bcache setup: sudo make-bcache -B /dev/sdb1 -C /dev/nvme0n1p1
6. Enabled -proposed and installed the latest sosreport
7. Generated a report with sosreport -a --all-logs


# ls -l /dev/sdb1
brw-rw---- 1 root disk 8, 17 Feb 25 22:48 /dev/sdb1
# ls -l /dev/bcache0
brw-rw---- 1 root disk 252, 0 Feb 25 22:48 /dev/bcache0
# dpkg -l | grep sosreport
ii  sosreport  4.0-1~ubuntu0.20.04.4  amd64  Set of tools to gather troubleshooting data from a system

The bcache module is loaded:
# lsmod | grep bcache
bcache                245760  3
crc64                  16384  1 bcache


But the generated sosreport doesn't contain the bcache stats. Looking at
the verbose output (-v option), the bcache plugin isn't triggered at all
on Focal.
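
A quick way to check whether a generated report archive actually contains bcache data is to scan the member names. This is a hypothetical helper, not part of sosreport, and the `sos_commands/bcache/` layout it matches on is an assumption about where the plugin writes its output:

```python
# Return True if the sos report tarball contains any bcache-related entries.
import tarfile

def has_bcache_data(report_path):
    with tarfile.open(report_path) as tar:
        return any("bcache" in name for name in tar.getnames())
```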

The tests originally done (when submitting the patches upstream) still
work with the latest bcache.py on Bionic.

** Tags removed: verification-needed-focal
** Tags added: verification-failed-focal


[Bug 1913284] Re: [plugin] [bcache] add a new bcache plugin

2021-02-26 Thread Ponnuvel Palaniyappan
The problem is that IndependentPlugin is not working in Focal's
sosreport. The bcache plugin uses IndependentPlugin (that is, it's not
tied to any specific distro).

IndependentPlugin broke at some point after Bionic and has been fixed
upstream in
https://github.com/sosreport/sos/commit/a36e1b83040f3f2c63912d4601f4b33821cd4afb
(it's still working on Bionic).

However, the sosreport packages on Focal, Groovy, and Hirsute all have
the broken code.

We can fix this in one of two ways:
- Sync the sosreport source with upstream on Focal, Groovy, and Hirsute to fix
IndependentPlugin
- Change the bcache plugin to specify individual distro subclasses
(DebianPlugin, UbuntuPlugin, RedHatPlugin, etc.)
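
The second option would look roughly like this. It's only a sketch: the real base classes come from sos.report.plugins, so self-contained stubs stand in for them here, and the class body is illustrative rather than the actual plugin code:

```python
# Stub stand-ins for the sos.report.plugins base classes, so the
# snippet runs on its own.
class Plugin:
    pass

class DebianPlugin:
    pass

class UbuntuPlugin:
    pass

class RedHatPlugin:
    pass

# Instead of `class Bcache(Plugin, IndependentPlugin)`, tag each distro
# explicitly so the plugin is triggered even on releases where
# IndependentPlugin is broken.
class Bcache(Plugin, DebianPlugin, UbuntuPlugin, RedHatPlugin):
    plugin_name = "bcache"
    short_desc = "Bcache statistics"
```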


[Bug 1913284] Re: [plugin] [bcache] add a new bcache plugin

2021-03-01 Thread Ponnuvel Palaniyappan
FYI: upstream sosreport works as expected for bcache on both Bionic and
Focal.


[Bug 1917288] Re: Missing to package ceph-kvstore-tool, ceph-monstore-tool, ceph-osdomap-tool in bionic-train UCA release

2021-03-01 Thread Ponnuvel Palaniyappan
** Tags added: sts


[Bug 1914584] Re: radosgw-admin user create error message confusing if user with email already exists

2021-03-01 Thread Ponnuvel Palaniyappan
Attaching the debdiff for Luminous (based on Matthew Vernon's patch).


** Description changed:

- Hi,
+ [Impact]
  
- The error message if you try and create an S3 user with an email address
- that is already associated with another S3 account is very confusing.
+ When creating a new S3 user, the error message is confusing if the email
+ address used is already associated with another S3 account.
  
  To reproduce:
  
  radosgw-admin user create --uid=foo --display-name="Foo test" 
--email=bar@domain.invalid
  #[ success ]
  radosgw-admin user create --uid=test --display-name="AN test" 
--email=bar@domain.invalid
  could not create user: unable to parse parameters, user id mismatch, 
operation id: foo does not match: test
  
- I've reported this upstream as https://tracker.ceph.com/issues/49137 and
- will send a patch against master there in due course, but that won't get
- backported to Luminous.
+ As a result, it's completely unclear what went wrong with the user
+ creation.
  
- I attach a patch that improves the error message thus:
- radosgw-admin user create --uid=test --display-name="AN test" 
--email=bar@domain.invalid
- could not create user: unable to create user test because user id foo already 
exists with email bar@domain.invalid
+ [Test case]
  
- Could this be applied to the luminous Ceph packages you ship please?
+ Create an S3 account via radosgw-admin. Then create another user but use
+ the same email address - it should provide a clear description of what
+ the problem is.
  
- Thanks,
+ [Where problems could occur]
  
- Matthew
+ The new message may yet be unclear or could complain that an email
+ exists even though it doesn't exist (false positive). It's an improved
+ diagnostic by checking if the email id exists. Perhaps, user creation
+ might become problematic if the fix doesn't work.
  
- ProblemType: Bug
- DistroRelease: Ubuntu 18.04
- Package: ceph-common 12.2.13-0ubuntu0.18.04.6
- ProcVersionSignature: Ubuntu 4.15.0-106.107-generic 4.15.18
- Uname: Linux 4.15.0-106-generic x86_64
- ApportVersion: 2.20.9-0ubuntu7.23
- Architecture: amd64
- Date: Thu Feb  4 11:13:55 2021
- ProcEnviron:
-  LANG=en_US.UTF-8
-  TERM=xterm
-  SHELL=/bin/bash
-  XDG_RUNTIME_DIR=
-  PATH=(custom, no user)
- SourcePackage: ceph
- UpgradeStatus: Upgraded to bionic on 2018-11-20 (807 days ago)
- modified.conffile..etc.logrotate.d.ceph-common: [modified]
- mtime.conffile..etc.default.ceph: 2018-03-13T17:08:17.501935
- mtime.conffile..etc.logrotate.d.ceph-common: 2019-06-17T16:40:55.496716
+ [Other Info]
+ - The patch was provided by Matthew Vernon (attached here)
+ - Upstream tracker: https://tracker.ceph.com/issues/49137
+ - Upstream PR: https://github.com/ceph/ceph/pull/39293
+ - Backported to Pacific, Octopus, and Nautilus upstream releases. Luminous is 
EOL'ed upstream, so we'd like to backport to Luminous (Bionic/queens).

** Summary changed:

- radosgw-admin user create error message confusing if user with email already 
exists
+ [SRU] radosgw-admin user create error message confusing if user with email 
already exists

** Tags added: sts-sru-needed

** Attachment added: "debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1914584/+attachment/5471578/+files/debdiff.txt


[Bug 1913284] Re: [plugin] [bcache] add a new bcache plugin

2021-03-02 Thread Ponnuvel Palaniyappan
A third option is to cherry-pick the upstream commit that fixed the
broken IndependentPlugin. It appears that Ubuntu's sosreport doesn't get
a "full sync" regularly; rather, features/commits are cherry-picked as
needed. So this option might be the one that best fits the current
process, if I understand it correctly.


[Bug 1917494] [NEW] ceph-mgr hangs in large clusters

2021-03-02 Thread Ponnuvel Palaniyappan
Public bug reported:

This is to track and SRU the upstream bug [0]. In large clusters,
there's a GIL deadlock that occurs (or becomes observable) when ceph-mgr
receives a high number of requests from the Prometheus module.

This bug has likely existed at least since the initial release of
Nautilus.

Upstream fixed this recently [2], targeted for Octopus 15.2.9 (the next
point release, not yet released). So this tracker is to assess the
impact and potentially backport the fix to the Ubuntu/UCA packages
before 15.2.9 is released.


N.B: We have previously backported a similar bug [1]. But this is
unrelated.

[0] https://tracker.ceph.com/issues/39264
[1] https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1906496
[2] https://github.com/ceph/ceph/pull/38677

** Affects: ceph (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: seg

** Tags added: seg


[Bug 1913284] Re: [plugin] [bcache] add a new bcache plugin

2021-03-03 Thread Ponnuvel Palaniyappan
Thanks, Eric.

I'll update #1917651 with the debdiff.


[Bug 1917651] Re: IndependentPlugin is not working

2021-03-03 Thread Ponnuvel Palaniyappan
** Attachment added: "focal-debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1917651/+attachment/5472231/+files/focal-debdiff.txt


[Bug 1917651] Re: IndependentPlugin is not working

2021-03-03 Thread Ponnuvel Palaniyappan
Attaching debdiff for focal.

** Attachment added: "focal-debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1917651/+attachment/5472232/+files/focal-debdiff.txt


[Bug 1917651] Re: IndependentPlugin is not working

2021-03-03 Thread Ponnuvel Palaniyappan
Attached debdiff for groovy.

** Attachment added: "groovy-debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1917651/+attachment/5472233/+files/groovy-debdiff.txt


[Bug 1913284] Re: [plugin] [bcache] add a new bcache plugin

2021-03-03 Thread Ponnuvel Palaniyappan
Tested using the patch from [0] and it works (i.e. the generated
sosreport contains the bcache stats). Attached the test sos report here.

So once this is re-uploaded, I'll repeat the verification.

[0]
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1917651/comments/3

** Attachment added: "sosreport-buneary-1913284-2021-03-03-iwdqcjs.tar.xz"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1913284/+attachment/5472234/+files/sosreport-buneary-1913284-2021-03-03-iwdqcjs.tar.xz


[Bug 1914911] Re: [SRU] bluefs doesn't compact log file

2021-05-28 Thread Ponnuvel Palaniyappan
Verified that log compactions occur in bluefs with read/write I/O.
Attaching test notes.

** Attachment added: "queens_sru_1914911"
   https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1914911/+attachment/5500895/+files/queens_sru_1914911

** Tags removed: verification-queens-needed
** Tags added: verification-queens-done


[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists

2021-06-17 Thread Ponnuvel Palaniyappan
** Changed in: cloud-archive/queens
   Status: Triaged => Won't Fix

** Changed in: cloud-archive/rocky
   Status: Triaged => Won't Fix

** Changed in: cloud-archive/stein
   Status: Triaged => Won't Fix


[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists

2021-04-27 Thread Ponnuvel Palaniyappan
In the SRU verification test, it doesn't quite work as intended. I
believe the problem is with the upstream patch [0].

I've asked for clarification/confirmation [1]. I'm afraid I'll then have
to mark the verifications as failed. This obviously affects the 15.2.11
release as well. I'll try to get this fixed upstream as soon as possible
and re-submit the patches. Please let me know if there's anything else
that can be (or needs to be) done.


[0] https://github.com/ceph/ceph/pull/39293
[1] https://github.com/ceph/ceph/pull/39293#issuecomment-827751233


[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists

2021-04-28 Thread Ponnuvel Palaniyappan
Submitted patch upstream: https://github.com/ceph/ceph/pull/41065


[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists

2021-04-28 Thread Ponnuvel Palaniyappan
** Tags removed: verification-needed verification-needed-focal 
verification-needed-groovy
** Tags added: verification-failed verification-failed-focal 
verification-failed-groovy


[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists

2021-04-28 Thread Ponnuvel Palaniyappan
Confirmed that the patch needs rework, so I'm marking all verifications
failed.

I've opened an issue in the upstream tracker [0] and submitted a patch
upstream [1]. Once that is approved, backports to Nautilus, Octopus, and
Pacific will follow.

I am not sure how the 15.2.11 release should be handled: whether we wait
for upstream acceptance or proceed without this patch. Please let me
know what you think, James/Corey.


[0] https://tracker.ceph.com/issues/50554
[1] https://github.com/ceph/ceph/pull/41065

** Bug watch added: tracker.ceph.com/issues #50554
   http://tracker.ceph.com/issues/50554


[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists

2021-04-29 Thread Ponnuvel Palaniyappan
** Tags removed: verification-ussuri-needed
** Tags added: verification-ussuri-failed


[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists

2021-04-30 Thread Ponnuvel Palaniyappan
This (incorrect) patch is in upstream master and has been backported to
Nautilus, Octopus, and Pacific. It isn't critical and doesn't break
functionality, except for an incorrect error message if modification of
user attributes fails. However, the reported problem still exists, so
I've marked all verifications as failed.


[Bug 1911900] Re: [SRU] Active scrub blocks upmap balancer

2021-04-12 Thread Ponnuvel Palaniyappan
The regression that this patch fixes was never introduced into (or
backported to) Luminous upstream. So this doesn't affect Bionic (also
confirmed by checking Ubuntu's latest Bionic source).

** Changed in: ceph (Ubuntu Bionic)
   Status: New => Invalid


[Bug 1936136] Re: ceph on bcache performance regression

2021-07-15 Thread Ponnuvel Palaniyappan
** Tags added: sts


[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists

2021-07-29 Thread Ponnuvel Palaniyappan
** Changed in: cloud-archive/victoria
   Status: Triaged => New

** Changed in: cloud-archive/train
   Status: Triaged => Won't Fix

** Changed in: cloud-archive/ussuri
   Status: Fix Committed => New

** Changed in: cloud-archive
   Status: Fix Released => New

** Changed in: ceph (Ubuntu)
   Status: Fix Released => New

** Changed in: ceph (Ubuntu Bionic)
   Status: Triaged => New

** Changed in: ceph (Ubuntu Focal)
   Status: Fix Released => New

** Changed in: ceph (Ubuntu Groovy)
   Status: Fix Released => New

** Changed in: ceph (Ubuntu Hirsute)
   Status: Fix Released => New


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-11-09 Thread Ponnuvel Palaniyappan
** Changed in: cloud-archive/ussuri
 Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-11-09 Thread Ponnuvel Palaniyappan
** Attachment added: "out__ussuri_before.txt"
   https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1894453/+attachment/5432708/+files/out__ussuri_before.txt


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-11-09 Thread Ponnuvel Palaniyappan
SRU tests are good for ussuri (the observed perf numbers are better).
Did a short rados benchmark and attached the results (before and after
the upgrade).

The steps taken were:
- Upgrade the packages on the monitor and OSDs in order.
- Restart the monitor, manager, and OSDs in order.
- Repeat the same benchmark test.

Attached files have the output of the commands:
 - lsb_release -a
 - dpkg -l | grep ceph
 - ceph versions
 - ceph osd pool create scbench 128 128
 - rados bench -p scbench 10 write
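
For the before/after comparison, the summary figure of each rados bench run can be pulled out with a small helper. This is a hypothetical script, not part of the attached test notes; only the "Bandwidth (MB/sec):" label is taken from rados bench's summary output:

```python
# Extract the average write bandwidth from a `rados bench` summary.
def bench_bandwidth(output):
    """Return the Bandwidth (MB/sec) value from rados bench output."""
    for line in output.splitlines():
        if line.strip().startswith("Bandwidth (MB/sec):"):
            # Take everything after the first colon and parse the number.
            return float(line.split(":", 1)[1])
    raise ValueError("no bandwidth summary found in output")
```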


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-11-09 Thread Ponnuvel Palaniyappan
** Attachment added: "out__ussuri_after.txt"
   https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1894453/+attachment/5432709/+files/out__ussuri_after.txt

** Tags removed: verification-ussuri-needed
** Tags added: verification-ussuri-done


[Bug 1899180] Re: autopkgtest almost always fails in Bionic on amd64

2020-11-10 Thread Ponnuvel Palaniyappan
I've encountered this failure, too:

https://objectstorage.prodstack4-5.canonical.com/v1/AUTH_77e2ada1e7a84929a74ba3b87153c0ac
/autopkgtest-
bionic/bionic/amd64/libv/libvirt/20201109_214647_148b5@/log.gz

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1899180

Title:
  autopkgtest almost always fails in Bionic on amd64

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1899180/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-11-10 Thread Ponnuvel Palaniyappan
The smoke-lxc test failure (reported in comment #37) could be due to
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1899180.

LP#1899180 suggests this could be treated as flaky. If it's easy, could
you try re-running the regression tests for 12.2.13-0ubuntu0.18.04.5 to
see if it's indeed a Heisenbug.


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-11-11 Thread Ponnuvel Palaniyappan
** Changed in: cloud-archive/stein
 Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-11-16 Thread Ponnuvel Palaniyappan
^^ The above is for Stein verification.

** Tags removed: verification-stein-needed
** Tags added: verification-stein-done


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-11-16 Thread Ponnuvel Palaniyappan
Performed the same steps as for Ussuri (comment #34) and attached the results here.

** Attachment added: "Stein_verification.txt"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1894453/+attachment/5434835/+files/Stein_verification.txt


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-11-16 Thread Ponnuvel Palaniyappan
** Changed in: cloud-archive/train
 Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-11-16 Thread Ponnuvel Palaniyappan
Test results for Train.

** Attachment added: "train_verification.txt"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1894453/+attachment/5434889/+files/train_verification.txt

** Tags removed: verification-train-needed
** Tags added: verification-train-done


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-11-17 Thread Ponnuvel Palaniyappan
Re-run of the regression tests is OK for 12.2.13-0ubuntu0.18.04.5 [0].

Test results for 12.2.13-0ubuntu0.18.04.5 are good. Attached the before
& after test results.

[0] https://people.canonical.com/~ubuntu-archive/proposed-
migration/bionic/update_excuses.html#ceph

** Attachment added: "bionic_verification.txt"
   
https://bugs.launchpad.net/cloud-archive/train/+bug/1894453/+attachment/5435270/+files/bionic_verification.txt

** Tags removed: verification-needed verification-needed-bionic
** Tags added: verification-done verification-done-bionic


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-11-17 Thread Ponnuvel Palaniyappan
James Page has done focal-proposed verification already (comment #43) -
with that, all tests are done and verified. Please let me know if you
have any questions.


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-12-04 Thread Ponnuvel Palaniyappan
Tested queens-proposed following the same steps noted in #34. Like the
previous results, this shows improvements as well. Attached the test
notes.

** Changed in: cloud-archive/queens
 Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)

** Changed in: ceph (Ubuntu Bionic)
 Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-12-04 Thread Ponnuvel Palaniyappan
Tested queens-proposed following the same steps noted in #34. Like the
previous results, this shows improvements as well. Attached the test
notes.

** Attachment added: "queens_out.beore"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1894453/+attachment/5440913/+files/queens_out.beore


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-12-04 Thread Ponnuvel Palaniyappan
** Attachment added: "queens_out.after"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1894453/+attachment/5440914/+files/queens_out.after

** Tags removed: verification-queens-needed
** Tags added: verification-queens-done


[Bug 1906496] Re: mgr can be very slow in a large ceph cluster

2020-12-05 Thread Ponnuvel Palaniyappan
** Changed in: ceph (Ubuntu)
   Status: New => Incomplete

** Changed in: ceph (Ubuntu)
   Status: Incomplete => New

** Changed in: ceph (Ubuntu)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1906496

Title:
  mgr can be very slow in a large ceph cluster

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1906496/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1906496] Re: mgr can be very slow in a large ceph cluster

2020-12-07 Thread Ponnuvel Palaniyappan
Attaching diff for Bionic (Ceph 12.2.13) - same as before but easier to
read.

** Attachment added: "bug1906496.patch-bionic-12.2.13"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1906496/+attachment/5441740/+files/bug1906496.patch-bionic-12.2.13


[Bug 1906496] Re: mgr can be very slow in a large ceph cluster

2020-12-07 Thread Ponnuvel Palaniyappan
Attaching debdiff for Bionic (Ceph 12.2.13).

** Attachment added: "Bionic-Ceph-12.2.13-debdiff"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1906496/+attachment/5441739/+files/debdiff


[Bug 1906496] Re: mgr can be very slow in a large ceph cluster

2020-12-08 Thread Ponnuvel Palaniyappan
** Changed in: ceph (Ubuntu)
   Status: Confirmed => Won't Fix

** Changed in: ceph (Ubuntu)
   Status: Won't Fix => In Progress


[Bug 1906496] Re: mgr can be very slow in a large ceph cluster

2020-12-09 Thread Ponnuvel Palaniyappan
debdiff for 13.2.9

** Attachment added: "debdiff-ceph-13.2.9"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1906496/+attachment/5442308/+files/debdiff-ceph-13.2.9


[Bug 1906496] Re: mgr can be very slow in a large ceph cluster

2020-12-09 Thread Ponnuvel Palaniyappan
Patch file for 13.2.9

** Attachment added: "bug1906496.patch-13.2.9"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1906496/+attachment/5442309/+files/bug1906496.patch-13.2.9


[Bug 1906496] Re: [SRU] mgr can be very slow in a large ceph cluster

2020-12-10 Thread Ponnuvel Palaniyappan
** Summary changed:

- mgr can be very slow in a large ceph cluster
+ [SRU] mgr can be very slow in a large ceph cluster

** Description changed:

- upstream implemented a new feature [1] that will check/report those long
- network ping times between osds, but it introduced an issue that ceph-
- mgr might be very slow because it needs to dump all the new osd network
- ping stats [2] for some tasks, this can be bad especially when the
- cluster has large number of osds.
+ [Impact] 
+ Ceph upstream implemented a new feature [1] that will check/report those long 
network ping times between osds, but it introduced an issue that ceph-mgr might 
be very slow because it needs to dump all the new osd network ping stats [2] 
for some tasks, this can be bad especially when the cluster has large number of 
osds.
  
- Since these kind osd network ping stats doesn't need to be exposed to the 
python mgr module.
- so, it only makes the mgr doing more work than it needs to, it could cause 
the mgr slow or even hang and could cause the cpu usage of mgr process 
constantly high. the fix is to disable the ping time dump for those mgr python 
modules.
+ Since these kind osd network ping stats doesn't need to be exposed to
+ the python mgr module. so, it only makes the mgr doing more work than it
+ needs to, it could cause the mgr slow or even hang and could cause the
+ cpu usage of mgr process constantly high. The fix is to disable the ping
+ time dump for those mgr python modules.
+ 
+ This resulted in ceph-mgr not responding to commands and/or hanging (and
+ had to be restarted) in clusters with a large number of OSDs.
+ 
+ [0] is the upstreambug. It was backported to Nautilus but rejected for
+ Luminous and Mimic because they reached EOL in upstream. But I want to
+ backport to these two releases Ubuntu/UCA.
  
  The major fix from upstream is here [3], and also I found an improvement
  commit [4] that submitted later in another PR.
  
- We need to backport them to bionic Luminous and Mimic(Stein), Nautilus
- and Octopus have the fix
+ [Test Case]
+ Deploy a Ceph cluster (Luminous 12.2.13 or Mimic 13.2.9) with a large number of 
Ceph OSDs (600+). During normal operations of the cluster, as the ceph-mgr 
dumps the network ping stats regularly, this problem would manifest. This is 
relatively hard to reproduce as the ceph-mgr may not always get overloaded and 
thus not hang.
  
+ [Regression Potential]
+ Fix has been accepted upstream (the changes are here in "sync" with upstream 
to the extent these old releases match the latest source code) and have been 
confirmed to work. So the risk is minimal.
+ 
+ At worst, this could affect modules that consume the stats from ceph-mgr
+ (such as prometheus or other monitoring scripts/tools) and thus becomes
+ less useful. But still shouldn't cause any problems to the operations of
+ the cluster itself.
+ 
+ [Other Info]
+ - In addition to the fix from [1], another commit [4] is also cherry-picked 
and backported here - this was also accepted upstream.
+ 
+ - Since the ceph-mgr hangs when affected, this also impact sosreport
+ collection - commands time out as the mgr doesn't respond and thus info
+ get truncated/not collected in that case. This fix should help avoid
+ that problem in sosreports.
+ 
+ [0] https://tracker.ceph.com/issues/43364
  [1] https://github.com/ceph/ceph/pull/28755
  [2] 
https://github.com/ceph/ceph/pull/28755/files#diff-5498d83111f1210998ee186e98d5836d2bce9992be7648addc83f59e798cddd8L430
  [3] https://github.com/ceph/ceph/pull/32406
  [4] 
https://github.com/ceph/ceph/pull/32554/commits/1112584621016c4a8cac1bedb1a1b8b17c394f7f


[Bug 1906496] Re: [SRU] mgr can be very slow in a large ceph cluster

2020-12-13 Thread Ponnuvel Palaniyappan
@Corey, yes, I am happy to do the SRU verification when the packages are
available. I've updated the [Test case] section to note a simplified,
functional test.

** Description changed:

- [Impact] 
+ [Impact]
  Ceph upstream implemented a new feature [1] that will check/report those long 
network ping times between osds, but it introduced an issue that ceph-mgr might 
be very slow because it needs to dump all the new osd network ping stats [2] 
for some tasks, this can be bad especially when the cluster has large number of 
osds.
  
  Since these kind osd network ping stats doesn't need to be exposed to
  the python mgr module. so, it only makes the mgr doing more work than it
  needs to, it could cause the mgr slow or even hang and could cause the
  cpu usage of mgr process constantly high. The fix is to disable the ping
  time dump for those mgr python modules.
  
  This resulted in ceph-mgr not responding to commands and/or hanging (and
  had to be restarted) in clusters with a large number of OSDs.
  
  [0] is the upstreambug. It was backported to Nautilus but rejected for
  Luminous and Mimic because they reached EOL in upstream. But I want to
  backport to these two releases Ubuntu/UCA.
  
  The major fix from upstream is here [3], and also I found an improvement
  commit [4] that submitted later in another PR.
  
  [Test Case]
  Deploy a Ceph cluster (Luminous 12.2.13 or Mimic 13.2.9) with a large number of 
Ceph OSDs (600+). During normal operations of the cluster, as the ceph-mgr 
dumps the network ping stats regularly, this problem would manifest. This is 
relatively hard to reproduce as the ceph-mgr may not always get overloaded and 
thus not hang.
+ 
+ A simpler version could be to deploy a Ceph cluster with as many OSDs as
+ the hardware/system setup allows and drive I/O on the cluster for
+ sometime. Then various queries could be sent to the manager to verify it
+ does report and doesn't get stuck.
  
  [Regression Potential]
  Fix has been accepted upstream (the changes are here in "sync" with upstream 
to the extent these old releases match the latest source code) and have been 
confirmed to work. So the risk is minimal.
  
  At worst, this could affect modules that consume the stats from ceph-mgr
  (such as prometheus or other monitoring scripts/tools) and thus becomes
  less useful. But still shouldn't cause any problems to the operations of
  the cluster itself.
  
  [Other Info]
  - In addition to the fix from [1], another commit [4] is also cherry-picked 
and backported here - this was also accepted upstream.
  
  - Since the ceph-mgr hangs when affected, this also impact sosreport
  collection - commands time out as the mgr doesn't respond and thus info
  get truncated/not collected in that case. This fix should help avoid
  that problem in sosreports.
  
  [0] https://tracker.ceph.com/issues/43364
  [1] https://github.com/ceph/ceph/pull/28755
  [2] 
https://github.com/ceph/ceph/pull/28755/files#diff-5498d83111f1210998ee186e98d5836d2bce9992be7648addc83f59e798cddd8L430
  [3] https://github.com/ceph/ceph/pull/32406
  [4] 
https://github.com/ceph/ceph/pull/32554/commits/1112584621016c4a8cac1bedb1a1b8b17c394f7f

** Description changed:

  [Impact]
  Ceph upstream implemented a new feature [1] that will check/report those long 
network ping times between osds, but it introduced an issue that ceph-mgr might 
be very slow because it needs to dump all the new osd network ping stats [2] 
for some tasks, this can be bad especially when the cluster has large number of 
osds.
  
  Since these kind osd network ping stats doesn't need to be exposed to
  the python mgr module. so, it only makes the mgr doing more work than it
  needs to, it could cause the mgr slow or even hang and could cause the
  cpu usage of mgr process constantly high. The fix is to disable the ping
  time dump for those mgr python modules.
  
  This resulted in ceph-mgr not responding to commands and/or hanging (and
  had to be restarted) in clusters with a large number of OSDs.
  
  [0] is the upstreambug. It was backported to Nautilus but rejected for
  Luminous and Mimic because they reached EOL in upstream. But I want to
  backport to these two releases Ubuntu/UCA.
  
  The major fix from upstream is here [3], and also I found an improvement
  commit [4] that submitted later in another PR.
  
  [Test Case]
  Deploy a Ceph cluster (Luminous 12.2.13 or Mimic 13.2.9) with a large number of 
Ceph OSDs (600+). During normal operations of the cluster, as the ceph-mgr 
dumps the network ping stats regularly, this problem would manifest. This is 
relatively hard to reproduce as the ceph-mgr may not always get overloaded and 
thus not hang.
  
  A simpler version could be to deploy a Ceph cluster with as many OSDs as
- the hardware/system setup allows and drive I/O on the cluster for
- sometime. Then various queries could be sent to the manager to verify it
- does report and doesn't get stuck.
+ the hardware/system setup allows (not necessarily 600+) and drive I/O on
+ the cluster for sometime. Then various queries could be sent to the
+ manager to verify it does report and doesn't get stuck.

[Bug 1906496] Re: [SRU] mgr can be very slow in a large ceph cluster

2020-12-14 Thread Ponnuvel Palaniyappan
** Description changed:

  [Impact]
- Ceph upstream implemented a new feature [1] that will check/report those long 
network ping times between osds, but it introduced an issue that ceph-mgr might 
be very slow because it needs to dump all the new osd network ping stats [2] 
for some tasks, this can be bad especially when the cluster has large number of 
osds.
+ Ceph upstream implemented a new feature [1] that will check/report those long 
network ping times between OSDs, but it introduced an issue that ceph-mgr might 
be very slow because it needs to dump all the new OSD network ping stats [2] 
for some tasks, this can be bad especially when the cluster has large number of 
OSDs.
  
- Since these kind osd network ping stats doesn't need to be exposed to
+ Since these kind OSD network ping stats doesn't need to be exposed to
  the python mgr module. so, it only makes the mgr doing more work than it
  needs to, it could cause the mgr slow or even hang and could cause the
- cpu usage of mgr process constantly high. The fix is to disable the ping
+ CPU usage of mgr process constantly high. The fix is to disable the ping
  time dump for those mgr python modules.
  
  This resulted in ceph-mgr not responding to commands and/or hanging (and
  had to be restarted) in clusters with a large number of OSDs.
  
- [0] is the upstreambug. It was backported to Nautilus but rejected for
+ [0] is the upstream bug. It was backported to Nautilus but rejected for
  Luminous and Mimic because they reached EOL in upstream. But I want to
  backport to these two releases Ubuntu/UCA.
  
  The major fix from upstream is here [3], and also I found an improvement
  commit [4] that submitted later in another PR.
  
  [Test Case]
  Deploy a Ceph cluster (Luminous 12.2.13 or Mimic 13.2.9) with a large number of 
Ceph OSDs (600+). During normal operations of the cluster, as the ceph-mgr 
dumps the network ping stats regularly, this problem would manifest. This is 
relatively hard to reproduce as the ceph-mgr may not always get overloaded and 
thus not hang.
  
  A simpler version could be to deploy a Ceph cluster with as many OSDs as
  the hardware/system setup allows (not necessarily 600+) and drive I/O on
- the cluster for sometime. Then various queries could be sent to the
- manager to verify it does report and doesn't get stuck.
+ the cluster for sometime (say, 60 mins). Then various queries could be
+ sent to the manager to verify it does report and doesn't get stuck.
  
  [Regression Potential]
  Fix has been accepted upstream (the changes are here in "sync" with upstream 
to the extent these old releases match the latest source code) and have been 
confirmed to work. So the risk is minimal.
  
  At worst, this could affect modules that consume the stats from ceph-mgr
  (such as prometheus or other monitoring scripts/tools) and thus becomes
  less useful. But still shouldn't cause any problems to the operations of
  the cluster itself.
  
  [Other Info]
  - In addition to the fix from [1], another commit [4] is also cherry-picked 
and backported here - this was also accepted upstream.
  
  - Since the ceph-mgr hangs when affected, this also impact sosreport
  collection - commands time out as the mgr doesn't respond and thus info
  get truncated/not collected in that case. This fix should help avoid
  that problem in sosreports.
  
  [0] https://tracker.ceph.com/issues/43364
  [1] https://github.com/ceph/ceph/pull/28755
  [2] 
https://github.com/ceph/ceph/pull/28755/files#diff-5498d83111f1210998ee186e98d5836d2bce9992be7648addc83f59e798cddd8L430
  [3] https://github.com/ceph/ceph/pull/32406
  [4] 
https://github.com/ceph/ceph/pull/32554/commits/1112584621016c4a8cac1bedb1a1b8b17c394f7f
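The scaling problem the description refers to can be illustrated with a back-of-envelope model: each OSD keeps ping stats for its heartbeat peers, so the blob the mgr serialized for every python-module query grows with OSD count times peer count. The constants below are assumptions chosen for illustration, not values taken from the Ceph source:

```python
# Illustrative model of the network-ping-stats dump size (assumed
# constants, not Ceph's actual serialization format or record sizes).
BYTES_PER_PEER_ENTRY = 200  # assumed bytes per peer ping record

def dump_size_bytes(num_osds, peers_per_osd):
    """Total size of a full ping-stats dump: every OSD contributes
    one record per heartbeat peer."""
    return num_osds * peers_per_osd * BYTES_PER_PEER_ENTRY

# With ~20 heartbeat peers per OSD, a 600-OSD cluster's dump is already
# megabytes of data re-serialized for each mgr python-module request:
for n in (60, 600, 6000):
    print(n, dump_size_bytes(n, 20))
```

Since the python mgr modules never needed these stats, skipping them in the dump (as the fix does) removes this cost entirely rather than just shrinking it.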


[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-10-07 Thread Ponnuvel Palaniyappan
@Corey,

Had a discussion with Daniel Hill in SEG, and we think the risk is low.
So, yes, we'd like to backport this to all releases. I am happy to do
SRU (and/or tests) once this is ready.


[Bug 1906496] Re: [SRU] mgr can be very slow in a large ceph cluster

2021-01-07 Thread Ponnuvel Palaniyappan
SRU tests:
Deployed a cluster with many OSDs with the new packages; I/O was driven from a 
VM (both reads & writes). Enabled a number of mgr modules, too. Under load, 
the cluster was functioning and the mgr was still responding. Attaching some 
relevant info on the tests here.
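The "mgr was still responding" check lends itself to a small watchdog. A sketch, where the ceph command named in the comment is a placeholder for whichever mgr-backed query the test drives (the demo uses a harmless stand-in so the snippet is self-contained):

```python
import subprocess
import time

def responds_within(cmd, timeout_s=10.0):
    """Run a status command and report (ok, elapsed); a hung ceph-mgr
    shows up here as a timeout rather than a result."""
    start = time.monotonic()
    try:
        subprocess.run(cmd, check=True, capture_output=True,
                       timeout=timeout_s)
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
        return False, time.monotonic() - start
    return True, time.monotonic() - start

# On a real cluster this would probe mgr-backed commands, e.g.
#   responds_within(["ceph", "osd", "pool", "stats"])
# Demo with a stand-in command:
ok, elapsed = responds_within(["python3", "-c", "print('ok')"])
print(ok)
```

Run in a loop while the benchmark drives I/O, any False result flags exactly the hang this SRU fixes.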


[Bug 1906496] Re: [SRU] mgr can be very slow in a large ceph cluster

2021-01-07 Thread Ponnuvel Palaniyappan
Stein UCA 13.2.9-0ubuntu0.19.04.1~cloud3

** Attachment added: "stein.sru"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1906496/+attachment/5450096/+files/stein.sru


[Bug 1906496] Re: [SRU] mgr can be very slow in a large ceph cluster

2021-01-07 Thread Ponnuvel Palaniyappan
Bionic 12.2.13-0ubuntu0.18.04.6

** Attachment added: "bionic.sru"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1906496/+attachment/5450097/+files/bionic.sru

** Tags removed: verification-needed-bionic verification-stein-needed
** Tags added: verification-needed-done verification-stein-done


[Bug 1910264] Re: [plugin][ceph] include time-sync-status for ceph mon

2021-01-07 Thread Ponnuvel Palaniyappan
bionic debdiff


** Changed in: sosreport (Ubuntu Bionic)
   Status: New => In Progress

** Changed in: sosreport (Ubuntu Focal)
   Status: New => In Progress

** Changed in: sosreport (Ubuntu Groovy)
   Status: New => Incomplete

** Changed in: sosreport (Ubuntu Groovy)
   Status: Incomplete => In Progress

** Patch added: "bionic.debdiff"
   
https://bugs.launchpad.net/ubuntu/focal/+source/sosreport/+bug/1910264/+attachment/5450255/+files/bionic.debdiff

[Bug 1910264] Re: [plugin][ceph] include time-sync-status for ceph mon

2021-01-07 Thread Ponnuvel Palaniyappan
focal debdiff

** Patch added: "focal.debdiff"
   
https://bugs.launchpad.net/ubuntu/focal/+source/sosreport/+bug/1910264/+attachment/5450256/+files/focal.debdiff

[Bug 1910264] Re: [plugin][ceph] include time-sync-status for ceph mon

2021-01-07 Thread Ponnuvel Palaniyappan
groovy debdiff


** Patch added: "groovy.debdiff"
   
https://bugs.launchpad.net/ubuntu/focal/+source/sosreport/+bug/1910264/+attachment/5450257/+files/groovy.debdiff

[Bug 1910264] Re: [plugin][ceph] include time-sync-status for ceph mon

2021-01-07 Thread Ponnuvel Palaniyappan
Hi Eric,

Attached debdiffs for bionic, focal, and groovy. Please let me know if
they're OK.

[Bug 1906496] Re: [SRU] mgr can be very slow in a large ceph cluster

2021-01-07 Thread Ponnuvel Palaniyappan
** Changed in: ceph (Ubuntu Bionic)
 Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)

** Changed in: cloud-archive/stein
 Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)

** Changed in: cloud-archive/queens
 Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)

[Bug 1906496] Re: [SRU] mgr can be very slow in a large ceph cluster

2021-01-07 Thread Ponnuvel Palaniyappan
** Description changed:

  [Impact]
  Ceph upstream implemented a new feature [1] that checks/reports long 
network ping times between OSDs, but it introduced an issue: ceph-mgr might 
be very slow because it needs to dump all the new OSD network ping stats [2] 
for some tasks; this can be especially bad when the cluster has a large 
number of OSDs.
  
  Since this kind of OSD network ping stats doesn't need to be exposed to
- the python mgr module. so, it only makes the mgr doing more work than it
+ the python mgr modules, dumping them only makes the mgr do more work than it
  needs to: it could make the mgr slow or even hang, and could keep the
  CPU usage of the mgr process constantly high. The fix is to disable the
  ping time dump for those mgr python modules.
  
  This resulted in ceph-mgr not responding to commands and/or hanging (and
- had to be restarted) in clusters with a large number of OSDs.
+ needing to be restarted) in clusters with many OSDs.
  
  [0] is the upstream bug. It was backported to Nautilus but rejected for
  Luminous and Mimic because they reached EOL upstream. But I want to
  backport it to these two releases in Ubuntu/UCA.
  
  The major fix from upstream is here [3], and I also found an improvement
  commit [4] that was submitted later in another PR.
  
  [Test Case]
  Deploy a Ceph cluster (Luminous 12.2.13 or Mimic 13.2.9) with a large 
number of Ceph OSDs (600+). During normal operation of the cluster, as the 
ceph-mgr dumps the network ping stats regularly, this problem would manifest. 
This is relatively hard to reproduce as the ceph-mgr may not always get 
overloaded and thus may not hang.
  
  A simpler version could be to deploy a Ceph cluster with as many OSDs as
  the hardware/system setup allows (not necessarily 600+) and drive I/O on
  the cluster for some time (say, 60 mins). Then various queries could be
  sent to the manager to verify it still reports stats and doesn't get stuck.
  
  [Regression Potential]
  The fix has been accepted upstream (the changes here are in "sync" with 
upstream to the extent these old releases match the latest source code) and 
has been confirmed to work. So the risk is minimal.
  
  At worst, this could affect modules that consume the stats from ceph-mgr
  (such as prometheus or other monitoring scripts/tools), making them less
  useful. But it still shouldn't cause any problems to the operation of the
  cluster itself.
  
  [Other Info]
  - In addition to the fix from [1], another commit [4] is also cherry-picked 
and backported here - this was also accepted upstream.
  
  - Since the ceph-mgr hangs when affected, this also impacts sosreport
  collection - commands time out as the mgr doesn't respond, and thus info
  gets truncated/not collected in that case. This fix should help avoid
  that problem in sosreports.
  
  [0] https://tracker.ceph.com/issues/43364
  [1] https://github.com/ceph/ceph/pull/28755
  [2] 
https://github.com/ceph/ceph/pull/28755/files#diff-5498d83111f1210998ee186e98d5836d2bce9992be7648addc83f59e798cddd8L430
  [3] https://github.com/ceph/ceph/pull/32406
  [4] 
https://github.com/ceph/ceph/pull/32554/commits/1112584621016c4a8cac1bedb1a1b8b17c394f7f
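
The blow-up described above can be illustrated with a small sketch: the
serialised payload grows with (number of OSDs x peers per OSD), so large
clusters pay a steep cost on every python module call. The dict layout below
is an assumption for illustration only, not Ceph's actual stats schema.

```python
import json

def fake_ping_dump(num_osds):
    """Synthetic stand-in for the per-OSD network ping stats the mgr
    used to serialise for every python module call (the dict layout
    here is illustrative, not Ceph's actual schema)."""
    return json.dumps({
        f"osd.{i}": {
            "network_ping_times": [
                {"peer": p, "interface": "back", "avg_1min_ms": 0.5}
                for p in range(num_osds - 1)  # each OSD tracks its peers
            ]
        }
        for i in range(num_osds)
    })

# Payload grows with (OSDs x peers), so a 10x larger cluster pays
# roughly a 100x serialisation cost on every module call.
small, large = len(fake_ping_dump(60)), len(fake_ping_dump(600))
print(round(large / small))
```

This is why the fix above, which stops handing these stats to the python mgr
modules, removes the bottleneck without changing what the OSDs collect.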

[Bug 1906496] Re: [SRU] mgr can be very slow in a large ceph cluster

2021-01-08 Thread Ponnuvel Palaniyappan
Queens verification of 12.2.13-0ubuntu0.18.04.6~cloud0

Ran the same tests on Queens - ceph-mgr was functional and responsive
under cluster load.


** Attachment added: "queens.sru"
   
https://bugs.launchpad.net/ubuntu/bionic/+source/ceph/+bug/1906496/+attachment/5450803/+files/queens.sru

** Tags removed: verification-needed verification-queens-needed
** Tags added: verification-done verification-queens-done

[Bug 1917651] Re: IndependentPlugin is not working

2021-03-04 Thread Ponnuvel Palaniyappan
Attached debdiff for hirsute.

** Description changed:

  [IMPACT]
  Regression found while doing the verification testing of (LP: #1913284)
  
  As described by my colleague Ponnuvel:
  
  "
  The problem is that IndependentPlugin is not working in Focal's sosreport. 
The bcache plugin uses IndependentPlugin (as in, it's not tied to any specific 
Distro).
  
  IndependentPlugin was broken at some point since Bionic and has been
  fixed upstream. (It's working on Bionic in 3.X series).
  
   sosreport | 3.9.1-1ubuntu0.18.04.3 | bionic-updates
   sosreport | 4.0-1~ubuntu0.20.04.4  | focal-proposed
   sosreport | 4.0-1ubuntu2.2         | groovy-proposed
   sosreport | 4.0-1ubuntu9           | hirsute
  
  However, the sos report package on Focal, Groovy, and Hirsute all have the 
broken code.
  "
  
  It currently impacts plugins relying on "IndependentPlugin" to run such
  as the bcache plugin.
  
  "IndependentPlugin" is a class for plugins that can run on any platform.
  
  [TEST PLAN]
+ The patch includes a fix for IndependentPlugin. Currently, the bcache plugin 
uses IndependentPlugin, so sosreport collection has to be tested on a machine 
with a bcache deployment. The cherry-picked commit includes a number of other 
changes, so the --all-logs and -a options would need to be used to ensure 
there are no other breakages.
  
- 
  
  [WHERE PROBLEM COULD OCCUR]
  
- 
+ The IndependentPlugin may still not work - in that case, it'd only
+ affect the bcache plugin. Worse, the changes could affect other Plugin
+ types and cause more regressions - affecting data collection for
+ multiple plugins.
+ 
  
  [OTHER INFORMATION]
  
  Upstream bug:
  https://github.com/sosreport/sos/pull/2018
  
  Upstream commit:
  
https://github.com/sosreport/sos/commit/a36e1b83040f3f2c63912d4601f4b33821cd4afb

** Attachment added: "hirsute-debdiff.txt"
   
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1917651/+attachment/5473096/+files/hirsute-debdiff.txt

[Bug 1917651] Re: IndependentPlugin is not working

2021-03-04 Thread Ponnuvel Palaniyappan
Done both - thanks, Eric!

[Bug 1915705] Re: Ceph block device permission gets wrongly reverted by udev

2021-03-09 Thread Ponnuvel Palaniyappan
James Page suggested this might need to be done in the packaging as
well.

Once the fix is finalized in charm-ceph-osd, I'll update the udev rules
in debian/udev/95-ceph-osd-lvm.rules.
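
For illustration, the kind of rule involved would look roughly like the
sketch below. The match keys and the osd-* LV naming are assumptions here,
not the finalized charm/packaging fix:

```
# Sketch: re-assert ceph ownership on OSD LVM volumes after udev events.
# The match values below are assumptions for illustration.
ACTION=="add|change", SUBSYSTEM=="block", \
  ENV{DM_VG_NAME}=="ceph-*", ENV{DM_LV_NAME}=="osd-*", \
  OWNER="ceph", GROUP="ceph", MODE="660"
```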

** Also affects: ceph (Ubuntu)
   Importance: Undecided
   Status: New

[Bug 1852441] Re: In bionic, one of the ceph packages installed causes chrony to auto-install even on lxd

2021-03-11 Thread Ponnuvel Palaniyappan
Quoting Ante Karamatić:
"If ntp/chrony is removed from ceph-mon, then ntp charm goes into error state 
if it's installed on ceph-mon units.

On other machines, ntp charm detects that it's in the container and then
reports that it is a container and that there's nothing to do.

In case of ceph-mon, now it goes into error state because chrony is not
there. So ntp charm should update the status before it checks if chrony
is installed."


** Also affects: ntp-charm
   Importance: Undecided
   Status: New

** Changed in: ntp-charm
 Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)

[Bug 1917494] Re: ceph-mgr hangs in large clusters

2021-03-16 Thread Ponnuvel Palaniyappan
This has been merged into upstream master and backported to Octopus [0].
Octopus release 15.2.9 contains the fix.

Proposed the backport for Nautilus:
https://github.com/ceph/ceph/pull/40047


[0] https://github.com/ceph/ceph/pull/38801

** Changed in: ceph (Ubuntu)
   Importance: Undecided => High

** Changed in: ceph (Ubuntu)
 Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)

** Tags added: sts

[Bug 1914911] Re: [SRU] bluefs doesn't compact log file

2021-03-16 Thread Ponnuvel Palaniyappan
** Description changed:

  [Impact]
  
  For a certain type of workload, bluefs might never compact the log
  file, which causes the bluefs log file to slowly grow to a huge size
  (sometimes bigger than 1TB on a 1.5TB device).
  
  There are more details in the bluefs perf counters when this issue happens:
  e.g.
  "bluefs": {
      "gift_bytes": 811748818944,
      "reclaim_bytes": 0,
      "db_total_bytes": 888564350976,
      "db_used_bytes": 867311747072,
      "wal_total_bytes": 0,
      "wal_used_bytes": 0,
      "slow_total_bytes": 0,
      "slow_used_bytes": 0,
      "num_files": 11,
      "log_bytes": 866545131520,
      "log_compactions": 0,
      "logged_bytes": 866542977024,
      "files_written_wal": 2,
      "files_written_sst": 3,
      "bytes_written_wal": 32424281934,
      "bytes_written_sst": 25382201
  }
  
  This bug could eventually cause the OSD to crash and fail to restart, as it 
can't get through the bluefs replay phase during boot.
  We might see the log below when trying to restart the OSD:
  bluefs mount failed to replay log: (5) Input/output error
  
  As we can see, log_compactions is 0, which means the log has never been
  compacted, and the log file size (log_bytes) is already 800+G. After
  compaction, the log file size would be reduced to around 1G.
  
  [Test Case]
  
  Deploy a test Ceph cluster (Luminous 12.2.13, which has the bug) and
  drive I/O. The compaction doesn't get triggered often when most I/O is
  reads. So fill up the cluster initially with lots of writes and then
  drive heavy reads (no writes). Then the problem should occur.
  Smaller-sized OSDs are OK as we're only interested in filling up the
  OSD and growing the bluefs log.
  
  [Where problems could occur]
  
  This fix has been part of all upstream releases since Mimic, so it has had 
quite a lot of "runtime".
  The changes ensure that compaction happens more often, which shouldn't 
cause any problems. I can't see any real regression risk.
  
  [Other Info]
-  - It's only needed for Luminous (Bionic). All new releases since have this 
already.
-  - Upstream PR: https://github.com/ceph/ceph/pull/17354
+  - It's only needed for Luminous (Bionic). All new releases since have this 
already.
+  - Upstream master PR: https://github.com/ceph/ceph/pull/17354
+  - Upstream Luminous PR: https://github.com/ceph/ceph/pull/34876/files
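
The counters quoted above can be checked mechanically. Below is a minimal
sketch over the JSON returned by `ceph daemon osd.<id> perf dump` (the
command is standard; the 10 GiB threshold is an assumed heuristic, not an
upstream value):

```python
import json

def bluefs_log_stuck(perf_dump_json, max_log_bytes=10 * 1024**3):
    """Flag an OSD whose bluefs log has never been compacted and has
    grown past a threshold (the threshold is an assumed heuristic)."""
    bluefs = json.loads(perf_dump_json)["bluefs"]
    return bluefs["log_compactions"] == 0 and bluefs["log_bytes"] > max_log_bytes

# Counters from the affected OSD in the description above:
sample = json.dumps({"bluefs": {"log_compactions": 0,
                                "log_bytes": 866545131520}})
print(bluefs_log_stuck(sample))  # True: never compacted, ~800G log
```

An OSD in the state shown above (log_compactions == 0 with log_bytes at
~800G) is exactly the condition this SRU addresses.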

[Bug 1914911] Re: [SRU] bluefs doesn't compact log file

2021-03-16 Thread Ponnuvel Palaniyappan
Attached debdiff for bionic (fixed the previous patch which had
additional unnecessary changes).

** Attachment added: "bionic.debdiff.fixed"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1914911/+attachment/5477289/+files/bionic.debdiff.fixed

[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-09-21 Thread Ponnuvel Palaniyappan
An update:

I have built rocksdb with & without RelWithDebInfo. Some differences
are:

 - the exact set of additional flags used with RelWithDebInfo are: -O2
-DNDEBUG -fno-omit-frame-pointer -momit-leaf-frame-pointer.

 - the size of rocksdb shared library (librocksdb.so): default-build =
156M, with-RelWithDebInfo = 235M (In the case of Ceph binaries, this
shared library isn't used separately but compiled into ceph-osd and
others. Similar comparison of ceph-osd binary size stripped: default-
build = 16M, with-RelWithDebInfo = 25M).

- There are ~5700 asserts in rocksdb which would disappear with
RelWithDebInfo.


I am doing further performance benchmarks between the two versions and will 
update once done.

[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-09-24 Thread Ponnuvel Palaniyappan
@Corey

> The individual building of rocksdb files doesn't give off much
information in the package builds to see compiler flags set.

The VERBOSE flag would show the full flags - however, that'd need changing 
the rules & rebuilding again. That we can see RelWithDebInfo is sufficient 
to confirm that it does pass those flags, as I've locally built rocksdb with 
the UCA/Ceph source using your cherry-picked patch
(https://paste.ubuntu.com/p/YyMVq33BgD/). So this is fine.

I've installed the Ceph packages from your PPA - they install ok.
However, I can't deploy a Ceph cluster as the default ceph-deploy
package conflicts with the packages in your PPA.

This is from a lxd container (ubuntu-daily:groovy):

Unpacking ceph-base (15.2.3-0ubuntu4~ubuntu20.10.1~ppa202009221514) ...
dpkg: error processing archive 
/tmp/apt-dpkg-install-3hzpv6/32-ceph-base_15.2.3-0ubuntu4~ubuntu20.10.1~ppa202009221514_amd64.deb
 (--unpack):
 trying to overwrite '/usr/share/man/man8/ceph-deploy.8.gz', which is also in 
package ceph-deploy 2.0.1-0ubuntu1
Selecting previously unselected package smartmontools.
Preparing to unpack .../33-smartmontools_7.1-1build1_amd64.deb ...
Unpacking smartmontools (7.1-1build1) ...
Errors were encountered while processing:
 
/tmp/apt-dpkg-install-3hzpv6/32-ceph-base_15.2.3-0ubuntu4~ubuntu20.10.1~ppa202009221514_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-09-28 Thread Ponnuvel Palaniyappan
Hi @Corey,

(Ignore my comments as I've managed to set up a cluster without needing
ceph-deploy).

I have installed the packages and set up a cluster. The cluster was
functional and I found no issues with the Groovy packages in my tests.

I have done some of Ceph's benchmarks [0] using the Ceph cluster. I
also did some separate tests on rocksdb with and without those flags.
Overall, the Ceph cluster tests, which mainly focused on I/O, didn't
show anything out of the ordinary. But the rocksdb benchmarks [1] [2]
showed that the optimized build works better in most cases.

Besides, as you know, this has already been enabled in Ceph upstream --
which we'll get in future releases. So I am positive that there is no
cause for concern with these packages that you've built for Groovy.


[0] 
https://tracker.ceph.com/projects/ceph/wiki/Benchmark_Ceph_Cluster_Performance
[1] https://github.com/facebook/rocksdb/wiki/Read-Modify-Write-Benchmarks
[2] 
https://github.com/facebook/rocksdb/wiki/RocksDB-In-Memory-Workload-Performance-Benchmarks


** Attachment added: "rocksdb benchmarks.txt"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1894453/+attachment/5415020/+files/rocksdb%20benchmarks.txt

[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-09-28 Thread Ponnuvel Palaniyappan
** Attachment added: "Ceph tests on Octopus - rocksdb optimized.txt"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1894453/+attachment/5415022/+files/Ceph%20tests%20on%20Octopus%20-%20rocksdb%20optimized.txt

[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-09-28 Thread Ponnuvel Palaniyappan
** Attachment added: "Ceph tests on Octopus - rocksdb UNoptimized.txt"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1894453/+attachment/5415021/+files/Ceph%20tests%20on%20Octopus%20-%20rocksdb%20UNoptimized.txt

[Bug 1894453] Re: Building Ceph packages with RelWithDebInfo

2020-09-29 Thread Ponnuvel Palaniyappan
@Tyler, indeed it does. My cluster tests were done in a "cloud-on-cloud"
environment, so I didn't want to read too much into the actual numbers
(whereas rocksdb tests were done locally in a tightly controlled
environment). As a whole, we can definitely say that RelWithDebInfo is
the way to go for rocksdb, too.

[Bug 1914584] Re: radosgw-admin user create error message confusing if user with email already exists

2021-02-05 Thread Ponnuvel Palaniyappan
** Also affects: ceph (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Groovy)
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

[Bug 1914584] Re: radosgw-admin user create error message confusing if user with email already exists

2021-02-05 Thread Ponnuvel Palaniyappan
** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/train
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/queens
   Importance: Undecided => Low

** Changed in: cloud-archive/queens
   Status: New => Triaged

** Changed in: cloud-archive/rocky
   Importance: Undecided => Low

** Changed in: cloud-archive/rocky
   Status: New => Triaged

** Changed in: cloud-archive/stein
   Importance: Undecided => Low

** Changed in: cloud-archive/stein
   Status: New => Triaged

** Changed in: cloud-archive/train
   Importance: Undecided => Low

** Changed in: cloud-archive/train
   Status: New => Triaged

** Changed in: cloud-archive/ussuri
   Importance: Undecided => Low

** Changed in: cloud-archive/ussuri
   Status: New => Triaged

** Changed in: cloud-archive/victoria
   Importance: Undecided => Low

** Changed in: cloud-archive/victoria
   Status: New => Triaged

** Changed in: ceph (Ubuntu Bionic)
   Importance: Undecided => Low

** Changed in: ceph (Ubuntu Bionic)
   Status: New => Triaged

** Changed in: ceph (Ubuntu Focal)
   Importance: Undecided => Low

** Changed in: ceph (Ubuntu Focal)
   Status: New => Triaged

** Changed in: ceph (Ubuntu Groovy)
   Importance: Undecided => Low

** Changed in: ceph (Ubuntu Groovy)
   Status: New => Triaged

** Changed in: ceph (Ubuntu Hirsute)
   Importance: Undecided => Low

** Changed in: ceph (Ubuntu Hirsute)
   Status: New => Triaged

[Bug 1913284] Re: [plugin] [bcache] add a new bcache plugin

2021-02-06 Thread Ponnuvel Palaniyappan
Hi @Eric,

Updated the description with SRU. Please let me know if anything needs
change/unclear. Thanks.

** Description changed:

- A new plugin for bcache stats are proposed upstream [0] and there's a PR
- currently under review [1].
+ [Impact]
+ 
+ Gather bcache stats as part of sos report.
+ 
+ Bcache is often deployed as a "frontend" (typically using SSDs) for Ceph
+ clusters with HDDs to improve performance. When we are dealing with
+ bcache performance or need to debug bcache, these stats are essential to
+ identify the problem. The newly added plugin collects various stats on
+ bcache.
  
  
- [0] https://github.com/sosreport/sos/issues/2378
- [1] https://github.com/sosreport/sos/pull/2384
+ [Test Case]
+ 
+ It's a new plugin. When sosreport is run on a system with bcache, a
+ number of small files from /sys/fs/bcache and /sys/block/bcache should
+ be collected with this plugin in place. On systems without bcache, this
+ should be a no-op.
+ 
+ 
+ [Where problems could occur]
+ 
+ Worst case scenarios could be:
+ - As with any plugin, this plugin could, in theory, run indefinitely and 
time out.
+ - It could affect performance when run on a live system. There's one known 
case where querying priority_stats [0] had such a problem. But that file is 
not collected as part of this plugin.
+ 
+ But otherwise, the stats collection of bcache devices (even if there are
+ many bcache devices deployed on a system) shouldn't affect anything on a
+ live system and shouldn't get anywhere close to the timeout.
+ 
+ 
+ [Other Info]
+  
+ upstream PR (merged): https://github.com/sosreport/sos/pull/2384
+ 
+ 
+ [0] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1840043

** Summary changed:

- [plugin] [bcache] add a new bcache plugin
+ [SRU] [plugin] [bcache] add a new bcache plugin
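
For reference, the general shape of such an sos 4.x plugin is sketched below,
with stand-in base classes so the sketch runs standalone. The real merged
plugin and the exact sysfs globs differ; the class body and paths here are
assumptions for illustration.

```python
# A real sos plugin would instead do:
#   from sos.report.plugins import Plugin, IndependentPlugin
# Minimal stand-ins are defined here so the sketch runs standalone.
class Plugin:
    def __init__(self):
        self.copy_specs = []

    def add_copy_spec(self, specs):
        # sos copies every file matching these globs into the report
        self.copy_specs.extend(specs)

class IndependentPlugin:
    """Marker base: the plugin is not tied to any specific distro."""

class Bcache(Plugin, IndependentPlugin):
    """Bcache statistics (sketch only; the merged plugin differs)."""
    plugin_name = 'bcache'
    files = ('/sys/fs/bcache',)  # only triggers where bcache exists

    def setup(self):
        # Small sysfs stat files only, so collection stays fast and is
        # effectively a no-op on systems without bcache.
        self.add_copy_spec([
            '/sys/block/bcache*/bcache/stats_total/*',
            '/sys/fs/bcache/*/stats_total/*',
            '/sys/fs/bcache/*/cache_available_percent',
        ])

p = Bcache()
p.setup()
print(len(p.copy_specs))  # 3 glob patterns registered
```

Because the plugin only registers copy specs for small stat files, it stays
well within sos's per-plugin timeout even with many bcache devices.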

[Bug 1914911] Re: bluefs doesn't compact log file

2021-02-08 Thread Ponnuvel Palaniyappan
** Changed in: ceph (Ubuntu)
 Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)

[Bug 1914911] Re: bluefs doesn't compact log file

2021-02-10 Thread Ponnuvel Palaniyappan
** Also affects: ceph (Ubuntu Bionic)
   Importance: Undecided
   Status: New

[Bug 1914911] Re: bluefs doesn't compact log file

2021-02-10 Thread Ponnuvel Palaniyappan
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

[Bug 1914911] Re: bluefs doesn't compact log file

2021-02-10 Thread Ponnuvel Palaniyappan
** Changed in: ceph (Ubuntu Bionic)
   Status: New => In Progress


[Bug 1914911] Re: bluefs doesn't compact log file

2021-02-10 Thread Ponnuvel Palaniyappan
** Changed in: ceph (Ubuntu Bionic)
 Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)


[Bug 1914911] Re: bluefs doesn't compact log file

2021-02-11 Thread Ponnuvel Palaniyappan
Attached debdiff for Bionic.

** Patch added: "bionic.debdiff"
   
https://bugs.launchpad.net/cloud-archive/+bug/1914911/+attachment/5462589/+files/bionic.debdiff

** Tags added: sts-sru-needed


[Bug 1914911] Re: [SRU] bluefs doesn't compact log file

2021-02-11 Thread Ponnuvel Palaniyappan
** Summary changed:

- bluefs doesn't compact log file
+ [SRU] bluefs doesn't compact log file


[Bug 1914911] Re: [SRU] bluefs doesn't compact log file

2021-02-11 Thread Ponnuvel Palaniyappan
** Description changed:

- For a certain type of workload, the bluefs might never compact the log file,
- which would cause the bluefs log file slowly grows to a huge size
+ [Impact]
+ 
+ For a certain type of workload, the bluefs might never compact the log
+ file, which would cause the bluefs log file to slowly grow to a huge size
  (some bigger than 1TB for a 1.5T device).
  
- This bug could eventually cause osd crash and failed to restart as it couldn't get through the bluefs replay phase during boot time.
- We might see below log when trying to restart the osd:
- bluefs mount failed to replay log: (5) Input/output error
- 
  There are more details in the bluefs perf counters when this issue happened:
- e.g. 
+ e.g.
  "bluefs": {
  "gift_bytes": 811748818944,
  "reclaim_bytes": 0,
  "db_total_bytes": 888564350976,
  "db_used_bytes": 867311747072,
  "wal_total_bytes": 0,
  "wal_used_bytes": 0,
  "slow_total_bytes": 0,
  "slow_used_bytes": 0,
  "num_files": 11,
  "log_bytes": 866545131520,
  "log_compactions": 0,
  "logged_bytes": 866542977024,
  "files_written_wal": 2,
  "files_written_sst": 3,
  "bytes_written_wal": 32424281934,
  "bytes_written_sst": 25382201
  }
  
- As we can see the log_compactions is 0, which means it's never compacted and the log file size(log_bytes) is already 800+G. After the compaction, the log file size would reduced to around 1 G
+ This bug could eventually cause an osd crash and a failure to restart, as it couldn't get through the bluefs replay phase during boot time.
+ We might see the below log when trying to restart the osd:
+ bluefs mount failed to replay log: (5) Input/output error
  
- Here is the PR[1] that addressed this bug, we need to backport this to ubuntu 12.2.13
- [1] https://github.com/ceph/ceph/pull/17354
+ As we can see, log_compactions is 0, which means the log is never compacted
+ and the log file size (log_bytes) is already 800+G. After compaction,
+ the log file size would be reduced to around 1G.
+ 
+ [Test Case]
+ 
+ Deploy a test ceph cluster (Luminous 12.2.13, which has the bug) and
+ drive I/O. The compaction doesn't get triggered often when most I/O is
+ reads. So fill up the cluster initially with lots of writes and then
+ switch to heavy reads (no writes). Then the problem should occur.
+ Smaller-sized OSDs are OK, as we're only interested in filling up the
+ OSD and growing the bluefs log.
+ 
+ [Where problems could occur]
+ 
+ This fix has been part of all upstream releases since Mimic, so there's been quite a good "runtime".
+ The changes ensure that compaction happens more often; I can't see that causing any real problems.
+ 
+ [Other Info]
+  - It's only needed for Luminous (Bionic). All newer releases already have this.
+  - Upstream PR: https://github.com/ceph/ceph/pull/17354
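The runaway state described in the counters quoted above can be spotted from an OSD's perf dump before it becomes fatal. A minimal sketch, assuming the JSON is fetched separately (e.g. via `ceph daemon osd.<id> perf dump`) and using an arbitrary illustrative 4 GiB cutoff, not an official limit:

```python
import json

# Counters inlined from this bug report for illustration; in practice
# this JSON would come from `ceph daemon osd.<id> perf dump`.
PERF_DUMP = json.loads("""
{
  "bluefs": {
    "log_bytes": 866545131520,
    "log_compactions": 0,
    "db_total_bytes": 888564350976,
    "db_used_bytes": 867311747072
  }
}
""")

def bluefs_log_runaway(perf, threshold_bytes=4 << 30):
    """Flag an OSD whose bluefs log has never been compacted and has
    grown past `threshold_bytes` (4 GiB here, an arbitrary cutoff)."""
    bluefs = perf.get("bluefs", {})
    return (bluefs.get("log_compactions", 0) == 0
            and bluefs.get("log_bytes", 0) > threshold_bytes)

print(bluefs_log_runaway(PERF_DUMP))  # -> True for the counters above
```

With the counters from this bug (log_compactions 0, log_bytes ~866 GB) the check trips; after a compaction log_compactions becomes non-zero and log_bytes drops back to around 1G.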


[Bug 1914911] Re: [SRU] bluefs doesn't compact log file

2021-02-11 Thread Ponnuvel Palaniyappan
Attaching a gitdiff based off branch: applied/12.2.13-0ubuntu0.18.04.6


** Attachment added: "ceph12.2.13-lp1914911.gitdiff"
   
https://bugs.launchpad.net/cloud-archive/+bug/1914911/+attachment/5462713/+files/ceph12.2.13-lp1914911.gitdiff


[Bug 1910264] Re: [plugin][ceph] include time-sync-status for ceph mon

2021-02-19 Thread Ponnuvel Palaniyappan
Verified that time-sync-status is generated.

Attached file shows sosreport and ceph time-sync-status output from a
test run.

** Attachment added: "time-sync-verification-focal.txt"
   
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1910264/+attachment/5465390/+files/time-sync-verification-focal.txt

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal


[Bug 1910264] Re: [plugin][ceph] include time-sync-status for ceph mon

2021-02-19 Thread Ponnuvel Palaniyappan
Verified that time-sync-status is generated.

Attached file shows sosreport and ceph time-sync-status output from a
test run on Groovy.


** Attachment added: "time-sync-verification-groovy.txt"
   
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1910264/+attachment/5465394/+files/time-sync-verification-groovy.txt

** Tags removed: verification-needed-groovy
** Tags added: verification-done-groovy


[Bug 1914911] Re: [SRU] bluefs doesn't compact log file

2021-02-21 Thread Ponnuvel Palaniyappan
** Tags added: seg


[Bug 1840043] Re: bcache: Performance degradation when querying priority_stats

2021-01-25 Thread Ponnuvel Palaniyappan
A question: do stat(2) calls (lstat, fstat, and the like) on priority_stats cause the same issue?
Context: https://github.com/sosreport/sos/pull/2384#discussion_r563975063
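For what it's worth, my assumption is that a plain stat(2) on a sysfs attribute returns inode metadata without invoking the attribute's show() callback, so only an actual read should trigger the expensive priority_stats computation; that is worth confirming on a bcache host. A small stdlib-only harness for comparing the two (the priority_stats path is a placeholder; any readable file exercises the harness itself):

```python
import os
import time

def probe(path, reads=5):
    """Time repeated stat(2) calls vs full reads of `path`.

    On a bcache host `path` would be something like
    /sys/fs/bcache/<set-uuid>/cache0/priority_stats (placeholder path);
    on other hosts any readable file demonstrates the harness.
    """
    t0 = time.perf_counter()
    for _ in range(reads):
        os.stat(path)                 # metadata only, no content generated
    stat_secs = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(reads):
        with open(path, "rb") as f:
            f.read()                  # on sysfs, this invokes show()
    read_secs = time.perf_counter() - t0
    return stat_secs, read_secs

if __name__ == "__main__":
    s, r = probe("/proc/self/status")
    print(f"stat: {s:.6f}s  read: {r:.6f}s")
```

If the assumption holds, the stat timing should stay flat on priority_stats while the read timing shows the degradation this bug describes.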


[Bug 1911900] Re: [SRU] Active scrub blocks upmap balancer

2021-01-25 Thread Ponnuvel Palaniyappan
** Also affects: ceph (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Groovy)
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Hirsute)
   Importance: Undecided
   Status: New


[Bug 1911900] Re: [SRU] Active scrub blocks upmap balancer

2021-01-25 Thread Ponnuvel Palaniyappan
** No longer affects: ceph (Ubuntu Hirsute)

** No longer affects: ceph (Ubuntu Groovy)

** No longer affects: ceph (Ubuntu Focal)


[Bug 1911900] Re: [SRU] Active scrub blocks upmap balancer

2021-01-25 Thread Ponnuvel Palaniyappan
** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New


[Bug 1913284] [NEW] [plugin] [bcache] add a new bcache plugin

2021-01-26 Thread Ponnuvel Palaniyappan
Public bug reported:

A new plugin for bcache stats is proposed upstream [0] and there's a PR
currently under review [1].


[0] https://github.com/sosreport/sos/issues/2378
[1] https://github.com/sosreport/sos/pull/2384
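As a rough sketch of what such a plugin might collect, here is a stdlib-only expansion of candidate copy specs. The paths below are assumptions based on the sysfs layout bcache exposes, not the actual contents of the upstream PR, and priority_stats is deliberately left out since reading it is expensive (see bug 1840043):

```python
import glob
import os

# Candidate copy specs for a bcache plugin -- assumed paths, the upstream
# PR may pick different ones.  /sys/.../priority_stats is intentionally
# omitted because reading it can degrade performance (bug 1840043).
BCACHE_COPY_SPECS = [
    "/sys/block/bcache*/bcache/cache_mode",
    "/sys/block/bcache*/bcache/stats_total/*",
    "/sys/fs/bcache/*/internal/*",
    "/sys/fs/bcache/*/stats_total/*",
]

def expand_copy_specs(specs):
    """Expand glob copy specs into the concrete regular files a
    sosreport-style plugin would archive (empty on hosts without bcache)."""
    found = []
    for spec in specs:
        found.extend(p for p in glob.glob(spec) if os.path.isfile(p))
    return sorted(found)

print(expand_copy_specs(BCACHE_COPY_SPECS))
```

In a real sos plugin the specs would go to `add_copy_spec()` in `setup()` rather than being expanded by hand; the sketch only shows which files end up in the report.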

** Affects: sosreport (Ubuntu)
 Importance: Undecided
 Status: New


[Bug 1911900] Re: [SRU] Active scrub blocks upmap balancer

2021-01-26 Thread Ponnuvel Palaniyappan
** Also affects: ceph (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Groovy)
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Bionic)
   Importance: Undecided
   Status: New

