[ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Saulo Silva
Hi,

After a power failure, all OSDs from a pool fail with the following
error:

 -5> 2019-06-20 13:32:58.886299 7f146bcb2d00  4 rocksdb:
[/home/abuild/rpmbuild/BUILD/ceph-12.2.12-573-g67074fa839/src/rocksdb/db/version_set.cc:2859]
Recovered from manifest file:db/MANIFEST-003373
succeeded,manifest_file_number is 3373, next_file_number is 3598,
last_sequence is 319489940, log_number is 0,prev_log_number is
0,max_column_family is 0

-4> 2019-06-20 13:32:58.886330 7f146bcb2d00  4 rocksdb:
[/home/abuild/rpmbuild/BUILD/ceph-12.2.12-573-g67074fa839/src/rocksdb/db/version_set.cc:2867]
Column family [default] (ID 0), log number is 3594

-3> 2019-06-20 13:32:58.886401 7f146bcb2d00  4 rocksdb: EVENT_LOG_v1
{"time_micros": 1561048378886391, "job": 1, "event": "recovery_started",
"log_files": [3592, 3594]}
-2> 2019-06-20 13:32:58.886407 7f146bcb2d00  4 rocksdb:
[/home/abuild/rpmbuild/BUILD/ceph-12.2.12-573-g67074fa839/src/rocksdb/db/db_impl_open.cc:482]
Recovering log #3592 mode 0
-1> 2019-06-20 13:33:06.629066 7f146bcb2d00  4 rocksdb:
[/home/abuild/rpmbuild/BUILD/ceph-12.2.12-573-g67074fa839/src/rocksdb/db/db_impl_open.cc:482]
Recovering log #3594 mode 0
 0> 2019-06-20 13:33:10.086512 7f146bcb2d00 -1
/home/abuild/rpmbuild/BUILD/ceph-12.2.12-573-g67074fa839/src/os/bluestore/BlueFS.cc:
In function 'int BlueFS::_read(BlueFS::FileReader*,
BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, char*)'
thread 7f146bcb2d00 time 2019-06-20 13:33:10.073021
/home/abuild/rpmbuild/BUILD/ceph-12.2.12-573-g67074fa839/src/os/bluestore/BlueFS.cc:
996: FAILED assert(r == 0)

All of the OSDs read only two logs and then return this error.
Is it possible to delete the rocksdb log and start the OSD again?

Best Regards,

Saulo Augusto Silva
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Igor Fedotov

Hi Saulo,

Looks like a disk I/O error.

Could you set debug_bluefs to 20 and collect the log, then share a few
lines prior to the assertion?


Checking smartctl output might be a good idea too.
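
For example, something along these lines should do it (a rough sketch; osd.6
and /dev/sdX are placeholders for the failing OSD and its data device):

# raise BlueFS logging for the failing OSD in /etc/ceph/ceph.conf
[osd.6]
debug bluefs = 20

# restart the OSD and look at the lines just before the assertion
systemctl restart ceph-osd@6
less /var/log/ceph/ceph-osd.6.log

# check the health of the underlying disk
smartctl -a /dev/sdX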

Thanks,

Igor

On 6/21/2019 11:30 AM, Saulo Silva wrote:

Hi,

After a power failure, all OSDs from a pool fail with the
following error:


 -5> 2019-06-20 13:32:58.886299 7f146bcb2d00  4 rocksdb: 
[/home/abuild/rpmbuild/BUILD/ceph-12.2.12-573-g67074fa839/src/rocksdb/db/version_set.cc:2859] 
Recovered from manifest file:db/MANIFEST-003373 
succeeded,manifest_file_number is 3373, next_file_number is 3598, 
last_sequence is 319489940, log_number is 0,prev_log_number is 
0,max_column_family is 0


    -4> 2019-06-20 13:32:58.886330 7f146bcb2d00  4 rocksdb: 
[/home/abuild/rpmbuild/BUILD/ceph-12.2.12-573-g67074fa839/src/rocksdb/db/version_set.cc:2867] 
Column family [default] (ID 0), log number is 3594


    -3> 2019-06-20 13:32:58.886401 7f146bcb2d00  4 rocksdb: 
EVENT_LOG_v1 {"time_micros": 1561048378886391, "job": 1, "event": 
"recovery_started", "log_files": [3592, 3594]}
    -2> 2019-06-20 13:32:58.886407 7f146bcb2d00  4 rocksdb: 
[/home/abuild/rpmbuild/BUILD/ceph-12.2.12-573-g67074fa839/src/rocksdb/db/db_impl_open.cc:482] 
Recovering log #3592 mode 0
    -1> 2019-06-20 13:33:06.629066 7f146bcb2d00  4 rocksdb: 
[/home/abuild/rpmbuild/BUILD/ceph-12.2.12-573-g67074fa839/src/rocksdb/db/db_impl_open.cc:482] 
Recovering log #3594 mode 0
     0> 2019-06-20 13:33:10.086512 7f146bcb2d00 -1 
/home/abuild/rpmbuild/BUILD/ceph-12.2.12-573-g67074fa839/src/os/bluestore/BlueFS.cc: 
In function 'int BlueFS::_read(BlueFS::FileReader*, 
BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, 
char*)' thread 7f146bcb2d00 time 2019-06-20 13:33:10.073021
/home/abuild/rpmbuild/BUILD/ceph-12.2.12-573-g67074fa839/src/os/bluestore/BlueFS.cc: 
996: FAILED assert(r == 0)


All of the OSDs read only two logs and then return this error.
Is it possible to delete the rocksdb log and start the OSD again?

Best Regards,

Saulo Augusto Silva

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm' safe?

2019-06-21 Thread Rudenko Aleksandr
Hi, folks.

I have Luminous 12.2.12. Auto-resharding is enabled.

In stale instances list I have:

# radosgw-admin reshard stale-instances list | grep clx
"clx:default.422998.196",

I have the same marker-id in bucket stats of this bucket:

# radosgw-admin bucket stats --bucket clx | grep marker
"marker": "default.422998.196",
"max_marker": 
"0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#,11#,12#,13#,14#,15#,16#,17#,18#,19#,20#,21#,22#,23#,24#,25#,26#,27#,28#,29#,30#,31#,32#,33#,34#,35#,36#,37#,38#,39#,40#,41#,42#,43#,44#,45#,46#,47#,48#,49#,50#,51#,52#",

I think this is not correct: the active marker (in bucket stats) should not 
match a marker in the stale instances list.

I need to run ‘radosgw-admin reshard stale-instances rm’ because I have a large 
OMAP warning, but I am not sure it is safe.

Is it safe to run: radosgw-admin reshard stale-instances rm ?
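
One way to cross-check before removing anything (a rough sketch using only the
commands above; 'clx' is the bucket in question):

# compare the bucket's current instance id with the stale entry
radosgw-admin bucket stats --bucket clx | grep -E '"id"|"marker"'
radosgw-admin reshard stale-instances list | grep clx

# if the id reported by bucket stats differs from the instance listed as
# stale, the stale entry points at an old, pre-reshard instance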


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-21 Thread Dan van der Ster
http://tracker.ceph.com/issues/40480

On Thu, Jun 20, 2019 at 9:12 PM Dan van der Ster  wrote:
>
> I will try to reproduce with logs and create a tracker once I find the
> smoking gun...
>
> It's very strange -- I had the osd mode set to 'passive', and pool
> option set to 'force', and the osd was compressing objects for around
> 15 minutes. Then suddenly it just stopped compressing, until I did
> 'ceph daemon osd.130 config set bluestore_compression_mode force',
> where it restarted immediately.
>
> FTR, it *should* compress with osd bluestore_compression_mode=none and
> the pool's compression_mode=force, right?
>
> -- dan
>
> -- Dan
>
> On Thu, Jun 20, 2019 at 6:57 PM Igor Fedotov  wrote:
> >
> > I'd like to see more details (preferably backed with logs) on this...
> >
> > On 6/20/2019 6:23 PM, Dan van der Ster wrote:
> > > P.S. I know this has been discussed before, but the
> > > compression_(mode|algorithm) pool options [1] seem completely broken
> > > -- With the pool mode set to force, we see that sometimes the
> > > compression is invoked and sometimes it isn't. AFAICT,
> > > the only way to compress every object is to set
> > > bluestore_compression_mode=force on the osd.
> > >
> > > -- dan
> > >
> > > [1] 
> > > http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
> > >
> > >
> > > On Thu, Jun 20, 2019 at 4:33 PM Dan van der Ster  
> > > wrote:
> > >> Hi all,
> > >>
> > >> I'm trying to compress an rbd pool via backfilling the existing data,
> > >> and the allocated space doesn't match what I expect.
> > >>
> > >> Here is the test: I marked osd.130 out and waited for it to erase all 
> > >> its data.
> > >> Then I set (on the pool) compression_mode=force and 
> > >> compression_algorithm=zstd.
> > >> Then I marked osd.130 to get its PGs/objects back (this time compressing 
> > >> them).
> > >>
> > >> After a few 10s of minutes we have:
> > >>  "bluestore_compressed": 989250439,
> > >>  "bluestore_compressed_allocated": 3859677184,
> > >>  "bluestore_compressed_original": 7719354368,
> > >>
> > >> So, the allocated is exactly 50% of original, but we are wasting space
> > >> because compressed is 12.8% of original.
> > >>
> > >> I don't understand why...
> > >>
> > >> The rbd images all use 4MB objects, and we use the default chunk and
> > >> blob sizes (in v13.2.6):
> > >> osd_recovery_max_chunk = 8MB
> > >> bluestore_compression_max_blob_size_hdd = 512kB
> > >> bluestore_compression_min_blob_size_hdd = 128kB
> > >> bluestore_max_blob_size_hdd = 512kB
> > >> bluestore_min_alloc_size_hdd = 64kB
> > >>
> > >>  From my understanding, backfilling should read a whole 4MB object from
> > >> the src osd, then write it to osd.130's bluestore, compressing in
> > >> 512kB blobs. Those compress on average at 12.8% so I would expect to
> > >> see allocated being closer to bluestore_min_alloc_size_hdd /
> > >> bluestore_compression_max_blob_size_hdd = 12.5%.
> > >>
> > >> Does someone understand where the 0.5 ratio is coming from?
> > >>
> > >> Thanks!
> > >>
> > >> Dan
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
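
For reference, the 50% and 12.8% figures above follow directly from the quoted
perf counters, which come from the OSD admin socket (a rough sketch; osd.130 as
in the message):

ceph daemon osd.130 perf dump | grep bluestore_compressed
# "bluestore_compressed_allocated" / "bluestore_compressed_original"
#   = 3859677184 / 7719354368 = 0.50    (space allocated on disk)
# "bluestore_compressed" / "bluestore_compressed_original"
#   = 989250439 / 7719354368 ~= 0.128   (how well the data itself compresses)
# Dan's expectation: bluestore_min_alloc_size_hdd / bluestore_compression_max_blob_size_hdd
#   = 64kB / 512kB = 0.125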


Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-21 Thread Igor Fedotov



On 6/20/2019 10:12 PM, Dan van der Ster wrote:

I will try to reproduce with logs and create a tracker once I find the
smoking gun...

It's very strange -- I had the osd mode set to 'passive', and pool
option set to 'force', and the osd was compressing objects for around
15 minutes. Then suddenly it just stopped compressing, until I did
'ceph daemon osd.130 config set bluestore_compression_mode force',
where it restarted immediately.

FTR, it *should* compress with osd bluestore_compression_mode=none and
the pool's compression_mode=force, right?
Right, but it looks like there is a bug: the OSD compression algorithm isn't 
applied when the OSD compression mode is set to none. Hence there is no 
compression if the pool lacks an explicit algorithm specification.
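
If the pool really has no explicit algorithm set, the direct workaround would
be to set one on the pool (a sketch; <pool> stands for the pool name):

ceph osd pool set <pool> compression_algorithm zstd
ceph osd pool set <pool> compression_mode force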


-- dan

-- Dan

On Thu, Jun 20, 2019 at 6:57 PM Igor Fedotov  wrote:

I'd like to see more details (preferably backed with logs) on this...

On 6/20/2019 6:23 PM, Dan van der Ster wrote:

P.S. I know this has been discussed before, but the
compression_(mode|algorithm) pool options [1] seem completely broken
-- With the pool mode set to force, we see that sometimes the
compression is invoked and sometimes it isn't. AFAICT,
the only way to compress every object is to set
bluestore_compression_mode=force on the osd.

-- dan

[1] http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values


On Thu, Jun 20, 2019 at 4:33 PM Dan van der Ster  wrote:

Hi all,

I'm trying to compress an rbd pool via backfilling the existing data,
and the allocated space doesn't match what I expect.

Here is the test: I marked osd.130 out and waited for it to erase all its data.
Then I set (on the pool) compression_mode=force and compression_algorithm=zstd.
Then I marked osd.130 to get its PGs/objects back (this time compressing them).

After a few 10s of minutes we have:
  "bluestore_compressed": 989250439,
  "bluestore_compressed_allocated": 3859677184,
  "bluestore_compressed_original": 7719354368,

So, the allocated is exactly 50% of original, but we are wasting space
because compressed is 12.8% of original.

I don't understand why...

The rbd images all use 4MB objects, and we use the default chunk and
blob sizes (in v13.2.6):
 osd_recovery_max_chunk = 8MB
 bluestore_compression_max_blob_size_hdd = 512kB
 bluestore_compression_min_blob_size_hdd = 128kB
 bluestore_max_blob_size_hdd = 512kB
 bluestore_min_alloc_size_hdd = 64kB

  From my understanding, backfilling should read a whole 4MB object from
the src osd, then write it to osd.130's bluestore, compressing in
512kB blobs. Those compress on average at 12.8% so I would expect to
see allocated being closer to bluestore_min_alloc_size_hdd /
bluestore_compression_max_blob_size_hdd = 12.5%.

Does someone understand where the 0.5 ratio is coming from?

Thanks!

Dan

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-06-21 Thread Frank Schilder
Dear Yan, Zheng,

does mimic 13.2.6 fix the snapshot issue? If not, could you please send me a 
link to the issue tracker?

Thanks and best regards,

=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14


From: Yan, Zheng 
Sent: 20 May 2019 13:34
To: Frank Schilder
Cc: Stefan Kooman; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS 
bug?)

On Sat, May 18, 2019 at 5:47 PM Frank Schilder  wrote:
>
> Dear Yan and Stefan,
>
> it happened again and there were only very few ops in the queue. I pulled the 
> ops list and the cache. Please find a zip file here: 
> "https://files.dtu.dk/u/w6nnVOsp51nRqedU/mds-stuck-dirfrag.zip?l"; . Its a bit 
> more than 100MB.
>

MDS cache dump shows there is a snapshot involved. Please avoid using
snapshots until we fix the bug.

Regards
Yan, Zheng
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS bug?)

2019-06-21 Thread Yan, Zheng
On Fri, Jun 21, 2019 at 6:10 PM Frank Schilder  wrote:
>
> Dear Yan, Zheng,
>
> does mimic 13.2.6 fix the snapshot issue? If not, could you please send me a 
> link to the issue tracker?
>
no

https://tracker.ceph.com/issues/39987


> Thanks and best regards,
>
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> 
> From: Yan, Zheng 
> Sent: 20 May 2019 13:34
> To: Frank Schilder
> Cc: Stefan Kooman; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] mimic: MDS standby-replay causing blocked ops (MDS 
> bug?)
>
> On Sat, May 18, 2019 at 5:47 PM Frank Schilder  wrote:
> >
> > Dear Yan and Stefan,
> >
> > it happened again and there were only very few ops in the queue. I pulled 
> > the ops list and the cache. Please find a zip file here: 
> > "https://files.dtu.dk/u/w6nnVOsp51nRqedU/mds-stuck-dirfrag.zip?l"; . Its a 
> > bit more than 100MB.
> >
>
> MDS cache dump shows there is a snapshot involved. Please avoid using
> snapshots until we fix the bug.
>
> Regards
> Yan, Zheng
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] understanding the bluestore blob, chunk and compression params

2019-06-21 Thread Igor Fedotov
Actually there are two issues here. The first one (fixed by #28688) is that 
OSD compression settings are not loaded when the OSD compression mode is none 
and the pool's mode isn't none.


Submitted https://github.com/ceph/ceph/pull/28688 to fix this part.

The second: the OSD doesn't see pool settings after a restart until some 
setting is (re)set.


Just made another ticket: http://tracker.ceph.com/issues/40483 
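
Until that one is fixed, a possible nudge after an OSD restart is simply to
(re)set one of the compression options so the OSD picks the pool settings up
again, either on the pool or per OSD as Dan did (a sketch; <pool> is a
placeholder):

ceph osd pool set <pool> compression_mode force
ceph daemon osd.130 config set bluestore_compression_mode force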




Thanks,

Igor


On 6/21/2019 12:44 PM, Dan van der Ster wrote:

http://tracker.ceph.com/issues/40480

On Thu, Jun 20, 2019 at 9:12 PM Dan van der Ster  wrote:

I will try to reproduce with logs and create a tracker once I find the
smoking gun...

It's very strange -- I had the osd mode set to 'passive', and pool
option set to 'force', and the osd was compressing objects for around
15 minutes. Then suddenly it just stopped compressing, until I did
'ceph daemon osd.130 config set bluestore_compression_mode force',
where it restarted immediately.

FTR, it *should* compress with osd bluestore_compression_mode=none and
the pool's compression_mode=force, right?

-- dan

-- Dan

On Thu, Jun 20, 2019 at 6:57 PM Igor Fedotov  wrote:

I'd like to see more details (preferably backed with logs) on this...

On 6/20/2019 6:23 PM, Dan van der Ster wrote:

P.S. I know this has been discussed before, but the
compression_(mode|algorithm) pool options [1] seem completely broken
-- With the pool mode set to force, we see that sometimes the
compression is invoked and sometimes it isn't. AFAICT,
the only way to compress every object is to set
bluestore_compression_mode=force on the osd.

-- dan

[1] http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values


On Thu, Jun 20, 2019 at 4:33 PM Dan van der Ster  wrote:

Hi all,

I'm trying to compress an rbd pool via backfilling the existing data,
and the allocated space doesn't match what I expect.

Here is the test: I marked osd.130 out and waited for it to erase all its data.
Then I set (on the pool) compression_mode=force and compression_algorithm=zstd.
Then I marked osd.130 to get its PGs/objects back (this time compressing them).

After a few 10s of minutes we have:
  "bluestore_compressed": 989250439,
  "bluestore_compressed_allocated": 3859677184,
  "bluestore_compressed_original": 7719354368,

So, the allocated is exactly 50% of original, but we are wasting space
because compressed is 12.8% of original.

I don't understand why...

The rbd images all use 4MB objects, and we use the default chunk and
blob sizes (in v13.2.6):
 osd_recovery_max_chunk = 8MB
 bluestore_compression_max_blob_size_hdd = 512kB
 bluestore_compression_min_blob_size_hdd = 128kB
 bluestore_max_blob_size_hdd = 512kB
 bluestore_min_alloc_size_hdd = 64kB

  From my understanding, backfilling should read a whole 4MB object from
the src osd, then write it to osd.130's bluestore, compressing in
512kB blobs. Those compress on average at 12.8% so I would expect to
see allocated being closer to bluestore_min_alloc_size_hdd /
bluestore_compression_max_blob_size_hdd = 12.5%.

Does someone understand where the 0.5 ratio is coming from?

Thanks!

Dan

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm' safe?

2019-06-21 Thread Konstantin Shalygin

Hi, folks.

I have Luminous 12.2.12. Auto-resharding is enabled.

In stale instances list I have:

# radosgw-admin reshard stale-instances list | grep clx
 "clx:default.422998.196",

I have the same marker-id in bucket stats of this bucket:

# radosgw-admin bucket stats --bucket clx | grep marker
 "marker": "default.422998.196",
 "max_marker": 
"0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#,11#,12#,13#,14#,15#,16#,17#,18#,19#,20#,21#,22#,23#,24#,25#,26#,27#,28#,29#,30#,31#,32#,33#,34#,35#,36#,37#,38#,39#,40#,41#,42#,43#,44#,45#,46#,47#,48#,49#,50#,51#,52#",

I think this is not correct: the active marker (in bucket stats) should not 
match a marker in the stale instances list.

I need to run ‘radosgw-admin reshard stale-instances rm’ because I have a large 
OMAP warning, but I am not sure it is safe.

Is it safe to run: radosgw-admin reshard stale-instances rm ?


Yes, these were left stale by dynamic resharding, mostly prior to 12.2.11. At least I 
have not seen new stale instances in my cluster.




k

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Binding library for ceph admin api in C#?

2019-06-21 Thread LuD j
Hello guys,

We are working on Rados Gateway automation. We saw that there are already
binding libraries in Go, Python, and Java for interacting with the Ceph admin
API.
Is there any existing binding in C# or PowerShell, or do we need to write it
ourselves?

Thank you in advance for your help.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] out of date python-rtslib repo on https://shaman.ceph.com/

2019-06-21 Thread Matthias Leopold




Am 20.06.19 um 07:19 schrieb Michael Christie:

On 06/17/2019 03:41 AM, Matthias Leopold wrote:

thank you very much for updating python-rtslib!!
could you maybe also do this for tcmu-runner (version 1.4.1)?


I am just about to make a new 1.5 release. Give me a week. I am working
on a last feature/bug for the gluster team, and then I am going to pass
the code to the gluster tcmu-runner devs for some review and testing.



thank you, I'm looking forward to this

matthias
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] problems after upgrade to 14.2.1

2019-06-21 Thread Brent Kennedy
After installing the package on each mgr server and restarting the service,
I disabled the module, then enabled the module with the force option (it
seems I cut that out of the output I pasted). It was essentially trial and
error. After doing this, check and make sure you can see the module as
enabled (ceph mgr services). You should see something in the output at
that point.

 

I also had to fiddle with the SSL bits.

 

-Brent

 

From: ST Wong (ITSC)  
Sent: Friday, June 21, 2019 12:08 AM
To: Brent Kennedy ; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] problems after upgrade to 14.2.1

 

Thanks.  I also didn't encounter the spillover issue on another cluster from
13.2.6 -> 14.2.1.  On that cluster, the dashboard also didn't work but
reconfiguring it similar to what you did worked.  Yes, nice new look. :)

 

I ran commands like yours, but it keeps prompting "all mgr daemons do not support
module 'dashboard', pass --force to force enablement".   Restarting the mgr
service didn't help.

 

/st wong

 

From: Brent Kennedy <bkenn...@cfl.rr.com>
Sent: Friday, June 21, 2019 11:57 AM
To: ST Wong (ITSC) <s...@itsc.cuhk.edu.hk>; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] problems after upgrade to 14.2.1

 

Not sure about the spillover stuff; it didn't happen to me when I upgraded from
Luminous to 14.2.1.  The dashboard thing did happen to me.  It seems you have
to disable the dashboard and then re-enable it after installing the separate
dashboard rpm.  Also, make sure to restart the mgr services on each node
before trying that and after the dashboard package install.  I didn't end up
using the SSL certificate bits.  Also, there is a code issue in 14.2.1
where you cannot log in (the login page just refreshes); the bug report
says it's fixed in 14.2.2.

 

Login page Bug Report: https://tracker.ceph.com/issues/40051   ( manual
fix:  https://github.com/ceph/ceph/pull/27942/files )  Make sure to change
the dashboard password after applying the fix.
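
For reference, the password change itself can be done with the dashboard
access-control command that also shows up in the history below (a sketch;
'admin' and the new password are placeholders):

ceph dashboard ac-user-set-password admin <new-password>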

 

The literal command history before I had it working again.  Love the new
look though!

2046  ceph mgr module enable dashboard

2047  ceph mgr module disable dashboard

2048  ceph config set mgr mgr/dashboard/ssl false

2049  ceph mgr module disable dashboard

2050  ceph mgr module enable dashboard

2051  ceph dashboard create-self-signed-cert

2052  ceph config set mgr mgr/dashboard/ssl true

2053  ceph mgr module disable dashboard

2054  ceph mgr module enable dashboard

2056  systemctl restart ceph-mgr.target

2057  ceph mgr module disable dashboard

2058  ceph mgr module enable dashboard

2059  ceph dashboard set-login-credentials 

 2060  systemctl restart ceph-mgr.target

2063  ceph mgr module disable dashboard

2064  ceph mgr module enable dashboard

2065  ceph dashboard ac-user-set-password

 

-Brent

 

 

From: ceph-users <ceph-users-boun...@lists.ceph.com> On Behalf Of ST Wong (ITSC)
Sent: Thursday, June 20, 2019 10:24 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] problems after upgrade to 14.2.1

 

Hi all,

 

We recently upgraded a testing cluster from 13.2.4 to 14.2.1.  We encountered
2 problems:

 

1.   Got a warning about BlueFS spillover, but the usage is low since it's a
testing cluster without much activity/data:

 

# ceph -s

  cluster:

id: cc795498-5d16-4b84-9584-1788d0458be9

health: HEALTH_WARN

BlueFS spillover detected on 8 OSD(s)

[snipped]

 

# ceph health detail

HEALTH_WARN BlueFS spillover detected on 8 OSD(s)

BLUEFS_SPILLOVER BlueFS spillover detected on 8 OSD(s)

 osd.0 spilled over 48 MiB metadata from 'db' device (17 MiB used of 500
MiB) to slow device

 osd.1 spilled over 41 MiB metadata from 'db' device (6.0 MiB used of
500 MiB) to slow device

 osd.2 spilled over 47 MiB metadata from 'db' device (17 MiB used of 500
MiB) to slow device

 osd.3 spilled over 48 MiB metadata from 'db' device (6.0 MiB used of
500 MiB) to slow device

 osd.4 spilled over 44 MiB metadata from 'db' device (19 MiB used of 500
MiB) to slow device

 osd.5 spilled over 45 MiB metadata from 'db' device (6.0 MiB used of
500 MiB) to slow device

 osd.6 spilled over 46 MiB metadata from 'db' device (14 MiB used of 500
MiB) to slow device

 osd.7 spilled over 43 MiB metadata from 'db' device (6.0 MiB used of
500 MiB) to slow device

 

Is this a bug in 14, like http://tracker.ceph.com/issues/38745 ?

 

 

 

2.   Dashboard configuration was lost and we are unable to reconfigure it
again.

 

The ceph-mgr-dashboard rpm is there, but we can't configure dashboard again:

 

--- cut here --

# ceph mgr module enable dashboard

Error ENOENT: all mgr daemons do not support module 'dashboard', pass
--force to force enablement

 

# ceph mgr module enable dashboard --force

# ceph mgr module ls

{

"enabled_modules": [

  

Re: [ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Saulo Silva
Hi Igor, thanks for helping.

Here is part of the log:

head ceph-osd.6.log -n80
2019-06-21 10:50:56.090891 7f462db84d00  0 set uid:gid to 167:167
(ceph:ceph)
2019-06-21 10:50:56.090910 7f462db84d00  0 ceph version
12.2.10-551-gbb089269ea (bb089269ea0c1272294c6b9777123ac81662b6d2) luminous
(stable), process ceph-osd, pid 18014
2019-06-21 10:50:56.116848 7f462db84d00  0 pidfile_write: ignore empty
--pid-file
2019-06-21 10:50:56.127199 7f462db84d00  0 load: jerasure load: lrc load:
isa
2019-06-21 10:50:56.435977 7f462db84d00 10 bluefs add_block_device bdev 1
path /var/lib/ceph/osd/ceph-6/block
2019-06-21 10:50:56.436187 7f462db84d00  1 bluefs add_block_device bdev 1
path /var/lib/ceph/osd/ceph-6/block size 99.9GiB
2019-06-21 10:50:56.436221 7f462db84d00  1 bluefs mount
2019-06-21 10:50:56.436225 7f462db84d00 10 bluefs _open_super
2019-06-21 10:50:56.436866 7f462db84d00 10 bluefs _open_super superblock 15
2019-06-21 10:50:56.436893 7f462db84d00 10 bluefs _open_super log_fnode
file(ino 1 size 0x10 mtime 0.00 bdev 0 allocated 50 extents
[1:0xc9bf0+10,1:0xc9bb0+40])
2019-06-21 10:50:56.436904 7f462db84d00 20 bluefs _init_alloc
2019-06-21 10:50:56.436913 7f462db84d00 10 bluefs _replay
2019-06-21 10:50:56.436919 7f462db84d00 10 bluefs _replay log_fnode
file(ino 1 size 0x10 mtime 0.00 bdev 0 allocated 50 extents
[1:0xc9bf0+10,1:0xc9bb0+40])
2019-06-21 10:50:56.436923 7f462db84d00 10 bluefs _read h 0x5632d7074b80
0x0~1000 from file(ino 1 size 0x10 mtime 0.00 bdev 0 allocated
50 extents [1:0xc9bf0+10,1:0xc9bb0+40])
2019-06-21 10:50:56.436928 7f462db84d00 20 bluefs _read fetching 0x0~10
of 1:0xc9bf0+10
2019-06-21 10:50:56.458689 7f462db84d00 20 bluefs _read left 0x10 len
0x1000
2019-06-21 10:50:56.458701 7f462db84d00 20 bluefs _read got 4096
2019-06-21 10:50:56.458704 7f462db84d00 20 bluefs _replay need 0x1000 more
bytes
2019-06-21 10:50:56.458705 7f462db84d00 10 bluefs _read h 0x5632d7074b80
0x1000~1000 from file(ino 1 size 0x10 mtime 0.00 bdev 0 allocated
50 extents [1:0xc9bf0+10,1:0xc9bb0+40])
2019-06-21 10:50:56.458710 7f462db84d00 20 bluefs _read left 0xff000 len
0x1000
2019-06-21 10:50:56.458711 7f462db84d00 20 bluefs _read got 4096
2019-06-21 10:50:56.458717 7f462db84d00 10 bluefs _replay 0x0: txn(seq 1
len 0x1c73 crc 0xfd2d3aef)
2019-06-21 10:50:56.458719 7f462db84d00 20 bluefs _replay 0x0:  op_init
2019-06-21 10:50:56.458721 7f462db84d00 20 bluefs _replay 0x0:
 op_alloc_add  1:0xbfd10~ffd0
2019-06-21 10:50:56.458735 7f462db84d00 20 bluefs _replay 0x0:
 op_file_update  file(ino 1012 size 0x41ef34e mtime 2019-05-22
19:13:16.240226 bdev 1 allocated 420 extents [1:0xceda0+420])
2019-06-21 10:50:56.458765 7f462db84d00 20 bluefs _replay 0x0:
 op_file_update  file(ino 167 size 0xf4da0c3 mtime 2019-05-23
13:11:26.154047 bdev 1 allocated f70 extents
[1:0xc0d90+10,1:0xc11a0+10,1:0xc11b0+10,1:0xc11c0+10,1:0xc11d0+10,1:0xc11e0+10,1:0xc11f0+10,1:0xc1200+10,1:0xc1210+10,1:0xc1220+10,1:0xc1230+10,1:0xc1240+10,1:0xc1250+10,1:0xc1260+10,1:0xc1270+10,1:0xc1280+10,1:0xc1290+10,1:0xc12a0+10,1:0xc12b0+10,1:0xc12c0+10,1:0xc12d0+10,1:0xc12e0+10,1:0xc12f0+10,1:0xc1300+10,1:0xc1310+10,1:0xc1320+10,1:0xc1330+10,1:0xc1340+10,1:0xc1350+10,1:0xc1360+10,1:0xc1370+10,1:0xc1380+10,1:0xc1390+10,1:0xc13a0+10,1:0xc13b0+10,1:0xc13c0+10,1:0xc13d0+10,1:0xc13e0+10,1:0xc13f0+10,1:0xc1400+10,1:0xc1410+10,1:0xc1420+10,1:0xc1430+10,1:0xc1440+10,1:0xc1450+10,1:0xc1460+10,1:0xc1470+10,1:0xc1480+10,1:0xc1490+10,1:0xc14a0+10,1:0xc14b0+10,1:0xc14c0+10,1:0xc14d0+10,1:0xc14e0+10,1:0xc14f0+10,1:0xc1500+10,1:0xc1510+10,1:0xc1520+10,1:0xc1530+10,1:0xc1540+10,1:0xc1550+10,1:0xc1560+10,1:0xc1570+10,1:0xc1580+10,1:0xc1590+10,1:0xc15a0+10,1:0xc15b0+10,1:0xc15c0+10,1:0xc15d0+10,1:0xc15e0+10,1:0xc15f0+10,1:0xc1600+10,1:0xc1610+10,1:0xc1620+10,1:0xc1630+10,1:0xc1640+10,1:0xc1650+10,1:0xc1660+10,1:0xc1670+10,1:0xc1680+10,1:0xc1690+10,1:0xc16a0+10,1:0xc16b0+10,1:0xc16c0+10,1:0xc16d0+10,1:0xc16e0+10,1:0xc16f0+10,1:0xc1700+10,1:0xc1710+10,1:0xc1720+10,1:0xc1730+10,1:0xc1740+10,1:0xc1750+10,1:0xc1760+10,1:0xc1770+10,1:0xc1780+10,1:0xc1790+10,1:0xc17a0+10,

Re: [ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Igor Fedotov

Saulo,

please share a few log lines immediately before the assertion, not the 
starting ones.



Thanks,

Igor

On 6/21/2019 5:37 PM, Saulo Silva wrote:

Hi Igor, thanks for helping.

Here is part of the log:

head ceph-osd.6.log -n80
2019-06-21 10:50:56.090891 7f462db84d00  0 set uid:gid to 167:167 
(ceph:ceph)
2019-06-21 10:50:56.090910 7f462db84d00  0 ceph version 
12.2.10-551-gbb089269ea (bb089269ea0c1272294c6b9777123ac81662b6d2) 
luminous (stable), process ceph-osd, pid 18014
2019-06-21 10:50:56.116848 7f462db84d00  0 pidfile_write: ignore empty 
--pid-file
2019-06-21 10:50:56.127199 7f462db84d00  0 load: jerasure load: lrc 
load: isa
2019-06-21 10:50:56.435977 7f462db84d00 10 bluefs add_block_device 
bdev 1 path /var/lib/ceph/osd/ceph-6/block
2019-06-21 10:50:56.436187 7f462db84d00  1 bluefs add_block_device 
bdev 1 path /var/lib/ceph/osd/ceph-6/block size 99.9GiB

2019-06-21 10:50:56.436221 7f462db84d00  1 bluefs mount
2019-06-21 10:50:56.436225 7f462db84d00 10 bluefs _open_super
2019-06-21 10:50:56.436866 7f462db84d00 10 bluefs _open_super 
superblock 15
2019-06-21 10:50:56.436893 7f462db84d00 10 bluefs _open_super 
log_fnode file(ino 1 size 0x10 mtime 0.00 bdev 0 allocated 
50 extents [1:0xc9bf0+10,1:0xc9bb0+40])

2019-06-21 10:50:56.436904 7f462db84d00 20 bluefs _init_alloc
2019-06-21 10:50:56.436913 7f462db84d00 10 bluefs _replay
2019-06-21 10:50:56.436919 7f462db84d00 10 bluefs _replay log_fnode 
file(ino 1 size 0x10 mtime 0.00 bdev 0 allocated 50 
extents [1:0xc9bf0+10,1:0xc9bb0+40])
2019-06-21 10:50:56.436923 7f462db84d00 10 bluefs _read h 
0x5632d7074b80 0x0~1000 from file(ino 1 size 0x10 mtime 0.00 
bdev 0 allocated 50 extents 
[1:0xc9bf0+10,1:0xc9bb0+40])
2019-06-21 10:50:56.436928 7f462db84d00 20 bluefs _read fetching 
0x0~10 of 1:0xc9bf0+10
2019-06-21 10:50:56.458689 7f462db84d00 20 bluefs _read left 0x10 
len 0x1000

2019-06-21 10:50:56.458701 7f462db84d00 20 bluefs _read got 4096
2019-06-21 10:50:56.458704 7f462db84d00 20 bluefs _replay need 0x1000 
more bytes
2019-06-21 10:50:56.458705 7f462db84d00 10 bluefs _read h 
0x5632d7074b80 0x1000~1000 from file(ino 1 size 0x10 mtime 
0.00 bdev 0 allocated 50 extents 
[1:0xc9bf0+10,1:0xc9bb0+40])
2019-06-21 10:50:56.458710 7f462db84d00 20 bluefs _read left 0xff000 
len 0x1000

2019-06-21 10:50:56.458711 7f462db84d00 20 bluefs _read got 4096
2019-06-21 10:50:56.458717 7f462db84d00 10 bluefs _replay 0x0: txn(seq 
1 len 0x1c73 crc 0xfd2d3aef)

2019-06-21 10:50:56.458719 7f462db84d00 20 bluefs _replay 0x0:  op_init
2019-06-21 10:50:56.458721 7f462db84d00 20 bluefs _replay 0x0: 
 op_alloc_add  1:0xbfd10~ffd0
2019-06-21 10:50:56.458735 7f462db84d00 20 bluefs _replay 0x0: 
 op_file_update  file(ino 1012 size 0x41ef34e mtime 2019-05-22 
19:13:16.240226 bdev 1 allocated 420 extents [1:0xceda0+420])
2019-06-21 10:50:56.458765 7f462db84d00 20 bluefs _replay 0x0: 
 op_file_update  file(ino 167 size 0xf4da0c3 mtime 2019-05-23 
13:11:26.154047 bdev 1 allocated f70 extents 
[1:0xc0d90+10,1:0xc11a0+10,1:0xc11b0+10,1:0xc11c0+10,1:0xc11d0+10,1:0xc11e0+10,1:0xc11f0+10,1:0xc1200+10,1:0xc1210+10,1:0xc1220+10,1:0xc1230+10,1:0xc1240+10,1:0xc1250+10,1:0xc1260+10,1:0xc1270+10,1:0xc1280+10,1:0xc1290+10,1:0xc12a0+10,1:0xc12b0+10,1:0xc12c0+10,1:0xc12d0+10,1:0xc12e0+10,1:0xc12f0+10,1:0xc1300+10,1:0xc1310+10,1:0xc1320+10,1:0xc1330+10,1:0xc1340+10,1:0xc1350+10,1:0xc1360+10,1:0xc1370+10,1:0xc1380+10,1:0xc1390+10,1:0xc13a0+10,1:0xc13b0+10,1:0xc13c0+10,1:0xc13d0+10,1:0xc13e0+10,1:0xc13f0+10,1:0xc1400+10,1:0xc1410+10,1:0xc1420+10,1:0xc1430+10,1:0xc1440+10,1:0xc1450+10,1:0xc1460+10,1:0xc1470+10,1:0xc1480+10,1:0xc1490+10,1:0xc14a0+10,1:0xc14b0+10,1:0xc14c0+10,1:0xc14d0+10,1:0xc14e0+10,1:0xc14f0+10,1:0xc1500+10,1:0xc1510+10,1:0xc1520+10,1:0xc1530+10,1:0xc1540+10,1:0xc1550+10,1:0xc1560+10,1:0xc1570+10,1:0xc1580+10,1:0xc1590+10,1:0xc15a0+10,1:0xc15b0+10,1:0xc15c0+10,1:0xc15d0+10,1:0xc15e0+10,1:0xc15f0+10,1:0xc1600+10,1:0xc1610+10,1:0xc1620+10,1:0xc1630+10,1:0xc1640+10,1:0xc1650+10,1:0xc1660+10,1:0xc1670+10,1:0xc1680+10,1:0xc1690+10,1:0xc16a0+10,1:0xc16b0+10,1:0xc16c0+10,1:0xc16d0+10,1:0xc16e0+10,1:0xc16f0+10,1:0xc1700+10,1:0xc1710+10

Re: [ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Saulo Silva
Hi Igor ,

I am looking at the log, and I am not sure exactly which lines I
should send.
I tried to
tail -f /var/log/ceph/ceph-osd.6.log | grep -i assertion -A 5

But no valid result was returned.
What would be the regex to get this specific line?
I could also send the entire log.
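
For example, grepping the whole log for the assertion with some surrounding
context should work (a sketch; the path is the log mentioned above):

grep -n -B 20 -A 10 'FAILED assert' /var/log/ceph/ceph-osd.6.log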

Best Regards,

Saulo Augusto Silva

On Fri, 21 Jun 2019 at 11:39, Igor Fedotov wrote:

> Saulo,
>
> please share a few log lines immediately before the assertion, not the
> starting ones.
>
>
> Thanks,
>
> Igor
> On 6/21/2019 5:37 PM, Saulo Silva wrote:
>
> Hi Igor, thanks for helping.
>
> Here is part of the log:
>
> head ceph-osd.6.log -n80
> 2019-06-21 10:50:56.090891 7f462db84d00  0 set uid:gid to 167:167
> (ceph:ceph)
> 2019-06-21 10:50:56.090910 7f462db84d00  0 ceph version
> 12.2.10-551-gbb089269ea (bb089269ea0c1272294c6b9777123ac81662b6d2) luminous
> (stable), process ceph-osd, pid 18014
> 2019-06-21 10:50:56.116848 7f462db84d00  0 pidfile_write: ignore empty
> --pid-file
> 2019-06-21 10:50:56.127199 7f462db84d00  0 load: jerasure load: lrc load:
> isa
> 2019-06-21 10:50:56.435977 7f462db84d00 10 bluefs add_block_device bdev 1
> path /var/lib/ceph/osd/ceph-6/block
> 2019-06-21 10:50:56.436187 7f462db84d00  1 bluefs add_block_device bdev 1
> path /var/lib/ceph/osd/ceph-6/block size 99.9GiB
> 2019-06-21 10:50:56.436221 7f462db84d00  1 bluefs mount
> 2019-06-21 10:50:56.436225 7f462db84d00 10 bluefs _open_super
> 2019-06-21 10:50:56.436866 7f462db84d00 10 bluefs _open_super superblock 15
> 2019-06-21 10:50:56.436893 7f462db84d00 10 bluefs _open_super log_fnode
> file(ino 1 size 0x10 mtime 0.00 bdev 0 allocated 50 extents
> [1:0xc9bf0+10,1:0xc9bb0+40])
> 2019-06-21 10:50:56.436904 7f462db84d00 20 bluefs _init_alloc
> 2019-06-21 10:50:56.436913 7f462db84d00 10 bluefs _replay
> 2019-06-21 10:50:56.436919 7f462db84d00 10 bluefs _replay log_fnode
> file(ino 1 size 0x10 mtime 0.00 bdev 0 allocated 50 extents
> [1:0xc9bf0+10,1:0xc9bb0+40])
> 2019-06-21 10:50:56.436923 7f462db84d00 10 bluefs _read h 0x5632d7074b80
> 0x0~1000 from file(ino 1 size 0x10 mtime 0.00 bdev 0 allocated
> 50 extents [1:0xc9bf0+10,1:0xc9bb0+40])
> 2019-06-21 10:50:56.436928 7f462db84d00 20 bluefs _read fetching
> 0x0~10 of 1:0xc9bf0+10
> 2019-06-21 10:50:56.458689 7f462db84d00 20 bluefs _read left 0x10 len
> 0x1000
> 2019-06-21 10:50:56.458701 7f462db84d00 20 bluefs _read got 4096
> 2019-06-21 10:50:56.458704 7f462db84d00 20 bluefs _replay need 0x1000 more
> bytes
> 2019-06-21 10:50:56.458705 7f462db84d00 10 bluefs _read h 0x5632d7074b80
> 0x1000~1000 from file(ino 1 size 0x10 mtime 0.00 bdev 0 allocated
> 50 extents [1:0xc9bf0+10,1:0xc9bb0+40])
> 2019-06-21 10:50:56.458710 7f462db84d00 20 bluefs _read left 0xff000 len
> 0x1000
> 2019-06-21 10:50:56.458711 7f462db84d00 20 bluefs _read got 4096
> 2019-06-21 10:50:56.458717 7f462db84d00 10 bluefs _replay 0x0: txn(seq 1
> len 0x1c73 crc 0xfd2d3aef)
> 2019-06-21 10:50:56.458719 7f462db84d00 20 bluefs _replay 0x0:  op_init
> 2019-06-21 10:50:56.458721 7f462db84d00 20 bluefs _replay 0x0:
>  op_alloc_add  1:0xbfd10~ffd0
> 2019-06-21 10:50:56.458735 7f462db84d00 20 bluefs _replay 0x0:
>  op_file_update  file(ino 1012 size 0x41ef34e mtime 2019-05-22
> 19:13:16.240226 bdev 1 allocated 420 extents [1:0xceda0+420])
> 2019-06-21 10:50:56.458765 7f462db84d00 20 bluefs _replay 0x0:
>  op_file_update  file(ino 167 size 0xf4da0c3 mtime 2019-05-23
> 13:11:26.154047 bdev 1 allocated f70 extents
> [1:0xc0d90+10,1:0xc11a0+10,1:0xc11b0+10,1:0xc11c0+10,1:0xc11d0+10,1:0xc11e0+10,1:0xc11f0+10,1:0xc1200+10,1:0xc1210+10,1:0xc1220+10,1:0xc1230+10,1:0xc1240+10,1:0xc1250+10,1:0xc1260+10,1:0xc1270+10,1:0xc1280+10,1:0xc1290+10,1:0xc12a0+10,1:0xc12b0+10,1:0xc12c0+10,1:0xc12d0+10,1:0xc12e0+10,1:0xc12f0+10,1:0xc1300+10,1:0xc1310+10,1:0xc1320+10,1:0xc1330+10,1:0xc1340+10,1:0xc1350+10,1:0xc1360+10,1:0xc1370+10,1:0xc1380+10,1:0xc1390+10,1:0xc13a0+10,1:0xc13b0+10,1:0xc13c0+10,1:0xc13d0+10,1:0xc13e0+10,1:0xc13f0+10,1:0xc1400+10,1:0xc1410+10,1:0xc1420+10,1:0xc1430+10,1:0xc1440+10,1:0xc1450+10,1:0xc1460+10,1:0xc1470+10,1:0xc1480+10,1:0xc1490+10,1:0xc14a0+10,1:0xc14b0+10,1:0xc14c0+10,1:0xc14d0+10,1:0xc14e0+10,1:0xc14f0+10,1:0xc1500+10,1:0xc1510+10,1:0xc1520+10,1:0xc1530+10,1:0xc1540+10,1:0xc1550+10,1:0xc1560+10,1:0xc1570+10,1:0xc1580+10,1:0xc1590+10,1:0xc15a0+10,1:0x

Re: [ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Saulo Silva
Hi Igor,

Here is the assert line and the 10 lines that go before and after it.

-4> 2019-06-21 10:54:45.161493 7f689291ed00 20 bluefs _read left 0x8000
len 0x8000
-3> 2019-06-21 10:54:45.161497 7f689291ed00 20 bluefs _read got 32768
-2> 2019-06-21 10:54:45.161533 7f689291ed00 10 bluefs _read h
0x55bab9dbfc00 0x6d0~8000 from file(ino 165 size 0xf5775f4 mtime
2019-06-19 02:49:24.017808 bdev 1 allocated f70 extents
[1:0xbfeb0+10,1:0xbfec0+10,1:0xbfed0+10,1:0xbfee0+10,1:0xbfef0+10,1:0xbff00+10,1:0xbff10+10,1:0xbff20+10,1:0xbff30+10,1:0xbff40+10,1:0xbff50+10,1:0xbff60+10,1:0xbff70+10,1:0xbff80+10,1:0xbff90+10,1:0xbffa0+10,1:0xbffb0+10,1:0xbffc0+10,1:0xbffd0+10,1:0xbffe0+10,1:0xbfff0+10,1:0xc+10,1:0xc0010+10,1:0xc0020+10,1:0xc0030+10,1:0xc0040+10,1:0xc0050+10,1:0xc0060+10,1:0xc0070+10,1:0xc0080+10,1:0xc0090+10,1:0xc00a0+10,1:0xc00b0+10,1:0xc00c0+10,1:0xc00d0+10,1:0xc00e0+10,1:0xc00f0+10,1:0xc0100+10,1:0xc0110+10,1:0xc0120+10,1:0xc0130+10,1:0xc0140+10,1:0xc0150+10,1:0xc0160+10,1:0xc0170+10,1:0xc0180+10,1:0xc0190+10,1:0xc01a0+10,1:0xc01b0+10,1:0xc01c0+10,1:0xc01d0+10,1:0xc01e0+10,1:0xc01f0+10,1:0xc0200+10,1:0xc0210+10,1:0xc0220+10,1:0xc0230+10,1:0xc0240+10,1:0xc0250+10,1:0xc0260+10,1:0xc0270+10,1:0xc0280+10,1:0xc0290+10,1:0xc02a0+10,1:0xc02b0+10,1:0xc02c0+10,1:0xc02d0+10,1:0xc02e0+10,1:0xc02f0+10,1:0xc0300+10,1:0xc0310+10,1:0xc0320+10,1:0xc0330+10,1:0xc0340+10,1:0xc0350+10,1:0xc0360+10,1:0xc0370+10,1:0xc0380+10,1:0xc0390+10,1:0xc03a0+10,1:0xc03b0+10,1:0xc03c0+10,1:0xc03d0+10,1:0xc03e0+10,1:0xc03f0+10,1:0xc0400+10,1:0xc0410+10,1:0xc0420+10,1:0xc0430+10,1:0xc0440+10,1:0xc0450+10,1:0xc0460+10,1:0xc0470+10,1:0xc0480+10,1:0xc0490+10,1:0xc04a0+10,1:0xc04b0+10,1:0xc04c0+10,1:0xc04d0+10,1:0xc04e0+10,1:0xc04f0+10,1:0xc0500+10,1:0xc0510+10,1:0xc0520+10,1:0xc0530+10,1:0xc0540+10,1:0xc0550+10,1:0xc0560+10,1:0xc0570+10,1:0xc0580+10,1:0xc0590+10,1:0xc05a0+10,1:0xc05b0+10,1:0xc05c0+10,1:0xc05d0+10,1:0xc05e0+10,1:0xc05f0+10,1:0xc0600+10,1:0xc0610+10,1:0xc0620+10,1:0xc0630+10,1:0xc0640+10,1:0xc0650+10,1:0xc0660+10,1:0xc0670+10,1:0xc0680+10,1:0xc0690+10,1:0xc06a0+10,1:0xc06b0+10,1:0xc06c0+10,1:0xc06d0+10,1:0xc06e0+10,1:0xc06f0+10,1:0xc0700+10,1:0xc0710+10,1:0xc0720+10,1:0xc0730+10,1:0xc0740+10,1:0xc0750+10,1:0xc0760+10,1:0xc0770+10,1:0xc0780+10,1:0xc0790+10,1:0xc07a0+10,1:0xc07b0+10,1:0xc07c0+10,1:0xc07d0+10,1:0xc07e0+10,1:0xc07f0+10,1:0xc0800+10,1:0xc0810+10,1:0xc0820+10,1:0xc0830+10,1:0xc0840+10,1:0xc0850+10,1:0xc0860+10,1:0xc0870+10,1:0xc0880+10,1:0xc0890+10,1:0xc08a0+10,1:0xc08b0+10,1:0xc08c0+10,1:0xc08d0+10,1:0xc08e0+10,1:0xc08f0+10,1:0xc0900+10,1:0xc0910+10,1:0xc0920+10,1:0xc0930+10,1:0xc0940+10,1:0xc0950+10,1:0xc0960+10,1:0xc0970+10,1:0xc0980+10,1:0xc0990+10,1:0xc09a0+10,1:0xc09b0+10,1:0xc09c0+10,1:0xc09d0+10,1:0xc09e0+10,1:0xc09f0+10,1:0xc0a00+10,1:0xc0a10+10,1:0xc0a20+10,1:0xc0a30+10,1:0xc0a40+10,1:0xc0a50+10,1:0xc0a60+10,1:0xc0a70+10,1:0xc0a80+10,1:0xc0a90+10,1:0xc0aa0+10,1:0xc0ab0+10,1:0xc0ac0+10,1:0xc0ad0+10,1:0xc0ae0+10,1:0xc0af0+10,1:0xc0b00+10,1:0xc0b10+10,1:0xc0b20+10,1:0xc0b30+10,1:0xc0b40+10,1:0xc0b50+10,1:0xc0b60+10,1:0xc0b70+10,1:0xc0b80+10,1:0xc0b90+10,1:0xc0ba0+10,1:0xc0bb0+10,1:0xc0bc0+10,1:0xc0bd0+10,1:0xc0be0+10,1:0xc0bf0+10,1:0xc0c00+10,1:0xc0c10+10,1:0xc0c20+10,1:0xc0c30+10,1:0xc0c

[ceph-users] Cannot delete bucket

2019-06-21 Thread Sergei Genchev
 Hello,
Trying to delete a bucket using radosgw-admin, and failing. The bucket has
50K objects, but all of them are large. This is what I get:
$ radosgw-admin bucket rm --bucket=di-omt-mapupdate --purge-objects --bypass-gc
2019-06-21 17:09:12.424 7f53f621f700  0 WARNING : aborted 1000
incomplete multipart uploads
2019-06-21 17:09:19.966 7f53f621f700  0 WARNING : aborted 2000
incomplete multipart uploads
2019-06-21 17:09:26.819 7f53f621f700  0 WARNING : aborted 3000
incomplete multipart uploads
2019-06-21 17:09:33.430 7f53f621f700  0 WARNING : aborted 4000
incomplete multipart uploads
2019-06-21 17:09:40.304 7f53f621f700  0 WARNING : aborted 5000
incomplete multipart uploads

Looks like it is trying to delete objects 1000 at a time, as it
should, but failing. Bucket stats do not change.
 radosgw-admin bucket stats --bucket=di-omt-mapupdate |jq .usage
{
  "rgw.main": {
"size": 521929247648,
"size_actual": 521930674176,
"size_utilized": 400701129125,
"size_kb": 509696531,
"size_kb_actual": 509697924,
"size_kb_utilized": 391309697,
"num_objects": 50004
  },
  "rgw.multimeta": {
"size": 0,
"size_actual": 0,
"size_utilized": 0,
"size_kb": 0,
"size_kb_actual": 0,
"size_kb_utilized": 0,
"num_objects": 32099
  }
}
How can I get this bucket deleted?
Thanks!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSD bluestore initialization failed

2019-06-21 Thread Saulo Silva
After reading a lot of documentation, I started trying to recover my data by doing
PG export and import to try to get my pool up, but every command I ran
resulted in an error. I started by listing all PGs in order to create a simple
import/export script, but I cannot even list the PGs.

ceph-objectstore-tool --bluestore --op list-pgs --data-path
/var/lib/ceph/osd/ceph-6
2019-06-21 20:20:55.054762 7fd42d2dc540 -1 Errors while parsing config file!
2019-06-21 20:20:55.054771 7fd42d2dc540 -1 unexpected character at char 236
of line 39
/home/abuild/rpmbuild/BUILD/ceph-12.2.10-551-gbb089269ea/src/os/bluestore/BlueFS.cc:
In function 'int BlueFS::_read(BlueFS::FileReader*,
BlueFS::FileReaderBuffer*, uint64_t, size_t, ceph::bufferlist*, char*)'
thread 7fd42d2dc540 time 2019-06-21 20:21:04.136337
/home/abuild/rpmbuild/BUILD/ceph-12.2.10-551-gbb089269ea/src/os/bluestore/BlueFS.cc:
976: FAILED assert(r == 0)
 ceph version 12.2.10-551-gbb089269ea
(bb089269ea0c1272294c6b9777123ac81662b6d2) luminous (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x10e) [0x7fd4239c6a9e]
 2: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned
long, unsigned long, ceph::buffer::list*, char*)+0xcb4) [0x55c9beeceef4]
 3: (BlueRocksSequentialFile::Read(unsigned long, rocksdb::Slice*,
char*)+0x36) [0x55c9beef1dc6]
 4: (rocksdb::SequentialFileReader::Read(unsigned long, rocksdb::Slice*,
char*)+0x17c) [0x55c9befd7b5c]
 5: (rocksdb::log::Reader::ReadMore(unsigned long*, int*)+0xc8)
[0x55c9bef4fdf8]
 6: (rocksdb::log::Reader::ReadPhysicalRecord(rocksdb::Slice*, unsigned
long*)+0x34) [0x55c9bef4fec4]
 7: (rocksdb::log::Reader::ReadRecord(rocksdb::Slice*, std::string*,
rocksdb::WALRecoveryMode)+0xe8) [0x55c9bef50168]
 8: (rocksdb::DBImpl::RecoverLogFiles(std::vector > const&, unsigned long*, bool)+0xee8)
[0x55c9bef41948]
 9: (rocksdb::DBImpl::Recover(std::vector > const&, bool, bool,
bool)+0x7c6) [0x55c9bef43296]
 10: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::string const&,
std::vector > const&,
std::vector >*, rocksdb::DB**)+0xeb3)
[0x55c9bef44523]
 11: (rocksdb::DB::Open(rocksdb::Options const&, std::string const&,
rocksdb::DB**)+0x177) [0x55c9bef45757]
 12: (RocksDBStore::do_open(std::ostream&, bool)+0x93b) [0x55c9bee85bfb]
 13: (BlueStore::_open_db(bool)+0xd53) [0x55c9bee12273]
 14: (BlueStore::_mount(bool)+0x3ea) [0x55c9bee436da]
 15: (main()+0x2b96) [0x55c9be8daaa6]
 16: (__libc_start_main()+0xf5) [0x7fd4223ab725]
 17: (_start()+0x29) [0x55c9be9770f9]
 NOTE: a copy of the executable, or `objdump -rdS ` is needed
to interpret this.
*** Caught signal (Aborted) **
 in thread 7fd42d2dc540 thread_name:ceph-objectstor
 ceph version 12.2.10-551-gbb089269ea
(bb089269ea0c1272294c6b9777123ac81662b6d2) luminous (stable)
 1: (()+0x915959) [0x55c9bef18959]
 2: (()+0x10c10) [0x7fd4231e7c10]
 3: (gsignal()+0x37) [0x7fd4223bff67]
 4: (abort()+0x13a) [0x7fd4223c133a]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x280) [0x7fd4239c6c10]
 6: (BlueFS::_read(BlueFS::FileReader*, BlueFS::FileReaderBuffer*, unsigned
long, unsigned long, ceph::buffer::list*, char*)+0xcb4) [0x55c9beeceef4]
 7: (BlueRocksSequentialFile::Read(unsigned long, rocksdb::Slice*,
char*)+0x36) [0x55c9beef1dc6]
 8: (rocksdb::SequentialFileReader::Read(unsigned long, rocksdb::Slice*,
char*)+0x17c) [0x55c9befd7b5c]
 9: (rocksdb::log::Reader::ReadMore(unsigned long*, int*)+0xc8)
[0x55c9bef4fdf8]
 10: (rocksdb::log::Reader::ReadPhysicalRecord(rocksdb::Slice*, unsigned
long*)+0x34) [0x55c9bef4fec4]
 11: (rocksdb::log::Reader::ReadRecord(rocksdb::Slice*, std::string*,
rocksdb::WALRecoveryMode)+0xe8) [0x55c9bef50168]
 12: (rocksdb::DBImpl::RecoverLogFiles(std::vector > const&, unsigned long*, bool)+0xee8)
[0x55c9bef41948]
 13: (rocksdb::DBImpl::Recover(std::vector > const&, bool, bool,
bool)+0x7c6) [0x55c9bef43296]
 14: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::string const&,
std::vector > const&,
std::vector >*, rocksdb::DB**)+0xeb3)
[0x55c9bef44523]
 15: (rocksdb::DB::Open(rocksdb::Options const&, std::string const&,
rocksdb::DB**)+0x177) [0x55c9bef45757]
 16: (RocksDBStore::do_open(std::ostream&, bool)+0x93b) [0x55c9bee85bfb]
 17: (BlueStore::_open_db(bool)+0xd53) [0x55c9bee12273]
 18: (BlueStore::_mount(bool)+0x3ea) [0x55c9bee436da]
 19: (main()+0x2b96) [0x55c9be8daaa6]
 20: (__libc_start_main()+0xf5) [0x7fd4223ab725]
 21: (_start()+0x29) [0x55c9be9770f9]
Aborted (core dumped)

On Fri, 21 Jun 2019 at 13:18, Saulo Silva wrote:

> Hi Igor,
>
> Here is the assert line and the 10 lines that go before and after it.
>
> -4> 2019-06-21 10:54:45.161493 7f689291ed00 20 bluefs _read left
> 0x8000 len 0x8000
> -3> 2019-06-21 10:54:45.161497 7f689291ed00 20 bluefs _read got 32768
> -2> 2019-06-21 10:54:45.161533 7f689291ed00 10 bluefs _read h
> 0x55bab9dbfc00 0x6d0~8000 from file(ino 165 size 0xf5775f4 mtime
> 2019-06-19 02:49:24.017808 bdev 1 allocated f70 extents
> [1:0xbfeb0+10,1:0xbfec0