[ceph-users] monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]

2021-04-22 Thread Cem Zafer
Hi,
I recently added a new host to ceph and copied the /etc/ceph directory to
the new host. When I execute a simple ceph command such as "ceph -s", I get
the following error.

2021-04-22T14:50:46.226+0300 7ff541141700 -1 monclient(hunting):
handle_auth_bad_method server allowed_methods [2] but i only support [2]
2021-04-22T14:50:46.226+0300 7ff540940700 -1 monclient(hunting):
handle_auth_bad_method server allowed_methods [2] but i only support [2]
2021-04-22T14:50:46.226+0300 7ff533fff700 -1 monclient(hunting):
handle_auth_bad_method server allowed_methods [2] but i only support [2]
[errno 13] RADOS permission denied (error connecting to the cluster)

When I looked at the syslog on the ceph cluster node, I saw these messages
too.

Apr 22 14:51:40 ceph100 bash[27979]: debug 2021-04-22T11:51:40.684+ 7fe4d28cb700  0 cephx server client.admin:  attempt to reclaim global_id 264198 without presenting ticket
Apr 22 14:51:40 ceph100 bash[27979]: debug 2021-04-22T11:51:40.684+ 7fe4d28cb700  0 cephx server client.admin:  could not verify old ticket

Can anyone help me out, point me in the right direction, or share a relevant link?
Regards.
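
A plausible direction, not confirmed in this thread: the "could not verify
old ticket" / "attempt to reclaim global_id" messages match the cephx
global_id hardening shipped in 15.2.11/16.2.1, where monitors reject clients
that reclaim their global_id without a valid ticket. A minimal sketch for
checking and, as a temporary measure, relaxing that behaviour (run from a
working admin node; set it back to false once all clients are current):

# Check whether monitors currently reject insecure global_id reclaim
ceph config get mon auth_allow_insecure_global_id_reclaim

# Temporary workaround only: permit the old reclaim behaviour
ceph config set mon auth_allow_insecure_global_id_reclaim true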


[ceph-users] Pacific, one of monitor service doesnt response.

2021-04-24 Thread Cem Zafer
Hi,
My cluster had no problems before, but after upgrading from 15.2.11 to
16.2.0 one of my ceph monitor services stopped working. I have looked at the
service log but can't figure it out.
Does anyone have an idea what could be wrong with that monitor service?
The service log is attached below.
Regards

root@ceph100:~# ceph orch ps
NAME                   HOST     STATUS          REFRESHED  AGE  PORTS           VERSION  IMAGE ID      CONTAINER ID
alertmanager.ceph100   ceph100  running (102m)  7m ago     3d   -               0.20.0   0881eb8f169f  b62ac302b319
crash.ceph100          ceph100  running (102m)  7m ago     3d   -               16.2.0   24ecd6d5f14c  e786dc48b429
crash.ceph101          ceph101  running (99m)   5m ago     3d   -               16.2.0   24ecd6d5f14c  a2cefd673a5f
crash.ceph102          ceph102  running (101m)  7m ago     3d   -               16.2.0   24ecd6d5f14c  f9f8a337ba43
grafana.ceph100        ceph100  running (102m)  7m ago     3d   -               6.7.4    80728b29ad3f  60381a67b6a2
mgr.ceph100.abnyjw     ceph100  running (102m)  7m ago     3d   *:8443 *:9283   16.2.1   c757e4a3636b  67f98b3ad44d
mgr.ceph102.qqcrik     ceph102  running (101m)  7m ago     8h   *:8443 *:9283   16.2.1   c757e4a3636b  00a6eeb2a9eb
mon.ceph100            ceph100  running (102m)  7m ago     3d   -               16.2.0   24ecd6d5f14c  6bce0a137567
mon.ceph101            ceph101  stopped         5m ago     3d   -
mon.ceph102            ceph102  running (101m)  7m ago     3d   -               16.2.0   24ecd6d5f14c  9ca0e1c7ecd9
node-exporter.ceph100  ceph100  running (102m)  7m ago     3d   -               0.18.1   e5a616e4b9cf  b8ae7aa0fae1
node-exporter.ceph101  ceph101  running (99m)   5m ago     3d   -               0.18.1   e5a616e4b9cf  945cdd081979
node-exporter.ceph102  ceph102  running (101m)  7m ago     3d   -               0.18.1   e5a616e4b9cf  d9a967358fbe
osd.0                  ceph100  running (101m)  7m ago     3d   -               16.2.0   24ecd6d5f14c  8f74ba97c27e
osd.1                  ceph100  running (101m)  7m ago     3d   -               16.2.0   24ecd6d5f14c  fe0eed6a2cf4
osd.2                  ceph101  running (99m)   5m ago     3d   -               16.2.0   24ecd6d5f14c  1fd53036a139
osd.3                  ceph101  running (99m)   5m ago     3d   -               16.2.0   24ecd6d5f14c  329496f3a03e
osd.4                  ceph102  running (101m)  7m ago     3d   -               16.2.0   24ecd6d5f14c  cb16b76f0fc4
osd.5                  ceph102  running (101m)  7m ago     3d   -               16.2.0   24ecd6d5f14c  c5d4e56812af
prometheus.ceph100     ceph100  running (102m)  7m ago     3d   -               2.18.1   de242295e225  b3f1c51d7136

Apr 24 15:56:55 ceph101 systemd[1]: Started Ceph mon.ceph101 for 8432e37e-a22a-11eb-8d4f-c907e64b5aa7.
Apr 24 15:56:55 ceph101 bash[5664]: WARNING: Error loading config file: .dockercfg: $HOME is not defined
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.266+ 7fab02f78700  0 set uid:gid to 167:167 (ceph:ceph)
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.266+ 7fab02f78700  0 ceph version 16.2.0 (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process ceph-mon, pid 7
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.266+ 7fab02f78700  0 pidfile_write: ignore empty --pid-file
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.274+ 7fab02f78700  0 load: jerasure load: lrc load: isa
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.274+ 7fab02f78700  4 rocksdb: RocksDB version: 6.8.1
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.274+ 7fab02f78700  4 rocksdb: Git sha rocksdb_build_git_sha:@0@
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.274+ 7fab02f78700  4 rocksdb: Compile date Mar 30 2021
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.274+ 7fab02f78700  4 rocksdb: DB SUMMARY
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.274+ 7fab02f78700  4 rocksdb: CURRENT file:  CURRENT
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.274+ 7fab02f78700  4 rocksdb: IDENTITY file:  IDENTITY
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.274+ 7fab02f78700  4 rocksdb: MANIFEST file:  MANIFEST-002525 size: 221 Bytes
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.274+ 7fab02f78700  4 rocksdb: SST files in /var/lib/ceph/mon/ceph-ceph101/store.db dir, Total Num: 2, files: 002462.sst 002464.sst
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.274+ 7fab02f78700  4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-ceph101/store.db: 002526.log size: 0 ;
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.274+ 7fab02f78700  4 rocksdb: Options.error_if_exists: 0
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.274+ 7fab02f78700  4 rocksdb:   Options.create_if_missing: 0
Apr 24 15:56:57 ceph101 bash[5664]: debug 2021-04-24T12:56:57.274+ 7fab02f78700  4 rocksdb:
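
Not from the thread, but a couple of first steps that often help here (the
daemon name is taken from the output above):

# Try restarting the stopped monitor daemon via the orchestrator
ceph orch daemon restart mon.ceph101

# Follow the full daemon log on ceph101 (extra args after -- go to journalctl)
cephadm logs --name mon.ceph101 -- -f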

[ceph-users] How to purge/remove rgw from ceph/pacific

2021-09-11 Thread Cem Zafer
Hi,
How do I remove rgw from hosts? When I execute ```ceph orch daemon rm
```, another daemon is spawned in its place.
What is the proper way to remove rgw from ceph hosts?
Thanks.
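
A minimal sketch of the usual cephadm approach, assuming the rgw daemons are
managed by a service spec (in that case the orchestrator redeploys any daemon
removed individually, which would explain the respawn); the service name
below is a placeholder:

# List orchestrator services and note the rgw service name
ceph orch ls

# Remove the whole rgw service; cephadm then stops and removes its daemons
ceph orch rm rgw.myrgw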


[ceph-users] Radosgw single-site configuration

2021-09-12 Thread Cem Zafer
Hi,
I have been looking for documentation about single-site ceph object gateway
configuration, but I have only found multi-site documentation. Is it
possible to use the ceph object gateway as a single site? Can anyone assist
me with the configuration? A URL would be fine.
Thanks.
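
A minimal sketch under cephadm, Pacific-era syntax (the service id and
placement below are assumptions for illustration): when no realm or zone is
given, the gateway comes up with a default single-site configuration, so a
standalone deployment can be as simple as:

# Deploy a single rgw service named "myrgw" on host ceph100
ceph orch apply rgw myrgw --placement="ceph100"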


[ceph-users] Error: UPGRADE_FAILED_PULL: Upgrade: failed to pull target image

2021-09-18 Thread Cem Zafer
Hi,
During the usual upgrade procedure from 16.2.5 to 16.2.6, I get the error
below. Does anyone have any suggestions?
Thanks.

root@ceph100:~# ceph orch upgrade status
{
    "target_image": "docker.io/ceph/ceph:v16.2.6",
    "in_progress": true,
    "services_complete": [],
    "progress": "",
    "message": "Error: UPGRADE_FAILED_PULL: Upgrade: failed to pull target image"
}


[ceph-users] Re: Error: UPGRADE_FAILED_PULL: Upgrade: failed to pull target image

2021-09-18 Thread Cem Zafer
Here is the detailed error.
Thanks.

root@ceph100:~# ceph health detail
HEALTH_WARN Upgrade: failed to pull target image
[WRN] UPGRADE_FAILED_PULL: Upgrade: failed to pull target image
    host ceph100 `cephadm pull` failed: cephadm exited with an error code: 1, stderr:Pulling container image docker.io/ceph/ceph:v16.2.6...
Non-zero exit code 1 from /usr/bin/docker pull docker.io/ceph/ceph:v16.2.6
/usr/bin/docker: stderr Error response from daemon: manifest for ceph/ceph:v16.2.6 not found: manifest unknown: manifest unknown
Traceback (most recent call last):
  File "/var/lib/ceph/96825634-ff47-11eb-9c13-d7c69fddf094/cephadm.d4237e4639c108308fe13147b1c08af93c3d5724d9ff21ae797eb4b78fea3931", line 8230, in <module>
    main()
  File "/var/lib/ceph/96825634-ff47-11eb-9c13-d7c69fddf094/cephadm.d4237e4639c108308fe13147b1c08af93c3d5724d9ff21ae797eb4b78fea3931", line 8218, in main
    r = ctx.func(ctx)
  File "/var/lib/ceph/96825634-ff47-11eb-9c13-d7c69fddf094/cephadm.d4237e4639c108308fe13147b1c08af93c3d5724d9ff21ae797eb4b78fea3931", line 1737, in _infer_image
    return func(ctx)
  File "/var/lib/ceph/96825634-ff47-11eb-9c13-d7c69fddf094/cephadm.d4237e4639c108308fe13147b1c08af93c3d5724d9ff21ae797eb4b78fea3931", line 3286, in command_pull
    _pull_image(ctx, ctx.image)
  File "/var/lib/ceph/96825634-ff47-11eb-9c13-d7c69fddf094/cephadm.d4237e4639c108308fe13147b1c08af93c3d5724d9ff21ae797eb4b78fea3931", line 3311, in _pull_image
    raise RuntimeError('Failed command: %s' % cmd_str)
RuntimeError: Failed command: /usr/bin/docker pull docker.io/ceph/ceph:v16.2.6
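
One likely direction, not stated in this thread: around this release the
Ceph project's container images moved from Docker Hub to quay.io, and the
"manifest unknown" above suggests no v16.2.6 tag exists under
docker.io/ceph/ceph. A sketch of retrying against the quay.io image:

# Stop the stuck upgrade, then restart it with an explicit image
ceph orch upgrade stop
ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.6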

On Sat, Sep 18, 2021 at 1:56 PM Cem Zafer  wrote:

> Hi,
> During the usual upgrade procedure from 16.2.5 to 16.2.6, I get the error
> below. Does anyone have any suggestions?
> Thanks.
>
> root@ceph100:~# ceph orch upgrade status
> {
>     "target_image": "docker.io/ceph/ceph:v16.2.6",
>     "in_progress": true,
>     "services_complete": [],
>     "progress": "",
>     "message": "Error: UPGRADE_FAILED_PULL: Upgrade: failed to pull target image"
> }
>
>


[ceph-users] Re: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)')

2020-06-16 Thread Cem Zafer
Thanks Simon. As you suggested, I took care of the missing pieces and now
everything works fine.

On Tue, Jun 16, 2020 at 9:30 AM Simon Sutter  wrote:

> Hello,
>
>
> When you deploy ceph to other nodes with the orchestrator, they "just"
> have the containers you deployed to them.
> This means in your case, you started the monitor container on ceph101 and
> you must have installed at least the ceph-common package (else the ceph
> command would not work).
>
> When you enter the command ceph -s, there is no configuration file and no
> keyring file, so ceph does not know where to connect.
> Ceph's configuration directory defaults to /etc/ceph/ (which is probably
> empty or missing on ceph101).
>
> So in your case, you can either create the configuration files manually
> (read through "ceph auth" and what the config and the keyring file should
> look like), or just copy the ceph.conf and the admin keyring to
> /etc/ceph/ on ceph101.
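
A minimal sketch of that copy step (assuming root ssh from ceph100 to
ceph101; the keyring filename is the standard one for client.admin):

# Run on ceph100: copy the cluster config and admin keyring to ceph101
ssh root@ceph101 mkdir -p /etc/ceph
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@ceph101:/etc/ceph/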
>
>
> Regards,
>
> Simon
> --
> *From:* cemzafer 
> *Sent:* Monday, June 15, 2020 21:27:30
> *To:* ceph-us...@ceph.com
> *Subject:* [ceph-users] Error initializing cluster client:
> ObjectNotFound('RADOS object not found (error calling conf_read_file)')
>
> I have installed a simple ceph system with two nodes (ceph100, ceph101)
> with cephadm and the ceph orch host add command. I copied the ssh key to
> the second host (ceph101) with ssh-copy-id -f -i /etc/ceph/ceph.pub. I can
> execute the ceph -s command from the first host (ceph100), but when I
> execute the command on the second host (ceph101), I get the following error.
>
> Error initializing cluster client: ObjectNotFound('RADOS object not
> found (error calling conf_read_file)')
>
> Also, when I execute the 'ceph orch ps' command, the output seems
> suspicious to me.
>
> NAME         HOST     STATUS    REFRESHED  AGE  VERSION  IMAGE NAME  IMAGE ID  CONTAINER ID
> mon.ceph101  ceph101  starting  -          -
>
>
> Does anyone have an idea what the problem could be, or can anyone give me
> a good link for the octopus cephadm installation?
>
> Regards.


[ceph-users] node-exporter error problem

2020-06-25 Thread Cem Zafer
Hi,
Our ceph cluster's health is fine, but when I looked at the "ceph orch
ps" output, one of the daemons is in an error state, as shown below.

node-exporter.ceph102  ceph102  error  7m ago  13m   prom/node-exporter 

How can we debug and locate the problem with the ceph command? Also, where
can I find the error log: inside the docker container or on the host?
Regards.
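
A minimal sketch of where to look (daemon name from this thread; "<fsid>" is
a placeholder for the cluster fsid): cephadm keeps daemon logs in journald
on the host, not inside the container.

# On ceph102: show the daemon's log via cephadm
cephadm logs --name node-exporter.ceph102

# Equivalent journalctl form
journalctl -u ceph-<fsid>@node-exporter.ceph102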


[ceph-users] Push config to all hosts

2020-06-28 Thread Cem Zafer
Hi,
What is the best method(s) to push ceph.conf to all hosts in octopus (15.x)?
Thanks.


[ceph-users] How to change 'ceph mon metadata' hostname value in octopus.

2020-07-01 Thread Cem Zafer
Hi forum people,
What is the best method to change monitor metadata in octopus?
Thanks.


[ceph-users] [errno 2] RADOS object not found (error connecting to the cluster)

2020-07-11 Thread Cem Zafer
Hi,
I executed the "ceph -n osd.0 --show-config" command, but it replied with
this error message:
[errno 2] RADOS object not found (error connecting to the cluster)
Could someone point me in the right direction as to what the problem could be?
Thanks. Regards.

ceph version 15.2.4
I copied the client.admin key to the hosts and here is my ceph.conf file.

# minimal ceph.conf for 4372945a-b43d-11ea-b1b7-49709def22d4
[global]
fsid = 4372945a-b43d-11ea-b1b7-49709def22d4
mon_host = 192.168.1.10,192.168.1.11,192.168.1.12
mon_initial_members = 192.168.1.10,192.168.1.11
public network = 192.168.1.0/24
cluster network = 10.10.10.0/24
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
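
A minimal sketch of two alternatives, on the assumption that the cause is
authentication: "-n osd.0" makes ceph authenticate as osd.0, which requires
the osd.0 keyring rather than the copied client.admin one.

# Query the running OSD's configuration via its admin socket, on the OSD's
# host (under cephadm, run inside "cephadm shell" first)
ceph daemon osd.0 config show

# Or point ceph at the osd.0 keyring explicitly (the path shown is the
# traditional location, an assumption for this cluster)
ceph -n osd.0 --keyring /var/lib/ceph/osd/ceph-0/keyring --show-config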


[ceph-users] Re: Push config to all hosts

2020-07-28 Thread Cem Zafer
Thanks Ricardo for the clarification.
Regards.

On Mon, Jul 27, 2020 at 2:50 PM Ricardo Marques  wrote:

> Hi Cem,
>
> Since https://github.com/ceph/ceph/pull/35576 you will be able to tell
> cephadm to keep your `/etc/ceph/ceph.conf` updated on all hosts by running:
>
> # ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf true
>
> But this feature has not been released yet, so you will have to wait for
> v15.2.5.
>
>
> Ricardo Marques
>
> ------
> *From:* Cem Zafer 
> *Sent:* Monday, June 29, 2020 6:37 AM
> *To:* ceph-users@ceph.io 
> *Subject:* [ceph-users] Push config to all hosts
>
> Hi,
> What is the best method(s) to push ceph.conf to all hosts in octopus
> (15.x)?
> Thanks.