Hi Ronny,
Not sure what could have caused your outage with journaling, TBH :/. Best
of luck with the Ceph/Proxmox bug!
On 5/23/22 20:09, ronny.lippold wrote:
> hi arthur,
>
> just for information. we had some horrible days ...
>
> last week, we shut some virtual machines down.
> most of them did n
I lost some disks in my cluster; Ceph then began to recover and re-replicate
the affected objects.
This caused some errors on the S3 API:
Gateway Time-out (Service: Amazon S3; Status Code: 504; Error Code: 504
Gateway Time-out; Request ID: null; S3 Extended Request ID: null; Prox
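If the 504s coincide with that recovery traffic, one common mitigation (a
sketch of the standard throttling knobs, not necessarily what this cluster
needs) is to slow recovery down in favour of client/RGW I/O and revert once
the cluster is healthy again:

# check how much recovery/backfill is in flight
ceph -s
# throttle recovery so client traffic is less starved
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
ceph config set osd osd_recovery_sleep_hdd 0.1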
I want to keep the data pools for RGW on HDD disk drives and use some SSD
drives for a cache tier on top of them.
Has anyone tested this scenario?
Is this practical and optimal?
How can I do this?
Hi Farhad,
you can put the block.db (which contains the WAL and metadata) on SSDs when
creating the OSD.
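For example (a sketch; device names and filters are placeholders, adjust them
to your hardware):

# manual variant: data on an HDD, block.db on an SSD partition or LV
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

# cephadm variant: an OSD service spec that places DBs on non-rotational devices
cat <<'EOF' > osd-hdd-data-ssd-db.yaml
service_type: osd
service_id: hdd-data-ssd-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF
ceph orch apply -i osd-hdd-data-ssd-db.yaml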
Cheers
- Boris
> Am 24.05.2022 um 11:52 schrieb farhad kh :
>
> I want to keep the data pools for RGW on HDD disk drives and use some SSD
> drives for a cache tier on top of them
> Has anyone t
Hi,
I am getting a lot of errors from the S3 API.
In the S3 client I get this:
2022-05-24 10:49:58.095 ERROR 156723 --- [exec-upload-21640003-285-2]
i.p.p.d.service.UploadDownloadService: Gateway Time-out (Service:
Amazon S3; Status Code: 504; Error Code: 504 Gateway Time-out; Request ID:
null; S3 Extended Req
Dear ceph user community,
I am trying to install and configure a node with a Ceph cluster. The Linux
kernel we have does not include the rbd kernel module, hence we installed it
ourselves:
zypper install -y ceph-common > 15
zypper install -y kernel-source = 5.3.18-24.75_10.0.189_2.1_20.4__g0
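A quick sanity check that the freshly built module is visible to the kernel
and loads (a sketch, assuming the module build/install step completed):

modinfo rbd
modprobe rbd
lsmod | grep rbd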
On Tue, May 24, 2022 at 3:57 PM Sopena Ballesteros Manuel
wrote:
>
> Dear ceph user community,
>
>
> I am trying to install and configure a node with a ceph cluster. The linux
> kernel we have does not include the rbd kernel module, hence we installed it
> ourselves:
>
>
> zypper install -y ceph
Good morning,
I'm looking into viable upgrade paths for my cephadm-based Octopus
deployment running on CentOS 7. Given the podman support matrix for
cephadm, how did others successfully move to Pacific under a RHEL 8 based OS?
I am looking to use Rocky Linux moving forward, but the latest 8.6 uses podma
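For reference, once the host OS / podman question is settled, the orchestrated
upgrade itself is the usual one-liner (the Pacific version below is only an
example):

# check orchestrator and daemon state first
ceph -s
ceph orch ps
# start the rolling upgrade to a Pacific release and watch progress
ceph orch upgrade start --ceph-version 16.2.9
ceph orch upgrade status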
Hi Ilya,
thank you very much for your prompt response.
Any rbd command variation is affected (device mapping included).
We are using a physical machine (no container involved).
Below is the output of running strace as suggested:
nid001388:/usr/src/linux # strace -f rbd -n client.noir -o
On Tue, May 24, 2022 at 5:20 PM Sopena Ballesteros Manuel
wrote:
>
> Hi Ilya,
>
>
> thank you very much for your prompt response,
>
>
> Any rbd command variation is affected (mapping device included)
>
> We are using a physical machine (no container involved)
>
>
> Below is the output of the runni
Yes, dmesg shows the following:
...
[23661.367449] rbd: rbd12: failed to lock header: -13
[23661.367968] rbd: rbd2: no lock owners detected
[23661.369306] rbd: rbd11: no lock owners detected
[23661.370068] rbd: rbd11: breaking header lock owned by client21473520
[23661.370518] rbd: rbd11: blacklis
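For context, -13 is EACCES: to break a stale header lock the kernel client has
to blocklist the previous lock owner, and that is what the standard 'profile rbd'
mon cap grants. A quick check against the client used above (the caps shown are
the typical RBD profile, not necessarily what this cluster has):

ceph auth get client.noir
# typical caps for an RBD client:
#   caps mon = "profile rbd"
#   caps osd = "profile rbd pool=<pool>"
# list current blocklist entries ('blacklist' on Octopus and older)
ceph osd blocklist ls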
On Tue, May 24, 2022 at 8:14 PM Sopena Ballesteros Manuel
wrote:
>
> yes dmesg shows the following:
>
> ...
>
> [23661.367449] rbd: rbd12: failed to lock header: -13
> [23661.367968] rbd: rbd2: no lock owners detected
> [23661.369306] rbd: rbd11: no lock owners detected
> [23661.370068] rbd: rbd11
Hi,
I want to use a private registry for my running Ceph storage cluster, so I
changed the default registry of my container runtime (Docker) in
/etc/docker/daemon.json:
{
  "registry-mirrors": ["https://private-registery.fst"]
}
and changed all the registry addresses in /usr/sbin/cephadm (quay.ceph.io and
docker.io) to my private
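For what it's worth, rather than patching /usr/sbin/cephadm, cephadm exposes
settings for using a private registry that look roughly like this (registry
host, credentials and image tags below are only placeholders):

# store registry credentials so cephadm can pull images on every host
ceph cephadm registry-login --registry-url private-registery.fst \
    --registry-username <user> --registry-password <password>
# point the cluster at mirrored images instead of quay.io / docker.io
ceph config set global container_image private-registery.fst/ceph/ceph:v16.2.9
ceph config set mgr mgr/cephadm/container_image_prometheus private-registery.fst/prometheus/prometheus:v2.18.1
# grafana, alertmanager and node-exporter have analogous
# mgr/cephadm/container_image_* options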
This maintenance is ongoing. This was a much larger effort than anticipated.
I've unpaused Jenkins but fully expect many jobs to fail for the next
couple days.
If you had a PR targeting master, you will need to edit the PR to target
main now instead.
I appreciate your patience.
On 5/19/22
Thanks for the heads-up David!
FYI for anyone who doesn't know how to change the base branch, click on
"Edit" next to the PR title, click on "base", and change it to "main".
On Tue, May 24, 2022 at 5:31 PM David Galloway wrote:
> This maintenance is ongoing. This was a much larger effort than
>