Hi,
is it possible to access buckets like:
https://../?
Some SDKs use DNS-style bucket names only. Using such an SDK, the endpoint
would look like ".".
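(A minimal sketch of what the gateway side usually needs for DNS-style bucket
names; the client section name and the domain s3.example.com are hypothetical
examples:)
# ceph.conf on the RGW hosts
[client.rgw.gateway1]
rgw dns name = s3.example.com
# plus a wildcard DNS record so <bucket>.s3.example.com resolves to the gateway:
#   *.s3.example.com.  IN  A  203.0.113.10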
All the best,
Florian
On 5/28/19 5:37 PM, Casey Bodley wrote:
On 5/28/19 11:17 AM, Scheurer François wrote:
Hi Casey
I greatly appreciate your quick and helpful answer :-)
It's unlikely that we'll do that, but if we do it would be announced
with a long deprecation period and migration strategy.
Fine, just
Hi,
we need to migrate a ceph pool used for gnocchi to another cluster in
another datacenter. Gnocchi uses the python rados or cradox module to
access the Ceph cluster. The pool is dedicated to gnocchi only. The
source pool is based on HDD OSDs while the target pool is SSD-only. As
there are
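(A rough sketch of one way such a cross-cluster pool copy can be done with the
rados CLI, assuming both clusters are reachable from one host and gnocchi is
stopped during the copy; the pool name and conf paths are examples:)
rados --conf /etc/ceph/source.conf -p gnocchi export /var/tmp/gnocchi.dump
rados --conf /etc/ceph/target.conf -p gnocchi import /var/tmp/gnocchi.dump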
behaviour to
write-through, as far as I understood. I expected the latency to increase
to at least 0.6 ms, which was the case, but I also expected the IOPS to
increase to up to 60,000, which was not the case. IOPS was constant at
~14,000 (4 jobs, QD=64).
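(For reference, a hypothetical fio job matching the quoted 4 jobs / QD=64,
assuming the test ran against a block device inside the VM; the device name is
an example:)
fio --name=randwrite --filename=/dev/vdb --rw=randwrite --bs=4k \
    --ioengine=libaio --direct=1 --numjobs=4 --iodepth=64 \
    --runtime=60 --time_based --group_reporting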
On 3/7/19 11:41 AM, F
Hi,
we are running an Openstack environment with Ceph block storage. There
are six nodes in the current Ceph cluster (12.2.10) with NVMe SSDs and a
P4800X Optane for rocksdb and WAL.
The decision was made to use the rbd writeback cache with KVM/QEMU. The
write latency is incredibly good (~85 µs) a
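(A minimal sketch of the librbd cache settings involved, shown with the
upstream default values; whether these match the setup described here is an
assumption:)
# ceph.conf [client] section on the hypervisors
[client]
rbd cache = true
rbd cache writethrough until flush = true   # stays write-through until the guest issues a flush
rbd cache size = 33554432                   # 32 MiB
rbd cache max dirty = 25165824              # 24 MiB; 0 would force write-through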
Hi,
I tried to add an "archive" storage class to our OpenStack environment by
introducing a second storage backend offering RBD volumes that have their
data in an erasure-coded pool. As I will have to specify a data-pool, I
tried it as follows:
### keyring files:
ceph.client.cind
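(A hypothetical sketch of the pieces usually needed for an EC-backed RBD data
pool; the pool and client names are examples, not the ones used here:)
ceph osd pool set archive-ec-data allow_ec_overwrites true
# ceph.conf client section used by the second cinder backend:
[client.cinder-archive]
rbd default data pool = archive-ec-data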
Hi,
we have built a 6-node NVMe-only Ceph cluster with 4x Intel DC P4510 8TB
and one Intel DC P4800X 375GB Optane per node. Up to 10x P4510 can be
installed in each node.
WAL and RocksDBs for all P4510 should be stored on the Optane (approx.
30GB per RocksDB incl. WAL).
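(A hypothetical sketch of how one such OSD could be created; the VG/LV and
device names are examples, and the WAL ends up on the DB LV when no separate
WAL device is given:)
lvcreate -n db-nvme1 -L 30g optane
ceph-volume lvm create --bluestore --data /dev/nvme1n1 --block.db optane/db-nvme1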
Internally, discussion
Hello,
we have been fighting an HDD spin-down problem on our production Ceph
cluster for two weeks now. The problem is not Ceph-related, but I guess
this topic is interesting to the list and, to be honest, I hope to find a
solution here.
We do use 6 OSD Nodes like:
OS: Suse 12 SP3
Ceph: SES 5.5 (12.
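(Not Ceph-specific, but a hypothetical mitigation sketch for SATA drives; the
device name is an example and whether the drives honour these settings depends
on their firmware:)
hdparm -S 0 /dev/sdb     # disable the standby (spin-down) timer
hdparm -B 254 /dev/sdb   # near-maximum APM level, avoids aggressive power saving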
cally though.
Thanks,
Igor
On 11/21/2018 11:11 AM, Florian Engelmann wrote:
Great support, Igor! Both thumbs up! We will try to build the tool
today and expand those bluefs devices once again.
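(For reference, the expand step itself is a single ceph-bluestore-tool call; a
minimal sketch assuming OSD id 0 as an example, run while the OSD is stopped:)
systemctl stop ceph-osd@0
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
systemctl start ceph-osd@0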
On 11/20/18 6:54 PM, Igor Fedotov wrote:
FYI: https://github.com/ceph/ceph/pull/25187
On 11/20/2
Engelmann wrote:
On 11/20/18 4:59 PM, Igor Fedotov wrote:
On 11/20/2018 6:42 PM, Florian Engelmann wrote:
Hi Igor,
what's your Ceph version?
12.2.8 (SES 5.5 - patched to the latest version)
Can you also check the output for
ceph-bluestore-tool show-label -p
ceph-bluestore-tool
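(For completeness, a hypothetical full invocation of that check; the paths are
examples:)
ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0
ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block.db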
'.bluefs'" did recognize the new sizes.
But we are 100% sure the new devices are used, as we already deleted the
old ones...
We tried to delete the key "size" and add one with the new value, but:
ceph-bluestore-tool rm-label-key --dev /var/lib/ceph/osd/ceph-0/block.db
-k
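(A hypothetical sketch of the intended label edit; the device path is taken
from the quoted command and 64424509440 is simply 60 GiB in bytes:)
ceph-bluestore-tool rm-label-key  --dev /var/lib/ceph/osd/ceph-0/block.db -k size
ceph-bluestore-tool set-label-key --dev /var/lib/ceph/osd/ceph-0/block.db -k size -v 64424509440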
Hi,
today we migrated all of our rocksdb and wal devices to new ones. The
new ones are much bigger (500MB for wal/db -> 60GB db and 2G WAL) and
LVM based.
We migrated like:
export OSD=x
systemctl stop ceph-osd@$OSD
lvcreate -n db-osd$OSD -L60g data || exit 1
lvcreate -n wal
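(The quoted steps break off here; a rough sketch of how such a migration might
continue follows. This is an assumption, not necessarily the procedure used:
copy the old db/wal into the new LVs, repoint the symlinks, then let bluefs
grow into the new space. Ownership/udev handling for the ceph user is omitted.)
lvcreate -n wal-osd$OSD -L2g data || exit 1
dd if=/var/lib/ceph/osd/ceph-$OSD/block.db  of=/dev/data/db-osd$OSD  bs=1M
dd if=/var/lib/ceph/osd/ceph-$OSD/block.wal of=/dev/data/wal-osd$OSD bs=1M
ln -sf /dev/data/db-osd$OSD  /var/lib/ceph/osd/ceph-$OSD/block.db
ln -sf /dev/data/wal-osd$OSD /var/lib/ceph/osd/ceph-$OSD/block.wal
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-$OSD
systemctl start ceph-osd@$OSD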
Hi,
we've got the same problem here. Our 12.2.5 RadosGWs crashed (unnoticed
by us) about 30,000 times with ongoing multipart uploads.
After a couple of days we ended up with:
xx-1.rgw.buckets.data    6    N/A    N/A    116TiB    87.22    17.1TiB    36264870    36.26M
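(One hedged way to find and clear such multipart leftovers from the client
side, assuming s3cmd and an example bucket name:)
s3cmd multipart s3://mybucket                    # list unfinished multipart uploads
s3cmd abortmp   s3://mybucket/OBJECT UPLOAD_ID   # abort a specific one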
Hi,
Our Ceph cluster is a 6-node cluster, each node having 8 disks. The
cluster is used for object storage only (right now). We use EC 3+2 on
the buckets.data pool.
We had a problem with RadosGW segfaulting (12.2.5) until we upgraded to
12.2.8. We had almost 30,000 radosgw crashes leading