Hello.
I have a 5-node cluster in datacenter A, and the same 5 nodes in datacenter B.
They're going to be a 10-node 8+2 EC cluster for backup, but I need to add
the 5 nodes later.
I have to sync my S3 data with multisite on the 5-node cluster in
datacenter A, move
them to B, and add the other 5 nodes
> step set_choose_tries 100
> step take default class hdd
> step choose indep 5 type host
> step choose indep 2 type osd
> step emit
> }
>
> This is kind of useful because if you set min_size to 8, you could even lose
> a
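The quoted steps above only show the tail of the rule. A hedged sketch of what the full CRUSH rule might look like (the rule name and id are assumptions, not from the thread): `choose indep 5 type host` followed by `choose indep 2 type osd` places the 10 EC chunks as 2 chunks on each of 5 hosts, so losing one host costs only 2 chunks.

```
rule ec82_hdd {
        id 2
        type erasure
        step set_choose_tries 100
        step take default class hdd
        step choose indep 5 type host
        step choose indep 2 type osd
        step emit
}
```

With min_size set to 8, the pool can then stay available through the loss of an entire host.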
Hello!
I have a cluster with a datacenter crushmap (A+B). (9+9 = 18 servers)
The cluster started on v12.2.0 Luminous 4 years ago.
Over the years I upgraded the cluster Luminous > Mimic > v14.2.16 Nautilus.
Now I have a weird issue. When I add a mon, or shut one down for a while and
start it up again, all th
Hello. I had a one-way multisite S3 cluster and we saw issues
with rgw-sync due to sharding problems, so I stopped the multisite
sync. That is not the topic, just background to my story.
I have some leftover 0-byte objects in the destination and I'm trying to
overwrite them with Rclone "pa
Good morning.
I have a bucket with 50M objects in it. The bucket was created with
multisite sync, and it is the master zone and the only zone now.
After a health check, I saw weird objects in a pending attr state.
I've tried to remove them with "radosgw-admin object rm --bypass-gc"
but I couldn't delete
Hello everyone!
I'm running Nautilus 14.2.16 and I'm using RGW with the Beast frontend.
I see this error log on every SSD OSD that is used for the RGW index.
Can you please tell me what the problem is?
OSD LOG:
cls_rgw.cc:1102: ERROR: read_key_entry()
idx=�1000_matches/xdir/05/21/27260.jpg ret=-2
cls_rg
"read instance_entry key.name=%s key.instance=%s flags=%d",
        instance_entry.key.name.c_str(), instance_entry.key.instance.c_str(),
        instance_entry.flags);
    return 0;
  }

  rgw_bucket_dir_entry& get_dir_entry() {
    return instance_entry;
  }
On Thu, 15 Apr 2021 at 02:19, by morphin wrote:
I have the same issue and have joined the club.
Almost every deleted bucket is still there due to multisite. I've also
removed the secondary zone and stopped the sync, but these stale instances are
still there.
Before adding a new secondary zone I want to remove them. If you're going to run
anything, please let me know.
Hello.
I have an RGW bucket (versioning=on), and there were objects like this:
radosgw-admin object stat --bucket=xdir
--object=f5492238-50cb-4bc2-93fa-424869018946
{
    "name": "f5492238-50cb-4bc2-93fa-424869018946",
    "size": 0,
    "tag": "",
    "attrs": {
        "user.rgw.manifest": "",
Hello.
I'm trying to fix a wrong cluster deployment (Nautilus 14.2.16).
Cluster usage is 40%, EC pool with RGW.
Every node has:
20 x OSD = TOSHIBA MG08SCA16TEY 16.0TB
2 x DB = NVMe PM1725b 1.6TB (Linux mdadm RAID1)
NVMe usage always sits around 90-99%.
With "iostat -xdh 1"
r/s w/s rkB
ocksdb_options to 536870912
>
> * get your release's options: `ceph config help bluestore_rocksdb_options`
> * append `max_bytes_for_level_base=536870912` to this list
> * set `ceph config set osd bluestore_rocksdb_options `
>
> Restart your OSDs.
>
>
> k
>
> On
by default)
>
> k
>
> On 19 Apr 2021, at 21:09, by morphin wrote:
>
> Are you trying to say I should add these (below) options to the config?
>
> - options.max_bytes_for_level_base = 536870912; // 512MB
> - options.max_bytes_for_level_multiplier = 10;
>
>
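A sketch of how those RocksDB options could be applied through BlueStore on a Nautilus-era cluster. The option string below is illustrative: the leading options stand in for your release's defaults, which you would look up first and append to rather than replace.

```shell
# Show the release default for the BlueStore RocksDB option string
ceph config help bluestore_rocksdb_options

# Re-set it with the extra RocksDB tunables appended.
# (compression/write-buffer values here are placeholders for your defaults)
ceph config set osd bluestore_rocksdb_options \
  "compression=kNoCompression,max_write_buffer_number=4,max_bytes_for_level_base=536870912,max_bytes_for_level_multiplier=10"

# Restart the OSDs so the new RocksDB options take effect
systemctl restart ceph-osd.target
```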
Hello.
I have an RGW S3 user and the user has 2 buckets.
I tried to copy objects from old.bucket to new.bucket with rclone (on
the RGW client server).
Afterwards I checked an object with "radosgw-admin --bucket=new.bucket
object stat $i" and I saw the old.bucket id and marker id, and also the old bucket
name in the
version 14.2.16
Have a great day.
Regards.
On Thu, 22 Apr 2021 at 06:08, Matt Benjamin wrote:
>
> Hi Morphin,
>
> Yes, this is by design. When an RGW object has tail chunks and is
> copied so as to duplicate an entire tail chunk, RGW causes the
> coincident chunk(s) to
Hello.
It's easy. In ceph.conf, copy the rgw fields and change 3 things:
1- name
2- log path name
3- client port
After that, feel free to start the rgw service with systemctl. Check the service
status and tail the rgw log file. Try to read or write and check the logs.
If everything works as expected then
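A minimal sketch of what the copied ceph.conf section might look like, assuming a Beast frontend; the instance name, port, and log path here are illustrative, not from the thread:

```
[client.rgw.gateway2]
rgw_frontends = beast port=8081
log_file = /var/log/ceph/client.rgw.gateway2.log
```

You would then start it with systemctl (e.g. `ceph-radosgw@rgw.gateway2`) and tail the new log file to confirm it answers on the new port.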
turns into pending object
again.
Maybe deleting old periods will help. What do you think?
It's very hard to explain; I tried my best.
Regards.
On Thu, 22 Apr 2021 at 18:26, Matt Benjamin wrote:
> Hi Morphin,
>
> On Thu, Apr 22, 2021 at 3:40 AM by morphin
> wrote:
> >
---
>
> On 2021. Apr 22., at 18:30, by morphin wrote:
>
> Hello.
>
> Its easy. In ceph.conf copy the rgw fields and change 3 things.
> 1- name
> 2- log path name
> 3- client port.
>
>
> After that feel free to start rgw service with systemctl
Hello.
We're running 1000 VMs on 28 nodes with 6 SSDs each (no separate DB device), and
these VMs are mostly Win10.
2 LVM OSDs per 4TB device, 288 OSDs in total, and one RBD pool with 8192 PGs.
Replication 3.
Ceph version: Nautilus 14.2.16.
I'm looking for all-flash RBD tuning.
This is good test env and tomorrow
2x10G for cluster + Public
2x10G for Users
lacp = 802.3ad
On Mon, 26 Apr 2021 at 17:25, Smart Weblications GmbH wrote:
>
> Hi,
>
>
> On 25.04.2021 at 03:58, by morphin wrote:
> > Hello.
> >
> > We're running 1000vm on 28 node with 6 ssd (no sep
Hello.
I'm trying to export objects from rados with "rados get". Some objects are
bigger than 4M and they have tails. Is there an easy way to get the tail
information of an object?
For example, this is an object:
- c106b26b.3_Img/2017/12/im034113.jpg
These are the object parts:
-
c106b26b.3__multipart_Img/201
t --object
> this will output a json document. With the information in the manifest key
> you can find out what rados objects belong to the RGW object.
>
>
>
> Kind regards,
>
> Rob
> https://www.42on.com/
>
>
>
> From: by morph
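A sketch of the approach Rob describes, using the example object from the thread and a hypothetical bucket name; jq is used here only to pull out the manifest key:

```shell
# Dump the RGW object's metadata, including its manifest
radosgw-admin object stat --bucket=mybucket \
  --object=Img/2017/12/im034113.jpg > stat.json

# The manifest describes the head object and the rados tail objects
# (prefix, stripe rules, max_head_size, etc.)
jq '.manifest' stat.json
```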
Hello.
I had a multisite RGW (14.2.16 Nautilus) setup and some of the
buckets couldn't finish bucket sync due to overfilled buckets.
There were different needs, and the sync was started for the purpose of migration.
I made the secondary zone the master and removed the old master zone
from the zonegroup.
Now I still
Hello
I have a weird problem on a 3-node cluster ("Nautilus 14.2.9").
When I test a power failure, the OSDs are not marked DOWN and the MDS does not
respond anymore.
If I manually set the OSDs down, the MDS becomes active again.
BTW: Only 2 nodes have OSDs. The third node is only for the MON.
I've set mon_osd_down_out_int
I've figured it out, but I'm scared of the result.
The solution is "mon_osd_min_down_reporters = 1".
Due to the "two node" cluster and "replicated 2" with "chooseleaf host",
the reporter count should be set to 1, but in a malfunction this could
be a s
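A sketch of the change, assuming Nautilus-style `ceph config` commands. The reasoning: with chooseleaf host and only two OSD hosts, failure reports for an OSD can only come from peers on one other host, so the default quorum of 2 distinct reporters is never reached.

```shell
# Default is 2: an OSD is marked down only after reports
# from 2 distinct reporters
ceph config get mon mon_osd_min_down_reporters

# On a two-OSD-host cluster, lower it so a single peer host can
# mark an OSD down (trade-off: one flaky reporter can cause flapping)
ceph config set mon mon_osd_min_down_reporters 1
```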
Hello.
I have a virtualization env and I'm looking for new SSDs to replace the HDDs.
What are the best performance/price SSDs on the market right now?
I'm looking at 1TB, 512GB, 480GB, 256GB, 240GB.
Is there an SSD recommendation list for Ceph?
of chassis / form factor, budget,
> workload and needs.
>
> The sizes you list seem awfully small. Tell us more about your use-case.
> OpenStack? Proxmox? QEMU? VMware? Converged? Dedicated ?
> —aad
>
>
> > On May 29, 2021, at 2:10 PM, by morphin wrote:
> >
>