This should help:
ceph config set mgr mgr/balancer/upmap_max_deviation 1
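If it helps, this is roughly how I'd check that it takes effect (standard balancer and OSD commands, nothing specific to your cluster):

# confirm the balancer is enabled and using upmap (the deviation knob only matters there)
ceph balancer status
ceph balancer mode upmap
ceph balancer on
# watch per-OSD utilisation and PG counts converge
ceph osd df tree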
On Mon, Apr 19, 2021 at 10:17 AM Ml Ml wrote:
>
> Anyone have an idea? :)
>
> On Fri, Apr 16, 2021 at 3:09 PM Ml Ml wrote:
> >
> > Hello List,
> >
> > any ideas why my OSDs are that unbalanced?
> >
> > root@ceph01:~# ceph -s
>
Hi Ivan,
this is a feature that is not yet released in Pacific. It seems the
documentation is a bit ahead of the code right now.
Sebastian
On Fri, Apr 16, 2021 at 10:58 PM i...@z1storage.com
wrote:
> Hello,
>
> According to the documentation, there's a count-per-host key for 'ceph
> orch', but it doe
Hi,
is there a way to remove multipart uploads that are older than X days?
It doesn't need to be built into Ceph or fully automated; it's just
something I don't want to have to build on my own.
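For now the only way I know of is doing it by hand with something like the aws CLI (bucket name and endpoint below are placeholders), which is exactly the kind of thing I'd rather not script myself:

# list multipart uploads that were started but never completed
aws --endpoint-url http://rgw.example.com s3api list-multipart-uploads --bucket mybucket
# abort one of them using the Key and UploadId from the listing above
aws --endpoint-url http://rgw.example.com s3api abort-multipart-upload \
    --bucket mybucket --key path/to/object --upload-id UPLOADID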
I'm currently trying to debug a problem where Ceph reports a lot more used
space than it actually requires (
http
Hello.
I have an RGW bucket (versioning=on), and there were objects like this:
radosgw-admin object stat --bucket=xdir \
    --object=f5492238-50cb-4bc2-93fa-424869018946
{
    "name": "f5492238-50cb-4bc2-93fa-424869018946",
    "size": 0,
    "tag": "",
    "attrs": {
        "user.rgw.manifest": "",
Hi Istvan,
both of them require bucket access, correct?
Is there a way to add the LC policy globally?
Cheers
Boris
On Mon, Apr 19, 2021 at 11:58 AM Szabo, Istvan (Agoda) <
istvan.sz...@agoda.com> wrote:
> Hi,
>
> You have 2 ways:
>
> The first is using the S3 Browser app: in the menu, select the
Anyone have an idea? :)
On Fri, Apr 16, 2021 at 3:09 PM Ml Ml wrote:
>
> Hello List,
>
> any ideas why my OSDs are that unbalanced?
>
> root@ceph01:~# ceph -s
>   cluster:
>     id:     5436dd5d-83d4-4dc8-a93b-60ab5db145df
>     health: HEALTH_WARN
>             1 nearfull osd(s)
>             4 pool
Good morning,
is there any documentation available regarding the metadata stored
within LVM that ceph-volume manages / creates?
My background is that ceph-volume activate does not work on non-systemd
Linux distributions, but if I know how to recreate the tmpfs, we can
easily start the osd with
Hi,
You have 2 ways:
The first is using the S3 Browser app: in the menu, select the multipart uploads
and clean them up.
The other is to set a lifecycle policy.
On the client:
vim lifecyclepolicy
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
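From there, a complete policy file might look roughly like this (the rule ID and the 3-day window are only example values); it can then be applied with s3cmd and checked on the RGW side:

# write the policy file; adjust the ID and the abort window to taste
cat > lifecyclepolicy <<'EOF'
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>abort-stale-multipart</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <AbortIncompleteMultipartUpload>
      <DaysAfterInitiation>3</DaysAfterInitiation>
    </AbortIncompleteMultipartUpload>
  </Rule>
</LifecycleConfiguration>
EOF
# apply it to the bucket and verify RGW picked it up
s3cmd setlifecycle lifecyclepolicy s3://mybucket
radosgw-admin lc list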
Hi Sebastian,
Thank you. Is there a way to create more than 1 rgw per host until this
new feature is released?
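For example, I don't know whether cephadm accepts it, but would declaring two separate rgw services pinned to the same host (the service names and host below are made up) give me two daemons per host?

cat > rgw.yaml <<'EOF'
service_type: rgw
service_id: store.a
placement:
  hosts:
    - host1
---
service_type: rgw
service_id: store.b
placement:
  hosts:
    - host1
EOF
ceph orch apply -i rgw.yaml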
On 2021/04/19 11:39, Sebastian Wagner wrote:
Hi Ivan,
this is a feature that is not yet released in Pacific. It seems the
documentation is a bit ahead of the code right now.
Sebastian
The best questions are the ones that one can answer oneself.
The great documentation on
https://docs.ceph.com/en/latest/dev/ceph-volume/lvm/
gives the right pointers. The right search term is "lvm list tags" and
results in something like this:
[15:56:04] server20.place6:~# lvs -o lv_tags
/
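For the archives, the same metadata can be pulled in a couple of ways (output trimmed; nothing here is host-specific):

# raw LVM tags that ceph-volume maintains (osd id, osd fsid, cluster fsid, block device, ...)
lvs -o lv_name,vg_name,lv_tags --noheadings
# ceph-volume can render the same information, also as JSON
ceph-volume lvm list
ceph-volume lvm list --format json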
Hello.
I'm trying to fix a misconfigured cluster deployment (Nautilus 14.2.16).
Cluster usage is 40%; EC pool with RGW.
Every node has:
20 x OSD = TOSHIBA MG08SCA16TEY 16.0TB
2 x DB = NVMe PM1725b 1.6TB (Linux mdadm RAID1)
NVMe usage always stays around 90-99%.
With "iostat -xdh 1":
r/s w/s rkB
Hi,
> My background is that ceph-volume activate does not work on non-systemd
> Linux distributions
Why not use the --no-systemd option with the ceph-volume activate
command?
The systemd part only enables and starts the service, but the tmpfs
part should work even if you're not using systemd.
Hey Dimitri,
because --no-systemd still requires systemd:
[19:03:00] server20.place6:~# ceph-volume lvm activate --all --no-systemd
--> Executable systemctl not in PATH:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
--> FileNotFoundError: [Errno 2] No such file or directory: 's
Thanks, by commenting out the ProtectClock directive the issue is resolved.
Thanks for the support.
On Sun, Apr 18, 2021 at 9:28 AM Lomayani S. Laizer
wrote:
> Hello,
>
> Commenting out ProtectClock=true in /lib/systemd/system/ceph-osd@.service
> should fix the issue
>
>
>
> On Thu, Apr 8, 2021 at 9:49 A
On Sun, Apr 18, 2021 at 10:31:30PM +0200, huxia...@horebdata.cn wrote:
> Dear Cephers,
>
> Just curious whether anyone has experience using bcache on top of
> HDD OSDs to accelerate IOPS performance?
>
> If so, how about the stability and the performance improvement, and for how
> l
So that's a bug ;)
https://github.com/ceph/ceph/blob/master/src/ceph-volume/ceph_volume/devices/lvm/activate.py#L248-L251
This doesn't honor the --no-systemd flag.
But this should work when you're not using the --all option.
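If even that fails, my understanding is that for a bluestore OSD the activation boils down to roughly the following (untested sketch; the OSD id and LV path are examples, take the real ones from 'ceph-volume lvm list'):

# recreate the tmpfs directory ceph-volume would normally mount
mkdir -p /var/lib/ceph/osd/ceph-0
mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
# populate it from the bluestore label on the block LV
ceph-bluestore-tool prime-osd-dir \
    --dev /dev/ceph-vg/osd-block-0 --path /var/lib/ceph/osd/ceph-0
# (re)create the block symlink and fix ownership
ln -snf /dev/ceph-vg/osd-block-0 /var/lib/ceph/osd/ceph-0/block
chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
# start the daemon with whatever init the distribution provides, e.g.
ceph-osd --id 0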
Dimitri
On Mon, Apr 19, 2021 at 10:41 AM Nico Schottelius <
nico.sch
Hi All,
I want to send Ceph logs out to an external Graylog server. I’ve configured
the Graylog host IP using “ceph config set global log_graylog_host x.x.x.x” and
enabled logging through the Ceph dashboard (I’m running Octopus 15.2.9 –
container based). I've also set up a GELF UDP input on Gr
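For completeness, my understanding is that the relevant options look roughly like this (host and port are placeholders, 12201 being the usual GELF UDP default), in case I've missed one:

ceph config set global log_to_graylog true
ceph config set global err_to_graylog true
ceph config set global mon_cluster_log_to_graylog true
ceph config set global log_graylog_host x.x.x.x
ceph config set global log_graylog_port 12201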
Thanks for the answer. It seems very easy.
I've never played with RocksDB options before. I've always used the defaults,
and I think I need to experiment more, but I couldn't find a good config
reference for the Ceph side.
Can I use this guide instead?
https://github.com/facebook/rocksdb/wiki/Rock
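To make the question concrete: as far as I can tell, whatever tuning I come up with ends up in a single comma-separated option (the values below are only an illustration, not a recommendation):

# show the current value, then override it; OSDs need a restart to pick it up
ceph config get osd bluestore_rocksdb_options
ceph config set osd bluestore_rocksdb_options "compression=kNoCompression,max_write_buffer_number=8,min_write_buffer_number_to_merge=2,write_buffer_size=268435456"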
Good evening,
I have to tackle an old, probably recurring topic: HBAs vs. RAID
controllers. While generally speaking many people in the Ceph field
recommend going with HBAs, it seems that in our infrastructure the only
server we phased in with an HBA instead of a RAID controller is actually doing
worse in terms
> For the background: we have many Perc H800+MD1200 [1] systems running
> with
> 10TB HDDs (raid0, read ahead, writeback cache).
> One server has LSI SAS3008 [0] instead of the Perc H800,
> which comes with 512MB RAM + BBU. On most servers latencies are around
> 4-12ms (average 6ms), on the system
Marc writes:
>> For the background: we have many Perc H800+MD1200 [1] systems running
>> with
>> 10TB HDDs (raid0, read ahead, writeback cache).
>> One server has LSI SAS3008 [0] instead of the Perc H800,
>> which comes with 512MB RAM + BBU. On most servers latencies are around
>> 4-12ms (avera
This is what I have when I query Prometheus. Most HDDs are still SATA 5400 rpm;
there are also some SSDs. I also did not optimize CPU frequency settings.
(Forget about the instance=c03; that is just because the data comes from mgr
c03. These drives are on different hosts.)
ceph_osd_apply_lat
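(The metrics above come from the mgr prometheus module; for anyone who wants to compare their own hosts, they can be pulled with something like the following, assuming the default exporter port 9283 and a placeholder hostname.)

curl -s http://mgr-host:9283/metrics | grep -E 'ceph_osd_(apply|commit)_latency_ms'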
Hi,
I have also used bcache extensively on FileStore with journals on SSD
for at least 5 years. This has worked very well in all versions up to
Luminous. The IOPS improvement was definitely beneficial for VM disk
images in RBD. I am also using it under BlueStore with DB/WAL on NVMe
on both Luminous
Hey all,
I wanted to confirm my understanding of some of the mechanics of
backfill in EC pools. I've yet to find a document that outlines this
in detail; if there is one, please send it my way. :) Some of what I
write below is likely in the "well, duh" category, but I tended
towards completeness.
This is the 11th bugfix release in the Octopus stable series. It
addresses a security vulnerability in the Ceph authentication framework.
We recommend that users update to this release. For detailed release
notes with links and a changelog, please refer to the official blog entry at
https://ceph.io/rel
This is the 20th bugfix release in the Nautilus stable series. It
addresses a security vulnerability in the Ceph authentication framework.
We recommend that users update to this release. For detailed release
notes with links and a changelog, please refer to the official blog entry at
https://ceph.io/re
This is the first bugfix release in the Pacific stable series. It
addresses a security vulnerability in the Ceph authentication framework.
We recommend that users update to this release. For detailed release
notes with links and a changelog, please refer to the official blog entry at
https://ceph.io/re
Dear Mattias,
Very glad to know that your setup with bcache works well in production.
How long have you been putting XFS on bcache on HDD in production? Which bcache
version (I mean the kernel) do you use, or do you use a special version of
bcache?
Thanks in advance,
samuel
huxia...@ho