Hi Guys,
This is a cross-post from the proxmox ML.
This morning I had a bit of a big boo-boo on our production system.
After a very sudden network outage somewhere during the night, one of my
ceph-osds is no longer starting up.
If I try to start it manually, I get a very spectacular failure,
Hi all,
I am using ceph 14.2.1 (Nautilus)
I am unable to increase the pg_num of a pool.
I have a pool named Backup; the current pg_num is 64: ceph osd pool get
Backup pg_num => result pg_num: 64
And when I try to increase it using the command
ceph osd pool set Backup pg_num 512 => result "
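For reference, the pool's current placement-group settings and the cluster's
minimum required OSD release can be checked like this (a rough sketch, using
the pool name from above):

    ceph osd pool get Backup pg_num
    ceph osd pool get Backup pgp_num
    # the minimum release the cluster currently requires of its OSDs
    ceph osd dump | grep require_osd_release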
Hi Brad,
Thank you for your response, and we will check this video as well.
Our requirement is that, while writing an object into the cluster, if we can
provide the number of copies to be made, the network consumption between the
client and the cluster will be only for one object write. However, the cluster
wil
I ran into this recently. Try running "ceph osd require-osd-release
nautilus". This drops backwards compat with pre-nautilus and allows
changing settings.
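Roughly (pool name taken from the original post; this should only be run once
all OSDs are actually on Nautilus):

    # drop pre-Nautilus compatibility, then retry the change
    ceph osd require-osd-release nautilus
    ceph osd pool set Backup pg_num 512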
On Mon, Jul 1, 2019 at 4:24 AM Sylvain PORTIER wrote:
>
> Hi all,
>
> I am using ceph 14.2.1 (Nautilus)
>
> I am unable to increase the pg_num
Ceph already does this by default. For each replicated pool, you can set
the 'size' which is the number of copies you want Ceph to maintain. The
accepted norm for replicas is 3, but you can set it higher if you want to
incur the performance penalty.
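For example, a sketch with a placeholder pool name:

    # number of replicas Ceph maintains for the pool
    ceph osd pool set <pool-name> size 3
    ceph osd pool get <pool-name> size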
On Mon, Jul 1, 2019, 6:01 AM nokia ceph wrote:
> On Jun 27, 2019, at 4:53 PM, David Turner wrote:
>
> I'm still going at 452M incomplete uploads. There are guides online for
> manually deleting buckets kinda at the RADOS level that tend to leave data
> stranded. That doesn't work for what I'm trying to do so I'll keep going with
> this and
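For what it's worth, a sketch of doing that kind of cleanup through RGW/S3
instead of raw RADOS (bucket, key and upload id are placeholders):

    # list and abort incomplete multipart uploads
    aws s3api list-multipart-uploads --bucket mybucket
    aws s3api abort-multipart-upload --bucket mybucket --key somekey --upload-id <UploadId>
    # or delete the whole bucket and its contents via radosgw-admin
    radosgw-admin bucket rm --bucket=mybucket --purge-objects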
I believe he needs to increase the pgp_num first, then pg_num.
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Mon, Jul 1, 2019 at 7:21 AM Nathan Fish wrote:
> I ran into this recently. Try running "ceph osd require-osd-release
> nautilus".
In Nautilus just pg_num is sufficient for both increases and decreases.
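A rough sketch of the difference (pool name from the original post):

    # Nautilus and later: pgp_num follows pg_num automatically
    ceph osd pool set Backup pg_num 512
    # pre-Nautilus: both had to be raised explicitly, pg_num first
    # (pgp_num cannot exceed pg_num)
    ceph osd pool set Backup pg_num 512
    ceph osd pool set Backup pgp_num 512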
On Mon, Jul 1, 2019 at 10:55 AM Robert LeBlanc wrote:
> I believe he needs to increase the pgp_num first, then pg_num.
>
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Mon, Jul 1, 2019 at 11:57 AM Brett Chancellor wrote:
> In Nautilus just pg_num is sufficient for both increases and decreases.
>
>
Good to know, I haven't gotten to Nautilus yet.
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Sat, Jun 29, 2019 at 8:13 PM Bryan Henderson wrote:
>
> > I'm not sure why the monitor did not mark it _out_ after 600 seconds
> > (default)
>
> Well, that part I understand. The monitor didn't mark the OSD out because the
> monitor still considered the OSD up. No reason to mark an up OSD out
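For reference, a sketch of the settings involved (Nautilus-style config
commands; the defaults noted in the comments are the stock values):

    # how long an OSD must stay down before the mons mark it out (default 600s)
    ceph config get mon mon_osd_down_out_interval
    # how long missed heartbeats are tolerated before peers report an OSD down (default 20s)
    ceph config get osd osd_heartbeat_grace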
On Fri, Jun 28, 2019 at 5:41 PM Jorge Garcia wrote:
>
> Ok, actually, the problem was somebody writing to the filesystem. So I moved
> their files and got to 0 objects. But then I tried to remove the original
> data pool and got an error:
>
> # ceph fs rm_data_pool cephfs cephfs-data
> Error
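A sketch of things worth checking before removing a data pool (the filesystem
name comes from the message above):

    # list the filesystem's data pools; the default (first) data pool cannot be removed
    ceph fs ls
    ceph fs status cephfs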
I need some help getting up the learning curve and hope someone can get me
on the right track.
I need to set up a new cluster, but want the mon, mgr and rgw services as
containers on the non-container OSD nodes. It seems that doing no containers
or all containers is fairly easy, but I'm trying to un
Hello
I can’t get data flushed out of an OSD with its weight set to 0. Is there any
way of checking the tasks queued for PG remapping? Thank you.
Yanko.
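A rough sketch of ways to check this (assuming the goal is to drain the OSD):

    # confirm which weight was changed; both crush weight and reweight affect placement
    ceph osd df tree
    # overall recovery/backfill backlog
    ceph status
    # PGs currently remapped or backfilling
    ceph pg dump pgs_brief | grep -E 'remapped|backfill'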
> Normally in the case of a restart then somebody who used to have a
> connection to the OSD would still be running and flag it as dead. But
> if *all* the daemons in the cluster lose their soft state, that can't
> happen.
OK, thanks. I guess that explains it. But that's a pretty serious design
Hi Brett,
I think I was wrong here in the requirement description. It is not about
data replication; we need the same content stored under a different object name.
We store video content inside the Ceph cluster. And our new requirement is
that we need to store the same content for different users, hence need sam
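One way to get the same bytes under several object names without a second
upload from the client is a server-side copy; a sketch, with bucket, pool and
object names purely hypothetical:

    # via RGW/S3: CopyObject runs inside the cluster, not back through the client
    aws s3 cp s3://videos/clip1.mp4 s3://videos/user42-clip1.mp4
    # via plain RADOS:
    rados -p videopool cp clip1 clip1-user42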