On 08/26/2018 01:39 PM, Vasiliy Tolstov wrote:
Why avoid cache tier? Is this only for erasure, or for replicated too?
Because cache tier is a very uncommon feature. Cephers mostly used it to
get rbd writes onto EC pools before Luminous [1]
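For reference, a minimal sketch of the post-Luminous alternative, without a cache
tier (pool names, PG counts and the default EC profile here are my own assumptions,
not from this thread):

# EC data pool with overwrites enabled (requires BlueStore OSDs), plus a
# small replicated pool that holds the image metadata
ceph osd pool create rbd_ec_data 64 64 erasure
ceph osd pool set rbd_ec_data allow_ec_overwrites true
ceph osd pool create rbd_meta 64 64 replicated
ceph osd pool application enable rbd_meta rbd
ceph osd pool application enable rbd_ec_data rbd
# the image is created in the replicated pool; its data objects land in the EC pool
rbd create rbd_meta/testimage --size 10G --data-pool rbd_ec_data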
Why is this needed for replicated? With cache tie
Hi!
With a replicated rbd pool, the behavior is the same as with a tier.
- Original Message -
From: "Vasiliy Tolstov"
To: "Konstantin Shalygin"
Cc: ceph-users@lists.ceph.com, "Fyodor Ustinov"
Sent: Sunday, 26 August, 2018 09:39:15
Subject: Re: [ceph-users] Why rbd rm did not clean used pool?
Why avo
Hi,
I have 4 osd nodes with 4 hdd and 1 ssd on each.
I'm gonna add these OSDs to an existing cluster.
What I'm confused about is how to deal with the SSD.
Can I deploy 4 osd with wal and db in one ssd partition such as:
# ceph-disk prepare --bluestore --block.db /dev/sdc --block.wal /dev/sdc
/dev/sd
Hi Eugen.
Thanks for the suggestion. I'll look for the logs (since it's our first
attempt with ceph, I'll have to discover where they are, but no problem).
One thing in your response caught my attention, however:
I haven't made myself clear, but one of the failures we encountered was that
the fi
I am following the procedure here:
http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/
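For reference, the loop at that step of the document is essentially the following
(ID being the OSD number; my paraphrase, not a verbatim copy of the doc):

ID=0
# some releases seem to want the bare numeric id and others the osd.$ID name;
# if one form returns EINVAL it may be worth trying the other (a guess on my part)
while ! ceph osd safe-to-destroy osd.$ID ; do sleep 60 ; done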
When I get to the step where I run "ceph osd safe-to-destroy $ID" in a while
loop, I get an EINVAL error. I get this error when I run "ceph osd
safe-to-destroy 0" on the command line by itself, to
Hi CEPHers,
I need to design an HA CEPH object storage system. The scenario is that we
are recording HD videos and at the end of the day we need to copy all these video
files (each file is approx. 15 TB) to our storage system.
1) Which would be the best storage tech to transfer these PB-size loads
o
>No, the log end in the header is a hint. This is because we can't
>atomically write to two objects (the header and the last log object) at the
>same time, so we do atomic appends to the end of the log and flush out the
>journal header lazily.
Thanks; I get it now.
>I believe zeroes at the end of
Hello,
On Sun, 26 Aug 2018 22:23:53 +0400 James Watson wrote:
> Hi CEPHers,
>
> I need to design an HA CEPH object storage system.
The first question that comes to mind is why?
Why does it need to be Ceph and why object based (RGW)?
From what's stated below it seems that nobody at your en
You need to do them on separate partitions. You can either do sdc{num} or
manage the SSD using LVM.
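A rough sketch of the LVM route, assuming ceph-volume is available (VG name,
LV sizes and device names below are made up for illustration):

# carve the shared SSD into one DB and one WAL LV per HDD-backed OSD
pvcreate /dev/sdc
vgcreate ceph-ssd /dev/sdc
for i in 1 2 3 4; do
    lvcreate -L 30G -n db-$i  ceph-ssd
    lvcreate -L 2G  -n wal-$i ceph-ssd
done
# then give each OSD its own pair, e.g. for the first HDD:
ceph-volume lvm create --bluestore --data /dev/sdd \
    --block.db ceph-ssd/db-1 --block.wal ceph-ssd/wal-1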
On Sun, Aug 26, 2018, 8:39 AM Zhenshi Zhou wrote:
> Hi,
> I have 4 osd nodes with 4 hdd and 1 ssd on each.
> I'm gonna add these osds in an existing cluster.
> What I'm confused is that how to dea
James, I echo what Christian Balzer says. Do not fixate on CEPH at this
stage; we need to look at what the requirements are. There are alternatives
such as Spectrum Scale and Minio. Also, depending on how often the videos
are to be recalled, it may be worth looking at a tape-based solution.
Regarding hardware, Su
Please check client.213528 instead of client.267792. Which kernel version
does client.213528 use?
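If it helps, one way I'd try to map a client id to a kernel version (the MDS name
"mds1" is only a placeholder; as far as I know kernel clients report their kernel
version in the session metadata):

# on the MDS host: look for the session whose id is 213528 and check its
# client_metadata / kernel version
ceph daemon mds.mds1 session ls
# or, directly on the suspect client host:
uname -r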
On Sat, Aug 25, 2018 at 6:12 AM Zhenshi Zhou wrote:
>
> Hi,
> This time, osdc:
>
> REQUESTS 0 homeless 0
> LINGER REQUESTS
>
> monc:
>
> have monmap 2 want 3+
> have osdmap 4545 want 4546
> have fsmap
Could you strace the apache process and check which syscall waits for a long time?
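Something along these lines (the process name "apache2" is an assumption; it may
be "httpd" on your distro):

# -f follows forked workers, -T prints the time spent in each syscall,
# -tt adds timestamps; slow syscalls then stand out in the output file
strace -f -T -tt -p "$(pgrep -o apache2)" -o /tmp/apache.strace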
On Sat, Aug 25, 2018 at 3:04 AM Stefan Kooman wrote:
>
> Quoting Gregory Farnum (gfar...@redhat.com):
>
> > Hmm, these aren't actually the start and end times to the same operation.
> > put_inode() is literally adjusting a