Nobody can help me?
On Fri, Oct 6, 2017, 07:31 Mario Giammarco wrote:
> Hello,
> I am trying Ceph luminous with Bluestore.
>
> I create an osd:
>
> ceph-disk prepare --bluestore /dev/sdg --block.db /dev/sdf
>
> and I see that on the SSD it creates a partition of only 1 GB for block.db
>
> So:
>
>
There are two configs to set the size of your DB and WAL:
bluestore_block_db_size and bluestore_block_wal_size.
If you have an SSD, you should give as much space as you can to the DB and not
worry about the WAL (the WAL is always placed on the fastest device).
I am not sure about hot-moving the DB, but a
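For what it's worth, ceph-disk sizes the block.db partition from
bluestore_block_db_size in ceph.conf, so the setting has to be in place before
prepare runs. A sketch, with a purely illustrative size:

    # In /etc/ceph/ceph.conf, before running ceph-disk prepare
    # (30 GiB here is only an example -- size it for your workload):
    #   [global]
    #   bluestore_block_db_size = 32212254720   # bytes
    # Then create the OSD as before:
    ceph-disk prepare --bluestore /dev/sdg --block.db /dev/sdf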
Okay, I get your point; it's much safer without a cache at all.
I am talking from total ignorance, so please correct me if I say
something wrong.
What I don't really understand is how badly the DB space is used.
1. When it's a new OSD, it might be totally empty, but it's not used for
storing any actual
What is the output of your `ceph status`?
On Fri, Oct 13, 2017, 10:09 PM dE wrote:
> On 10/14/2017 12:53 AM, David Turner wrote:
>
> What does your environment look like? Someone recently on the mailing
> list had PGs stuck creating because of a networking issue.
>
> On Fri, Oct 13, 2017 at 2:0
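For reference, these commands capture the state being asked about (exact
output varies a bit between releases):

    ceph status                  # health, mon quorum, PG state summary
    ceph health detail           # expands each warning/error
    ceph osd tree                # OSD up/down and CRUSH placement
    ceph pg dump_stuck inactive  # PGs stuck creating show up as inactive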
A few things. First, there is no need to deep scrub your PGs every 2 days.
Schedule it out so it's closer to a month or so. If you have a really bad
power hiccup, up the schedule to check for consistency.
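If you do stretch the deep-scrub cadence, the interval is an ordinary OSD
option; a sketch with an illustrative value:

    # ~30 days between deep scrubs (seconds; 2592000 is only an example).
    # Persist it in ceph.conf under [osd]:
    #   osd_deep_scrub_interval = 2592000
    # or inject it into the running OSDs:
    ceph tell osd.* injectargs '--osd_deep_scrub_interval 2592000'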
Second, you said "Intel SSD DC S3700 1GB divided into three partitions used
for Bluestore blo
1. This assert happened accidentally and is not easy to reproduce. In fact, I
also suspect this assert is caused by lost device data;
but if data has been lost, how can it happen that (last_update + 1 ==
log.rbegin.version)? In the case of lost data, it is more likely to be
inconsistent. At present, this situation can't th
Hello Dear,
I am trying to configure the Ceph iSCSI gateway on Ceph Luminous, as per the documentation below:
Ceph iSCSI Gateway — Ceph Documentation
Ceph iSCSI gateways are configured and CHAP auth is set.
/> ls
o- /
...
Hi,
In my VDI environment I have configured the suggested Ceph
design/architecture:
http://docs.ceph.com/docs/giant/rbd/rbd-snapshot/
Where I have a Base Image + Protected Snapshot + 100 clones (one for each
persistent VDI).
Now, I'd like to configure a backup script/mechanism to perform backup
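A minimal sketch of such a backup for one clone, assuming a pool called vdi
and an image called clone-01 (both names are made up), based on
rbd export-diff:

    # Initial full export, anchored to a snapshot.
    rbd snap create vdi/clone-01@backup-1
    rbd export vdi/clone-01@backup-1 /backups/clone-01.full

    # Later runs snapshot again and ship only the delta.
    rbd snap create vdi/clone-01@backup-2
    rbd export-diff --from-snap backup-1 vdi/clone-01@backup-2 \
        /backups/clone-01.diff-1-2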
Have you set the CHAP username and password on both sides (and ensured that
the initiator IQN matches)? On the initiator side, you would run the
following before attempting to log into the portal:
iscsiadm --mode node --targetname --op=update --name
node.session.auth.authmethod --value=CHAP
is
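Spelled out with a placeholder target IQN (the IQN and credentials below are
made up), the initiator-side sequence would look roughly like:

    TGT=iqn.2003-01.com.redhat.iscsi-gw:ceph-igw    # placeholder IQN
    iscsiadm --mode node --targetname $TGT --op=update \
        --name node.session.auth.authmethod --value=CHAP
    iscsiadm --mode node --targetname $TGT --op=update \
        --name node.session.auth.username --value=myiscsiuser
    iscsiadm --mode node --targetname $TGT --op=update \
        --name node.session.auth.password --value=myiscsipass
    iscsiadm --mode node --targetname $TGT --login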
On 2017-10-14 17:50, Kashif Mumtaz wrote:
> Hello Dear,
>
> I am trying to configure the Ceph iSCSI gateway on Ceph Luminous, as per the
> documentation below:
>
> Ceph iSCSI Gateway — Ceph Documentation
>
> Ceph iSCSI gateways are configured and
On 10/14/2017 08:18 PM, David Turner wrote:
What are the ownership permissions on your OSD folders? Clock skew
cares about partial seconds.
It isn't the networking issue, because your cluster isn't stuck
peering. I'm not sure if the creating state happens on disk or in the
cluster.
On Sat
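To check the ownership in question (assuming the default data path and the
post-Jewel ceph:ceph user):

    ls -ld /var/lib/ceph/osd/ceph-*    # directories should be ceph:ceph
    # If one ended up owned by root, fix it with the OSD stopped:
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-<id>   # <id> is a placeholder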
Hello,
Could you include the monitors and the OSDs in your clock-skew test as well?
How did you create the OSDs? ceph-deploy osd create osd1:/dev/sdX
osd2:/dev/sdY osd3:/dev/sdZ ?
Some logs from one of the OSDs would be great!
Kind regards,
Denes.
On 10/14/2017 07:39 PM, dE wrote:
On 10
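One quick way to sample every mon and OSD host's clock in one go (host names
below are placeholders), plus the monitors' own view of skew:

    for h in mon1 mon2 mon3 osd1 osd2 osd3; do   # placeholder host names
        echo -n "$h: "; ssh "$h" date +%s.%N     # sub-second resolution
    done
    ceph time-sync-status                        # skew as the mons see it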
On Sat, Oct 14, 2017 at 9:33 AM, David Turner wrote:
> First, there is no need to deep scrub your PGs every 2 days.
They aren’t being deep scrubbed every two days, nor is there any
attempt (or desire) to do so. That would require 8+ scrubs running
at once. Currently, it takes between 2 and 3
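To see when each PG actually was last deep-scrubbed, something like the
following should work (the JSON layout is from Luminous-era output and may
differ in other releases):

    ceph pg dump --format json 2>/dev/null \
        | jq -r '.pg_stats[] | "\(.pgid) \(.last_deep_scrub_stamp)"' \
        | sort -k2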
On 10/15/2017 03:13 AM, Denes Dolhay wrote:
Hello,
Could you include the monitors and the OSDs in your clock-skew test as well?
How did you create the OSDs? ceph-deploy osd create osd1:/dev/sdX
osd2:/dev/sdY osd3:/dev/sdZ ?
Some logs from one of the OSDs would be great!
Kind regards,
D