Well, preferring faster clock CPUs for SSD scenarios has been floated
several times over the last few months on this list. And realistic or not,
Nick's and Kostas' setups are similar enough (testing a single disk) that it's
a distinct possibility.
Anyway, as mentioned measuring the performance counter
On Sun, Jun 25, 2017 at 11:28:37PM +0200, Massimiliano Cuttini wrote:
>
> Il 25/06/2017 21:52, Mykola Golub ha scritto:
> >On Sun, Jun 25, 2017 at 06:58:37PM +0200, Massimiliano Cuttini wrote:
> >>I can see the error even if I easily run list-mapped:
> >>
> >># rbd-nbd list-mapped
> >>/dev
Hi,
I have this OSD:
root@ceph-storage-rbx-1:~# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 21.70432 root default
-2 10.85216 host ceph-storage-rbx-1
0 3.61739 osd.0 up 1.0 1.0
2 3.61739 os
Hello,
Will need to see a full export of your crush map rules.
Depends on what the failure domain is set to.
,Ash
Sent from my iPhone
On 26 Jun 2017, at 4:11 PM, Stéphane Klein
<cont...@stephane-klein.info> wrote:
Hi,
I have this OSD:
root@ceph-storage-rbx-1:~# ceph osd tree
ID WEIGHT
I've hit some strange things in my Ceph cluster, and I'm asking for some
feedback here.
Some cluster info: 3 nodes, 12 OSDs (4 per node, symmetrical), size=3.
Proxmox based, still on Hammer, so used for RBD only.
The cluster was built using some spare servers, and there's a node that
is 'underpowered
2017-06-26 11:15 GMT+02:00 Ashley Merrick :
> Will need to see a full export of your crush map rules.
>
This is my crush map rules:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable choosel
You're going across hosts, so each replica will be on a different host.
,Ashley
Sent from my iPhone
On 26 Jun 2017, at 4:39 PM, Stéphane Klein
<cont...@stephane-klein.info> wrote:
2017-06-26 11:15 GMT+02:00 Ashley Merrick
<ash...@amerrick.co.uk>:
Will need to see a full expo
2017-06-26 11:48 GMT+02:00 Ashley Merrick :
> You're going across hosts, so each replica will be on a different host.
>
Thanks :)
Hi Saumay,
I think you should take into account tracking SMART on every SSD found.
If it has SMART capabilities, then track its tests (or commit tests) and
display their values on the dashboard (or a separate graph).
This allows admins to forecast which OSD will die next.
Preventing is better than Res
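Something like the following is what I have in mind (just a sketch, assuming smartmontools is installed; the device path is only an example):

  # Read the SMART attribute table for an OSD's SSD (example device path)
  sudo smartctl -A /dev/sda
  # Kick off a short self-test, then read the result later
  sudo smartctl -t short /dev/sda
  sudo smartctl -l selftest /dev/sda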
I have some 8TB drives I am looking to remove from the cluster long term; however, I
would like to make use of primary affinity to decrease the reads going to these
drives.
I have a replication pool and an erasure-coded pool. I understand that when setting the
primary affinity to 0, no PGs will have their primary PG s
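For reference, the per-OSD knob I mean is the one below (a sketch; the OSD id is an example, and on pre-Luminous releases the mons may also need mon osd allow primary affinity = true):

  # Stop osd.12 from being chosen as primary wherever another replica can serve reads
  ceph osd primary-affinity osd.12 0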
On 06/26/17 11:36, Marco Gaiarin wrote:
> ...
> Three question:
>
> a) why does a 'snapshot remove' action put the system under load?
>
> b) as for options like:
>
> osd scrub during recovery = false
> osd recovery op priority = 1
> osd recovery max active = 5
> osd max backfill
With the EC overwrite support, if currently running behind a cache tier in Jewel,
will the overwrite still be of benefit through the cache tier and remove the
need to promote the full block to make any edits?
Or are we better off totally removing the cache tier once fully upgraded?
,Ashley
Sent from my
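For context, the overwrite support I'm referring to is the per-pool flag in Luminous, something like this (only a sketch, with example pool and image names):

  # Allow partial overwrites on the erasure-coded pool
  ceph osd pool set my_ec_pool allow_ec_overwrites true
  # RBD data can then sit directly on the EC pool, with metadata on a replicated pool
  rbd create --size 10G --data-pool my_ec_pool rbd/my_image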
On 17-06-21 03:24 PM, Sage Weil wrote:
> On Wed, 21 Jun 2017, Piotr Dałek wrote:
>> On 17-06-14 03:44 PM, Sage Weil wrote:
>>> On Wed, 14 Jun 2017, Paweł Sadowski wrote:
[snip]
Is it safe to enable "filestore seek hole"? Are there any tests that
verify that everything related
On 26/06/2017 7:36 PM, Marco Gaiarin wrote:
Last week I used the snapshot feature for the first time. I had done
some tests before, on some ''spare'' VMs, doing a snapshot on a powered-off
VM (as expected, it was practically instantaneous) and on a powered-on one
(clearly, snapshotting the RAM poses some stres
Resolved.
After all of the involved OSDs had been down for a while, I brought them
back up and issued another ceph pg repair. We are clean now.
On Sun, Jun 25, 2017 at 11:54 PM, Brady Deetz wrote:
> I should have mentioned, I'm running ceph jewel 10.2.7
>
> On Sun, Jun 25, 2017 at 11:46 PM, Bra
On 26-6-2017 09:01, Christian Wuerdig wrote:
> Well, preferring faster clock CPUs for SSD scenarios has been floated
> several times over the last few months on this list. And realistic or
> not, Nick's and Kostas' setups are similar enough (testing a single disk)
> that it's a distinct possibility.
>
Snapshots are not a free action. Creating them is near enough free, but
deleting objects in Ceph is an n^2 operation. Being on Hammer, you do not
have access to the object-map feature on RBDs, which drastically reduces the
n^2 problem by keeping track of which objects it actually needs to delete
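On Jewel and later the feature can be turned on per image, roughly like this (a sketch with an example image name; object-map needs exclusive-lock):

  rbd feature enable rbd/my_image exclusive-lock object-map fast-diff
  # Rebuild the map so it also covers data written before the feature was enabled
  rbd object-map rebuild rbd/my_image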
+1 on SMART tracking
On Mon, Jun 26, 2017 at 5:19 AM, Massimiliano Cuttini
wrote:
> Hi Saumay,
>
> I think you should take into account tracking SMART on every SSD found.
> If it has SMART capabilities, then track its tests (or commit tests) and
> display their values on the dashboard (or a separate
Just so you're aware of why that's the case, the line
step chooseleaf firstn 0 type host
in your crush map under the rules section says "host". If you changed that
to "osd", then your replicas would be unique per OSD instead of per
server. If you had a larger cluster and changed it to "rack" an
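As an illustration only (not your actual map), a rule whose failure domain is the OSD would look roughly like this in the same pre-Luminous syntax:

  rule replicated_osd {
          ruleset 1
          type replicated
          min_size 1
          max_size 10
          step take default
          step chooseleaf firstn 0 type osd
          step emit
  }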
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Willem Jan Withagen
> Sent: 26 June 2017 14:35
> To: Christian Wuerdig
> Cc: Ceph Users
> Subject: Re: [ceph-users] Ceph random read IOPS
>
> On 26-6-2017 09:01, Christian Wuerdig wrote:
> >
I don't know specifics on Kubernetes or creating multiple keyrings for
servers, so I'll leave those for someone else. I will say that if you are
kernel mapping your RBDs, then the first tenant to do so will lock the RBD
and no other tenant can map it. This is built into Ceph. The original
tenant
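If it helps, you can see which client currently has an image open with something like this (a sketch; the image name is an example, and it needs Jewel or later):

  # Lists the watchers on the image, i.e. the clients that currently have it open/mapped
  rbd status rbd/my_image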
Mandi! Lindsay Mathieson
In chel di` si favelave...
> Have you tried restoring a snapshot? I found it unusably slow - as in hours
No, not yet; I've never restored a snapshot...
--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famig
Restoring a snapshot involves copying the entire image from the
snapshot revision to the HEAD revision. The faster approach would be
to just create a clone from the snapshot.
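Roughly (a sketch with example image and snapshot names; the snapshot must be protected first, and the image needs to be format 2 with layering):

  rbd snap protect rbd/my_image@my_snap
  rbd clone rbd/my_image@my_snap rbd/my_image_restored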
2017-06-26 10:59 GMT-04:00 Marco Gaiarin :
> Mandi! Lindsay Mathieson
> In chel di` si favelave...
>
>> Have you tried re
On 2017-06-26 15:34, Willem Jan Withagen wrote:
> On 26-6-2017 09:01, Christian Wuerdig wrote:
>
>> Well, preferring faster clock CPUs for SSD scenarios has been floated
>> several times over the last few months on this list. And realistic or
>> not, Nick's and Kostas' setups are similar enough (
On Sun, Jun 25, 2017 at 11:28:37PM +0200, Massimiliano Cuttini wrote:
Il 25/06/2017 21:52, Mykola Golub ha scritto:
On Sun, Jun 25, 2017 at 06:58:37PM +0200, Massimiliano Cuttini wrote:
I can see the error even if I easily run list-mapped:
# rbd-nbd list-mapped
/dev/nbd0
2017-06-
On Mon, Jun 26, 2017 at 07:12:31PM +0200, Massimiliano Cuttini wrote:
> >In your case (rbd-nbd) this error is harmless. You can avoid them
> >setting in ceph.conf, [client] section something like below:
> >
> > admin socket = /var/run/ceph/$name.$pid.asok
> >
> >Also to make every rbd-nbd process
Thanks David, a few more questions:
- Is there a way to limit the capabilities of the keyring which is used to
map/unmap/lock to only allow those operations and nothing else using that
specific keyring?
- For a single pool, is there a way to generate multiple keyrings where an
rbd cannot be mapped by te
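To make the first question concrete, I'm thinking of something along these lines (only a sketch; the client and pool names are examples):

  # A keyring that can only read the mon map and do I/O within a single pool
  ceph auth get-or-create client.tenant1 mon 'allow r' osd 'allow rwx pool=rbd_vms'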
Dear cephers,
Could someone show me a URL where I can find out how Ceph calculates the
available space?
I've installed a small Ceph (Kraken) environment with BlueStore OSDs.
The servers contain 2 disks and 1 SSD. Disk partition 1 is UEFI (~500
MB), partition 2 is RAID (~50 GB), partition 3 is the Ceph disk (450-950 GB). 1
What is the output of `lsblk`?
On Mon, Jun 26, 2017 at 4:32 PM Papp Rudolf Péter wrote:
> Dear cephers,
>
> Could someone show me a URL where I can find out how Ceph calculates the
> available space?
>
> I've installed a small ceph (Kraken) environment with bluestore OSDs.
> The servers contains 2
The output of `sudo df -h` would also be helpful. Sudo/root is generally
required because the OSD folders are only readable by the Ceph user.
On Mon, Jun 26, 2017 at 4:37 PM David Turner wrote:
> What is the output of `lsblk`?
>
> On Mon, Jun 26, 2017 at 4:32 PM Papp Rudolf Péter wrote:
>
>> D
Hi David!
lsblk:
NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda         8:0    0 931,5G  0 disk
├─sda1      8:1    0   476M  0 part
├─sda2      8:2    0  46,6G  0 part
│ └─md0     9:0    0  46,5G  0 raid1 /
└─sda3      8:3    0 884,5G  0 part  /var/lib/ceph/osd/ceph-3
sdb         8:16   0 931,5G  0 disk
├
sudo df -h:
udev            3,9G     0  3,9G   0% /dev
tmpfs           790M   19M  771M   3% /run
/dev/md0         46G  2,5G   41G   6% /
tmpfs           3,9G     0  3,9G   0% /dev/shm
tmpfs           5,0M     0  5,0M   0% /run/lock
tmpfs           3,9G     0  3,9G   0% /sys/fs/cgroup
/dev/sdb1
On Mon, Jun 26, 2017 at 2:55 PM, Mayank Kumar wrote:
> Thanks David, few more questions:-
> - Is there a way to limit the capability of the keyring which is used to
> map/unmap/lock to only allow those operations and nothing else using that
> specific keyring
Since RBD is basically just a collect
And the `sudo df -h`? Also a `ceph df` might be helpful to see what's
going on.
On Mon, Jun 26, 2017 at 4:41 PM Papp Rudolf Péter wrote:
> Hi David!
>
> lsblk:
>
> NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
> sda         8:0    0 931,5G  0 disk
> ├─sda1      8:1    0   476M  0 part
> ├─sda2      8
May I ask why you are using krbd with QEMU instead of librbd?
On Fri, Jun 16, 2017 at 12:18 PM, 码云 wrote:
> Hi All,
> Recently I met a problem and I didn't find anything to explain it.
>
> The ops process is like below:
> ceph 10.2.5 jewel, qemu 2.5.0 centos 7.2 x86_64
> create pool rbd_vms 3
I'm not seeing anything that would indicate a problem. The
weights, cluster size, etc. all say that Ceph only sees 30GB per OSD. I
don't see what is causing the discrepancy. Anyone else have any ideas?
On Mon, Jun 26, 2017, 5:02 PM Papp Rudolf Péter wrote:
> sudo df -h:
> udev
On Mon, 26 Jun 2017 15:06:46 +0100 Nick Fisk wrote:
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Willem Jan Withagen
> > Sent: 26 June 2017 14:35
> > To: Christian Wuerdig
> > Cc: Ceph Users
> > Subject: Re: [ceph-users] Ceph ran
sudo df -h:
udev            3,9G     0  3,9G   0% /dev
tmpfs           790M   19M  771M   3% /run
/dev/md0         46G  2,5G   41G   6% /
tmpfs           3,9G     0  3,9G   0% /dev/shm
tmpfs           5,0M     0  5,0M   0% /run/lock
tmpfs           3,9G     0  3,9G   0% /sys/fs/cgroup
/dev/sdb1
Hi,
Just checking, before I start looking into ceph-deploy, whether the behavior I'm
seeing is correct.
On a freshly installed Ubuntu 16.04 + Luminous 12.1.0 system I see that my
ceph-mon services aren't starting on boot.
Deployed Ceph on three machines: alpha, bravo and charlie. Using 'alpha' I've
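For reference, the kind of thing I've been checking (assuming the standard systemd unit names and the 'alpha' hostname from above; the enable lines are just the obvious workaround):

  # Both the per-host mon unit and the target need to be enabled for boot-time start
  sudo systemctl is-enabled ceph-mon@alpha ceph-mon.target
  sudo systemctl enable ceph-mon@alpha ceph-mon.target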
Hi Jason,
In one integrated VDI test environment, we need to know the best practice.
It seems like librbd performance is weaker than krbd.
qemu 2.5.0 is not linked against librbd unless you manually configure and compile it.
By the way, the rbd and libceph kernel module code has been adjusted in lots of places in
CentOS 7.3,
ar
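To make that concrete, this is how I'd check the librbd side (only a sketch; the pool/image/user names are examples, reusing the rbd_vms pool from before):

  # 'rbd' shows up in qemu-img's supported formats if this QEMU build links librbd
  qemu-img --help | grep rbd
  # A guest disk attached through librbd would then look like:
  #   -drive format=raw,file=rbd:rbd_vms/vm1:id=admin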