Hi all,
I'm trying to test/figure out how cephfs works and my goal is to mount specific
pools on different KVM hosts:
Ceph osd pool create qcow2 1
Ceph osd dump | grep qcow2
-> Pool 9
Ceph mds add_data_pool 9
I now want a 900GB quota for my pool:
Ceph osd pool set-quota qcow2 max_bytes 1207
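Side note: max_bytes is specified in bytes, so 1207 would be roughly 1.2 KB rather
than 900 GB. A rough sketch of what I believe is intended, assuming a 900 GiB quota
on the qcow2 pool:
  # 900 GiB expressed in bytes: 900 * 1024^3
  ceph osd pool set-quota qcow2 max_bytes 966367641600
  # read the quota back to confirm it took effect
  ceph osd pool get-quota qcow2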
On Tue, Nov 5, 2013 at 4:05 PM, NEVEU Stephane
wrote:
> Hi all,
>
>
>
> I’m trying to test/figure out how cephfs works and my goal is to mount
> specific pools on different KVM hosts :
>
> Ceph osd pool create qcow2 1
>
> Ceph osd dump | grep qcow2
>
> -> Pool 9
>
> Ceph mds add_data_pool 9
>
It works so well! Thanks so much for your help!
lixuehui
From: Josh Durgin
Date: 2013-10-31 01:31
To: Mark Kirkwood; lixuehui; ceph-users
Subject: Re: [ceph-users] radosgw-agent error
On 10/30/2013 01:54 AM, Mark Kirkwood wrote:
> On 29/10/13 20:53, lixuehui wrote:
>> Hi,list
>> From
The command you recommend doesn't work, and I cannot find anything in the
command reference about how to do it.
How can the settings be verified?
Ceph osd dump does not show any flags:
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num
64 pgp_num 64 last_change 1 owner 0
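If it helps, a per-pool query might show the quota even though the dump line
doesn't (a sketch, using the rbd pool from the dump above, and assuming a
quota-capable release):
  # prints max_objects and max_bytes for the pool, or N/A if unset
  ceph osd pool get-quota rbd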
Hello,
Wondering if anyone else has come across an issue we're having with our POC Ceph
cluster at the moment.
Some details about its setup;
6 x Dell R720 (20 x 1TB drives, 4 x SSD CacheCade), 4 x 10Gb NICs
4 x generic white-label servers (24 x 2 4TB disk RAID-0), 4 x 10Gb NICs
3 x Dell R620 - Act
Ok thank you, so is there a way to unset my quota? Or should I create a new
pool and destroy the old one?
Another question, by the way :) - does this syntax work:
mount -t ceph ip1:6789,ip2:6789,ip3:6789:/qcow2 /disks/
if I only want to mount my "qcow2" pool?
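For what it's worth, a sketch of how I'd expect the quota to be cleared, assuming a
value of 0 removes it; note that the path after the monitor list is a CephFS
directory, not a pool name, so /qcow2 has to exist in the filesystem and be
associated with the pool:
  # clear the quota on the pool
  ceph osd pool set-quota qcow2 max_bytes 0
  # mount a subdirectory of the filesystem (the directory must already exist)
  mount -t ceph ip1:6789,ip2:6789,ip3:6789:/qcow2 /disks/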
>On Tue, Nov 5, 2013 at 5:09 P
Ok, so after tweaking the deadline scheduler and the filestore_wbthrottle* ceph
settings I was able to get 440 MB/s from 8 rados bench instances, over a single
osd node (pool pg_num = 1800, size = 1)
This still looks awfully slow to me - fio throughput across all disks reaches
2.8 GB/s!!
I'd
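For reference, a minimal sketch of the kind of invocation used here, assuming a
throwaway pool named benchpool; the 8 parallel instances would just be 8 copies of
this command started from separate shells:
  # 60 seconds of 4 MB object writes with 16 concurrent ops
  rados -p benchpool bench 60 write -t 16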
Hi,
After removing (ceph osd out X) the OSDs from one server (11 OSDs), Ceph
started the data migration process.
It stopped at:
32424 pgs: 30635 active+clean, 191 active+remapped, 1596
active+degraded, 2 active+clean+scrubbing;
degraded (1.718%)
All osd with reweight==1 are UP.
ceph -v
ceph version 0.56.7 (
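In case it's useful, a rough sketch of the commands I'd use to see which PGs are
stuck and why (assuming they are available on this release):
  # reasons behind the degraded/remapped counts
  ceph health detail
  # list PGs that are not active+clean
  ceph pg dump_stuck unclean
  # inspect a single problem PG (replace 1.2f with a pgid from the list above)
  ceph pg 1.2f query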
Ok, some more thoughts:
1) What kernel are you using?
2) Mixing SATA and SAS on an expander backplane can some times have bad
effects. We don't really know how bad this is and in what
circumstances, but the Nexenta folks have seen problems with ZFS on
solaris and it's not impossible linux ma
Hi,
On a Ceph cluster I have a pool without a name. I have no idea how it
got there, but how do I remove it?
pool 14 '' rep size 3 min_size 2 crush_ruleset 0 object_hash rjenkins
pg_num 8 pgp_num 8 last_change 158 owner 18446744073709551615
Is there a way to remove a pool by its ID? I coul
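As far as I know pools are deleted by name rather than by ID, so for a pool whose
name is the empty string something like the following might work (a sketch only;
depending on the release the confirmation arguments may differ, and this is of
course irreversible):
  # delete the nameless pool by passing an empty name twice
  rados rmpool "" "" --yes-i-really-really-mean-it
  # or via the mon command
  ceph osd pool delete "" "" --yes-i-really-really-mean-it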
Finally, my tiny Ceph cluster has 3 monitors; the newly added mon.b and mon.c
both run on a Cubieboard2, which is cheap but still has enough CPU
power (dual-core ARM A7, 1.2 GHz) and memory (1 GB).
But compared to mon.a, which runs on an amd64 CPU, mon.b and mon.c
easily consume too much memory,
Thanks Kyle,
What's the unit for osd recovery max chunk?
Also, how do I find out what my current values are for these osd options?
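If it helps, a sketch of reading the live values back from a running OSD through
its admin socket (assuming the default socket path and osd.0 on the local host):
  # dump the running configuration and pick out the recovery options
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep recovery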
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL
60606 | http://imc-chicago.com/
Phone: +1 312-204-7439 | Fax: +1 312-24
Hi guys,
I have an OSD in my cluster that is near full at 90%, but we're using a little
less than half the available storage in the cluster. Shouldn't this be balanced
out?
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL 60606 |
http://imc-chicago.com/
Kevin, in my experience that usually indicates a bad or underperforming
disk, or a too-high priority. Try running "ceph osd crush reweight
osd.<##> 1.0". If that doesn't do the trick, you may want to just out that
guy.
I don't think the crush algorithm guarantees balancing things out in the
way y
All of the disks in my cluster are identical and therefore all have the same
weight (each drive is 2TB and the automatically generated weight is 1.82 for
each one).
Would the procedure here be to reduce the weight, let it rebal, and then put
the weight back to where it was?
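If that is the route taken, a sketch of what it would look like, with osd.12
standing in for the overfull OSD:
  # temporarily lower the CRUSH weight so data drains off the full OSD
  ceph osd crush reweight osd.12 1.6
  # watch 'ceph -w' until rebalancing finishes, then restore the original weight
  ceph osd crush reweight osd.12 1.82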
--
Kevin Weiler
IT
If there's an underperforming disk, why on earth would more data be put on it?
You'd think it would be less. I would think an overperforming disk should
(desirably) cause that case, right?
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Greg
As long as default mon and osd paths are used, and you have the proper mon
caps set, you should be okay.
Here is a mention of it in the ceph docs:
http://ceph.com/docs/master/install/upgrading-ceph/#transitioning-to-ceph-deploy
Brian Andrus
Storage Consultant, Inktank
On Fri, Nov 1, 2013 at 4:
Erik, it's utterly non-intuitive and I'd love another explanation than the
one I've provided. Nevertheless, the OSDs on my slower PE2970 nodes fill
up much faster than those on HP585s or Dell R820s. I've handled this by
dropping priorities and, in a couple of cases, outing or removing the OSD.
K
We recently discussed briefly the Seagate Ethernet drives, which were
basically dismissed as too limited. But what about moving an ARM SBC to
the drive tray, complete with an mSATA SSD slot?
A proper SBC could implement full Ubuntu single-drive failure domains
that also solve the journal is
Kevin Weiler wrote:
> Thanks Kyle,
>
> What's the unit for osd recovery max chunk?
Have a look at
http://ceph.com/docs/master/rados/configuration/osd-config-ref/ where
all the possible OSD config options are described, especially have a
look at the backfilling and recovery sections.
>
> Also, h
Thanks Kurt,
I wasn't aware of the second page, and it has been very helpful. However, osd
recovery max chunk doesn't list a unit:
osd recovery max chunk
Description: The maximum size of a recovered chunk of data to push.
Type:        64-bit Integer Unsigned
Default:     1 << 20
I assume th
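For what it's worth, 1 << 20 works out to 1048576, which suggests the value is in
bytes (1 MB by default). A sketch of checking the arithmetic and changing it at
runtime, assuming injectargs is acceptable here:
  # 1 << 20 = 1048576
  echo $((1 << 20))
  # example only: push 8 MB chunks instead
  ceph tell osd.* injectargs '--osd-recovery-max-chunk 8388608'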
Hi,
what do you think about using a USB pendrive as the boot disk for OSD nodes?
Pendrives are cheap and big, and doing this will allow me to use
all the spinning disks and SSDs for OSD storage/journals.
Moreover, in the future, I'll be able to boot from the network, replacing the
pendrive without losing space on sp
It has been reported that the system is heavy on the OS during
recovery; I believe the current recommendation is 5:1 OSD disks to SSDs
and separate OS mirror.
On 2013-11-05 21:33, Gandalf Corvotempesta wrote:
Hi,
what do you think to use a USB pendrive as boot disk for OSDs nodes?
Pendrive are
Hi,
I'm new to Ceph and investigating how objects can be aged off, i.e. delete all
objects older than 7 days. Is there functionality to do this via the Ceph Swift
API, or alternatively using a Java rados library?
Thanks in advance,
Matt Dickson
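I'm not aware of a built-in age-off in this setup, so as a rough sketch, assuming
the swift CLI is configured (ST_AUTH/ST_USER/ST_KEY) against the radosgw endpoint
and the objects live in a container named mybucket, something like this could walk
the container and delete anything older than 7 days:
  cutoff=$(date -d '7 days ago' +%s)
  swift list mybucket | while IFS= read -r obj; do
      # swift stat prints a "Last Modified:" header for the object
      modified=$(swift stat mybucket "$obj" | grep 'Last Modified:' | sed 's/.*Last Modified: //')
      if [ "$(date -d "$modified" +%s)" -lt "$cutoff" ]; then
          swift delete mybucket "$obj"
      fi
  done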
In the Debian world, purge does both a removal of the package and a cleanup of
its files, so it might be good to keep semantic consistency here?
On Tue, Nov 5, 2013 at 1:11 AM, Sage Weil wrote:
> Purgedata is only meant to be run *after* the package is uninstalled. We
> should make it do a check to
Yeah; purge does remove packages and *package config files*; however,
Ceph data is in a different class, hence the existence of purgedata.
A user might be furious if he did what he thought was "remove the
packages" and the process also creamed his terabytes of stored data he
was in the process
Hi Ceph,
People from Western Digital suggested ways to better take advantage of the disk
error reporting. They gave two examples that struck my imagination. First there
are errors that look like the disk is dying (read/write failures) but it's
only a transient problem and the driver should
I think purge of several data containing packages will ask if you want
to destroy that too (Mysql comes to mind - asks if you want to remove
the databases under /var/lib/mysql). So this is possibly reasonable
behaviour.
Cheers
Mark
On 06/11/13 13:25, Dan Mick wrote:
Yeah; purge does remove p
... forgot to add: maybe 'uninstall' should be a target for ceph-deploy
that removes just the actual software daemons...
On 06/11/13 14:16, Mark Kirkwood wrote:
I think purge of several data containing packages will ask if you want
to destroy that too (Mysql comes to mind - asks if you want to re
We had a discussion about all of this a year ago (when package purge was
removing mds data and thus destroying clusters). I think we have to be
really careful here as it's rather permanent if you make a bad choice.
I'd much rather that users be annoyed with me that they have to go
manually cle
Yep - better to be overly cautious about that :-)
On 06/11/13 14:40, Mark Nelson wrote:
We had a discussion about all of this a year ago (when package purge
was removing mds data and thus destroying clusters). I think we have
to be really careful here as it's rather permanent if you make a bad
Hi,
This is an S3/Ceph cluster; .rgw.buckets has 3 copies of data.
Many PGs are only on 2 OSDs and are marked as 'degraded'.
Can scrubbing fix the degraded objects?
I haven't set tunables in CRUSH; maybe that can help (is it safe?)?
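A rough sketch of how to see where the degraded PGs actually map, which should show
whether CRUSH is only picking two OSDs (the pgid 3.1f is just a placeholder):
  # list PGs that are not active+clean
  ceph pg dump_stuck unclean
  # show the up/acting OSD sets for one of them
  ceph pg map 3.1f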
--
Regards
Dominik
2013/11/5 Dominik Mostowiec :
> H
Hi all:
I failed to create a bucket with the S3 API; the error is 403 'Access Denied'. In
fact, I've given the user write permission.
{ "user_id": "lxh",
"display_name": "=lxh",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{ "user": "lxh",
2013/11/5 :
> It has been reported that the system is heavy on the OS during recovery;
Why? Recovery is done from the OSD/SSD data disks, so why is Ceph heavy on the OS disks?
There is nothing useful to read from those disks during a recovery.
It is cool - and it's interesting that more and more access to the inner
workings of the drives would be useful, given that ATA controller history (an
evolution of the WD1010 MFM controller) has steadily hidden more, to maintain
compatibility with the old CHS addressing (later LBA).
The streami