Hi Wang:
I had already created the pool and the fs before.
--
On 2015-01-28 14:54:33, "王亚洲" wrote:
hi:
Did you run the "ceph fs new" command? I ran into the same issue when I had not run
"ceph fs new".
At 2015-01-28 14:48:09, "于泓海" wrote:
Hi:
I have completed the installation of ce
> Thanks Robert for your response. I'm considering giving SAS 600G 15K a try
> before moving to SSD. It should give ~175 IOPS per disk.
> Do you think the performance will be better if I go with the following
> setup?
> 4x OSD nodes
> 2x SSD - RAID 1 for OS and Journal
> 10x 600G SAS 15K - NO
hi:
Did you run the "ceph fs new" command? I ran into the same issue when I had not run
"ceph fs new".
At 2015-01-28 14:48:09, "于泓海" wrote:
Hi:
I have completed the installation of the ceph cluster, and the ceph health is ok:
cluster 15ee68b9-eb3c-4a49-8a99-e5de64449910
health HEALT
Hi:
I have completed the installation of the ceph cluster, and the ceph health is ok:
cluster 15ee68b9-eb3c-4a49-8a99-e5de64449910
health HEALTH_OK
monmap e1: 1 mons at {ceph01=10.194.203.251:6789/0}, election epoch 1,
quorum 0 ceph01
mdsmap e2: 0/0/1 up
osdmap e16: 2 os
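For context, a minimal sketch of the steps that create a filesystem once an MDS daemon is running; the pool names and PG counts are assumptions, not taken from this thread:
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
ceph fs new cephfs cephfs_metadata cephfs_data
ceph mds stat    # should now show an active MDS instead of 0/0/1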
Hello,
As others said, it depends on your use case and expected write load.
If you search the ML archives, you will find that there can be SEVERE
write amplification with Ceph, something to very much keep in mind.
You should run tests yourself before deploying things and committing to a
hardware
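As a starting point, one quick way to get a baseline is the built-in RADOS benchmark; the pool name and thread count below are arbitrary examples:
rados bench -p bench 60 write -t 16 --no-cleanup   # 60s of object writes, keep the objects
rados bench -p bench 60 seq -t 16                  # sequential reads of those objects
rados -p bench cleanup                             # remove the benchmark objects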
Oh yeah, I am not completely sure (have not tested myself), but if you
were doing a setup where you were not using a clustering app like
windows/redhat clustering that uses PRs, did not use vmfs, and were
instead accessing the disks exported by LIO/TGT directly in the vm
(either using the guest's is
I do not know about performance, but here is some info on what is safe, plus
some general info.
- If you are not using VAAI then it will use older style RESERVE/RELEASE
commands only.
If you are using VAAI ATS and doing active/active, then you need
something like the lock/sync talked about in the slides/hamme
Should chattr +i work with cephfs?
Using ceph v0.91 and a 3.18 kernel on the CephFS client, I tried this:
# mount | grep ceph
172.16.30.10:/ on /cephfs/test01 type ceph (name=cephfs,key=client.cephfs)
# echo 1 > /cephfs/test01/test.1
# ls -l /cephfs/test01/test.1
-rw-r--r-- 1 root root 2 Jan 27
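For what it's worth, a sketch of how the immutable-flag test would continue; whether the flag actually sticks depends on the CephFS client and kernel version:
# chattr +i /cephfs/test01/test.1
# lsattr /cephfs/test01/test.1
# echo 2 > /cephfs/test01/test.1
(the second echo should be refused if the +i flag took effect)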
Hi Nick,
Agreed, I see your point: basically once you're past the 150 TBW or whatever
that number may be, you're just waiting for failure effectively, but aren't we
anyway?
I guess it depends on your use case at the end of the day. I wonder what the
likes of Amazon, Rackspace etc are doing in the w
Thanks Robert for your response. I'm considering giving SAS 600G 15K a try
before moving to SSD. It should give ~175 IOPS per disk.
Do you think the performance will be better if I go with the following
setup?
4x OSD nodes
2x SSD - RAID 1 for OS and Journal
10x 600G SAS 15K - NO Raid
Two Replic
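A rough back-of-envelope for that setup, using the ~175 IOPS/disk figure above and assuming 2x replication with the journals on the SSDs (estimates, not measurements):
raw:    4 nodes x 10 disks x 175 IOPS ~= 7000 write IOPS
client: ~7000 / 2 replicas            ~= 3500 IOPS ceiling, before latency and overhead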
When starting an OSD in a Docker container (so the volume is btrfs), we see
the following output:
2015-01-24 16:48:30.511813 7f9f3d066900 0 ceph version 0.87
(c51c8f9d80fa4e0168aa52685b8de40e42758578), process ceph-osd, pid 1
2015-01-24 16:48:30.522509 7f9f3d066900 0
filestore(/var/lib/ceph/osd/
Hey folks,
Any update on this fix getting merged? We suspect other crashes based on
this bug.
Thanks,
Chris
On Tue, Jan 13, 2015 at 7:09 AM, Gregory Farnum wrote:
> Awesome, thanks for the bug report and the fix, guys. :)
> -Greg
>
> On Mon, Jan 12, 2015 at 11:18 PM, 严正 wrote:
> > I tracked
Hi All,
Is there good documentation on Ceph testing?
I have the following setup done, but I am not able to find a good document to start
doing the tests.
[inline image: setup diagram]
Please advise.
Thanks
Raj
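In the meantime, a minimal sketch of commands often used for basic functional checks (the pool name is a placeholder):
ceph -s                                   # overall cluster health
ceph osd tree                             # OSD layout and up/in state
ceph df                                   # per-pool usage
rados -p testpool put obj1 /etc/hosts     # simple object write
rados -p testpool get obj1 /tmp/obj1 && diff /etc/hosts /tmp/obj1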
Hi all,
Documentation explains how to remove the cache pool:
http://ceph.com/docs/master/rados/operations/cache-tiering/
Anyone know how to remove the storage pool instead? (E.g. the storage pool
has wrong parameters.)
I was hoping to push all the objects into the cache pool and then rep
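For reference, the documented procedure at that link covers removing the cache tier, not the base pool; roughly, with placeholder pool names (newer releases may require an extra confirmation flag on cache-mode changes):
ceph osd tier cache-mode cachepool forward      # stop caching new writes
rados -p cachepool cache-flush-evict-all        # flush/evict objects down to the base pool
ceph osd tier remove-overlay basepool
ceph osd tier remove basepool cachepool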
Hi Zhang,
Thanks for the pointer. That page looks like the commands to set up the
cache, not how to verify that it is working.
I think I have been able to see objects (not PGs I guess) moving from the
cache pool to the storage pool using 'rados df'. (I haven't run long enough
to verify ye
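A small sketch of the checks that can show the tier in action (pool names are placeholders):
rados df                         # per-pool object and byte counts
ceph df detail                   # per-pool USED/OBJECTS, updated as objects move
rados -p cachepool ls | head     # objects currently sitting in the cache tier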
Story time. Over the past year or so, our datacenter had been undergoing
the first of a series of renovations designed to add more power and
cooling capacity. As part of these renovations, changes to the emergency
power off system (EPO) necessitated that this system be tested. If
you're unfamiliar,
I tried this a while back. In my setup, I exposed a block device with
rbd on the owncloud host and tried sharing an image to the owncloud host
via NFS. If I recall correctly, both worked fine (I didn't try S3). The
problem I had at the time (maybe 6-12 months ago) was that owncloud
didn't support
Hi Patrik,
On 27.01.2015 14:06, Patrik Plank wrote:
>
> ...
> I am really happy; the values above are enough for my small number of
> VMs. Inside the VMs I now get 80 MB/s write and 130 MB/s read, with
> write-cache enabled.
>
> But there is one little problem.
>
> Are there some tuning
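In case it helps as a starting point, a sketch of the client-side RBD cache options that are commonly tuned; the values are illustrative, not recommendations:
[client]
    rbd cache = true
    rbd cache size = 67108864                  # 64 MB
    rbd cache max dirty = 50331648             # 48 MB
    rbd cache writethrough until flush = true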
On Tue, 27 Jan 2015, Irek Fasikhov wrote:
> Hi, All.
> Indeed, there is a problem. I removed 1 TB of data, but the space on the
> cluster is not freed. Is this expected behavior or a bug? And how long will
> it take to be cleaned up?
Your subject says cache tier but I don't see it in the 'ceph df' output
below. The
On Tue, Jan 27, 2015 at 6:13 AM, John Spray wrote:
> Raj,
>
> The note is still valid, but the filesystem is getting more stable all the
> time. Some people are using it, especially in an active/passive
> configuration with a single active MDS. If you do choose to do some
> testing, use the mos
Raj,
The note is still valid, but the filesystem is getting more stable all the
time. Some people are using it, especially in an active/passive
configuration with a single active MDS. If you do choose to do some
testing, use the most recent stable release of Ceph and the most recent
linux kernel
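For reference, the active/passive layout described above is close to the default: only one MDS rank is active and additional daemons become standbys. A hedged sketch of the ceph.conf options sometimes used to make a standby follow the active rank (the daemon name is a placeholder):
[mds.b]
    mds standby replay = true      # keep replaying the active MDS's journal
    mds standby for rank = 0       # follow rank 0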
> I have two ceph nodes with the following specifications
> 2x CEPH - OSD - 2 Replication factor
> Model : SuperMicro X8DT3
> CPU : Dual intel E5620
> RAM : 32G
> HDD : 2x 480GB SSD RAID-1 ( OS and Journal )
> 22x 4TB SATA RAID-10 ( OSD )
>
> 3x Controllers - CEPH Monitor
> Model : ProLi
Hello again,
thanks to all for the very helpful advice.
Now i have reinstalled my ceph cluster.
Three nodes with ceph version 0.80.7, with an osd for every single disk. The
journal will be stored on an ssd.
My ceph.conf
[global]
fsid = bceade34-3c54-4a35-a759-7af631a19df7
mon_ini
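A minimal sketch of how the SSD journal placement is sometimes expressed in ceph.conf; the size and partition path are assumptions, not taken from this post:
[osd]
    osd journal size = 5120                              # MB
[osd.0]
    osd journal = /dev/disk/by-partlabel/journal-osd0    # partition on the SSD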
Hello,
I'm using ceph-0.80.7 with Mirantis OpenStack IceHouse - RBD for nova
ephemeral disk and glance.
I have two ceph nodes with the following specifications
2x CEPH - OSD - 2 Replication factor
Model : SuperMicro X8DT3
CPU : Dual intel E5620
RAM : 32G
HDD : 2x 480GB SSD RAID-1 ( OS and Journal
On Wed, Jan 21, 2015 at 5:53 PM, Gregory Farnum wrote:
> Depending on how you configured things it's possible that the min_size
> is also set to 2, which would be bad for your purposes (it should be
> at 1).
This was exactly the problem. Setting min_size=1 (which I believe
used to be the default
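For anyone hitting the same thing, a quick sketch of checking and changing it (the pool name is a placeholder):
ceph osd pool get rbd min_size
ceph osd pool set rbd min_size 1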
Dear all,
we would like to use ceph as a primary (object) storage for owncloud.
Has anyone already done this? I mean: is that actually possible, or am I
wrong?
As I understand it, I have to use radosGW in the swift "flavor", but what about
the s3 flavor?
I cannot find anything "official", hence my question.
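A hedged sketch of creating a radosgw user with both S3 and Swift credentials; the uid and names are placeholders:
radosgw-admin user create --uid=owncloud --display-name="ownCloud"   # S3 keys are generated here
radosgw-admin subuser create --uid=owncloud --subuser=owncloud:swift --access=full
radosgw-admin key create --subuser=owncloud:swift --key-type=swift --gen-secret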
On 2015-01-27 17:06, Kim Vandry wrote:
Unfortunately, I can't find how to assert the version when submitting an
op. I'm looking at src/include/rados/librados.h in git. Maybe you or
someone else can help me find it once you have the API docs at hand.
Ah, never mind, I see that assert_version is
Although the documentation is not great, and open to interpretation, there
is a pg calculator here http://ceph.com/pgcalc/.
With it you should be able to simulate your use case and generate numbers
based on your scenario.
On Mon, Jan 26, 2015 at 8:00 PM, Italo Santos wrote:
> Thanks for your an
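The rule of thumb the calculator is roughly based on, as a worked example (numbers are illustrative):
total PGs ~= (number of OSDs x 100) / replica count, rounded up to a power of two
e.g. 12 OSDs, size 3:  12 x 100 / 3 = 400  ->  pg_num = 512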
Hi Greg,
Thanks for your feedback.
On 2015-01-27 15:38, Gregory Farnum wrote:
On Mon, Jan 26, 2015 at 6:47 PM, Kim Vandry wrote:
By the way, I have a question about the class. Following the example in
cls_hello.cc, method record_hello, our method calls cls_cxx_stat() and yet is
declared CLS_ME