2015-12-11 8:19 GMT+01:00 Stolte, Felix :
> Hi Loic,
>
> output is still the same:
>
> ceph-disk list
> /dev/cciss/c0d0 other, unknown
> /dev/cciss/c0d1 other, unknown
> /dev/cciss/c0d2 other, unknown
> /dev/cciss/c0d3 other, unknown
> /dev/cciss/c0d4 other, unknown
> /dev/cciss/c0d5 other, unknown
Hi Jens,
output is attached (stderr + stdout)
Regards
-----Original Message-----
From: Jens Rosenboom [mailto:j.rosenb...@x-ion.de]
Sent: Friday, 11 December 2015 09:10
To: Stolte, Felix
Cc: Loic Dachary; ceph-us...@ceph.com
Subject: Re: [ceph-users] ceph-disk list crashes in infern
Hello ceph-users,
is it possible to change the rbd caching settings [1] on an rbd device which
is already online, mapped and in use?
I'm interested in changing the "READ-AHEAD SETTINGS" because of the poor read
performance of our iscsi/fc-target. But it's not (easily) possible to unmount the
rbd dev
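Not an authoritative answer, but as far as I know the librbd read-ahead options apply to librbd clients (and are picked up when the image is opened), not to kernel-mapped devices. For a krbd mapping, one knob that can be changed while the device stays online is the block-layer read-ahead in sysfs. A minimal sketch, assuming a hypothetical mapping at /dev/rbd0:

import os

def set_readahead_kb(dev_name, kb):
    # Adjust the kernel block-layer read-ahead of an already-mapped block
    # device (e.g. dev_name='rbd0', hypothetical). Takes effect immediately,
    # no unmapping or unmounting needed; requires root.
    path = os.path.join('/sys/block', dev_name, 'queue', 'read_ahead_kb')
    with open(path, 'w') as f:
        f.write(str(kb))

set_readahead_kb('rbd0', 4096)  # e.g. raise read-ahead to 4 MB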
2015-12-11 9:16 GMT+01:00 Stolte, Felix :
> Hi Jens,
>
> output is attached (stderr + stdout)
O.k., so now "ls -l /sys/dev/block /sys/dev/block/104:0
/sys/dev/block/104:112" please.
Hi,
can you please help me with a question I am currently thinking about?
I am entertaining an OSD node design with a mixture of SATA-spinner-based
OSD daemons and SSD-based OSD daemons.
Is it possible to have incoming write traffic go to the SSDs first and
then when write traffic is becoming
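One way to get roughly this behaviour is a writeback cache tier: an SSD-backed pool placed in front of the spinner-backed pool, with Ceph flushing to the backing pool as the cache fills. A rough sketch of the CLI steps involved, assuming hypothetical pool names 'hdd-pool' and 'ssd-cache' that already exist (the thresholds are only example values):

import subprocess

def ceph(*args):
    # Thin wrapper around the ceph CLI for readability.
    subprocess.check_call(['ceph'] + list(args))

# Attach the SSD pool as a writeback cache tier in front of the HDD pool.
ceph('osd', 'tier', 'add', 'hdd-pool', 'ssd-cache')
ceph('osd', 'tier', 'cache-mode', 'ssd-cache', 'writeback')
ceph('osd', 'tier', 'set-overlay', 'hdd-pool', 'ssd-cache')

# Tell Ceph when to start flushing/evicting from the cache tier.
ceph('osd', 'pool', 'set', 'ssd-cache', 'hit_set_type', 'bloom')
ceph('osd', 'pool', 'set', 'ssd-cache', 'target_max_bytes', str(200 * 1024**3))
ceph('osd', 'pool', 'set', 'ssd-cache', 'cache_target_dirty_ratio', '0.4')
ceph('osd', 'pool', 'set', 'ssd-cache', 'cache_target_full_ratio', '0.8')

SSD journals for the spinner OSDs are the other common way to absorb write bursts.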
import os

def list_partitions_device(dev):
    """
    Return a list of partitions on the given device name
    """
    partitions = []
    basename = os.path.basename(dev)
    # Partition entries in the device's sysfs block directory share the
    # device name as a prefix (e.g. sda -> sda1, sda2); block_path() is
    # a ceph-disk helper defined elsewhere in the script.
    for name in os.listdir(block_path(dev)):
        if name.startswith(basename):
            partitions.append(name)
    return partitions
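For readers following the thread, here is a rough idea of what the block_path() helper used above does. This is a hypothetical stand-in, not the actual ceph-disk implementation:

import os

def block_path(dev):
    # Hypothetical sketch: map a device node to its directory under
    # /sys/block (path separators in the device name become '!',
    # e.g. /dev/cciss/c0d0 -> /sys/block/cciss!c0d0).
    name = os.path.realpath(dev).replace('/dev/', '', 1).replace('/', '!')
    return os.path.join('/sys/block', name)

# With that in place, list_partitions_device('/dev/sda') would scan
# /sys/block/sda and return entries such as ['sda1', 'sda2'].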
Hi,
I am using a 2-node ceph cluster with cloudstack.
Is data still available when we shut down one of the hosts in the cluster?
If yes, can you please share the fine-tuning steps for the same.
Regards,
Pradeep
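Not a complete answer, but the usual prerequisites are a replicated pool whose copies land on different hosts and a min_size that still allows I/O with only one copy left. A minimal sketch with the ceph CLI, assuming a hypothetical pool named 'cloudstack':

import subprocess

def ceph(*args):
    # Thin wrapper around the ceph CLI.
    subprocess.check_call(['ceph'] + list(args))

# Two replicas (one per host, assuming the CRUSH rule splits replicas
# across hosts, which is the default) and allow I/O with a single copy.
ceph('osd', 'pool', 'set', 'cloudstack', 'size', '2')
ceph('osd', 'pool', 'set', 'cloudstack', 'min_size', '1')

Monitor quorum is a separate concern: with only two monitors, stopping either host breaks quorum, so a third monitor elsewhere is usually needed.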
Hi, we are just testing our new ceph cluster, and to optimise our spinning disks
we created an erasure-coded pool and an SSD cache pool.
We modified the crush map to make an ssd pool, as each server contains 1 ssd
drive and 5 spinning drives.
Stress testing the cluster in terms of read performance
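As an aside for anyone reproducing this setup, the common pattern is a dedicated CRUSH root and rule for the SSD OSDs. A rough sketch of the CLI steps, with hypothetical bucket/rule/pool names (moving the SSD OSDs under the new root is omitted here):

import subprocess

def ceph(*args):
    # Thin wrapper around the ceph CLI.
    subprocess.check_call(['ceph'] + list(args))

# Separate CRUSH root for the SSDs plus a rule placing data on it,
# with host as the failure domain.
ceph('osd', 'crush', 'add-bucket', 'ssd', 'root')
ceph('osd', 'crush', 'rule', 'create-simple', 'ssd-rule', 'ssd', 'host')

# Create the cache pool on that rule (PG counts are only examples).
ceph('osd', 'pool', 'create', 'ssd-cache', '128', '128', 'replicated', 'ssd-rule')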
It’s very unfortunate that you guys are using the EVO drives. As we’ve
discussed numerous times on the ML, they are not very suitable for this task.
I think that 200-300MB/s is actually not bad (without knowing anything about
the hardware setup, as you didn’t give details…) coming from those driv
The drive will actually be writing 500MB/s in this case, if the journal is on
the same drive.
All writes get to the journal and then to the filestore, so 200MB/s is actually
a sane figure.
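A quick back-of-the-envelope version of that reasoning (illustrative numbers only):

# With the journal on the same drive, every client byte is written twice
# (journal + filestore).
drive_write_mbs = 500        # what the drive itself sustains
write_amplification = 2      # journal write + filestore write
client_mbs = drive_write_mbs / write_amplification
print(client_mbs)            # -> 250.0, so ~200-300 MB/s at the client is expected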
Jan
> On 11 Dec 2015, at 13:55, Zoltan Arnold Nagy
> wrote:
>
> It’s very unfortunate that you guys ar
Hi, thanks for the replies, but I was under the impression that the journal is
the same as the cache pool, so that there is no extra journal write?
About the EVOs, as this is a test cluster we would like to test how far we can
push commodity hardware.
The servers are all DELL 1U Rack mounts with
On Fri, Dec 11, 2015 at 2:46 AM, Deepak Shetty wrote:
>
>
> On Wed, Dec 2, 2015 at 7:35 PM, Alfredo Deza wrote:
>>
>> On Tue, Dec 1, 2015 at 4:59 AM, Deepak Shetty wrote:
>> > Hi,
>> > Does anybody know how/where I can get the F21 repo for the ceph hammer release?
>> >
>> > In download.ceph.com/rpm-ham
Am 10.12.2015 um 06:38 schrieb Robert LeBlanc:
> Since I'm very interested in
> reducing this problem, I'm willing to try and submit a fix after I'm
> done with the new OP queue I'm working on. I don't know the best
> course of action at the moment, but I hope I can get some input for
> when I do t
Hi Felix,
Could you try again? Hopefully that's the right one :-)
https://raw.githubusercontent.com/dachary/ceph/741da8ec91919db189ba90432ab4cee76a20309e/src/ceph-disk
is the latest from https://github.com/ceph/ceph/pull/6880
Cheers
On 11/12/2015 09:16, Stolte, Felix wrote:
> Hi Jens,
>
> o
Hi Loic,
now it is working as expected. Thanks a lot for fixing it!
Output is:
/dev/cciss/c0d0p2 other
/dev/cciss/c0d0p5 swap, swap
/dev/cciss/c0d0p1 other, ext4, mounted on /
/dev/cciss/c0d1 :
/dev/cciss/c0d1p1 ceph data, active, cluster ceph, osd.0, journal /dev/cciss/c0d7p2
/dev/cciss/c0d
This is a proactive message to summarize best practices and options for working
with monitors, especially in a larger production environment (larger for me
is > 3 racks).
I know MONs do not require a lot of resources, but they prefer to run on SSDs
for response time. Also that you need an odd number, as y
Is there any way to obtain a snapshot creation time? rbd snap ls does not
list it.
Thanks!
--
Alex Gorbachev
Storcium
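For what it's worth, the python-rbd bindings don't seem to expose a timestamp either. A quick sketch (hypothetical image 'myimage' in the default 'rbd' pool) shows what list_snaps() returns:

import rados
import rbd

# Pool and image names here are placeholders for this example.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')
image = rbd.Image(ioctx, 'myimage')

# Each entry only carries id, size and name -- no creation time.
for snap in image.list_snaps():
    print(snap['id'], snap['size'], snap['name'])

image.close()
ioctx.close()
cluster.shutdown()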
Hi everybody,
I understand that radosgw bucket indexes are stored as leveldb files.
Do leveldb writes go through the journal first, or directly to disk?
In other words, do they get the benefit of having a fast journal?
I started investigating the disks because I observed a lot of writes to ldb files
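As background: the bucket index entries live in omap, which the OSD keeps in leveldb, so they can be inspected with the rados tool. A rough sketch, assuming the hammer-era default index pool name '.rgw.buckets.index' (pool and object names may differ on your cluster):

import subprocess

INDEX_POOL = '.rgw.buckets.index'   # hammer-era default; may differ locally

# Bucket index objects (named .dir.<bucket marker>) carry one omap key
# per object in the bucket; counting keys shows where the leveldb load is.
objects = subprocess.check_output(['rados', '-p', INDEX_POOL, 'ls']).decode().split()
for obj in objects:
    keys = subprocess.check_output(
        ['rados', '-p', INDEX_POOL, 'listomapkeys', obj]).decode()
    print(obj, len(keys.splitlines()))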