Hi,
Our fio tests against qemu-kvm on RBD look quite promising, details here:
https://docs.google.com/spreadsheet/ccc?key=0AoB4ekP8AM3RdGlDaHhoSV81MDhUS25EUVZxdmN6WHc&usp=drive_web#gid=0
tl;dr: rbd with caching enabled is (1) at least 2x faster than the
local instance storage, and (2) reaches the
Hello,
On Fri, 20 Dec 2013 09:20:48 +0100 Dan van der Ster wrote:
> Hi,
> Our fio tests against qemu-kvm on RBD look quite promising, details here:
>
> https://docs.google.com/spreadsheet/ccc?key=0AoB4ekP8AM3RdGlDaHhoSV81MDhUS25EUVZxdmN6WHc&usp=drive_web#gid=0
>
That data is very interesting a
Hi Ceph,
Just wanted to share Yann Dupont's talk about his experience in using Ceph at
the University. He goes beyond telling his own story and it can probably be a
source of inspiration for various use cases in the academic world.
http://video.renater.fr/jres/2013/index.php?play=jres2013_a
Hi,
I need Java bindings for librados.
I'm also new to using Java bindings. Could you please help me find the best
way to use librados from a Java program?
What problems will we face if we use the Java bindings?
Are there any alternatives...
Thanks & Regards,
Upendra Yadav
On 12/20/2013 12:15 PM, upendrayadav.u wrote:
Hi,
I need *Java bindings* for librados.
I'm also new to using Java bindings. Could you please help me find the
best way to use *librados* from a Java program?
What problems will we face if we use the Java bindings?
Are there any alternati
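One option from that era is the JNA-based rados-java project; below is a very
rough sketch of connecting and writing one object. The class and method names
(Rados, IoCTX, confReadFile, ioCtxCreate, write, ioCtxDestroy) are recalled
from that project and should be verified against its README, and the pool
name "data" and the config path are placeholders.
    import com.ceph.rados.Rados;
    import com.ceph.rados.IoCTX;
    import java.io.File;

    public class RadosHello {
        public static void main(String[] args) throws Exception {
            // Connect as client.admin using the local cluster config
            Rados cluster = new Rados("admin");
            cluster.confReadFile(new File("/etc/ceph/ceph.conf"));
            cluster.connect();

            // Open an I/O context on a pool and write a single object
            IoCTX io = cluster.ioCtxCreate("data");
            io.write("hello-object", "hello from java".getBytes());

            cluster.ioCtxDestroy(io);
        }
    }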
----- Original Message -----
From: "Wido den Hollander"
To: ceph-users@lists.ceph.com
Sent: Friday, December 20, 2013 8:04:09 AM
Subject: Re: [ceph-users] Storing VM Images on CEPH with RBD-QEMU driver
Hi,
> Hi,
>
> I'm testing CEPH with the RBD/QEMU driver through libvirt to store my VM
> im
On Fri, Dec 20, 2013 at 9:44 AM, Christian Balzer wrote:
>
> Hello,
>
> On Fri, 20 Dec 2013 09:20:48 +0100 Dan van der Ster wrote:
>
>> Hi,
>> Our fio tests against qemu-kvm on RBD look quite promising, details here:
>>
>> https://docs.google.com/spreadsheet/ccc?key=0AoB4ekP8AM3RdGlDaHhoSV81MDhUS2
Hello Dan,
On Fri, 20 Dec 2013 14:01:04 +0100 Dan van der Ster wrote:
> On Fri, Dec 20, 2013 at 9:44 AM, Christian Balzer wrote:
> >
> > Hello,
> >
> > On Fri, 20 Dec 2013 09:20:48 +0100 Dan van der Ster wrote:
> >
> >> Hi,
> >> Our fio tests against qemu-kvm on RBD look quite promising, detail
Hi all,
I've tested client-side authentication for pools; no problem so far.
Now I'm testing granularity down to the RBD image: I've seen in the docs that we
can limit capabilities to an object prefix, so possibly to an RBD image:
http://ceph.com/docs/master/man/8/ceph-authtool/#osd-capabilities
I've got the followin
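For what it's worth, a hypothetical ceph-authtool invocation restricting a
client to one image's objects might look like the lines below; client.vm1, the
pool rbd and the prefix rb.0.1234 (the image's block_name_prefix from
'rbd info') are placeholders:
    ceph-authtool -C vm1.keyring -n client.vm1 --gen-key \
        --cap mon 'allow r' \
        --cap osd 'allow rwx pool=rbd object_prefix rb.0.1234'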
This makes sense. So if other mons come up that are *not* defined as initial
mons, they will not be in service until the initial mons are up and ready?
At which point they can form a quorum and operate?
> -----Original Message-----
> From: Gregory Farnum [mailto:g...@inktank.com]
> Sent:
Yeah. This is less of a problem when you're listing them all
explicitly ahead of time (we could just make them wait for any
majority), but some systems don't want to specify even the monitor
count that way, so we give the admins "mon initial members" as a big
hammer.
-Greg
On Fri, Dec 20, 2013 at
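As an illustration, a ceph.conf fragment using that big hammer could look like
the following, with made-up monitor names and addresses:
    [global]
        mon initial members = mon-a, mon-b, mon-c
        mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3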
I guess I should add: what if I add OSDs to a mon in this scenario? Do they get
up and in, and will the CRUSH map from the non-initial mons get merged with the
initial one when it's online?
> -----Original Message-----
> From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
> boun...@lists.ceph
David Clarke writes:
> Not directly related to Ceph, but you may want to investigate kexec[0]
> ('kexec-tools' package in Debian derived distributions) in order to
> get your machines rebooting quicker. It essentially re-loads the
> kernel as the last step of the shutdown procedure, skipping over
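A minimal kexec sequence on a Debian-style box, reusing the running kernel,
its initrd and the current command line, would be roughly:
    kexec -l /boot/vmlinuz-$(uname -r) \
          --initrd=/boot/initrd.img-$(uname -r) --reuse-cmdline
    kexec -e    # boot straight into the loaded kernel, skipping firmware POST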
"fio --size=100m --ioengine=libaio --invalidate=1 --direct=1
--numjobs=10 --rw=read --name=fiojob --blocksize_range=4K-512k
--iodepth=16"
Since size=100m, the reads would be entirely cached and, if the hypervisor is
doing write-back, potentially many writes would never make it to the cluster
either?
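If that is the concern, a variant with a working set larger than the caches
involved (the 8g size here is just an assumed value) and random access would
take the guest page cache and RBD cache largely out of the picture:
    fio --size=8g --ioengine=libaio --invalidate=1 --direct=1 \
        --numjobs=10 --rw=randread --name=fiojob \
        --blocksize_range=4k-512k --iodepth=16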
> The area I'm currently investigating is how to configure the
> networking. To avoid a SPOF I'd like to have redundant switches for
> both the public network and the internal network, most likely running
at 10Gb. I'm considering splitting the nodes into two separate racks
> and connecting each
Hi Wido,
Thanks for the reply.
On Fri, Dec 20, 2013 at 08:14:13AM +0100, Wido den Hollander wrote:
> On 12/18/2013 09:39 PM, Tim Bishop wrote:
> > I'm investigating and planning a new Ceph cluster starting with 6
> > nodes with currently planned growth to 12 nodes over a few years. Each
> > node
On 20/12/2013 03:51, Christian Balzer wrote:
Hello Mark,
On Thu, 19 Dec 2013 17:18:01 -0600 Mark Nelson wrote:
On 12/16/2013 02:42 AM, Christian Balzer wrote:
Hello,
Hi Christian!
new to Ceph, not new to replicated storage.
Simple test cluster with 2 identical nodes running Debian Jessi
On 12/19/13, 7:51 PM, Sage Weil wrote:
>> If it takes 15 minutes for one of my servers to reboot is there a risk
>> that some sort of needless automatic processing will begin?
>
> By default, we start rebalancing data after 5 minutes. You can adjust
> this (to, say, 15 minutes) with
>
> mon os
On Fri, 20 Dec 2013, Derek Yarnell wrote:
> On 12/19/13, 7:51 PM, Sage Weil wrote:
> >> If it takes 15 minutes for one of my servers to reboot is there a risk
> >> that some sort of needless automatic processing will begin?
> >
> > By default, we start rebalancing data after 5 minutes. You can ad
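The setting being referred to is presumably 'mon osd down out interval', which
defaults to 300 seconds; a ceph.conf sketch for a 15-minute grace period would
be:
    [mon]
        mon osd down out interval = 900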
On Fri, Dec 20, 2013 at 6:19 PM, James Pearce wrote:
>
> "fio --size=100m --ioengine=libaio --invalidate=1 --direct=1 --numjobs=10
> --rw=read --name=fiojob --blocksize_range=4K-512k --iodepth=16"
>
> Since size=100m so reads would be entirely cached
--invalidate=1 drops the cache, no? Our result
Hello Guys,
I wonder what's the best way to replace a failed OSD instead of removing it from
CRUSH and adding a new one in. Since I have OSD numbers assigned in ceph.conf, adding a new
OSD might mean revising the config file and reloading all the Ceph instances.
BTW, any suggestions for my ceph.conf? I kind o
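One sketch of a replacement that reuses the same OSD id (osd.12 is a made-up
example), so ceph.conf does not need editing:
    ceph osd out osd.12            # optional if the disk is already gone
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12
    ceph osd create                # hands back the lowest free id, i.e. 12 again
    # then prepare the new disk and start an OSD with that id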
Using your data as inputs to the Ceph reliability calculator [1]
results in the following:
Disk Modeling Parameters
    size:          3TiB
    FIT rate:      826 (MTBF = 138.1 years)
    NRE rate:      1.0E-16
RAID parameters
    replace:       6 hours
    recovery rate: 500MiB/s (100 mi
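For what it's worth, the MTBF figure follows directly from the FIT rate
(failures per 10^9 device-hours): MTBF = 10^9 / 826 hours ≈ 1.21 million hours,
which is about 138.1 years.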
Hello Gilles,
On Fri, 20 Dec 2013 21:04:45 +0100 Gilles Mocellin wrote:
> On 20/12/2013 03:51, Christian Balzer wrote:
> > Hello Mark,
> >
> > On Thu, 19 Dec 2013 17:18:01 -0600 Mark Nelson wrote:
> >
> >> On 12/16/2013 02:42 AM, Christian Balzer wrote:
> >>> Hello,
> >> Hi Christian!
> >>
>
Hello,
We have boxes with 24 drives, 2TB each, and want to run one OSD per drive.
What should the ideal memory requirement of the system be, keeping in mind
OSD rebalancing and failure/replication of, say, 10-15TB of data?
-Hemant
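The rule of thumb in the hardware recommendations at the time was roughly 1GB
of RAM per 1TB of OSD storage to leave headroom for recovery, so 24 OSDs x 2TB
works out to about 48GB, plus some headroom for the OS and page cache; treat
that as a starting point rather than a guarantee.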
Hi,
yesterday I expanded our 3-node Ceph cluster with a fourth node
(an additional 13 OSDs; all OSDs are the same size, 4TB).
I used the same command as before to add the OSDs and change the weight:
ceph osd crush set 44 0.2 pool=default rack=unknownrack host=ceph-04
But ceph osd tree shows all OSDs n
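For comparison, the usual convention is a CRUSH weight roughly equal to the
drive capacity in TB, so a fully weighted 4TB OSD normally sits around 3.6-4.0;
a hypothetical later step to bring osd.44 up to full weight would be:
    ceph osd crush reweight osd.44 3.64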
Hi All,
Does radosgw support a public URL for static content?
I'd like to share a file publicly without giving out
usernames/passwords, etc.
I noticed that http://ceph.com/docs/master/radosgw/swift/ says static
websites aren't supported, which I assume is talking about this featu
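Via the S3 API, marking an object public-read gives exactly that kind of URL;
a sketch with s3cmd, where the bucket, object and gateway hostname are
placeholders:
    s3cmd setacl --acl-public s3://public-bucket/report.pdf
    # the object should then be readable anonymously at
    #   http://rgw.example.com/public-bucket/report.pdf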