It's Xen, yes, but no, I didn't try the RBD tap client, for two
reasons:
- it's too young to enable in production
- the Debian packages don't have the TAP driver
On Monday, 05 August 2013 at 01:43 +, James Harper wrote:
> What VM? If Xen, have you tried the rbd tap client?
>
> James
>
> > -Ori
>
> It's Xen, yes, but no, I didn't try the RBD tap client, for two
> reasons:
> - it's too young to enable in production
> - the Debian packages don't have the TAP driver
>
It works under Wheezy. blktap is available via a dkms package; then just replace
the tapdisk with the rbd version and follow the
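(For anyone digging this out of the archives later, a rough sketch of that setup on Wheezy; the package names are the Debian ones, but the exact tapdisk path and the rbd-enabled build you drop in will depend on where you got the driver:)

  $ sudo apt-get install blktap-dkms blktap-utils   # builds the blktap kernel module via dkms
  $ sudo modprobe blktap                            # load it for the running kernel
  # then replace the stock tapdisk binary (commonly /usr/sbin/tapdisk, but the
  # path may differ) with the rbd-capable tapdisk build, and use the tap disk
  # syntax documented for that build in the domU config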
On Sun, Aug 04, 2013 at 03:36:52PM +0200, Oliver Francke wrote:
> On 02.08.2013 at 23:47, Mike Dawson wrote:
> > We can "un-wedge" the guest by opening a NoVNC session or running a 'virsh
> > screenshot' command. After that, the guest resumes and runs as expected. At
> > that point we can exami
Hi there
I have a few questions regarding block device storage and the
Ceph filesystem.
We want to cluster a database (Progress) on a clustered filesystem, but
the database requires the
operating system to see the clustered storage area as a block device,
and not as a network storage area
>
> Hi there
>
> I have a few questions regarding block device storage and the
> Ceph filesystem.
>
> We want to cluster a database (Progress) on a clustered filesystem, but
> the database requires the
> operating system to see the clustered storage area as a block device,
> and not as a netw
The good news is that with the new patch, Ceph starts OK, CephFS mounts OK, and the KVM
virtual machine using rbd boots OK (and seems to be running OK), and I checked the
timestamp of the last file written to CephFS; it's fairly close to the time of the
reboot (which caused Ceph to stop working). Since I don't have any other
way to chec
After more than a week of trying to restore our cluster I've given up.
I'd like to reset the data, metadata and rbd pools to their initial clean
states (wiping out all data). Is there an easy way to do this? I tried
deleting and adding pools, but still have:
health HEALTH_WARN 32 pgs
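(Not an answer from the thread, but for reference: deleting and recreating a pool looks roughly like the sketch below. The confirmation flag and a sensible pg_num vary by release, and the CephFS data/metadata pools generally can't be dropped while an MDS still references them.)

  $ ceph osd pool delete rbd rbd --yes-i-really-really-mean-it   # newer releases require the repeated name + flag
  $ ceph osd pool create rbd 128                                 # recreate with an appropriate pg_num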
I am looking at evaluating ceph for use with large storage nodes (24-36
SATA disks per node, 3 or 4TB per disk, HBAs, 10G ethernet).
What would be the best practice for deploying this? I can see two main
options.
(1) Run 24-36 osds per node. Configure ceph to replicate data to one or
more ot
Here's a quick Google result for Ceph Resource Agents packages in the
Debian unstable branch. These look to apply to 0.48, but could be used
as a base for a Resource Agent for RBD.
http://packages.debian.org/unstable/ceph-resource-agents
-Original Message-
From: James Harper
To: Beni
[Moving to ceph-devel]
On Mon, 5 Aug 2013, huangjun wrote:
> hi, all
> I compiled ceph 0.61.3 on CentOS 5.9; the "sh autogen.sh" and
> "./configure" steps are OK, but when I run "make", an error occurs. The error log:
> /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../include/c++/4.1.2/bits/concurrence.h:
> I
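(One workaround commonly suggested for this class of error, an assumption on my part rather than something confirmed in this thread: gcc 4.1.2 on CentOS 5.x is too old for parts of the Ceph tree, and the gcc44 compatibility packages can be used instead.)

  $ sudo yum install gcc44 gcc44-c++
  $ sh autogen.sh
  $ CC=gcc44 CXX=g++44 ./configure
  $ make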
Brian,
Short answer: Ceph generally is used with multiple OSDs per node. One
OSD per storage drive with no RAID is the most common setup. At 24 or
36 drives per chassis, there are several potential bottlenecks to consider.
Mark Nelson, the Ceph performance guy at Inktank, has published sever
On Mon, 5 Aug 2013, Yu Changyuan wrote:
> The good news is that with the new patch, Ceph starts OK, CephFS mounts OK, and the KVM
> virtual machine using rbd boots OK (and seems to be running OK), and I checked the
> timestamp of the last file written to CephFS; it's fairly close to the time of the
> reboot (which caused Ceph to stop work
Hey Noah,
Yes, it does look like an older version, 56.6; I got it from the Ubuntu repo.
Is there another method or pull request I can run to get the latest? I am
having a hard time finding it.
Thanks
On Sun, Aug 4, 2013 at 10:33 PM, Noah Watkins wrote:
> Hey Scott,
>
> Things look OK, but I'm a l
The ceph.com repositories can be added in Ubuntu. Check out
http://ceph.com/docs/master/install/debian/ for details. If you
upgrade to the latest stable (cuttlefish) then all the dependencies
should be correct.
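For completeness, the steps on that page boil down to roughly the following (the key and repository URLs are the ones published for the cuttlefish series at the time; double-check the linked docs in case they have moved):

  $ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
  $ echo deb http://ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
  $ sudo apt-get update && sudo apt-get install ceph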
On Mon, Aug 5, 2013 at 9:38 AM, Scottix wrote:
> Hey Noah,
> Yes it does look like an o
On 05/08/2013 17:15, Mike Dawson wrote:
Short answer: Ceph generally is used with multiple OSDs per node. One
OSD per storage drive with no RAID is the most common setup. At 24 or
36 drives per chassis, there are several potential bottlenecks to
consider.
Mark Nelson, the Ceph performance
On 8/5/2013 12:51 PM, Brian Candler wrote:
On 05/08/2013 17:15, Mike Dawson wrote:
Short answer: Ceph generally is used with multiple OSDs per node. One
OSD per storage drive with no RAID is the most common setup. At 24 or
36 drives per chassis, there are several potential bottlenecks to
cons
Thanks for looking, Sage.
I came to this conclusion myself as well, and this seemed to work. I'm
trying to manually replicate a Ceph cluster that was made with
ceph-deploy. I noted that these capabilities entries were not in the
ceph-deploy cluster. Does ceph-deploy do something special when creatin
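(For anyone comparing clusters the same way: the capability entries can be listed and recreated with the ceph auth commands. The client name and caps below are illustrative, not what ceph-deploy actually writes.)

  $ ceph auth list                                                   # dump every entity and its capabilities
  $ ceph auth get-or-create client.example mon 'allow r' osd 'allow rwx pool=rbd'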
Hi all,
I'm trying to identify the performance bottlenecks in my experimental Ceph
cluster. A little background on my setup:
10 storage servers, each configured with:
- (2) dual-core Opterons
- 8 GB of RAM
- (6) 750 GB disks (1 OSD per disk, 720
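(A generic first step here, not something from the thread: measure the raw RADOS layer before blaming any one component, e.g. with rados bench, which by default drives 16 concurrent 4 MB objects.)

  $ rados bench -p testpool 60 write   # 60 seconds of object writes against a scratch pool
  $ rados bench -p testpool 60 seq     # sequential read pass over those objects
                                       # (newer releases need --no-cleanup on the write run first)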
On 08/04/2013 08:07 PM, Olivier Bonvalet wrote:
>
> Hi,
>
> I've just upgraded a Xen Dom0 (Debian Wheezy with Xen 4.2.2) from Linux
> 3.9.11 to Linux 3.10.5, and now I get a kernel panic after launching some
> VMs which use the RBD kernel client.
A crash like this was reported last week. I started l
Can no one shed any light on this?
On Jul 30, 2013, at 1:51 PM, John Nielsen wrote:
> I am running a Ceph cluster with 24 OSDs across 3 nodes, Cuttlefish 0.61.3.
> Recently an inconsistent PG cropped up:
>
> # ceph health detail
> HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
> pg 11.2d5 is a
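(For the archives, the commonly suggested sequence for an inconsistent PG like this one is below; note that repair takes the primary OSD's copy as authoritative, which is not guaranteed to be the good one, so check the scrub error in the OSD logs first.)

  $ ceph health detail          # identify the inconsistent PG (11.2d5 here)
  $ ceph pg deep-scrub 11.2d5   # re-run a deep scrub to confirm it
  $ ceph pg repair 11.2d5       # ask the primary to repair the inconsistency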
Josh,
Logs are uploaded to cephdrop with the file name
mikedawson-rbd-qemu-deadlock.
- At about 2013-08-05 19:46 or 47, we hit the issue and traffic went to 0
- At about 2013-08-05 19:53:51, ran a 'virsh screenshot'
Environment is:
- Ceph 0.61.7 (client is co-mingled with three OSDs)
- rbd cac
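(For anyone trying to reproduce or capture this, a sketch of the client-side ceph.conf bits involved: rbd caching plus an admin socket and verbose rbd logging. The option names are from the 0.61-era docs; the socket and log paths are placeholders.)

  [client]
      rbd cache = true
      admin socket = /var/run/ceph/rbd-client-$pid.asok
      log file = /var/log/ceph/rbd-client-$pid.log
      debug rbd = 20
      debug objectcacher = 20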
> I am looking at evaluating ceph for use with large storage nodes (24-36 SATA
> disks per node, 3 or 4TB per disk, HBAs, 10G ethernet).
>
> What would be the best practice for deploying this? I can see two main
> options.
>
> (1) Run 24-36 osds per node. Configure ceph to replicate data to one o
In the previous email, you are forgetting that RAID1 has a write penalty of 2
since it is mirroring, and now we are talking about different types of RAID
and nothing really to do with Ceph. One of the main advantages of Ceph is
that data is replicated so you don't have to do RAID to that degree. I am
su
>
> In the previous email, you are forgetting that RAID1 has a write penalty of 2
> since it
> is mirroring, and now we are talking about different types of RAID and nothing
> really to do with Ceph. One of the main advantages of Ceph is that data is
> replicated so you don't have to do RAID to that d
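(To make the write-amplification comparison in this sub-thread concrete, with purely illustrative numbers:)

  1 logical write on RAID1/RAID10:               2 physical writes (penalty 2)
  1 logical write on Ceph, size=2, no RAID:      2 physical writes, on different hosts
  1 logical write on Ceph, size=3, no RAID:      3 physical writes
  1 logical write on Ceph, size=2, RAID1 OSDs:   4 physical writes (the penalties stack)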
Hi, all! Thanks to those who attended the first day (unless you are in
Europe, where they all happen on the same glorious day) of the Ceph
Developer Summit.
For those of you who couldn’t make it, I have added links to videos of
the sessions to the wiki page - it’s actually just three large video