Thank you Gregory!
I think I found all the options:
https://github.com/ceph/ceph/blob/master/src/common/config_opts.h
Is that right?
On 04/02/2014 04:19 PM, Gregory Farnum wrote:
> It's been a while, but I think you need to use the long form
> "client_mountpoint" config option here instead. If yo
I'm not sure; I will re-test and let you know ;)
On 04/02/2014 04:14 PM, Gregory Farnum wrote:
> A *clean* shutdown? That sounds like a different issue; hjcho616's
> issue only happens when a client wakes back up again.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Wed
Did you add the virsh secret?
Look at the libvirt-bin logs.
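For reference, a minimal sketch of what adding the virsh secret usually involves; the UUID and the client.libvirt user below are placeholders, not values from this thread:
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>00000000-0000-0000-0000-000000000000</uuid>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret 00000000-0000-0000-0000-000000000000 \
  --base64 "$(ceph auth get-key client.libvirt)"
The UUID has to match whatever references it on the consumer side (e.g. rbd_secret_uuid in cinder/nova, if that is what sits on top).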
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovan
I am facing a problem when trying to create an image using qemu-img on Ceph
storage:
"qemu-img create -f raw rbd:rbd/foo 1G" (Qemu->librbd->librados).
I am using QEMU version 1.7.1; librbd: librbd1-0.67.7-0.el6.x86_64; librados:
librados2-0.67.7-0.el6.x86_64.
Whereas when I run rbd crea
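Not part of the original report, but one way to narrow down which layer fails is to exercise each one separately. A rough sketch; the id=/conf= options in the rbd: URI are standard QEMU rbd options, and the client name and conf path are assumptions:
rbd create rbd/foo --size 1024      # librbd/librados directly, no QEMU involved
rbd info rbd/foo
rbd rm rbd/foo
qemu-img create -f raw rbd:rbd/foo:id=admin:conf=/etc/ceph/ceph.conf 1G   # same path QEMU takes, with explicit user and conf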
Hello everybody,
can anybody comment on the largest number of
production VMs running on top of Ceph?
Thanks,
Constantinos
On 04/01/2014 09:47 PM, Jeremy Hanmer wrote:
Our (DreamHost's) largest cluster is roughly the same size as yours,
~3PB on just shy of 1100 OSDs currently. The architecture
On Wed, 2014-04-02 at 20:42 -0400, Jean-Charles Lopez wrote:
> From what is pasted, your remove failed, so make sure you purge the
> snapshots first, then remove the rbd image.
I already pasted that too.
rbd snap purge 6fa36869-4afe-485a-90a3-93fba1b5d15e -p cloudstack
Removing all snapshots
2014-04-03 01:02:46.863
Hi,
some time ago I built a small OpenStack cluster with Ceph as the main/only
storage backend. I managed to get all the parts working (removing/adding
volumes works in cinder/glance/nova).
I get no errors in the logs, but I've noticed that after deleting an
instance (booted from image) I get a leftover RBD volum
Hi,
On Apr 3, 2014 4:49 AM, Christian Balzer wrote:
>
> On Tue, 1 Apr 2014 14:18:51 + Dan Van Der Ster wrote:
>
> [snip]
> > >
> > > http://www.slideshare.net/Inktank_Ceph/scaling-ceph-at-cern
> > >
> [snap]
>
> In that slide it says that replacing failed OSDs is automated via puppet.
>
> I'm
Hello.
I had successfully set up a first cluster using the Suse-SLES11
emperor repository at http://ceph.com/rpm/sles11/x86_64/. Everything
worked fine ... until I decided to reinstall the servers (due to a
hardware change). Today, I can only find one package, "ceph-deploy",
with Suse's "zypper"
Hi guys,
I have got a problem. I created a new 1TB RBD device and mapped it on the
box. I tried to create a file system on that device, but it failed:
root@export01:~# mkfs.ext4 /dev/rbd/pool/server1
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4
Hi,
By my observation, I don't think that marking it out before crush rm would be
any safer.
Normally what I do (when decommissioning an OSD or whole server) is stop the
OSD process, then crush rm / osd rm / auth del the OSD shortly afterwards,
before the down out interval expires. Since the OS
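For reference, a rough sketch of that sequence; osd.N / N are placeholders for the OSD being removed:
service ceph stop osd.N         # or "stop ceph-osd id=N" on upstart-based systems
ceph osd crush remove osd.N     # take it out of the CRUSH map so remapping starts
ceph auth del osd.N             # delete its cephx key
ceph osd rm N                   # finally remove it from the osdmap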
Hi,
Our first attempt at using CephFS in earnest in December ran into a known
bug with the kclients hanging in ceph_mdsc_do_request, which I suspected
was down to the bug in
http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/15838. We
were on a default Ubuntu 12.04 3.2 kernel, so recent
I accidentally removed some MDS objects (a scary typo in a "rados
cleanup"), and when trying the read the files via the kclient I got all
zeros instead of some IO failure. Is this expected behaviour? I realise
it's generally bad behaviour, but I didn't expect silent zeros.
Best regards,
Danny
I'm having trouble understanding the description of Placement Group IDs
at http://ceph.com/docs/master/architecture/
There it says:
...
2. CRUSH takes the object ID and hashes it.
3. CRUSH calculates the hash modulo the number of OSDs. (e.g., 0x58) to
get a PG ID.
...
That seems to imply tha
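Not from the original post, but one quick way to see the mapping for a concrete object is to ask the cluster directly; the pool and object names below are just placeholders:
ceph osd map rbd some-object    # prints the osdmap epoch, the PG the object hashes into, and its up/acting OSD set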
Yes.
On Thu, Apr 3, 2014 at 12:56 AM, Florent B wrote:
> Thank you Gregory!
>
> I think I found all the options:
> https://github.com/ceph/ceph/blob/master/src/common/config_opts.h
>
> Is that right?
>
> On 04/02/2014 04:19 PM, Gregory Farnum wrote:
>> It's been a while, but I think you need to us
- Message from Brian Candler -
Date: Thu, 03 Apr 2014 14:44:13 +0100
From: Brian Candler
Subject: [ceph-users] PGID query
To: ceph-us...@ceph.com
I'm having trouble understanding the description of Placement Group
IDs at http://ceph.com/docs/master/architecture/
The
Hi,
I have found that the problem is somewhere within the pool itself. I created
another pool, created an RBD within the new pool, and it worked fine.
Can anyone point me to how I can find the problem with the pool, and why
any RBD assigned to it fails to be formatted?
Thank you.
On 3 April
Trying out the "quick" installation instructions, using four Ubuntu
Server 12.04 VMs, ceph-deploy aborts with the following error:
brian@ceph-admin:~/my-cluster$ ceph-deploy install node1 node2 node3
[ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy install
node1 node2 node3
[ceph
Hi Brian,
try disabling "requiretty" in visudo on all nodes.
Best,
G.
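A minimal sketch of what that looks like in the sudoers file, assuming the deploy user is called "ceph" (the user name is an assumption):
Defaults:ceph !requiretty    # let ceph-deploy run sudo commands over ssh without a tty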
On Thu, 03 Apr 2014 15:35:35 +0100, Brian Candler wrote:
Trying out the "quick" installation instructions, using four
Ubuntu Server 12.04 VMs, ceph-deploy aborts with the following error:
brian@ceph-admin:~/my-cluster$ ce
On 03/04/2014 15:42, Georgios Dimitrakakis wrote:
Hi Brian,
try disabling "requiretty" in visudo on all nodes.
There is no "requiretty" in the sudoers file, or indeed any file under /etc.
The manpage says that "requiretty" is off by default, but I suppose
Ubuntu could have broken that. So ju
On 03/04/2014 15:51, Brian Candler wrote:
On 03/04/2014 15:42, Georgios Dimitrakakis wrote:
Hi Brian,
try disabling "requiretty" in visudo on all nodes.
There is no "requiretty" in the sudoers file, or indeed any file under
/etc.
The manpage says that "requiretty" is off by default, but I s
DD tests: results below for slow VM host. Max throughput seems to cap at 130M
spikes.
sysbench: results below for slow VM host.
Restarting iscsi services: I have performed the tests after restarting iscsi
and even after restarting the VM, with no change in results. - note, however,
we have no is
On Thu, Apr 3, 2014 at 9:28 PM, Danny Luhde-Thompson
wrote:
> Hi,
>
> Our first attempt at using CephFS in earnest in December ran into a known
> bug with the kclients hanging in ceph_mdsc_do_request, which I suspected was
> down to the bug in
> http://comments.gmane.org/gmane.comp.file-systems.ce
On Thursday, April 03, 2014 07:57:58 Dan Van Der Ster wrote:
> Hi,
> By my observation, I don't think that marking it out before crush rm would
> be any safer.
>
> Normally what I do (when decommissioning an OSD or whole server) is stop
> the OSD process, then crush rm / osd rm / auth del the OSD
A few more minor nits.
(1) at the "ceph-deploy admin ..." step:
...
[ceph-admin][DEBUG ] connected to host: ceph-admin
[ceph-admin][DEBUG ] detect platform information from remote host
[ceph-admin][DEBUG ] detect machine type
[ceph-admin][DEBUG ] get remote short hostname
[ceph-admin][DEBUG ] wr
The filesystem interprets nonexistent file objects as holes -- so, zeroes.
This is expected. If you actually deleted *metadata* objects it would
detect that and fail.
-Greg
On Thursday, April 3, 2014, Danny Luhde-Thompson <
da...@meantradingsystems.com> wrote:
> I accidentally removed some MDS ob
On Thursday, April 3, 2014, Chad Seys wrote:
> On Thursday, April 03, 2014 07:57:58 Dan Van Der Ster wrote:
> > Hi,
> > By my observation, I don't think that marking it out before crush rm
> would
> > be any safer.
> >
> > Normally what I do (when decommissioning an OSD or whole server) is stop
>
On Tue, Apr 1, 2014 at 7:34 PM, Shang Wu wrote:
> Hi all,
>
> I have some questions about the Ceph multi-site implementation.
>
> I am thinking of having Ceph as the storage solution across three internal
> sites. I think, with a good internet connection, using the Multi-site object
> storage wit
Hi,
Here is the agenda:
* Meetups https://wiki.ceph.com/Community/Meetups
* Goodies https://ceph.myshopify.com/collections/all
* Documentation of the new Firefly feature (tiering, erasure code)
http://ceph.com/docs/master/dev/
* Careers http://ceph.com/community/careers/
Location: irc.oftc.net
Is there a guide/blog/document describing how one can identify bottlenecks
in a Ceph cluster? For example, what if one node in my cluster has a slow
hard disk, CPU, or network connection -- is there an easy and reliable way that I can
trace what is causing the Ceph performance to be poor?
Also, are there so
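Not an official guide, but the usual first-pass checks look something like the sketch below; the pool name in the bench command is a placeholder:
ceph -s                         # overall health, slow/blocked request warnings
ceph osd perf                   # per-OSD commit/apply latency (recent releases), flags a slow disk
ceph osd tree                   # maps OSDs to hosts, so a slow OSD points at a node
rados bench -p rbd 30 write     # baseline write throughput against one pool
iostat -x 1                     # per-disk utilisation on a suspect node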
Is there a "known" **full** hardware configuration that someone can share
where they are "happy" with the CEPH performance? By "full", I mean the
full specs of server node (including SSD purchased, hard disks bought, RAID
controller used, ethernet card purchased, file-system type used, OS used)
and
Hi all-
I am testing on Ceph 0.78 running on Ubuntu 13.04 with the 3.13 kernel. I had two
replicated pools and five erasure-coded pools. The cluster was getting full, so I
deleted all the EC pools. However, Ceph is not freeing the capacity. Note
below there is only 1636G in the two pools but the glo
Actually, I have to revise this, Ceph _is_ freeing capacity, but very slowly,
roughly 150G every 5 minutes. Is that normal? I feel like capacity is
generally freed almost immediately when I've previously deleted pools.
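Not in the original mail, but a simple way to watch the space being reclaimed is just to poll the usage:
watch -n 60 ceph df    # global and per-pool usage, refreshed every minute
ceph -s                # also shows total used/available for the cluster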
Thanks!
-Joe
From: Gruher, Joseph R
Sent: Thursday, April 03, 2014 10:32
I am looking at setting up a Ganeti cluster using KVM and CentOS. While
looking at storage, I first looked at Gluster but noticed in the
documentation that it does not allow live database files to be saved to it. Does
Ceph allow live database files to be saved to it? If so, does the
database pe
Ceph will allow anything; it's just providing a block device. How it
performs will depend quite a lot on the database workload you're
applying, though. We've heard from people who think it's wonderful and
others who don't, depending on what hardware they're using and what
their use case is. You'll
Back last fall I ran the DBT3 TPC-H test suite using mariadb on top of a
QEMU/KVM RBD volume (dumpling release) on a virtual machine. I
intentionally kept the cache sizes small to force more disk IO and
compared to the same test running on a local disk passed through to the
VM as well.
In th
Hi,
Here are the raw logs of today's meeting. I'll write down an executive summary
tomorrow.
Cheers
The Ceph User Committee monthly meeting (first edition) is about to
begin, in 2 minutes :-) The agenda is:
* Meetups https://wiki.ceph.com/Community/Meetups
* Goodies https://ceph.myshopify.c
Are you running Havana with josh’s branch?
(https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd)
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 P
Hi! I am facing the exact same problem!!!
I am also on a CentOS 6.5 64-bit system.
Does anyone have any suggestions? Where to look? What to check??
zhongku, did you manage to solve this problem?
On the other hand, if I use Python as shown here:
http://ceph.com/docs/master/radosgw/s3/python/ I can
Ok, we were able to reproduce it. Opened issue #7978, and there's a
fix pending for upstream.
Thanks,
Yehuda
On Wed, Apr 2, 2014 at 7:20 AM, Yehuda Sadeh wrote:
> On Wed, Apr 2, 2014 at 2:08 AM, Benedikt Fraunhofer
> wrote:
>> Hi Yehuda,
>>
>> I tried your patch and it feels fine,
>> except you
On Tue, 2014-03-04 at 14:05 +0800, YIP Wai Peng wrote:
> Dear all,
>
>
> I have a rbd image that I can't delete. It contains a snapshot that is
> "busy"
>
>
> # rbd --pool openstack-images rm
> 2383ba62-b7ab-4964-a776-fb3f3723aabe-deleted
>
> 2014-03-04 14:02:04.062099 7f340b2d5760 -1 librbd:
On 04/03/2014 03:36 PM, Jonathan Gowar wrote:
On Tue, 2014-03-04 at 14:05 +0800, YIP Wai Peng wrote:
Dear all,
I have a rbd image that I can't delete. It contains a snapshot that is
"busy"
# rbd --pool openstack-images rm
2383ba62-b7ab-4964-a776-fb3f3723aabe-deleted
2014-03-04 14:02:04.0620
Thanks for the replies.
Here is some info on what I am trying to accomplish. My goal here is to
find the least expensive way to get into virtualization and storage
without the cost of a SAN and proprietary software (e.g. VMware, Hyper-V). We
currently run about 6 servers that host our web-based appli
Yes. You can see whether the snapshots are protected by using snap rm
instead of snap purge.
# rbd --pool mypool snap rm 5216ba99-1d8e-4155-9877-7d77d7b6caa0@snap
# rbd --pool mypool snap unprotect 5216ba99-1d8e-4155-9877-7d77d7b6caa0@snap
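For completeness, once nothing is protected any more, the usual cleanup order is (same placeholder pool/image names as above):
# rbd --pool mypool snap unprotect 5216ba99-1d8e-4155-9877-7d77d7b6caa0@snap
# rbd --pool mypool snap purge 5216ba99-1d8e-4155-9877-7d77d7b6caa0
# rbd --pool mypool rm 5216ba99-1d8e-4155-9877-7d77d7b6caa0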
- WP
On Fri, Apr 4, 2014 at 6:37 AM, Josh Durgin wrote:
Hello, everyone!
I have installed the Ceph radosgw. My domain is, say, cephtest.com, and a new
bucket's domain is {bucket-name}.cephtest.com.
Now a customer has his own domain, such as domain.com. He wants to bind
{bucket-name}.cephtest.com to domain.com.
Then he can download files via domain.com/filename,
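Not an authoritative answer, but the usual approach is a DNS CNAME from the customer's host name to the bucket's virtual-host name, plus letting radosgw resolve it. A sketch, where the host name, the ceph.conf section name, and the values are assumptions based on the domains in the question:
# DNS, at the customer's provider:
#   download.domain.com.  IN  CNAME  {bucket-name}.cephtest.com.
# ceph.conf on the radosgw host:
[client.radosgw.gateway]
rgw dns name = cephtest.com
rgw resolve cname = true     # have radosgw follow the CNAME to find the bucket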
Hi,
I want to use Dell R515/R510 servers for the OSD nodes:
1. 2* SSD for the OS (RAID 1)
2. 10* Seagate 3.5" 3TB HDD for OSDs (no RAID... JBOD)
To create the JBOD... I created all 10 HDDs as single-disk RAID0, but the problem is that when I
pull an HDD out of the server and plug it in again, I need t
You need to use Dell OpenManage:
https://linux.dell.com/repo/hardware/.
2014-04-04 7:26 GMT+04:00 Punit Dambiwal :
> Hi,
>
> I want to use Dell R515/R510 servers for the OSD nodes:
>
> 1. 2* SSD for the OS (RAID 1)
> 2. 10* Seagate 3.5" 3TB HDD for OSDs (no RAID... JBOD)
>
> To crea
I am trying to deploy a cluster with ceph-deploy. I installed ceph
0.72.2 from the rpm repositories. Running "ceph-deploy mon
create-initial" creates /var/lib/ceph etc. on all the nodes, but on
all nodes I get a warning:
[hvrrzceph2][DEBUG ] Starting Ceph mon.hvrrzceph2 on hvrrzceph2...
[hvrrz