Thanks, posted the question in openstack list. Hopefully will get some
expert opinion.
On Fri, Jun 12, 2015 at 11:33 AM, Alexandre DERUMIER
wrote:
> Hi,
>
> here a libvirt xml sample from libvirt src
>
> (you need to define the number, then assign them in disks).
>
> I don't use openstack, so I rea
Augh, never mind, firewall problem. Thanks anyway.
From: Gruher, Joseph R
Sent: Thursday, June 11, 2015 10:55 PM
To: ceph-users@lists.ceph.com
Cc: Gruher, Joseph R
Subject: MONs not forming quorum
Hi folks-
I'm trying to deploy 0.94.2 (Hammer) onto CentOS7. I used to be pretty good at
this on
Hi,
here a libvirt xml sample from libvirt src
(you need to define the number, then assign them in disks).
I don't use openstack, so I really don't know how it works with it.
<name>QEMUGuest1</name>
<uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
<memory>219136</memory>
<currentMemory>219136</currentMemory>
<vcpu>2</vcpu>
<iothreads>2</iothreads>
<type>hvm</type>
<on_poweroff>destroy</on_poweroff>
res
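For anyone wanting to try this by hand outside of OpenStack, a rough sketch with virsh (the domain name is taken from the sample above; iothreads need a reasonably recent libvirt/QEMU with virtio disks, roughly libvirt >= 1.2.8 if memory serves, so treat this as approximate):
virsh edit QEMUGuest1
#   1) define the pool of IO threads, next to <vcpu>:
#        <iothreads>2</iothreads>
#   2) assign a disk to one of them inside its <disk> element (threads are numbered from 1):
#        <driver name='qemu' type='raw' iothread='1'/>
# then restart the guest so the change takes effect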
Hi folks-
I'm trying to deploy 0.94.2 (Hammer) onto CentOS7. I used to be pretty good at
this on Ubuntu but it has been a while. Anyway, my monitors are not forming
quorum, and I'm not sure why. They can definitely all ping each other and
such. Any thoughts on specific problems in the outpu
Hi Alexandre,
I agree with your rationale of one iothread per disk. CPU consumed in
iowait is pretty high in each VM. But I am not finding a way to set
the same on a Nova instance. I am using OpenStack Juno with QEMU+KVM.
As per the libvirt documentation for setting iothreads, I can edit
domain.xml di
On Thu, Jun 11, 2015 at 10:31 PM, Nigel Williams
wrote:
> Wondering if anyone has done comparisons between CephFS and other
> parallel filesystems like Lustre typically used in HPC deployments
> either for scratch storage or persistent storage to support HPC
> workflows?
Oak Ridge had a paper at
Wondering if anyone has done comparisons between CephFS and other
parallel filesystems like Lustre typically used in HPC deployments
either for scratch storage or persistent storage to support HPC
workflows?
thanks.
Dear Sir,
I am new to Ceph. I have the following queries:
1. I have been using OpenNAS, OpenFiler, Gluster & Nexenta for storage
OS. How is Ceph different from Gluster & Nexenta?
2. I also use Lustre for our storage in an HPC environment. Can Ceph be
substituted for Lustre?
3. What is the
2015-06-12 2:00 GMT+08:00 Lincoln Bryant :
> Hi,
>
> Are you using cephx? If so, does your client have the appropriate key on it?
> It looks like you have an mds set up and running from your screenshot.
>
> Try mounting it like so:
>
> mount -t ceph -o name=admin,secret=[your secret] 192.168.1.105:
Hi,
On 11/06/2015 19:34, Sage Weil wrote:
> Bug #11442 introduced a change that made rgw objects that start with
> underscore incompatible with previous versions. The fix to that bug
> reverts to the previous behavior. In order to be able to access objects
> that start with an underscore and w
Hi Trevor,
probably csync2 could work for you.
Best
Karsten
On 11.06.2015 at 7:30 PM, "Trevor Robinson - Key4ce" <
t.robin...@key4ce.com> wrote:
> Hello,
>
>
>
> Could somebody please advise me if Ceph is suitable for our use?
>
>
>
> We are looking for a file system which is able to work ove
Setting TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES works, or at least seems to take
effect, just nothing positive comes of it. This is on a CentOS 6-ish distro.
I can’t really upgrade anything easily because of support, and we still run
0.67.12 in production, so that’s a no-go.
I know upgrading to Giant is the best way to achieve mo
Alternatively you could just use Git (or some other form of versioning system):
host your code/files/html/whatever in Git, make changes to the Git tree,
then trigger a git pull from your webservers to the local filesystem.
This gives you the ability to use branches/versions to control
Yeah! Then it is the tcmalloc issue.
If you are using the version coming with the OS,
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES won't do anything.
Try building the latest tcmalloc, set the env variable, and see if it
improves or not.
Also, you can try the latest ceph build with jemalloc enabled.
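If it helps, this is roughly what that looks like; the library path and the 128 MB value are only illustrative, not a recommendation:
export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728        # 128 MB thread-cache cap
LD_PRELOAD=/usr/local/lib/libtcmalloc.so.4 ceph-osd -i 0 -f   # run one OSD in the foreground against the rebuilt tcmalloc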
Hi,
I looked at it briefly before leaving, tcmalloc was at the top. I can provide a
full listing tomorrow if it helps.
12.80% libtcmalloc.so.4.1.0 [.] tcmalloc::CentralFreeList::FetchFromSpans()
8.40% libtcmalloc.so.4.1.0 [.]
tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCa
Yeah, perf top will help you a lot.
Some guesses:
1. If your block size is small, in the 4-16K range, most probably you are hitting the
tcmalloc issue. 'perf top' will show a lot of tcmalloc traces in that
case.
2. fdcache should save you some CPU, but I don't see it being that significant.
Tha
Turn off write cache on the controller. You're probably seeing the flush to disk.
Tyler Bishop
Chief Executive Officer
513-299-7108 x10
tyler.bis...@beyondhosting.net
You don't need a filesystem for that. I use csync2 with lsyncd and it
works ok.
Make sure, if you use two-way or multi-way sync and WinSCP to update files,
to first delete the old version, wait a second and then upload the new
version. It will save you some head scratching...
https://www.krystalmods.com/ind
Hi,
The Ceph journal works in a different way. It's a write-ahead journal: all the data
will be persisted first in the journal and then written to the actual place.
Journal data is encoded. The journal is a fixed-size partition/file and is written
sequentially. So, if you are placing the journal on HDDs, it wi
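For illustration, placing the journal on a separate SSD partition at OSD-creation time usually looks roughly like this with ceph-deploy (host and device names are placeholders):
ceph-deploy osd prepare node1:/dev/sdb:/dev/sdc1    # data on sdb, journal on SSD partition sdc1
ceph-deploy osd activate node1:/dev/sdb1:/dev/sdc1  # if it is not activated automatically by udev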
On 05/20/15 23:34, Trevor Robinson - Key4ce wrote:
>
> Hello,
>
>
>
> Could somebody please advise me if Ceph is suitable for our use?
>
>
>
> We are looking for a file system which is able to work over different
> locations which are connected by VPN. If one location was to go
> offline then
Hi,
Are you using cephx? If so, does your client have the appropriate key on it? It
looks like you have an mds set up and running from your screenshot.
Try mounting it like so:
mount -t ceph -o name=admin,secret=[your secret] 192.168.1.105:6789:/
/mnt/mycephfs
--Lincoln
On Jun 7, 2015, at 1
You might be able to accomplish that with something like dropbox or owncloud
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Trevor
Robinson - Key4ce
Sent: Wednesday, May 20, 2015 2:35 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Is Ceph right for me?
Hello,
C
1) set up mds server
ceph-deploy mds --overwrite-conf create
2) create filesystem
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 16
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs ls
ceph mds stat
3) mount it!
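For step 3, a minimal sketch using the kernel client (the monitor address and secret file path are placeholders):
MON_IP=192.168.1.105     # substitute one of your own monitor addresses
sudo mkdir -p /mnt/mycephfs
sudo mount -t ceph $MON_IP:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret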
From: ceph-users [mailto:ceph-users-boun...@lists.
You may be able to use replication. Here is a site showing a good example of
how to set it up. I have not tested replicating within the same datacenter,
but you should just be able to define a new zone within your existing ceph
cluster and replicate to it.
http://cephnotes.ksperis.com/blog/20
You probably didn't turn on an MDS, as that isn't set up by default
anymore. I believe the docs tell you how to do that somewhere else.
If that's not it, please provide the output of "ceph -s".
-Greg
On Sun, Jun 7, 2015 at 8:14 AM, 张忠波 wrote:
> Hi,
> My ceph health is OK. And now, I want to
This Hammer point release fixes a few critical bugs in RGW that can
prevent objects starting with underscore from behaving properly and that
prevent garbage collection of deleted objects when using the Civetweb
standalone mode.
All v0.94.x Hammer users are strongly encouraged to upgrade, and to
Hi,
I am trying to set up nginx to access html files in ceph buckets.
I have set up -> https://github.com/anomalizer/ngx_aws_auth
Below is the nginx config. When I try to access
http://hostname:8080/test/b.html -> shows signature mismatch.
http://hostname:8080/b.html -> shows signature mismatch
You want write cache for the spinning disks, and no write cache for the SSDs.
I assume all of your data disks are single-drive RAID 0?
Tyler Bishop
Chief Executive Officer
513-299-7108 x10
tyler.bis...@beyondhosting.net
Hi everyone.
I'm wondering - is there a way to back up radosgw data?
What I already tried:
create a backup pool -> copy .rgw.buckets to the backup pool. Then I delete an
object via the S3 client. And then I copy the data from the backup pool back to
.rgw.buckets. I still can't see the object in the S3 client, but can get it via
http b
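For context, the pool-copy step described above is usually done with something like this (the pg count is only illustrative):
ceph osd pool create .rgw.buckets.backup 64        # destination pool must exist first
rados cppool .rgw.buckets .rgw.buckets.backup      # copy the data pool
rados cppool .rgw.buckets.backup .rgw.buckets      # later, to copy it back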
Hello,
I am testing NFS over RBD recently. I am trying to build an NFS HA
environment under Ubuntu 14.04 for testing; the package version
information is as follows:
- Ubuntu 14.04 : 3.13.0-32-generic(Ubuntu 14.04.2 LTS)
- ceph : 0.80.9-0ubuntu0.14.04.2
- ceph-common : 0.80.9-0ubuntu0.14.04.2
Hi George
In order to experience the error it was enough to simply run mkfs.xfs on all
the volumes.
In the meantime it became clear what the problem was:
~ ; cat /proc/183016/limits
...
Max open files            1024                 4096                 files
..
This can be changed by settin
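A minimal sketch of raising that limit (the value is only illustrative; the persistent setting needs a re-login or service restart to apply):
ulimit -n 65536                                        # current shell and its children only
echo '* - nofile 65536' >> /etc/security/limits.conf   # persistent for PAM sessions; run as root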
Hello,
Could somebody please advise me if Ceph is suitable for our use?
We are looking for a file system which is able to work over different locations
which are connected by VPN. If one location was to go offline then the
filesystem will stay online at both sites, and then once the connection is r
Hi,
Could you take a look at my problem?
It's about high latency on my OSDs on HP G8 servers (ceph01, ceph02 and ceph03).
When I run a rados bench for 60 sec, the results are surprising: after a few
seconds there is no traffic, then it resumes, etc.
Finally, the maximum latency is high and VM
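For reference, the kind of run described above looks roughly like this (pool name and thread count are placeholders):
rados bench -p rbd 60 write -t 16 --no-cleanup   # 60 s write benchmark, 16 concurrent ops
rados -p rbd cleanup                             # remove the benchmark objects afterwards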
On 5/11/15 9:47 AM, Ric Wheeler wrote:
> On 05/05/2015 04:13 AM, Yujian Peng wrote:
>> Emmanuel Florac writes:
>>
>>> Le Mon, 4 May 2015 07:00:32 + (UTC)
>>> Yujian Peng 126.com> écrivait:
>>>
I'm encountering a data disaster. I have a ceph cluster with 145 osd.
The data center had
On Mon, Jun 08, 2015 at 07:49:15PM +0300, Andrey Korolyov wrote:
> On Mon, Jun 8, 2015 at 6:50 PM, Jason Dillaman wrote:
> > Hmm ... looking at the latest version of QEMU, it appears that the RBD
> > cache settings are changed prior to reading the configuration file instead
> > of overriding the
Thank you for your reply.
I encountered other problems when I installed ceph.
#1. When I run the command "ceph-deploy new ceph-0", I get the
ceph.conf file. However, there is no information about osd pool
default size or public network.
[root@ceph-2 my-cluster]# more ce
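For what it's worth, those two settings are normally just appended to the generated ceph.conf by hand before deploying; a sketch (the values are only examples, adjust to your own subnet and replica count):
echo 'osd pool default size = 2'       >> ceph.conf
echo 'public network = 192.168.1.0/24' >> ceph.conf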
OS: CentOS release 6.6 (Final)
kernel : 3.10.77-1.el6.elrepo.x86_64
Installed: ceph-deploy.noarch 0:1.5.23-0
Dependency Installed: python-argparse.noarch 0:1.2.1-2.el6.centos
I installed ceph-deploy following the manual,
http://ceph.com/docs/master/start/quick-start-preflight/ . However
Hi,
I am trying to deploy ceph-hammer on 4 nodes (admin, monitor and 2 OSDs).
My servers are behind a proxy server, so when I need to run an apt-get
update I need to export our proxy settings.
When I run the command "ceph-deploy install osd1 osd2 mon1", since all
three nodes are behind the proxy th
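One common approach, sketched below; the proxy host/port are placeholders, and each target node also needs the proxy configured for its own apt/wget (e.g. in /etc/apt/apt.conf.d/ or /etc/wgetrc):
export http_proxy=http://proxy.example.com:3128
export https_proxy=$http_proxy
ceph-deploy install osd1 osd2 mon1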
Hi,
My ceph health is OK. And now I want to build a filesystem, referring to
the CephFS Quick Start guide:
http://ceph.com/docs/master/start/quick-cephfs/
However, I got an error when I use the command mount -t ceph
192.168.1.105:6789:/ /mnt/mycephfs . Error: mount error 22 = Inval
Hi,
Please help me with hardware cache settings on controllers for the best ceph rbd
performance. All Ceph hosts have one SSD drive for the journal.
We are using 4 different controllers, all with BBU:
* HP Smart Array P400
* HP Smart Array P410i
* Dell PERC 6/i
* D
Hi Jan,
2015-06-01 15:43 GMT+08:00 Jan Schermer :
> We had to disable deep scrub or the cluster would be unusable - we need to
> turn it back on sooner or later, though.
> With minimal scrubbing and recovery settings, everything is mostly good.
> Turned out many issues we had were due to too few
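For reference, deep scrub is usually toggled cluster-wide with the flags below; a minimal sketch, not necessarily how Jan did it:
ceph osd set nodeep-scrub     # stop scheduling new deep scrubs
ceph osd unset nodeep-scrub   # re-enable later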
Hi George
Well that’s strange. I wonder why our systems behave so differently.
We’ve got:
Hypervisors running on Ubuntu 14.04.
VMs with 9 ceph volumes: 2TB each.
XFS instead of your ext4
Maybe the number of placement groups plays a major role as well. Jens-Christian
may be able to give you th
This development release is delayed a bit due to tooling changes in the
build environment. As a result the next one (v9.0.2) will have a bit more
work than is usual.
Highlights here include lots of RGW Swift fixes, RBD feature work
surrounding the new object map feature, more CephFS snapshot f
Hum, thanks David, I will check corosync.
And maybe Consul can be a solution?
Sent from my iPhone
> On 11 juin 2015, at 11:33, David Moreau Simard wrote:
>
> What I've seen work well is to set multiple A records for your RGW endpoint.
> Then, with something like corosync, you ensure that these mu
On Thu, Jun 11, 2015 at 5:30 PM, Nick Fisk wrote:
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Ilya Dryomov
>> Sent: 11 June 2015 12:33
>> To: Nick Fisk
>> Cc: ceph-users
>> Subject: Re: [ceph-users] krbd splitting large IO's into sma
What I've seen work well is to set multiple A records for your RGW endpoint.
Then, with something like corosync, you ensure that these multiple IP
addresses are always bound somewhere.
You can then have as many nodes in "active-active" mode as you want.
--
David Moreau Simard
On 2015-06-11 11:2
Hi Team
Is it possible for you to share your radosgw setup, in order to use the maximum
network bandwidth and have no SPOF?
I have 5 servers on a 10gb network and 3 radosgw on them.
We would like to set up HAProxy on 1 node with the 3 rgw, but:
- the SPOF becomes the HAProxy node
- max bandwidth will be on HApr
Hello,
I'm getting an error when attempting to mount a volume on a host that was
forcibly powered off:
# mount /dev/rbd4 climate-downscale-CMIP5/
mount: mount /dev/rbd4 on /mnt/climate-downscale-CMIP5 failed: Stale file
handle
/var/log/messages:
Jun 10 15:31:07 node1 kernel: rbd4: unknown parti
Have you configured and enabled the epel repo?
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Thu, Jun 11, 2015 at 6:26 AM, Shambhu Rajak wrote:
> I am trying to install ceph Giant on RHEL 7
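A minimal sketch of enabling EPEL (assuming CentOS/RHEL 7; the package name and URL may differ for your release):
sudo yum install -y epel-release                                                        # CentOS 7
sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm   # RHEL 7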
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Ilya Dryomov
> Sent: 11 June 2015 12:33
> To: Nick Fisk
> Cc: ceph-users
> Subject: Re: [ceph-users] krbd splitting large IO's into smaller IO's
>
> On Thu, Jun 11, 2015 at 2:23 PM, Ilya Dryom
Hi Team,
Once the data transfer is completed, the journal should flush the in-memory
data to its real location, but in our case it is showing double the size after
the transfer completes, so it is confusing to tell the real file and folder
size. Also, what will happen if I move the monitoring from that
I have no experience with perf and the package is not installed.
I will take a look at it, thanks.
Jan
> On 11 Jun 2015, at 13:48, Dan van der Ster wrote:
>
> Hi Jan,
>
> Can you get perf top running? It should show you where the OSDs are
> spinning...
>
> Cheers, Dan
>
> On Thu, Jun 11, 2
I am trying to install ceph Giant on RHEL 7.0. While installing
ceph-common-0.87.2-0.el7.x86_64.rpm I am getting the following dependency issues:
packages]$ sudo yum install ceph-common-0.87.2-0.el7.x86_64.rpm
Loaded plugins: amazon-id, priorities, rhui-lb
Examining ceph-common-0.87.2-0.el7.x86_64.rpm: 1:cep
Hi Jan,
Can you get perf top running? It should show you where the OSDs are spinning...
Cheers, Dan
On Thu, Jun 11, 2015 at 11:21 AM, Jan Schermer wrote:
> Hi,
> hoping someone can point me in the right direction.
>
> Some of my OSDs have a larger CPU usage (and ops latencies) than others. If I
On Thu, Jun 11, 2015 at 2:23 PM, Ilya Dryomov wrote:
> On Wed, Jun 10, 2015 at 7:07 PM, Ilya Dryomov wrote:
>> On Wed, Jun 10, 2015 at 7:04 PM, Nick Fisk wrote:
> >> -Original Message-
> >> From: Ilya Dryomov [mailto:idryo...@gmail.com]
> >> Sent: 10 June 2015 14:06
>
On Wed, Jun 10, 2015 at 7:07 PM, Ilya Dryomov wrote:
> On Wed, Jun 10, 2015 at 7:04 PM, Nick Fisk wrote:
>>> > >> -Original Message-
>>> > >> From: Ilya Dryomov [mailto:idryo...@gmail.com]
>>> > >> Sent: 10 June 2015 14:06
>>> > >> To: Nick Fisk
>>> > >> Cc: ceph-users
>>> > >> Subject: R
@Irek Fasikhov
As I said in my previous post,
all my servers' clocks are in sync.
I double-checked it several times just to be sure.
I hope you have any other clues.
-Original Message-
From: Irek Fasikhov <malm...@gmail.com>
To: "Makkelie, R (ITCDCC) - KLM"
> On 11 Jun 2015, at 11:53, Henrik Korkuc wrote:
>
> On 6/11/15 12:21, Jan Schermer wrote:
>> Hi,
>> hoping someone can point me in the right direction.
>>
>> Some of my OSDs have a larger CPU usage (and ops latencies) than others. If
>> I restart the OSD everything runs nicely for some time,
On 6/11/15 12:21, Jan Schermer wrote:
Hi,
hoping someone can point me in the right direction.
Some of my OSDs have a larger CPU usage (and ops latencies) than others. If I
restart the OSD everything runs nicely for some time, then it creeps up.
1) most of my OSDs have ~40% CPU (core) usage (us
Run the following command by hand: ntpdate NTPADDRESS
2015-06-11 12:36 GMT+03:00 Makkelie, R (ITCDCC) - KLM <
ramon.makke...@klm.com>:
> all ceph releated servers have the same NTP server
> and double checked the time and timezones
> the are all correct
>
>
> -Original Message-
> *From*: Irek Fasikho
All ceph-related servers have the same NTP server,
and I double-checked the time and timezones;
they are all correct.
-Original Message-
From: Irek Fasikhov <malm...@gmail.com>
To: "Makkelie, R (ITCDCC) - KLM"
Hi,
hoping someone can point me in the right direction.
Some of my OSDs have a larger CPU usage (and ops latencies) than others. If I
restart the OSD everything runs nicely for some time, then it creeps up.
1) most of my OSDs have ~40% CPU (core) usage (user+sys), some are closer to
80%. Restar
It is necessary to synchronize time
2015-06-11 11:09 GMT+03:00 Makkelie, R (ITCDCC) - KLM <
ramon.makke...@klm.com>:
> I'm trying to add an extra monitor to my already existing cluster.
> I do this with ceph-deploy with the following command:
>
> ceph-deploy mon add "mynewhost"
>
> the ceph-dep
>>> Andrey Korolyov wrote on Wednesday, 10 June 2015 at 15:29:
> On Wed, Jun 10, 2015 at 4:11 PM, Pavel V. Kaygorodov wrote:
Hi,
for us a restart of the monitor solved this.
Regards
Steffen
>> Hi!
>>
>> Immediately after a reboot of mon.3 host its clock was unsynchronized and
> "cl
I'm trying to add an extra monitor to my already existing cluster.
I do this with ceph-deploy with the following command:
ceph-deploy mon add "mynewhost"
ceph-deploy says it's all finished,
but when I take a look at the logs on my new monitor host I see the following
error:
cephx: verify_repl
I wrote a script which calculates a data durability SLA depending on many
factors like disk size, network speed, number of hosts, etc.
It assumes a recovery time three times greater than needed, to account for
client IO having priority over recovery IO.
For 2TB disks and a 10g network it shows a bright picture.
OSDs: 10