On 03/04/2014 15:51, Brian Candler wrote:
On 03/04/2014 15:42, Georgios Dimitrakakis wrote:
Hi Brian,
try disabling "requiretty" in visudo on all nodes.
There is no "requiretty" in the sudoers file, or indeed any file
under /etc.
The manpage says that "requiretty" is off by default, but I s
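For anyone searching the archives later: had requiretty been enabled, the usual
way to turn it off would be a line like this in /etc/sudoers (edited via
visudo) - an illustrative sketch, not taken from my actual file:
Defaults !requiretty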
On 04/04/2014 08:14, Georgios Dimitrakakis wrote:
On 03/04/2014 15:51, Brian Candler wrote:
On 03/04/2014 15:42, Georgios Dimitrakakis wrote:
Hi Brian,
try disabling "requiretty" in visudo on all nodes.
There is no "requiretty" in the sudoers file, or indeed any file
under /etc.
The manpag
2014-04-04 0:31 GMT+02:00 Yehuda Sadeh :
Hi Yehuda,
sorry for the delay. We ran into another problem and this took up all the time.
>> Are you running the version off the master branch, or did you just
>> cherry-pick the patch? I can't seem to reproduce the problem.
I just patched that line in a
Nope, one from RDO packages http://openstack.redhat.com/Main_Page
On Thu, 3 Apr 2014 23:22:15 +0200, Sebastien Han
wrote:
> Are you running Havana with josh’s branch?
> (https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd)
>
>
> Sébastien Han
> Cloud Engineer
>
> "Always give
Hi All,
I have built a distribution using Yocto. I have written BitBake recipes for ceph
and ceph-deploy and deployed them into the same distribution. But...
While trying to create a monitor node, I am getting "Unsupported platform" as
below. Please suggest how I can use ceph and ceph-deploy independent of the
platf
On 03/04/2014 23:43, Brian Beverage wrote:
Here is some info on what I am trying to accomplish. My goal here is to
find the least expensive way to get into Virtualization and storage
without the cost of a SAN and Proprietary software
...
I have been
tasked with taking a new start up project and
I don’t know the packages, but to me it looks like a bug…
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovanc
On 04/04/2014 08:34, Brian Candler wrote:
It does also work with:
ssh -o RequestTTY=yes node1 'sudo ls'
ssh -o RequestTTY=force node1 'sudo ls'
But strangely, not if I put this in ~/.ssh/config:
$ cat ~/.ssh/config
Host node1
RequestTTY force
Host *
RequestTTY force
In that case, ssh -v does
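One workaround that should be equivalent to RequestTTY=force, if anyone wants
to try it (a sketch, not something I have verified here):
$ ssh -tt node1 'sudo ls'
The doubled -t forces pseudo-tty allocation even when ssh itself has no local tty.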
As others have mentioned, there is no reason you cannot run databases on
Ceph storage. I've been running/testing Postgres and MySQL/MariaDB on
Ceph RBD volumes for quite a while now - since version 0.50 (typically
inside KVM guests, but via the kernel driver will work too).
With a reasonabl
I managed to provoke this behavior by forgetting to include
'rbd_children' in the Ceph auth setup for the images and volumes
keyrings (https://ceph.com/docs/master/rbd/rbd-openstack/). Doing a:
$ ceph auth list
should reveal if all is well there.
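If any caps turn out to be missing, something along these lines should set
them (pool names here follow the OpenStack guide and may differ in your setup):
$ ceph auth caps client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'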
Regards
Mark
On 04/04/14 20:56, Mariusz Gron
I've seen this "fast everything except sequential reads" asymmetry in my
own simple dd tests on RBD images but haven't really understood the cause.
Could you clarify what's going on that would cause that kind of
asymmetry? I've been assuming that once I get around to turning
on/tuning read caching
Hi again all,
I've had to recreate this RBD in a different pool. It seems that the
RBDs in this pool are not functioning on this one machine. I can
create an RBD and map it, but I then get I/O errors trying to mkfs with
either ext4 or xfs. Both of the RBDs in question that are in this pool
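For completeness, the failing sequence looks roughly like this (pool and image
names are placeholders):
$ rbd create mypool/test --size 1024
$ sudo rbd map mypool/test
$ sudo mkfs.xfs /dev/rbd0   # whatever device the mapping produced; this is where the I/O errors appear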
Hi,
We're running MySQL in a multi-master cluster (Galera), MySQL standalones,
PostgreSQL, MSSQL and Oracle DBs on Ceph RBD via QEMU/KVM. As someone else
pointed out it is usually faster with ceph, but sometimes you'll get some
odd slow reads.
Latency is our biggest enemy.
Oracle comes with an aw
On Fri, Apr 4, 2014 at 2:22 PM, Tom wrote:
> Hi again all,
>
> I've had to recreate this RBD in a different pool. It seems that the RBD's
> in this pool are not functioning on this one machine. If I can create an RBD
> and map it, I then get I/O errors trying to mkfs with either ext4 or xfs.
> Bo
I did my config according to that manual (except configuring
cinder-backup)
client.cinder
key:
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx
pool=volumes, allow rx pool=images
client.glance
key:
caps: [mon
On Fri, Apr 4, 2014 at 5:21 AM, Brian Candler wrote:
> On 04/04/2014 08:34, Brian Candler wrote:
>>
>>
>> It does also work with:
>>
>> ssh -o RequestTTY=yes node1 'sudo ls'
>> ssh -o RequestTTY=force node1 'sudo ls'
>>
>> But strangely, not if I put this in ~/.ssh/config:
>>
>> $ cat ~/.ssh/confi
On Fri, Apr 4, 2014 at 2:53 AM, Diedrich Ehlerding
wrote:
> I am trying to deploy a cluster with ceph-deploy. I installed ceph
> 0.72.2 from the rpm repositories. Running "ceph-deploy mon
> create-initial" creates /var/lib/ceph etc. on all the nodes, but on
> all nodes I get a warning:
>
>
> [hvrr
One thing you can try is tuning read_ahead_kb on the OSDs and/or the RBD
volume(s) to see if that helps. On some hardware we've seen this
improve sequential read performance dramatically.
Another big culprit that can really hurt sequential reads is
fragmentation. BTRFS is particularly bad w
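For concreteness, read-ahead on an OSD host is set per block device, e.g.
(device name and value here are only examples):
$ echo 4096 | sudo tee /sys/block/sdb/queue/read_ahead_kb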
On Fri, Apr 4, 2014 at 4:03 AM, Srinivasa Rao Ragolu wrote:
> Hi All,
>
> I have built one distribution using yocto. I have written bitbakes for ceph
> and ceph-deploy. Deployed in to same distribution. But..
>
> While trying to create monitor node, I am getting "Unsupported platform" as
> below.
On 04/04/2014 12:59, Alfredo Deza wrote:
We test heavily against 12.04 and I use it almost daily for
testing/working with ceph-deploy as well
and have not seen this problem at all.
I have made sure that I have the same SSH version as you:
$ cat /etc/issue
Ubuntu 12.04 LTS \n \l
$ ssh -v
OpenSS
On 04/04/2014 13:39, Brian Candler wrote:
If you create /etc/sudo.conf (not /etc/sudoers!) containing
Path askpass /usr/X11R6/bin/ssh-askpass
Correction:
Path askpass /usr/bin/ssh-askpass
then you don't need the SUDO_ASKPASS incantation.
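(The incantation in question being something like:
$ SUDO_ASKPASS=/usr/bin/ssh-askpass sudo -A ls
where -A tells sudo to use the askpass helper instead of prompting on the tty.)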
On Fri, Apr 4, 2014 at 8:39 AM, Brian Candler wrote:
> On 04/04/2014 12:59, Alfredo Deza wrote:
>>
>> We test heavily against 12.04 and I use it almost daily for
>> testing/working with ceph-deploy as well
>> and have not seen this problem at all.
>>
>> I have made sure that I have the same SSH ve
Hello,
If I remember correctly, I think I have encountered the same issues with 12.04 and
a VirtualBox VM, and *it seems* that the VBox tools do some very weird things.
Have you tried cleaning up all packages that aren't needed by your setup?
On 04/04/2014 14:39, Brian Candler wrote:
> On 04/04/2014 12:59, Al
On 04/04/2014 14:11, Alfredo Deza wrote:
Have you set passwordless sudo on the remote host?
No. Ah... I missed this bit:
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
The reason being that I misread the preceding instruction:
" For
Alfredo Deza writes:
> Have you ensured that either there is no firewall up or that the ports
> that the monitors need to communicate between each other are open?
Yes, I am sure - the nodes are connected over one single switch, and
no firewall is active.
> If that is not the problem then t
On 04/04/2014 14:31, Diedrich Ehlerding wrote:
Alfredo Deza writes:
Have you ensured that either there is no firewall up or that the ports
that the monitors need to communicate between each other are open?
Yes, I am sure - the nodes are connected over one single switch, and
no firewall is
Hi all,
I found a solution: to get Ceph mounted after the network at boot, we need
to add the "_netdev" option (handled by the distribution). I knew that
already, but FUSE refuses to mount with that option! So we also need to patch
the file /sbin/mount.fuse.ceph to remove the _netdev option passed to FUSE.
Patch
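For reference, the fstab entry involved is along these lines (mount point and
id are illustrative):
id=admin  /mnt/ceph  fuse.ceph  defaults,_netdev  0 0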
On 04/04/2014 15:14, Diedrich Ehlerding wrote:
Hi Brian,
thank you for your response; however:
Including iptables? CentOS/RedHat default to iptables enabled and
closed.
"iptables -Lvn" to be 100% sure.
hvrrzceph1:~ # iptables -Lvn
iptables: No chain/target/match by that name.
hvrrzceph1:~ #
On Fri, Apr 4, 2014 at 6:33 PM, Florent B wrote:
> Hi all,
>
> My machines are all running Debian Wheezy.
>
> After a few days using kernel driver to mount my Ceph pools (with backports
> 3.13 kernel), I'm now switching to FUSE because of very high CPU usage with
> kernel driver (load average > 35
unsubscribe
Hi Ceph,
This month's Ceph User Committee meeting covered:
Tiering, erasure code
Using http://tracker.ceph.com/
CephFS
Miscellaneous
You will find an executive summary at:
https://wiki.ceph.com/Community/Meetings/Ceph_User_Committee_meeting_2014-04-03
The full log of the IR
Anyone know why this happens? What datastore fills up specifically?
2014-04-04 17:01:51.277954 mon.0 [WRN] reached concerning levels of available
space on data store (16% free)
2014-04-04 17:03:51.279801 7ffd0f7fe700 0 monclient: hunting for new mon
2014-04-04 17:03:51.280844 7ffd0d6f9700 0 -- 19
Well, that's not a mon crash.
On 04/04/2014 06:06 PM, Karol Kozubal wrote:
Anyone know why this happens? What datastore fills up specifically?
The monitor's. Your monitor is sitting on a disk that is filling up.
The monitor will check for available disk space to make sure it has
enough to work
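A quick way to check what the store is using (paths assume the default mon
data location):
$ df -h /var/lib/ceph/mon
$ du -sh /var/lib/ceph/mon/*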
Thank you for your reply.
I had over 190 GB left on the disk where the mon was residing. I deployed
with Fuel initially and didn't check how the mon was configured.
I am re-deploying now and will check what is configured out of the box by
Fuel 4.1.
Maybe it was pointing somewhere different than
/
Take a look at Proxmox VE. It has full support for Ceph, is commercially
supported, and uses KVM/QEMU.
-----Original Message-----
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Brian Candler
Sent: Friday, April 04, 2014 1:44 AM
To: Brian Beverage; ceph-users@list
CephFS use case
Wanted to throw in our use case.
We store a massive amount of time-series data files that we need to process as
they come in. Right now RAID is holding us over, but we are hitting the
upper limits; we would have to spend a serious amount of money to go to
the next level, and even then w
Loic,
The writeup has been helpful.
What I'm curious about (and hasn't been mentioned) is whether we can use
erasure coding with CephFS. What steps have to be taken in order to set up
erasure coding for CephFS?
In our case we'd like to take advantage of the savings since a large
chunk of our data is written onc
I've been seeing this warning on ceph -w for a while:
2014-04-04 11:26:29.438992 osd.3 [WRN] 84 slow requests, 1 included
below; oldest blocked for > 90124.336765 secs
2014-04-04 11:26:29.438996 osd.3 [WRN] slow request 1920.199044 seconds
old, received at 2014-04-04 10:54:29.239906: osd_op(clie
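In case it helps anyone else debugging this: the OSD's admin socket can dump
the blocked requests (socket path assumes the defaults):
$ ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok dump_ops_in_flight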
hi yan,
(taking the list in CC)
On 04/04/2014 04:44 PM, Yan, Zheng wrote:
On Thu, Apr 3, 2014 at 2:52 PM, Stijn De Weirdt wrote:
hi,
latest pprof output attached.
this is not the kernel client; this is ceph-fuse on EL6. starting the mds without
any ceph-fuse mounts works without issue. mounting
Hi folks-
Was this ever resolved? I’m not finding a resolution in the email chain;
apologies if I am missing it. I am experiencing this same problem. The cluster
works fine for object traffic, but I can’t seem to get rbd to work in 0.78. It
worked fine in 0.72.2 for me. Running Ubuntu 13.04 with 3.12 k
Meant to include this – what do these messages indicate? All systems have 0.78.
[1301268.557820] Key type ceph registered
[1301268.558524] libceph: loaded (mon/osd proto 15/24)
[1301268.579486] rbd: loaded rbd (rados block device)
[1301268.582364] libceph: mon1 10.0.0.102:6789 feature set mismatc
On Fri, Apr 4, 2014 at 11:15 AM, Milosz Tanski wrote:
> Loic,
>
> The writeup has been helpful.
>
> What I'm curious about (and hasn't been mentioned) is can we use
> erasure with CephFS? What steps have to be taken in order to setup
> erasure coding for CephFS?
Lots. CephFS takes advantage of al
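(For archive readers: the workaround usually mentioned on the list is to put a
replicated cache tier in front of the erasure-coded pool rather than pointing
clients at it directly - a sketch with illustrative names and PG counts:
$ ceph osd erasure-code-profile set ecprofile k=4 m=2
$ ceph osd pool create ecdata 128 128 erasure ecprofile
$ ceph osd pool create eccache 128
$ ceph osd tier add ecdata eccache
$ ceph osd tier cache-mode eccache writeback
$ ceph osd tier set-overlay ecdata eccache
)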
Aha – upgrading the kernel from 3.13 to 3.14 appears to have resolved the problem.
Thanks,
Joe
From: Gruher, Joseph R
Sent: Friday, April 04, 2014 11:48 AM
To: Ирек Фасихов; Ilya Dryomov
Cc: ceph-users@lists.ceph.com; Gruher, Joseph R
Subject: RE: [ceph-users] Ceph RBD 0.78 Bug or feature?
Meant to
I'm attempting to create a user using the S3 interface; however, I keep getting
access denied. Here is the debugging info from the radosgw.log file for my
request (for space I've removed the time stamp at the beginning of each line).
The only strange thing that I see is: required_mask= 0 user.op_m
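As a sanity check, it may be worth creating the user directly with
radosgw-admin, and making sure the calling user actually has admin caps
(the uids here are placeholders):
$ radosgw-admin user create --uid=testuser --display-name="Test User"
$ radosgw-admin caps add --uid=admin --caps="users=*"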
On Fri, Apr 4, 2014 at 10:47 PM, Gruher, Joseph R
wrote:
> Meant to include this - what do these messages indicate? All systems have
> 0.78.
>
>
>
> [1301268.557820] Key type ceph registered
>
> [1301268.558524] libceph: loaded (mon/osd proto 15/24)
>
> [1301268.579486] rbd: loaded rbd (rados blo
What is the status of your PGs on the slave zone side?
A down or stale PG could definitely cause this.
Maybe a quick ceph -s and ceph health detail could help locate the PG with
a problem, which could then help you get the correct ceph pg {pgid}
query command to find out which OSD is causin
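Concretely (the pgid is a placeholder to be taken from the health output):
$ ceph -s
$ ceph health detail
$ ceph pg 2.3f query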
On Tue, Mar 25, 2014 at 12:14 PM, Ирек Фасихов wrote:
> I want to create an image in format 2 through cache tier, but get an error
> creating.
>
> [root@ceph01 cluster]# rbd create rbd/myimage --size 102400 --image-format 2
> 2014-03-25 12:03:44.835686 7f668e09d760 1 -- :/0 messenger.start
> 2014
Hi,
I'm running Debian Wheezy, which has kernel version 3.2.54-2.
Should I be using rbd-fuse 0.72.2 or the kernel client to mount rbd devices?
I.e., this is an old kernel relative to Emperor, but maybe bugs are
backported to the kernel?
Thanks!
Chad.
On Fri, 4 Apr 2014, Chad Seys wrote:
> Hi,
> I'm running Debian Wheezy which has kernel version 3.2.54-2 .
>
> Should I be using rbd-fuse 0.72.2 or the kernel client to mount rbd devices?
> I.e. This is an old kernel relative to Emperor, but maybe bugs are
> backported to the kernel?
Not t
Just for future reference, so others can see it:
I did manage to solve this by adding the
rgw dns host entry to ceph.conf, pushing it to all ceph machines and
restarting the services!
Obviously I had to add the correct entries in the DNS server in advance
and adjust the rgw.conf file accordingly.
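For reference, the entry looks something like this in ceph.conf (the section
name is illustrative; as far as I know the option is spelled rgw dns name):
[client.radosgw.gateway]
rgw dns name = objects.example.com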
Interesting -
I had tried that on the RBD volumes only (via blockdev --setra, but I
think the effect is the same as tweaking read_ahead_kb directly),
however it made no difference. Unfortunately I didn't think to adjust it on
the OSDs too - I'll try it out.
One thing that seemed to make a big
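(On the equivalence: blockdev --setra takes 512-byte sectors while
read_ahead_kb takes KiB, so these two should amount to the same setting -
device name is illustrative:
$ sudo blockdev --setra 8192 /dev/rbd0
$ echo 4096 | sudo tee /sys/block/rbd0/queue/read_ahead_kb
)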
The cluster is currently recovering from a failed disk:
cluster 1604ec7a-6ceb-42fc-8c68-0a7896c4e120
health HEALTH_WARN 520 pgs backfill; 378 pgs degraded; 1 pgs
recovering; 2 pgs recovery_wait; 523 pgs stuck unclean; 84 requests are
blocked > 32 sec; recovery 4324271/37841102 objects d
Hello,
I've asked this before and now am getting ready to build and deploy my
first production Ceph cluster.
Is filestore_xattr_use_omap = true required (for an ext4-formatted OSD,
obviously) if the only usage of that cluster will be RBD?
I assume that having this off will improve perfor
Hi everyone
My Ceph cluster is running, and I'm planning to tune its performance.
I want to increase the object size from 4 MB to 16 MB (maybe 32 MB, ...).
Given the formula "stripe_unit" * "stripe_count" = "object_size",
I'm thinking of changing the following option:
"rbd_default_stripe_unit" fro
Hello,
On Fri, 4 Apr 2014 14:53:44 -0700 (PDT) Sage Weil wrote:
> On Fri, 4 Apr 2014, Chad Seys wrote:
> > Hi,
> > I'm running Debian Wheezy which has kernel version 3.2.54-2 .
> >
> > Should I be using rbd-fuse 0.72.2 or the kernel client to mount rbd
> > devices? I.e. This is an old kerne