Hi Srinivas,
In our case, OCFS2 is not directly interacting with SCSI. Here we have
Ceph storage that is mounted on many client systems using OCFS2. Moreover, OCFS2
supports SCSI.
https://blogs.oracle.com/wim/entry/what_s_up_with_ocfs2
http://www.linux-mag.com/id/7809/
My point is that the RBD device should support SCSI reservations, so that OCFS2
can take a write lock while writing on a particular client, to avoid corruption.
Thanks,
Srinivas
From: gjprabu [mailto:gjpr...@zohocorp.com]
Sent: Monday, January 04, 2016 1:40 PM
To: Srinivasula Maram
Cc: Somnath Roy; ceph-users; Si
I am not sure why you want to layer a clustered file system (OCFS2) on top of
Ceph RBD. Seems like a huge overhead and a ton of complexity.
Better to use CephFS if you want Ceph at the bottom, or to just use iSCSI LUNs
under OCFS2.
Regards,
Ric
On 01/04/2016 10:28 AM, Srinivasula Maram wr
Hello,
How to run multiple RadosGW instances under the same zone?
Assume there are two hosts, HOST_1 and HOST_2. I want to run
two RadosGW instances on these two hosts for my zone ZONE_MULI.
So, when one of the radosgw instances is down, I can still access the zone.
There are some questions:
1. Ho
Thanks...
On 23/12/2015, 21:33, "Gregory Farnum" wrote:
>On Wed, Dec 23, 2015 at 5:20 AM, HEWLETT, Paul (Paul)
> wrote:
>> Seasons Greetings Cephers..
>>
>> Can I assume that http://tracker.ceph.com/issues/12200 is fixed in
>> Infernalis?
>>
>> Any chance that it can be backported to Hammer
On Wed, Dec 30, 2015 at 5:06 PM, Bryan Wright wrote:
> Hi folks,
>
> I have an mds cluster stuck in replay. The mds log file is filled with
> errors like the following:
>
> 2015-12-30 12:00:25.912026 7f9f5b88b700 0 -- 192.168.1.31:6800/13093 >>
> 192.168.1.24:6823/31155 pipe(0x4ccc800 sd=18 :442
Hi Joseph,
You can try haproxy as proxy for load balancing and failover.
Thanks,
Srinivas
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Joseph
Yang
Sent: Monday, January 04, 2016 2:09 PM
To: ceph-us...@ceph.com; Joseph Yang
Subject: [ceph-users] How to run multiple Ra
Hi,
I was doing some tests with opstate on radosgw, and I find the behavior
strange:
When I try to retrieve the status of a particular object by specifying
client_id, object and op_id, the return value is an empty array.
The behavior is identical using radosgw-admin and the REST API.
Is i
Hi Srinivas,
I am not sure whether RBD supports SCSI, but OCFS2 has the capability to lock and
unlock while writing.
(kworker/u192:5,71152,28):dlm_unlock_lock_handler:424 lvb: none
(kworker/u192:5,71152,28):__dlm_lookup_lockres:232
O00946c510c
(kworker/u192:5,71152,28
On 2016-01-04 10:37:43 +, Srinivasula Maram said:
Hi Joseph,
You can try haproxy as proxy for load balancing and failover.
Thanks,
Srinivas
We have 6 hosts running RadosGW with haproxy in front of them without problems.
Depending on your setup you might even consider running haproxy l
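A minimal haproxy front end for two radosgw instances might look something like
the sketch below. The hostnames, the civetweb port 7480, and the health check are
assumptions for illustration, not taken from anyone's actual setup:

frontend rgw_frontend
    bind *:80
    mode http
    default_backend rgw_backend

backend rgw_backend
    mode http
    balance roundrobin
    option httpchk GET /
    server rgw1 HOST_1:7480 check
    server rgw2 HOST_2:7480 check

If one radosgw instance goes down, the health check pulls it out of the rotation
and requests keep flowing to the surviving instance.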
Hi Cephers and Happy New Year
I am under the impression that ceph was refactored to allow dynamic enabling of
lttng in Infernalis.
Is there any documentation on how to enable lttng in Infernalis? (I cannot
find anything…)
Regards
Paul
On Fri, Jan 1, 2016 at 9:14 AM, Bryan Wright wrote:
> Gregory Farnum writes:
>
>> Or maybe it's 0.9a, or maybe I just don't remember at all. I'm sure
>> somebody recalls...
>>
>
> I'm still struggling with this. When copying some files from the ceph file
> system, it hangs forever. Here's some
On Fri, Jan 1, 2016 at 12:15 PM, Bryan Wright wrote:
> Hi folks,
>
> "ceph pg dump_stuck inactive" shows:
>
> 0.e8    incomplete    [406,504]    406    [406,504]    406
>
> Each of the osds above is alive and well, and idle.
>
> The output of "ceph pg 0.e8 query" is shown below. All of t
Gregory Farnum writes:
> I can't parse all of that output, but the most important and
> easiest-to-understand bit is:
> "blocked_by": [
> 102
> ],
>
> And indeed in the past_intervals section there are a bunch where it's
> just 102. You really want min_siz
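For readers who land in the same situation: a pool's min_size can be checked and
changed with the standard commands below (the pool name and value are
placeholders; this is a sketch, not a recommendation for this specific cluster):

ceph osd pool get <poolname> min_size
ceph osd pool set <poolname> min_size 2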
LTTng tracing is now enabled via the following config file options:
osd tracing = false # enable OSD tracing
osd objectstore tracing = false # enable OSD object store tracing (only
supported by FileStore)
rados tracing = false # enable librados LTTng tracing
rbd trac
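Once the relevant option is flipped to true and the daemon restarted, a capture
session with the stock LTTng tools looks roughly like this. The 'librbd' provider
name is an assumption on my part, so check what "lttng list -u" actually reports
on your build:

lttng create ceph-trace
lttng enable-event --userspace 'librbd:*'
lttng start
# ... run the workload you want to trace ...
lttng stop
lttng view | less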
Bryan,
If you can read the disk that was osd.102, you may wish to attempt this
process to recover your data:
https://ceph.com/community/incomplete-pgs-oh-my/
Good luck!
Michael J. Kidd
Sr. Software Maintenance Engineer
Red Hat Ceph Storage
On Mon, Jan 4, 2016 at 8:32 AM, Bryan Wright wrote:
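The post linked above is built around ceph-objectstore-tool. Very roughly, and
with paths, OSD numbers, and the PG id as placeholders rather than exact
instructions, the export/import step looks like this (run only with the OSDs
stopped, and keep copies of everything):

# export the PG from the old osd.102 data directory
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-102 \
    --journal-path /var/lib/ceph/osd/ceph-102/journal \
    --pgid 0.e8 --op export --file /tmp/0.e8.export

# import it into a stopped OSD that should hold the PG
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-406 \
    --journal-path /var/lib/ceph/osd/ceph-406/journal \
    --op import --file /tmp/0.e8.export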
Hello,
Can anyone please tell me what is the right way to do quiesced RBD
snapshots in libvirt (OpenStack)?
My Ceph version is 0.94.3.
I found two possible ways; neither of them is working for me. I wonder if
I'm doing something wrong:
1) Do VM fsFreeze through QEMU guest agent, perform RBD snapshot,
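Done by hand, option 1) would look roughly like the sketch below; the domain,
pool, and image names are made up, and it assumes the QEMU guest agent channel is
configured in the domain XML:

# quiesce guest filesystems via the QEMU guest agent
virsh domfsfreeze myvm

# take the RBD snapshot while the guest is frozen
rbd snap create mypool/myimage@consistent-snap

# thaw the guest again
virsh domfsthaw myvm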
Michael Kidd writes:
> If you can read the disk that was osd.102, you may wish to attempt this
process to recover your data:https://ceph.com/community/incomplete-pgs-oh-my/
> Good luck!
Hi Michael,
Thanks for the pointer. After looking at it, I'm wondering if the necessity
to copy the pgs to
I am surprised by the error you are seeing with exclusive lock enabled. The
rbd CLI should be able to send the 'snap create' request to QEMU without an
error. Are you able to provide "debug rbd = 20" logs from shortly before and
after your snapshot attempt?
--
Jason Dillaman
- Origin
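For anyone wanting to provide such logs: client-side RBD logging is normally
enabled in the [client] section of ceph.conf on the hypervisor and picked up when
the VM (or the rbd CLI) next starts; the log path below is only an example:

[client]
    debug rbd = 20
    log file = /var/log/ceph/rbd-client.log

Make sure the path is writable by the qemu/libvirt user, then reproduce the
snapshot attempt and grab the log.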
Hi all,
I'm trying to get comfortable with managing and benchmarking ceph clusters, and
I'm struggling to understand rbd bench-write results vs using dd against mounted
rbd images.
I have a 6 node test cluster running version 0.94.5, 2 nodes per rack, 20 OSDs
per node. Write journals are on the
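For comparison, the two kinds of test being contrasted look roughly like this.
The image and device names are placeholders, and note that rbd bench-write goes
through the librbd client cache while dd with oflag=direct does not:

# librbd-level benchmark (sizes/threads chosen arbitrarily here)
rbd bench-write test-image --io-size 4096 --io-threads 16

# write test against a mapped krbd device, bypassing the page cache
dd if=/dev/zero of=/dev/rbd0 bs=4M count=256 oflag=direct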
Hi all,
I just did an update of a storage cluster here, or rather, I've done one
node out of three updating to Infernalis from Hammer.
I shut down the daemons, as per the guide, then did a recursive chown of
the /var/lib/ceph directory, then struck the following when re-starting:
> 2016-01-05 07
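For anyone else doing the same upgrade, the ownership step itself is the
recursive chown from the release notes, run with the daemons stopped. Journal
devices on separate partitions are the piece the chown does not reach, which is
what the udev rule discussed next takes care of:

# on the node being upgraded, with all ceph daemons stopped
chown -R ceph:ceph /var/lib/ceph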
I ran into this same issue, and found that a reboot ended up setting the
ownership correctly. If you look at /lib/udev/rules.d/95-ceph-osd.rules
you'll see the magic that makes it happen:
# JOURNAL_UUID
ACTION=="add", SUBSYSTEM=="block", \
ENV{DEVTYPE}=="partition", \
ENV{ID_PART_ENTRY_TYPE}=
Hi Bryan,
On 05/01/16 07:45, Stillwell, Bryan wrote:
> I ran into this same issue, and found that a reboot ended up setting the
> ownership correctly. If you look at /lib/udev/rules.d/95-ceph-osd.rules
> you'll see the magic that makes it happen
Ahh okay, good-o, so a reboot should be fine. I gu
All,
I am testing an all-SSD and NVMe (journal) config for a customer's first
endeavor investigating Ceph for performance-oriented workloads.
Can someone recommend a combination that performs well and stays reliable
under high load?
Terrible high-level question, I know, but we have had a number of issu
There was a bug in the rbd CLI bench-write tool that would result in the same
offset being re-written [1]. Since writeback cache is enabled (by default), in
your example only 4MB would be written to the OSD at the conclusion of the
test. The fix should have been scheduled for backport to Hamme
Hi Cephers,
Happy New Year! I have a question regarding the long PG peering issue.
Over the last several days I have been looking into the *long peering*
problem when we start an OSD / OSD host. What I observed was that the
two peering worker threads were throttled (stuck) when trying to
queue new transa
Ah, thanks, that makes sense. I see bug 14225 opened for the backport.
I'm looking at
http://tracker.ceph.com/projects/ceph-releases/wiki/HOWTO_backport_commits,
I'll see if I can get a PR up for that.
-emile
On 1/4/16, 3:11 PM, "Jason Dillaman" wrote:
>There was a bug in the rbd CLI bench-
We need every OSDMap persisted before persisting later ones because we
rely on there being no holes for a bunch of reasons.
The deletion transactions are more interesting. They are not part of the
boot process; these are deletions resulting from merging in a log from
a peer which logically removed an
Hello Srinivas,
Yes, we can use haproxy as a frontend. But the precondition is that multiple
RadosGW instances sharing the *SAME CEPH POOLS* are running. I only want the
master zone to keep one copy of all data. I want to access the data through
*ANY* radosgw instance.
And it said in http://docs.ceph.com
Thank you Martin,
Yes, "nslookup " was not working.
After configuring DNS on all nodes, the nslookup issue got sorted out.
But the "some monitors have still not reach quorun" issue was still seen.
I was using user "ceph" for ceph deployment. The user "ceph" is reserved
for ceph internal use.
Afte
On Mon, 4 Jan 2016, Guang Yang wrote:
> Hi Cephers,
> Happy New Year! I got question regards to the long PG peering..
>
> Over the last several days I have been looking into the *long peering*
> problem when we start a OSD / OSD host, what I observed was that the
> two peering working threads were
It works fine. The federated config reference is not related to running
multiple instances on the same zone.
Just set up 2 radosgws and give each instance the exact same configuration. (I
use different client names in ceph.conf, but i bet it would work even if
the client names were identical)
Officia
On 01/01/2016 08:22 PM, Adam wrote:
> I'm running into the same install problem described here:
> https://www.spinics.net/lists/ceph-users/msg23533.html
>
> I tried compiling from source (ceph-9.2.0) to see if it had been fixed
> in the latest code, but I got the same error as with the pre-compi
Yes, it should work. Even if you have multiple radosgws/instances, all
instances use the same pools.
Ceph.conf:
[client.radosgw.gateway-1]
host = host1
keyring = /etc/ceph/ceph.client.admin.keyring
rgw_socket_path = /var/log/ceph/radosgw1.sock
log_file = /var/log/ceph/radosgw-1.host1.log
rgw_max_chun
It works. Thank you for your time (Srinivas and Ben).
Supplement:
client.radosgw.gateway-1 and client.radosgw.gateway-2 need only share
the same Ceph pools.
A keyring must be created for both client.radosgw.gateway-1 and
client.radosgw.gateway-2.
thx
joseph
On 01/05/2016 01:26 PM, Sriniva