On 06/26/2014 01:08 PM, Gregory Farnum wrote:
On Thu, Jun 26, 2014 at 12:52 PM, Kevin Horan wrote:
I am also getting inconsistent object errors on a regular basis, about 1-2
every week or so for about 300GB of data. All OSDs are using XFS filesystems.
Some OSDs are individual 3TB internal hard drives and some are external FC
attached raid6 arrays. I am using this cluster to store kvm images.
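For reference, inconsistent objects are normally located and repaired with
something like the following; the pg id is only an illustrative placeholder,
not one from this cluster:

    ceph health detail        # lists the pgs currently flagged "inconsistent"
    ceph pg repair <pgid>     # e.g. "ceph pg repair 2.5f" (placeholder id);
                              # asks the primary OSD to repair the bad copies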
While everything was moving from degraded to active+clean, it finally
finished probing. If it's still happening tomorrow, I'd try to find a Geek
on Duty on IRC (http://ceph.com/help/community/).
On 5/3/14 09:43, Kevin Horan wrote:
Craig,
Thanks for your response. [...] the operation just hangs.
Kevin
On 5/1/14 10:11, kevin horan wrote:
Here is how I got into this state. I have only 6 OSDs total, 3 on one host
(vashti) and 3 on another (zadok). I set the noout flag so I could reboot
zadok. Zadok was down for 2 minutes. When it came up [...]
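For reference, the steps described above correspond roughly to the following
commands (the exact invocations are a reconstruction, not quoted from the
original message):

    ceph osd set noout      # prevent down OSDs from being marked out
    # ... reboot zadok; its OSDs stay "in" while it is down ...
    ceph osd unset noout    # once its OSDs have come back up and rejoined
    ceph -s                 # watch until the cluster returns to HEALTH_OK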
I have an issue very similar to this thread:
http://article.gmane.org/gmane.comp.file-systems.ceph.user/3197. I have 19
unfound objects that are part of a VM image that I have already recovered
from backup. If I query pg 4.30 (the one with the unfound objects), it says
it is still querying [...]
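For reference, the commands typically involved at this point look roughly
like the following (pg 4.30 is the pg mentioned above; revert and delete are
shown only as the two available options, not as a recommendation):

    ceph health detail              # shows which pgs have unfound objects
    ceph pg 4.30 query              # peering / recovery state of the pg
    ceph pg 4.30 list_unfound       # list the unfound objects in that pg
    # once the affected image has been restored from backup, the unfound
    # objects can be given up on, either reverting them to an older version
    # or deleting them:
    ceph pg 4.30 mark_unfound_lost revert
    # or: ceph pg 4.30 mark_unfound_lost delete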
Ah, that sounds like what I want. I'll look into that, thanks.
Kevin
On 11/27/2013 11:37 AM, LaSalle, Jurvis wrote:
Is LUN masking an option in your SAN?
On 11/27/13, 2:34 PM, "Kevin Horan" wrote:
Thanks. I may have to go this route, but it seems awfully fragile. One stray
command could destroy the entire cluster, replicas and all. Since all disks
are visible to all nodes, any one of them could mount everything, corrupting
all OSDs at once.
Surely other people are using external FC drives [...]
I am working with a small test cluster, but the problems described here will
remain in production. I have an external fiber channel storage array and have
exported two 3TB disks (just as JBODs). I can use ceph-deploy to create an
OSD for each of these disks on a node named Vashti. So far everything [...]
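For reference, with the ceph-deploy syntax of that era this looks roughly
like the following (the device names are assumptions, not taken from the
original message):

    ceph-deploy disk list vashti        # confirm the FC LUNs are visible
    ceph-deploy osd create vashti:sdb   # one OSD per exported 3TB disk
    ceph-deploy osd create vashti:sdc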