From my observation, the s3gw.fcgi script seems to be completely
superfluous in the operation of Ceph. With or without the script, swift
requests execute correctly, as long as a radosgw daemon is running.
Is there something I'm missing here?
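(For reference, the sort of request I'm testing with is a simple auth/stat
round trip via the swift client; the user and key here are placeholders from
our own setup:

  swift -V 1.0 -A http://gateway-host/auth -U test:swift -K <secret-key> stat

This succeeds with or without the fcgi script in place.)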
Hi,
I'm having trouble setting up an object gateway on an existing cluster. The
cluster I'm trying to add the gateway to is running on a Precise 12.04
virtual machine.
The cluster is up and running, with a monitor, two OSDs, and a metadata
server. It returns HEALTH_OK and active+clean, so I am so
This was excellent advice. It should be on some official Ceph
troubleshooting page. It takes a while for the monitors to deal with new
info, but it works.
Thanks again!
--Greg
On Wed, Mar 18, 2015 at 5:24 PM, Sage Weil wrote:
> On Wed, 18 Mar 2015, Greg Chavez wrote:
> > We have a c
00 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
Finally, here is our ceph.conf: http://pastebin.com/Gmiq2V8S
So that's where we stand. Did we kill our Ceph Cluster (and thus our
OpenStack Cloud)? Or is there hope? Any suggestions would be greatly
appreciated.
--
--Greg Chavez
s, or is the -5 return code
> just a sign of a broken hard drive?
>
These are the OSDs creating new connections to each other because the previous
ones failed. That's not necessarily a problem (although here it's probably a
symptom of some kind of issue, given the freq
On Sun, Jun 22, 2014 at 6:44 AM, Mark Nelson
wrote:
> RBD Cache is definitely going to help in this use case. This test is
> basically just sequentially writing a single 16k chunk of data out, one at
> a time. IE, entirely latency bound. At least on OSDs backed by XFS, you
> have to wait for t
00KB, aggrb=9264KB/s, minb=9264KB/s, maxb=9264KB/s,
mint=44213msec, maxt=44213msec
Disk stats (read/write):
rbd2: ios=0/102499, merge=0/1818, ticks=0/5593828, in_queue=5599520,
util=99.85%
On Sun, Jun 22, 2014 at 6:42 PM, Christian Balzer wrote:
> On Sun, 22 Jun 2014 12:14:38 -0700 Greg Po
How does RBD cache work? I wasn't able to find an adequate explanation in
the docs.
On Sunday, June 22, 2014, Mark Kirkwood
wrote:
> Good point, I had neglected to do that.
>
> So, amending my ceph.conf [1]:
>
> [client]
> rbd cache = true
> rbd cache size = 2147483648
> rbd cache max dirty = 10
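One way to confirm those settings actually took effect on a running client
is to query its admin socket (the socket name below is a guess at a typical
path; adjust to match your setup):

  ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show | grep rbd_cache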
We actually do have a use pattern of large batch sequential writes, and
this dd is pretty similar to that use case.
A round-trip write with replication takes approximately 10-15ms to
complete. I've been looking at dump_historic_ops on a number of OSDs and
getting mean, min, and max for sub_op and
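(For anyone who wants to pull the same numbers: they come from the OSD admin
socket, e.g.

  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops

with the osd id and socket path adjusted for your hosts.)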
ournal...ssd able to do
> 180 MB/s etc), however I am still seeing writes to the spinners during the
> 8s or so that the above dd tests take).
> [2] Ubuntu 13.10 VM - I'll upgrade it to 14.04 and see if that helps at
> all.
>
>
> On 21/06/14 09:17, Greg Poirier wrote:
>
d
- osd_op times are approximately 6-12ms
- osd_sub_op times are 6-12 ms
- iostat reports service time of 6-12ms
- Latency between the storage and rbd client is approximately .1-.2ms
- Disabling replication entirely did not help significantly
On Fri, Jun 20, 2014 at 2:13 PM, Tyler Wilson wrot
I recently created a 9-node Firefly cluster backed by all SSDs. We have had
some pretty severe performance degradation when using O_DIRECT in our tests
(as this is how MySQL will be interacting with RBD volumes, this makes the
most sense for a preliminary test). Running the following test:
dd if=/
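(The shape of the test, with the device path and sizes here as illustrative
placeholders, is a series of small direct synchronous writes:

  dd if=/dev/zero of=/dev/rbd0 bs=16k count=65536 oflag=direct )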
On Saturday, April 19, 2014, Mike Dawson wrote:
>
>
> With a workload consisting of lots of small writes, I've seen client IO
> starved with as little as 5Mbps of traffic per host due to spindle
> contention once deep-scrub and/or recovery/backfill start. Co-locating OSD
> Journals on the same spi
We have a cluster in a sub-optimal configuration with data and journal
colocated on OSDs (that coincidentally are spinning disks).
During recovery/backfill, the entire cluster suffers degraded performance
because of the IO storm that backfills cause. Client IO becomes extremely
latent. I've tried
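the usual recovery throttles, e.g. (values illustrative, not a
recommendation):

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'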
Villalta [
> ja...@rubixnet.com]
> *Sent:* 12 April 2014 16:41
> *To:* Greg Poirier
> *Cc:* ceph-users@lists.ceph.com
> *Subject:* Re: [ceph-users] Useful visualizations / metrics
>
> I know ceph throws some warnings if there is high write latency. But I
> would be mo
sure there is a specific metric
>> in ceph for this but it would be awesome if there was.
>>
>>
>> On Sat, Apr 12, 2014 at 10:37 AM, Greg Poirier wrote:
>>
>> Curious as to how you define cluster latency.
cents.
>
>
> On Sat, Apr 12, 2014 at 10:02 AM, Greg Poirier wrote:
>
>> I'm in the process of building a dashboard for our Ceph nodes. I was
>> wondering if anyone out there had instrumented their OSD / MON clusters and
>> found particularly useful visualization
I'm in the process of building a dashboard for our Ceph nodes. I was
wondering if anyone out there had instrumented their OSD / MON clusters and
found particularly useful visualizations.
At first, I was trying to do ridiculous things (like graphing % used for
every disk in every OSD host), but I r
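(For anyone building something similar: the per-daemon counters come from
the admin socket, e.g.

  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump

with socket paths varying by install.)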
2 active+remapped+backfill_toofull
1 active+degraded+remapped+backfilling
recovery io 362 MB/s, 365 objects/s
client io 1643 kB/s rd, 6001 kB/s wr, 911 op/s
On Fri, Apr 11, 2014 at 5:45 AM, Greg Poirier wrote:
> So... our storage problems persisted for about 45 minutes. I gave
ef Johansson wrote:
>
>>
>> On 11/04/14 09:07, Wido den Hollander wrote:
>>
>>>
>>> On 11 April 2014 at 8:50, Josef Johansson wrote:
>>>>
>>>>
>>>> Hi,
>>>>
>>>> On 11/04/14 07:29, Wido den Holl
One thing to note:
All of our kvm VMs have to be rebooted. This is something I wasn't
expecting. Tried waiting for them to recover on their own, but that's not
happening. Rebooting them restores service immediately. :/ Not ideal.
On Thu, Apr 10, 2014 at 10:12 PM, Greg Poirier wrote
number of OSDs), but got held up by some networking nonsense.
Thanks for the tips.
On Thu, Apr 10, 2014 at 9:51 PM, Sage Weil wrote:
> On Thu, 10 Apr 2014, Greg Poirier wrote:
> > Hi,
> > I have about 200 VMs with a common RBD volume as their root filesystem
> and a
> &
Hi,
I have about 200 VMs with a common RBD volume as their root filesystem and
a number of additional filesystems on Ceph.
All of them have stopped responding. One of the OSDs in my cluster is
marked full. I tried stopping that OSD to force things to rebalance or at
least go to degraded mode, but
don't see any smart errors, but i'm slowly working my way through all of
the disks on these machines with smartctl to see if anything stands out.
On Fri, Mar 14, 2014 at 9:52 AM, Gregory Farnum wrote:
> On Fri, Mar 14, 2014 at 9:37 AM, Greg Poirier
> wrote:
> > So,
ocking progress. If it is the journal commit, check out how busy the
> disk is (is it just saturated?) and what its normal performance
> characteristics are (is it breaking?).
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Thu, Mar 13, 2014 at 5:48 PM, Gr
"0.086852",
{ "time": "2014-03-13 20:41:40.314633",
"event": "commit_sent"},
{ "time": "2014-03-13 20:41:40.314665",
"event":
the commitment to figuring this poo out.
On Wed, Mar 12, 2014 at 8:31 PM, Greg Poirier wrote:
> Increasing the logging further, and I notice the following:
>
> 2014-03-13 00:27:28.617100 7f6036ffd700 20 rgw_create_bucket returned
> ret=-1 bucket=test(@.rgw.buckets[us-west-1.15849318
g?
I did notice that .us-west-1.rgw.buckets and .us-west-1.rgw.buckets.index
weren't created. I created those, restarted radosgw, and still 403 errors.
On Wed, Mar 12, 2014 at 8:00 PM, Greg Poirier wrote:
> And the debug log because that last log was obviously not helpful...
>
in.rgw+.pools.avail to cache LRU end
2014-03-12 23:57:49.522672 7ff97e7dd700 2 req 1:0.024893:s3:PUT
/test:create_bucket:http status=403
2014-03-12 23:57:49.523204 7ff97e7dd700 1 == req done req=0x23bc650
http_status=403 ==
On Wed, Mar 12, 2014 at 7:36 PM, Greg Poirier wrote:
> The saga
tool?
On Wed, Mar 12, 2014 at 1:54 PM, Greg Poirier wrote:
> Also... what are linger_ops?
>
> ceph --admin-daemon /var/run/ceph/ceph-client.radosgw..asok
> objecter_requests
> { "ops": [],
> "linger_ops": [
> { "linger_id":
"pg": "7.31099063",
"osd": 28,
"object_id": "notify.5",
"object_locator": "@7",
"snapid": "head",
"registering": "head",
Rados GW and Ceph versions installed:
Version: 0.67.7-1precise
I create a user:
radosgw-admin --name client.radosgw. user create --uid test
--display-name "Test User"
It outputs some JSON that looks convincing:
{ "user_id": "test",
"display_name": "test user",
"email": "",
"suspended": 0,
e snapshot. The
>> thing to worry about is that it's a snapshot at the block device
>> layer, not the filesystem layer, so if you don't quiesce IO and sync
>> to disk the filesystem might not be entirely happy with you for the
>> same reasons that it won't be h
According to the documentation at
https://ceph.com/docs/master/rbd/rbd-snapshot/ -- snapshots require that
all I/O to a block device be stopped prior to making the snapshot. Is there
any plan to allow for online snapshotting so that we could do incremental
snapshots of running VMs on a regular basi
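(The workaround I'm aware of, assuming the guest filesystem supports being
frozen; mount point and image names are placeholders:

  fsfreeze -f /mnt/data                  # flush and block writes
  rbd snap create mypool/myimage@snap1
  fsfreeze -u /mnt/data                  # thaw

but doing that per-VM on a schedule doesn't really scale.)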
What's more troubling is that when this occurred we lost all
connectivity to the Ceph cluster.
On Wed, Feb 5, 2014 at 1:11 AM, Karan Singh wrote:
> Hi Greg
>
>
> I have seen this problem before in my cluster.
>
>
>
>- What ceph version you are running
>- Did you
I have a MON that at some point lost connectivity to the rest of the
cluster and now cannot rejoin.
Each time I restart it, it looks like it's attempting to create a new MON
and join the cluster, but the rest of the cluster rejects it, because the
new one isn't in the monmap.
I don't know why it
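(The documented recovery path I'm looking at is to extract the monmap from
the healthy quorum and inject it into the stray mon; the mon id below is a
placeholder:

  ceph mon getmap -o /tmp/monmap
  ceph-mon -i mon3 --inject-monmap /tmp/monmap )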
On Fri, Jan 24, 2014 at 4:28 PM, Yehuda Sadeh wrote:
> For each object that rgw stores it keeps a version tag. However this
> version is not ascending, it's just used for identifying whether an
> object has changed. I'm not completely sure what is the problem that
> you're trying to solve though.
Hello!
I have a great deal of interest in the ability to version objects in
buckets via the S3 API. Where is this on the roadmap for Ceph?
This is a pretty useful feature during failover scenarios between zones in
a region. For instance, take the example where you have a region with two
zones:
u
On Sun, Dec 8, 2013 at 8:33 PM, Mark Kirkwood wrote:
>
> I'd suggest testing the components separately - try to rule out NIC (and
> switch) issues and SSD performance issues, then when you are sure the bits
> all go fast individually test how ceph performs again.
>
> What make and model of SSD? I'
Hi.
So, I have a test cluster made up of ludicrously overpowered machines with
nothing but SSDs in them. Bonded 10Gbps NICs (802.3ad layer 2+3 xmit hash
policy, confirmed ~19.8 Gbps throughput with 32+ threads). I'm running
rados bench, and I am currently getting less than 1 MBps throughput:
sudo
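(The invocation is of the usual form, with pool name and thread count as
placeholders:

  rados bench -p testpool 60 write -t 32 )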
or removing the OSD.
Kevin, generally speaking, the OSDs that fill up on me are the same ones.
Once I lower the weights, they stay low or they fill back up again within
days or hours of re-raising the weight. Please try to lift them up though,
maybe you'll have better luck than me.
--Greg
things out in the
way you're expecting.
--Greg
On Tue, Nov 5, 2013 at 11:11 AM, Kevin Weiler
wrote:
> Hi guys,
>
> I have an OSD in my cluster that is near full at 90%, but we're using a
> little less than half the available storage in the cluster. Shouldn't thi
finds that open file descriptor and applies
the repair to it — which of course doesn’t help put it back into place!
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On October 24, 2013 at 2:52:54 AM, Matt Thompson (watering...@gmail.com) wrote:
>
>Hi Harry,
>
>I
ceph health detail. I've never seen a pg with the
state active+remapped+wait_backfill+backfill_toofull. Clearly I should
have increased the pg count more gradually, but here I am. I'm frozen,
afraid to do anything.
Any suggestions? Thanks.
Gregs are awesome, apparently. Thanks for the confirmation.
I know that threads are light-weight, it's just the first time I've ever
run into something that uses them... so liberally. ^_^
On Mon, Aug 26, 2013 at 10:07 AM, Gregory Farnum wrote:
> On Mon, Aug 26, 2013 at 9:24 AM,
So, in doing some testing last week, I believe I managed to exhaust the
number of threads available to nova-compute. After some
investigation, I found the pthread_create failure and increased nproc for
our Nova user to, what I considered, a ridiculous 120,000 threads after
reading that li
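(For concreteness, a bump like that typically goes into
/etc/security/limits.conf, with the user name from our deployment:

  nova  soft  nproc  120000
  nova  hard  nproc  120000 )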
On Fri, Aug 23, 2013 at 9:53 AM, Gregory Farnum wrote:
>
> Okay. It's important to realize that because Ceph distributes data
> pseudorandomly, each OSD is going to end up with about the same amount
> of data going to it. If one of your drives is slower than the others,
> the fast ones can get ba
Ah thanks, Brian. I will do that. I was going off the wiki instructions on
performing rados benchmarks. If I have the time later, I will change it
there.
On Fri, Aug 23, 2013 at 9:37 AM, Brian Andrus wrote:
> Hi Greg,
>
>
>> I haven't had any luck with the seq bench. It ju
On Thu, Aug 22, 2013 at 2:34 PM, Gregory Farnum wrote:
> You don't appear to have accounted for the 2x replication (where all
> writes go to two OSDs) in these calculations. I assume your pools have
>
Ah. Right. So I should then be looking at:
# OSDs * Throughput per disk / 2 / repl factor ?
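For example, with purely illustrative numbers, and reading the /2 as the
journal double-write on colocated journals: 24 OSDs * 100 MB/s each / 2
(journal) / 2 (replicas) = 600 MB/s expected aggregate.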
I should have also said that we experienced similar performance on
Cuttlefish. I have run identical benchmarks on both.
On Thu, Aug 22, 2013 at 2:23 PM, Oliver Daudey wrote:
> Hey Greg,
>
> I encountered a similar problem and we're just in the process of
> tracking it down
ing Scientific Linux 6.4
2.6.32 kernel
Ceph Dumpling 0.67.1-0.el6
OpenStack Grizzly
Libvirt 0.10.2
qemu-kvm 0.12.1.2-2.355.el6.2.cuttlefish
(I'm using qemu-kvm from the ceph-extras repository, which doesn't appear
to have a -.dumpling version yet).
Thanks very much for any assistance.
Greg
ntrol whether it is sysvinit or upstart that should be doing
> the restart.
>
> (And note that either way, upgrading the package doesn't restart the
> daemons for you.)
Which is probably good, especially if you are running osd and mon on the
same hosts.
> On Wed, 31 Jul
--
--Greg Chavez
On Wed, Jul 31, 2013 at 3:48 PM, Eric Eastman wrote:
> Hi Greg,
>
> I saw about the same thing on Ubuntu 13.04 as you did. I used
>
> apt-get -y update
> apt-get -y upgrade
>
> On all my cluster nodes to upgrade from 0.61.5 to 0.61.7 and then noticed
> that some of my sy
for object storage, which can have a higher allowance for
latency than volume storage.
A separate non-production cluster will allow you to test and validate new
> versions (including point releases within a stable series) before you
> attempt to upgrade your producti
etc/init.d/ceph: mon.kvm-cs-sn-14i not found (/etc/ceph/ceph.conf defines
, /var/lib/ceph defines )
I'm very worried that I have all my packages at 0.61.7 while my osd and
mon daemons could be running as old as 0.61.1!
Can anyone help me figure this out? Thanks.
--
--Greg Chavez
production cluster),
but I'd rather have a single cluster in order to more evenly distribute
load across all of the spindles.
Thoughts or observations from people with Ceph in production would be
greatly appreciated.
Greg
Any idea how we tweak this? If I want to keep my ceph node root
volume at 85% used, that's my business, man.
Thanks.
--Greg
On Mon, Jul 8, 2013 at 4:27 PM, Mike Bryant wrote:
> Run "ceph health detail" and it should give you more information.
> (I'd guess an osd o
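(If it does turn out to be the OSD nearfull threshold, the knob appears to
be mon osd nearfull ratio, default .85, e.g. in ceph.conf:

  [mon]
      mon osd nearfull ratio = .90

though I'd welcome confirmation that raising it is actually safe.)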
Watching. Thanks, Neil.
On Tue, Jul 16, 2013 at 12:43 PM, Neil Levine wrote:
> This seems like a good feature to have. I've created
> http://tracker.ceph.com/issues/5642
>
> N
>
>
> On Tue, Jul 16, 2013 at 8:05 AM, Greg Chavez wrote:
>>
>> This is inter
6T 1% /tmp/ceph_mount
>>>>>
>>>>> Please, can someone explain this to me?
>>>>
>>>> statfs/df show the raw capacity of the cluster, not the usable capacity.
>>>> How much data you can store is a (potentially) complex function of your
>>>> CRUSH rules and replication layout.
Even after restarting the OSDs, it hangs at 8.876%. Consequently,
many of our virts have crashed.
I'm hoping someone on this list can provide some suggestions.
Otherwise, I may have to blow this up. Thanks!
--
--Greg Chavez
m. I thought they were all AMD!
So... is this a problem? It seems to be running well.
Thanks.
--
--Greg Chavez
On 14/05/2013 13:00, Wolfgang Hennerbichler wrote:
On 05/14/2013 12:16 PM, Greg wrote:
...
And try to mount :
root@client2:~# mount /dev/rbd1 /mnt/
root@client2:~# umount /mnt/
root@client2:~# mount /dev/rbd2 /mnt/
mount: you must specify the filesystem type
What strikes me is the copy is
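(A quick way to check which pool each mapped device actually came from, in
case the copy landed in the default 'rbd' pool:

  rbd showmapped )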
t into the pool named 'rbd'.
if you rbd copy it's maybe easier to do it with explicit destination
pool name:
rbd cp sp/p1b16 sp/p2b16
hth
wolfgang
On 05/14/2013 11:47 AM, Greg wrote:
Hello,
I found some oddity when attempting to copy an rbd image in my pool
(using bobtail 0.56
with explicit destination
pool name:
rbd cp sp/p1b16 sp/p2b16
hth
wolfgang
On 05/14/2013 11:47 AM, Greg wrote:
Hello,
I found some oddity when attempting to copy an rbd image in my pool
(using bobtail 0.56.4), please see this:
I have a built, working RBD image named p1b16:
root@nas16:~# rb
Hello,
I found some oddity when attempting to copy an rbd image in my pool
(using bobtail 0.56.4), please see this:
I have a built, working RBD image named p1b16:
root@nas16:~# rbd -p sp ls
p1b16
Copying the image:
root@nas16:~# rbd -p sp cp p1b16 p2b16
Image copy: 100% complete...done.
Great
On 13/05/2013 17:01, Gandalf Corvotempesta wrote:
2013/5/13 Greg :
Thanks a lot for pointing this out, it indeed makes a *huge* difference!
# dd if=/mnt/t/1 of=/dev/zero bs=4M count=100
100+0 records in
100+0 records out
419430400 bytes (419 MB) copied, 5.12768 s, 81.8 MB/s
(caches
On 13/05/2013 15:55, Mark Nelson wrote:
On 05/13/2013 07:26 AM, Greg wrote:
On 13/05/2013 07:38, Olivier Bonvalet wrote:
On Friday, 10 May 2013 at 19:16 +0200, Greg wrote:
Hello folks,
I'm in the process of testing CEPH and RBD, I have set up a small
cluster of hosts running e
On 13/05/2013 07:38, Olivier Bonvalet wrote:
On Friday, 10 May 2013 at 19:16 +0200, Greg wrote:
Hello folks,
I'm in the process of testing CEPH and RBD, I have set up a small
cluster of hosts running each a MON and an OSD with both journal and
data on the same SSD (ok this is stupi
On 11/05/2013 13:24, Greg wrote:
On 11/05/2013 02:52, Mark Nelson wrote:
On 05/10/2013 07:20 PM, Greg wrote:
On 11/05/2013 00:56, Mark Nelson wrote:
On 05/10/2013 12:16 PM, Greg wrote:
Hello folks,
I'm in the process of testing CEPH and RBD, I have set up a small
cluster of
On 11/05/2013 02:52, Mark Nelson wrote:
On 05/10/2013 07:20 PM, Greg wrote:
On 11/05/2013 00:56, Mark Nelson wrote:
On 05/10/2013 12:16 PM, Greg wrote:
Hello folks,
I'm in the process of testing CEPH and RBD, I have set up a small
cluster of hosts running each a MON and an OSD
On 11/05/2013 00:56, Mark Nelson wrote:
On 05/10/2013 12:16 PM, Greg wrote:
Hello folks,
I'm in the process of testing CEPH and RBD, I have set up a small
cluster of hosts running each a MON and an OSD with both journal and
data on the same SSD (ok this is stupid but this is simp
Hello folks,
I'm in the process of testing CEPH and RBD. I have set up a small
cluster of hosts, each running a MON and an OSD with both journal and
data on the same SSD (OK, this is stupid, but it is simple enough to verify
the disks are not the bottleneck for one client). All nodes are connected on a
/ceph/mon.kvm-cs-sn-10i.pid -c /etc/ceph/ceph.conf
'
Starting ceph-create-keys on kvm-cs-sn-10i...
Luckily I hadn't set up my ssh keys yet, so that's as far as I got.
Would dearly love some guidance. Thanks in advance!
--Greg Chavez
On 06/05/2013 20:41, Glen Aidukas wrote:
New post below...
*From:*Greg [mailto:it...@itooo.com]
*Sent:* Monday, May 06, 2013 2:31 PM
*To:* Glen Aidukas
*Subject:* Re: [ceph-users] problem readding an osd
On 06/05/2013 20:05, Glen Aidukas wrote:
Greg,
Not sure where to use the
On 06/05/2013 19:23, Glen Aidukas wrote:
Hello,
I think this is a newbie question, but I tested everything and, yes, I
RTFM'd as best I could.
I'm evaluating Ceph, so I set up a cluster of 4 nodes. The nodes
are KVM virtual machines named ceph01 to ceph04 all running Ubuntu
12.04.2 LTS ea
On 03/05/2013 16:34, Travis Rhoden wrote:
I have a question about the "tell bench" command.
When I run this, is it behaving more or less like a dd on the drive?
It appears to be, but I wanted to confirm whether or not it is
bypassing all the normal Ceph stack that would be writing metadata,
c
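(The invocation I'm asking about, with an arbitrary osd id:

  ceph tell osd.0 bench

which, if I read the help output right, writes about 1 GB in 4 MB chunks.)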
The docs include information on how
this works.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wednesday, March 20, 2013 at 10:48 AM, Igor Laskovy wrote:
> Well, can you please clarify what exactly key I must to use? Do I need to
> get/generate it somehow from working cl
The MDS doesn't have any local state. You just need to start up the daemon
somewhere with a name and key that are known to the cluster (these can be
different from or the same as the one that existed on the dead node; doesn't
matter!).
-Greg
Software Engineer #42 @ http://inktank.
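(A minimal sketch of that, with a placeholder daemon name 'b' and caps along
the lines the docs use:

  ceph auth get-or-create mds.b mds 'allow' osd 'allow rwx' mon 'allow rwx' \
      -o /var/lib/ceph/mds/ceph-b/keyring
  ceph-mds -i b )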
oting that
there are FUSE-like systems for Windows, and that Samba is a workaround. Sorry.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tuesday, March 19, 2013 at 8:58 AM, Igor Laskovy wrote:
> Thanks for reply!
>
> Actually I would like found some way to
> Marc-Antoine Perennou
Well, for now the fixes are for stuff like "make analysis take less time, and
export timing information more easily". The most immediately applicable one is
probably http://tracker.ceph.com/issues/4354, which I hope to start on next
week and should be done by
iseconds. I'd
look into what your split applications are sharing across those spaces.
On the up side for Ceph, >80% of your requests take "0" milliseconds and ~95%
of them take less than 2 milliseconds. Hurray, it's not ridiculously slow most
of the time. :)
-Greg
ke you'll get some useful results; you
might also just be able to enable the CRUSH tunables
(http://ceph.com/docs/master/rados/operations/crush-map/#tunables).
John, this is becoming a more common problem; we should generate some more
targeted documentation around it. :)
-Greg
Software Enginee
There's not a really good per-version list, but tracker.ceph.com is reasonably
complete and has a number of views.
-Greg
On Monday, March 11, 2013 at 8:22 AM, Igor Laskovy wrote:
> Thanks for the quick reply.
> Ok, so at this time looks like better to avoid split networks acr
rief
look at the OSD log here — can you describe what you did to the OSD during that
logging period? (In particular I see a lot of pg_log messages, but not the sub
op messages that would be associated with this OSD doing a deep scrub, nor the
internal heartbeat tim
mention them (NFS export in particular — it works right now but isn't in great
shape due to NFS filehandle caching).
Thanks,
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
I believe the debian folder only includes stable releases; .57 is a dev
release. See http://ceph.com/docs/master/install/debian/ for more! :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tuesday, March 5, 2013 at 8:44 AM, Scott Kinder wrote:
> When is ceph 0.57 go