Hi,
Gregory Farnum wrote:
I forget which clients you're using — is rbd caching enabled?
Yes, the clients are qemu-kvm-rhev with latest librbd from dumpling and
rbd cache = true.
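(For reference, that corresponds to a client-side ceph.conf roughly like the sketch below; the writethrough-until-flush line is an assumption added for safety with qemu, not something stated in this mail.)
    [client]
        rbd cache = true
        # assumed extra safety setting, not mentioned above:
        rbd cache writethrough until flush = true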
Cheers, Dan
-- Dan van der Ster || Data & Storage Services || CERN IT Department --
Hello Greg,
I've searched but don't see any backtraces... I've tried to get some
more info out of the logs. I really hope there is something interesting
in it:
It all started two days ago with an authentication error:
2014-04-14 21:08:55.929396 7fd93d53f700 1 mds.0.0
standby_replay_rest
Mike Dawson wrote:
Dan,
Could you describe how you harvested and analyzed this data? Even
better, could you share the code?
Cheers,
Mike
First enable debug_filestore=10, then you'll see logs like this:
2014-04-17 09:40:34.466749 7fb39df16700 10
filestore(/var/lib/ceph/osd/osd.0) write
4.2
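(To turn those write lines into a size distribution, one hedged approach is a quick shell histogram, assuming each write line carries an offset~length token; the exact field layout varies by version, and the log path is a placeholder.)
    # histogram of write sizes seen by the filestore
    grep ') write ' /var/log/ceph/ceph-osd.0.log \
      | grep -o '[0-9]\+~[0-9]\+' \
      | awk -F'~' '{ hist[$2]++ } END { for (s in hist) print s, hist[s] }' \
      | sort -n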
Christian Balzer wrote:
> I'm trying to understand that distribution, and the best explanation
> I've come up with is that these are ext4/xfs metadata updates,
> probably atime updates. Based on that theory, I'm going to test
> noatime on a few VMs and see if I notice a change in the distribu
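(A quick way to try that on a running VM is to remount with noatime, or set it in /etc/fstab; the device and mount point below are placeholders.)
    mount -o remount,noatime /
    # or persistently, in /etc/fstab:
    /dev/vda1  /  ext4  defaults,noatime  0  1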
Whatever happened - It fixed itself!?
When restarting, I got ~ 165k log messages like:
2014-04-17 07:30:14.856421 7fc50b991700 0 log [WRN] : ino 1f24fe0
2014-04-17 07:30:14.856422 7fc50b991700 0 log [WRN] : ino 1f24fe1
2014-04-17 07:30:14.856423 7fc50b991700 0 log [WRN] : ino 1f
On Thu, 17 Apr 2014 12:58:55 +1000 Blair Bethwaite wrote:
> Hi Kyle,
>
> Thanks for the response. Further comments/queries...
>
> > Message: 42
> > Date: Wed, 16 Apr 2014 06:53:41 -0700
> > From: Kyle Bader
> > Cc: ceph-users
> > Subject: Re: [ceph-users] SSDs: cache pool/tier versus node-loca
On Thu, Apr 17, 2014 at 4:10 PM, Georg Höllrigl wrote:
> Whatever happened - It fixed itself!?
>
> When restarting, I got ~ 165k log messages like:
> 2014-04-17 07:30:14.856421 7fc50b991700 0 log [WRN] : ino 1f24fe0
> 2014-04-17 07:30:14.856422 7fc50b991700 0 log [WRN] : ino 1f24fe1
>
I am currently testing this functionality. What is your issue?
On 04/17/2014 07:32 AM, maoqi1982 wrote:
Hi list,
I followed http://ceph.com/docs/master/radosgw/federated-config/ to
test the multi-geography function, but it failed. Has anyone successfully
deployed federated gateways? Is the function in ce
Thanks Dan!
Thanks,
Mike Dawson
On 4/17/2014 4:06 AM, Dan van der Ster wrote:
Mike Dawson wrote:
Dan,
Could you describe how you harvested and analyzed this data? Even
better, could you share the code?
Cheers,
Mike
First enable debug_filestore=10, then you'll see logs like this:
2014-04-1
Hi!
What do you think: is it a good idea to add an RBD block device as a hot spare
drive to a Linux software RAID?
Pavel.
On 04/17/2014 02:37 PM, Pavel V. Kaygorodov wrote:
Hi!
What do you think: is it a good idea to add an RBD block device as a hot spare
drive to a Linux software RAID?
Well, it could work, but why? What is the total setup going to be?
RAID over a couple of physical disks with RBD as a hot spare?
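(For what it's worth, a rough sketch of what such a setup could look like; the image name, size, and device names are made up for illustration, and this is not an endorsement.)
    rbd create rbd/md-spare --size 1024000     # spare image sized like the physical disks (placeholder size, MB)
    rbd map rbd/md-spare                       # appears as e.g. /dev/rbd0
    mdadm --manage /dev/md0 --add /dev/rbd0    # added to a healthy array, it sits as a spare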
On 17 Apr 2014, at 16:41, Wido den Hollander wrote:
> On 04/17/2014 02:37 PM, Pavel V. Kaygorodov wrote:
>> Hi!
>>
>> What do you think: is it a good idea to add an RBD block device as a hot spare
>> drive to a Linux software RAID?
>>
>
> Well, it could work, but why? What is the total set
So in the meantime, are there any common workarounds?
I'm assuming that monitoring the image-used/image-size ratio and, if it's greater
than some tolerance, creating a new image and moving the file system content over
is an effective, if crude, approach. I'm not clear on how to measure the
amount of storage an image
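(One hedged way to approximate how much space an image actually occupies, with a placeholder pool/image name, is to sum the extents reported by rbd diff.)
    # rough estimate of allocated data in an image; not an exact accounting
    rbd diff rbd/myimage | awk '{ used += $2 } END { print used/1024/1024 " MB" }'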
On Thu, Apr 17, 2014 at 12:45 AM, Georg Höllrigl wrote:
> Hello Greg,
>
> I've searched but don't see any backtraces... I've tried to get some more
> info out of the logs. I really hope there is something interesting in it:
>
> It all started two days ago with an authentication error:
>
> 2014-
>> >> I think the timing should work that we'll be deploying with Firefly and
>> >> so
>> >> have Ceph cache pool tiering as an option, but I'm also evaluating
>> >> Bcache
>> >> versus Tier to act as node-local block cache device. Does anybody have
>> >> real
>> >> or anecdotal evidence about whic
Well, this is embarrassing.
After working on this for a week, it finally created last night. The
only thing that changed in the past 2 days was that I ran ceph osd unset
noscrub and ceph osd unset nodeep-scrub. I had disabled both scrubs in
the hope that backfilling would finish faster.
I o
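(For reference, those flags are toggled cluster-wide as shown below; whether clearing them was actually related to the fix is unclear.)
    ceph osd set noscrub           # pause regular scrubbing
    ceph osd set nodeep-scrub      # pause deep scrubbing
    # ...once backfilling settles:
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub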
> Message: 20
> Date: Thu, 17 Apr 2014 17:45:39 +0900
> From: Christian Balzer
> To: "ceph-users@lists.ceph.com"
> Subject: Re: [ceph-users] SSDs: cache pool/tier versus node-local
> block cache
> Message-ID: <20140417174539.6c713...@batzmaru.gol.ad.jp>
> Content-Type: text/plain; charset
On Fri, 18 Apr 2014 11:34:15 +1000 Blair Bethwaite wrote:
> > Message: 20
> > Date: Thu, 17 Apr 2014 17:45:39 +0900
> > From: Christian Balzer
> > To: "ceph-users@lists.ceph.com"
> > Subject: Re: [ceph-users] SSDs: cache pool/tier versus node-local
> > block cache
> > Message-ID: <201404
Hi Yehuda,
With the same keys I am able to access the buckets through Cyberduck, but
when I use the same keys with the Admin Ops API it throws an access denied
error. I have assigned all the permissions to this user, but I still get the
same access denied error...
http://ceph.com/docs/master/radosgw/s
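(The Admin Ops API needs explicit admin caps on the user, which are separate from the S3/bucket permissions; a hedged sketch of adding them, with a placeholder uid and cap list, is below.)
    radosgw-admin caps add --uid=someuser --caps="users=read,write;buckets=read,write"
    # the exact caps required depend on which admin endpoints are called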
Hi Peter,
thanks for your reply.
We plan to test the multi-site data replication, but we encountered a
problem.
All users and metadata were replicated OK, while the data failed. radosgw-agent
always responded "the state is error".
>Message: 22
>Date: Thu, 17 Apr 2014 12:03:06 +0100
>