> If you want to get rid of filestore on Btrfs, start a proper deprecation
> process and inform users that support for it is going to be removed in
> the near future. The documentation must be updated accordingly and it
> must be clearly emph
but we're aiming for HA
and redundancy.
Thanks!
Sean Purdy
scrubbing reasons.
Output of related commands below.
Thanks for any help,
Sean Purdy
$ sudo ceph osd tree
ID CLASS WEIGHT   TYPE NAME         UP/DOWN REWEIGHT PRI-AFF
-1       32.73651 root default
-3       10.91217     host store01
 0   hdd  1.81870         osd.0          up      1.0     1.0
quorum.
OSDs had 15 minutes of "ERROR: unable to open OSD superblock on
/var/lib/ceph/osd/ceph-9: (2) No such file or directory" before becoming
available.
Advice welcome.
Thanks,
Sean Purdy
On Tue, 15 Aug 2017, Gregory Farnum said:
> On Tue, Aug 15, 2017 at 4:23 AM Sean Purdy wrote:
> > I have a three node cluster with 6 OSD and 1 mon per node.
> >
> > I had to turn off one node for rack reasons. While the node was down, the
> > cluster was still runn
Hi,
On Thu, 17 Aug 2017, Gregory Farnum said:
> On Wed, Aug 16, 2017 at 4:04 AM Sean Purdy wrote:
>
> > On Tue, 15 Aug 2017, Gregory Farnum said:
> > > On Tue, Aug 15, 2017 at 4:23 AM Sean Purdy
> > wrote:
> > > > I have a three node cluster with 6 OSD an
On Tue, 15 Aug 2017, Sean Purdy said:
> Luminous 12.1.1 rc1
>
> Hi,
>
>
> I have a three node cluster with 6 OSD and 1 mon per node.
>
> I had to turn off one node for rack reasons. While the node was down, the
> cluster was still running and accepting files vi
sd@NN.service" will
work.
What happens at disk detect and mount time? Is there a timeout somewhere I can
extend?
How can I tell udev to have another go at mounting the disks?
If it's in the docs and I've missed it, apologies.
Thanks in advance,
Sean Purdy
On Wed, 23 Aug 2017, David Turner said:
> This isn't a solution to fix them not starting at boot time, but a fix to
> not having to reboot the node again. `ceph-disk activate-all` should go
> through and start up the rest of your osds without another reboot.
Thanks, will try next time.
Sean
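For anyone hitting the same thing, a minimal sketch of that fix on a node where some OSDs missed activation at boot (assuming ceph-disk-provisioned OSDs, as here):

$ sudo ceph-disk activate-all          # activate any prepared OSD partitions not yet started
$ sudo ceph osd tree | grep down       # check nothing on this host is still marked down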
Datapoint: I have the same issue on 12.1.1, three nodes, 6 disks per node.
On Thu, 31 Aug 2017, Piotr Dzionek said:
> For a last 3 weeks I have been running latest LTS Luminous Ceph release on
> CentOS7. It started with 4th RC and now I have Stable Release.
> Cluster runs fine, however I noticed t
2 22:48 18218 s3://test/1486716654.15214271.docx.gpg.99
I have not tried rclone or ACL futzing.
Sean Purdy
> I have opened an issue on s3cmd too
>
> https://github.com/s3tools/s3cmd/issues/919
>
> Thanks for your help
>
> Yoann
>
> > I have a fresh luminou
2.16.0.45:6789/0},
election epoch 378, leader 0 store01, quorum 0,1,2 store01,store02,store03
and everything's happy.
What should I look for/fix? It's a fairly vanilla system.
Thanks in advance,
Sean Purdy
the filestore journal
Our Bluestore disks are hosted on RAID controllers. Should I set cache policy
as WriteThrough for these disks then?
Sean Purdy
> the bluestore wal/rocksdb partitions can be used to allow both faster
> devices (ssd/nvme) and faster sync writes (compared to sp
On Wed, 20 Sep 2017, Burkhard Linke said:
> Hi,
>
>
> On 09/20/2017 12:24 PM, Sean Purdy wrote:
> >On Wed, 20 Sep 2017, Burkhard Linke said:
> >>The main reason for having a journal with filestore is having a block device
> >>that supports synchronous
 94.125.129.7     3 u  411 1024  377  0.388  -0.331  0.139
*172.16.0.19     158.43.128.33    2 u  289 1024  377  0.282  -0.005  0.103
Sean
> On Wed, Sep 20, 2017 at 2:50 AM Sean Purdy wrote:
>
> >
> > Hi,
> >
> >
> > Luminous 12.2.0
> >
bably trying to connect to the 3rd
> monitor, but why? When this monitor is not in quorum.
There's a setting for client timeouts. I forget where.
Sean
> -Original Message-
> From: Sean Purdy [mailto:s.pu...@cv-library.co.uk]
> Sent: donderdag 21 september 2017 12
On Thu, 28 Sep 2017, Matthew Vernon said:
> Hi,
>
> TL;DR - the timeout setting in ceph-disk@.service is (far) too small - it
> needs increasing and/or removing entirely. Should I copy this to ceph-devel?
Just a note. Looks like the Debian stretch luminous packages have a
10,000-second timeout:
fr
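If your packaged unit still carries a short timeout, one way to extend it is a systemd drop-in; this sketch assumes the unit takes its timeout from a CEPH_DISK_TIMEOUT environment variable, as the stretch packages above appear to:

$ sudo systemctl edit ceph-disk@.service
# in the editor, add an override with a larger value:
[Service]
Environment=CEPH_DISK_TIMEOUT=10000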
On Thu, 10 Aug 2017, John Spray said:
> On Thu, Aug 10, 2017 at 4:31 PM, Sean Purdy wrote:
> > Luminous 12.1.1 rc
And 12.2.1 stable
> > We added a new disk and did:
> > That worked, created osd.18, OSD has data.
> >
> > However, mgr output at http://localho
Hi,
Is there any way that radosgw can ping something when a file is removed or
added to a bucket?
Or use its sync facility to sync files to AWS/Google buckets?
Just thinking about backups. What do people use for backups? Been looking at
rclone.
Thanks,
Sean
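A sketch of the rclone route we're considering, with hypothetical remotes "ceph" (the radosgw endpoint) and "aws" defined as S3-type remotes in rclone.conf:

# ~/.config/rclone/rclone.conf (illustrative)
[ceph]
type = s3
endpoint = http://rgw.example.local:7480
access_key_id = ...
secret_access_key = ...

# then mirror a bucket to the offsite remote
$ rclone sync ceph:mybucket aws:mybucket-backup --transfers 8 --checksum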
Are you using radosgw? I found this page useful when I had a similar issue:
http://www.osris.org/performance/rgw.html
Sean
On Wed, 18 Oct 2017, Ольга Ухина said:
> Hi!
>
> I have a problem with ceph luminous 12.2.1. It was upgraded from kraken,
> but I'm not sure if it was a problem in kraken
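The settings that page discusses mostly end up as ceph.conf options on the rgw hosts; a sketch with illustrative values, not recommendations:

# /etc/ceph/ceph.conf on the rgw node (section name hypothetical)
[client.rgw.store01]
rgw thread pool size = 512
rgw frontends = civetweb port=7480 num_threads=512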
Hi,
The default collectd ceph plugin seems to parse the output of "ceph daemon
perf dump" and generate graphite output. However, I see more fields in the
dump than appear in collectd/graphite.
Specifically I see get stats for rgw (ceph_rate-Client_rgw_nodename_get) but
not put stats (e.g. ceph_rate
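If it helps to compare, the counters can be read straight from the rgw admin socket and grepped for the get/put names; the socket path below is the usual default and may differ on your setup:

$ sudo ceph daemon /var/run/ceph/ceph-client.rgw.$(hostname -s).asok perf dump \
    | grep -E '"(get|put)'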
I can use? I've found Spreadshirt's haproxy fork
which traps requests and updates redis -
https://github.com/spreadshirt/s3gw-haproxy Anybody used that?
Thanks,
Sean Purdy
arch feature that was added recently is using this to send
> objects metadata into elasticsearch for indexing.
>
> Yehuda
>
> On Tue, Nov 28, 2017 at 2:22 PM, Sean Purdy wrote:
> > Hi,
> >
> >
> > http://docs.ceph.com/docs/master/radosgw/s3/ says that S3 obj
--bypass-gc
Thanks,
Sean Purdy
do you expect to see during this operation that you're trying
> to avoid? I'm unaware of any such rebalancing (unless it might be the new
> automatic OSD rebalancing mechanism in Luminous to keep OSDs even... but
> deleting data shouldn't really trigger that if the clu
While we're at it, is there a release date for 12.2.6? It fixes a
reshard/versioning bug for us.
Sean
Hi Sean,
On Tue, 10 Jul 2018, Sean Redmond said:
> Can you please link me to the tracker 12.2.6 fixes? I have disabled
> resharding in 12.2.5 due to it running endlessly.
http://tracker.ceph.com/issues/22721
Sean
> Thanks
>
> On Tue, Jul 10, 2018 at 9:07 AM, Sean
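For anyone else stuck on 12.2.5, the usual stopgap until the fix lands is to turn dynamic resharding off and keep an eye on the reshard queue; a sketch (section name hypothetical):

# /etc/ceph/ceph.conf on the rgw nodes, then restart radosgw
[client.rgw.store01]
rgw dynamic resharding = false

# list anything still queued for resharding
$ sudo radosgw-admin reshard list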
Hi,
On my test servers, I created a bucket using 12.2.5, turned on versioning,
uploaded 100,000 objects, and the bucket broke, as expected. Autosharding said
it was running but didn't complete.
Then I upgraded that cluster to 12.2.7. Resharding seems to have finished, but
now that cluster s
Hi,
I was testing versioning and autosharding in luminous 12.2.5 upgrading to
12.2.7 I wanted to know if the upgraded autosharded bucket is still usable.
Looks like it is, but a bucket limit check seems to show too many objects.
On my test servers, I created a bucket using 12.2.5, turned on
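For anyone reproducing this, the checks being referred to are roughly the following (bucket name illustrative); the last one is the repair step usually suggested:

$ sudo radosgw-admin bucket limit check                   # objects per shard and fill status
$ sudo radosgw-admin bucket stats --bucket=testbucket     # num_objects as the index sees it
$ sudo radosgw-admin bucket check --bucket=testbucket --check-objects --fix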
On Wed, 5 Sep 2018, John Spray said:
> On Wed, Sep 5, 2018 at 8:38 AM Marc Roos wrote:
> >
> >
> > The adviced solution is to upgrade ceph only in HEALTH_OK state. And I
> > also read somewhere that is bad to have your cluster for a long time in
> > an HEALTH_ERR state.
> >
> > But why is this ba
Hi,
We were on 12.2.5 when a bucket with versioning and 100k objects got stuck when
autoreshard kicked in. We could download but not upload files. But upgrading
to 12.2.7 then running bucket check now shows twice as many objects, according
to bucket limit check. How do I fix this?
Sequenc
On Fri, 7 Sep 2018, Paul Emmerich said:
> Mimic
Unless you run debian, in which case Luminous.
Sean
> 2018-09-07 12:24 GMT+02:00 Vincent Godin :
> > Hello Cephers,
> > if i had to go for production today, which release should i choose :
> > Luminous or Mimic ?
I doubt it - Mimic needs gcc v7 I believe, and Trusty's a bit old for that.
Even the Xenial releases aren't straightforward and rely on some backported
packages.
Sean, missing Mimic on debian stretch
On Wed, 19 Sep 2018, Jakub Jaszewski said:
> Hi Cephers,
>
> Any plans for Ceph Mimic packag
Hi,
We have a bucket that we are trying to empty. Versioning and lifecycle was
enabled. We deleted all the objects in the bucket. But this left a whole
bunch of Delete Markers.
aws s3api delete-object --bucket B --key K --version-id V is not deleting the
delete markers.
Any ideas? We wan
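For reference, the usual client-side sweep over delete markers, which is exactly what isn't taking effect here, looks roughly like this:

# list every delete marker and delete it by version id (bucket name B as above)
$ aws s3api list-object-versions --bucket B \
    --query 'DeleteMarkers[].[Key,VersionId]' --output text |
  while IFS=$'\t' read -r key vid; do
    aws s3api delete-object --bucket B --key "$key" --version-id "$vid"
  done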
Hi,
How do I delete an RGW/S3 bucket and its contents if the usual S3 API commands
don't work?
The bucket has S3 delete markers that S3 API commands are not able to remove,
and I'd like to reuse the bucket name. It was set up for versioning and
lifecycles under ceph 12.2.5 which broke the
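The server-side route, bypassing the S3 API entirely, is radosgw-admin on one of the cluster hosts; a sketch (bucket name illustrative, and --bypass-gc skips the garbage-collection queue for big buckets):

$ sudo radosgw-admin bucket rm --bucket=mybucket --purge-objects
# or, for a large bucket, skip GC:
$ sudo radosgw-admin bucket rm --bucket=mybucket --purge-objects --bypass-gc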
On Sat, 29 Sep 2018, Konstantin Shalygin said:
> > How do I delete an RGW/S3 bucket and its contents if the usual S3 API
> > commands don't work?
> >
> > The bucket has S3 delete markers that S3 API commands are not able to
> > remove, and I'd like to reuse the bucket name. It was set up for
>
Hi,
Versions 12.2.7 and 12.2.8. I've set up a bucket with versioning enabled and
upload a lifecycle configuration. I upload some files and delete them,
inserting delete markers. The configured lifecycle DOES remove the deleted
binaries (non current versions). The lifecycle DOES NOT remove
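The rule I'd expect to clear the markers is the AWS-style ExpiredObjectDeleteMarker expiration; whether this release honours it is exactly what's in question, but for reference the configuration looks like this (file and bucket names illustrative):

$ cat lc.json
{
  "Rules": [
    {
      "ID": "expire-noncurrent-and-markers",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
      "Expiration": {"ExpiredObjectDeleteMarker": true}
    }
  ]
}
$ aws s3api put-bucket-lifecycle-configuration --bucket B \
    --lifecycle-configuration file://lc.json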
On Wed, 7 Mar 2018, Wei Jin said:
> Same issue here.
> Will Ceph community support Debian Jessie in the future?
Seems odd to stop it right in the middle of minor point releases. Maybe it was
an oversight? Jessie's still supported in Debian as oldstable and not even in
LTS yet.
Sean
> On
We had something similar recently. We had to disable "rgw dns name" in the end.
Sean
On Thu, 29 Mar 2018, Rudenko Aleksandr said:
>
> Hi friends.
>
>
> I'm sorry, maybe it isn't a bug, but I don't know how to solve this problem.
>
> I know that absolute URIs are supported in civetweb and it w
Just a quick note to say thanks for organising the London Ceph/OpenStack day.
I got a lot out of it, and it was nice to see the community out in force.
Sean Purdy
On Sat, 21 Apr 2018, Marc Roos said:
>
> I wondered if there are faster ways to copy files to and from a bucket,
> like eg not having to use the radosgw? Is nfs-ganesha doing this faster
> than s3cmd?
I find the go-based S3 clients e.g. rclone, minio mc, are a bit faster than the
python-based
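Most of the difference seems to be parallelism; rclone, for instance, lets you turn the number of concurrent transfers up explicitly (remote name hypothetical):

$ rclone copy /data/to/upload ceph:mybucket --transfers 16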
marker is the latest version. This is available
in AWS for example.
Thanks,
Sean Purdy
Hi,
Mimic has a new feature, a cloud sync module for radosgw to sync objects to
some other S3-compatible destination.
This would be a lovely thing to have here, and ties in nicely with object
versioning and DR. But I am put off by confusion and complexity with the whole
multisite/realm/zone
I get this too, since I last rebooted a server (one of three).
ceph -s says:
  cluster:
    id:     a8c34694-a172-4418-a7dd-dd8a642eb545
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum box1,box2,box3
    mgr: box3(active), standbys: box1, box2
    osd: N osds: N up, N in
    rgw: 3
The other way to do it is with policies.
e.g. a bucket owned by user1, but read access granted to user2:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "user2 policy",
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/user2"]},
      "Action": ["s3:GetObject",
aintained going forwards, and we're a debian shop. I appreciate Mimic is a
non-LTS release, I hope issues of debian support are resolved by the time of
the next LTS.
Sean Purdy
retch. Is
http://tracker.ceph.com/issues/22365 a fix for this? (12.2.3)
In addition, systemctl start/stop/restart radosgw isn't working and I seem to
have to run the radosgw command and options manually.
Thanks,
Sean Purdy
On Wed, 27 Jun 2018, Matthew Vernon said:
> Hi,
>
> On 27/06/18 11:18, Thomas Bennett wrote:
>
> > We have a particular use case that we know that we're going to be
> > writing lots of objects (up to 3 million) into a bucket. To take
> > advantage of sharding, I'm wanting to shard buckets, withou
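A sketch of the two usual ways to get a sharded index without waiting for dynamic resharding: a default shard count for newly created buckets, or an explicit reshard of an existing one (numbers and names illustrative):

# /etc/ceph/ceph.conf on the rgw nodes: new buckets get this many index shards
[client.rgw.store01]
rgw override bucket index max shards = 64

# or reshard an existing bucket by hand
$ sudo radosgw-admin bucket reshard --bucket=mybucket --num-shards=64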
ceph-mon coexist peacefully with a different zookeeper
already on the same machine?
Thanks,
Sean Purdy
Hi,
Another newbie question. Do people using radosgw mirror their buckets
to AWS S3 or compatible services as a backup? We're setting up a
small cluster and are thinking of ways to mitigate total disaster.
What do people recommend?
Thanks,
Sean
On Fri, 1 Feb 2019 08:47:47 +0100
Wido den Hollander wrote:
>
>
> On 2/1/19 8:44 AM, Abhishek wrote:
> > We are glad to announce the eleventh bug fix release of the Luminous
> > v12.2.x long term stable release series. We recommend that all users
> > * There have been fixes to RGW dynamic and
Hi,
Will debian packages be released? I don't see them in the nautilus repo. I
thought that Nautilus was going to be debian-friendly, unlike Mimic.
Sean
On Tue, 19 Mar 2019 14:58:41 +0100
Abhishek Lekshmanan wrote:
>
> We're glad to announce the first release of Nautilus v14.2.0 stable
>
Hi,
A while back I reported a bug in luminous where lifecycle on a versioned bucket
wasn't removing delete markers.
I'm interested in this phrase in the pull request:
"you can't expect lifecycle to work with dynamic resharding enabled."
Why not?
https://github.com/ceph/ceph/pull/29122
https: