On Thu, Sep 27, 2018 at 6:34 PM Luis Periquito wrote:
>
> I think your objective is to move the data without anyone else
> noticing. What I usually do is reduce the priority of the recovery
> process as much as possible. Do note this will make the recovery take
> a looong time, and will also make
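For reference, a minimal sketch of that kind of throttling, using the usual recovery options (the values below are only illustrative, not a recommendation):

# slow recovery/backfill down as far as it will go
ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
ceph tell osd.* injectargs '--osd_recovery_op_priority 1 --osd_recovery_sleep 0.1'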
On Thu, Sep 27, 2018 at 9:57 PM Maged Mokhtar wrote:
>
>
>
> On 27/09/18 17:18, Dan van der Ster wrote:
> > Dear Ceph friends,
> >
> > I have a CRUSH data migration puzzle and wondered if someone could
> > think of a clever solution.
> >
> > Consider an osd tree like this:
> >
> >-2 4428
On Fri, Sep 28, 2018 at 12:51 AM Goncalo Borges wrote:
>
> Hi Dan
>
> Hope to find you ok.
>
> Here goes a suggestion from someone who has been sitting on the sidelines for
> the last 2 years but following things as much as possible.
>
> Will a weight set per pool help?
>
> This is only possible in l
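For anyone reading along, a rough sketch of what a per-pool weight set looks like, assuming a Luminous-or-later cluster and a pool called 'data' (both are assumptions here):

# create a positional weight set for the pool and tweak one OSD inside it
ceph osd crush weight-set create data positional
ceph osd crush weight-set reweight data osd.12 1.5
ceph osd crush weight-set ls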
Quoting by morphin (morphinwith...@gmail.com):
> Good news... :)
>
> After I tried everything, I decided to re-create my MONs from the OSDs and
> I used the script:
> https://paste.ubuntu.com/p/rNMPdMPhT5/
>
> And it worked!!!
Congrats!
> I think when 2 servers crashed and came back at the same time some h
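For the archives: the script presumably follows the documented recovery of the mon store from the OSDs; a rough sketch of that procedure, with example paths only (see the disaster-recovery docs for the full steps, including keyrings):

# collect mon data from every OSD on this host into a scratch store
ms=/tmp/mon-store
mkdir -p $ms
for osd in /var/lib/ceph/osd/ceph-*; do
    ceph-objectstore-tool --data-path $osd --op update-mon-db --mon-store-path $ms
done
# then rebuild store.db for the monitor with the admin keyring
ceph-monstore-tool $ms rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring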
Hi Brett,
Most probably your device is reported as an HDD by the kernel; please check
by running the following:
cat /sys/block/sdf/queue/rotational
It should be 0 for SSD.
But as far as I know BlueFS (i.e. the DB+WAL stuff) doesn't have any
specific behavior that depends on this flag, so most pr
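A quick way to check all devices at once, plus the CRUSH-side override in case only the device class was detected wrongly (osd.5 is just an example id):

# 0 = non-rotational (SSD/NVMe), 1 = rotational (HDD)
grep . /sys/block/sd*/queue/rotational
# the CRUSH device class is separate from BlueFS behaviour, but can be corrected like this
ceph osd crush rm-device-class osd.5
ceph osd crush set-device-class ssd osd.5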
It looks like if I move files between different data pools of the
cephfs, something still refers to the 'old location' and gives an
Input/output error. I assume this is because I am using different client
ids for authentication.
With the same user as configured in ganesha, mounting (ker
Hi,
On my cluster I tried to clear all objects from a pool. I used the
command "rados -p bench ls | xargs rados -p bench rm". (rados -p bench
cleanup doesn't clean everything, because there was a lot of other
testing going on here).
Now 'rados -p bench ls' returns a list of objects, which do
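For the record, a couple of blunter alternatives if the pool itself is disposable (a sketch only; pool deletion also needs mon_allow_pool_delete to be enabled):

# remove every object in the pool in one go
rados purge bench --yes-i-really-really-mean-it
# or drop and recreate the pool entirely
ceph osd pool delete bench bench --yes-i-really-really-mean-it
ceph osd pool create bench 64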
On Fri, Sep 28, 2018 at 2:25 PM Frank (lists) wrote:
>
> Hi,
>
> On my cluster I tried to clear all objects from a pool. I used the
> command "rados -p bench ls | xargs rados -p bench rm". (rados -p bench
> cleanup doesn't clean everything, because there was a lot of other
> testing going on here)
On Fri, Sep 28, 2018 at 2:28 PM Marc Roos wrote:
>
>
> It looks like if I move files between different data pools of the
> cephfs, something still refers to the 'old location' and gives an
> Input/output error. I assume this is because I am using different client
> ids for authentication.
>
>
If I copy the file out6 to out7 in the same location, I can read the
out7 file on the nfs client.
-----Original Message-----
To: ceph-users
Subject: [ceph-users] cephfs issue with moving files between data pools
gives Input/output error
Looks like that if I move files between different dat
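Two things worth checking here, assuming the mount path and the ganesha client name used below (both are examples):

# which data pool does the moved inode actually point at?
getfattr -n ceph.file.layout.pool /mnt/cephfs/m/out6
# and do the caps of the key ganesha uses cover that pool?
ceph auth get client.ganesha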
Is this useful? I think this is the section of the client log when
[@test2 m]$ cat out6
cat: out6: Input/output error
2018-09-28 16:03:39.082200 7f1ad01f1700 10 client.3246756 fill_statx on
0x100010943bc snap/devhead mode 040557 mtime 2018-09-28 14:49:35.349370
ctime 2018-09-28 14:49:35.349
Hello
I've attempted to increase the number of placement groups of the pools
in our test cluster and now ceph status (below) is reporting problems. I
am not sure what is going on or how to fix this. Troubleshooting
scenarios in the docs don't seem to quite match what I am seeing.
I have no idea h
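In case it helps while digging: pg_num and pgp_num normally have to be raised together, and "ceph health detail" usually names the exact problem (the pool name and 256 below are placeholders):

ceph osd pool set <pool> pg_num 256
ceph osd pool set <pool> pgp_num 256
ceph health detail
ceph -s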
Created: https://tracker.ceph.com/issues/36250
On Tue, Sep 25, 2018 at 9:08 PM Brad Hubbard wrote:
>
> On Tue, Sep 25, 2018 at 11:31 PM Josh Haft wrote:
> >
> > Hi cephers,
> >
> > I have a cluster of 7 storage nodes with 12 drives each and the OSD
> > processes are regularly crashing. All 84 ha
Hi,
How do I delete an RGW/S3 bucket and its contents if the usual S3 API commands
don't work?
The bucket has S3 delete markers that S3 API commands are not able to remove,
and I'd like to reuse the bucket name. It was set up for versioning and
lifecycles under ceph 12.2.5 which broke the
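A radosgw-admin sketch that is the usual fallback when the S3 API refuses ('mybucket' is a placeholder; whether it copes with the broken 12.2.5 versioning metadata is the open question):

# force-remove the bucket and all its contents, bypassing garbage collection
radosgw-admin bucket rm --bucket=mybucket --purge-objects --bypass-gc
# if the bucket index itself is damaged, check/repair it first
radosgw-admin bucket check --bucket=mybucket --fix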
Hi there, I'm trying to enable the Swift static site ability in my rgw.
It appears to be supported (http://docs.ceph.com/docs/master/radosgw/swift/) but
I can't find any documentation on it.
All I can find is for S3:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/object_gate
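A hedged sketch of the config side, pieced together from the option names rather than any document (the hostname is an example, and I am not certain rgw honours the Swift container metadata):

# ceph.conf, in the rgw client section
rgw_enable_static_website = true
rgw_enable_apis = s3, s3website, swift, swift_auth, admin
rgw_dns_s3website_name = objects-website.example.com
# stock Swift staticweb is driven by container metadata like this
swift post mycontainer -m 'web-index:index.html' -m 'web-error:error.html'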
Hi,
On 28.09.2018 18:04, Vladimir Brik wrote:
Hello
I've attempted to increase the number of placement groups of the pools
in our test cluster and now ceph status (below) is reporting problems. I
am not sure what is going on or how to fix this. Troubleshooting
scenarios in the docs don't seem
I guess from the name that the pool is mapped to SSDs only, and you only have 20 SSDs.
So you should have about ~2000 effective PGs, taking replication into account.
Your pool has ~10k effective PGs with k+m=5, and you seem to have 5
more pools.
Check "ceph osd df tree" to see how many PGs per OSD you
On 2018/08/21 1:24 pm, Jason Dillaman wrote:
Can you collect any librados / librbd debug logs and provide them via
pastebin? Just add / tweak the following in your "/etc/ceph/ceph.conf"
file's "[client]" section and re-run to gather the logs.
[client]
log file = /path/to/a/log/file
debug ms = 1
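The quoted snippet is cut off in the archive; for librbd issues the usual extra debug settings that follow would be something like this (a guess, not necessarily what the original mail listed):

debug rbd = 20
debug rados = 20
debug objecter = 20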
John Spray wrote:
> On Fri, Sep 28, 2018 at 2:25 PM Frank (lists) wrote:
>>
>> Hi,
>>
>> On my cluster I tried to clear all objects from a pool. I used the
>> command "rados -p bench ls | xargs rados -p bench rm". (rados -p bench
>> cleanup doesn't clean everything, because there was a lot of othe
On 2018/09/28 2:26 pm, Andre Goree wrote:
On 2018/08/21 1:24 pm, Jason Dillaman wrote:
Can you collect any librados / librbd debug logs and provide them via
pastebin? Just add / tweak the following in your "/etc/ceph/ceph.conf"
file's "[client]" section and re-run to gather the logs.
[client]
l
How do I delete an RGW/S3 bucket and its contents if the usual S3 API commands
don't work?
The bucket has S3 delete markers that S3 API commands are not able to remove,
and I'd like to reuse the bucket name. It was set up for versioning and
lifecycles under ceph 12.2.5 which broke the bucket