Hi,
On 5/21/19 9:46 PM, Robert LeBlanc wrote:
I'm at a new job working with Ceph again and am excited to be back in the
community!
I can't find any documentation to support this, so please help me
understand if I got this right.
I've got a Jewel cluster with CephFS and we have an inconsistent
Hi Alex,
The cluster has been idle at the moment, being new and all. I
noticed some disk-related errors in dmesg, but that was about it.
It looked to me like the failure was not detected for the next
20-30 minutes: all OSDs were up and in, and health was OK. The OSD
logs had no smoking gun either.
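For what it's worth, a rough sketch of checking the settings that govern how
quickly an unresponsive OSD gets marked down (option names as in the Jewel-era
docs; osd.0 is just an example daemon with a local admin socket):

ceph daemon osd.0 config show | grep -E 'osd_heartbeat_grace|mon_osd_report_timeout'
# osd_heartbeat_grace: how long peers wait before reporting an OSD as failed
# mon_osd_report_timeout: how long the monitors wait without any status
# reports before marking an unresponsive OSD down on their own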
try 'umount -f'
On Tue, May 21, 2019 at 4:41 PM Marc Roos wrote:
>
>
>
>
>
> [@ceph]# ps -aux | grep D
> USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START TIME COMMAND
> root     12527  0.0  0.0 123520   932 pts/1 D+   09:26 0:00 umount
> /home/mail-archive
> root 14549 0.2 0
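A sketch of narrowing that down a bit (a plain grep for D also matches the
header and the grep itself) and of the usual follow-up, using the mount point
from the listing above:

# list only processes whose state actually contains D (uninterruptible sleep)
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'

# force the unmount; if that hangs too, a lazy unmount detaches the mount
# point now and cleans up once the stuck references go away
umount -f /home/mail-archive
umount -l /home/mail-archive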
On Tue, May 21, 2019 at 6:10 AM Ryan Leimenstoll
wrote:
>
> Hi all,
>
> We recently encountered an issue where our CephFS filesystem unexpectedly was
> set to read-only. When we look at some of the logs from the daemons I can see
> the following:
>
> On the MDS:
> ...
> 2019-05-18 16:34:24.341 7
Hi,
thank you, it worked. The PGs are not incomplete anymore. Still, we have
another problem: there are 7 inconsistent PGs, and a ceph pg repair is
not doing anything. I just get "instructing pg 1.5dd on osd.24 to
repair" and nothing happens. Does somebody know how we can get the PGs
to repair?
It's been suggested here in the past to disable deep scrubbing temporarily
before running the repair, because the repair does not execute immediately
but gets queued up behind deep scrubs.
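A minimal sketch of that sequence, using the PG id from the message above
(the flag is cluster-wide, so unset it again once the repairs have run):

rados list-inconsistent-obj 1.5dd --format=json-pretty   # see what is actually inconsistent
ceph osd set nodeep-scrub                                # stop queueing new deep scrubs
ceph pg repair 1.5dd                                     # repeat for each inconsistent PG
ceph -w                                                  # watch for the repair result
ceph osd unset nodeep-scrub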
Hello,
I created an erasure code profile named ecprofile-42 with the following
parameters:
$ ceph osd erasure-code-profile set ecprofile-42 plugin=jerasure k=4 m=2
Next I created a new pool using the ec profile from above:
$ ceph osd pool create my_erasure_pool 64 64 erasure ecprofile-42
The
CRUSH only specifies where the chunks are placed, not how many chunks there
are (the pool specifies this).
This is the same with replicated rules: the pool specifies the number of
replicas, the rule specifies where they are put.
You can use one CRUSH rule for multiple EC pools.
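A quick way to see that split on the pool created above (the rule name below
is an assumption; when no rule is given, a rule named after the pool gets
created automatically):

$ ceph osd pool get my_erasure_pool size                  # 6, i.e. k+m from the profile
$ ceph osd pool get my_erasure_pool erasure_code_profile  # ecprofile-42
$ ceph osd crush rule dump my_erasure_pool                # placement only, no chunk count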
Paul
--
Paul Emmerich
Looking
On Wed, May 22, 2019 at 3:03 PM Rainer Krienke wrote:
>
> Hello,
>
> I created an erasure code profile named ecprofile-42 with the following
> parameters:
>
> $ ceph osd erasure-code-profile set ecprofile-42 plugin=jerasure k=4 m=2
>
> Next I created a new pool using the ec profile from above:
>
>
Hi guys,
Any help here?
Sent from my iPhone
> On 20 May 2019, at 2:48 PM, John Hearns wrote:
>
> I found similar behaviour on a Nautilus cluster on Friday. Around 300 000
> open connections, which I think were the result of a benchmarking run that
> was terminated. I restarted the radosgw s
Which states are all these connections in?
ss -tn | awk '{print $1}' | sort | uniq -c
/Torben
On 22.05.2019 15:19, Li Wang wrote:
> Hi guys,
>
> Any help here?
>
> Sent from my iPhone
>
> On 20 May 2019, at 2:48 PM, John Hearns wrote:
>
> I found similar behaviour on a Nautilus clus
On 22.05.19 at 15:16, Dan van der Ster wrote:
Yes, this is basically what I was looking for; however, I had expected it
to be a little more visible in the output...
Rainer
>
> Is this what you're looking for?
>
> # ceph osd pool ls detail -f json | jq .[0].erasure_code_profile
> "jera_4plus2"
On Wed, May 22, 2019 at 03:38:27PM +0200, Rainer Krienke wrote:
On 22.05.19 at 15:16, Dan van der Ster wrote:
Yes, this is basically what I was looking for; however, I had expected it
to be a little more visible in the output...
Mind opening a tracker ticket on http://tracker.ceph.com/ so we can
Thanks for the reply! We will be more proactive about evicting clients in the
future rather than waiting.
One followup, however: it seems that the filesystem going read-only was only
a WARNING state, which didn't immediately catch our eye due to some other
rebalancing operations. Is there a rea
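For reference, a minimal sketch of listing and evicting CephFS clients on the
MDS (the MDS name and client id are placeholders):

ceph tell mds.<mds-name> client ls                    # find the session id of the stuck client
ceph tell mds.<mds-name> client evict id=<client-id>  # evict it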
Hi All,
What are the metadata pools in an RGW deployment that need to sit on the
fastest medium to improve the client experience from an access standpoint?
Also, is there an easy way to migrate these pools in a PROD scenario with
minimal to no outage, if possible?
Regards,
Nikhil
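One commonly used approach, as a rough sketch (assumes SSD/HDD device classes
are set up and the default zone's pool names; changing a pool's CRUSH rule
rebalances it online, so plan for the backfill traffic):

# replicated rule restricted to the ssd device class
ceph osd crush rule create-replicated rgw-fast default host ssd

# point the latency-sensitive RGW pools at that rule
ceph osd pool set default.rgw.buckets.index crush_rule rgw-fast
ceph osd pool set default.rgw.meta crush_rule rgw-fast
ceph osd pool set default.rgw.log crush_rule rgw-fast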
On Wed, May 22, 2019 at 12:22 AM Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:
> Hi,
>
> On 5/21/19 9:46 PM, Robert LeBlanc wrote:
> > I'm at a new job working with Ceph again and am excited to be back in the
> > community!
> >
> > I can't find any documentation to support
On Wed, May 22, 2019 at 4:31 AM Kevin Flöh wrote:
> Hi,
>
> thank you, it worked. The PGs are not incomplete anymore. Still, we have
> another problem: there are 7 inconsistent PGs, and a ceph pg repair is
> not doing anything. I just get "instructing pg 1.5dd on osd.24 to
> repair" and nothing hap
On Wed, 22 May 2019 at 20:32, Torben Hørup wrote:
>
> Which states are all these connections in ?
>
> ss -tn
That set of arguments won't display anything but ESTABLISHED connections.
One typically needs `-atn` instead.
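In other words, something like this (it counts every TCP state, not just the
established ones; the header line is skipped):

ss -atn | awk 'NR > 1 {print $1}' | sort | uniq -c | sort -rn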
--
Thank you for your reply. We will run the script and let you know the results
once the number of TCP connections rises again. We just restarted the server
several days ago.
Sent from my iPhone
> On 23 May 2019, at 12:26 AM, Igor Podlesny wrote:
>
>> On Wed, 22 May 2019 at 20:32, Torben Hørup
Hello:
When I try to create a secure temporary session (STS), I try the following
actions:

import boto3

# tomAccessKey, tomSecretKey and host (the RGW endpoint) are defined elsewhere
session = boto3.Session()
s3 = session.client('sts',
                    aws_access_key_id=tomAccessKey,
                    aws_secret_access_key=tomSecretKey,
                    endpoint_url=host)  # returns a low-level STS client
Hello,
It looks like the version that you are trying this on doesn't support
AssumeRole or STS. What version of Ceph are you using?
Thanks,
Pritha
On Thu, May 23, 2019 at 9:10 AM Yuan Minghui wrote:
> Hello :
>
>When I try to create a secure temporary session (STS), I try the following
> act
Hello:
The version I am using is Ceph Luminous 12.2.4. Which versions of Ceph
support AssumeRole or STS?
Thanks a lot.
kyle
From: Pritha Srivastava
Date: Thursday, May 23, 2019, 11:49 AM
To: Yuan Minghui
Cc: "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] assume_role() : http_code 405 error
On Thu, May 23, 2019 at 9:24 AM Yuan Minghui wrote:
> Hello:
>
> The version I am using is Ceph Luminous 12.2.4. Which versions of Ceph
> can support AssumeRole or STS?
>
>
>
STS is available in Nautilus (v14.2.0), and versions after that.
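For reference, a rough sketch of the radosgw side once you are on Nautilus
(section name and key are placeholders; rgw_sts_key encrypts the session
tokens):

[client.rgw.<gateway-name>]
rgw s3 auth use sts = true
rgw sts key = <16-character-key>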
Thanks,
Pritha
> Thanks a lot.
>
> kyle
>
>
>
> *From*
Hello,
thanks for the hint. I opened a ticket with a feature request to include
the ec-profile information in the output of ceph osd pool ls detail.
http://tracker.ceph.com/issues/40009
Rainer
On 22.05.19 at 17:04, Jan Fajerski wrote:
> On Wed, May 22, 2019 at 03:38:27PM +0200, Rainer Krienke