On 08/23/2017 07:29 AM, Eric Renfro wrote:
> I sent a message almost 2 days ago, with pasted logs. Since then, it’s
> been in the moderator’s queue and still not approved (or even declined).
>
> Is anyone actually checking that? ;)
>
> Eric Renfro
As you guessed, no, not really.
I found th
Hello,
On Wed, 23 Aug 2017 09:11:18 -0300 Guilherme Steinmüller wrote:
> Hello!
>
> I recently installed INTEL SSD 400GB 750 SERIES PCIE 3.0 X4 in 3 of my OSD
> nodes.
>
Well, you know what's coming now, don't you?
That's a consumer device, with 70GB writes per day endurance.
unless you're es
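For context, the arithmetic behind that endurance figure can be sketched as follows (the 5-year warranty window is my assumption, not stated in the message; only the 70 GB/day rating comes from the thread):

```python
# Rough endurance arithmetic for a drive rated at 70 GB of writes per day.
# The 5-year window is an assumption here, not taken from the message.
gb_per_day = 70
warranty_years = 5

# Total terabytes written allowed by the rating over the window:
total_tb_written = gb_per_day * 365 * warranty_years / 1000
print(total_tb_written)
```

At roughly 128 TB of total writes, a busy OSD journal can chew through that in well under the warranty period.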
On 08/23/2017 07:17 PM, Mark Nelson wrote:
On 08/23/2017 06:18 PM, Xavier Trilla wrote:
Oh man, what do you know! I'm quite amazed. I've been reviewing
more documentation about min_replica_size, and it seems it doesn't
work as I thought (although I remember specifically reading about it
somewh
On 08/23/2017 06:18 PM, Xavier Trilla wrote:
Oh man, what do you know! I'm quite amazed. I've been reviewing more
documentation about min_replica_size, and it seems it doesn't work as I
thought (although I remember specifically reading about it somewhere some years
ago :/ ).
And, as all repli
Oh man, what do you know! I'm quite amazed. I've been reviewing more
documentation about min_replica_size, and it seems it doesn't work as I
thought (although I remember specifically reading about it somewhere some years
ago :/ ).
And, as all replicas need to be written before the primary OSD informs
On Thu, Aug 24, 2017 at 12:04:37AM +0800, Xuehan Xu wrote:
> Hi, Leonardo
>
> Will there be a link for September's CDM in
> http://tracker.ceph.com/projects/ceph/wiki/Planning?
Yes, I must post the reminder next week.
> And when will the video record of August's CDM be posted in youtube?
It's o
That was the problem, thanks again,
-Bryan
From: Bryan Banister
Sent: Wednesday, August 23, 2017 9:06 AM
To: Bryan Banister ; Abhishek Lekshmanan
; ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Anybody gotten boto3 and ceph RGW working?
Looks like I found the problem:
https://github.com/
Actually, this looks very much like my issue, so I'll add to that:
http://tracker.ceph.com/issues/21040
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Edward
R Huyer
Sent: Wednesday, August 23, 2017 11:10 AM
To: Brad Hubbard
Cc: ceph-users@l
So, I was running Ceph 10.2.9 servers, with 10.2.6 clients (I think; whatever
is in CentOS’s Jewel-SIG repo).
I had an issue where the MDS cluster stopped working and wasn’t responding to
cache pressure, and I restarted the MDSes and they failed to replay the
journal.
Long story short, I managed t
On Wed, 23 Aug 2017, David Turner said:
> This isn't a solution to fix them not starting at boot time, but a fix to
> not having to reboot the node again. `ceph-disk activate-all` should go
> through and start up the rest of your osds without another reboot.
Thanks, will try next time.
Sean
Forgot to send to the list with the first reply.
I'm honestly not exactly sure when it happened. I hadn't looked at ceph status
in several days prior to discovering the issue and submitting to the mailing
list. I've seen one or two inconsistent pg issues randomly crop up in the
month or so si
Hey Cephers,
Sorry for the short notice, but the Ceph Tech Talk for August (scheduled
for today) has been cancelled.
Kindest regards,
Leo
--
Leonardo Vaz
Ceph Community Manager
Open Source and Standards Team
___
ceph-users mailing list
ceph-users@lis
I sent a message almost 2 days ago, with pasted logs. Since then, it’s been
in the moderator’s queue and still not approved (or even declined).
Is anyone actually checking that? ;)
Eric Renfro
On Wed, Aug 23, 2017 at 3:13 PM, Marc Roos wrote:
>
>
> ceph fs authorize cephfs client.bla /bla rw
>
> Will generate a user with these permissions
>
> [client.bla]
> caps mds = "allow rw path=/bla"
> caps mon = "allow r"
> caps osd = "allow rw pool=fs_data"
>
> With those
ceph fs authorize cephfs client.bla /bla rw
Will generate a user with these permissions
[client.bla]
caps mds = "allow rw path=/bla"
caps mon = "allow r"
caps osd = "allow rw pool=fs_data"
With those permissions I cannot mount; I get permission denied, until
I chang
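One workaround worth trying (my assumption, not confirmed in this thread) is that the client may also need read access on the filesystem root so it can traverse down to /bla. The keyring would then look something like:

```
[client.bla]
    caps mds = "allow r, allow rw path=/bla"
    caps mon = "allow r"
    caps osd = "allow rw pool=fs_data"
```

Alternatively, mounting /bla directly as the filesystem root on the client side avoids the traversal entirely.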
Looks like I found the problem:
https://github.com/snowflakedb/snowflake-connector-python/issues/1
I’ll try the fixed version of botocore 1.4.87+,
-Bryan
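For anyone hitting the same thing, the version gate being described boils down to a simple comparison; the helper below is my own hypothetical sketch (the 1.4.87 minimum is from this message, and 1.4.85 is the version visible in the traceback path):

```python
# Hypothetical helper: check whether an installed botocore version is at
# least the 1.4.87 mentioned above as carrying the fix.
def version_tuple(version):
    return tuple(int(part) for part in version.split("."))

def botocore_is_fixed(installed, minimum="1.4.87"):
    return version_tuple(installed) >= version_tuple(minimum)

print(botocore_is_fixed("1.4.85"))  # the version from the traceback path
print(botocore_is_fixed("1.4.87"))
```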
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Bryan
Banister
Sent: Wednesday, August 23, 2017 9:01 AM
To: Abhishe
Here is the error I get:
# python3 boto3_test.py
Traceback (most recent call last):
File "boto3_test.py", line 15, in
for bucket in s3.list_buckets():
File
"/jump/software/rhel7/python36_botocore-1.4.85/lib/python3.6/site-packages/botocore/client.py",
line 251, in _api_call
r
This isn't a solution to fix them not starting at boot time, but a fix to
not having to reboot the node again. `ceph-disk activate-all` should go
through and start up the rest of your osds without another reboot.
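The suggestion above amounts to a single command on the affected node (run as root; `ceph-disk` is the provisioning tool shipped with these releases):

```
# Scan all prepared-but-inactive OSD partitions and start their daemons:
ceph-disk activate-all
```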
On Wed, Aug 23, 2017 at 9:36 AM Sean Purdy wrote:
> Hi,
>
> Luminous 12.1.1
>
> I'
On Tue, Aug 22, 2017 at 1:37 AM, Alessandro De Salvo
wrote:
> Hi,
>
> when trying to use df on a ceph-fuse mounted cephfs filesystem with ceph
> luminous >= 12.1.3 I'm having hangs with the following kind of messages in
> the logs:
>
>
> 2017-08-22 02:20:51.094704 7f80addb7700 0 client.174216 ms_
Hi,
Luminous 12.1.1
I've had a couple of servers where, at cold boot time, one or two of the OSDs
haven't mounted or been detected, or have been only partially detected. These
are luminous Bluestore OSDs. Often a warm boot fixes it, but I'd rather not
have to reboot the node again.
Sometimes /var/lib/ce
Hello!
I recently installed INTEL SSD 400GB 750 SERIES PCIE 3.0 X4 in 3 of my OSD
nodes.
First of all, here's a schema describing how my cluster is laid out:
[image: Inline image 1]
[image: Inline image 2]
I primarily use my ceph as a backend for OpenStack nova, glance, swift and
cinder. My crush
On Wed, 23 Aug 2017 16:48:12 +0530 M Ranga Swami Reddy wrote:
> On Mon, Aug 21, 2017 at 5:37 PM, Christian Balzer wrote:
> > On Mon, 21 Aug 2017 17:13:10 +0530 M Ranga Swami Reddy wrote:
> >
> >> Thank you.
> >> Here I have NVMes from Intel, but as the support of these NVMes not
> >> there from
On Mon, Aug 21, 2017 at 5:37 PM, Christian Balzer wrote:
> On Mon, 21 Aug 2017 17:13:10 +0530 M Ranga Swami Reddy wrote:
>
>> Thank you.
>> Here I have NVMes from Intel. but as the support of these NVMes not
>> there from Intel, we decided not to use these NVMes as a journal.
>
> You again fail to
Finally, problem solved.
First, I set the noscrub, nodeep-scrub, norebalance, nobackfill, norecover,
noup and nodown flags. Then I restarted the OSD that had the problem.
When the OSD daemon started, blocked requests increased (up to 100) and some
misplaced PGs appeared. Then I unset the flags in order to noup,
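For anyone following along, the sequence described above looks roughly like this (the OSD id is a placeholder, and the exact unset order is a guess since the message is truncated at that point):

```
# Quiesce the cluster before restarting the problematic OSD:
for flag in noscrub nodeep-scrub norebalance nobackfill norecover noup nodown; do
    ceph osd set "$flag"
done

systemctl restart ceph-osd@<id>    # <id> = the OSD that had the problem

# Afterwards, unset the same flags one by one, e.g.:
ceph osd unset noup
```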
On Tue, 15 Aug 2017, Sean Purdy said:
> Luminous 12.1.1 rc1
>
> Hi,
>
>
> I have a three node cluster with 6 OSD and 1 mon per node.
>
> I had to turn off one node for rack reasons. While the node was down, the
> cluster was still running and accepting files via radosgw. However, when I
> t
Hi,
Sometimes we have the same issue on our 10.2.9 cluster (24 nodes with 60
OSDs each).
I think there is some race condition or something like that
which results in this state. The blocked requests start exactly at
the time the PG begins to scrub.
You can try the following. The OSD will automatically
No, nothing like that.
The cluster is in the process of having more OSDs added and, while that was
ongoing, one was removed because the underlying disk was throwing up a bunch of
read errors.
Shortly after, the first three OSDs in this PG started crashing with error
messages about corrupted EC
Bryan Banister writes:
> Hello,
>
> I have the boto python API working with our ceph cluster, but haven't figured
> out a way to get boto3 to communicate with our RGWs yet. Anybody have a simple
> example?
I just use the client interface as described in
http://boto3.readthedocs.io/en/latest/re
On Wed, Aug 23, 2017 at 12:47 AM, Edward R Huyer wrote:
> Neat, hadn't seen that command before. Here's the fsck log from the primary
> OSD: https://pastebin.com/nZ0H5ag3
>
> Looks like the OSD's bluestore "filesystem" itself has some underlying
> errors, though I'm not sure what to do about t
Hello everyone,
I'm trying to get a handle on the current state of the async messenger's
RDMA transport in Luminous, and I've noticed that the information
available is a little bit sparse (I've found
https://community.mellanox.com/docs/DOC-2693 and
https://community.mellanox.com/docs/DOC-2721, whi
Hi, thanks for your quick response!
Do I take it from this that your cache tier is only on one node?
If so upgrade the "Risky" up there to "Channeling Murphy".
The two SSDs are on two different nodes, but since we just started
using the cache tier, we decided to use a pool size of 2; we know it'