Hi,
While creating a Ceph user with a pre-generated key stored in a keyring
file, "ceph auth get-or-create" doesn't seem to take the keyring file into
account:
# cat /tmp/user1.keyring
[client.user1]
key = AQAuJEpVgLQmJxAAQmFS9a3R7w6EHAOAIU2uVw==
# ceph auth get-or-create -i /tmp/user1.keyring c
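As far as I can tell, "ceph auth get-or-create" always generates a fresh key
and ignores the one in the file. A sketch of a workaround, assuming the
keyring above also lists the caps the user should have, is to import the
keyring and then read the entity back:
# ceph auth import -i /tmp/user1.keyring
# ceph auth get client.user1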
I was under the impression that "ceph-disk activate" would take care of
setting OSD weights. In fact, the documentation for adding OSDs, the "short
form", only talks about running ceph-disk prepare and activate:
http://ceph.com/docs/master/install/manual-deployment/#adding-osds
This is also how t
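For what it's worth, the "long form" on the same page sets the CRUSH weight
explicitly when the OSD is added, so a sketch of doing it by hand (the osd
id, weight and host name below are just examples) would be:
# ceph osd crush add osd.12 3.64 host=node1
# ceph osd tree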
Hi,
I have a cluster of 5 hosts running Ceph 0.94.6 on CentOS 6.5. On each
host, there is 1 monitor and 13 OSDs. We had an issue with the network and
for some reason (I still don't know why), the servers were restarted.
One host is still down, but the monitors on the 4 remaining servers are
] :
mon.60zxl02@1 won leader election with quorum 1,2,4
2016-07-25 14:32:33.440103 7fefdf4ee700 1 mon.60zxl02@1(leader).paxos(paxos
recovering c 1318755..1319319) collect timeout, calling fresh election
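When the election keeps being re-called like this, one thing worth capturing
on each node is the monitor's own view of the situation via its admin socket
(this only talks to the local daemon, so it works even without quorum; the
socket path below is the default):
# ceph daemon mon.60zxl02 mon_status
(equivalently: ceph --admin-daemon /var/run/ceph/ceph-mon.60zxl02.asok mon_status)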
On Mon, Jul 25, 2016 at 3:27 PM, Sergio A. de Carvalho Jr. <
scarvalh...@gmail.com> wrote:
ed starting both the 4th and
> 5th simultaneously and letting them both vote?
>
> --
> Joshua M. Boniface
> Linux System Ærchitect
> Sigmentation fault. Core dumped.
>
> On 25/07/16 10:41 AM, Sergio A. de Carvalho Jr. wrote:
> > In the logs, these 2 monitors are const
log_channel(cluster) log [INF] :
mon.610wl02 calling new monitor election
I'm curious about the "handle_timecheck drop unexpected msg" message.
On Mon, Jul 25, 2016 at 4:10 PM, Joao Eduardo Luis wrote:
> On 07/25/2016 03:41 PM, Sergio A. de Carvalho Jr. wrote:
>
>> I
l the time so I can't see why monitors
would be getting stuck.
On Mon, Jul 25, 2016 at 5:18 PM, Joao Eduardo Luis wrote:
> On 07/25/2016 04:34 PM, Sergio A. de Carvalho Jr. wrote:
>
>> Thanks, Joao.
>>
>> All monitors have the exact same mon map.
>>
>> I
wrote:
> On 07/25/2016 05:55 PM, Sergio A. de Carvalho Jr. wrote:
>
>> I just forced an NTP update on all hosts to be sure it isn't down to clock
>> skew. I also checked that hosts can reach all other hosts on port 6789.
>>
>> I then stopped monitor 0 (60z0m02) and
system clock. As time passes, the gap widens and
quickly the logs are over 10 minutes behind the actual time, which explains
why the logs above don't seem to overlap.
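A quick way to see the lag on a given box is to compare the wall clock with
the newest entry in the local log, e.g.:
# date; tail -1 /var/log/messages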
On Mon, Jul 25, 2016 at 9:37 PM, Sergio A. de Carvalho Jr. <
scarvalh...@gmail.com> wrote:
> Awesome, thanks so much
As per my previous messages on the list, I was having a strange problem in
my test cluster (Hammer 0.94.6, CentOS 6.5) where my monitors were
literally crawling to a halt, preventing them from ever reaching quorum and
causing all sorts of problems. As it turned out, to my surprise everything
went back to
before when my central log server is not
> keeping up with messages.
>
> Cheers,
> Sean
>
> On 26 July 2016 at 21:13, Sergio A. de Carvalho Jr. wrote:
>
>> I left the 4 nodes running overnight and they just crawled to their
>> knees... to the point that nothing
operate normally.
It's just scary to think that your logging daemon can cause so much damage!
On Tue, Jul 26, 2016 at 6:48 PM, Joao Eduardo Luis wrote:
> On 07/26/2016 06:27 PM, Sergio A. de Carvalho Jr. wrote:
>
>> (Just realised I originally replied to Sean directly, so repo
t
> things to check when services are running weirdly.
>
> My failsafe check is to do
>
> # logger "sean test"
>
> and see if it appears in syslog. If it doesn't do it immediately, I have a
> problem
>
> Cheers,
> Sean
>
> On 27 July 2016 at
normally, even though the logs might not be getting
pushed out to the central syslog servers.
On Wed, Jul 27, 2016 at 4:49 AM, Brad Hubbard wrote:
> On Tue, Jul 26, 2016 at 03:48:33PM +0100, Sergio A. de Carvalho Jr. wrote:
> > As per my previous messages on the list, I was having a strang
> unexplained weirdness with services (in your
> case, Ceph), and syslog lagging 10mins behind just reminded me of symptoms
> I've seen before where the sending of syslog messages to a central syslog
> server got stuck, and caused unusual problems on the host.
>
> Cheers,
> Sean
>
>
We tracked the problem down to the following rsyslog configuration in our
test cluster:
*.* @@:
$ActionExecOnlyWhenPreviousIsSuspended on
& /var/log/failover.log
$ActionExecOnlyWhenPreviousIsSuspended off
It seems that the $ActionExecOnlyWhenPreviousIsSuspended directive doesn't
work well with th
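In case it helps anyone else hitting this, the usual recipe for keeping a
slow or stuck TCP destination from holding up the rest of rsyslog is to give
the forwarding action its own disk-assisted queue (the host and port below
are placeholders, not our real config):
$ActionQueueType LinkedList
$ActionQueueFileName fwd_central
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
$ActionResumeRetryCount -1
*.* @@central-syslog.example.com:514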
Hi all,
Is it possible to create a pool where the minimum number of replicas for
the write operation to be confirmed is 2 but the minimum number of replicas
to allow the object to be read is 1?
This would be useful when a pool consists of immutable objects, so we'd
have:
* size 3 (we always keep
Ok, thanks for confirming.
On Thu, Mar 23, 2017 at 7:32 PM, Gregory Farnum wrote:
> Nope. This is a theoretical possibility but would take a lot of code
> change that nobody has embarked upon yet.
> -Greg
> On Wed, Mar 22, 2017 at 2:16 PM Sergio A. de Carvalho Jr. <
> sca
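For anyone finding this thread later: the knobs that do exist today are
"size" (the replica count) and "min_size" (the floor below which a PG stops
serving I/O entirely, reads and writes alike); there is no separate
read-side minimum. A minimal sketch with a hypothetical pool name:
$ ceph osd pool create immutable-objects 128 128
$ ceph osd pool set immutable-objects size 3
$ ceph osd pool set immutable-objects min_size 2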
Hi all,
I've setup a testing/development Ceph cluster consisting of 5 Dell
PowerEdge R720xd servers (256GB RAM, 2x 8-core Xeon E5-2650 @ 2.60 GHz,
dual-port 10Gb Ethernet, 2x 900GB + 12x 4TB disks) running CentOS 6.5 and
Ceph Hammer 0.94.6. All servers use one 900GB disk for the root partition
and
ng journals does impact
> performance and usually separating them on flash is a good idea. Also not
> sure of your networking setup which can also have significant impact.
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *Sergio A. de Carv
so given
what we have, what would be the ideal setup? Would it make sense to put
the journals of all 12 OSDs on the same 900GB disk?
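For reference, pointing an OSD's journal at a separate device with ceph-disk
is just a second positional argument (the device names below are only
examples; the first holds the data, the second the journal):
# ceph-disk prepare /dev/sdc /dev/sdb
# ceph-disk activate /dev/sdc1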
Sergio
On Thu, Apr 7, 2016 at 6:03 PM, Mark Nelson wrote:
> Hi Sergio
>
>
> On 04/07/2016 07:00 AM, Sergio A. de Carvalho Jr. wrote:
>
>>
Hi,
Does anybody know what auth capabilities are required to run commands such
as:
ceph daemon osd.0 perf dump
Even with the client.admin user, I can't get it to work:
$ ceph daemon osd.0 perf dump --name client.admin
--keyring=/etc/ceph/ceph.client.admin.keyring
{}
$ ceph auth get client.admi
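In case it's relevant: as far as I know "ceph daemon" doesn't go through
cephx at all. It talks to the local admin socket, so it has to be run on the
host where osd.0 actually lives, by a user that can open the socket
(typically root). The equivalent long form, assuming the default socket
path, is:
# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump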
Hi everyone, I have some questions about encryption in Ceph.
1) Are RBD connections encrypted or is there an option to use encryption
between clients and Ceph? From reading the documentation, I have the
impression that the only option to guarantee encryption in transit is to
force clients to encry
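For anyone searching the archives later: before msgr2, one common workaround
is to do the encryption on the client side, e.g. LUKS on top of the mapped
RBD device, so that what crosses the wire is already ciphertext. The pool
and image names below are just placeholders:
# rbd map rbd/myimage
# cryptsetup luksFormat /dev/rbd0
# cryptsetup luksOpen /dev/rbd0 myimage-crypt
# mkfs.xfs /dev/mapper/myimage-crypt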
Thanks for the answers, guys!
Am I right to assume msgr2 (http://docs.ceph.com/docs/mimic/dev/msgr2/)
will provide encryption between Ceph daemons as well as between clients and
daemons?
Does anybody know if it will be available in Nautilus?
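For anyone reading this later: msgr2 did ship with Nautilus, and my
understanding is that on-wire encryption is selected per connection class
via the "secure" mode, along these lines (a sketch of ceph.conf settings,
not a tested configuration):
[global]
        ms_cluster_mode = secure
        ms_service_mode = secure
        ms_client_mode = secure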
On Fri, Jan 11, 2019 at 8:10 AM Tobias Florek wrote: