> -Original Message-
> From: Alex Gorbachev [mailto:a...@iss-integration.com]
> Sent: 21 August 2016 04:15
> To: Nick Fisk
> Cc: w...@globe.de; Horace Ng ; ceph-users
>
> Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance
>
> Hi Nick,
>
> On Thu, Jul 21, 2016 at 8:33 A
Hi Nick
Interested in this comment - "-Dual sockets are probably bad and will
impact performance."
Have you got real world experience of this being the case?
Thanks - B
On Sun, Aug 21, 2016 at 8:31 AM, Nick Fisk wrote:
>> -Original Message-
>> From: Alex Gorbachev [mailto:a...@iss-inte
If you point at the eu.ceph.com
ceph.apt-get.eu has address 185.27.175.43
ceph.apt-get.eu has IPv6 address 2a00:f10:121:400:48c:baff:fe00:477
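For anyone in the same situation, a minimal sketch of pointing APT at that mirror, assuming an Ubuntu/Debian client and the jewel (10.2.x) release mentioned elsewhere in the thread (adjust the codename and release for your environment):

# Release key; the /keys/ path on the mirror is assumed to track download.ceph.com
wget -qO- 'https://eu.ceph.com/keys/release.asc' | sudo apt-key add -
# Repository entry resolved by hostname rather than by IP
echo "deb http://eu.ceph.com/debian-jewel/ $(lsb_release -sc) main" \
  | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install -y ceph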
On Sat, Aug 20, 2016 at 11:59 AM, Vlad Blando wrote:
> Hi Guys,
>
> I will be installing Ceph behind a very restrictive firewall and one of
> the requir
Hello,
On Sun, 21 Aug 2016 09:16:33 +0100 Brian :: wrote:
> Hi Nick
>
> Interested in this comment - "-Dual sockets are probably bad and will
> impact performance."
>
> Have you got real world experience of this being the case?
>
Well, Nick wrote "probably".
Dual sockets and thus NUMA, the n
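On the NUMA point, a sketch of one mitigation that gets discussed: pin each OSD and its memory to the socket that owns its NIC/HBA. The OSD id and interface name below are illustrative, and in practice the binding would be wired into the init system rather than run by hand:

numactl --hardware                          # show CPU sockets and memory nodes
cat /sys/class/net/eth0/device/numa_node    # which node the (assumed) eth0 NIC sits on
sudo numactl --cpunodebind=0 --membind=0 /usr/bin/ceph-osd -i 0 --cluster ceph -f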
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Christian Balzer
> Sent: 21 August 2016 09:32
> To: ceph-users
> Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance
>
>
> Hello,
>
> On Sun, 21 Aug 2016 09:16:33 +0100 B
> -Original Message-
> From: Wilhelm Redbrake [mailto:w...@globe.de]
> Sent: 21 August 2016 09:34
> To: n...@fisk.me.uk
> Cc: Alex Gorbachev ; Horace Ng ;
> ceph-users
> Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance
>
> Hi Nick,
> I understand all of your technica
> On 21 August 2016 at 10:26, "Brian ::" wrote:
>
>
> If you point at the eu.ceph.com
>
> ceph.apt-get.eu has address 185.27.175.43
>
> ceph.apt-get.eu has IPv6 address 2a00:f10:121:400:48c:baff:fe00:477
>
Yes, however, keep in mind that IPs might change without notice.
The best way to
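One hostname-based approach (an assumption here, not necessarily what was being suggested) is to let an internal Squid proxy allow the mirror by name, so IP changes do not break installs:

# Config path assumes the Debian/Ubuntu squid package; the allow rule must come before squid's default deny
cat <<'EOF' | sudo tee /etc/squid/conf.d/ceph-mirror.conf
acl ceph_mirror dstdomain eu.ceph.com ceph.apt-get.eu
http_access allow ceph_mirror
EOF
sudo systemctl reload squid    # or squid3, depending on the package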
Hi,
[ceph@ceph1 my-cluster]$ ceph -v
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
[ceph@ceph1 my-cluster]$ rados -p mypool ls
hello.txt
[ceph@ceph1 my-cluster]$ rados -p mypool mksnap snap01
created pool mypool snap snap01
[ceph@ceph1 my-cluster]$ rados -p mypool lssnap
5 snap01
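Continuing that sequence, reading the object back from the pool snapshot, rolling it back, and removing the snapshot would look like this (the output file name is illustrative):

[ceph@ceph1 my-cluster]$ rados -p mypool -s snap01 get hello.txt /tmp/hello.from-snap
[ceph@ceph1 my-cluster]$ rados -p mypool rollback hello.txt snap01
[ceph@ceph1 my-cluster]$ rados -p mypool rmsnap snap01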
In closing, I would like to thank everyone who contributed their knowledge
to my problem, although the final decision was not to attempt any sort of
recovery, since the effort required would have been tremendous, with
uncertain results (to say the least).
Jason, Ilya, Brad, David, George,
Yeah, switched to 4.7 recently and no issues so far.
2016-08-21 6:09 GMT+03:00 Alex Gorbachev :
> On Tue, Jul 19, 2016 at 12:04 PM, Alex Gorbachev
> wrote:
>> On Mon, Jul 18, 2016 at 4:41 AM, Василий Ангапов wrote:
>>> Guys,
>>>
>>> This bug is hitting me constantly, may be once per several day
On Sunday, August 21, 2016, Wilhelm Redbrake wrote:
> Hi Nick,
> I understand all of your technical improvements.
> But why do you not use, for example, a simple Areca RAID controller with 8
> GB cache and BBU on top in every Ceph node?
> Configure n times RAID 0 on the controller and enable write
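For context, that amounts to exposing each physical disk as its own single-drive RAID 0 volume so the controller's BBU-backed writeback cache fronts every OSD. A jewel-era sketch of creating OSDs on such volumes (hostname and device names are hypothetical):

# node1:sdb and node1:sdc are assumed to be single-disk RAID 0 volumes exported by the controller
ceph-deploy disk zap node1:sdb node1:sdc
ceph-deploy osd create node1:sdb node1:sdc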
From: Alex Gorbachev [mailto:a...@iss-integration.com]
Sent: 21 August 2016 15:27
To: Wilhelm Redbrake
Cc: n...@fisk.me.uk; Horace Ng ; ceph-users
Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance
On Sunday, August 21, 2016, Wilhelm Redbrake <w...@globe.de> wrote:
This is going to be a challenge
/Vlad
On Sun, Aug 21, 2016 at 5:34 PM, Wido den Hollander wrote:
>
> > On 21 August 2016 at 10:26, "Brian ::" wrote:
> >
> >
> > If you point at the eu.ceph.com
> >
> > ceph.apt-get.eu has address 185.27.175.43
> >
> > ceph.apt-get.eu has IPv6 address 2a00:f
Hello,
On Sun, 21 Aug 2016 09:57:40 +0100 Nick Fisk wrote:
>
>
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Christian Balzer
> > Sent: 21 August 2016 09:32
> > To: ceph-users
> > Subject: Re: [ceph-users] Ceph + VMware + Sing
Hello,
Looks fine on the first cluster:
cluster1# radosgw-admin period get
{
    "id": "6ea09956-60a7-48df-980c-2b5bbf71b565",
    "epoch": 2,
    "predecessor_uuid": "80026abd-49f4-436e-844f-f8743685dac5",
    "sync_status": [
        "",
        "",
        "",
        "",
        "",
        "