Most mail to this ML scores low or negatively with SpamAssassin, but
once in a while (this is a recent one) we get relatively high scores.
Note that the forged bits are false positives, but SpamAssassin is up to
date and Google will have similar checks:
---
X-Spam-Status: No, score=3.9 required=10.0
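If you want to watch for this in your own mail, a score like the one above is easy to pull out of the header with standard tools (a minimal sketch; the header string is the one quoted above):

```shell
# Extract the numeric SpamAssassin score from an X-Spam-Status header.
header='X-Spam-Status: No, score=3.9 required=10.0'
score=$(printf '%s\n' "$header" | sed -n 's/.*score=\([0-9.-]*\).*/\1/p')
echo "$score"   # 3.9
```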
You're not the only one; it happens to me too. I found some old ML thread
from a couple of years back where someone mentioned the same thing.
I do notice spam coming through from time to time (not much, though, and
it seems to come in waves), although I'm not sure how much gmail is
bouncing but nobody else
Thank you for your answers, gentlemen! We will use the default cluster
name, although that implies some trouble for us.
I kindly advise you to update the documentation to make it evident to
everyone that the custom cluster naming support was removed. It will save
many research, trial and error hou
On Mon, 16 Oct 2017 14:15:22 +1100 Blair Bethwaite wrote:
> Thanks Christian,
>
> You're no doubt on the right track, but I'd really like to figure out
> what it is at my end - I'm unlikely to be the only person subscribed
> to ceph-users via a gmail account.
>
> Re. attachments, I'm surprised m
Thanks Christian,
You're no doubt on the right track, but I'd really like to figure out
what it is at my end - I'm unlikely to be the only person subscribed
to ceph-users via a gmail account.
Re. attachments, I'm surprised mailman would be allowing them in the
first place, and even so gmail's att
Hello,
You're on gmail.
Aside from various potential false positives with regard to spam, my bet
is that gmail's known dislike of attachments is the cause of these
bounces, and that setting is beyond your control.
Because Google knows best[tm].
Christian
On Mon, 16 Oct 2017 13:50:43 +1100 Bla
Hi all,
This is a mailing-list admin issue - I keep being unsubscribed from
ceph-users with the message:
"Your membership in the mailing list ceph-users has been disabled due
to excessive bounces..."
This seems to be happening on roughly a monthly basis.
Thing is I have no idea what the bounce is
On Sat, Oct 14, 2017 at 12:25 PM, Oscar Segarra wrote:
> Hi,
>
> In my VDI environment I have configured the suggested ceph
> design/architecture:
>
> http://docs.ceph.com/docs/giant/rbd/rbd-snapshot/
>
> Where I have a Base Image + Protected Snapshot + 100 clones (one for each
> persistent VDI).
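The base-image + protected-snapshot + clone layout described above maps onto the rbd CLI roughly like this (a sketch, not from the thread; the pool and image names are hypothetical and a running cluster is required):

```shell
# Hypothetical names: pool "vdi", base image "base-image", snapshot "gold".
rbd create --size 20480 vdi/base-image        # 20 GiB base image
rbd snap create vdi/base-image@gold           # snapshot the prepared image
rbd snap protect vdi/base-image@gold          # protect it so clones may reference it
for i in $(seq 1 100); do
    rbd clone vdi/base-image@gold "vdi/vdi-$i"   # one COW clone per persistent VDI
done
```

Note that cloning requires format-2 images (the default in recent releases; on giant-era clusters you may need `--image-format 2` at create time).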
See also this ML thread regarding removing the cluster name option:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-June/018520.html
On Mon, Oct 16, 2017 at 11:42 AM, Erik McCormick
wrote:
> Do not, under any circumstances, make a custom named cluster. There be pain
> and suffering (and
Do not, under any circumstances, make a custom named cluster. There be pain
and suffering (and dragons) there, and official support for it has been
deprecated.
On Oct 15, 2017 6:29 PM, "Bogdan SOLGA" wrote:
> Hello, everyone!
>
> We are trying to create a custom cluster name using the latest cep
Hello, everyone!
We are trying to create a custom cluster name using the latest ceph-deploy
version (1.5.39), but we keep getting the error:
*'ceph-deploy new: error: subnet must have at least 4 numbers separated by
dots like x.x.x.x/xx, but got: cluster_name'*
We tried to run the new command us
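For reference, with custom names gone the default-name invocation looks roughly like this (a sketch; the monitor hostname is hypothetical):

```shell
# "mon1" is a hypothetical initial monitor hostname. This writes
# ceph.conf and ceph.mon.keyring into the current directory using
# the default cluster name "ceph"; no cluster-name argument is passed.
ceph-deploy new mon1
```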
Hi,
you are right... the correct url is the following:
http://docs.ceph.com/docs/luminous/rbd/rbd-snapshot/
But the content is essentially the same.
Thanks a lot.
2017-10-15 17:11 GMT+02:00 Mohamad Gebai:
> Hi,
>
> I'm not answering your questions, but I just want to point out that you
> might b
Hi,
I'm not answering your questions, but I just want to point out that you
might be using the documentation for an older version of Ceph:
On 10/14/2017 12:25 PM, Oscar Segarra wrote:
>
> http://docs.ceph.com/docs/giant/rbd/rbd-snapshot/
>
If you're not using the 'giant' version of Ceph (which h
Correction, I limit it to 128K:
echo 128 > /sys/block/sdX/queue/read_ahead_kb
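One caveat (my addition, not from the thread): the sysfs write above does not survive a reboot. A udev rule can reapply it automatically; the rule file name and the `sd[a-z]` device match below are assumptions to adapt to your hosts:

```shell
# Persist read_ahead_kb=128 across reboots for all sdX devices.
cat > /etc/udev/rules.d/80-readahead.rules <<'EOF'
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/read_ahead_kb}="128"
EOF
```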
On 2017-10-15 13:14, Maged Mokhtar wrote:
> On 2017-10-14 05:02, J David wrote:
>
>> Thanks all for input on this.
>>
>> It's taken a couple of weeks, but based on the feedback from the list,
>> we've got our vers
On 2017-10-14 05:02, J David wrote:
> Thanks all for input on this.
>
> It's taken a couple of weeks, but based on the feedback from the list,
> we've got our version of a scrub-one-at-a-time cron script running and
> confirmed that it's working properly.
>
> Unfortunately, this hasn't really so
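This is not the poster's script, but the scrub-one-at-a-time idea can be sketched as a cron job that deep-scrubs only the PG with the oldest deep-scrub timestamp (the JSON field names below match Luminous-era output and vary across releases; `jq` and a running cluster are assumed):

```shell
# Deep-scrub exactly one PG per run: the one scrubbed longest ago.
# Run from cron, e.g. every 15 minutes.
pgid=$(ceph pg dump --format json 2>/dev/null |
       jq -r '.pg_stats | sort_by(.last_deep_scrub_stamp) | .[0].pgid')
[ -n "$pgid" ] && ceph pg deep-scrub "$pgid"
```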