Hi List,
TL;DR:
For those of you who are running a Ceph cluster with Intel SSD D3-S4510
and/or Intel SSD D3-S4610 with firmware version XCV10100, please upgrade
to firmware XCV10110 ASAP, and in any case before ~1700 power-on hours.
More information here:
https://support.microsoft.com/en-us/help/44996
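If you want to check what a drive is running, the Intel SSD Data Center
Tool (isdct) can show and update firmware. A minimal sketch, assuming the
drive you care about is index 0:

# isdct show -intelssd
# isdct load -intelssd 0

The first command lists the drives with their current firmware version;
the second loads the newest firmware bundled with the tool onto drive 0.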
Wow!!!
On Fri, 19 Apr 2019 at 10:16, Stefan Kooman wrote:
> Hi List,
>
> TL;DR:
>
> For those of you who are running a Ceph cluster with Intel SSD D3-S4510
> and/or Intel SSD D3-S4610 with firmware version XCV10100, please upgrade
> to firmware XCV10110 ASAP, and in any case before ~1700 power-on hours.
>
> M
I've always used the standalone Mac and Linux package versions. I wasn't
aware of the 'bundled software' in the installers. Ugh. Thanks for pointing
it out.
On Thursday, April 18, 2019, Janne Johansson wrote:
> https://www.reddit.com/r/netsec/comments/8t4xrl/filezilla_malware/
>
> not saying it defi
I am a bit curious about how production Ceph clusters are being used. I am
reading here that the block storage is used a lot with OpenStack and
Proxmox, and via iSCSI with VMware.
But since nobody here is interested in a better RGW client for end
users, I am wondering if RGW is even being used like this, and what
most production environments look like.
OK. So this works for me with master commit
bdaac2d619d603f53a16c07f9d7bd47751137c4c on CentOS 7.5.1804.
I cloned the repo and ran './install-deps.sh' and './do_cmake.sh
-DWITH_FIO=ON', then 'make all'.
# find ./lib -iname '*.so*' | xargs nm -AD 2>&1 | grep
_ZTIN13PriorityCache8PriCacheE
./lib/li
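Once it builds, the engine can be exercised with a job file along these
lines; this is a sketch modeled on the examples shipped in src/test/fio/,
so the exact option values here are assumptions:

[global]
; the engine library must be on LD_LIBRARY_PATH
ioengine=libfio_ceph_objectstore.so
; ceph.conf-style file describing the objectstore under test
conf=ceph-bluestore.conf
rw=randwrite
iodepth=16

[bluestore]
size=256m
bs=64k

Run it with something like 'LD_LIBRARY_PATH=./lib fio job.fio'.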
[Adding ceph-users for better visibility]
On Fri, 19 Apr 2019, Radoslaw Zarzynski wrote:
> Hello,
>
> RadosGW can use OpenStack Keystone as one of its authentication
> backends. Keystone in turn has offered many token variants
> over time, with PKI/PKIz being one of them. Unfortunately,
>
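For anyone wiring this up, the Keystone backend is enabled on the RGW side
via the rgw keystone options in ceph.conf. A minimal sketch for Keystone
v3; the section name, URL and credentials are placeholders:

[client.rgw.gateway1]
rgw keystone url = https://keystone.example.com:5000
rgw keystone api version = 3
rgw keystone admin user = rgw
rgw keystone admin password = secret
rgw keystone admin domain = Default
rgw keystone admin project = service
rgw keystone accepted roles = Member, admin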
Hello all,
Thanks! According to Intel, the affected drives are the D3-S4510 and
D3-S4610 Series in 1.92TB and 3.84TB capacities.
For those who have these SSDs connected to an LSI/Avago/Broadcom MegaRAID
controller, do not forget to run this before updating:
isdct set -system EnableLSIAdapter=true
Regards,
Vytautas J.
-Ori
On Fri, Apr 19, 2019 at 10:44 AM Varun Singh wrote:
>
> On Thu, Apr 18, 2019 at 9:53 PM Siegfried Höllrigl
> wrote:
> >
> > Hi !
> >
> > I am not 100% sure, but I think --net=host does not propagate /dev/
> > inside the container.
> >
> > From the error message:
> >
> > 2019-04-18 07:30:06 /o
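Right: --net=host only shares the network namespace; block devices still
have to be handed to the container explicitly. A sketch of the usual
invocation for an OSD container (image, mounts and device name are
examples, not taken from this thread):

docker run -d --net=host --privileged=true --pid=host \
  -v /dev:/dev \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  -e OSD_DEVICE=/dev/sdb \
  ceph/daemon osd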
On Wed, Apr 17, 2019 at 10:48 AM Wesley Dillingham
wrote:
>
> The man page for gwcli indicates:
>
> "Disks exported through the gateways use ALUA attributes to provide
> ActiveOptimised and ActiveNonOptimised access to the rbd images. Each disk
> is assigned a primary owner at creation/import
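As an illustration of that creation/import step, disks are created under
/disks in gwcli, and the primary owner is assigned at that point (pool,
image and size are made-up values):

/> cd /disks
/disks> create pool=rbd image=disk_1 size=90G

The gateway chooses the ActiveOptimised owner when the disk is created, so
images can end up spread across the gateways.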
On Thu, Apr 18, 2019 at 3:47 PM Wesley Dillingham
wrote:
>
> I am trying to determine some sizing limitations for a potential iSCSI
> deployment and wondering what's still the current lay of the land:
>
> Are the following still accurate as of the ceph-iscsi-3.0 implementation
> assuming CentOS 7
On Fri, 19 Apr 2019 at 12:10, Marc Roos wrote:
>
> [...] since nobody here is interested in a better RGW client for end
> users, I am wondering if RGW is even being used like this, and what
> most production environments look like.
>
>
"Like this" ?
People use tons of scriptable and built-in
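For example, any stock S3 tool can be pointed straight at an RGW endpoint
(the endpoint URL and bucket here are placeholders):

aws --endpoint-url https://rgw.example.com s3 ls s3://mybucket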
> On Apr 19, 2019, at 10:59 AM, Janne Johansson wrote:
>
> May the most significant bit of your life be positive.
Marc, my favorite thing about open source software is that it has a 100%
money-back satisfaction guarantee: if you are not completely satisfied, you
can have an instant refund, just for
I've been away from OpenStack for a couple of years now, so this may have
changed. But back around the Icehouse release, at least, upgrading between
OpenStack releases was a major undertaking, so backing an older OpenStack with
newer Ceph seems like it might be more common than one might think.
I've run production Ceph/OpenStack since 2015. The reality is that running
OpenStack Newton (the last one with PKI) with a post-Nautilus release just
isn't going to work. You are going to have bigger problems than trying to make
object storage work with Keystone-issued tokens. Worst case is you will
On Fri, Apr 19, 2019 at 12:10:02PM +0200, Marc Roos wrote:
> I am a bit curious about how production Ceph clusters are being used. I am
> reading here that the block storage is used a lot with OpenStack and
> Proxmox, and via iSCSI with VMware.
Have you looked at the Ceph User Surveys/Census?
https:
Hi Casey,
I set up a completely fresh cluster on a new VM host... everything is fresh
fresh fresh. I feel like it installed cleanly, and because there is practically
zero latency and unlimited bandwidth between peer VMs, this is a better place
to experiment. The behavior is the same as the other clust