> On 14 October 2016 at 19:13, i...@witeq.com wrote:
>
>
> Hi all,
>
> After encountering a warning about one of my OSDs running out of space, I
> tried to get a better understanding of how data distribution works.
>
100% perfect data distribution is not possible with straw. It is even very hard
to accomplish
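
As a rough illustration of why: a straw bucket gives each OSD a hash-derived,
weight-scaled "straw" per placement group and picks the longest one, so placement
is statistical rather than perfectly even. Below is a minimal Python sketch of
that idea (a simplified model, not the actual CRUSH code; the OSD names and PG
count are made up):

  import hashlib
  from collections import Counter

  def straw_pick(pg_id, osds):
      # each OSD draws a hash-derived straw, scaled by its weight;
      # the longest straw wins -- statistically even, never perfectly even
      best, best_straw = None, -1.0
      for osd, weight in osds.items():
          h = hashlib.sha1("{}:{}".format(pg_id, osd).encode()).hexdigest()
          straw = (int(h, 16) % 10**6) / 1e6 * weight
          if straw > best_straw:
              best, best_straw = osd, straw
      return best

  osds = {"osd.{}".format(i): 1.0 for i in range(10)}   # 10 equally weighted OSDs (made up)
  counts = Counter(straw_pick(pg, osds) for pg in range(4096))
  print(counts)   # per-OSD counts typically spread by a few percent around 410

Even with equal weights the per-OSD counts vary by a few percent, which is why
commands like ceph osd reweight-by-utilization exist to compensate.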
> On 17 October 2016 at 6:37, xxhdx1985126 wrote:
>
>
> Hi, everyone.
>
>
> If one OSD's state changes from up to down, by "kill -i" for example, will
> an "AdvMap" event be triggered on the other related OSDs?
IIRC it will. A down OSD will t
Hi, everyone.
If one OSD's state changes from up to down, by "kill -i" for example, will
an "AdvMap" event be triggered on the other related OSDs?
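For what it's worth, you can watch from the outside what the peer OSDs react to:
when an OSD is marked down the monitors publish a new osdmap epoch, and as far
as I understand it is that new map being fed into each OSD's PG state machine
that shows up internally as the "AdvMap" event. A small python-rados sketch,
assuming the usual /etc/ceph/ceph.conf and admin keyring:

  import json
  import rados

  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')   # assumed conf location
  cluster.connect()

  def osdmap_epoch():
      # ask the monitors for the current osdmap and read its epoch
      ret, out, err = cluster.mon_command(
          json.dumps({"prefix": "osd dump", "format": "json"}), b'')
      return json.loads(out)['epoch']

  print("current osdmap epoch: {}".format(osdmap_epoch()))
  # stop an OSD, wait a few seconds, and run this again: the epoch will have
  # advanced, and that new map is what the peer OSDs consume
  cluster.shutdown()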
Hello,
On Sun, 16 Oct 2016 19:07:17 +0800 William Josefsson wrote:
> OK, thanks for sharing. Yes, my journals are Intel S3610 200GB drives, which I
> split into 4 partitions of ~45GB each. When I run ceph-deploy I declare
> these as the journals of the OSDs.
>
The size (45GB) of these journals is only going
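
For reference, the usual filestore guidance from the Ceph docs is
journal size = 2 * (expected throughput * filestore max sync interval).
A quick back-of-the-envelope calculation (the throughput figure below is an
assumption; plug in the slower of your journal SSD and data disk):

  # rule of thumb from the Ceph docs:
  #   osd journal size = 2 * (expected throughput * filestore max sync interval)
  expected_throughput_mb_s = 200        # assumption: slower of journal SSD and data disk
  filestore_max_sync_interval_s = 5     # filestore default
  journal_size_mb = 2 * expected_throughput_mb_s * filestore_max_sync_interval_s
  print("suggested journal size: ~{} MB per OSD".format(journal_size_mb))
  # ~2000 MB, so a 45 GB partition leaves most of the space unused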
Hi,
it's using LIO, which means it will have the same compatibility issues with VMware.
So I am wondering why they call it an ideal solution.
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Address:
IP Interactive UG (haftungsbeschraenkt)
Really interesting project!
On 16 Oct 2016 at 18:57, "Maged Mokhtar" wrote:
> Hello,
>
> I am happy to announce PetaSAN, an open source scale-out SAN that uses
> Ceph storage and LIO iSCSI Target.
> Visit us at:
> www.petasan.org
>
> Your feedback will be much appreciated.
> Maged Mokhtar
Hello,
I am happy to announce PetaSAN, an open source scale-out SAN that uses Ceph
storage and LIO iSCSI Target.
Visit us at:
www.petasan.org
Your feedback will be much appreciated.
Maged Mokhtar
OK, thanks for sharing. Yes, my journals are Intel S3610 200GB drives, which I
split into 4 partitions of ~45GB each. When I run ceph-deploy I declare
these as the journals of the OSDs.
I was trying to understand the blocking, and how much my SAS OSDs
affected my performance. I have a total of 9 hosts, 158 OSDs
On Sat, Oct 15, 2016 at 1:36 AM, Heller, Chris wrote:
> Just a thought, but since a directory tree is a first-class item in cephfs,
> could the wire protocol be extended with a "recursive delete" operation,
> specifically for cases like this?
In principle yes, but the problem is that the POSIX
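For illustration: with POSIX semantics a client still has to issue one
unlink/rmdir per entry, which is roughly what a userspace recursive delete does.
A small sketch using plain os calls on a mounted filesystem (the CephFS mount
path in the comment is hypothetical):

  import os

  def recursive_delete(root):
      # bottom-up: one unlink per file and one rmdir per directory,
      # there is no single bulk "remove this subtree" request
      removed = 0
      for dirpath, dirnames, filenames in os.walk(root, topdown=False):
          for name in filenames:
              os.unlink(os.path.join(dirpath, name))
              removed += 1
          for name in dirnames:
              os.rmdir(os.path.join(dirpath, name))
              removed += 1
      os.rmdir(root)
      return removed + 1

  # e.g. recursive_delete("/mnt/cephfs/some/old/tree")   # hypothetical path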
Morning,
It's been a few days now since the outage, but we're still unable to install
new nodes. It seems the repos are broken, and have been for at least 2 days
now (so not just a brief momentary issue caused by an update):
[osd04][WARNIN] E: Package 'ceph-osd' has no installation candidate
Hello,
On Sun, 16 Oct 2016 15:03:24 +0800 William Josefsson wrote:
> Hi list, while I know that writes in the RADOS backend are sync(), can
> anyone please explain when the cluster will return on a write call for
> RBD from VMs? Will data be considered synced once written to the
> journal, or all the way to the OSD drive?
Hi list, while I know that writes in the RADOS backend are sync(), can
anyone please explain when the cluster will return on a write call for
RBD from VMs? Will data be considered synced once written to the
journal, or all the way to the OSD drive?
Each host in my cluster has 5x Intel S3610, and 18x1
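
For what it's worth, librados itself distinguishes the two stages the question
is about: a write can be "complete" (acknowledged) and "safe" (durable on all
replicas; with filestore, as I understand it, that durability comes from the
journal write, not the later flush to the data disk). RBD from a VM sits on top
of librbd/librados, so the same acknowledgement model applies. A python-rados
sketch watching both callbacks; the pool name and conf path are assumptions:

  import rados

  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')   # assumed conf location
  cluster.connect()
  ioctx = cluster.open_ioctx('rbd')                       # assumed pool name

  def on_complete(completion):
      print("complete: acknowledged by the OSDs")

  def on_safe(completion):
      print("safe: durable on all replicas (journal write with filestore)")

  comp = ioctx.aio_write_full("test-object", b"hello world", on_complete, on_safe)
  comp.wait_for_safe()    # block until the write is durable everywhere
  ioctx.close()
  cluster.shutdown()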