[ceph-users] Squeeze packages for 0.94.2

2015-07-30 Thread Sebastian Köhler
Hello,

it seems that there are no Debian Squeeze packages in the repository for the 
current Hammer version. Is this an oversight or is there another reason those 
are not provided?
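For reference, this is the sources.list entry we would expect to keep
working on Squeeze; the exact repository path is an assumption on my
part, based on how the earlier Hammer packages were published:

  # expected entry for Hammer on Debian Squeeze (assumed path)
  deb http://download.ceph.com/debian-hammer/ squeeze main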

Sebastian
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Squeeze packages for 0.94.2

2015-07-30 Thread Sebastian Köhler
July 30 2015 11:05 AM, "Christian Balzer"  wrote:
> Is there any reason you can't use Wheezy or Jessie?

Our cluster is running on Trusty; however, nearly all of our clients are
running on Squeeze and cannot be updated in the short term for compatibility
reasons. Packages of older Hammer releases were provided for Debian 6, so we
assumed future Hammer releases would be as well.


Sebastian
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Best upgrade strategy

2016-06-05 Thread Sebastian Köhler
Hi,

we are running a cluster with 6 storage nodes (72 OSDs) and 3 monitors.
The OSDs and monitors are running on Ubuntu 14.04 with Ceph 0.94.5.
We want to upgrade the cluster to Jewel and at the same time upgrade
the OS to Ubuntu 16.04. What would be the best way to do this? First
upgrade the OS and then Ceph to 0.94.7 followed by 10.2.1? Or should
we first upgrade Ceph and then Ubuntu? Or maybe do it all at once?
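For completeness, this is roughly how we would check what every
daemon is running before and after each step (just a sketch; using
the short hostname as the monitor id is an assumption about our
naming):

  ceph tell osd.* version                  # version of every OSD daemon
  ceph daemon mon.$(hostname -s) version   # local monitor via its admin socket
  ceph -s                                  # overall cluster health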

Regards
Sebastian



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Best upgrade strategy

2016-06-06 Thread Sebastian Köhler

On 06/06/2016 03:26 PM, David Turner wrote:
> Best practice in general is to do them separately. If something
> doesn't work, is it the new kernel, some package that is
> different on 16.04, Jewel, etc.? The fewer things in that list, the
> easier it is to track down the issue and fix it.
> 
> As far as order goes, Hammer 0.94.5 wasn't built with 16.04 in
> mind, while Jewel lists both releases as supported. I would say
> upgrade to Jewel, make sure things are stable, and then upgrade to
> 16.04. It might even be wise to add a third upgrade step of running
> the kernel you'll be using on 16.04 before you upgrade to 16.04, to
> separate the two.

Ok, first the upgrade to Jewel, then the OS. Got it. Would you
recommend upgrading to 0.94.7 first, or can we go directly to Jewel?
According to the release notes a direct upgrade should be possible.
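For the record, this is roughly the per-node sequence we have in
mind for the Hammer -> Jewel step, monitors across the cluster first
and then the OSD nodes one at a time (only a sketch based on the
release notes, not a tested procedure):

  ceph osd set noout                      # avoid rebalancing while daemons are down
  apt-get update && apt-get install ceph  # Jewel packages, after switching the apt source
  stop ceph-all                           # upstart on 14.04
  chown -R ceph:ceph /var/lib/ceph        # Jewel daemons run as the 'ceph' user
  start ceph-all
  ceph osd unset noout                    # once all nodes are done and health is OK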

> 
> The biggest question, though, is why you're upgrading Ubuntu.
> Newer software does not mean better. The 3.16 kernel was the last
> kernel before an XFS filesystem regression was introduced, and it
> still hadn't been fixed as of 4.2. So Ceph storage nodes running
> OSDs on XFS would be better off on the much older kernel.

16.04 uses the 4.4 kernel; do you know whether it is fixed there?
The reason for updating to 16.04 is that we don't want to run the odd
system out (upstart vs. systemd). We have been burned before by Ceph
suddenly not publishing packages for a system although it was still
supported on paper, so we would rather use something more
"mainstream".
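The practical difference for us is mostly in how the daemons are
managed on each node, for example restarting all OSDs (the systemd
target name is what I understand the Jewel packages ship, so treat it
as an assumption):

  restart ceph-osd-all                  # Ubuntu 14.04, upstart
  systemctl restart ceph-osd.target     # Ubuntu 16.04 with Jewel, systemd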


Regards
Sebastian

> Sent from my iPhone
> 
>> On Jun 5, 2016, at 5:48 PM, Sebastian Köhler 
>> wrote:
>> 
>> Hi,
>> 
>> we are running a cluster with 6 storage nodes (72 OSDs) and 3
>> monitors. The OSDs and monitors are running on Ubuntu 14.04
>> with Ceph 0.94.5. We want to upgrade the cluster to Jewel and at
>> the same time upgrade the OS to Ubuntu 16.04. What would be the
>> best way to do this? First upgrade the OS and then Ceph to 0.94.7
>> followed by 10.2.1? Or should we first upgrade Ceph and then
>> Ubuntu? Or maybe do it all at once?
>> 
>> Regards Sebastian
>> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Replica count

2016-10-23 Thread Sebastian Köhler
Hello,

is it possible to reduce the replica count of a pool that already
contains data? If so, how much load will a change in the replica
size cause? I am guessing it will trigger a rebalance.

Thanks

Sebastian



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Replica count

2016-10-23 Thread Sebastian Köhler
Thanks for the help. Is there any information available on how much
data movement will happen when I reduce the size from 3 to 2? The
min_size is already at 1.
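For reference, this is what we plan to run ('rbd' is just a
placeholder for the pool name):

  ceph osd pool get rbd size        # confirm the current replica count
  ceph osd pool set rbd size 2      # reduce from 3 to 2 replicas
  ceph osd pool get rbd min_size    # should already report 1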

On 10/23/2016 05:43 PM, David Turner wrote:
> Make sure to also adjust your min_size. Having those be the same number
> can cause issues if and when you lose an osd from your cluster.
> 
> Like Wido said, you can change the size of a replica pool at any time,
> it will just cause a lot of data to move.
> 
> Sent from my iPhone
> 
>> On Oct 23, 2016, at 4:30 AM, Wido den Hollander  wrote:
>>
>>
>>> On 23 October 2016 at 10:04, Sebastian Köhler wrote:
>>>
>>>
>>> Hello,
>>>
>>> is it possible to reduce the replica count of a pool that already
>>> contains data? If it is possible how much load will a change in the
>>> replica size cause? I am guessing it will do a rebalance.
>>>
>>
>> Yes, just change the 'size' parameter of the pool. Data will
>> indeed rebalance if you increase the number.
>>
>> Wido
>>
>>> Thanks
>>>
>>> Sebastian
>>>
>>>
> 
> 
> 



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] rbd: I/O Errors in low memory situations

2015-02-18 Thread Sebastian Köhler [Alfahosting GmbH]
 0*8kB 
0*16kB 1*32kB (U) 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB 
(U) 1*2048kB (R) 3*4096kB (M) = 15904kB
Feb 17 22:52:25 six kernel: [2401866.078351] Node 1 DMA32: 15791*4kB 
(UE) 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 
0*2048kB 1*4096kB (R) = 67260kB
Feb 17 22:52:25 six kernel: [2401866.078357] Node 1 Normal: 89352*4kB 
(U) 0*8kB 0*16kB 1*32kB (R) 1*64kB (R) 1*128kB (R) 1*256kB (R) 0*512kB 
1*1024kB (R) 1*2048kB (R) 0*4096kB = 360960kB
Feb 17 22:52:25 six kernel: [2401866.078366] Node 0 hugepages_total=0 
hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Feb 17 22:52:25 six kernel: [2401866.078368] Node 1 hugepages_total=0 
hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB

Feb 17 22:52:25 six kernel: [2401866.078369] 1687591 total pagecache pages
Feb 17 22:52:25 six kernel: [2401866.078370] 0 pages in swap cache
Feb 17 22:52:25 six kernel: [2401866.078372] Swap cache stats: add 0, 
delete 0, find 0/0

Feb 17 22:52:25 six kernel: [2401866.078372] Free swap  = 23390604kB
Feb 17 22:52:25 six kernel: [2401866.078373] Total swap = 23390604kB
Feb 17 22:52:25 six kernel: [2401866.078374] 3143224 pages RAM
Feb 17 22:52:25 six kernel: [2401866.078375] 0 pages HighMem/MovableOnly
Feb 17 22:52:25 six kernel: [2401866.078376] 22529 pages reserved
Feb 17 22:52:25 six kernel: [2401866.078377] 0 pages hwpoisoned
Feb 17 22:52:25 six kernel: [2401866.078380] rbd: rbd1: write 1000 at 0 
result -12
Feb 17 22:52:25 six kernel: [2401866.078382] end_request: I/O error, dev 
rbd1, sector 0
Feb 17 22:52:25 six kernel: [2401866.078449] Buffer I/O error on device 
rbd1, logical block 0
Feb 17 22:52:25 six kernel: [2401866.078515] lost page write due to I/O 
error on rbd1
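The -12 in "write 1000 at 0 result -12" is -ENOMEM, i.e. the kernel
rbd client could not allocate memory for the write and reported it
upward as an I/O error. One mitigation we are looking at is raising
the kernel's free-memory reserve so allocations under pressure are
less likely to fail; the value below is only an example, not a
recommendation:

  sysctl vm.min_free_kbytes             # current reserve in kB
  sysctl -w vm.min_free_kbytes=262144   # raise the reserve (example value)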



--
Kind regards

Sebastian Köhler


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com