Good day! Please help me solve the problem. There is the following scheme:
an ESXi server with 1Gb NICs. It has a local datastore store2Tb and two
iSCSI datastores connected to the second server.
The second server is a Supermicro: two 1TB HDDs (LSI 9261-8i with battery),
8 CPU cores, 32 GB RAM and two 1Gb NICs. O
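The preview is truncated here, so the exact role of Ceph in this scheme is
not visible; assuming the second server exposes RBD-backed storage to ESXi
over iSCSI, a minimal sketch with tgt might look like the following (image
name, size and IQN are made up for illustration):

  rbd create esxi-lun0 --size 1048576    # ~1TB image; name/size are assumptions
  rbd map esxi-lun0                      # the image appears as /dev/rbd0
  tgtadm --lld iscsi --mode target --op new --tid 1 \
         --targetname iqn.2014-01.local.storage:esxi-lun0
  tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
         --backing-store /dev/rbd0
  tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL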
On 01/17/2014 10:01 AM, Никитенко Виталий wrote:
Good day! Please help me solve the problem. There is the following scheme:
an ESXi server with 1Gb NICs. It has a local datastore store2Tb and two
iSCSI datastores connected to the second server.
The second server is a Supermicro: two 1TB HDDs (LSI 9261-8i with b
On Fri, Jan 17, 2014 at 2:05 AM, Christian Balzer wrote:
> On Thu, 16 Jan 2014 15:51:17 +0200 Ilya Dryomov wrote:
>
>> On Wed, Jan 15, 2014 at 5:42 AM, Sage Weil wrote:
>> >
>> > [...]
>> >
>> > * rbd: support for 4096 mapped devices, up from ~250 (Ilya Dryomov)
>>
>> Just a note, v0.75 simply ad
On Fri, Jan 17, 2014 at 11:20 AM, Ilya Dryomov wrote:
> On Fri, Jan 17, 2014 at 2:05 AM, Christian Balzer wrote:
>> On Thu, 16 Jan 2014 15:51:17 +0200 Ilya Dryomov wrote:
>>
>>> On Wed, Jan 15, 2014 at 5:42 AM, Sage Weil wrote:
>>> >
>>> > [...]
>>> >
>>> > * rbd: support for 4096 mapped devices
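As a quick illustration of what "mapped devices" refers to here, the kernel
rbd client is driven roughly like this (the pool and image names below are
hypothetical):

  rbd map rbd/img-001      # device shows up as /dev/rbd0
  rbd map rbd/img-002      # /dev/rbd1, and so on
  rbd showmapped           # lists every currently mapped image/device pair
  rbd unmap /dev/rbd1      # release a mapping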
Hi, Виталий.
Is the number of PGs sufficient?
2014/1/17 Никитенко Виталий
> Good day! Please help me solve the problem. There is the following
> scheme:
> An ESXi server with 1Gb NICs. It has a local datastore store2Tb and two
> iSCSI datastores connected to the second server.
> The second server is a Supe
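For anyone following along, a hedged way to check and size PG counts (the
pool name and OSD count below are examples, not taken from this thread):

  # Check the current PG count of a pool ('rbd' is just an example):
  ceph osd pool get rbd pg_num
  # Common rule of thumb from the docs: total PGs ~= (#OSDs * 100) / replicas,
  # rounded up to a power of two. E.g. with 10 OSDs and 3 replicas:
  echo $(( 10 * 100 / 3 ))       # ~333 -> round up to 512
  # Raise it if needed (pgp_num should follow pg_num):
  ceph osd pool set rbd pg_num 512
  ceph osd pool set rbd pgp_num 512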
Hi guys,
I use ceph-deploy to deploy my ceph cluster.
This is my config file:
-
[global]
osd pool default size = 3
auth_service_required = none
filestore_xattr_use_omap = true
journal zero on create = true
auth_client_requi
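The preview cuts the file off at "auth_client_requi"; purely for
illustration (this is not Tim's actual file), a minimal [global] section in
the same style often looks like the sketch below, with hostnames, addresses
and the extra auth values being placeholders:

  [global]
  osd pool default size = 3
  auth_service_required = none
  auth_client_required = none         ; assumed, value is a guess
  auth_cluster_required = none        ; illustrative only
  filestore_xattr_use_omap = true
  journal zero on create = true
  mon initial members = node1, node2, node3           ; placeholder hostnames
  mon host = 192.168.0.1, 192.168.0.2, 192.168.0.3    ; placeholder addresses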
On 01/17/2014 12:46 PM, Tim Zhang wrote:
Hi guys,
I use ceph-deploy to deploy my ceph cluster.
This is my config file:
-
[global]
osd pool default size = 3
auth_service_required = none
filestore_xattr_use_omap = true
jour
On 01/17/2014 12:02 PM, Wido den Hollander wrote:
On 01/17/2014 12:46 PM, Tim Zhang wrote:
Hi guys,
I use ceph-deploy to deploy my ceph cluster.
This is my config file:
-
[global]
osd pool default size = 3
auth_service_
Dear,
we are studying the possibility to migrate our FS in the next year to
cephfs. I know that it is not prepared for production environments yet, but
we are planning to play with it in the next months deploying a basic
testbed.
Reading the documentation, I see 3 mons, 1 mds and several OSDs (
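As a rough starting point, a testbed like that can be brought up with
ceph-deploy; the hostnames and the /dev/sdb data disk below are
placeholders, not a recommendation:

  ceph-deploy new mon1 mon2 mon3
  ceph-deploy install mon1 mon2 mon3 mds1 osd1 osd2 osd3
  ceph-deploy mon create-initial
  ceph-deploy osd create osd1:/dev/sdb osd2:/dev/sdb osd3:/dev/sdb
  ceph-deploy mds create mds1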
On 01/17/2014 01:29 PM, Joao Eduardo Luis wrote:
On 01/17/2014 12:02 PM, Wido den Hollander wrote:
On 01/17/2014 12:46 PM, Tim Zhang wrote:
Hi guys,
I use ceph-deploy to deploy my ceph cluster.
This is my config file:
-
On Fri, Jan 17, 2014 at 8:41 AM, Wido den Hollander wrote:
> On 01/17/2014 01:29 PM, Joao Eduardo Luis wrote:
>>
>> On 01/17/2014 12:02 PM, Wido den Hollander wrote:
>>>
>>> On 01/17/2014 12:46 PM, Tim Zhang wrote:
Hi guys,
I use ceph-deploy to deploy my ceph cluster.
This is m
On 01/14/2014 10:42 PM, Sage Weil wrote:
> This is a big release, with lots of infrastructure going in for
> firefly. The big items include a prototype standalone frontend for
> radosgw (which does not require apache or fastcgi), tracking for read
> activity on the osds (to inform tiering decision
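For anyone curious about the standalone radosgw frontend mentioned above: in
later releases this became the embedded civetweb server, configured roughly
as sketched below (the 0.75 prototype used a different embedded web server,
so the exact frontend name there may differ; the section name and port are
placeholders):

  [client.radosgw.gateway]
  rgw frontends = civetweb port=7480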
On Friday, January 17, 2014, Iban Cabrillo wrote:
> Dear,
> we are studying the possibility to migrate our FS in the next year to
> cephfs. I know that it is not prepared for production environments yet, but
> we are planning to play with it in the next months deploying a basic
> testbed.
> Re
On Fri, 17 Jan 2014, Guang wrote:
> Thanks Sage.
>
> I further narrowed the problem down to #any command using the paxos service
> would hang#; the details follow:
>
> 1. I am able to run ceph status / osd dump, etc.; however, the results are
> out of date (though I stopped all OSDs, it does not re
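A few commands that are often useful when monitor-backed commands hang or
return stale data (the daemon name "mon.a" and socket path are placeholders):

  # Cluster-wide view (these still go through the monitors):
  ceph health detail
  ceph quorum_status
  # Query one monitor directly over its admin socket; this works even when
  # paxos is not making progress:
  ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok mon_status
  ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok quorum_status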
Just an FYI...we have a Ceph cluster setup for archiving audio and video using
the following Dell hardware:
6 x Dell R-720xd; 64 GB of RAM; for OSD nodes
72 x 4TB SAS drives as OSDs
3 x Dell R-420; 32 GB of RAM; for MON/RADOSGW/MDS nodes
2 x Force10 S4810 switches
4 x 10 GigE LACP-bonded Intel car
Thanks for the numbers Shain. I'm new to Ceph, but I definitely like the
technology. However, I'm not sure how to calculate whether the transfer
numbers you mentioned would be considered "good". For example, assuming a
single disk's rate is barely 50MB/s, then the 1175MB/s is merely the
aggregate bandwidth
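A hedged back-of-envelope for that hardware, using the 50MB/s-per-disk
assumption above (the replication factor and journal layout are guesses, not
taken from Shain's post):

  echo $(( 72 * 50 ))            # ~3600 MB/s aggregate raw disk bandwidth
  echo $(( 4 * 1250 ))           # ~5000 MB/s ceiling for a 4x10GbE LACP bond
  echo $(( 72 * 50 / (3 * 2) ))  # ~600 MB/s write ceiling if 3x replication
                                 # and journals share the data disks

Reads do not pay the replication or journal penalty, so a figure in the
region of 1175MB/s is plausible against roughly 3600MB/s of raw spindle
bandwidth.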
I guess I joined the mailing list at just the right time, since I'm just
starting to size out a ceph cluster, and I was just starting to read about how
best to size out the nodes.
You mention considering less dense nodes for OSD nodes.
Assuming you used nodes with similar CPU, RAM, etc., at what
Hi Greg,
2014/1/17 Gregory Farnum
> On Friday, January 17, 2014, Iban Cabrillo
> wrote:
>
>> Dear,
>> we are studying the possibility to migrate our FS in the next year to
>> cephfs. I know that it is not prepared for production environments yet, but
>> we are planning to play with it in the
I tried using aws-java-sdk. I can list the buckets, but can't do any other
functions like create/delete objects/buckets. I am getting 403/405 response
codes. Please let me know if anybody has used it. Subdomains are resolving
properly in DNS. Thanks.
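One thing worth checking for 403/405 errors with subdomain-style bucket
access: radosgw only recognises virtual-hosted bucket names if it knows its
own DNS name. A hedged ceph.conf sketch (the section name and hostname are
placeholders):

  [client.radosgw.gateway]
  rgw dns name = s3.example.com   ; must match the wildcard DNS record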
On Jan 15, 2014, Sage Weil wrote:
> v0.75 291 files changed, 82713 insertions(+), 33495 deletions(-)
> Upgrading
> ~
I suggest adding:
* All (replicated?) pools will likely fail scrubbing because the
per-pool dirty object counts, introduced in 0.75, won't match. This
incons
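For reference, the per-PG commands usually involved when scrubs start
flagging such inconsistencies (the PG id below is a placeholder):

  ceph pg scrub 2.1f        # schedule a scrub of one PG
  ceph pg deep-scrub 2.1f   # deep scrub, compares object contents as well
  ceph pg repair 2.1f       # ask the primary to repair inconsistencies found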
On 18/01/14 19:50, Alexandre Oliva wrote:
On Jan 15, 2014, Sage Weil wrote:
v0.75 291 files changed, 82713 insertions(+), 33495 deletions(-)
Upgrading
~
I suggest adding:
* All (replicated?) pools will likely fail scrubbing because the
per-pool dirty object counts, intr