- Message from Sage Weil -
Date: Thu, 11 Sep 2014 14:10:46 -0700 (PDT)
From: Sage Weil
Subject: Re: [ceph-users] Cephfs upon Tiering
To: Gregory Farnum
Cc: Kenneth Waegeman , ceph-users
On Thu, 11 Sep 2014, Gregory Farnum wrote:
On Thu, Sep 11, 2014 at 11:39
Hi,
I've stumbled upon this a couple of times, where Ceph just stops
responding, but still works.
The cause has been packet loss on the network layer, but Ceph doesn't
say anything.
Is there a debug flag for showing retransmission of packets, or some way
to see that packets are lost?
Regards,
J
Hi,
Can anyone help me figure out why radosgw-admin pools list gives me this error?
#radosgw-admin pools list
couldn't init storage provider
But rados lspools lists all the pools.
Regards,
Santhosh
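For what it's worth, "couldn't init storage provider" usually means radosgw-admin cannot find or authenticate with its gateway configuration. A hedged sketch of things to check (the gateway user name client.radosgw.gateway is only an assumption, adjust to your setup):
---
# confirm the gateway key exists in the cluster
ceph auth get client.radosgw.gateway
# run radosgw-admin with the conf file and gateway user made explicit
radosgw-admin pools list -c /etc/ceph/ceph.conf --name client.radosgw.gateway
---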
Hi,
Following up on this issue, I've identified that almost all unfound objects
belong to a single RBD volume (with the help of the script below).
Now what's the best way to try to recover the filesystem stored on this
RBD volume?
'mark_unfound_lost revert' or 'mark_unfound_lost lost' and then run
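For reference, a minimal sketch of the commands involved (the pgid is a placeholder; 'revert' rolls an unfound object back to its previous version and deletes it if it never had one):
---
# which PGs report unfound objects
ceph health detail | grep unfound
# list the unfound objects of a given PG
ceph pg <pgid> list_missing
# give up on them
ceph pg <pgid> mark_unfound_lost revert
---
After that, the filesystem on the RBD volume will likely still need an fsck from the guest side, since some blocks will have been rolled back or read as zeros.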
Hello Team,
I have configured Ceph as a multi-backend for OpenStack.
I have created 2 pools:
1. volumes (replication size = 3)
2. poolb (replication size = 2)
Below is the details from /etc/cinder/cinder.conf
enabled_backends=rbd-ceph,rbd-cephrep
[rbd-ceph]
volume_driver=cinde
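Since the config is cut off above, here is a hedged sketch of what a two-backend RBD cinder.conf typically looks like; backend and pool names follow the message, everything else (user, uuid) is a placeholder:
---
[DEFAULT]
enabled_backends=rbd-ceph,rbd-cephrep

[rbd-ceph]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
volume_backend_name=rbd-ceph
rbd_pool=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
rbd_secret_uuid=<libvirt-secret-uuid>

[rbd-cephrep]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
volume_backend_name=rbd-cephrep
rbd_pool=poolb
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
rbd_secret_uuid=<libvirt-secret-uuid>
---
Each backend is then exposed through a volume type (cinder type-create / cinder type-key <type> set volume_backend_name=<name>).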
Hi Ceph-Users,
I have absolutely no idea what is going on on my systems...
Hardware:
45 x 4TB Harddisks
2 x 6 Core CPUs
256GB Memory
When initializing all disks and joining them to the cluster, after
approximately 30 OSDs, other OSDs start crashing. When I try to start them
again I see different kind
What are your ulimit settings? You could be hitting the max process count.
On 9/12/2014 9:06 AM, Christian Eichelmann wrote:
Hi Ceph-Users,
I have absolutely no idea what is going on on my systems...
Hardware:
45 x 4TB Harddisks
2 x 6 Core CPUs
256GB Memory
When initializing all disks and jo
Do: cat /proc/<pid>/limits
You are probably hitting the max processes limit or the max FD limit.
> Hi Ceph-Users,
>
> I have absolutely no idea what is going on on my systems...
>
> Hardware:
> 45 x 4TB Harddisks
> 2 x 6 Core CPUs
> 256GB Memory
>
> When initializing all disks and join them to the cluster, after
> a
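As a concrete version of that suggestion, something like the following can be run on one of the OSD hosts (the grep patterns and sysctl names are the usual suspects, not taken from the original report):
---
# limits of a running OSD daemon
cat /proc/$(pidof -s ceph-osd)/limits | grep -Ei 'open files|processes'
# kernel-wide ceilings that 45 OSDs per host can bump into
sysctl fs.file-max kernel.pid_max
---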
Hi all,
Today I have a problem using CephFS. I use the latest Firefly release, with a
kernel 3.16 client (Debian experimental).
I have a directory in CephFS, associated with a pool "pool2" (via
set_layout).
All is working fine, I can add and remove files, and objects are stored in
the right pool.
But when C
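For reference, with a 3.16 kernel client the directory layout can also be inspected and changed through the ceph.* virtual xattrs; the mount path and directory below are illustrative:
---
# show the directory layout (pool, stripe unit/count, object size)
getfattr -n ceph.dir.layout /mnt/cephfs/mydir
# make new files in the directory go to pool2
setfattr -n ceph.dir.layout.pool -v pool2 /mnt/cephfs/mydir
---
The pool also has to be registered as a CephFS data pool first (ceph mds add_data_pool <pool> on firefly), or the layout change is rejected.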
Hi,
I am running all commands as root, so there are no limits for the processes.
Regards,
Christian
___
From: Mariusz Gronczewski [mariusz.gronczew...@efigence.com]
Sent: Friday, 12 September 2014 15:33
To: Christian Eichelmann
Cc: ceph-users@lists.ceph.co
Here are the results for the Intel S3500.
Max performance is with Ceph 0.85 + optracker disabled.
The Intel S3500 doesn't have the D_SYNC problem that the Crucial has.
%util shows almost 100% for read and write, so maybe the SSD performance is
the limit.
I have some stec zeusram
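For anyone repeating the comparison, a quick and admittedly crude way to measure O_DSYNC write behaviour directly (the device path is a placeholder and the test overwrites it, so only run it on an empty disk):
---
# raw sequential 4k writes with dsync, roughly what the OSD journal does
dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync
---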
Another thing we're looking into is compression. The intersection of
compression and object striping (fracturing) is interesting. Is the striping
variable on a per-object basis?
Allen Samuels
Chief Software Architect, Emerging Storage Solutions
951 SanDisk Drive, Milpitas, CA 95035
T: +1 408
That's not how ulimit works. Check the `ulimit -a` output.
On 9/12/2014 10:15 AM, Christian Eichelmann wrote:
Hi,
I am running all commands as root, so there are no limits for the processes.
Regards,
Christian
___
From: Mariusz Gronczew.
Hi,
I am new to the Ceph file system and have a newbie question:
For a sparse file, how can the Ceph file system tell whether a hole in the file
was never written or whether some stripe was simply lost?
Thanks,
Brandon
On Fri, 12 Sep 2014 12:05:06 -0400 Brian Rak wrote:
> That's not how ulimit works. Check the `ulimit -a` output.
>
Indeed.
And to forestall the next questions, see "man initscript", mine looks like
this:
---
ulimit -Hn 131072
ulimit -Sn 65536
# Execute the program.
eval exec "$4"
---
And also
On Fri, Sep 12, 2014 at 1:53 AM, Kenneth Waegeman wrote:
>
> - Message from Sage Weil -
> Date: Thu, 11 Sep 2014 14:10:46 -0700 (PDT)
> From: Sage Weil
> Subject: Re: [ceph-users] Cephfs upon Tiering
> To: Gregory Farnum
> Cc: Kenneth Waegeman, ceph-users
>
Ceph messages are transmitted over TCP, so the system isn't directly aware
of packet loss at any level. I suppose we could try and export messenger
reconnect counts via the admin socket, but that'd be a very noisy measure
-- it seems simplest to just query the OS or hardware directly?
-Greg
On Fr
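Querying the OS directly, as suggested, can be as simple as the following (standard Linux tools, nothing Ceph-specific; the OSD host address is a placeholder):
---
# host-wide TCP retransmission counters
netstat -s | grep -i retrans
# per-connection details, including retransmits, for sessions to one OSD host
ss -ti dst <osd-host-ip>
---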
On Fri, Sep 12, 2014 at 9:26 AM, brandon li wrote:
> Hi,
>
> I am new to ceph file system, and have got a newbie question:
>
> For a sparse file, how could ceph file system know the hole in the file was
> never created or some stripe was just simply lost?
CephFS does not keep any metadata to try
On Fri, Sep 12, 2014 at 6:49 AM, Florent Bautista wrote:
> Hi all,
>
> Today I have a problem using CephFS. I use firefly last release, with
> kernel 3.16 client (Debian experimental).
>
> I have a directory in CephFS, associated to a pool "pool2" (with
> set_layout).
>
> All is working fine, I ca
On Fri, Sep 12, 2014 at 4:41 AM, Francois Deppierraz
wrote:
> Hi,
>
> Following-up this issue, I've identified that almost all unfound objects
> belongs to a single RBD volume (with the help of the script below).
>
> Now what's the best way to try to recover the filesystem stored on this
> RBD vol
We were building a test cluster here, and I enabled MDS in order to use
ceph-fuse to fill the cluster with data. It seems the metadata server is
having problems, so I figured I'd just remove it and rebuild it. However, the
"ceph-deploy mds destroy" command is not implemented; it appears that
You can turn off the MDS and create a new FS in new pools. The ability
to shut down a filesystem more completely is coming in Giant.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Fri, Sep 12, 2014 at 1:16 PM, LaBarre, James (CTR) A6IT
wrote:
> We were building a tes
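A hedged sketch of what "turn off the MDS and create a new FS in new pools" can look like on firefly (pool names and PG counts are illustrative, and the init command depends on the distro; newfs abandons the old metadata rather than deleting it):
---
# stop the MDS daemon on its host
service ceph stop mds
# create fresh pools for the new filesystem
ceph osd pool create newdata 128
ceph osd pool create newmetadata 128
# point CephFS at the new pools, using the pool IDs from 'ceph osd dump'
ceph mds newfs <newmetadata-pool-id> <newdata-pool-id> --yes-i-really-mean-it
---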
Hi All!
First of all, thanks in advance.
If my running Ceph cluster is CentOS 6.5 with Ceph Firefly v0.80.1, will a
Calamari server running on Ubuntu 12.04 be able to connect to/monitor/manage this
cluster?
I would think Ubuntu-based agents have to be installed on the storage nodes in
order for the Calamari serv
Hi,
We are facing a crash while deleting a large number of objects. Here is the trace.
2014-09-12 13:48:06.820524 7fb56596d700 -1 os/FDCache.h: In function 'void
FDCache::clear(const ghobject_t&)' thread 7fb56596d700 time 2014-09-12
13:48:06.815407
os/FDCache.h: 89: FAILED assert(!registry[regist
Looking at the docs (below), it seems like .95 and .85 are the default
values for the full and near-full ratios, and if you reach the full ratio, the cluster will
stop reading and writing to avoid data corruption.
http://ceph.com/docs/master/rados/configuration/mon-config-ref/#storage-capacity
So, few ques
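For reference, the defaults live in the monitor configuration and can be changed either in ceph.conf or on a running cluster (the values below are just the documented defaults):
---
# in ceph.conf
[global]
mon osd full ratio = .95
mon osd nearfull ratio = .85
---
The same values can be changed at runtime with 'ceph pg set_full_ratio 0.95' and 'ceph pg set_nearfull_ratio 0.85'.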
Hi,
I'm a n00b in the Ceph world, so here I go. I was following these tutorials
[1][2] (in case you need to know if I missed something), and while trying to
mount a block from an isolated machine using CephFS I got this error
(actually following what's there in [2]):
mount error 5 = Input/output error
What does your mount command look like?
Sent from my iPhone 5S
> On Sep 12, 2014, at 4:56 PM, Erick Ocrospoma wrote:
>
> Hi,
>
> I'm n00b in the ceph world, so here I go. I was following this tutorials
> [1][2] (in case you need to know if I missed something), while trying to
> mount a bl
On 12 September 2014 20:32, JIten Shah wrote:
> What does your mount command look like ?
>
>
mount -t ceph ceph01:/mnt /mnt -o name=admin,secretfile=/root/ceph/admin.key
Where ceph01 is my mds server.
> Sent from my iPhone 5S
>
>
>
> On Sep 12, 2014, at 4:56 PM, Erick Ocrospoma wrote:
>
> Hi
Hi Erick,
the address to use in the mount syntax is the address of your MON node, not that
of the MDS node.
Or maybe you have deployed both a MON and an MDS on ceph01?
JC
On Sep 12, 2014, at 18:41, Erick Ocrospoma wrote:
>
>
> On 12 September 2014 20:32, JIten Shah wrote:
> What do
Yes. It has to be the name of the MON server. If there is more than one MON
server, they all need to be listed.
--Jiten
Sent from my iPhone 5S
> On Sep 12, 2014, at 6:44 PM, Jean-Charles LOPEZ wrote:
>
> Hi Erick,
>
> the address to use in the mount syntax is the address of your MON n
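A hedged example of listing several monitors in the mount command (addresses are placeholders); the kernel client will use whichever it can reach:
---
mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
---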
Here's an example:
sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o
name=admin,secret=AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==
Sent from my iPhone 5S
> On Sep 12, 2014, at 7:14 PM, JIten Shah wrote:
>
> Yes. It has to be the name of the MON server. If there are more than one MON
> s