Hello,
I'm trying online resizing with RBD + XFS, but when I run xfs_growfs it doesn't see the new size. I don't use a partition table; the OS is Debian Squeeze / kernel 3.8.4 / ceph 0.56.4.
It seems that the mounted file system prevents updating the block device size?
If the file system is
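For what it's worth, the sequence I'm testing is roughly the following (pool, image and mount point names here are just placeholders):

    # grow the image on the cluster side (--size is given in MB)
    rbd resize --size 20480 rbd/myimage

    # on the client where the image is mapped: what size does the kernel see?
    blockdev --getsize64 /dev/rbd0

    # then try to grow the mounted XFS file system
    xfs_growfs /mnt/myimage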
On 04/05/2013 12:34 PM, Laurent Barbe wrote:
Hello,
I'm trying online resizing with RBD + XFS, but when I run xfs_growfs it doesn't see the new size. I don't use a partition table; the OS is Debian Squeeze / kernel 3.8.4 / ceph 0.56.4.
It seems that the mounted file system prevents updating
On 04/05/2013 05:50 AM, Vanja Z wrote:
I have been testing CephFS on our computational cluster of about 30 computers.
I want users to be able to access the file-system from their personal machines.
At the moment, we simply allow the same NFS exports to be mounted from users'
personal machines.
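What I would like instead is for each personal machine to mount CephFS directly, along these lines (the monitor address, mount point and secret file path are placeholders, not our real setup):

    # kernel CephFS client mount from a user's workstation
    mount -t ceph 192.168.1.10:6789:/ /mnt/ceph \
        -o name=admin,secretfile=/etc/ceph/admin.secret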
On 04/05/2013 05:47 AM, Vanja Z wrote:
I have been testing CephFS on our computational cluster of about 30 computers.
I've got 4 machines, 4 disks, 4 osd, 4 mon and 1 mds at the moment for testing.
The testing has been going very well apart from one problem that needs to be
resolved before we
Hello to all,
I have a Ceph cluster composed of 4 nodes in 2 different rooms.
room A : osd.1, osd.3, mon.a, mon.c
room B : osd.2, osd.4, mon.b
My CRUSH rule is made to place replicas across rooms.
So normally, if I shut down the whole of room A, my cluster should stay usable.
... but, in fact, no.
When I
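For reference, a rule that places replicas across rooms typically looks something like this (a sketch with illustrative bucket and rule names, not my exact crushmap):

    rule replicated_across_rooms {
        ruleset 1
        type replicated
        min_size 2
        max_size 2
        step take default
        step chooseleaf firstn 0 type room
        step emit
    }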
Hi,
On 04/05/2013 01:57 PM, Alexis GÜNST HORN wrote:
Hello to all,
I have a Ceph cluster composed of 4 nodes in 2 different rooms.
room A : osd.1, osd.3, mon.a, mon.c
room B : osd.2, osd.4, mon.b
My CRUSH rule is made to place replicas across rooms.
So normally, if I shut down the whole of room A, my c
Thanks for your answer,
no luck with blockdev --rereadpt or partprobe -s either. :(
2013/4/5 Wido den Hollander
> On 04/05/2013 12:34 PM, Laurent Barbe wrote:
>
>> Hello,
>>
>> I'm trying online resizing with RBD + XFS, but when I run xfs_growfs it
>> doesn't see the new size. I don
Thanks Wido, I have to admit it's slightly disappointing (but completely
understandable) since it basically means it's not safe for us to use CephFS :(
Without "userquotas", it would be sufficient to have multiple CephFS
filesystems and to be able to set the size of each one.
Is it part of the
If I pause my instances in OpenStack, then snapshot and clone my volumes, I
should have a consistent backup, correct? Is freezing the file system on
snapshot creation, like LVM does, a potential future feature?
I've considered Sebastien's method here(
http://www.sebastien-han.fr/blog/2012/12/10/openstack-perform-cons
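Roughly what I have in mind, assuming the guest file system can be quiesced from inside the instance (device, pool and snapshot names are only illustrative):

    # inside the guest: freeze the file system before the snapshot
    fsfreeze -f /mnt/data

    # on a Ceph client: snapshot the backing RBD volume
    rbd snap create volumes/volume-1234@backup-20130405

    # inside the guest: thaw the file system again
    fsfreeze -u /mnt/data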
On 4/5/2013 7:57 AM, Wido den Hollander wrote:
You always need a majority of your monitors to be up. In this case you
lose 66% of your monitors, so mon.b can't get a majority.
With 3 monitors you need at least 2 to be up to have your cluster working.
That's kinda useless, isn't it? I'd've th
If, in the case above, you have a monitor per room (a, b) and one in a
third location outside of either (c), you would have the ability to
take down the entirety of either room and still maintain monitor
quorum. (a,c or b,c) The cluster would continue to work.
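To make the arithmetic concrete: a quorum needs a strict majority, i.e. floor(N/2) + 1 monitors, so with three monitors you need two up. A layout like the following (hostnames and addresses are placeholders) survives the loss of either room, because the surviving room plus the third site still gives 2 of 3:

    [mon.a]
        # room A
        host = nodeA1
        mon addr = 10.0.1.1:6789
    [mon.b]
        # room B
        host = nodeB1
        mon addr = 10.0.2.1:6789
    [mon.c]
        # third location, outside both rooms
        host = external1
        mon addr = 10.0.3.1:6789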
On Fri, Apr 5, 2013 at 10:02 AM, Dimi
On 04/05/2013 05:02 PM, Dimitri Maziuk wrote:
On 4/5/2013 7:57 AM, Wido den Hollander wrote:
You always need a majority of your monitors to be up. In this case you
lose 66% of your monitors, so mon.b can't get a majority.
With 3 monitors you need at least 2 to be up to have your cluster
worki
On Apr 4, 2013, at 3:06 AM, Waed Bataineh wrote:
> Hello,
>
> I'm using Ceph as object storage, where it puts the whole file, whatever its
> size, in one object (correct me if I'm wrong).
> I used it for multiple files that have different extensions (.txt, .mp3,
> ...etc). I can store the
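For example, with the rados command line tool (pool, object and file names are placeholders), each put stores the whole file as a single object, whatever its extension or size:

    # store a file as one object
    rados -p mypool put song.mp3 ./song.mp3

    # read it back later
    rados -p mypool get song.mp3 ./song-copy.mp3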
On 04/05/2013 10:12 AM, Wido den Hollander wrote:
> Think about it this way. You have two racks and the network connection
> between them fails. If both racks keep operating because they can still
> reach that single monitor in their rack you will end up with data
> inconsistency.
Yes. In DRBD la
On Fri, Apr 5, 2013 at 10:28 AM, Dimitri Maziuk wrote:
> On 04/05/2013 10:12 AM, Wido den Hollander wrote:
>
>> Think about it this way. You have two racks and the network connection
>> between them fails. If both racks keep operating because they can still
>> reach that single monitor in their ra
On 4/5/2013 10:32 AM, Gregory Farnum wrote:
On Fri, Apr 5, 2013 at 10:28 AM, Dimitri Maziuk wrote:
On 04/05/2013 10:12 AM, Wido den Hollander wrote:
Think about it this way. You have two racks and the network connection
between them fails. If both racks keep operating because they can still
r
On 04/05/2013 12:38 PM, Jeff Anderson-Lee wrote:
> The point is I believe that you don't need a 3rd replica of everything,
> just a 3rd MON running somewhere else.
Bear in mind that you still need a physical machine somewhere in that
"somewhere else".
--
Dimitri Maziuk
Programmer/sysadmin
BioMa
Hi all,
I have a problem after my RBD performance test.
Setup:
Linux kernel: 3.6.11
OS: Ubuntu 12.04
RAID card: LSI MegaRAID SAS 9260-4i
For every HDD: RAID0, Write Policy: Write Back with BBU, Read Policy: ReadAhead, IO Policy: Direct
Storage server number: 1
Storage server :
8 * HDD (each
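For context, the kind of test meant here is a plain write benchmark against a mapped RBD image, for example with fio (device path, block size and runtime are illustrative, not necessarily the exact test that was run):

    # 4k random writes directly to the mapped RBD device, bypassing the page cache
    fio --name=rbd-test --filename=/dev/rbd0 --rw=randwrite --bs=4k \
        --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based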