Hi,
I saved a 5GB file in the cluster. The OSDs' "Used space" increases by 15GB
in total because replication is 3, and radosgw-admin bucket stats --uid=someuid
shows that num-objects increased by 1.
However, after I removed the object, I observe this:
the OSD disk usage does NOT change
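If this was an RGW object, one likely explanation is that radosgw frees space asynchronously: the deleted object is queued for the garbage collector, and OSD usage only drops once GC has run. A quick way to check, assuming default GC settings and admin credentials:

  # Objects queued for garbage collection (including entries not yet expired)
  radosgw-admin gc list --include-all

  # Force a garbage-collection pass instead of waiting for the scheduled one
  radosgw-admin gc process

  # Then re-check usage
  radosgw-admin bucket stats --uid=someuid
  ceph df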
> On 8 August 2016 at 16:45 Martin Palma wrote:
>
>
> Hi all,
>
> we are in the process of expanding our cluster and I would like to
> know if there are some best practices in doing so.
>
> Our current cluster is composed as follows:
> - 195 OSDs (14 Storage Nodes)
> - 3 Monitors
> - Tot
Hi,
I did a diff on the directories of all three OSDs: no difference,
so I don't know what's wrong.
The only thing I see that is different is a scrub file in the TEMP folder
(it is already a different PG than in my last mail):
-rw-r--r-- 1 ceph ceph 0 Aug 9 09:51
scrub\u6.107__head_0107__f
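For an inconsistency like this one, a sketch of how to narrow down which replica disagrees (assuming Jewel, and that 6.107, taken from the scrub temp file name, is the affected PG):

  # Show which PGs are currently flagged inconsistent
  ceph health detail | grep -i inconsistent

  # Dump per-shard scrub details (size, data_digest, omap_digest) for that PG
  rados list-inconsistent-obj 6.107 --format=json-pretty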
Hi Mark, thanks for following up. I'm now pretty convinced I have issues
with my network; it's not Ceph related. My cursory iperf tests between
pairs of hosts were looking fine, but with multiple clients I'm seeing
really high TCP retransmissions.
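A rough way to reproduce that kind of result (host name and stream count are just placeholders) is to run several parallel streams and watch the kernel's retransmission counters on both ends:

  # Several parallel streams from one client approximate the multi-client load
  iperf -c storage-node-1 -P 8 -t 30

  # System-wide TCP retransmission counters, before and after the test
  netstat -s | grep -i retrans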
On Mon, Aug 8, 2016 at 1:07 PM, Mark Nelson wrote:
Gregory,
I've been given a tip by one of the ceph user list members on tuning values,
data migration and cluster IO. I had issues twice already where my VMs would
simply lose IO and crash while the cluster was being optimised for the new
tunables.
The recommendations were to upgrade the
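Apart from any upgrade, the usual way to keep client IO alive while data migrates after a tunables change is to throttle backfill and recovery. A sketch with deliberately conservative values; note that injectargs settings do not survive an OSD restart unless they are also placed in ceph.conf:

  # Lower the concurrency and priority of recovery/backfill so client IO wins
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'

  # Watch progress while the data migration proceeds
  ceph -s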
On Mon, Aug 8, 2016 at 9:39 PM, Georgios Dimitrakakis
wrote:
> Dear David (and all),
>
> the data are considered very critical, hence all this effort to
> recover them.
>
> Although the cluster hasn't been fully stopped, all user actions have. I
> mean services are running but users are not a
Hello,
[re-added the list]
Also, try to leave a line break or paragraph between quoted and new text;
your mail looked like it was all written by me...
On Tue, 09 Aug 2016 11:00:27 +0300 Александр Пивушков wrote:
> Thank you for your response!
>
>
> >Tuesday, 9 August 2016, 5:11 +03:00 from Chr
On Tue, Aug 9, 2016 at 2:00 AM, Kenneth Waegeman
wrote:
> Hi,
>
> I did a diff on the directories of all three OSDs: no difference, so I
> don't know what's wrong.
omap (as implied by the omap_digest complaint) is stored in the OSD's
leveldb, not in the data directories, so you wouldn't expect a plain diff of
those directories to show the mismatch.
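If you do want to compare the omap contents of the replicas directly, one offline way is ceph-objectstore-tool. This is a sketch only: the OSD id, paths and object name are all illustrative, and the OSD in question has to be stopped first:

  # On each OSD host holding PG 6.107, with that OSD daemon stopped:
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
      --journal-path /var/lib/ceph/osd/ceph-12/journal \
      --pgid 6.107 '<object-name>' list-omap

Comparing the key lists returned by each replica should show which copy disagrees with the reported omap_digest.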
Hi Ceph users,
I am new to Ceph. I've succeeded in installing Ceph in 4 VMs using the Quick
Installation guide in the Ceph documentation.
I've also compiled
Ceph from source code, and built and installed it in a single VM.
What I want to do next is to run Ceph with multiple nodes in a cluster,
but only inside
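For running a whole multi-daemon cluster inside a single machine from a source build, the development helper script in the source tree is the usual shortcut. A sketch, assuming the build has finished and with daemon counts that are only examples:

  # From the source tree (src/ for autotools builds, build/ for cmake builds)
  cd src
  # -n creates a brand-new throwaway cluster, -d enables debug output
  MON=3 OSD=3 MDS=1 ./vstart.sh -n -d

  # The script writes a local ceph.conf, so the local ceph wrapper can query the cluster
  ./ceph -s

  # Tear it down again when done
  ./stop.sh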
> >> Hello dear community!
>> >> I'm new to Ceph and only recently took up the topic of building
>> >> clusters.
>> >> Therefore your opinion is very important to me.
>> >> It is necessary to create a cluster of 1.2 PB storage with very rapid
>> >> access to data. Earlier disks of "Intel® SS
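As a rough sizing sanity check (assuming the 1.2 PB is meant as usable capacity, the default 3x replication, and roughly 15% headroom below the near-full ratio):

  raw capacity needed  ≈  usable x replication / target fill
                       ≈  1.2 PB x 3 / 0.85
                       ≈  4.2 PB of raw disk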
> On 9 August 2016 at 16:36 Александр Пивушков wrote:
>
>
> > >> Hello dear community!
> >> >> I'm new to Ceph and only recently took up the topic of building
> >> >> clusters.
> >> >> Therefore your opinion is very important to me.
> >> >> It is necessary to create a cluster of 1.2
Tuesday, 9 August 2016, 17:43 +03:00 from Wido den Hollander:
>
>
>> On 9 August 2016 at 16:36 Александр Пивушков < p...@mail.ru > wrote:
>>
>>
>> > >> Hello dear community!
>> >> >> I'm new to Ceph and only recently took up the topic of building
>> >> >> clusters.
>> >> >> Therefore i
Hi Wido,
thanks for your advice.
Best,
Martin
On Tue, Aug 9, 2016 at 10:05 AM, Wido den Hollander wrote:
>
>> On 8 August 2016 at 16:45 Martin Palma wrote:
>>
>>
>> Hi all,
>>
>> we are in the process of expanding our cluster and I would like to
>> know if there are some best practices in
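One pattern that is often recommended for an expansion of this size is to bring the new OSDs in with zero CRUSH weight and raise them in small steps, waiting for the cluster to settle in between. A sketch, with the OSD id and the step size purely illustrative:

  # Have newly created OSDs join with no weight (set before deploying them)
  # ceph.conf: osd_crush_initial_weight = 0

  # Then ramp each new OSD up gradually, waiting for HEALTH_OK between steps
  ceph osd crush reweight osd.195 0.2
  ceph osd crush reweight osd.195 0.4
  # ... continue until the OSD carries the full weight for its disk size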
On 8/9/2016 10:43 AM, Wido den Hollander wrote:
On 9 August 2016 at 16:36 Александр Пивушков wrote:
> >> Hello dear community!
I'm new to Ceph and only recently took up the topic of building clusters.
Therefore your opinion is very important to me.
It is necessary to create a clus
Hello,
On Tue, 9 Aug 2016 14:15:59 -0400 Jeff Bailey wrote:
>
>
> On 8/9/2016 10:43 AM, Wido den Hollander wrote:
> >
> >> On 9 August 2016 at 16:36 Александр Пивушков wrote:
> >>
> >>
> >> > >> Hello dear community!
> >> I'm new to Ceph and only recently took up the topic of buil
On Wed, Aug 10, 2016 at 12:26 AM, agung Laksono wrote:
>
> Hi Ceph users,
>
> I am new to Ceph. I've succeeded in installing Ceph in 4 VMs using the Quick
> Installation guide in the Ceph documentation.
>
> I've also compiled
> Ceph from source code, and built and installed it in a single VM.
>
> What I wa
On Tue, Aug 9, 2016 at 7:39 AM, George Mihaiescu wrote:
> Look in the Cinder DB, in the volumes table, to find the UUID of the deleted
> volume.
You could also look through the logs at the time of the delete and I
suspect you should
be able to see how the rbd image was prefixed/named at the time of
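For context on what those names look like: the Cinder RBD driver calls its images volume-<uuid>, and a format-2 RBD image stores its data in RADOS objects named rbd_data.<block_name_prefix>.<object_number>. A sketch of where the prefix appears and how one might search for leftover objects, assuming the pool is called volumes and with the prefix value made up:

  # On an existing volume, rbd info shows the data-object prefix
  rbd info volumes/volume-<uuid>
  #   ...
  #   block_name_prefix: rbd_data.5f3a2ae8944a
  #   ...

  # Look for data objects in the pool that still carry a given prefix
  rados -p volumes ls | grep rbd_data.5f3a2ae8944a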
Christian,
I have to say that OpenNebula 5 doesn't need any additional hacks (ok,
just two lines of code to support rescheduling in case of original node
failure, and even this patch is scheduled for 5.2, to be added after my question
a couple of weeks ago; but it isn't about 'live') or an additi
I want to use Ceph only as user data storage.
The user program writes data to a folder that is mounted on Ceph.
Virtual machine images are not stored on Ceph.
Fibre Channel and 40GbE are used only for the rapid transmission of
information between the Ceph cluster and the virtual machine on oV
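If that folder is a CephFS mount, a minimal mount sketch (monitor addresses, user name and mount point are placeholders; the cluster also needs an MDS and a filesystem created first):

  # Kernel client
  mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/userdata \
      -o name=appuser,secretfile=/etc/ceph/appuser.secret

  # Or the FUSE client
  ceph-fuse -m mon1:6789 /mnt/userdata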
Hello!
Brad,
is that possible with the default logging, or is verbose logging needed?
I've managed to get the UUID of the deleted volume from OpenStack, but I
don't really know how to get the offsets and OSD maps, since "rbd info"
doesn't provide any information for that volume.
Is it possible to
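For the OSD-map part of the question: once an object name is known, mapping it to a PG and to the OSDs that would hold it does not require the image to still exist. A sketch, where the pool name and object name are only examples built from the rbd_data prefix discussed earlier, and the output shown is illustrative:

  ceph osd map volumes rbd_data.5f3a2ae8944a.0000000000000000
  # -> osdmap e1234 pool 'volumes' (4) object '...' -> pg 4.xxxxxxxx (4.d6)
  #    -> up ([12,45,101], p12) acting ([12,45,101], p12)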
Hello Vladimir,
On Wed, 10 Aug 2016 09:12:39 +0500 Дробышевский, Владимир wrote:
> Christian,
>
> I have to say that OpenNebula 5 doesn't need any additional hacks (ok,
> just two lines of code to support rescheduling in case of original node
> failure, and even this patch is scheduled for 5.
2016-08-10 9:30 GMT+05:00 Александр Пивушков :
> I want to use Ceph only as user data storage.
> The user program writes data to a folder that is mounted on Ceph.
> Virtual machine images are not stored on Ceph.
> Fibre Channel and 40GbE are used only for the rapid transmission of
> informatio
Hi Greg...
Thanks for replying; you seem omnipresent in all Ceph/CephFS issues!
Can you please confirm that, in Jewel, 'ceph pg repair' simply copies
the PG contents of the primary OSD to the others? And that this can lead to
data corruption if the problematic OSD is indeed the primary?
If in Jew
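Whatever the answer, it is worth checking which OSD is the primary for the PG before deciding whether a repair is safe; a sketch, with 6.107 purely as an example pgid:

  # The first OSD in the acting set is the primary
  ceph pg map 6.107
  # -> osdmap eN pg 6.107 (6.107) -> up [12,45,101] acting [12,45,101]

  # More detail, including peer and scrub information
  ceph pg 6.107 query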
On Wed, Aug 10, 2016 at 3:16 PM, Georgios Dimitrakakis
wrote:
>
> Hello!
>
> Brad,
>
> is that possible with the default logging, or is verbose logging needed?
>
> I've managed to get the UUID of the deleted volume from OpenStack but don't
> really know how to get the offsets and OSD maps since "r