Greetings, ceph-users :)
I’m pleased to share that Xiaoxi from the Intel Asia Pacific R&D Center
is our newest volunteer Geek on Duty!
If you’re not familiar with the Geek on Duty program, here are the
basics: members of our community take shifts on IRC and on the mailing
list to help new users
Thanks for the quick reply.
+Hugo Kuo+
(+886) 935004793
2013/9/10 Yehuda Sadeh
> by default .rgw.buckets holds the objects data.
>
> On Mon, Sep 9, 2013 at 8:39 PM, Kuo Hugo wrote:
> > Hi Folks,
> > I found that RadosGW created the following pools. The number of copies
> > is 2 by default. I'
Hi Folks,
I found that RadosGW created the following pools. The number of copies is 2
by default. I'd like to raise the replica count to 3 for better reliability. I
tried to find the definition/usage of each pool but had no luck.
Could someone provide information about the usage of each pool and
which
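A minimal sketch of the change being asked about, assuming the default pool
names (.rgw.buckets is confirmed above; the other .rgw.* pool names can vary
per setup):
rados lspools                             # list the pools radosgw created
ceph osd pool set .rgw.buckets size 3     # raise the replica count on the data pool; repeat for the other rgw pools
ceph osd pool get .rgw.buckets size       # verify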
There are a lot of numbers ceph status prints.
Is there any documentation on what they are?
I'm particularly curious about what appears to be the total data figure.
ceph status says I have 314TB, but by my own calculation I have 24TB.
It also says:
10615 GB used, 8005 GB / 18621 GB avail;
which I take to be 10TB used/
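A few commands that should help cross-check those totals; the per-pool
breakdown usually shows where the large "total data" figure comes from
(raw capacity vs. logical data):
ceph status      # the summary quoted above
rados df         # per-pool object counts and KB used
ceph df          # per-pool used/available breakdown, if your release provides it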
On 09/08/2013 01:14 AM, Da Chun Ng wrote:
I mapped an image to a system and used blockdev to make it read-only,
but it failed.
[root@ceph0 mnt]# blockdev --setro /dev/rbd2
[root@ceph0 mnt]# blockdev --getro /dev/rbd2
0
It's on Centos6.4 with kernel 3.10.6 .
Ceph 0.61.8 .
Any idea?
For reasons
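A quick way to check whether writes are actually refused after setting the
flag, using only stock tools and the device from the report above (the dd
write is destructive, so only run it against a scratch image):
blockdev --setro /dev/rbd2
blockdev --getro /dev/rbd2                                # expect 1 if the flag stuck
dd if=/dev/zero of=/dev/rbd2 bs=4k count=1 oflag=direct   # should fail with a read-only error if it did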
Indeed, that pool was created with the default pg_num of 8.
8 PGs * 2TB per OSD / 2 replicas ~ 8TB, which is about how far I got.
I bumped up the pg_num to 600 for that pool and nothing happened.
I bumped up the pgp_num to 600 for that pool and ceph started shifting
things around.
Can you explain the dif
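The difference being asked about, sketched with a placeholder pool name:
pg_num creates the new placement groups (a local split, no rebalance yet),
while pgp_num tells CRUSH to start placing data into them, which is why only
the second step moved data around.
ceph osd pool set <poolname> pg_num 600
ceph osd pool set <poolname> pgp_num 600
ceph osd pool get <poolname> pg_num       # verify both values
ceph osd pool get <poolname> pgp_num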
On 09/09/2013 04:57 AM, Andrey Korolyov wrote:
May I also suggest the same for export/import mechanism? Say, if image
was created by fallocate we may also want to leave holes upon upload
and vice-versa for export.
Import and export already omit runs of zeroes. They could detect
smaller runs (cu
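For anyone checking how sparse a source or exported file really is, a small
illustration with coreutils only (the file name is a placeholder):
truncate -s 10G image.raw            # a mostly-hole file, for illustration
du -h image.raw                      # blocks actually allocated
du -h --apparent-size image.raw      # logical size; a large gap means the file is sparse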
I don't see anything very bad.
Try renaming your racks from numbers to unique strings, for example change
rack 1 {
to
rack rack1 {
and so on.
On 09.09.2013, at 23:56, Gaylord Holder wrote:
> Thanks for your assistance.
>
> Crush map:
>
> # begin crush map
> tunable choose_local_tries 0
> tunable choose_
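The rename itself is usually done with the decompile/edit/recompile cycle; a
sketch with placeholder file names:
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# edit crush.txt: change "rack 1 {" to "rack rack1 {" and fix anything that references it
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new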
What do you mean by "directly from Rados"?
-Sam
On Wed, Sep 4, 2013 at 1:40 AM, Art M. wrote:
> Hello,
>
> As far as I know, radosgw calculates the MD5 of the uploaded file and
> compares it with the MD5 provided in the header.
>
> Is it possible to get the calculated MD5 of an uploaded file directly from Rados?
> We want
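For non-multipart uploads radosgw stores the MD5 it computed as the S3 ETag,
so one option (hedged; the bucket and key names are placeholders) is to read
it back over the S3 API rather than from RADOS directly:
s3cmd info s3://mybucket/myobject     # prints the MD5/ETag radosgw recorded
# any S3 client's HEAD request on the object returns the same value in the ETag header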
That's normal; each OSD listens on a few different ports for different reasons.
-Sam
On Mon, Sep 9, 2013 at 12:27 AM, Timofey Koolin wrote:
> I use ceph 0.67.2.
> When I start
> ceph-osd -i 0
> or
> ceph-osd -i 1
> it starts one process, but that process opens a few TCP ports; is that normal?
>
> netstat
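One way to see what each port is for: the OSD map lists the public, cluster
and heartbeat addresses per OSD. osd.0 here is just an example:
ceph osd dump | grep "^osd\.0 "     # public, cluster and heartbeat address:port for osd.0
netstat -nlp | grep ceph-osd        # the matching listening sockets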
You can't really disable the journal. It's used for failure recovery. It
should be fine to place your journal on the same ssd as the osd data
directory (though it does affect performance).
-Sam
On Wed, Sep 4, 2013 at 8:40 AM, Neo wrote:
>
>
>
> Original Message Subject: R
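A hedged ceph.conf sketch for a journal colocated on the same SSD as the data
directory; the size is a placeholder value, and the path is just the default
location shown for reference:
[osd]
    osd journal size = 5120
    # default journal path, for reference:
    # osd journal = /var/lib/ceph/osd/$cluster-$id/journal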
It looks like osd.4 may actually be the problem. Can you try removing
osd.4 and trying again?
-Sam
On Mon, Sep 2, 2013 at 8:01 AM, Mariusz Gronczewski
wrote:
> We've installed ceph on test cluster:
> 3x mon, 7xOSD on 2x10k RPM SAS
> Centos 6.4 ( 2.6.32-358.14.1.el6.x86_64 )
> ceph 0.67.2 (also
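The usual sequence for pulling osd.4 out, as suggested (a sketch; adjust the
service command to your init system):
ceph osd out 4
service ceph stop osd.4
ceph osd crush remove osd.4
ceph auth del osd.4
ceph osd rm 4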
The regions and zones can be used to distribute among different ceph clusters.
-Sam
On Mon, Sep 2, 2013 at 2:05 AM, 李学慧 wrote:
> Mr.
> Hi! I'm interested in the rgw geo-replication and disaster recovery
> feature.
> But can those 'regions and zones' be distributed among several different
> c
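A heavily hedged sketch of the dumpling-era federated setup; the JSON files
and zone name are placeholders, and the authoritative steps are in the radosgw
federated configuration docs:
radosgw-admin region set < region.json
radosgw-admin zone set --rgw-zone=us-east < zone-us-east.json
radosgw-admin regionmap update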
This is usually caused by having too few PGs. Each pool with a
significant amount of data needs at least around 100 PGs per OSD.
-Sam
On Mon, Sep 9, 2013 at 10:32 AM, Gaylord Holder wrote:
> I'm starting to load up my ceph cluster.
>
> I currently have 12 2TB drives (10 up and in, 2 defined but down
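The arithmetic behind that guideline, using the 12 OSDs from this thread and,
for illustration, 2 replicas and a placeholder pool name: (12 * 100) / 2 = 600
PGs, usually rounded up to a power of two.
ceph osd pool create mypool 1024 1024
# or for an existing pool (pgp_num must follow pg_num, as discussed elsewhere in this digest):
ceph osd pool set mypool pg_num 1024
ceph osd pool set mypool pgp_num 1024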
I was looking at someone's question on the list and started looking up some
documentation and found this page.
http://ceph.com/docs/next/install/os-recommendations/
Do you think you can provide an update for dumpling?
Best Regards
Yes. We'll have an update shortly.
On Mon, Sep 9, 2013 at 11:29 AM, Scottix wrote:
> I was looking at someone's question on the list and started looking up some
> documentation and found this page.
> http://ceph.com/docs/next/install/os-recommendations/
>
> Do you think you can provide an update f
Great, thanks.
On Mon, Sep 9, 2013 at 11:31 AM, John Wilkins wrote:
> Yes. We'll have an update shortly.
>
> On Mon, Sep 9, 2013 at 11:29 AM, Scottix wrote:
> > I was looking at someone's question on the list and started looking up
> some
> > documentation and found this page.
> > http://ceph.com
I'm starting to load up my ceph cluster.
I currently have 12 2TB drives (10 up and in, 2 defined but down and out).
rados df
says I have 8TB free, but I have 2 nearly full OSDs.
I don't understand how/why these two disks are filled while the others
are relatively empty.
How do I tell ceph t
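A hedged way to inspect and temporarily correct the imbalance (the OSD id and
weight are placeholders; raising pg_num, as discussed elsewhere in this
digest, is the longer-term fix):
ceph osd tree             # CRUSH weights and placement
ceph pg dump osds         # per-OSD usage and PG counts
ceph osd reweight 7 0.8   # temporarily push data off a nearly-full OSD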
On 09/09/2013 05:14 PM, Timofey Koolin wrote:
Does a lost journal mean that I lost all data from this OSD?
If you are not using btrfs, yes.
And must I have HA (RAID-1 or similar) journal storage if I use my data
without replication?
I'd not recommend that, but rather use Ceph's replication. Using
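For moving or recreating a journal on a healthy OSD (as opposed to recovering
a lost one), the usual sequence is flush, swap, mkjournal with the OSD
stopped; osd.0 is a placeholder:
service ceph stop osd.0
ceph-osd -i 0 --flush-journal    # write out anything still pending in the journal
# replace or relocate the journal device/file here
ceph-osd -i 0 --mkjournal        # initialize the new journal
service ceph start osd.0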
On Mon, 9 Sep 2013, Vladislav Gorbunov wrote:
> Has the ceph osd crush move syntax changed in 0.67?
> I have crushmap
> # id weight type name up/down reweight
> -1 10.11 root default
> -4 3.82 datacenter dc1
> -2 3.82 host cstore3
> 0 0.55
If you manually use wipefs to clear out the fs signatures after you zap,
does it work then?
I've opened http://tracker.ceph.com/issues/6258 as I think that is the
answer here, but if you could confirm that wipefs does in fact solve the
problem, that would be helpful!
Thanks-
sage
On Mon, 9 S
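A sketch of the suggested check, using the device reported later in this
thread (clearing signatures is destructive to whatever is left on that disk):
wipefs /dev/sdaf        # list filesystem/RAID/partition-table signatures still present after the zap
wipefs -a /dev/sdaf     # clear them, then retry ceph-deploy
blkid -p /dev/sdaf      # should now come back clean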
Does a lost journal mean that I lost all data from this OSD?
And must I have HA (RAID-1 or similar) journal storage if I use my data
without replication?
--
Blog: www.rekby.ru
May I also suggest the same for export/import mechanism? Say, if image
was created by fallocate we may also want to leave holes upon upload
and vice-versa for export.
On Mon, Sep 9, 2013 at 8:45 AM, Sage Weil wrote:
> On Sat, 7 Sep 2013, Oliver Daudey wrote:
>> Hey all,
>>
>> This topic has been
On Mon, Sep 9, 2013 at 3:29 PM, Tobias Prousa wrote:
> Hi Ceph,
>
> I recently realized that whenever I'm forced to restart MDS (i.e. stall or
> crash due to excessive memory consumption; btw. my MDS host has 32GB of RAM)
> especially while there are still clients having CephFS mounted, open files
for the experiment:
- blank disk sdae for data
blkid -p /dev/sdaf
/dev/sdaf: PTTYPE="gpt"
- and sda4 partition for journal
blkid -p /dev/sda4
/dev/sda4: PTTYPE="gpt" PART_ENTRY_SCHEME="gpt" PART_ENTRY_NAME="Linux
filesystem" PART_ENTRY_UUID="cdc46436-b6ed-40bb-adb4-63cf1c41cbe3"
PART_ENTRY_TY
I found a solution with these commands:
ceph osd crush unlink cstore1
ceph osd crush link cstore1 root=default datacenter=dc1
2013/9/9 Vladislav Gorbunov :
> Has the ceph osd crush move syntax changed in 0.67?
> I have crushmap
> # id weight type name up/down reweight
> -1 10.11
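For comparison, the move syntax the original question was about; in releases
where it accepts a CRUSH location spec, something like the following should be
equivalent to the unlink/link pair above (names taken from the thread):
ceph osd crush move cstore1 root=default datacenter=dc1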
Has the ceph osd crush move syntax changed in 0.67?
I have crushmap
# id weight type name up/down reweight
-1 10.11 root default
-4 3.82 datacenter dc1
-2 3.82 host cstore3
0 0.55    osd.0    up    1
1 0.55
Hi Ceph,
I recently realized that whenever I'm forced to restart the MDS (i.e. after a stall or crash due to excessive memory consumption; btw. my MDS host has 32GB of RAM), especially while there are still clients with CephFS mounted, open files tend to have their metadata corrupted. Those files, when cor
I use ceph 0.67.2.
When I start
ceph-osd -i 0
or
ceph-osd -i 1
it starts one process, but that process opens a few TCP ports; is that normal?
netstat -nlp | grep ceph
tcp    0    0 10.11.0.73:6789    0.0.0.0:*    LISTEN    1577/ceph-mon
tcp    0    0 10.11.0.73:6800