>> However, for geographically distributed datacentres, especially when the
>> network fluctuates, how do we handle that? From what I read, it seems
>> Ceph needs a big network pipe.
>Ceph isn't really suited for WAN-style distribution. Some users have
>high-enough and consistent-enough bandwidth (with low enough latency)
"perf reset" on the admin socket. I'm not sure what version it went in
to; you can check the release logs if it doesn't work on whatever you
have installed. :)
-Greg
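[For anyone looking for the exact invocation: something along these lines should
work against the OSD's admin socket. This is a sketch, the socket path assumes
the default location, and the argument handling of "perf reset" may differ by
version, as Greg notes.]
# run on the node hosting the OSD
ceph --admin-daemon /var/run/ceph/ceph-osd.73.asok perf reset all
# or, on recent-enough versions, the shorthand form:
ceph daemon osd.73 perf reset all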
On Mon, Jan 12, 2015 at 2:26 PM, Shain Miley wrote:
> Is there a way to 'reset' the osd perf counters?
>
> The numbers for osd 73
On 12 Jan 2015, at 17:08, Sage Weil <s...@newdream.net> wrote:
On Mon, 12 Jan 2015, Dan Van Der Ster wrote:
Moving forward, I think it would be good for Ceph to at least document
this behaviour, but better would be to also detect when
zone_reclaim_mode != 0 and warn the admin (like MongoDB
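[As an aside, the check being proposed here is easy to do by hand today; this
is standard Linux sysctl handling, nothing Ceph-specific:]
# anything non-zero enables the NUMA zone-reclaim behaviour discussed here
cat /proc/sys/vm/zone_reclaim_mode
# disable it at runtime, then persist it across reboots
sysctl -w vm.zone_reclaim_mode=0
echo 'vm.zone_reclaim_mode = 0' >> /etc/sysctl.conf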
Zheng, this looks like a kernel client issue to me, or else something
funny is going on with the cap flushing and the timestamps (note how
the reading client's ctime is set to an even second, while the mtime
is ~.63 seconds later and matches what the writing client sees). Any
ideas?
-Greg
On Mon,
Scenario:
OpenStack Juno RDO on CentOS 7.
Ceph version: Giant.
On CentOS 7 the old fastcgi module is no longer available, but there is
"mod_fcgid".
The apache VH is the following:
ServerName rdo-ctrl01
DocumentRoot /var/www/radosgw
RewriteEngine On
RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*) /s3gw.fcgi?page=$1&params
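[For completeness, the rewrite target above is normally a tiny FastCGI wrapper
script; a typical /var/www/radosgw/s3gw.fcgi looks roughly like the sketch
below. The client name client.radosgw.gateway is an assumption and must match
the radosgw client section in ceph.conf.]
#!/bin/sh
# mod_fcgid runs this script as the FastCGI application; it just execs radosgw
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway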
Hi,
I am just wondering if anyone has any thoughts on the questions below...I would
like to order some additional hardware ASAP...and the order that I place may
change depending on the feedback that I receive.
Thanks again,
Shain
Sent from my iPhone
> On Jan 9, 2015, at 2:45 PM, Shain Miley
unsubscribe
Regards,
-don-
--
On Mon, Jan 12, 2015 at 3:55 AM, Zeeshan Ali Shah wrote:
> Thanks Greg. No, I am more interested in a large-scale RADOS system, not the
> filesystem.
>
> However, for geographically distributed datacentres, especially when the
> network fluctuates, how do we handle that? From what I read, it seems
> Ceph needs a big network pipe.
Ceph isn'
Thanks Greg. No, I am more interested in a large-scale RADOS system, not the
filesystem.
However, for geographically distributed datacentres, especially when the
network fluctuates, how do we handle that? From what I read, it seems
Ceph needs a big network pipe.
/Zee
On Fri, Jan 9, 2015 at 7:15 PM, Gregory Farnum wrote:
> On Thu, Jan
All,
I wish to experiment with erasure-coded pools in Ceph. I've got some questions:
1. Is FIREFLY a reasonable release to be using to try EC pools? When I
look at various bits of development info, it appears that the work is complete
in FIREFLY, but I thought I'd ask. :) (See the command sketch after these
questions.)
2. It looks,
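[Regarding question 1: the basic commands for trying an EC pool on Firefly look
roughly like this. It is only a sketch; the profile name, the k/m values and the
PG count are arbitrary examples.]
# define an erasure-code profile with 2 data chunks and 1 coding chunk
ceph osd erasure-code-profile set myprofile k=2 m=1 ruleset-failure-domain=host
# create an erasure-coded pool with 128 PGs using that profile
ceph osd pool create ecpool 128 128 erasure myprofile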
Hi Gregory,
$ uname -a
Linux coreos2 3.17.7+ #2 SMP Tue Jan 6 08:22:04 UTC 2015 x86_64
Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz GenuineIntel GNU/Linux
Kernel Client, using `mount -t ceph ...`
core@coreos2 /var/run/systemd/system $ modinfo ceph
filename: /lib/modules/3.17.7+/kernel/fs/c
Hi,
[redirecting back to list]
> Oh, it could be that... can you include the output from 'ceph osd tree'?
> That's a more concise view that shows up/down, weight, and in/out.
>
> Thanks!
> sage
>
root@cepharm17:~# ceph osd tree
# id    weight  type name       up/down reweight
-1 0.52
Hi experts,
Could someone guide me on how to get the ceph-extras packages for CentOS 7?
I am trying to install Giant on CentOS 7 manually, but the latest extras
packages in the repository are only for CentOS 6.4.
BTW, is QEMU aware of Giant? Should I get a dedicated QEMU build for Giant?
Thanks in ad
Is there a way to 'reset' the osd perf counters?
The numbers for osd 73 through osd 83 look really high compared to the
rest of the numbers I see here.
I was wondering if I could clear the counters out, so that I have a
fresh set of data to work with.
root@cephmount1:/var/log/samba# ceph os
Hi everyone:
I used writeback mode for the cache pool:
ceph osd tier add sas ssd
ceph osd tier cache-mode ssd writeback
ceph osd tier set-overlay sas ssd
and I also set the dirty ratio and full ratio:
ceph osd pool set ssd cache_target_dirty_ratio .4
ceph osd pool s
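[Note that the dirty/full ratios are relative, so they only take effect once the
cache pool also has an absolute size target. A sketch of the usual companion
settings follows; the byte and object counts are placeholders, not
recommendations.]
# the ratios are applied relative to these absolute limits on the cache pool
ceph osd pool set ssd target_max_bytes 100000000000
ceph osd pool set ssd target_max_objects 1000000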
Hi everyone:
I plan to use an SSD journal to improve performance.
I have one 1.2 TB SSD disk per server.
What is the best practice for the SSD journal?
There are three choices for deploying the SSD journal:
1. all OSDs use the same SSD partition
ceph-deploy osd create ceph-node:sdb:/de
What versions of all the Ceph pieces are you using? (Kernel
client/ceph-fuse, MDS, etc)
Can you provide more details on exactly what the program is doing on
which nodes?
-Greg
On Fri, Jan 9, 2015 at 5:15 PM, Lorieri wrote:
> first 3 stat commands shows blocks and size changing, but not the times
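[For anyone following along, the comparison being described can be reproduced
with plain coreutils on both clients; the mount path and file name here are
placeholders.]
# run on both the writing and the reading client and compare the output
stat -c 'size=%s blocks=%b mtime=%y ctime=%z' /mnt/ceph/testfile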
I have a couple of questions about caching:
I have 5 VM-Hosts serving 20 VMs.
I have 1 Ceph pool where the VM-Disks of those 20 VMs reside as RBD Images.
1) Can I use multiple caching-tiers on the "same" data pool?
I would like to use a local SSD OSD on each VM-Host that can serve
as "applic
Thanks for the reply, I have had some more time to mess around with this now.
I understand that the best thing is to allow it to rebuild the entire OSD, but
since I am currently only using one replica and 2 of 3 machines had problems, I
ended up in a bad situation. With OSDs down on 2 machines and o
On Sun, 11 Jan 2015, Sahlstrom, Claes wrote:
> Hi,
>
> I have a problem starting a couple of OSDs because the journal is
> corrupt. Is there any way to replace the journal while keeping the rest
> of the OSD intact?
It is risky at best... I would not recommend it! The safe route is
For the first choice:
ceph-deploy osd create ceph-node:sdb:/dev/ssd ceph-node:sdc:/dev/ssd
I find that ceph-deploy will create the partitions automatically, and each
partition is 5 GB by default.
So the first choice and the second choice are almost the same.
Compared to a filesystem, I prefer a block device to get
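[The 5 GB default comes from the osd journal size option; if you want
ceph-deploy/ceph-disk to create bigger journal partitions, set it in ceph.conf
before creating the OSDs, roughly like this sketch. 10 GB is an arbitrary
example value.]
[osd]
osd journal size = 10240    ; size in MB used when the journal partition is created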
On Mon, 12 Jan 2015, Dan Van Der Ster wrote:
> Moving forward, I think it would be good for Ceph to at least document
> this behaviour, but better would be to also detect when
> zone_reclaim_mode != 0 and warn the admin (like MongoDB does). This
> line from the commit which disables it in the kernel
(apologies if you receive this more than once... apparently I cannot reply to a
1 year old message on the list).
Dear all,
I'd like to +10 this old proposal of Kyle's. Let me explain why...
A couple months ago we started testing a new use-case with radosgw --
this new user is writing millions of
Hi all,
I've been trying to add a few new OSDs, and as I manage everything with
Puppet, it was adding them manually via the CLI.
At one point it adds the OSD to the crush map using:
# ceph osd crush add 6 0.0 root=default
but I get
Error ENOENT: osd.6 does not exist. create it before updating the c
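[That ENOENT usually just means the OSD id has not been allocated in the osdmap
yet. A sketch of the usual ordering; the id printed by "ceph osd create" has to
be the one used in the crush command.]
# allocate the next free osd id in the osdmap (prints the id, e.g. 6)
ceph osd create
# the crush add then succeeds
ceph osd crush add 6 0.0 root=default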
(resending to list)
Hi Kyle,
I'd like to +10 this old proposal of yours. Let me explain why...
A couple months ago we started testing a new use-case with radosgw --
this new user is writing millions of small files and has been causing
us some headaches. Since starting these tests, the relevant OS
Hi,
I have a problem starting a couple of OSDs because the journal is
corrupt. Is there any way to replace the journal while keeping the rest of the
OSD intact?
-1> 2015-01-11 16:02:54.475138 7fb32df86900 -1 journal Unable to read past
sequence 8188178 but header indicates the journal
Hi,
the next MeetUp in Berlin takes place on January 26 at 18:00 CET.
Our host is Deutsche Telekom, they will hold a short presentation about
their OpenStack / CEPH based production system.
Please RSVP at http://www.meetup.com/Ceph-Berlin/events/218939774/
Regards
--
Robert Sander
Heinlein Sup