Hi guys,
I was creating new buckets and moving buckets around when one monitor
stopped responding.
The scenario is:
2 servers
2 MONs
21 OSDs on each server
I uploaded the stderr to:
http://ur1.ca/jxbrp
when I try to run:
/usr/bin/ceph-mon -i zrh-srv-m-cph01 --pid-file
/var/run/ceph/mon.zrh-srv-m-c
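A sketch of how the monitor could be run in the foreground with verbose logging to capture the full error (the debug levels are illustrative, not taken from the original report):

ceph-mon -i zrh-srv-m-cph01 -d --debug-mon 20 --debug-ms 1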
Hello,
I am working on a project that creates Ceph buckets and makes them
available to different applications. I would like to ask for your
suggestions on modeling the user relationship between apps and buckets.
One way we have identified as a potential solution is to create a user
to own
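As a sketch of that one-user-per-app model (names are hypothetical, and depending on the release, bucket link may also need the bucket ID):

radosgw-admin user create --uid=app1 --display-name="Application 1"
radosgw-admin bucket link --bucket=app1-bucket --uid=app1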
On Tue, Mar 17, 2015 at 10:01 AM, Xinze Chi wrote:
Hi all,
I found a PG on my test cluster that has been scrubbing for a long time
without finishing, and there is no useful scrubbing log. scrubs_active
is 1, so inc_scrubs_pending returns false. I think the reason is that
some sc
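A few commands that may help pin down the stuck PG and its primary OSD (the PG and OSD IDs below are placeholders, not from the original message):

ceph pg dump pgs | grep -i scrub
ceph pg 2.1f query
ceph tell osd.12 injectargs '--debug-osd 20 --debug-ms 1'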
Hi Ceph users,
I’m new to Ceph but I need to use Ceph as the storage for the cloud we are
building in-house.
Did anyone use RADOS Gateway in production? How mature is it in terms of
compatibility with S3 / Swift?
Can anyone share their experience with it?
Best Regards,
Jerry
- Original Message -
> From: "Ben"
> To: "Craig Lewis"
> Cc: "Yehuda Sadeh-Weinraub" , "ceph-users"
>
> Sent: Monday, March 16, 2015 3:38:42 PM
> Subject: Re: [ceph-users] Shadow files
>
> That's the thing. The peaks and troughs are in USERS' BUCKETS only.
> The actual cluster usage do
Hey Cephers,
Now that the first Ceph Day is under our belt, we're looking forward
to the next couple of events. The event in San Francisco was great,
with many solid community speakers...and we want to keep that trend
rolling. Take a look at the upcoming events, and if you have been
doing cool thi
Hi all,
I want to deploy Ceph and I see the doc here
(http://docs.ceph.com/docs/dumpling/start/quick-start-preflight/). I
wonder how I could install Ceph from the latest source code instead of
from prebuilt packages like `sudo apt-get install ceph-deploy`?
After I compile the Ceph source code I would
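A sketch of the source build for releases of that era (autotools-based; if the checkout ships an install-deps.sh script, it pulls in the build dependencies):

git clone --recursive https://github.com/ceph/ceph.git
cd ceph
./install-deps.sh
./autogen.sh
./configure
make -j4
sudo make install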
Situation: I need to use EC pools (for the economics/power/cooling) for the
storage of data, but my use case requires a block device. Ergo, I require a
cache tier. I have tried using a 3x replicated pool as a cache tier - the
throughput was poor, mostly due to latency, mostly due to device sat
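For reference, a replicated cache pool is placed in front of an EC pool roughly like this (pool names are placeholders, not the poster's actual configuration):

ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool
ceph osd pool set cachepool hit_set_type bloom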
Hello, fellow Ceph users,
I'm trying to utilize RBD read-ahead settings with 0.87.1 (documented as new in
0.86) to convince the Windows boot loader to boot a Windows RBD in a reasonable
amount of time using QEMU on Ubuntu 14.04.2. Below is the output of "ceph -w"
during the Windows VM boot proc
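For context, the read-ahead options in question live in the [client] section of ceph.conf and look roughly like this (values are illustrative, not a tested recommendation):

[client]
    rbd readahead trigger requests = 10
    rbd readahead max bytes = 4194304
    rbd readahead disable after bytes = 52428800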
Hi,
I was wondering if any Cephers were going to WHD this year?
Cheers,
Josef
Hi Don,
In a similar situation at the moment. Initially I thought EC pools would be OK
for our workload, and I still believe they are; however, the current cache
tiering code seems to hamper performance, as for every read and write the
whole object has to be demoted/promoted. This has a very severe perf
Hi All,
Does anyone have Ceph implemented with Infiniband for Cluster
and Public network?
Thanks in advance,
German Anders
We have a test cluster with IB. We have both networks over IPoIB on the
same IP subnet though (no cluster network configuration).
On Tue, Mar 17, 2015 at 12:02 PM, German Anders
wrote:
> Hi All,
>
> Does anyone have Ceph implemented with Infiniband for Cluster and
> Public network?
>
> Th
Hi Robert,
How are you? Thanks a lot for the quick response. I would like
to know if you could share some info on this. We have an existing Ceph
cluster in production, with the following:
3 x MON servers with 10GbE ADPT DP (one port on the PUB network)
4 x OSD servers with 10GbE ADPT DP
I'm trying to get better performance out of exporting RBD volumes via
tgt for iSCSI consumers...
By terrible, I mean I'm getting <5MB/sec reads and <50 IOPS. I'm pretty sure
neither RBD nor iSCSI themselves are the problem, as they individually perform well.
iSCSI to RAM-backed: >60MB/sec, >500IOPS
iSCSI to
I don't know what the disk performance in your OSD nodes is, but dual FDR
would probably be more than enough that I wouldn't worry about doing a
separate cluster network. The FDR card should have more capacity than your
PCI bus anyway.
Since Ceph does not use RDMA or native IB verbs yet, you won
Hi,
we are running Ceph v0.72.2 (Emperor) from the ceph emperor repo. In the
last week we had 2 random OSD crashes (one during cluster recovery
and one while in a healthy state) with the same symptom: the OSD process
crashes, logs the following trace in its log, and gets marked down and out. We
are in the proces
Most likely fixed in firefly.
-Sam
- Original Message -
From: "Kostis Fardelas"
To: "ceph-users"
Sent: Tuesday, March 17, 2015 12:30:43 PM
Subject: [ceph-users] Random OSD failures - FAILED assert
Hi,
we are running Ceph v.0.72.2 (emperor) from the ceph emperor repo. The
latest week we
Never mind. After digging through the history on Github it looks like the docs
are wrong. The code for the RBD read-ahead feature appears in 0.88, not 0.86,
which explains why I can't get it to work in 0.87.1.
Steve
From: Stephen Taylor
Sent: Tuesday, March 17, 2015 11:32 AM
To: 'ceph-us...@cep
Hi Jerry,
I currently work at Bloomberg and we have a very large Ceph
installation in production; we use the S3-compatible API for the RADOS
gateway. We are also re-architecting our new RGW and evaluating a different
Apache configuration for a little better performance. We only use replic
On 03/15/2015 08:42 PM, Mike Christie wrote:
> On 03/15/2015 07:54 PM, Mike Christie wrote:
>> On 03/09/2015 11:15 AM, Nick Fisk wrote:
>>> Hi Mike,
>>>
>>> I was using bs_aio with the krbd and still saw a small caching effect. I'm
>>> not sure if it was on the ESXi or tgt/krbd page cache side, but
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Mike Christie
> Sent: 17 March 2015 21:27
> To: Nick Fisk; 'Jake Young'
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] tgt and krbd
>
> On 03/15/2015 08:42 PM, Mike Christie w
Hi Robin,
Just a few things to try:
1. Increase the number of worker threads for tgt (it's a parameter of tgtd,
so modify however it's being started)
2. Disable librbd caching in ceph.conf (a minimal sketch follows below)
3. Do you see the same performance problems exporting a krbd as a block
device via tgt?
Nick
> -Origin
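For point 2, a minimal ceph.conf sketch, assuming the initiator host reads /etc/ceph/ceph.conf (the tgtd thread count in point 1 is set wherever your distro launches tgtd, so it isn't shown here):

[client]
    rbd cache = false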
Also, what are you getting locally on your filesystem? Looking at the specs
for an 840 Pro (~520 MB/s), and based on the numbers you stated earlier, you
aren't getting close to that, so there might be a problem at the server.
Once you start seeing better numbers locally, then retry your iSCSI
target
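For a local baseline, something like the following dd/fio run against the local filesystem should show whether the disk itself delivers the expected numbers (paths and sizes are placeholders, not from the original thread):

dd if=/dev/zero of=/srv/testfile bs=4M count=256 oflag=direct
fio --name=seqread --filename=/srv/testfile --rw=read --bs=4M --size=1G --direct=1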
On Tue, Mar 17, 2015 at 3:24 PM, Florent B wrote:
> Hi everyone,
>
> My problem is about ceph-fuse & Ansible, I first post here to see if
> someone have an idea of what happens.
>
> I configure a mount point like this:
>
> mount: name=/mnt/cephfs src='daemonize,id={{ cephfs_username
> }},mon_host=
Hello everybody,
I want to build a new architecture with Ceph as the storage backend.
For the moment I’ve got only one server with these specs:
1x RAID-1 SSD : OS + OSD journals
12x 4TB : OSD daemons.
I never reached the "clean state" on my cluster and I’m always in HEALTH_WARN
mode like this:
Your error looks to be the mountpoint or the options in your Ansible
playbook...
Are you sure it's running in order?
Can you mount a different directory using the commands?
On Mar 17, 2015 6:24 PM, "Florent B" wrote:
> Hi everyone,
>
> My problem is about ceph-fuse & Ansible, I first post here to
Hi,
just make sure you modify your CRUSH map so that the copies of each object are
dispatched on different OSDs rather than on different hosts.
Follow these steps:
ceph osd getcrushmap -o /tmp/cm
crushtool -i /tmp/cm -o /tmp/cm.txt
Edit the /tmp/cm.txt file. Locate the crush rule ID 0 at th
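As a sketch of the edit being described: it amounts to changing the failure domain in rule 0 from host to osd, e.g.

step chooseleaf firstn 0 type osd

and then recompiling and injecting the map (the /tmp/cm.new name is just a placeholder):

crushtool -c /tmp/cm.txt -o /tmp/cm.new
ceph osd setcrushmap -i /tmp/cm.new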
Hey,
I just updated your command with: crushtool -d /tmp/cm -o /tmp/cm.txt (instead
of -i)
It works fine, thank you so much.
Cheers,
k.
> On 18 March 2015 at 00:42, LOPEZ Jean-Charles wrote:
>
> Hi,
>
> just make sure you modify your CRUSH map so that each copy of the objects are
> just
Sorry to bump this one, but I have more hardware coming and I still cannot add
another OSD to my cluster.
Does anybody have any clues?
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Malcolm Haak
Sent: Friday, 13 March 2015 10:05 AM
To: Joao
None of this helps with trying to remove defunct shadow files, which
number in the tens of millions.
Is there a quick way to see which shadow files are safe to delete?
Remembering that there are MILLIONS of objects.
We have a 320TB cluster which is 272TB full. Of this, we should only
ac
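As a rough sketch only: listing candidate objects is straightforward, e.g. assuming the default data pool name below, but deciding which of them are safe to delete still means cross-referencing against the live bucket indexes, which is the expensive part with this many objects.

rados -p .rgw.buckets ls | grep shadow_ > /tmp/shadow-objects.txt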
Hi,
I’m planning a Ceph SSD cluster. I know that we won’t get the full performance
from the SSDs in this case, but SATA won’t cut it as backend storage and SAS is
the same price as SSD now.
The backend network will be a 10GbE active/passive, but will be used mainly for
MySQL, so we’re aiming fo
Hi everyone, I am ready to launch Ceph in production but there is one thing
that stays on my mind... If there were a blackout where all the Ceph nodes went
off, what would really happen to the filesystem? Would it get corrupted? Or
does Ceph have any kind of mechanism to survive something like that?
Hello,
On Wed, 18 Mar 2015 03:52:22 +0100 Josef Johansson wrote:
> Hi,
>
> I’m planning a Ceph SSD cluster, I know that we won’t get the full
> performance from the SSD in this case, but SATA won’t cut it as backend
> storage and SAS is the same price as SSD now.
>
Have you actually tested SAT
Used from the MASTER branch.
/etc/php5/cli/conf.d/rados.ini
rados
librados version (linked) => 0.69.0
librados version (compiled) => 0.69.0
--
Seems like the error is due to
rados_osd_op_timeout
or
rados_mon_op_timeout
On Mon, Mar 16, 2015 at 7:26 PM, Wido den Holl
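If it is one of those timeouts, they map to client-side config options that can be set in ceph.conf; a sketch, with illustrative values rather than recommendations:

[client]
    rados osd op timeout = 5
    rados mon op timeout = 5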