Hi, Wido den Hollander
>> Good day! Please help me solve the problem. This is the setup:
>> An ESXi server with 1 Gb NICs. It has a local 2 TB datastore and two iSCSI
>> storage targets connected to the second server.
>> The second server is a Supermicro: two 1 TB HDDs (LSI 9261-8i with battery), 8 CPU
Hi All,
I have a cluster of 3120 GB running, but I noticed that on my monitor
node the log is consuming a lot of space and is growing very rapidly.
In a single day it reaches 100 GB.
Please help me to stop or reduce the logging on the monitor.
Log file location on the monitor node - /var/
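For reference, a minimal sketch of the ceph.conf settings that usually drive monitor log growth (the values below are the upstream defaults; if yours were raised for debugging, lowering them back and restarting the monitor should shrink the log, and the old log can then be rotated or truncated):
[mon]
    # format is <file log level>/<in-memory log level>
    debug mon = 1/5
    debug paxos = 1/5
    debug ms = 0/5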
Hi,
On Sat, Dec 7, 2013 at 6:34 PM, Yehuda Sadeh wrote:
> Sounds like disabling the cache triggers some bug. I'll open a relevant
> ticket.
Any news on this ?
I have the same issue, but the cache only masks the problem. If you
restart radosgw, you'll get it again (once for each bucket).
Cheers,
Hi Cephers,
I reshuffled my crushmap without setting the noout flag on the OSDs, then 2 out of
three monitors went down. I then shut the OSDs down and got the monitors back
again, but when I wanted to set noout, the monitors went down
again, and every time I start them, 2 of them go down
On 01/22/2014 10:41 AM, Sherry Shahbazi wrote:
Hi Cephers,
I reshuffled my crushmap without setting the noout flag on the OSDs, then 2
out of three monitors went down. I then shut the OSDs down and got the
monitors back again, but when I wanted to set noout, the monitors went down
again
As I currently only have three nodes, I'm running in the non-recommended
configuration where I've got both VMs and ceph running on the same
hosts; I added the stanza below to the upstart jobs for ceph-mon,
ceph-mds and ceph-osd, and it much reduced the contention between ceph
and the VMs. I'm
Good day.
Some time ago I changed pg_num like this
http://www.sebastien-han.fr/blog/2013/03/12/ceph-change-pg-number-on-the-fly/:
ceph osd pool create one-new 500
rados cppool one one-new
ceph osd pool delete one
ceph osd pool rename one-new one
Before changes:
ceph osd lspools
0 data,1 metadata
Hi Aaron,
thanks for the very useful hint! With "ceph osd set noout" it works
without trouble. Typical beginner's mistake.
regards
Udo
Am 21.01.2014 20:45, schrieb Aaron Ten Clay:
> Udo,
>
> I think you might have better luck using "ceph osd set noout" before
> doing maintenance, rather than
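For anyone finding this thread later, a minimal sketch of that noout workflow around planned maintenance:
ceph osd set noout      # stop the cluster from marking stopped OSDs "out"
# ... stop the OSDs, do the maintenance, start the OSDs again ...
ceph osd unset noout    # let normal out-marking and rebalancing resume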
Hello everyone,
I found the plugin for Wireshark to decode Ceph messages in the source code, but I
wonder how to use it. And is there any plugin for tcpdump?
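For reference, a minimal capture sketch that can later be opened in Wireshark with the Ceph dissector (the interface name and the 6800-7300 OSD port range are assumptions for a default setup; 6789 is the monitor port):
tcpdump -i eth0 -s 0 -w ceph.pcap 'tcp port 6789 or tcp portrange 6800-7300'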
22.01.2014 15:01, Dmitry Lysenko wrote:
> Good day.
>
> Some time ago I changed pg_num like this
> http://www.sebastien-han.fr/blog/2013/03/12/ceph-change-pg-number-on-the-fly/:
>
> ceph osd pool create one-new 500
> rados cppool one one-new
> ceph osd pool delete one
> ceph osd pool rename one-new one
Greetings,
I am trying to use a Ceph cluster, with erasure coding, preferably with
CephFS, as a disk storage unit for Symantec NetBackup.
The problem currently is that Symantec NetBackup server packages do not come
for Ubuntu at all, and for CephFS I will need Ubuntu 12.04 (3.8 Linux
kernel).
Is the
On 01/22/2014 11:34 AM, Guang wrote:
Thanks Sage.
If we set debug_mon and debug_paxos to 20, the log file grows too
fast, so I set the log level to 10 and then: 1) ran the 'ceph osd set noin'
command, 2) grepped the log for the keyword 'noin'; attached is the monitor log.
Please help to check
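For reference, a minimal sketch of raising and lowering those levels at runtime without a restart (the monitor id "a" is an assumption):
ceph tell mon.a injectargs '--debug-mon 10 --debug-paxos 10'
# ... reproduce the problem, then drop back to the defaults ...
ceph tell mon.a injectargs '--debug-mon 1/5 --debug-paxos 1/5'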
Hi,
I have another idea for you:
In NetBackup (with proper licensing) you can configure something called
Cloud Storage Servers. Essentially a storage unit in Rackspace Cloudfiles
or Amazon S3. There's an option to change the URL of the storage server, so
you can replace it with the URL of your RadosGW
Hello,
I am wondering if there is any detailed documentation for obtaining I/O
statistics for a Ceph cluster.
The important metrics I'm looking for are: the number of operations, size of
operations, and latency of operations - by operations I'm referring to
read/write.
I've seen what look like
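For reference, a minimal sketch of where such numbers are commonly read from (the OSD id and admin socket path below are defaults and may differ on your setup):
# per-OSD counters: op counts, bytes read/written, op latencies
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
# per-OSD commit/apply latency summary
ceph osd perf
# cluster-wide client I/O rate
ceph -s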
Hi,
I'm using the aws-sdk-for-php classes for the Ceph RADOS gateway but I'm
getting an intermittent issue with uploading files.
I'm attempting to upload an array of objects to Ceph one by one using
the create_object() function. It appears to stop randomly when
attempting to do them all
On Wed, Jan 22, 2014 at 2:33 AM, Sylvain Munaut
wrote:
> Hi,
>
> On Sat, Dec 7, 2013 at 6:34 PM, Yehuda Sadeh wrote:
>> Sounds like disabling the cache triggers some bug. I'll open a relevant
>> ticket.
>
> Any news on this ?
>
> I have the same issue, but the cache only masks the problem. If yo
On Wed, Jan 22, 2014 at 8:05 AM, Graeme Lambert wrote:
> Hi,
>
> I'm using the aws-sdk-for-php classes for the Ceph RADOS gateway but I'm
> getting an intermittent issue with the uploading files.
>
> I'm attempting to upload an array of objects to Ceph one by one using the
> create_object() functi
Hi,
After reading the thread
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-June/002358.html
We have done this crush map to make things work.
srv1 and srv1ssd are the same physical server (the same for srv2, 3 and 4);
we split it in the crush map to make two parallel hierarchies.
This example is working,
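For readers following along, a minimal sketch of what such a parallel SSD hierarchy typically looks like in a decompiled crushmap (the bucket id, weights and ruleset number are assumptions, and the srv*ssd host buckets are declared elsewhere in the map):
root ssd {
        id -20                  # arbitrary unused negative id
        alg straw
        hash 0                  # rjenkins1
        item srv1ssd weight 1.000
        item srv2ssd weight 1.000
        item srv3ssd weight 1.000
        item srv4ssd weight 1.000
}
rule ssd {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
A pool is then pointed at it with something like 'ceph osd pool set <pool> crush_ruleset 3'.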
I actually need to see what happens before it starts looping. What
does 'ceph health' show?
Yehuda
On Wed, Jan 22, 2014 at 8:38 AM, Graeme Lambert wrote:
> Hi,
>
> Following discussions with people in the IRC I set debug_ms and this is what
> is being looped over and over when one of them is stu
Hi,
Following discussions with people on IRC I set debug_ms, and this is
what is being looped over and over when one of them is stuck:
http://pastebin.com/KVcpAeYT
Regarding the modules, apache version is 2.2.22-2precise.ceph and the
fastcgi mod version is 2.4.7~0910052141-2~bpo70+1.ceph.
All,
I have a situation on my RHEL 6.4 cluster that seems to be caused by
ceph-deploy changing the file permissions on /etc/ceph/ceph.conf after running
a command such as "ceph-deploy mon create node2 node3". The idea was to create
additional monitors for a healthy 3 node cluster that already
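As a quick check/workaround (a sketch only, assuming the file merely lost its world-readable bit), the permissions can be inspected and restored after each ceph-deploy run:
ls -l /etc/ceph/ceph.conf
chmod 644 /etc/ceph/ceph.conf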
Hi Yehuda,
With regards to the health status of the cluster, it isn't healthy, but I
haven't found any way of fixing the placement group errors. Looking at
ceph health detail, it's also showing blocked requests:
HEALTH_WARN 1 pgs down; 3 pgs incomplete; 3 pgs stuck inactive; 3 pgs
stuck
Hi all,
I want to double the number of pgs available for a pool, however I
want to reduce as much as possible the resulting I/O storm (I have
quite a bit of data in these pools).
What is the best way of doing this? Is it using pgp_num? For example:
increase pg_num from X to 2X
while pgp_num <
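For reference, a minimal sketch of that incremental approach (pool name, sizes and step are assumptions; wait for backfill to finish between steps):
# splitting the PGs is relatively cheap by itself
ceph osd pool set mypool pg_num 2048
# data only moves as pgp_num rises, so raise it in small steps
ceph osd pool set mypool pgp_num 1152
ceph -s                          # wait for the cluster to settle
ceph osd pool set mypool pgp_num 1280
# ... repeat until pgp_num == pg_num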
Hi Yehuda,
I'm wondering if part of the problem is disk I/O? Running "iotop -o" on
the three nodes I get 20MB/s to 100MB/s read on two of them but less
than 1MB/s read on another. All of them have two OSDs, one on each
disk, and all are running ceph-mon.
There shouldn't be anything differe
On Wed, Jan 22, 2014 at 8:55 AM, Graeme Lambert wrote:
> Hi Yehuda,
>
> With regards to the health status of the cluster, it isn't healthy but I
> haven't found any way of fixing the placement group errors. Looking at the
> ceph health detail it's also showing blocked requests too?
>
> HEALTH_WAR
All,
Having failed to successfully add new monitors using ceph-deploy, I tried the
documented manual approach.
The platform:
OS: RHEL 6.4
Ceph: Emperor
Ceph-deploy: 1.3.4-0
When following the procedure on an existing node in a working cluster that has
an existing single monitor configured a
On Wed, Jan 22, 2014 at 8:04 AM, Dan Ryder (daryder) wrote:
>
> Hello,
>
>
>
> I am wondering if there is any detailed documentation for obtaining I/O
> statistics for a Ceph cluster.
>
> The important metrics I’m looking for are: the number of operations, size of
> operations, and latency of op
On Wed, Jan 22, 2014 at 8:35 AM, zorg wrote:
> Hi,
> After reading the thread
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-June/002358.html
>
> We have done this crush map to make things work.
> srv1 and srv1ssd are the same physical server (same srv2,3,4)
> we split it in the crush t
On Wed, Jan 22, 2014 at 9:13 AM, Caius Howcroft
wrote:
> Hi all,
>
>
> I want to double the number of pgs available for a pool, however I
> want to reduce as much as possible the resulting I/O storm (I have
> quite a bit of data in these pools).
>
> What is the best way of doing this? Is it using
On Sun, Jan 19, 2014 at 9:00 PM, Sherry Shahbazi wrote:
> Hi all,
>
> I have three pools, which I want to mount Pool 0 with CephFS.
> When I try to set the layout by changing the pool to 0 (cephfs
> /mnt/oruafs/pool0/ set_layout -p 0), it would not be set to pool 0 while I
> am able to set it t
[Re-added the list.]
On Wed, Jan 22, 2014 at 4:15 AM, Tim Zhang wrote:
> Hi Gregory,
> In the src code, in ReplicatedPG.cc in the function do_pg_op(), I see these
> two flags:
> CEPH_OSD_FLAG_ACK and CEPH_OSD_FLAG_ONDISK,
> what's the difference between these two flags?
In some circumstances (w
On Tue, Jan 21, 2014 at 8:26 AM, Sylvain Munaut
wrote:
> Hi,
>
> I noticed in the documentation that the OSD should use 3 ports per OSD
> daemon running, and so when I set up the cluster, I originally opened
> enough ports to accommodate this (with a small margin so that restart
> could proceed even i
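For reference, a minimal firewall sketch for the default ports (6789 for the monitors, 6800 and up for OSDs; the exact upper bound of the OSD range depends on the release's defaults):
iptables -A INPUT -p tcp --dport 6789 -j ACCEPT
iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT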
On Wed, Jan 22, 2014 at 11:39 AM, wrote:
> All,
>
>
>
> I have a situation on my RHEL 6.4 cluster that seems to be caused by
> ceph-deploy changing the file permissions on /etc/ceph/ceph.conf after
> running a command such as “ceph-deploy mon create node2 node3”. The idea
> was to create additio
On Wed, Jan 22, 2014 at 12:47 PM, wrote:
> All,
>
>
>
> Having failed to successfully add new monitors using ceph-deploy, I tried
> the documented manual approach.
>
Would you be able to share why/how it didn't work? Maybe some logs or
output would be great so that we can continue to improve the
Can you advise on what the issues may be?
Yehuda Sadeh wrote:
>On Wed, Jan 22, 2014 at 8:55 AM, Graeme Lambert
>wrote:
>> Hi Yehuda,
>>
>> With regards to the health status of the cluster, it isn't healthy
>but I
>> haven't found any way of fixing the placement group errors. Looking
>at the
>>
On Wed, Jan 22, 2014 at 3:23 AM, Dmitry Lysenko wrote:
> Good day.
>
> Some time ago i change pg_num like this
> http://www.sebastien-han.fr/blog/2013/03/12/ceph-change-pg-number-on-the-fly/:
>
> ceph osd pool create one-new 500
> rados cppool one one-new
Unfortunately, this command is not copyi
Gregory Farnum writes:
>
> On Wed, Jan 22, 2014 at 9:13 AM, Caius Howcroft
> > I want to double the number of pgs available for a pool, however I
> > want to reduce as much as possible the resulting I/O storm (I have
> > quite a bit of data in these pools).
> >
> > What is the best way of doin
On Wed, Jan 22, 2014 at 3:50 PM, bf wrote:
>
>
> Gregory Farnum writes:
>
>>
>> On Wed, Jan 22, 2014 at 9:13 AM, Caius Howcroft
>> > I want to double the number of pgs available for a pool, however I
>> > want to reduce as much as possible the resulting I/O storm (I have
>> > quite a bit of data
I am facing a problem when requesting the Ceph radosgw using the Swift API.
The connection is getting closed after reading 512 KB from the stream. This
problem only occurs if I send a GET object request with a Range header.
Here is the request and response:
Request--->
GET http://rgw.mydomain.com/swift/v1/use
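For anyone trying to reproduce this, a minimal sketch of such a ranged Swift GET with curl (the token, container and object names are placeholders):
curl -i \
  -H "X-Auth-Token: <token>" \
  -H "Range: bytes=0-1048575" \
  http://rgw.mydomain.com/swift/v1/<container>/<object>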
Thanks, Wido den Hollander!
I migrated the journal to /dev/sdc1
and ran:
rados bench -p my_pool 300 write
Total time run: 300.356865
Total writes made: 7902
Write size: 4194304
Bandwidth (MB/sec): 105.235
Press any key to continue...
22.01.2014, 13:08, "Никитенко Виталий" wrote:
Can Ceph handle a configuration where a cluster node is not "always on", but
rather gets booted periodically to sync to the cluster, and is also
sometimes up full time as demand requires? I ask because I want to put an
OSD on each of my cluster nodes, but some cluster nodes only come up as
demand d