I am having issues running radosgw-agent to sync data between two
radosgw zones. As far as I can tell, both zones are running correctly.
My issue is when I run the radosgw-agent command:
radosgw-agent -v --src-access-key --src-secret-key
--dest-access-key --dest-secret-key
--src-zone us-m
>>> result = urlparse.urlparse('http://us-secondary.example.com:80')
>>> print result.hostname, result.port
us-secondary.example.com 80
that looks ok to me.
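For completeness, the same check under Python 3, where urlparse moved into urllib.parse (a minimal sketch):

```python
# Python 3 equivalent of the interactive check above
from urllib.parse import urlparse

result = urlparse('http://us-secondary.example.com:80')
print(result.hostname, result.port)  # us-secondary.example.com 80
```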
On 07/07/14 22:57, Josh Durgin wrote:
On 07/04/2014 08:36 AM, Peter wrote:
i am having issues running radosgw-agent to sync data between two
ra
": ".us-secondary.rgw.buckets"}
}
]
}
The us-master user exists on the us-master cluster gateway, and the us-secondary
user exists on the us-secondary cluster gateway. Both us-master and us-secondary
gateway users have the same access and secret key. Should us-master and
us-secondary users exist
typo, should read:
{ "name": "us-secondary",
"endpoints": [
"http:\/\/us-secondary.example.com:80\/"],
"log_meta": "true",
"log_data": "true"}
in region config bel
he examples.
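As a sanity check, the corrected zone entry above parses as valid JSON, and the escaped slashes decode back to a plain URL (a quick sketch, with the rest of the region map omitted):

```python
import json

# The corrected zone entry from above; \/ is a legal JSON escape for /
snippet = r'''{ "name": "us-secondary",
  "endpoints": ["http:\/\/us-secondary.example.com:80\/"],
  "log_meta": "true",
  "log_data": "true"}'''

zone = json.loads(snippet)
print(zone["endpoints"][0])  # http://us-secondary.example.com:80/
```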
Sorry, I'm out of ideas.
On Mon, Jul 21, 2014 at 7:13 AM, Peter <mailto:ptier...@tchpc.tcd.ie>> wrote:
hello again,
I couldn't find
'http://us-secondary.example.com/ <http://us-secondary.example.com/>
radosgw-admin regionmap update after making
region/zone changes. Bouncing the RGWs probably wouldn’t hurt either.
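For reference, the update sequence referred to above looks roughly like this (file and instance names here are illustrative, not from the original message):

```
# After editing region/zone definitions (illustrative names):
radosgw-admin region set --infile us.json --name client.radosgw.us-master-1
radosgw-admin zone set --rgw-zone=us-master --infile us-master.json --name client.radosgw.us-master-1
radosgw-admin regionmap update --name client.radosgw.us-master-1
# Then restart ("bounce") the gateways:
service radosgw restart
```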
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Peter
Sent: Tuesday, July 22, 2014 4:51 AM
To: Craig Lewis
Cc: Ceph Users
Subject: Re: [ceph-users
5 hadoop
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 1
max_new 1000
log_file /var/log/ceph/ceph-mon.narr9.log
--- end dump of recent events ---
Cheers,
Peter
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
nt to
end up losing data again (I had this happen once before).
Thanks,
Peter
On 2013-07-22 17:31, Joao Eduardo Luis wrote:
On 07/22/2013 12:33 PM, pe...@2force.nl wrote:
Hello,
After a reboot one of our monitors is unable to start. We did an
upgrade
from 0.61.4 to 0.61.5 last week withou
one with
this problem. Our cluster isn't working anymore (only 1 monitor left) so
I'd recommend anyone running 0.61.5 not to reboot or restart their
monitors until it is known what is going on :(
Thanks,
Peter
On 2013-07-22 17:31, Joao Eduardo Luis wrote:
On 07/22/2013 12:33 PM, pe..
t is going on :(
I just rebooted one mon server running 0.61.5 (had to!)
and it didn't crash (yet?). I guess I was lucky…
Cheers, Dan
Lucky you! :)
Thanks,
Peter
On 2013-07-22 17:31, Joao Eduardo Luis wrote:
On 07/22/2013 12:33 PM, pe...@2force.nl wrote:
Hello,
After a reboot one of o
/ref/wip-cuttlefish-osdmap/
precise main
apt-get update && apt-get install ceph
Not sure if that is the correct way (if not, let me know) but I thought
I would point this out for others having the same issue.
Thanks for all the effort! Everything is up and running again :)
Cheers
n, osd etc)? Just curious if there are still any
dependencies or if you still need to list those on clients for instance.
Cheers,
Peter
Version: ceph -v
ceph version 0.61.6 (59ddece17e36fef69ecf40e239aeffad33c9db35)
Note that using "ceph" command line utility on the nodes is wo
Hi Sage,
I just had a 0.61.6 monitor crash and one osd. The mon and all osds
restarted just fine after the update but it decided to crash after 15
minutes or so. See a snippet of the logfile below. I have sent you a link
to the logfiles and monitor store. It seems the bug hasn't been fully
fix
Any news on this? I'm not sure if you guys received the link to the log
and monitor files. One monitor and osd is still crashing with the error
below.
On 2013-07-24 09:57, pe...@2force.nl wrote:
Hi Sage,
I just had a 0.61.6 monitor crash and one osd. The mon and all osds
restarted just fine a
On 2013-07-25 11:52, Wido den Hollander wrote:
On 07/25/2013 11:46 AM, pe...@2force.nl wrote:
Any news on this? I'm not sure if you guys received the link to the
log
and monitor files. One monitor and osd is still crashing with the
error
below.
I think you are seeing this issue: http://track
ish and not on bobtail so it doesn't
apply. I'm not sure if it is clear what I am trying to say or that I'm
missing something here but I still see this issue either way :-)
I will check out the dev list also but perhaps someone from Inktank can
at least look at the files I pro
Peter,
We did take a look at your files (thanks a lot btw!), and as of last
night's patches (which are now on the cuttlefish branch), your store
worked just fine.
As Sage mentioned
B SSL libz TLS-SRP
Cheers,
Peter
Hi,
I am wondering about running virtual environment (VM -> Ceph) traffic
on the Ceph cluster network by plugging virtual hosts into this
network. Is this a good idea?
My thoughts are no, as VM -> Ceph traffic would be client traffic from
Ceph's perspective.
Just want the community's th
s are physicals of course.
--
Thomas Lemarchand
Cloud Solutions SAS - Responsable des systèmes d'information
On jeu., 2014-12-04 at 12:45 +0000, Peter wrote:
Hi,
i am wondering about running virtual environment traffic (VM -> Ceph)
traffic on the ceph cluster network by plugging virtual ho
Hello,
I am testing out federated gateways. I have created one gateway with one
region and one zone. The gateway appears to work. I am trying to test it
with s3cmd before I continue with more regions and zones.
I create a test gateway user:
radosgw-admin user create --uid=test --display-name
=20
2014-04-14 12:39:20.556130 7f133f7ee700 10 failed to authorize request
2014-04-14 12:39:20.556167 7f133f7ee700 2 req 2:0.009095:s3:GET
/:list_buckets:http status=403
2014-04-14 12:39:20.556396 7f133f7ee700 1 == req done
req=0x8ca280 http_status=403 ==
On 04/14/2014 10:47 AM, Pete
Fixed! Thank you for the reply. It was the backslashes in the secret
that were the issue. I generated a new gateway user with:
radosgw-admin user create --uid=test2 --display-name=test2
--access-key={key} --secret={secret_without_slashes} --name
client.radosgw.gateway
and that worked.
On 04/
I am currently testing this functionality. What is your issue?
On 04/17/2014 07:32 AM, maoqi1982 wrote:
Hi list
I followed http://ceph.com/docs/master/radosgw/federated-config/ to
test the multi-geography function, but it failed. Has anyone successfully
deployed federated gateways? Is the function in ce
status=404 ==
on the master zone side when the agent is syncing. I'll respond back when
I have more info.
On 04/18/2014 06:35 AM, maoqi1982 wrote:
Hi Peter
thanks for your reply.
We plan to have a test of the multi-site data replication, but we
encountered a problem.
All user and met
): rgw-us-secondary.example.com
2014-04-23T12:51:22.708 4888:ERROR:radosgw_agent.worker:failed to sync
object BUCKET/README.md: state is error
On 04/23/2014 12:43 PM, Peter wrote:
I have successfully created a one region, two zone federated setup
with separate clusters for each zone.
I a
I had a similar issue with authentication over S3 with fastcgi. It was
due to slashes (\ /) in the secret key. I see that your secret key has
slashes. Perhaps generate a new gateway user, specifying the keys using:
--access-key= and --secret=
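If you want to supply your own keys, something like this generates a secret of the usual shape that avoids the problematic characters (an illustrative sketch, not what radosgw-admin does internally):

```python
import base64
import os

def gen_secret(nbytes=30):
    """Generate a 40-char base64 secret, retrying until it has no slashes."""
    while True:
        secret = base64.b64encode(os.urandom(nbytes)).decode('ascii')
        if '/' not in secret:
            return secret

print(gen_secret())
```

The result can then be passed to radosgw-admin via --secret=.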
On 04/23/2014 02:30 PM, Srinivasa Rao Ragolu wrote:
Hi A
w.tld:80/auth 404 Not Found
Please provide your valuable inputs to resolve this issue.
Thanks,
Srinivas.
On Wed, Apr 23, 2014 at 7:13 PM, Srinivasa Rao Ragolu
mailto:srag...@mvista.com>> wrote:
even after creating new secret key, I am facing the issue. Could
you please let me know
Hello,
I am testing radosgw-agent for federation. I have a fully working two
cluster master/secondary zones.
When I try to run radosgw-agent, I receive the following error:
root@us-master:/etc/ceph# radosgw-agent -c inter-sync.conf
ERROR:root:Could not retrieve region map from destination
Tr
I also have this issue and there is another thread on it. radosgw-agent
will sync metadata but not data.
Do you have different gateway system user keys on master and slave zone?
On 04/24/2014 09:45 AM, lixuehui wrote:
hi, list:
I tried to sync between a master zone and a slave zone belonging to one regi
Do you have a typo? :
public_network = 192.168.0/24
should this read:
public_network = 192.168.0.0/24
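The difference is easy to demonstrate; Python's ipaddress module accepts the full form and rejects the truncated one (a quick illustration):

```python
import ipaddress

# The corrected form parses as a /24 network with 256 addresses
net = ipaddress.ip_network('192.168.0.0/24')
print(net.num_addresses)  # 256

# The truncated form from the original config is not a valid network
try:
    ipaddress.ip_network('192.168.0/24')
except ValueError:
    print('192.168.0/24 is rejected')
```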
On 04/24/2014 04:53 PM, Gandalf Corvotempesta wrote:
I'm trying to configure a small ceph cluster with both public and
cluster networks.
This is my conf:
[global]
public_network = 192.
This means that there is no limit to the size of buckets created by this
user; bucket quota is not enabled. You can enable it with:
radosgw-admin quota enable --uid shrinivas
radosgw-admin quota set --uid shrinivas
On 04/29/2014 09:12 AM, Srinivasa Rao Ragolu wrote:
Hi Yehuda,
I have config
I had similar issues. Do you get any message when running
/etc/init.d/radosgw start? You should be getting:
starting radosgw.gateway
or similar. If not, the radosgw process is not starting up. It doesn't
give much output to say what is wrong.
I found it was due to the actual hostname o
Perhaps group sets of hosts into racks in the crushmap. The crushmap doesn't
have to strictly map the real world.
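For illustration, a decompiled crushmap with hosts grouped under rack buckets looks roughly like this (names, ids, and weights are made up):

```
rack rack1 {
        id -10                  # illustrative bucket id
        alg straw
        hash 0                  # rjenkins1
        item host1 weight 1.000
        item host2 weight 1.000
}
root default {
        id -1
        alg straw
        hash 0
        item rack1 weight 2.000
}
```

A rule step such as `step chooseleaf firstn 0 type rack` then picks OSDs from distinct racks, regardless of whether those racks exist physically.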
On 05/13/2014 08:52 AM, Cao, Buddy wrote:
Hi,
I have a crushmap structure likes root->rack->host->osds. I designed
the rule below, since I used "chooseleaf...rack" in rule definition
ceph/mon/ceph-a
etc
Thanks for looking and if you need more info let me know.
Cheers,
Peter
ourse we could just wipe
and start over but I'd really like to know if we can fix this, as a good
exercise.
Cheers,
Peter
9 up, 9 in
2013-06-13 11:41:05.317064 7f7689ca4780 7 mon.a@0(leader).osd
e2809
update_from_paxos applying incremental 2810
Is this accurate? It's applying the *same* incremental over and over
again?
Yes, this is the current state:
Peter,
Can you point me to the full log of the mo
er
again?
Yes, this is the current state:
Peter,
Can you point me to the full log of the monitor caught in this
apparent loop?
-Joao
Hi Joao,
Here it is:
http://www.2force.nl/ceph/ceph-mon.a.log.gz
Thanks,
Peter
Hi Joao,
Did you happen to figure out what is going on? If you need more l
:35:01 UTC 2013
x86_64 x86_64 x86_64 GNU/Linux
The machine needs a reboot in order to mount cephfs again. We can
easily reproduce this if needed.
Thanks,
Peter
ue or our environment.
Cheers
Peter
On 10/06/2014 09:52, Christian Eichelmann wrote:
Hi again,
just found the ceph pg repair command :) Now both clusters are OK again.
Anyway, I'm really interested in the cause of the problem.
Regards,
Christian
Am 10.06.2014 10:28, schrieb Christian
as to where and how to look for the problem. The network
does not seem to have any problems. ZFS is not reporting any problems
with the disks and the OSDs are fine.
Thanks
Peter.
Log as follows
health HEALTH_ERR 50 pgs inconsistent; 121 scrub errors
monmap e8: 6 mons at
{broll=10.
ck unclean). I tried a
ton of recommended processes for getting them working and nothing could get
them to budge. I did `ceph osd crush tunables legacy` and all 320 pgs went
from stuck to active. This is definitely repeatable as I can deploy a new
cluster with vagrant/puppet and this hap
Hi all,
I have been testing cephfs with erasure coded pool and cache tier. I
have 3 mds running on the same physical server as 3 mons. The cluster is
in an OK state otherwise, rbd is working and all pgs are active+clean. I'm
running v0.87.2 giant on all nodes and Ubuntu 14.04.2.
The cluster was
00 5 mds.0.objecter 0
unacked, 0 uncommitted
0> 2015-05-29 09:28:23.108478 7f78cb4d9700 -1 mds/MDCache.cc: In
function 'virtual void C_IO_MDC_TruncateFinish::finish(int)' thread
7f78cb4d9700 time 2015-05-29 09:28:23.107027
mds/MDCache.cc: 5974: FAILED assert(r == 0 || r == -2)
O
CachePool and not the ECpool?
thanks
On 29/05/15 11:17, John Spray wrote:
On 29/05/2015 09:46, Peter Tiernan wrote:
-16> 2015-05-29 09:28:23.106541 7f78c53a9700 10 mds.0.objecter in
handle_osd_op_reply
-15> 2015-05-29 09:28:23.106543 7f78c53a9700 7 mds.0.objecter
handle_osd_op
ok, thanks. I wasn’t aware of this. Should this command fix everything,
or do I need to delete cephfs and the pools and start again:
> ceph osd tier cache-mode CachePool writeback
On 29/05/15 11:37, John Spray wrote:
On 29/05/2015 11:34, Peter Tiernan wrote:
ok, thats interesting. I
hi,
that appears to have worked. The mds are now stable and I can read and
write correctly.
thanks for the help and have a good day.
On 29/05/15 12:25, John Spray wrote:
On 29/05/2015 11:41, Peter Tiernan wrote:
ok, thanks. I wasn’t aware of this. Should this command fix
everything or is
Hi,
I have a use case for CephFS whereby files can be added but not modified
or deleted. Is this possible? Perhaps with CephFS layouts or cephx
capabilities.
thanks in advance
4.2 on
all machines.
Thanks,
--
Peter Hinman
the
ceph.conf file.
--
Peter Hinman
On 7/29/2015 12:49 PM, Robert LeBlanc wrote:
Did you use ceph-deploy or ceph-disk to create the OSDs? If so, it
should use udev to start the OSDs. In that case, a new host that has
the correct ceph.conf and os
ey from the output of ceph auth list,
and pasted it into /var/lib/ceph/bootstrap-osd/ceph.keyring, but that
has not resolved the error.
But it sounds like you are saying that even once I get this resolved, I
have no hope of recovering the data?
--
Peter Hinman
On 7/29/2015 1:57 PM, Gregory Farnum
The end goal is to recover the data. I don't need to re-implement the
cluster as it was - that just appeared to be the natural way to recover
the data.
What monitor data would be required to re-implement the cluster?
--
Peter Hinman
International Bridge / ParcelPool.com
On 7/29/2015
Thanks Robert -
Where would that monitor data (database) be found?
--
Peter Hinman
On 7/29/2015 3:39 PM, Robert LeBlanc wrote:
If you built new monitors, this will not work. You would have to
recover the monitor data (database) from at least
or two previous monitors. Any idea which one
I should select? The one with the highest manifest number? The most
recent time stamp?
What files should I be looking for in /etc/conf? Just the keyring and
rbdmap files? How important is it to use the same keyring file?
--
Peter Hinman
Inter
the future.
--
Peter Hinman
On 7/29/2015 5:15 PM, Robert LeBlanc wrote:
If you had multiple monitors, you should recover if possible more than
50% of them (they will need to form a quorum). If you can't, it is
messy, but you can manuall
After upgrading from Dumpling to Emperor on Ubuntu 12.04 I noticed the
admin sockets for each of my monitors were missing although the cluster
seemed to continue running fine. There wasn't anything under
/var/run/ceph. After restarting the service on each monitor node they
reappeared. Anyone?
On 11/13/2013 08:29 AM, Alfredo Deza wrote:
> Hi All,
>
> I'm happy to announce a new release of ceph-deploy, the easy
> deployment tool for Ceph.
>
> The only two (very important) changes made for this release are:
>
> * Automatic SSH key copying/generation for hosts that do not have keys
> set
On 11/14/2013 12:29 PM, Alfredo Deza wrote:
> On Thu, Nov 14, 2013 at 12:23 PM, Peter Matulis
> wrote:
>> On 11/13/2013 08:29 AM, Alfredo Deza wrote:
>>> * Automatic SSH key copying/generation for hosts that do not have keys
>>> setup when using `ceph-deploy ne
On 11/14/2013 05:08 AM, Dan Van Der Ster wrote:
> Hi,
> We’re trying the same, on SLC. We tried rbdmap but it seems to have some
> ubuntu-isms which cause errors.
> We also tried with rc.local, and you can map and mount easily, but at
> shutdown we’re seeing the still-mapped images blocking a mac
On 11/20/2013 05:33 AM, Laurent Barbe wrote:
> Hello,
>
> Yes, with ubuntu, the init script needs to be enabled with update-rc.d.
> If you still have this problem, could you try to add "_netdev" option in
> your fstab ?
>
> e.g. :
> UUID=2f6aca33-c957-452c-8534-7234dd1612c9 /mnt/testrbd xfs
> de
blueprint:
>
> https://blueprints.launchpad.net/ubuntu/+spec/servercloud-1311-ceph
>
> Let me know if you want anything else added/tracked.
The rbdmap init script should be upstartified.
Peter Matulis
On 01/14/2014 10:42 PM, Sage Weil wrote:
> This is a big release, with lots of infrastructure going in for
> firefly. The big items include a prototype standalone frontend for
> radosgw (which does not require apache or fastcgi), tracking for read
> activity on the osds (to inform tiering decision
On 01/28/2014 10:08 AM, Graeme Lambert wrote:
> Hi Karan,
>
> Surely this doesn't apply to all pools though? Several of the pools
> created for the RADOS gateway have very small levels of objects and if I
> set 256 PGs to all pools I would have warnings about the ratio of
> objects to pgs.
Withi
On 01/28/2014 09:46 PM, McNamara, Bradley wrote:
> I finally have my first test cluster up and running. No data on it,
> yet. The config is: three mons, and three OSDS servers. Each OSDS
> server has eight 4TB SAS drives and two SSD journal drives.
>
>
>
> The cluster is healthy, so I start
On 02/07/2014 03:11 AM, Alexandre DERUMIER wrote:
>>> This page reads "If you set rbd_cache=true, you must set cache=writeback or
>>> risk data loss." ...
> if you enable writeback, the guest sends flush requests. If the host is
> crashing, you'll lose data but it won't corrupt the guest filesystem.
On 02/27/2014 09:42 AM, Michael wrote:
> Thanks Tim, I'll give the raring packages a try.
> Found a tracker for Saucy packages, looks like the person they were
> assigned to hasn't checked in for a fair while so they might have just
> been overlooked http://tracker.ceph.com/issues/6726.
Packages
*roughly* what (OSD and MON) resources are required for a
single specifically-sized PG (in idle and in recovery modes).
peter
client.radosgw.gateway
Then I set up s3cmd with the access and secret key, and still the same,
'Access Denied'.
On 04/12/2014 09:35 PM, Craig Lewis wrote:
On 4/11/14 02:36 , Peter wrote:
Hello,
I am testing out federated gateways. I have created one gateway with
one region and one zone. T
48576 in ceph.conf
The OSDs are set up with dedicated journal disks - 3 OSDs share one
journal device.
Any advice on what I'm missing, or where I should dig deeper?
Thanks,
peter.
On 16.09.15 16:41, Peter Sabaini wrote:
> Hi all,
>
> I'm having trouble adding OSDs to a storage node; I've got
> about 28 OSDs running, but adding more fails.
So, it seems the requisite knob was sysctl fs.aio-max-nr
By defaul
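For anyone hitting the same wall, the knob can be checked and raised like so (the value is illustrative; size it to your OSD count):

```
sysctl fs.aio-max-nr                                  # inspect the current limit
sysctl -w fs.aio-max-nr=1048576                       # raise it at runtime (illustrative value)
echo 'fs.aio-max-nr = 1048576' >> /etc/sysctl.conf    # persist across reboots
```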
uration change -;
I'm sorry, I don't quite understand what you mean. Could you
elaborate? Are there specific risks associated with a high setting
of fs.aio-max-nr?
FWIW, I've done some load testing (using rados bench and rados
load-gen) -- anything I should watch out for in y
Hi list,
I have a 3 node ceph cluster with a total of 9 OSDs (2, 3 and 4 with
different size drives). I changed the layout (failure domain from per osd
to per host, and changed min_size) and I now have a few pgs stuck in peering
or remapped+peering for a couple of days now.
The hosts are under powere
"ceph -s"? Are your new crush rules actually
> satisfiable? Is your cluster filling up?
> -Greg
>
>
> On Saturday, November 14, 2015, Peter Theobald
> wrote:
>
>> Hi list,
>>
>> I have a 3 node ceph cluster with a total of 9 ods (2,3 and 4 with
>> diff
ry\/Peering",
"enter_time": "2015-11-14 11:25:20.451882",
"past_intervals": [
{
"first": 116838,
"last": 120813,
"maybe_went_rw": 1,
&q
188 active+clean
62 peering
4 remapped+peering
2 active+clean+scrubbing+deep+repair
Pete
On 15 November 2015 at 18:04, Peter Theobald wrote:
> I still have the pgs stuck peering. I ran ceph pg n.nn query on a few of
> the pgs that ar
osd54 :6808 socket closed (con state OPEN)
[685906.690445] libceph: osd74 :6824 socket closed (con state OPEN)
Is this a symptom of something?
Thanks in advance,
Peter
Hey Nick,
Thanks for taking the time to answer my questions. Some in-line comments.
On Tue, May 3, 2016 at 10:51 AM, Nick Fisk wrote:
> Hi Peter,
>
>
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Pete
Thank you, I will attempt to play around with these settings and see if I
can achieve better read performance.
Appreciate your insights.
Peter
On Tue, May 3, 2016 at 3:00 PM, Nick Fisk wrote:
>
>
> > -Original Message-
> > From: Peter Kerdisle [mailto:peter.
any limits. There is
one thing that might be a problem and that is that one of the cache node
has a bonded interface and no access to the cluster network, and the other
cache node has a public and cluster interface.
Could anybody give me some more steps I can take to further discover where
this bot
This is why the
> user-configurable promotion throttles were added in jewel.
Are these already in the docs somewhere?
>
> 3) The cache tier to fill up quickly when empty but change slowly once
> it's full (ie limiting promotions and evictions). No real way to do
> this yet.
>
cluster?
Thanks,
Peter
On Fri, May 6, 2016 at 9:58 AM, Peter Kerdisle
wrote:
> Hey Mark,
>
> Sorry I missed your message as I'm only subscribed to daily digests.
>
>
>> Date: Tue, 3 May 2016 09:05:02 -0500
>> From: Mark Nelson
>> To: ceph-users@lists.ceph.co
this setting. Is there
another way to change these settings?
On Sun, May 8, 2016 at 2:37 PM, Peter Kerdisle
wrote:
> Hey guys,
>
> I noticed the merge request that fixes the switch around here
> https://github.com/ceph/ceph/pull/8912
>
> I had two questions:
>
>
>-
.ceph.com] On Behalf Of
> > Peter Kerdisle
> > Sent: 10 May 2016 14:37
> > Cc: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] Erasure pool performance expectations
> >
> > To answer my own question it seems that you can change setting
m the cluster to see what happens then.
Thanks for all your help so far!
On Wed, May 11, 2016 at 9:07 AM, Nick Fisk wrote:
> Hi Peter, yes just restart the OSD for the setting to take effect.
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of
't seem to be the case. When I set it to 2MB I would expect a
node with 10 OSDs to do a max of 20MB/s during promotions. Is this math
correct?
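For what it's worth, the arithmetic in the question is just a per-OSD throttle times the OSD count (a sketch; whether the limit actually applies per OSD or per node is the open question):

```python
# If the promotion throttle applies per OSD, a node's aggregate
# promotion bandwidth scales with its OSD count.
throttle_mb_per_s = 2
osds_per_node = 10
node_max = throttle_mb_per_s * osds_per_node
print(node_max)  # 20 (MB/s per node)
```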
Thanks,
Peter
On Tue, May 10, 2016 at 3:48 PM, Nick Fisk wrote:
>
>
> > -Original Message-
> > From: ceph-users [mailto:ceph
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/
On Mon, May 16, 2016 at 11:14 AM Nick Fisk wrote:
> > -Original Message-
> > From: Peter Kerdisle [mailto:peter.kerdi...@gmail.com]
> > Sent: 15 May 2016 08:04
> > To: Nick Fisk
> > Cc: ceph-use
On Mon, May 16, 2016 at 11:58 AM, Nick Fisk wrote:
> > -Original Message-
> > From: Peter Kerdisle [mailto:peter.kerdi...@gmail.com]
> > Sent: 16 May 2016 10:39
> > To: n...@fisk.me.uk
> > Cc: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] E
Thanks yet again Nick for the help and explanations. I will experiment some
more and see if I can get the slow requests further down and increase the
overall performance.
On Mon, May 16, 2016 at 12:20 PM, Nick Fisk wrote:
>
>
> > -Original Message-
> > Fr
On Mon, May 16, 2016 at 12:20 PM, Nick Fisk wrote:
>
>
> > -Original Message-
> > From: Peter Kerdisle [mailto:peter.kerdi...@gmail.com]
> > Sent: 16 May 2016 11:04
> > To: Nick Fisk
> > Cc: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-us
ot being able to fix this and I
simply don't know what else I can do at this point.
If anybody has anything I haven't tried before please let me know.
Peter
On Thu, May 5, 2016 at 10:30 AM, Peter Kerdisle
wrote:
> Hey guys,
>
> I'm running into an issue with my cluster du
using the pool in read-forward now so there should be almost
no promotion from EC to the SSD pool. I will see what options I have for
adding some SSD journals to the OSD nodes to help speed things along.
Thanks, and apologies again for missing your earlier replies.
Peter
On Tue, May 24, 2016 at 4:
tions that indexless buckets cannot be replicated. Is that
accurate? Does anyone have more details about this?
Thanks
peter.
r A is trying to connect to server B from
its cluster IP to the client IP. Could this be the root cause? And if so,
how can I prevent that from happening?
Thanks,
Peter
If you are wanting to run VMs, OSD, and Monitors all on the same
hardware in a lab environment, it sounds like Proxmox might simplify
things for you.
Peter
On 8/18/2016 9:57 AM, Gaurav Goyal wrote:
Hello Mart,
My Apologies for that!
We are a couple of office colleagues using the common gmail
below).
Cheers, Peter
-
root@gurke:~# ceph-deploy -h
Traceback (most recent call last):
File "/usr/bin/ceph-deploy", line 12, in
main()
File "/usr/lib/pymodules/python2.7/ceph_deploy/cli.py", line
Hi Kurt,
thanks for the hint. It is working now. :-)
I agree that it would be good to fix the dependencies in the package.
Cheers, Peter
On 05/15/2013 11:36 AM, Kurt Bauer wrote:
Hi,
the missing package is 'python-setuptools', but the dependencies in the
deb package should be fi