[ceph-users] Deploy a Ceph cluster to play around with

2013-09-16 Thread Guang
.mobstor.gq1.yahoo.com][INFO ] /dev/sda : [repl101.mobstor.gq1.yahoo.com][INFO ] /dev/sda1 other, ext4, mounted on /boot [repl101.mobstor.gq1.yahoo.com][INFO ] /dev/sda2 other, LVM2_member Thanks, Guang

[ceph-users] Ceph / RadosGW deployment questions

2013-09-24 Thread Guang
_thread_pool_size": "100", Is this expected? 4. cephx authentication. After reading through the cephx introduction, I got the feeling that cephx is for client to cluster authentication, so that each librados user will need to create a new key. However, this page http://ceph.com/docs/master/rados/operations/authentication/#enabling-cephx got me confused in terms of why should we create keys for mon and osd? And how does that fit into the authentication diagram? BTW, I found the keyrings under /var/lib/cecph/{role}/ for each roles, are they being used when talk to other roles? Thanks, Guang ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] Ceph deployment issue in physical hosts

2013-09-25 Thread Guang
et.gateway_bootstrap.HostNotFound: web2 Has anyone come across the same issue? It looks like I may have mis-configured the network environment? Thanks, Guang

Re: [ceph-users] Ceph deployment issue in physical hosts

2013-09-25 Thread Guang
actually used ceph-deploy to install ceph onto the web2 remote host… Thanks, Guang Date: Wed, 25 Sep 2013 10:29:14 +0200 From: Wolfgang Hennerbichler To: Subject: Re: [ceph-users] Ceph deployment issue in physical hosts Message-ID: <52429eda.8070...@risc-software.at> Content-Type: text

Re: [ceph-users] Ceph deployment issue in physical hosts

2013-09-25 Thread Guang
her investigate. Thanks all for the help! Guang On Sep 25, 2013, at 8:38 PM, Alfredo Deza wrote: > On Wed, Sep 25, 2013 at 5:08 AM, Guang wrote: >> Thanks Wolfgang. >> >> -bash-4.1$ ping web2 >> PING web2 (10.193.244.209) 56(84) bytes of data. >> 64 bytes from web2 (

[ceph-users] ceph-deploy issues on RHEL6.4

2013-09-27 Thread Guang
Hi ceph-users, I recently deployed a Ceph cluster with the *ceph-deploy* utility on RHEL 6.4, and along the way I came across a couple of issues / questions which I would like to ask for your help with. 1. ceph-deploy does not help to install dependencies (snappy leveldb gdisk python-argparse gper
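A hedged sketch of pre-installing the dependencies named above by hand before running ceph-deploy install (only the packages spelled out in the message are listed; the truncated one is left out):

    # RHEL 6.4: install the missing runtime dependencies up front
    sudo yum install -y snappy leveldb gdisk python-argparse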

[ceph-users] Ceph monitoring / stats and troubleshooting tools

2013-10-08 Thread Guang
Hi ceph-users, After walking through the operations document, I still have several questions in terms of operation / monitoring for Ceph which need your help. Thanks! 1. Does Ceph provide a built-in monitoring mechanism for Rados and RadosGW? Taking Rados for example, is it possible to monitor the

[ceph-users] Expanding ceph cluster by adding more OSDs

2013-10-08 Thread Guang
optimal any more and there is no chance to correct it? Thanks, Guang

Re: [ceph-users] Ceph monitoring / stats and troubleshooting tools

2013-10-09 Thread Guang
For the second issue, I got the answer from within: http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/#finding-an-object-location. Thanks, Guang On Oct 8, 2013, at 8:43 PM, Guang wrote: > Hi ceph-users, > After walking through the operations document, I still have several que
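The page referenced above boils down to the ceph osd map command; a minimal example (pool and object names are placeholders):

    # show which PG an object maps to and which OSDs currently serve that PG
    ceph osd map .rgw.buckets my-object-name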

[ceph-users] Ceph stats and monitoring

2013-10-09 Thread Guang
Hi, Can someone share your experience with monitoring the Ceph cluster? How is the work mentioned here going: http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/ceph_stats_and_monitoring_tools Thanks, Guang

Re: [ceph-users] Expanding ceph cluster by adding more OSDs

2013-10-09 Thread Guang
Thanks Mike. Is there any documentation for that? Thanks, Guang On Oct 9, 2013, at 9:58 PM, Mike Lowe wrote: > You can add PGs, the process is called splitting. I don't think PG merging, > the reduction in the number of PGs, is ready yet. > > On Oct 8, 2013, at 11:58

Re: [ceph-users] Expanding ceph cluster by adding more OSDs

2013-10-09 Thread Guang
1.1PB to 1.2PB or move to 2PB directly? Thanks, Guang On Oct 10, 2013, at 11:10 AM, Michael Lowe wrote: > There used to be, can't find it right now. Something like 'ceph osd set > pg_num ' then 'ceph osd set pgp_num ' to actually move your data > into the new pg
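For reference, the pool-level form of the commands quoted above, as a hedged sketch (pool name and counts are examples only); raising pgp_num is what actually rebalances data into the new PGs:

    # split PGs for one pool (values are examples)
    ceph osd pool set .rgw.buckets pg_num 4096
    # then raise pgp_num so placement actually uses the new PGs (this triggers data movement)
    ceph osd pool set .rgw.buckets pgp_num 4096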

Re: [ceph-users] Ceph / RadosGW deployment questions

2013-10-10 Thread Guang
of httpd processes? Besides, it looks like there is a documentation issue on this page: http://ceph.com/docs/master/install/rpm/. It recommends using FastCgiExternalServer, but does not mention that the radosgw application should be launched manually (maybe I am just new to fastcgi). Than

[ceph-users] ceph-deploy zap disk failure

2013-10-15 Thread Guang
Hi ceph-users, I am trying with the new ceph-deploy utility on RHEL6.4 and I came across a new issue: -bash-4.1$ ceph-deploy --version 1.2.7 -bash-4.1$ ceph-deploy disk zap server:/dev/sdb [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy disk zap server:/dev/sdb [ceph_deploy.osd][

Re: [ceph-users] ceph-deploy zap disk failure

2013-10-15 Thread Guang
-bash-4.1$ which sgdisk /usr/sbin/sgdisk Which path does ceph-deploy use? Thanks, Guang On Oct 15, 2013, at 11:15 PM, Alfredo Deza wrote: > On Tue, Oct 15, 2013 at 10:52 AM, Guang wrote: >> Hi ceph-users, >> I am trying with the new ceph-deploy utility on RHEL6.4 and I came
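One way to check is to compare the PATH a non-interactive ssh session gets on the target host with the interactive one; the symlink below is only a possible workaround under that assumption, not the official fix (hostname is an example):

    # see what a non-interactive remote shell (what ceph-deploy uses) finds
    ssh server 'echo $PATH; which sgdisk'
    # if /usr/sbin is missing from that PATH, a crude workaround is a symlink
    sudo ln -s /usr/sbin/sgdisk /usr/bin/sgdisk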

[ceph-users] Adding a new OSD crash the monitors (assertion failure)

2013-10-30 Thread Guang
ead.so.0() [0x3208a07851] 14: (clone()+0x6d) [0x32086e890d] NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. Has anyone else come across the same issue? Or am I missing anything when adding a new OSD? Thanks, Guang

Re: [ceph-users] Adding a new OSD crash the monitors (assertion failure)

2013-10-30 Thread Guang
I just found the trick... When I use the default CRUSH map, which uses the straw bucket type, things are good. However, for the error I posted below, it is using the tree bucket type. Is that related? Thanks, Guang On Oct 30, 2013, at 6:52 PM, Guang wrote: > Hi all, > Today I tried to add a new OS

Re: [ceph-users] Expanding ceph cluster by adding more OSDs

2013-11-02 Thread Guang
very much. Thanks, Guang Date: Thu, 10 Oct 2013 05:15:27 -0700 From: Kyle Bader To: "ceph-users@lists.ceph.com" Subject: Re: [ceph-users] Expanding ceph cluster by adding more OSDs Message-ID: Content-Type: text/plain; charset="utf-8" I've contracted and expand

[ceph-users] Ceph cluster performance degrade (radosgw) after running some time

2013-12-30 Thread Guang
Hi ceph-users and ceph-devel, Merry Christmas and Happy New Year! We have a ceph cluster with radosgw, our customer is using S3 API to access the cluster. The basic information of the cluster is: bash-4.1$ ceph -s cluster b9cb3ea9-e1de-48b4-9e86-6921e2c537d2 health HEALTH_ERR 1 pgs inconsis

[ceph-users] Fwd: [rgw - Bug #7073] (New) "rgw gc max objs" should have a prime number as default value

2014-01-01 Thread Guang
Hi ceph-users, After reading through the GC related code, I am thinking of using a much larger value for "rgw gc max objs" (like 997), and I don't see any side effect if we increase this value. Did I miss anything? Thanks, Guang Begin forwarded message: > From: redm..
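If it helps, a sketch of what such a change would look like in ceph.conf; the section name is just an example of where the rgw options might live in a given setup:

    # ceph.conf fragment (section name is an example)
    [client.radosgw.gateway]
        rgw gc max objs = 997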

Re: [ceph-users] Ceph cluster is unreachable because of authentication failure

2014-01-14 Thread Guang
;fsid": "b9cb3ea9-e1de-48b4-9e86-6921e2c537d2", "modified": "0.00", "created": "0.00", "mons": [ { "rank": 0, "name": "osd152", "addr&q

Re: [ceph-users] Ceph cluster is unreachable because of authentication failure

2014-01-16 Thread Guang
5}) = 0 (Timeout) [pid 75873] select(0, NULL, NULL, NULL, {0, 5}) = 0 (Timeout) [pid 75873] select(0, NULL, NULL, NULL, {0, 5}) = 0 (Timeout) [pid 75873] select(0, NULL, NULL, NULL, {0, 5}) = 0 (Timeout) Thanks, Guang On Jan 15, 2014, at 5:54 AM, Guang wrote: > Thanks Sage

Re: [ceph-users] Ceph cluster is unreachable because of authentication failure

2014-01-19 Thread Guang
Thanks Sage. I just captured part of the log (it was growing fast); the process did not hang, but I saw the same pattern repeatedly. Should I increase the log level and send it over email (it reproduces constantly)? Thanks, Guang On Jan 18, 2014, at 12:05 AM, Sage Weil wrote: > On Fri, 17

Re: [ceph-users] RADOS + deep scrubbing performance issues in production environment

2014-02-03 Thread Guang
+ceph-users. Does anybody have similar experience with scrubbing / deep-scrubbing? Thanks, Guang On Jan 29, 2014, at 10:35 AM, Guang wrote: > Glad to see there is some discussion around scrubbing / deep-scrubbing. > > We are experiencing the same, that scrubbing could affect late

[ceph-users] PG folder hierarchy

2014-02-25 Thread Guang
Hello, Most recently, when looking at PG folder splitting, I found that there is only one sub folder in each of the top 3 / 4 levels, and 16 sub folders start appearing from level 6; what is the design consideration behind this? For example, if the PG root folder is ‘3.1905_head’, in the first le
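A quick way to see the layout being described is to walk one PG directory on an OSD; the OSD id below is made up, and the PG id is the one from the message:

    # list the first two levels of hash-prefix sub folders for one PG (paths are examples)
    find /var/lib/ceph/osd/ceph-0/current/3.1905_head -maxdepth 2 -type d | sort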

Re: [ceph-users] PG folder hierarchy

2014-02-25 Thread Guang
Got it. Thanks Greg for the response! Thanks, Guang On Feb 26, 2014, at 11:51 AM, Gregory Farnum wrote: > On Tue, Feb 25, 2014 at 7:13 PM, Guang wrote: >> Hello, >> Most recently when looking at PG's folder splitting, I found that there was >> only one sub folder i

[ceph-users] Linux kernel module / package / drivers needed for RedHat 6.4 to work with CEPH RBD

2014-03-28 Thread Guang
Hello ceph-users, We are trying to play with RBD, and I would like to ask whether RBD works on RedHat 6.4 (with kernel version 2.6.32), and what modules / packages / drivers I need to install to use RBD? Thanks, Guang
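A quick check for the kernel client is whether the rbd module loads at all; as far as I know the stock RHEL 6.4 2.6.32 kernel does not ship it, so kernel RBD typically needs a newer kernel, while librbd-based access (e.g. via qemu) does not need the module:

    # does the running kernel provide the rbd block driver?
    modprobe rbd && lsmod | grep rbd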

[ceph-users] A simple tool to do osd crush reweigh after creating pool to gain better PG distribution across OSDs

2014-04-13 Thread Guang
Hi all, In order to deal with the PG unevenness problem [1, 2], which further leads to uneven disk usage, I recently developed a simple script which does *osd crush* reweight right after creating the pool that holds the most significant data (e.g. .rgw.buckets); we had good experience to tune the di
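For context, the core command such a script would loop over, plus the view used to check the result; the OSD id and target weight are examples:

    # inspect current CRUSH weights per OSD
    ceph osd tree
    # nudge one OSD's CRUSH weight (id and value are examples)
    ceph osd crush reweight osd.12 1.75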

[ceph-users] CEPH's data durability with different configurations

2014-04-18 Thread Guang
Hi all, One goal of a storage system is to achieve certain durability SLAs, so we replicate data with multiple copies and check consistency on a regular basis (e.g. scrubbing); however, replication increases cost (a tradeoff between cost & durability), and cluster-wide consistency check

[ceph-users] Docs - trouble shooting mon

2014-04-24 Thread Guang
ask how to check if “there are other copies of its content” for a given monitor instance? Thanks, Guang

[ceph-users] Tape backup for CEPH

2014-05-12 Thread Guang
radosgw for 1, I would like to check if anyone has experience with tape backup? Thanks, Guang

Re: [ceph-users] osd down/autoout problem

2014-05-15 Thread Guang
On May 15, 2014, at 6:06 PM, Cao, Buddy wrote: > Hi, > > One of the osds in my cluster goes down with no reason; I saw the error message in > the log below. I restarted the osd, but after several hours the problem came > back again. Could you help? > > “Too many open files not handled on operation 24
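A hedged sketch of the usual checks for this error; the value and the way the OSD pid is picked below are examples:

    # what open-file limit does the running ceph-osd actually have?
    grep 'Max open files' /proc/$(pgrep ceph-osd | head -1)/limits
    # ceph.conf knob applied by the init script, e.g.:  [global]  max open files = 131072  (value is an example)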

Re: [ceph-users] Expanding pg's of an erasure coded pool

2014-05-25 Thread Guang Yang
a thread for each connection, so kerblam. > (And it actually requires a couple such connections because we have > separate heartbeat, cluster data, and client data systems.) Hi Greg, Is there any plan to refactor the messenger component to reduce the number of threads? For example, use eve

Re: [ceph-users] Expanding pg's of an erasure coded pool

2014-05-29 Thread Guang Yang
On May 28, 2014, at 5:31 AM, Gregory Farnum wrote: > On Sun, May 25, 2014 at 6:24 PM, Guang Yang wrote: >> On May 21, 2014, at 1:33 AM, Gregory Farnum wrote: >> >>> This failure means the messenger subsystem is trying to create a >>> thread and is getting an

[ceph-users] XFS - number of files in a directory

2014-06-23 Thread Guang Yang
large directories. I would like to hear about your experience with this on your Ceph cluster if you are using XFS. Thanks. [1] http://www.scs.stanford.edu/nyu/02fa/sched/xfs.pdf Thanks, Guang

Re: [ceph-users] Ask a performance question for the RGW

2014-06-29 Thread Guang Yang
this limitation. Thanks, Guang On Jun 30, 2014, at 2:54 PM, baijia...@126.com wrote: > > hello, everyone! > > when I user rest bench test RGW performance and the cmd is: > ./rest-bench --access-key=ak --secret=sk --bucket=bucket_name --seconds=600 > -t 200 -b 524288

Re: [ceph-users] Ask a performance question for the RGW

2014-06-30 Thread Guang Yang
n was hanging there waiting for other ops to finish their work. > > thanks > baijia...@126.com > > From: Guang Yang > Sent: 2014-06-30 14:57 > To: baijiaruo > Cc: ceph-users > Subject: Re: [ceph-users] Ask a performance question for the RGW > Hello, > There is a k

[ceph-users] row geo-replication to another data store?

2014-07-17 Thread Guang Yang
Thanks, Guang

[ceph-users] OSD disk replacement best practise

2014-08-14 Thread Guang Yang
, Guang

[ceph-users] Usage pattern and design of Ceph

2013-08-18 Thread Guang Yang
Hi ceph-users, This is Guang and I am pretty new to ceph, glad to meet you guys in the community! After walking through some documents of Ceph, I have a couple of questions:   1. Is there any comparison between Ceph and AWS S3, in terms of the ability to handle different work-loads (from KB to

[ceph-users] Deploy Ceph on RHEL6.4

2013-08-19 Thread Guang Yang
Hi ceph-users, I would like to check if there is any manual / set of steps that would let me try deploying Ceph on RHEL? Thanks, Guang

Re: [ceph-users] Usage pattern and design of Ceph

2013-08-19 Thread Guang Yang
Thanks Mark. What are the design considerations behind breaking large files into 4M chunks rather than storing the large file directly? Thanks, Guang From: Mark Kirkwood To: Guang Yang Cc: "ceph-users@lists.ceph.com" Sent: Monday, August 19, 2013 5:18

Re: [ceph-users] Usage pattern and design of Ceph

2013-08-19 Thread Guang Yang
Thanks Greg. Some comments inline... On Sunday, August 18, 2013, Guang Yang wrote: Hi ceph-users, >This is Guang and I am pretty new to ceph, glad to meet you guys in the >community! > > >After walking through some documents of Ceph, I have a couple of questions: >  1. Is th

Re: [ceph-users] Usage pattern and design of Ceph

2013-08-20 Thread Guang Yang
Then that makes total sense to me. Thanks, Guang From: Mark Kirkwood To: Guang Yang Cc: "ceph-users@lists.ceph.com" Sent: Tuesday, August 20, 2013 1:19 PM Subject: Re: [ceph-users] Usage pattern and design of Ceph On 20/08/13 13:27, Guang

Re: [ceph-users] Usage pattern and design of Ceph

2013-08-20 Thread Guang Yang
Thanks Greg. >>The typical case is going to depend quite a lot on your scale. [Guang] I am thinking of a scale of billions of objects, with sizes from several KB to several MB; my concern is about cache efficiency for such a use case. That said, I'm not sure why you'd want to

Re: [ceph-users] ceph-deploy zap disk failure

2013-10-18 Thread Guang Yang
Thanks all for the recommendation. I worked around it by modifying ceph-deploy to use the full path for sgdisk. Thanks, Guang On Oct 16, 2013, at 10:47 PM, Alfredo Deza wrote: > On Tue, Oct 15, 2013 at 9:19 PM, Guang wrote: >> -bash-4.1$ which sgdisk >> /usr/sbin/sgdisk >>

[ceph-users] Rados bench result when increasing OSDs

2013-10-21 Thread Guang Yang
PGs, and for the large cluster, the pool has 4 PGs (as I will further scale the cluster, I chose a much larger PG number). Does my test result make sense? That is, when the PG number and OSD count increase, might the latency drop? Thanks, Guang
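For reference, the kind of rados bench invocation used for such a latency comparison; the pool name, duration, concurrency and object size are all examples:

    # 4MB-object write test with 16 concurrent ops against a test pool
    rados bench -p testpool 60 write -t 16 -b 4194304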

Re: [ceph-users] Rados bench result when increasing OSDs

2013-10-22 Thread Guang Yang
:13 AM, Guang Yang wrote: > Dear ceph-users, Hi! > Recently I deployed a ceph cluster with RadosGW, from a small one (24 OSDs) > to a much bigger one (330 OSDs). > > When using rados bench to test the small cluster (24 OSDs), it showed the > average latency was around 3ms (objec

Re: [ceph-users] Rados bench result when increasing OSDs

2013-10-22 Thread Guang Yang
Hi Kyle and Greg, I will get back to you with more details tomorrow; thanks for the response. Thanks, Guang On Oct 22, 2013, at 9:37 AM, Kyle Bader wrote: > Besides what Mark and Greg said it could be due to additional hops through > network devices. What network devices are you using, what is the n

Re: [ceph-users] Rados bench result when increasing OSDs

2013-10-24 Thread Guang Yang
e OSDs, the cluster will need to maintain far more connections between OSDs, which potentially slows things down? 3. Anything else I might have missed? Thanks all for the constant help. Guang On Oct 22, 2013, at 10:22 PM, Guang Yang wrote: > Hi Kyle and Greg, > I will get back to you with more details tomo

Re: [ceph-users] Rados bench result when increasing OSDs

2013-10-24 Thread Guang Yang
Thanks Mark. I cannot connect to my hosts right now; I will do the check and get back to you tomorrow. Thanks, Guang On Oct 24, 2013, at 9:47 PM, Mark Nelson wrote: > On 10/24/2013 08:31 AM, Guang Yang wrote: >> Hi Mark, Greg and Kyle, >> Sorry to respond this late, and thanks for providing the

[ceph-users] 'ceph osd reweight' VS 'ceph osd crush reweight'

2013-12-11 Thread Guang Yang
Hello ceph-users, I am a little bit confused by these two options. I understand that crush reweight determines the weight of the OSD in the CRUSH map, so it impacts I/O and utilization; however, I am still confused by the osd reweight option: is that something that controls the I/O distribution acros
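The two commands side by side, as I understand the distinction: crush reweight changes the OSD's weight in the CRUSH map (roughly its advertised capacity), while osd reweight applies a temporary 0..1 override on top of that (1.0 means no override). The id and values are examples:

    # change the CRUSH weight of an OSD in the map
    ceph osd crush reweight osd.12 1.6
    # apply a 0..1 override on top of the CRUSH weight
    ceph osd reweight 12 0.8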

Re: [ceph-users] Ceph cluster performance degrade (radosgw) > after running some time

2013-12-30 Thread Guang Yang
Thanks Wido, my comments inline... >Date: Mon, 30 Dec 2013 14:04:35 +0100 >From: Wido den Hollander >To: ceph-users@lists.ceph.com >Subject: Re: [ceph-users] Ceph cluster performance degrade (radosgw) >    after running some time >On 12/30/2013 12:45 PM, Guang wrote: > H

Re: [ceph-users] Ceph cluster performance degrade (radosgw) after running some time

2013-12-31 Thread Guang Yang
Thanks Wido, my comments inline... >Date: Mon, 30 Dec 2013 14:04:35 +0100 >From: Wido den Hollander >To: ceph-users@lists.ceph.com >Subject: Re: [ceph-users] Ceph cluster performance degrade (radosgw) >    after running some time >On 12/30/2013 12:45 PM, Guang wrote: > H

Re: [ceph-users] Ceph cluster performance degrade (radosgw) after running some time

2013-12-31 Thread Guang Yang
Thanks Mark, my comments inline... Date: Mon, 30 Dec 2013 07:36:56 -0600 From: Mark Nelson To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph cluster performance degrade (radosgw)     after running some time On 12/30/2013 05:45 AM, Guang wrote: > Hi ceph-users and ceph-devel, >

Re: [ceph-users] Ceph cluster performance degrade (radosgw) > after running some time

2014-02-10 Thread Guang Yang
Thanks all for the help. We finally identified that the root cause of the issue was lock contention happening during folder splitting, and here is the tracking ticket (thanks Inktank for the fix!): http://tracker.ceph.com/issues/7207 Thanks, Guang On Tuesday, December 31, 2013 8:22 AM, Guang

[ceph-users] Ceph GET latency

2014-02-18 Thread Guang Yang
Hi ceph-users, We are using Ceph (radosgw) to store user generated images. As GET latency is critical for us, I recently did some investigation over the GET path to understand where the time is spent. I first confirmed that the latency came from the OSD (read op), so we instrumented the code to trac
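One non-intrusive way to see where time goes inside an OSD read op is the admin socket; the OSD id and socket path below are examples:

    # dump the slowest recent ops, with per-stage timestamps, from one OSD
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops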

[ceph-users] XFS tunning on OSD

2014-03-05 Thread Guang Yang
Hello all, Recently I have been working on Ceph performance analysis on our cluster. Our OSD hardware looks like: 11 SATA disks, 4TB each, 7200RPM; 48GB RAM. When breaking down the latency, we found that half of the latency (average latency is around 60 milliseconds via radosgw) comes from file loo
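Some commonly discussed starting points when dentry/inode lookups dominate, offered as a hedged sketch rather than verified recommendations; the device, mount point and values are examples:

    # mount options often used for XFS-backed OSD data disks
    mount -t xfs -o noatime,nodiratime,inode64,logbsize=256k /dev/sdb1 /var/lib/ceph/osd/ceph-0
    # keep dentries/inodes cached longer so lookups hit memory instead of disk
    sysctl vm.vfs_cache_pressure=10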

Re: [ceph-users] Firefly 0.80 rados bench cleanup / object removal broken?

2014-05-19 Thread Guang Yang
turn of run and cleanup on that basis. If you still want to do a slow linear search to clean up, be sure to remove the benchmark_last_metadata object before you kick off the cleanup. Let me know if that helps. Thanks, Guang On May 20, 2014, at 6:45 AM, matt.lat...@hgst.com wrote: > >
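A hedged manual cleanup along those lines; the pool name is an example and the object prefix assumes the default rados bench naming:

    # remove the metadata object first (as suggested above), then delete leftover bench objects
    rados -p testpool rm benchmark_last_metadata
    rados -p testpool ls | grep '^benchmark_data' | while read o; do rados -p testpool rm "$o"; done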

[ceph-users] RBD shared between clients

2013-05-01 Thread Yudong Guang
Hi, I've been trying to use the block device recently. I have a running cluster with 2 machines and 3 OSDs. On a client machine, let's say A, I created an rbd image using `rbd create`, then formatted, mounted and wrote something to it, and everything was working fine. However, a problem occurred when I tr

Re: [ceph-users] RBD shared between clients

2013-05-01 Thread Yudong Guang
ve such > mechanisms (with limits, read the docs). > > On May 1, 2013, at 11:19 AM, Yudong Guang > wrote: > > > Hi, > > > > I've been trying to use block device recently. I have a running cluster > with 2 machines and 3 OSDs. > > > > On a client ma

Re: [ceph-users] RBD shared between clients

2013-05-02 Thread Yudong Guang
Thank you, Gandalf and Igor. I intuitively think that building one cluster on top of another is not appropriate. Maybe I should give RadosGW a try first. On Thu, May 2, 2013 at 3:00 AM, Igor Laskovy wrote: > Or maybe in case the hosting purposes easier implement RadosGW. > -- Yudong

[ceph-users] Long peering - throttle at FileStore::queue_transactions

2016-01-04 Thread Guang Yang
action immediately. > Could we delay the queue of the transactions until all PGs on the host are > peered? Thanks, Guang

Re: [ceph-users] Long peering - throttle at FileStore::queue_transactions

2016-01-05 Thread Guang Yang
On Mon, Jan 4, 2016 at 7:21 PM, Sage Weil wrote: > On Mon, 4 Jan 2016, Guang Yang wrote: >> Hi Cephers, >> Happy New Year! I got a question regarding the long PG peering.. >> >> Over the last several days I have been looking into the *long peering* >> problem when