I thought the industry had gone to water-soluble flux and then to no-clean 
flux.  Unless you’re talking about maybe some alcohol after a manual repair.

 

From: AF <af-boun...@af.afmug.com> On Behalf Of Forrest Christian (List Account)
Sent: Wednesday, July 22, 2020 9:25 PM
To: AnimalFarm Microwave Users Group <af@af.afmug.com>
Subject: Re: [AFMUG] 450i/450m DFS false detect problem solved in later 
firmware?

 

Yes, it is.   I'm also a bit concerned about material compatibility with the 
latest patch they've used, as they seem to be doing some sort of metal 
deposition which is rather easy to 'erase' with a few of the solvents one 
normally encounters on an electronics assembly line.  As an example, it really 
doesn't like the flux cleaner we use here.

 

 

 

On Wed, Jul 22, 2020 at 8:05 PM Chuck McCown <ch...@wbmfg.com> wrote:

Is the metal of the patch exposed?  Even if it is exposed, a thin layer will 
not lower the frequency that much.  If it were .100” thick then you might see a 
significant change.

Sent from my iPhone





On Jul 22, 2020, at 6:50 PM, Forrest Christian (List Account) 
<li...@packetflux.com> wrote:



Actually, it is a bare ceramic patch antenna.   Look at the top; you're seeing 
the side view.

 

When I got the response from the vendor last time, they said they don't 
recommend it, and stated several reasons, including (if I remember correctly) 
something about changing the dielectric constant in some odd way and detuning 
the antenna.   We didn't get much further than that, since they didn't have a 
good answer about whether this was confined to certain types of coating or 
what...

 

I haven't heard back from them on whether or not the latest modules have the 
same issues, but I suspect they likely do.

 

On the other hand, it wouldn't surprise me to find that what they're concerned 
about isn't going to change the tuning enough to actually make a difference, 
and they're just being overly cautious.
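
For a rough sense of scale (this is the textbook cavity-model approximation 
for a rectangular patch, not anything vendor-specific): the patch resonates 
near

    f_r ≈ c / (2 L sqrt(ε_eff))

where L is the patch length and ε_eff the effective dielectric constant, so a 
coating shifts the resonance by roughly Δf/f ≈ -Δε_eff / (2 ε_eff).  If a thin 
conformal coat raised ε_eff by, say, 1% (purely an illustrative number), that 
would pull the resonance down about 0.5%, or roughly 8 MHz at the 1575.42 MHz 
GPS L1 frequency.  Ceramic GPS patches are quite narrowband, so even a shift 
of that size could plausibly matter, which may be why the vendor hedges.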

 

 

On Wed, Jul 22, 2020 at 7:02 AM Chuck McCown <ch...@wbmfg.com> wrote:

I would not expect a thin layer of conformal coating to bother the antenna in 
a significant way.  It might if you were putting it on a bare patch, but that 
is not the case here.

Sent from my iPhone





On Jul 22, 2020, at 6:53 AM, Mark Radabaugh <m...@amplex.net> wrote:



 

I decided to try a conformal coating on the Syncbox to see whether it would 
cause any problems with the GPS while avoiding the corrosion issues we have 
seen.   We tested before and after the conformal coating and found no 
detectable impairment to the GPS SNR or signal level.   
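
For anyone who wants to repeat that kind of before/after comparison, here is a 
minimal sketch of logging per-satellite SNR from the NMEA $xxGSV sentences.  
It assumes pyserial and a module streaming NMEA on /dev/ttyUSB0 at 9600 baud; 
both are assumptions, so adjust for the actual hardware:

    # Log per-satellite SNR from NMEA GSV sentences ($GPGSV, $GLGSV, ...).
    # Requires pyserial; port name and baud rate are assumptions.
    import serial

    def parse_gsv(sentence):
        """Yield (prn, snr_dbhz) pairs from one GSV sentence."""
        fields = sentence.split('*')[0].split(',')   # drop the checksum
        # fields[4:] holds repeating groups of PRN, elevation, azimuth, SNR
        for i in range(4, len(fields) - 3, 4):
            prn, snr = fields[i], fields[i + 3]
            if prn and snr:                  # SNR is empty if not tracked
                yield prn, int(snr)

    with serial.Serial('/dev/ttyUSB0', 9600, timeout=2) as port:
        while True:
            line = port.readline().decode('ascii', errors='replace').strip()
            if line[3:6] == 'GSV':           # any talker ID's GSV sentence
                for prn, snr in parse_gsv(line):
                    print(f'{line[1:3]} sat {prn}: {snr} dB-Hz')

Dump a few minutes of this before and after coating and compare the 
per-satellite averages.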

 

Only about 16 hours so far, but throwing a pitcher of water on it and leaving 
the cover off all night in a rainstorm hasn’t seemed to bother it.   The 
picture didn’t catch the sync light, but it’s happily blinking away.  There is 
a lot of silicone grease on/around the RJ45, which looks a bit funny, but I 
can’t conformal coat the jack itself for obvious reasons and I was intending 
to douse this one with water.  I’m sure SNR is bad right now with a blob of 
water on top of the antenna, but it’s picking up enough to stay in lock.

 

I’m going to leave it like this and give it a couple of weeks to see how it 
holds up, but so far so good.  

 

Regarding loss of sync on 450 equipment - on further examination of the logs, 
we are seeing issues with 450 equipment randomly losing sync or switching to 
free-run and back to sync-over-power across a number of injectors - CMM5, 
CTM-2, and RackInjectors.   These issues started about the time we started 
deploying SyncInjectors, but it’s really looking like that is coincidental.   
The data points toward something that changed with the firmware rather than 
the injector.   I have not yet had time to see whether it affects the older 
450 as well as the 450i.

 

Mark

 

[Attached image: IMG_3364.jpeg]





On Jul 13, 2020, at 10:28 PM, Forrest Christian (List Account) 
<li...@packetflux.com> wrote:

 

You bring up a few fair (and known) points.  I'll respond to at least a few of 
them:

 

In relation to the water ingress, we've seen enough of these to know it's at 
least an occasional issue.  One thing we've seen, especially in hotter 
climates, is the gasket cracking in the enclosure.  I'm also not completely 
convinced this isn't a water condensation/surface moisture issue in damper 
climates.   Our enclosure manufacturer just switched to a different gasketing 
material, perhaps as a result of our whining.   I've also looked at various 
alternative enclosures but haven't found the right one yet.   I thought I had 
one which would have been perfect, until I got the $50-per-unit quote in 
quantity.  Some days I miss the pipes, but it was time for them to go, from a 
mostly marketing perspective. 

 

I've also looked at conformal coating in the past, and the challenge has been 
that the GPS module manufacturer has told us this is a no-no, since it 
apparently changes the tuning of the GPS patch antenna.   I probably need to 
re-evaluate this now that there's a different patch antenna on the new modules 
- but I expect a similar answer.    

 

For the past couple of years, I've been trying to move to a module mounted 
flat on the board, paired with a PCB antenna like those used in cell phones, 
where the antenna's pattern is such that mounting it flat on a vertically 
oriented PCB results in a vertical pattern like the patch has.  This should 
solve the issue of water getting on the antenna, as there won't be anything 
directly below the seam to leak on.    The holdup has been me trying to move 
to a module with a different GPS chipset at the same time, hoping that it 
would be better than our existing module.   With all the warts (some of which 
are described below), I'm finding that the module/chipset we're using is 
actually not that bad in comparison to some others.    I've even had an eval 
of a 'timing grade' GPS receiver here, which had more failures than the 
non-timing-grade ones we're using.  I have a few more to qualify, but look for 
this change in the coming months, assuming I can find a suitable module and 
antenna that works.

 

In relation to the random timing loss of lock, I am not going to disagree with 
you at all.   I suspect some of this might be leftover from the GPS+GLONASS 
issue at the end of the year, which seems to have resolved itself for the most 
part, but I know enough about that bug that it wouldn't shock me to find there 
are still lingering issues.   The problem is that although it 'feels' like 
there might be more issues, I don't have quantifiable numbers.    With all of 
that in mind, I'm currently working on in-field procedures for upgrading the 
firmware on these modules to get them the GLONASS fix.  This has been more 
troublesome than it should be, since it's just a matter of getting the right 
firmware - the problem being that the company that built the firmware for 
these got gobbled up, and the successor company tends to be more difficult to 
work with.  I have firmware which does work on the modules; it just isn't an 
official build by the manufacturer.  

 

Around the end of the year, we did switch our Basics to a newer module based 
on the same chipset.   The Basics shipped since then default to GPS+GALILEO.   
Recently we've been using this module in Aux Ports and Deluxes, but with 
GPS+GLONASS+Fixed-GLONASS firmware, since the Cambium firmware doesn't 
understand (yet) the Galileo sentences.   So anything you get from us today 
has this latest chipset in it, and has the GLONASS fix even if it isn't 
enabled.   I will say that testing and in-field reports indicate this is even 
more stable; I'm not sure how much of that is because of the increased antenna 
gain, how much is due to the updated firmware, and how much is just a side 
effect of there being a lot more of the older units in the field for customers 
to report problems on.   I know at least some of it is measurable on the bench 
here, so it isn't all based on field reports.  The only downside so far is 
that we do see an uptick in DOAs (but still well under 1%).  Frustratingly, 
the DOAs we've gotten back have somehow resurrected themselves between the 
field and here, but thankfully there doesn't seem to be any increase in 
in-field failures that we can see other than the DOAs.
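
As an aside, for anyone wondering why Galileo sentences would trip up a 
parser: NMEA prefixes every sentence with a two-letter talker ID (GP = GPS, 
GL = GLONASS, GA = Galileo, GN = combined fix), and firmware that only matches 
the legacy GPS prefix silently drops everything else.  A toy illustration - 
the whitelist here is hypothetical, not Cambium's actual parser:

    # Hypothetical legacy parser that only knows the classic GPS talker ID.
    KNOWN_TALKERS = {'GP'}

    def accepted(sentence, known=KNOWN_TALKERS):
        return sentence.startswith('$') and sentence[1:3] in known

    for s in ('$GPGGA,...', '$GLGSV,...', '$GAGSV,...', '$GNRMC,...'):
        print(s[:6], 'parsed' if accepted(s) else 'ignored')

A GPS+GALILEO module emitting $GA and $GN sentences would look half-silent to 
such a parser until it learns the new talker IDs.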

 

To Eric's point about the holdover timer:   I understand the CTM2s had this 
functionality built in.  Nothing else I'm aware of has had it, except for some 
of our early GPS modules, which just produced a pulse no matter what and 
aligned it when they could.   This was good in the early days, not so much 
nowadays.

 

I'm looking at a couple of options to implement a holdover.   First of all, 
assuming the docs are correct, I can tell the GPS modules to produce sync all 
the time, only when they have a 2D lock, or only when they have a 3D lock.   
Sync all the time can be bad, especially if you're not monitoring lock status, 
since a GPS can be out of lock but still producing sync pulses which are 
wildly wrong, causing all sorts of issues.   2D lock can be bad too, since it 
shares similar issues.   As a result, the modules default to '3D lock'.   I've 
been toying with doing something dynamic in the RackInjector, where it changes 
the mode on the fly: it waits until you get a 3D lock and then, since it 
should have a good position at that point, switches to '2D lock' mode, or 
maybe even freerun.   At a bare minimum, I will probably allow customers to 
change the setting statically if they're willing to deal with the 
ramifications.
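
To sketch that dynamic idea in Python (the set_pps_mode/fix_type calls are 
hypothetical stand-ins for whatever the real module API turns out to be):

    from enum import Enum

    class PpsMode(Enum):
        ALWAYS = 0    # pulse regardless of lock - risky
        FIX_2D = 2    # pulse with a 2D fix or better
        FIX_3D = 3    # pulse only with a full 3D fix

    def update_mode(module, have_had_3d):
        """Start strict; relax to 2D gating once a good 3D position is known."""
        fix = module.fix_type()            # hypothetical: returns 0, 2, or 3
        if not have_had_3d:
            module.set_pps_mode(PpsMode.FIX_3D)
            return fix == 3                # remember once we've seen 3D
        # Position already known, so a 2D fix is enough to keep pulses
        # flowing through brief constellation dips.
        module.set_pps_mode(PpsMode.FIX_2D)
        return True

Called periodically with the returned flag fed back in, this stays in strict 
3D mode until the first good fix and tolerates 2D thereafter.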

 

I have some other ideas I'm cooking up as well; I just don't want to say too 
much until I'm actually a bit farther down the path.

 

 

On Mon, Jul 13, 2020 at 5:57 AM Mark Radabaugh <m...@amplex.net> wrote:

Forrest,

 

As usual, there are probably multiple issues going on.   We have seen an 
increase in the number of DFS hits recently, and we also have a lot of 
Packetflux timing equipment on the network.  I have noticed that DFS hits tend 
to be worse in ‘unusual’ hot weather conditions, and we have certainly seen a 
pretty unusual heat wave over the last few weeks.  I’m not sure if this is 
just heat changing the sensitivity of the DFS detections (temperature is 
involved in the RF calibration), if there is more tropospheric ducting in 
these weather conditions, or if the weather radar systems are just really 
jacking up power looking for storms.  

 

As far as timing - I do think there is some timing instability going on, but I 
can’t pin it down to anything specific.   We continue to struggle with 
RackInjectors losing the GPS timing signal from the Syncbox Basic during or 
after storm events.   The typical symptom is that the RackInjector fails to 
see sync from the Syncbox and the APs go into freerun.  Sats in view, etc. all 
look normal; there are just no pulses.  Sometimes a power cycle from the 
RackInjector will fix it, sometimes physically unplugging it will fix it, and 
sometimes you just have to wait.   I have instructed the field crews multiple 
times to make absolutely sure they tighten every screw on the Syncbox, but I’m 
not 100% sure they are doing that.   I have seen at least one come back to the 
shop with evidence of water damage to the GPS board at the top.   I would 
really like to see the extra step of conformal coating on the boards if there 
isn’t a reliable way of keeping water off of them.

 

We have also been seeing an unusual number of LBT issues with the 3.65 gear, 
which I believe are related to other APs drifting or briefly going off timing.

 

Due to the number of times we were seeing loss of sync, we had to enable sync 
+ freerun in order to avoid session resets.   I’m not convinced that we aren’t 
still seeing timing jumps from the sequence of loss of sync, into freerun, 
then an abrupt change in framing when sync comes back.   Any time something 
like that happens, it tends to cause a wave of DFS and LBT events across the 
network.   I can’t necessarily show anything specific at this point, though.    
We do get a lot of archived and searchable logging from our Sumologic syslog 
server.   I’m going to ask the NOC to put together a report of APs reporting 
timing recovery, look for any correlation with DFS or LBT events within a 
60-second window, and see what we get.
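
The core of that report is just a windowed join.  A minimal sketch, assuming 
the timing-recovery and DFS/LBT lines have already been exported from the 
syslog archive as (timestamp, ap) tuples:

    from datetime import timedelta

    WINDOW = timedelta(seconds=60)

    def correlate(recoveries, dfs_lbt_events, window=WINDOW):
        """Pair each timing-recovery line with any DFS/LBT event that
        follows it within the window."""
        hits = []
        for r_time, r_ap in recoveries:
            for e_time, e_ap in dfs_lbt_events:
                if timedelta(0) <= e_time - r_time <= window:
                    hits.append(((r_time, r_ap), (e_time, e_ap)))
        return hits

The nested loop is fine for a one-off report; sort both lists by timestamp 
and advance a pointer instead if the logs are large.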

 

Mark





On Jul 13, 2020, at 3:19 AM, Forrest Christian (List Account) 
<li...@packetflux.com> wrote:

 

I need to be a bit clearer: I'm not really sure what version this customer is 
running.   The question about 15.x/16.x came from a couple of oldish threads 
which indicated that something broke early in 15.x and still wasn't fixed in 
16.x.   But I find that unlikely to still be the case a year or two on.   In 
those year-and-a-bit-old threads, the report was that one had to go back to 
very early in 15.x to "fix" this issue.   But like I've said earlier in this 
paragraph, I find this unlikely to still be the case - I was just hoping to 
verify that it wasn't common knowledge that DFS was broken on 16.x.

 

I know this customer has been in contact with Cambium.   Based on our 
conversations with the customer so far, I get the impression that for some 
reason they've decided this is a sync issue.   I don't know if this is the 
customer's determination or if Cambium has told them this.  I like your word 
'dubious', as I'm skeptical as well, but I'm also not one to dismiss a 
possible cause until I fully rule it out, as they could be 100% correct.

 

I could see where, if you have an AP with sync broken intermittently 
(especially if you have freerun on), you might end up with a DFS event as a 
result of things just not being in sync.   But I have reason to believe this 
isn't the case with any of their APs - at least not the ones I have seen the 
RackInjector's GPS status screen for.  

 

I could also see where a stray pulse or two might be misinterpreted by the AP 
as the correct alignment, with the same effect of causing the AP to transmit 
out of sync.  But generally, the radios should ignore these, as they're very 
rare (and they exist in both the PacketFlux and official Cambium gear, so if 
it's a problem with mine, it should be a problem with the official gear as 
well).
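
That 'ignore these' logic amounts to a plausibility check on pulse spacing: 
only accept a 1PPS edge that lands roughly one second after the last accepted 
edge.  A minimal sketch of the general technique (the 5 ms tolerance is 
illustrative, not from any datasheet):

    TOLERANCE = 0.005   # seconds; illustrative value, not from a datasheet

    def accept_pulse(last_accepted, now, tolerance=TOLERANCE):
        """Accept a pulse timestamp only if it is ~1 s after the last one."""
        if last_accepted is None:
            return True                  # first pulse: accept to bootstrap
        return abs((now - last_accepted) - 1.0) <= tolerance

A stray pulse arriving mid-interval fails the check and gets discarded rather 
than re-aligning the frame timing.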

 

And I agree with you 100% about a dislike for DFS.  I have a feeling that this 
customer isn't going to help me change my opinion.

 

 

On Sun, Jul 12, 2020 at 8:23 PM Ken Hohhof <af...@kwisp.com> wrote:

I am unaware of any correlation between DFS events and either Packetflux or 
15.x FW.

 

I don’t use a lot of DFS because honestly it seems fussy no matter what.  But I 
have a tower with 10 sectors in 5 GHz (8 x 450i and 2 x 450m).  They are all 
synced from a Packetflux Rackinjector using Cambium Sync.  4 of the 450i 
sectors are in 5.4 DFS, and I’m embarrassed to find they are still on 15.2 FW.  
Uptime of about 6 months and no DFS events.  So I’m dubious about all of this.

 

The latest production FW is 16.2.1 and it also has a lot of fixes so I’m not 
sure why you would be running something so far behind.  As I said, I’m 
embarrassed to find I still have radios on 15.2.

 

Has he opened a case with Cambium support?  There are some best practices with 
DFS.  For sure you don’t want to configure the AP to think the antenna gain is 
lower than it is (not possible with 450m or integrated 450i).  You don’t want 
to set the SM Receive Target Level higher than necessary on other sectors.  
Then there’s choosing the alternate frequencies.  And I suppose a poor sync 
configuration could cause false DFS detections, where an AP sees the signal 
from an adjacent AP.

 

But who knows what causes these events?  Somebody’s Linksys reflected off a 
bird?  A competitor aiming a new radio?  I used to have a 5.4 GHz PTP500 
backhaul and the end pointed in the general direction of Chicago would have DFS 
events when there were storms.  I thought ducting was causing it to see distant 
signals, but it could also have been tripped by lightning.  DFS is fussy.  I 
don’t like it.  If I could swap out all the SMs on those DFS sectors for 450b, 
I would probably move them to U-NII-1.

 

 

From: AF <af-boun...@af.afmug.com> On Behalf Of Forrest Christian (List Account)
Sent: Sunday, July 12, 2020 7:56 PM
To: AnimalFarm Microwave Users Group <af@af.afmug.com>
Subject: Re: [AFMUG] 450i/450m DFS false detect problem solved in later 
firmware?

 

I read the 16.0.1 release notes; nothing really specific about DFS other than 
it being on when it shouldn't be.  However, I agree there is a lot of stuff 
fixed in there, some of which could have repercussions for DFS.

 

Are you saying that mid to late 15.x was generally broken for DFS and this is 
largely fixed in 16.x?   I guess my real question should have been 'What is the 
state of DFS in the 450 platform and how fussy is it'?

 

I'm still gathering information from this customer, but it sounds like they're 
still trying to track down the root cause.  Sometime in the past week or so, 
they figured out that there was some correlation between the DFS events and 
adding a fair bit of PacketFlux gear, so this correlation is now the leading 
root cause in their minds.   So now I get to try to resolve their problem for 
them. 

 

 

 

On Sun, Jul 12, 2020 at 3:00 PM Dave <dmilho...@wletc.com> wrote:

If they are not running 16.0.1, nothing can save them from some weird issues 
with the DFS bands.

 Lots of things were corrected in 15.2 and later for EIRP- and SNR-related 
calculations that help with H/V misreads and A/B channel alignments.

Read the release notes for 16.0.1 for further info.

 

On 7/11/2020 3:12 AM, Forrest Christian (List Account) wrote:

I'm working with a customer who is having problems with DFS false hits and is 
convinced this is a PacketFlux sync issue.   I'm never one to say it 
definitively isn't my problem, but I'm skeptical in this case. 

 

I know that at some point in the past, anything beyond 15.0.2 was known by 
some customers to have fairly common DFS issues.   I thought this was resolved 
in later releases, but I also don't see any mention of said issue being 
resolved in any release notes post-15.0.2.

 

I was wondering if anyone knew the current status - i.e., whether they had 
been seeing the problem previously and then discovered it was fixed, or have 
tried recent releases and found the problem still exists, etc...

 

-- 

- Forrest

 





 

-- 

- Forrest





 

-- 

- Forrest


 





 

-- 

- Forrest







 

-- 

- Forrest





 

-- 

- Forrest

-- 
AF mailing list
AF@af.afmug.com
http://af.afmug.com/mailman/listinfo/af_af.afmug.com
