Jason,

Everything points to an internal issue, maybe with a CT, a ribbon connection, or just a bad board. Regardless, a new inverter is the key, and Sol-Ark should really want to investigate the issue themselves anyway. Fun times.
On Sat, Nov 9, 2024, 7:16 AM Jason Szumlanski via RE-wrenches <re-wrenches@lists.re-wrenches.org> wrote:

> I wrote in another thread about an off-grid quad Sol-Ark system that was shutting down due to parallel stop when one of the four inverters experienced a DC PV fault, and how that shutdown is far from ideal. The same system is down once again, this time due to an AC fault code.
>
> The homeowner started getting repeated F18 and F34 AC overcurrent faults on one of the slave inverters. This, in turn, shut the entire system down due to parallel stop faults (F41). None of the other units had AC overcurrent faults, and the load is nowhere near requiring all four inverters even under the most demanding circumstances. It was designed this way for redundancy, which I am quickly finding out is not Sol-Ark's strong suit.
>
> To diagnose the issue remotely, I had the owner turn off all four load breakers, all DC PV input, and the AC microinverter input on the GEN terminals. I had them restart everything (several times). Every time, the same inverter would have repeated AC overcurrent faults, and the others would have parallel system faults. Since there were no loads connected, by virtue of the load breakers being open, I suspected this had to be an internal fault.
>
> I went to the site, and Sol-Ark Tier 1 tech support had me shut off all inverters and take the suspect inverter out of parallel operation mode. As a standalone master, it was able to power up and support the entire house load without issue. Then we reprogrammed it for parallel operation again and turned everything back on. We were unable to stay on the phone long enough to determine whether this was successful, but ultimately, the fault returned. I was told to call and ask for Tier 2 the next time it happened, which I intend to do on Monday. At this point, the issue can only be internal to the unit, and I intend to demand warranty replacement of the suspect components or the whole unit.
>
> I had to get the system running, so I wanted to take the bad inverter out of the parallel system. I was hoping that simply shutting it down would work. This is the third of four inverters in the Modbus chain. When turning it off completely (all AC and DC switches disconnected), the fourth inverter would fault, presumably because the Modbus signal was not being relayed, but inverters #1 and #2 worked fine. However, I wanted #4 to continue working while taking #3 out of service. So I turned on the battery disconnect for #3 but left the unit in the off mode by not pressing the on/off button, thinking that would allow relay of the Modbus signal from #2 to #4. That allowed the system to work momentarily, but then everything faulted out due to parallel system stop. In other words, I was going to have to physically take #3 out of the Modbus daisy chain to make this work.
>
> Of course, I didn't have a long enough Cat5 cable with me, nor a Cat5 splice connector. So I had to rig something, which I did successfully, to jumper from #2 to #4. But when I turned everything back on, #4 still would not work. I eventually realized that you have to change the Modbus address from 04 to 03 in the settings. Apparently, the addresses need to be sequential for it to work. Once I did this, I was able to get the system up and running again as a triple-inverter parallel setup. No faults were observed. So the theory was proven that #3 has an internal issue.
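(An aside on the sequential-address point: for anyone who wants to sanity-check which addresses actually answer on a generic Modbus RTU chain, here is a rough Python sketch using pymodbus. To be clear, Sol-Ark has not published the protocol running on its parallel CAT5 link, so the serial port, baud rate, register address, and unit ID range below are all assumptions for illustration. This shows the general idea of spotting a gap in the address sequence on a standard RS-485 bus, not a supported Sol-Ark procedure.)

    # probe_chain.py -- minimal sketch, assumes pymodbus 3.x and a generic
    # Modbus RTU bus on an RS-485 adapter. Port, baud rate, and register
    # are placeholders, not Sol-Ark specifics.
    from pymodbus.client import ModbusSerialClient

    PORT = "/dev/ttyUSB0"   # assumption: USB RS-485 adapter
    BAUD = 9600             # assumption: bus speed

    client = ModbusSerialClient(port=PORT, baudrate=BAUD, timeout=1)
    if not client.connect():
        raise SystemExit(f"could not open {PORT}")

    # Poll unit IDs 1-4 and record which ones respond.
    alive = []
    for unit in range(1, 5):
        result = client.read_holding_registers(address=0, count=1, slave=unit)
        if not result.isError():
            alive.append(unit)
    client.close()

    print("responding unit IDs:", alive)
    missing = [u for u in range(1, max(alive, default=0) + 1) if u not in alive]
    if missing:
        print("gap in address sequence at:", missing)

On a healthy four-unit chain you would expect 1 through 4 to respond; a gap at 3 mirrors the situation described above, where the fix was re-addressing #4 as 03 to restore a sequential chain.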
> Anyway, bottom line: I am disappointed at how one inverter fault takes down the whole paralleled system, and also at how taking a faulted inverter out of the system requires physical and programming changes. Turning it off should be sufficient. This is a very poor way to implement a parallel system that should provide the peace of mind that redundancy implies. Now I have a customer who thought they were getting a system with failsafe redundancy that actually requires a service call every time one of the paralleled units decides it does not want to play nicely with others.
>
> Jason Szumlanski
> Florida Solar Design Group
_______________________________________________
List sponsored by Redwood Alliance

Pay optional member dues here: http://re-wrenches.org

List Address: RE-wrenches@lists.re-wrenches.org

Change listserver email address & settings:
http://lists.re-wrenches.org/options.cgi/re-wrenches-re-wrenches.org

There are two list archives for searching. When one doesn't work, try the other:
https://www.mail-archive.com/re-wrenches@lists.re-wrenches.org/
http://lists.re-wrenches.org/pipermail/re-wrenches-re-wrenches.org

List rules & etiquette:
http://www.re-wrenches.org/etiquette.htm

Check out or update participant bios:
http://www.members.re-wrenches.org