Hey,

I'm a dev on this project, so I'll provide a few explanations about this 
topic:

> I don't get it. Why are they supposed to be dropped only because the 
> logging service is down?
>
We're calling this a "logger", but it's not what you'd think: the data 
being transmitted is subject to GDPR and cannot be persisted anywhere. 
Thus, if the receiving end disconnects from the main program, we simply 
drop the data (it doesn't actually matter much).

After trying a few ways to handle this data stream inside "P", as it's been 
called, I found an approach that doesn't hang the program badly when 
disconnections occur (or when the data stream is too heavy):

- 1 Goroutine ranges over a buffered channel and writes to the Logger
- All Goroutines that send data to this channel use this pattern:

select {
case channel <- data:
default:
        // buffer full: drop the data rather than block the sender
}

This is surprisingly cheap performance-wise.
- 1 Goroutine checks every N seconds, atomically, whether the routine that 
writes to the logger hit a disconnection error. If so, it reconnects, 
resets the bufio.Writer/Reader, and resets the atomic state.
Meanwhile, the logger Goroutine just runs in "range -> if -> continue" mode 
until the connection is back up.

I didn't find a good way to avoid this last "mode", though. Here's a code 
sample:

for rawData := range channel {
        // channelState == 1 means we can safely write again; while it
        // is 0 we just drain the channel and drop the data.
        if atomic.LoadInt32(&channelState) == 0 {
            continue
        }
        // Prepare data
        var dataToWrite []byte
        // ...

        // Send to logger
        n, err := writer.Write(dataToWrite)
        // ...
}

This ends up running fine, but I'm sure there's a better way to avoid all 
the useless data going into this channel while the logger is down.

Thanks for the answers :)


On Thursday, February 14, 2019 at 1:29:51 PM UTC+1, Marc Zahn wrote:
>
> I don't get it. Why are they supposed to be dropped only because the 
> logging service is down?
>
>> Making a service between P and the logger is an interesting way to go
>>
> Another service, which could then be down as well? Why not a queue? 
>
> On Thursday, February 14, 2019 at 10:05:40 AM UTC+1, Michel Levieux wrote:
>>
>> Hello everyone, thx for all your interesting answers!
>>
>> I think the fact that when the logger's down the requests have to be 
>> dropped (not queued; maybe I was not clear enough about that in my first 
>> message) constrains our options. Making a service between P and the 
>> logger is an interesting way to go. For the moment we have made something 
>> quite simple with atomic and some goroutines cooperating to know if the 
>> connection is still up, or to try to reconnect when it's not, but I think 
>> we will come back to that later.
>>
>> On Wed, Feb 13, 2019 at 2:54 PM Dany Xu <xuletter0...@gmail.com> wrote:
>>
>>> As discussed above, I think the answer is decoupling P and the logger, 
>>> and storing the logs while the logger is down. A push/pull pattern would 
>>> be better: P pushes all logs and the logger pulls them. Just keep a 
>>> bigger storage for un-consumed logs. A queue may be a better way, but 
>>> use a single storage.
>>>
>>> On Tuesday, February 12, 2019 at 12:34:46 AM UTC+8, Michel Levieux wrote:
>>>>
>>>> Hi guys. I need a little help here.
>>>>
>>>> I work in a digital marketing company, where we have a program that 
>>>> receives a lot of requests every second (counted in thousands) and 
>>>> logs its behaviour via a logger that runs on another server. We are 
>>>> currently trying to implement a connection-retry system between this 
>>>> program and its logging API. What we want is:
>>>>
>>>> - We have a main program - let's call it P
>>>> - P sends logs to the logger in multiple goroutines.
>>>> - Sometimes we might need to shut down the logger (for maintenance or 
>>>> anything)
>>>> - We want P to keep running when the logger's down
>>>> - Once the logger's up again, P must Dial it back automatically and 
>>>> repair the *bufio.Writer associated with it
>>>>
>>>> Would you guys know a way to avoid checking on every single Read/Write 
>>>> whether the logger's up?
>>>>
>>>> Up to here we have thought of using atomic, mutexes and context for 
>>>> synchronization, but the issues we face are the following:
>>>>
>>>> - mutexes create "pending" requests, since there's no way to check if a 
>>>> mutex is locked or not
>>>> - we're not really sure about the right way to use context for this 
>>>> specific case
>>>> - we'd like to avoid using atomics as much as possible, notably 
>>>> because of this quote from the docs: "*Except for special, low-level 
>>>> applications, synchronization is better done with channels or the 
>>>> facilities of the sync package*"
>>>>
>>>> In the end, what we're looking for is to reach a minimal checking 
>>>> frequency (is connection up? do something, else do nothing), the ideal 
>>>> being not to have to check anything.
>>>>
>>>> Have you guys already faced problems like this in the past? What 
>>>> solutions have you come up with?
>>>>
>>>> Many thx in advance for your help!
>>>>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
