I am evaluating Graylog to manage both log analysis and alerting for our 
applications.  Right now our Azure web applications write structured logs 
(in JSON) to file storage, and I am trying to get those logs into Graylog.

I created an input, ingested some logs, and then created some extractors to 
pull information out of those logs.  However, since those logs were 
ingested before the extractors existed, I need to re-ingest them so that 
they are run through the extractors this time.

Given our setup, it's quite possible I will need to re-ingest logs into 
Graylog, and I would like to eliminate the possibility of duplicates 
skewing my statistics.  While experimenting with the ELK stack I did this 
by telling Logstash to set the document_id property in the Elasticsearch 
output to an ID I specified (in this case every event I generate has a GUID 
field that can be used to uniquely identify each message).
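For reference, this is roughly what the Logstash approach looked like -- a 
minimal sketch, assuming the GUID lives in a field called "guid" (the field 
name is just an example from my setup):

```
output {
  elasticsearch {
    hosts       => ["localhost:9200"]
    index       => "app-logs"
    # Use our own GUID as the Elasticsearch document ID, so re-ingesting
    # the same event overwrites the existing document instead of creating
    # a duplicate.
    document_id => "%{guid}"
  }
}
```

With that in place, re-running the same log file is idempotent: duplicate 
events simply update their existing documents. I'm looking for an 
equivalent behaviour in Graylog.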

However, when I tried to create an extractor that took my Id field and 
copied it to _id, I got a 400 error, so it seems this is explicitly 
disallowed by Graylog.

Does Graylog have any detection of duplicate messages (overwriting the 
existing message on re-ingest), and if not, is there any way to force an ID 
onto a message via an extractor?

-- 
You received this message because you are subscribed to the Google Groups 
"Graylog Users" group.