In some email I received from Chris Calabrese, sie wrote:
 > 
 > 
 > Let's see...
 > 
 > Orange Book requires that the TCB shut down if it can't log, so that
 > jibes with Chris L's statement about the .gov wanting confirmed delivery.
 > Not all subsystems will have this requirement, however, and it will also
 > be difficult to do over UDP, so we want this to be optional in some way.
 > This probably means an option to openlog(3) and logger(1) to specify that
 > the logs must be verified, and an option in syslog.conf to say that a
 > particular log stream can only go over a verifiable connection.

I think we need to do what we can, at a protocol level, to support whatever
one needs to do at an application level, in order to facilitate confirmed
delivery.  If we can provide host to host confirmed delivery (and storage?)
then the applications exchanging messages may be able to provide feedback
to other programs about the success/failure of their logging.
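
To make the feedback idea concrete, here is a minimal sketch of confirmed delivery with sequence numbers and retries. All the names (`send_with_confirm`, `LossyTransport`) are illustrative, not from any syslog implementation; a real protocol would also need timeouts and duplicate suppression.

```python
# Sketch: sender tags each message with a sequence number and retries
# until a matching ack arrives, so callers can learn success/failure.

def send_with_confirm(message, seq, transport, max_retries=3):
    """Send a log message and wait for a matching ack; retry on silence."""
    for attempt in range(max_retries):
        transport.send({"seq": seq, "msg": message})
        ack = transport.recv_ack()          # None models a lost ack
        if ack is not None and ack.get("seq") == seq:
            return True                     # delivery confirmed
    return False                            # caller can refuse to proceed

class LossyTransport:
    """Toy transport that drops the first ack, to exercise the retry path."""
    def __init__(self):
        self.sent = []
        self.acks_dropped = 1
    def send(self, frame):
        self.sent.append(frame)
    def recv_ack(self):
        if self.acks_dropped > 0:
            self.acks_dropped -= 1
            return None
        return {"seq": self.sent[-1]["seq"]}

t = LossyTransport()
ok = send_with_confirm("user login failed", seq=7, transport=t)
```

With this shape, an application that demands verified logging (the openlog/syslog.conf option discussed above) can simply refuse to continue when `send_with_confirm` returns False.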

 > The courts are going to want chain-of-custody evidence, so that implies
 > digital signatures to prove where the data came from and that it has not
 > been altered.  This is both in transmission and in the log store.

Yup.  What can we do about the different steps in transmission to do this?
The host-host transmission problem is easy.  The initial process-syslogd
transmission, I'm not sure about (it's usually host-local IPC).  Arguably,
since unix domain sockets use the same protocol, it is within our charter,
but I'm not sure it makes sense unless the application does something
like ask the syslogd for an initial seeding key (or does it generate its
own temporary one and pass that to syslogd?).
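
Whichever way the key gets established, per-message integrity itself is straightforward. A minimal sketch, assuming the application and syslogd have already agreed on a shared key (the seeding-key question above is deliberately left out of scope), using an HMAC over each message:

```python
import hmac, hashlib

KEY = b"example-shared-key"   # illustrative only; real key exchange TBD

def sign_message(key, message):
    """Append a keyed MAC so the receiver can detect alteration."""
    mac = hmac.new(key, message.encode(), hashlib.sha256).hexdigest()
    return message + " MAC=" + mac

def verify_message(key, signed):
    """Recompute the MAC and compare in constant time."""
    message, _, mac = signed.rpartition(" MAC=")
    expected = hmac.new(key, message.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

signed = sign_message(KEY, "su: failed login on tty1")
```

A shared-key MAC only proves integrity between the two key holders; the chain-of-custody case for the courts would want public-key signatures on top, so a third party can verify origin without holding the key.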

 > Again, this means options in openlog(3), logger(1), and syslog.conf to
 > control this stuff.

That's taking a particularly unix-centric view of the situation.  I may
have a number of network devices which are old and don't do the new syslog
thing but I still need to provide assurance about what is received from
them.  Forget about openlog/logger, and other implementation issues.
Think protocol, think messages, forget Unix ;)

 > Since multiple systems are generating events at roughly the same time,
 > we need a timestamp to be able to show in what order events happened.

Yup.
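
For illustration, merging per-host streams into one ordered view is trivial once every event carries a timestamp. The hosts and times below are made up, and of course this ordering only means anything across hosts if their clocks are synchronized (e.g. via NTP):

```python
import heapq

# Each stream is already sorted by timestamp (epoch seconds, illustrative).
router = [(100.1, "router", "link down"), (100.4, "router", "link up")]
server = [(100.2, "server", "route lost"), (100.3, "server", "retrying")]

# heapq.merge compares tuples element-wise, so timestamp orders the result.
merged = list(heapq.merge(router, server))
```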

 > Since each process can generate more than one event, we need a process
 > identifier (PID, etc.) to show how messages group together.

Hrm.  I don't see how this, in itself, is important.  Granted, it's handy
when looking for things to kill, etc., but I don't see it as being anything
special.  What happens when you are sending/analyzing logs from a device
which has no concept of a process ID?  It's quite reasonable to expect
devices to know about time and their own name, but how much more?

 > Since some log information may be confidential we need encrypted
 > transmission.  Again, we're going to need options in openlog(3),
 > logger(1) and syslog.conf to control this.  Security of the log
 > store is an implementation issue that can be handled by encryption
 > or by other "system" security mechanisms.

I think you're pushing way too much back to the application here - more
than could be reasonably supported in a sane way.  Given this wording,
a syslogd would be sending both encrypted and unencrypted messages via
the same connection.
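
To see what that mixing forces on the receiver: every message would need a per-message flag saying whether its payload is encrypted. A toy sketch of that framing (the XOR "cipher" here is a deliberate stand-in, purely to show the framing problem, not a real mechanism):

```python
import base64

KEY = 0x5A  # toy key, illustrative only

def frame(message, encrypt):
    """Tag each message so mixed traffic on one connection is parseable."""
    if encrypt:
        payload = bytes(b ^ KEY for b in message.encode())
        return "E:" + base64.b64encode(payload).decode()
    return "P:" + message

def unframe(framed):
    """Receiver must branch on the per-message flag."""
    tag, body = framed[:2], framed[2:]
    if tag == "E:":
        payload = base64.b64decode(body)
        return bytes(b ^ KEY for b in payload).decode()
    return body

wire = [frame("disk full", encrypt=False),
        frame("password changed for root", encrypt=True)]
```

Negotiating encryption once per connection avoids all of this, which is why pushing the choice down to each openlog(3) call seems like the wrong layer.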

 > Since we need log event correlation to support things like intrusion
 > detection, we need standardized tags to correlate on (the ULM/XML idea).

That's a different problem (IDWG, even).

Darren
