It is, but then it wouldn't be able to process incoming emails from the other 
OTRS system. The system ID gets embedded in the full ticket "number", and it is 
primarily there to let separate OTRS systems (assuming the system IDs are 
different) send emails back and forth without clobbering tickets already in 
each other's databases.
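A rough sketch of the mechanism being described (this is illustrative Python, not OTRS's actual number generator, and the exact layout of date + system ID + counter is an assumption for demonstration):

```python
# Illustrative only: a ticket number that embeds the originating
# system's ID, in the style of OTRS's counter-based generators.

def make_ticket_number(date: str, system_id: str, counter: int) -> str:
    # e.g. "20100721" + "10" + "000042" -> "2010072110000042"
    return f"{date}{system_id}{counter:06d}"

def is_local(ticket_number: str, system_id: str, date_len: int = 8) -> bool:
    # A system recognizes its own tickets by the embedded system ID.
    # If two installations shared one ID, replies from the remote
    # system would match local ticket numbers and be misfiled.
    return ticket_number[date_len:date_len + len(system_id)] == system_id

num = make_ticket_number("20100721", "10", 42)
print(is_local(num, "10"))   # True  - our own ticket
print(is_local(num, "20"))   # False - belongs to the other system
```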

This is a good point. In our configuration, we assume the OTRS nodes are a 
“single system”.

Yep, but like I said, there is nothing stopping you from doing it; it does have 
its drawbacks, though. Like David said, if you are going to store this on disk 
then you will need a cluster filesystem, and I'm not entirely sure how that 
will work with multiple front-end servers. That's why storing it in the DB 
(even given its "issues") would be the easier option: the front end doesn't 
need to worry about being able to access the filesystem, it's all in the one 
place.
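For reference, the storage backend is selected in Kernel/Config.pm (the key names below are from memory of OTRS's SysConfig, so double-check against your version):

```perl
# Store article attachments in the database rather than on disk;
# with multiple front ends this avoids needing a shared filesystem.
$Self->{'Ticket::StorageModule'} =
    'Kernel::System::Ticket::ArticleStorageDB';
# The on-disk alternative is ArticleStorageFS, which is where the
# cluster-filesystem requirement comes in.
```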

It also puts all the data management policies in one architectural point in the 
system. One feature I’d really like to see in OTRS is some more abstraction in 
the attachment handling. It’d be really nice to have it just hand me an object 
and provide some standard methods to call. It would make implementing method 
overrides a lot easier.
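To make the wishlist concrete, here is a sketch of the kind of abstraction meant (purely hypothetical Python, not an existing OTRS API): callers get handed an object with standard methods, so a backend can be swapped by overriding one class.

```python
from abc import ABC, abstractmethod

class Attachment(ABC):
    """Hypothetical interface: what callers would program against."""

    @abstractmethod
    def filename(self) -> str: ...

    @abstractmethod
    def content(self) -> bytes: ...

class DBAttachment(Attachment):
    """One possible backend; a filesystem or external-DMS backend
    would subclass Attachment the same way."""

    def __init__(self, filename: str, data: bytes):
        self._filename = filename
        self._data = data

    def filename(self) -> str:
        return self._filename

    def content(self) -> bytes:
        return self._data

# Caller code never knows or cares which backend produced the object.
a: Attachment = DBAttachment("log.txt", b"hello")
print(a.filename(), len(a.content()))
```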

If you go with flat files, you need the cluster filesystem to make certain that 
fstat and dirent manipulation is atomic across systems. Given how much caching 
Unix/Linux systems do to avoid actual physical I/O, there are some race 
conditions that can occur in the OTRS code in this scenario without the cluster 
file system. That’s one reason we decided to go with the external object store; 
it put all the object identifier assignment behind the DMS API, which 
guaranteed unique IDs.  NFS just can’t provide those atomicity guarantees.
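The race in question is the classic check-then-create pattern; a minimal sketch (names are illustrative, not OTRS code):

```python
import os
import tempfile

def racy_claim(path: str) -> bool:
    """Not atomic: between the existence check and the create,
    another front end can see the same 'absent' state and claim
    the same ID -- especially with cached metadata on a shared mount."""
    if not os.path.exists(path):
        open(path, "w").close()
        return True
    return False

def atomic_claim(path: str) -> bool:
    """O_CREAT|O_EXCL makes creation itself the test, so exactly one
    caller succeeds. This is the atomicity guarantee that NFS
    historically could not provide, which is the gap noted above."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

d = tempfile.mkdtemp()
p = os.path.join(d, "ticket-0001")
print(atomic_claim(p))  # True  - first claimant wins
print(atomic_claim(p))  # False - second claimant is rejected
```

Pushing ID assignment behind a single API (a database sequence or a DMS call) sidesteps the whole problem, because only one authority ever hands out identifiers.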
---------------------------------------------------------------------
OTRS mailing list: otrs - Webpage: http://otrs.org/
Archive: http://lists.otrs.org/pipermail/otrs
To unsubscribe: http://lists.otrs.org/cgi-bin/listinfo/otrs
