RDBArchiveEngine
Fundamentally, the ArchiveEngine operates as the previous Channel Archiver's Archive Engine did, so the manual on https://ics-web.sns.ornl.gov/kasemir/archiver/index.html still has some relevance.
Differences:
- Implemented in Java (CSS, Eclipse) instead of C++
- Writes to an RDB (Oracle, MySQL, maybe soon Hypertable) instead of the binary data files used by the Channel Archiver
- Configured from RDB instead of XML files
Compared to the Channel Archiver, the configuration options of the RDB Archive Engine have a few differences:
The old engine supported 'disabling' channels, while the new engine supports 'enabling' channels. The meaning is exactly reversed, because the positive 'enabling' seemed easier to understand than the negative 'disabling'.
Monitored sampling works as before, and its use is strongly encouraged.
The 'get_threshold' is no longer used. Scanned channels are always internally based on monitors. Since scanned operation usually either misses short value changes or needlessly stores the same unchanged value, you are strongly encouraged to use Monitored operation with an appropriate ADEL on the IOC side.
If the IOC cannot be configured with a suitable ADEL, the archive engine itself can now perform the deadband check via the "smpl_val" parameter. This, however, is a last resort for Channel Access servers, for example those based on LabVIEW, that cannot perform a proper deadband check. To minimize network traffic and archive engine CPU load, the deadband check should happen in the IOC.
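For illustration, here is a minimal sketch of the kind of value deadband check that "smpl_val" enables. The class and field names are hypothetical, not the engine's actual code:

```java
/** Illustrative value deadband filter, similar in spirit to what 'smpl_val' enables */
public class DeadbandFilter
{
    private final double deadband;
    private Double lastWritten = null;

    public DeadbandFilter(double deadband)
    {
        this.deadband = deadband;
    }

    /** @return true if the value changed by at least the deadband and should be archived */
    public boolean check(double value)
    {
        if (lastWritten == null  ||  Math.abs(value - lastWritten) >= deadband)
        {
            lastWritten = value;
            return true;
        }
        return false;  // Suppress: change smaller than deadband
    }
}
```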
The time span for ignoring samples with time stamps too far in the future used to be configurable via 'ignored_future'; for now it is fixed at "1 day".
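As a sketch of that check, a sample whose time stamp lies more than one day ahead of the host clock would simply be ignored. The class and method names below are made up for illustration:

```java
import java.time.Duration;
import java.time.Instant;

/** Illustrative check for samples with unreasonable future time stamps */
public class FutureSampleCheck
{
    /** Fixed limit, matching the "1 day" mentioned above */
    private static final Duration IGNORED_FUTURE = Duration.ofDays(1);

    /** @return true if the sample time stamp is acceptable for archiving */
    public static boolean isTimestampValid(Instant sampleTime)
    {
        return sampleTime.isBefore(Instant.now().plus(IGNORED_FUTURE));
    }
}
```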
While the Archive Engine fundamentally performs one SQL 'INSERT' per sample, it submits these in batches. For example, with the default batch_size=500 it will collect 500 INSERTs and then perform a JDBC executeBatch(). This can be a substantial performance improvement over separate INSERT/execute/INSERT/execute operations. See JDBC documentation for details.
This does not override the write_period: The engine will still perform writes at that period. If, for example, 100 values have accumulated when the write period expires, they will be written as one batch. If on the other hand 1100 values have accumulated, 500 will be written in one batch, then another 500, and finally 100 in a last batch, after which the engine waits for the next write period.
Technically, inserts for double-typed data differ from inserts for, say, enum-typed data, so internally the engine will batch up to 500 double inserts, up to 500 enum inserts, and so on.
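A minimal sketch of what such batched inserts can look like with plain JDBC, assuming a hypothetical sample table with channel id, time stamp, and double value columns. This is not the engine's actual code; it only shows the addBatch()/executeBatch() pattern and the flush at batch_size:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import java.time.Instant;
import java.util.List;

/** Illustrative batched insert of double-typed samples (table and column names are hypothetical) */
public class SampleBatchWriter
{
    private static final int BATCH_SIZE = 500;

    public void write(Connection connection, List<Sample> samples) throws Exception
    {
        try (PreparedStatement insert = connection.prepareStatement(
                "INSERT INTO sample (channel_id, smpl_time, float_val) VALUES (?, ?, ?)"))
        {
            int batched = 0;
            for (Sample sample : samples)
            {
                insert.setInt(1, sample.channelId);
                insert.setTimestamp(2, Timestamp.from(sample.time));
                insert.setDouble(3, sample.value);
                insert.addBatch();
                // Flush a full batch; remaining samples go into the next batch
                if (++batched >= BATCH_SIZE)
                {
                    insert.executeBatch();
                    batched = 0;
                }
            }
            if (batched > 0)
                insert.executeBatch();   // Final, partial batch
        }
    }

    /** Minimal sample holder for the example */
    public static class Sample
    {
        final int channelId;
        final Instant time;
        final double value;

        public Sample(int channelId, Instant time, double value)
        {
            this.channelId = channelId;
            this.time = time;
            this.value = value;
        }
    }
}
```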
While batched JDBC processing increases performance, the disadvantage is that one error in 500 batched inserts causes all 500 inserts to fail, i.e. those samples are lost. Especially with older Oracle versions there was in fact no way to determine which insert failed, and the engine will not try to re-insert each sample individually. Instead it will disconnect, try to reconnect, and then resume with new samples once the connection succeeds.
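A hedged outline of that "drop the failed batch, reconnect, continue with new samples" behavior; the class and method names are made up for illustration and do not reflect the engine's actual code:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

/** Illustrative outline: on a batch error, discard the batch, close the connection, retry later */
public abstract class ReconnectingWriter
{
    private final String url, user, password;
    private Connection connection;

    protected ReconnectingWriter(String url, String user, String password)
    {
        this.url = url;
        this.user = user;
        this.password = password;
    }

    /** Subclass performs the actual batched INSERTs */
    protected abstract void performBatchedInserts(Connection connection) throws SQLException;

    /** Attempt one write cycle; on error, the current batch is lost and the connection is re-opened */
    public void writeCycle() throws InterruptedException
    {
        try
        {
            if (connection == null)
                connection = DriverManager.getConnection(url, user, password);
            performBatchedInserts(connection);
        }
        catch (SQLException ex)
        {   // The whole batch is lost. Close the connection and retry
            // the connection (not the lost samples) on the next cycle.
            try
            {
                if (connection != null)
                    connection.close();
            }
            catch (SQLException ignore) { /* already broken */ }
            connection = null;
            Thread.sleep(10_000);  // Back off before the next attempt
        }
    }
}
```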
The engine uses the CSS logger based on Log4J, and can thus log to a console, file, or JMS. In addition, it has 'throttles' that reduce the number of log messages: a message of a given kind is logged once, then further messages of the same kind are suppressed for some time, with a note like "... More messages suppressed for 2.00 sec ...".
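A minimal sketch of such a message throttle, using java.util.logging purely for illustration; it is not the CSS logger API, only the "log once, suppress repeats for some time" idea:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

/** Illustrative log throttle: log the first message, suppress repeats for a while */
public class ThrottledLogger
{
    private final Logger logger = Logger.getLogger(ThrottledLogger.class.getName());
    private final long suppressMillis;
    private long lastLogged = 0;
    private int suppressed = 0;

    public ThrottledLogger(double suppressSeconds)
    {
        this.suppressMillis = Math.round(suppressSeconds * 1000.0);
    }

    /** Log a message of one 'kind'; repeats within the suppression period are only counted */
    public synchronized void log(String message)
    {
        final long now = System.currentTimeMillis();
        if (now - lastLogged >= suppressMillis)
        {
            if (suppressed > 0)
                logger.log(Level.WARNING, "... {0} more messages suppressed for {1} sec ...",
                           new Object[] { suppressed, suppressMillis / 1000.0 });
            logger.log(Level.WARNING, message);
            lastLogged = now;
            suppressed = 0;
        }
        else
            ++suppressed;
    }
}
```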