|Trevor Perrin||Mar 18, 2003 1:31 pm|
|Dimitri Andivahis||Mar 19, 2003 3:12 pm|
|Trevor Perrin||Mar 19, 2003 8:35 pm|
|Dimitri Andivahis||Mar 20, 2003 4:27 pm|
|jmessing||Mar 20, 2003 4:46 pm|
|Trevor Perrin||Mar 20, 2003 7:41 pm|
|jmessing||Mar 20, 2003 8:42 pm|
|Robert Zuccherato||Mar 21, 2003 7:09 am|
|Robert Zuccherato||Mar 21, 2003 7:36 am|
|Trevor Perrin||Mar 21, 2003 3:10 pm|
|Dimitri Andivahis||Mar 21, 2003 3:35 pm|
|Dimitri Andivahis||Mar 21, 2003 4:07 pm|
|Trevor Perrin||Mar 21, 2003 6:24 pm|
|Nick Pope||Mar 22, 2003 6:58 am|
|Robert Zuccherato||Mar 24, 2003 7:40 am|
|Robert Zuccherato||Mar 24, 2003 7:44 am|
|Robert Zuccherato||Mar 24, 2003 7:51 am|
|Nick Pope||Mar 24, 2003 8:28 am|
|Trevor Perrin||Mar 24, 2003 12:03 pm|
|Gregor Karlinger||Mar 25, 2003 7:39 am||.bin|
|Gregor Karlinger||Mar 25, 2003 8:05 am||.bin|
|kare...@esat.kuleuven.ac.be||Mar 25, 2003 8:38 am|
|Trevor Perrin||Mar 25, 2003 10:48 am|
|Nick Pope||Mar 25, 2003 11:34 am|
|Robert Zuccherato||Mar 27, 2003 11:08 am|
|Gregor Karlinger||Mar 31, 2003 12:07 am||.bin|
|Nick Pope||Mar 31, 2003 4:42 am|
|Dimitri Andivahis||Apr 1, 2003 3:24 pm|
|Karel Wouters||Apr 2, 2003 4:21 am|
|Trevor Perrin||Apr 3, 2003 11:47 am|
|Robert Zuccherato||Apr 3, 2003 11:49 am|
|Robert Zuccherato||Apr 3, 2003 12:29 pm|
|Trevor Perrin||Apr 3, 2003 2:06 pm|
|Dimitri Andivahis||Apr 4, 2003 5:57 am|
|Dimitri Andivahis||Apr 4, 2003 3:00 pm|
|Dimitri Andivahis||Apr 4, 2003 3:24 pm|
|Trevor Perrin||Apr 4, 2003 11:39 pm|
|Trevor Perrin||Apr 7, 2003 11:56 am|
|Subject:||RE: [dss] Timestamping|
|From:||Dimitri Andivahis (dimi...@surety.com)|
|Date:||Apr 1, 2003 3:24:45 pm|
Since different claims have been made about the linking timestamping schemes, I'm providing a short description that is consistent with the ISO/IEC 18014-3 terminology and definitions. I will try to cover some of the salient features of different linking schemes as they are used in the real world, and provide the rationale for the different encapsulations (digested and signed data) in ISO/IEC 18014-3. To keep things simple, I'm ignoring the effect of aggregation until the end.
A TSA practicing a simple linking scheme (linear chain, simple BLS, etc.) may be implemented so that it maintains a database with two sequences of values, the A values and the S values. For a given timestamp request at time T(i), the TSA computes a timestamp info object (i.e., time value, hash, serial number, tsa name, policy and so on), then it computes the following two values:
- A(i), computed by hashing the timestamp info object, and
- S(i), computed by hashing over inputs A(i), S(i-1) and possibly multiple other previous values S(j), j<i-1.
The TSA adds A(i) and S(i) to its database of A and S values. It then generates a BindingInfo object, containing A(i), S(i-1) and all other S(j), j<i-1, that were used for computing S(i). Finally, it generates the timestamp token, encapsulating the timestamp info object and the BindingInfo object in a digested data object. The timestamp token is returned to the requestor over a channel providing data integrity and origin authentication (for example, by using keys that do not need to last longer than the duration of the transaction).
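To make the issuance step concrete, here is a minimal sketch in Python, assuming the simplest case of a linear chain where S(i) is computed over A(i) and S(i-1) only. The class and field names, the `genesis` seed, and the use of SHA-256 are all illustrative assumptions on my part, not taken from 18014-3:

```python
# Hypothetical sketch of simple linked-timestamp issuance.
# Assumptions (not from 18014-3): linear chain, SHA-256 as the
# TSA's hash function, dicts standing in for ASN.1 structures.
import hashlib

def h(*parts: bytes) -> bytes:
    """Hash a concatenation of byte strings."""
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

class LinkingTSA:
    def __init__(self, seed: bytes = b"genesis"):
        self.a_values: list[bytes] = []
        self.s_values: list[bytes] = [seed]  # S(0), an initial value

    def issue(self, tstinfo: bytes) -> dict:
        """Issue a token for an encoded timestamp info object."""
        a_i = h(tstinfo)              # A(i) = hash of the info object
        s_prev = self.s_values[-1]    # S(i-1)
        s_i = h(a_i, s_prev)          # S(i) = hash(A(i) || S(i-1))
        self.a_values.append(a_i)
        self.s_values.append(s_i)
        # BindingInfo carries A(i) plus the S values used to compute S(i);
        # the pair below stands in for the digested-data encapsulation.
        return {"tstinfo": tstinfo,
                "binding": {"a": a_i, "links": [s_prev]}}
```

A scheme that links over multiple earlier S(j) values would simply feed those extra values into the hash and store them all in the "links" list.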
The TSA maintains all A and S values that were ever generated in the process above, and publishes values derived from the S values in widely available media at regular intervals (every day or week). Any 3rd party with access to the A and S values can verify the correctness of the computation of the S values, as well as the correctness of the published values derived from them.
The data integrity of the database of A and S values must be ensured during operations. The process of publishing the values must be done in an authenticated manner. However, none of the above requires that the timestamp tokens issued to the requestors by the TSA be protected by a signature, assuming the TSA operations are audited and the correctness of the computation of the published values is verified.
Verification of a token issued by a TSA implementing a simple linking scheme is done by carrying out a verification protocol with the issuing TSA (listed in the "tsa" field in the timestamp info object). The token is submitted to the TSA; the TSA examines the BindingInfo object in the token, verifies that the A value in BindingInfo matches the timestamp info object, combines the A value with the S values stored in BindingInfo, and compares the result with the appropriate S value in its own database of S values. The lookup operation uses either the time value or the serial number of the timestamp info object as its index key. If the value computed from the token matches the S value looked up in the database, the TSA returns a success code; otherwise it returns a failure code. If the verification succeeds and the index key is the time value, the time value associated with the matched S value in the database is equal to the time value in the token. All communication must be done over a channel providing data integrity and origin authentication (for example, by using keys that do not need to last longer than the duration of the transaction).
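The TSA-side check can be sketched like this, under the same illustrative assumptions as before (linear chain, SHA-256, dicts standing in for the real structures, lookup by serial number):

```python
# Hypothetical sketch of token verification at the TSA.
# Assumptions: linear chain, SHA-256, and a dict mapping serial
# numbers to stored S values standing in for the TSA's database.
import hashlib

def h(*parts: bytes) -> bytes:
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

def verify_token(token: dict, s_database: dict) -> bool:
    """Check a submitted token against the TSA's database of S values."""
    binding = token["binding"]
    # 1. The A value in BindingInfo must match the timestamp info object.
    if h(token["tstinfo"]) != binding["a"]:
        return False
    # 2. Recombine A(i) with the S values stored in BindingInfo.
    s_i = h(binding["a"], *binding["links"])
    # 3. Compare with the S value looked up by the token's serial number.
    return s_database.get(token["serial"]) == s_i
```

Any tampering with the time value in the info object (step 1) or with the A value (steps 2-3) makes the comparison fail, which is the point made further below.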
The data type definitions in 18014-3 also support accumulated linking schemes, which are variants of corresponding simple linking schemes. In this setup, the TSA operation is very similar to what was described previously; however, the TSA doesn't maintain forever in its online database all the A and S values it ever computed. Instead, it maintains recent A and S values, as well as the A and S values corresponding to the reduced subsequence of timestamping "rounds". The complete sequences of the A and S values may be in deep storage or may even be deleted after a retention period, as specified in the TSA's practices statement. Timestamp tokens issued within the same round typically have different time values associated with them. The TSA may be publishing to widely available media additional values derived from the S values in the reduced subsequence, or may be publishing all S values of the reduced subsequence. Any 3rd party with access to the reduced subsequences of A and S values can verify the correctness of the computation of the S values in the reduced subsequence, as well as that of the published values.
Accumulated schemes may be implemented using an asynchronous request protocol. Alternatively, a synchronous protocol may be used for the timestamp requests, and the requestor has to contact the TSA again after the next S "round" value in the reduced subsequence is generated. At that point, the TSA populates the "publish" field in the BindingInfo object of the token with data binding the already issued token to the immediately following S "round" value. Timestamp tokens generated synchronously under accumulated linking schemes are typically encapsulated as signed data when they are first generated by the TSA: the TSA signs the timestamp info object and includes the BindingInfo object in the signed attributes. This is to ensure that a form of timestamp token verification is available until the timestamp token is refreshed and the "publish" field of the BindingInfo is populated. Once the "publish" field is populated, the token can be verified as a purely linked token thereafter. The verification protocol is similar to what was described above for simple linking schemes, except that the TSA uses the A value and the "publish" field in BindingInfo to compute the value that gets compared against the appropriate S "round" value in its own database.
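The refresh-then-verify flow for the accumulated case can be sketched as below. Note this is my own illustrative reading of how the "publish" field could work in the simplest linear case: I assume it holds the later A values that chain S(i) forward to the round value, and SHA-256 stands in for the scheme's hash function.

```python
# Hypothetical sketch of the synchronous accumulated flow.
# Assumptions (mine, not from 18014-3): linear chain within the round,
# SHA-256, and a "publish" field holding the later A values needed to
# chain the token's S(i) forward to the next S "round" value.
import hashlib

def h(*parts: bytes) -> bytes:
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

def refresh(token: dict, links_to_round: list[bytes]) -> dict:
    """Populate the publish field once the next round value exists."""
    token["binding"]["publish"] = links_to_round
    return token

def verify_against_round(token: dict, round_value: bytes) -> bool:
    """Recompute from A(i) through the publish links and compare
    the result with the stored S round value."""
    cur = h(token["binding"]["a"], *token["binding"]["links"])  # S(i)
    for a_later in token["binding"]["publish"]:
        cur = h(a_later, cur)  # chain forward through later entries
    return cur == round_value
```

Until `refresh` is called, the requestor relies on the TSA's signature over the info object and BindingInfo; afterwards the token verifies as a purely linked token.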
In all linking schemes, simple or accumulated, if the verification result code is success, the timestamp is proved to have participated in the linking operation of the TSA at the time value included in the token itself. This is true even in the case of the accumulated schemes, where the time value associated with the matching S "round" value in the database is not the same as the one in the token. If anybody tampers with the time value in the timestamp info object and/or the A value in BindingInfo, the token will fail to verify against the TSA's database of S values.
Any verification protocol with a TSA or other authorized 3rd party assumes that the verifier has established through out-of-band means that the specific TSA or 3rd party may authoritatively speak for the tokens issued in the name of the TSA named in the "tsa" field of the timestamp token.
ISO/IEC 18014-3 doesn't specify a protocol for refreshing the "publish" field in BindingInfo. It may be possible that a single publish request protocol could be used to return the data binding the timestamp token either to a next "round" S value (in the accumulated schemes) or to a "published" value (in simple and accumulated schemes).
If the TSA uses aggregation, the TSA operations are slightly more complicated during timestamp token generation. Instead of storing the A values in its database, the TSA stores the results of the aggregation of the A values from all timestamp info objects containing the same time value. Each aggregated value authenticates all A values participating in the aggregation. An additional field in BindingInfo ("aggregate") accounts for the binding of the A value to the corresponding aggregated value. Note that accumulated schemes using a signed data encapsulation for the timestamp tokens don't benefit at all from aggregation, since each timestamp info object in the aggregation must be signed separately.
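Aggregation is commonly done with a Merkle tree, and that is what I sketch below; 18014-3 itself leaves the aggregation construction to the scheme, so the tree shape, the "aggregate" path encoding, and SHA-256 are all illustrative assumptions:

```python
# Hypothetical sketch of round aggregation via a Merkle tree.
# Assumptions: SHA-256, odd nodes paired with themselves, and the
# BindingInfo "aggregate" field modeled as a list of (side, sibling)
# steps from a leaf A value up to the aggregated (root) value.
import hashlib

def h(*parts: bytes) -> bytes:
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

def aggregate(a_values):
    """Return the aggregated value for a round, plus one path per
    A value suitable for that token's "aggregate" field."""
    paths = [[] for _ in a_values]
    level = list(a_values)
    members = [[i] for i in range(len(a_values))]  # leaves under each node
    while len(level) > 1:
        next_level, next_members = [], []
        for j in range(0, len(level), 2):
            left = level[j]
            right = level[j + 1] if j + 1 < len(level) else left
            for i in members[j]:
                paths[i].append(("R", right))   # sibling sits to the right
            if j + 1 < len(level):
                for i in members[j + 1]:
                    paths[i].append(("L", left))  # sibling sits to the left
            next_level.append(h(left, right))
            grouped = members[j] + (members[j + 1] if j + 1 < len(level) else [])
            next_members.append(grouped)
        level, members = next_level, next_members
    return level[0], paths

def check_aggregate(a_value: bytes, path, root: bytes) -> bool:
    """Recompute the aggregated value from one A value and its path."""
    cur = a_value
    for side, sibling in path:
        cur = h(cur, sibling) if side == "R" else h(sibling, cur)
    return cur == root
```

With this shape, the TSA stores only the per-round aggregated values in its database, and each token's short path authenticates its A value against the round's aggregated value.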