Logs
Logs on Aztec are similar to logs on Ethereum: they enable smart contracts to convey arbitrary data to external entities. Offchain applications can use logs to interpret events that have occurred on-chain. There are three types of log: unencrypted logs, encrypted logs, and encrypted note preimages.
Requirements
- Availability: the logs get published. A rollup proof won't be accepted by the rollup contract if the log preimages are not available. Similarly, a sequencer cannot accept a transaction unless log preimages accompany the transaction data.
- Immutability: a log cannot be modified once emitted. The protocol ensures that once a proof is generated at any stage (for a function, transaction, or block), the emitted logs are tamper-proof: only the original log preimages can generate the committed hashes in the proof.
- Integrity: a contract cannot impersonate another contract. Every log is emitted by a specific contract, and users need assurance that a particular log was indeed generated by that contract (and not by a malicious impersonator contract). The protocol ensures that the source contract's address for a log can be verified, while also preventing the forging of that address.
Log Hash
Hash Function
The protocol uses SHA256 as the hash function for logs, and then reduces the 256-bit result to 248 bits for representation as a field element.
Throughout this page, hash(value) is an abbreviated form of truncate_to_field(SHA256(value)).
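A minimal sketch of this hash in TypeScript, using Node's crypto module. The choice of dropping the most significant byte of the digest to reach 248 bits is an assumption for illustration, not the protocol's normative encoding:

```typescript
import { createHash } from "node:crypto";

// Sketch of hash(value) = truncate_to_field(SHA256(value)).
// Assumption: the truncation drops the most significant byte of the
// 32-byte digest, leaving 248 bits that always fit in a field element.
function hashToField(value: Buffer): bigint {
  const digest = createHash("sha256").update(value).digest();
  const truncated = digest.subarray(1); // keep the low 31 bytes (248 bits)
  return BigInt("0x" + truncated.toString("hex"));
}

const h = hashToField(Buffer.from("example log preimage"));
console.log(h < (1n << 248n)); // prints true: the result fits in 248 bits
```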
Hashing
Regardless of the log type, the log hash is derived from an array of fields, calculated as:
hash(log_preimage[0], log_preimage[1], ..., log_preimage[N - 1])
Here, log_preimage is an array of field elements of length N, representing the data to be broadcast.
Emitting Logs from Function Circuits
A function can emit an arbitrary number of logs, provided they don't exceed the specified limit. The function circuits must compute a hash for each log and push all the hashes into the public inputs for further processing by the protocol circuits.
Aggregation in Protocol Circuits
To minimize the on-chain verification data size, protocol circuits aggregate log hashes. The end result is a single hash within the root rollup proof, encompassing all logs of the same type.
Each protocol circuit outputs two values for each log type:
- accumulated_logs_hash: a hash representing all logs.
- accumulated_logs_length: the total length of all log preimages.
In cases where two proofs are combined to form a single proof, the accumulated_logs_hash and accumulated_logs_length from the two child proofs must be merged into one accumulated value:
accumulated_logs_hash = hash(proof_0.accumulated_logs_hash, proof_1.accumulated_logs_hash)
- If either child hash is zero, the new hash is simply the other one: proof_0.accumulated_logs_hash | proof_1.accumulated_logs_hash.
accumulated_logs_length = proof_0.accumulated_logs_length + proof_1.accumulated_logs_length
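A sketch of this merge rule in TypeScript; hashToField is a stand-in for the protocol's truncated SHA256, and the fixed 32-byte input encoding of each child hash is an assumption for illustration:

```typescript
import { createHash } from "node:crypto";

// Stand-in for hash(a, b) = truncate_to_field(SHA256(a, b)); the
// truncation and the 32-byte big-endian input encoding are assumptions.
function hashToField(...inputs: bigint[]): bigint {
  const h = createHash("sha256");
  for (const x of inputs) h.update(x.toString(16).padStart(64, "0"), "hex");
  return BigInt("0x" + h.digest().subarray(1).toString("hex"));
}

interface Accumulated {
  logsHash: bigint;   // accumulated_logs_hash
  logsLength: number; // accumulated_logs_length
}

// Merge the accumulated values of two child proofs. If either child
// hash is zero, the parent adopts the other child's hash unchanged
// (the `|` case above) rather than hashing a zero into the result.
function merge(proof0: Accumulated, proof1: Accumulated): Accumulated {
  const logsHash =
    proof0.logsHash === 0n ? proof1.logsHash :
    proof1.logsHash === 0n ? proof0.logsHash :
    hashToField(proof0.logsHash, proof1.logsHash);
  return { logsHash, logsLength: proof0.logsLength + proof1.logsLength };
}
```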
The private and public kernel circuits not only aggregate logs from function calls; they also ensure that the address of the contract emitting the logs is linked to the logs_hash. For more details, refer to the "Hashing" sections in Unencrypted Log, Encrypted Log, and Encrypted Note Preimage.
Encoding
The encoded logs data of a transaction is a flattened array of all logs data within the transaction:
tx_logs_data = [number_of_logs, ...log_data_0, ...log_data_1, ...]
The format of log_data varies based on the log type. For specifics, see the "Encoding" sections in Unencrypted Log, Encrypted Log, and Encrypted Note Preimage.
The encoded logs data of a block is a flattened array of the above tx_logs_data for all its transactions, with hints facilitating hashing replay in a binary tree structure:
block_logs_data = [number_of_branches, number_of_transactions, ...tx_logs_data_0, ...tx_logs_data_1, ...]
- number_of_transactions is the number of leaves in the left-most branch, restricted to either 1 or 2.
- number_of_branches is the depth of the parent node of the left-most leaf.
Here is a step-by-step example of constructing the block_logs_data:
- A rollup, R01, merges two transactions: tx0 containing tx_logs_data_0, and tx1 containing tx_logs_data_1:
  block_logs_data: [0, 2, ...tx_logs_data_0, ...tx_logs_data_1]
  Where 0 is the depth of the node R01, and 2 is the number of aggregated tx_logs_data of R01.
- Another rollup, R23, merges two transactions: tx3 containing tx_logs_data_3, and tx2 without any logs:
  block_logs_data: [0, 1, ...tx_logs_data_3]
  Here, the number of aggregated tx_logs_data is 1.
- A rollup, RA, merges the two rollups R01 and R23:
  block_logs_data: [1, 2, ...tx_logs_data_0, ...tx_logs_data_1, 0, 1, ...tx_logs_data_3]
  The result is the block_logs_data of R01 concatenated with the block_logs_data of R23, with the number_of_branches of R01 incremented by 1. The updated value of number_of_branches (0 + 1) is also the depth of the node R01.
- A rollup, RB, merges the above rollup RA and another rollup R45:
  block_logs_data: [2, 2, ...tx_logs_data_0, ...tx_logs_data_1, 0, 1, ...tx_logs_data_3, 0, 2, ...tx_logs_data_4, ...tx_logs_data_5]
  The result is the concatenation of the block_logs_data from both rollups, with the number_of_branches of the left-side rollup, RA, incremented by 1.
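The four steps above reduce to one merge rule. A sketch, with each tx_logs_data abstracted as an opaque number array (a hypothetical stand-in for the real encoding):

```typescript
// block_logs_data = [number_of_branches, number_of_transactions, ...tx_logs_data...]

// A base rollup over two transactions starts at depth 0; transactions
// without logs contribute nothing, so number_of_transactions may be 1.
function baseRollup(txLogs: number[][]): number[] {
  const withLogs = txLogs.filter((t) => t.length > 0);
  return [0, withLogs.length, ...withLogs.flat()];
}

// Merging two rollups: increment the left side's number_of_branches,
// then append the right side's block_logs_data unchanged.
function mergeRollups(left: number[], right: number[]): number[] {
  return [left[0] + 1, ...left.slice(1), ...right];
}

// Reproducing the example: R01 and R23, then RA = R01 + R23.
const r01 = baseRollup([[10], [11]]); // [0, 2, 10, 11]
const r23 = baseRollup([[13], []]);   // [0, 1, 13]
const rA = mergeRollups(r01, r23);    // [1, 2, 10, 11, 0, 1, 13]
```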
Verification
Upon receiving a proof and its encoded logs data, the entity can ensure the correctness of the provided block_logs_data by verifying that the accumulated_logs_hash in the proof can be derived from it:
const accumulated_logs_hash = compute_accumulated_logs_hash(block_logs_data);
assert(accumulated_logs_hash == proof.accumulated_logs_hash);
assert(block_logs_data.accumulated_logs_length == proof.accumulated_logs_length);
function compute_accumulated_logs_hash(logs_data) {
  const number_of_branches = logs_data.read_u32();
  const number_of_transactions = logs_data.read_u32();
  let res = hash_tx_logs_data(logs_data);
  if (number_of_transactions == 2) {
    res = hash(res, hash_tx_logs_data(logs_data));
  }
  for (let i = 0; i < number_of_branches; ++i) {
    const res_right = compute_accumulated_logs_hash(logs_data);
    res = hash(res, res_right);
  }
  return res;
}
function hash_tx_logs_data(logs_data) {
  const number_of_logs = logs_data.read_u32();
  let res = hash_log_data(logs_data);
  for (let i = 1; i < number_of_logs; ++i) {
    const log_hash = hash_log_data(logs_data);
    res = hash(res, log_hash);
  }
  return res;
}
The accumulated_logs_length in block_logs_data is computed during the processing of each logs_data within hash_log_data(). The implementation of hash_log_data varies depending on the type of the logs being processed. Refer to the "Verification" sections in Unencrypted Log, Encrypted Log, and Encrypted Note Preimage for details.
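To make the replay concrete, here is a runnable sketch with a cursor-based reader. The hash is a toy stand-in for the protocol's truncated SHA256, and hash_log_data is simplified to the bare [log_preimage_length, ...log_preimage] shape (the encrypted-note-preimage case), purely for illustration:

```typescript
import { createHash } from "node:crypto";

// Toy stand-in for truncate_to_field(SHA256(...)); truncation and
// input encoding are assumptions for illustration.
function hash(...inputs: bigint[]): bigint {
  const h = createHash("sha256");
  for (const x of inputs) h.update(x.toString(16).padStart(64, "0"), "hex");
  return BigInt("0x" + h.digest().subarray(1).toString("hex"));
}

// Cursor-based reader matching the read_u32 / read_fields calls above.
class LogsData {
  accumulatedLogsLength = 0;
  private pos = 0;
  constructor(private data: bigint[]) {}
  readU32(): number {
    return Number(this.data[this.pos++]);
  }
  readFields(n: number): bigint[] {
    const out = this.data.slice(this.pos, this.pos + n);
    this.pos += n;
    return out;
  }
}

// Simplified hash_log_data: [log_preimage_length, ...log_preimage].
function hashLogData(logsData: LogsData): bigint {
  const logPreimageLength = logsData.readU32();
  logsData.accumulatedLogsLength += logPreimageLength;
  return hash(...logsData.readFields(logPreimageLength));
}

function hashTxLogsData(logsData: LogsData): bigint {
  const numberOfLogs = logsData.readU32();
  let res = hashLogData(logsData);
  for (let i = 1; i < numberOfLogs; ++i) {
    res = hash(res, hashLogData(logsData));
  }
  return res;
}

function computeAccumulatedLogsHash(logsData: LogsData): bigint {
  const numberOfBranches = logsData.readU32();
  const numberOfTransactions = logsData.readU32();
  let res = hashTxLogsData(logsData);
  if (numberOfTransactions === 2) {
    res = hash(res, hashTxLogsData(logsData));
  }
  for (let i = 0; i < numberOfBranches; ++i) {
    res = hash(res, computeAccumulatedLogsHash(logsData));
  }
  return res;
}
```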
Unencrypted Log
Unencrypted logs are used to communicate public information out of smart contracts. They can be emitted from both public and private functions.
Emitting unencrypted logs from private functions can leak private information. However, in-protocol restrictions are intentionally omitted to allow for potentially valuable use cases, such as custom encryption schemes utilizing Fully Homomorphic Encryption (FHE) and similar scenarios.
Hashing
Following the iterations of all private or public calls, the tail kernel circuits hash each log hash with the contract address before computing the accumulated_logs_hash.
- Hash the contract_address into each log_hash:
  log_hash_a = hash(contract_address_a, log_hash_a)
- Repeat the process for all log_hashes in the transaction.
- Accumulate all the hashes and output the final hash to the public inputs:
  accumulated_logs_hash = hash(log_hash[0], log_hash[1], ..., log_hash[N - 1])
  for N logs.
Encoding
The following represents the encoded data for an unencrypted log:
log_data = [log_preimage_length, contract_address, ...log_preimage]
Verification
function hash_log_data(logs_data) {
  const log_preimage_length = logs_data.read_u32();
  logs_data.accumulated_logs_length += log_preimage_length;
  const contract_address = logs_data.read_field();
  const log_preimage = logs_data.read_fields(log_preimage_length);
  const log_hash = hash(...log_preimage);
  return hash(contract_address, log_hash);
}
Encrypted Log
Encrypted logs contain information encrypted using the recipient's key. They can only be emitted from private functions. This restriction is due to the necessity of obtaining a secret for log encryption, which is challenging to manage privately in a public domain.
Hashing
Private kernel circuits ensure the association of the contract address with each encrypted log_hash. However, unlike unencrypted logs, submitting encrypted log preimages with their contract address poses a significant privacy risk. Therefore, instead of using the contract_address, a masked_contract_address is generated for each encrypted log_hash.
The masked_contract_address is a hash of the contract_address and a random value randomness, computed as:
masked_contract_address = hash(contract_address, randomness)
Here, randomness is generated in the private function circuit and supplied to the private kernel circuit. The value must be included in the preimage for encrypted log generation. The private function circuit is responsible for ensuring that the randomness differs for every encrypted log to avoid potential information linkage based on identical masked_contract_address.
After successfully decrypting an encrypted log, one can use the randomness in the log preimage, hash it with the contract_address, and verify it against the masked_contract_address to ascertain that the log originated from the specified contract.
- Mask the contract_address and hash it into each log_hash:
  masked_contract_address_a = hash(contract_address_a, randomness)
  log_hash_a = hash(masked_contract_address_a, log_hash_a)
- Repeat the process for all log_hashes in the transaction.
- Accumulate all the hashes in the tail kernel circuit and output the final hash to the public inputs:
  accumulated_logs_hash = hash(log_hash[0], log_hash[1], ..., log_hash[N - 1])
  for N logs, with the log hashes defined above.
Note that, in some cases, the user may want to reveal which contract address the encrypted log came from. Providing a randomness value of 0 signals that the address should not be masked, so in this case the log hash is simply:
log_hash_a = hash(contract_address_a, log_hash_a)
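The masking and the recipient-side check described above can be sketched as follows; hash2 is a stand-in for the protocol's truncated SHA256, with truncation and input encoding assumed for illustration:

```typescript
import { createHash } from "node:crypto";

// Stand-in for hash(a, b) = truncate_to_field(SHA256(a, b)); the
// truncation and input encoding are assumptions for illustration.
function hash2(a: bigint, b: bigint): bigint {
  const h = createHash("sha256");
  for (const x of [a, b]) h.update(x.toString(16).padStart(64, "0"), "hex");
  return BigInt("0x" + h.digest().subarray(1).toString("hex"));
}

// Prover side: mask the emitting contract's address with per-log randomness.
// A randomness of 0 signals an unmasked address, per the note above.
function maskContractAddress(contractAddress: bigint, randomness: bigint): bigint {
  return randomness === 0n ? contractAddress : hash2(contractAddress, randomness);
}

// Recipient side: after decrypting the log, recompute the mask from the
// randomness found in the preimage and compare against the published value.
function verifyLogOrigin(
  claimedAddress: bigint,
  randomnessFromPreimage: bigint,
  maskedContractAddress: bigint,
): boolean {
  return maskContractAddress(claimedAddress, randomnessFromPreimage) === maskedContractAddress;
}
```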
Encoding
The following represents the encoded data for an encrypted log:
log_data = [log_preimage_length, masked_contract_address, ...log_preimage]
Verification
function hash_log_data(logs_data) {
  const log_preimage_length = logs_data.read_u32();
  logs_data.accumulated_logs_length += log_preimage_length;
  const masked_contract_address = logs_data.read_field();
  const log_preimage = logs_data.read_fields(log_preimage_length);
  const log_hash = hash(...log_preimage);
  return hash(masked_contract_address, log_hash);
}
Encrypted Note Preimage
Similar to encrypted logs, encrypted note preimages are data that only entities possessing the keys can decrypt to view the plaintext. Unlike encrypted logs, each encrypted note preimage can be linked to a note, whose note hash can be found in the block data.
Note that a note can be "shared" with one or more recipients by emitting one or more encrypted note preimages. However, this is not mandatory, and there may be no encrypted preimages emitted for a note if the information can be obtained through alternative means.
Hashing
As each encrypted note preimage can be associated with a note in the same transaction, enforcing a masked_contract_address is unnecessary. Instead, the recipient can compute the note_hash from the decrypted note preimage hashed with the contract_address, and verify it against the block data to confirm that the note was emitted from the specified contract.
The kernel circuit simply accumulates all the hashes:
accumulated_logs_hash = hash(log_hash[0], log_hash[1], ..., log_hash[N - 1])
for N logs.
Encoding
The following represents the encoded data for an encrypted note preimage:
log_data = [log_preimage_length, ...log_preimage]
Verification
function hash_log_data(logs_data) {
  const log_preimage_length = logs_data.read_u32();
  logs_data.accumulated_logs_length += log_preimage_length;
  const log_preimage = logs_data.read_fields(log_preimage_length);
  return hash(...log_preimage);
}
Log Encryption
Refer to Private Message Delivery for detailed information on generating encrypted data.