package legacy

v0.1.4
Warning

This package is not in the latest version of its module.

Published: Oct 13, 2020 License: BSD-3-Clause Imports: 8 Imported by: 10

Documentation

Overview

Binary Merkle Tree Hash is a hash function over arbitrary data chunks of limited size. It is defined as the root hash of the binary Merkle tree built over fixed-size segments of the underlying chunk using any base hash function (e.g., Keccak256 SHA3). Chunks with data shorter than the fixed size are hashed as if they had zero padding.

The BMT hash is used as the chunk hash function in Swarm, which in turn is the basis for the 128-branching Swarm hash: http://swarm-guide.readthedocs.io/en/latest/architecture.html#swarm-hash

The BMT is optimal for providing compact inclusion proofs, i.e. proving that a segment is a substring of a chunk starting at a particular offset. The size of the underlying segments is fixed to the size of the base hash (called the resolution of the BMT hash); using Keccak256 SHA3 this is 32 bytes, the EVM word size, which optimizes for on-chain BMT verification as well as for inclusion proofs in the Merkle tree of the Swarm hash.

Two implementations are provided:

RefHasher is optimized for code simplicity and meant as a reference implementation that is simple to understand

Hasher is optimized for speed, taking advantage of concurrency with a minimal control structure to coordinate the concurrent routines

BMT Hasher implements the following interfaces:

standard Go hash.Hash - synchronous, reusable

io.Writer - synchronous left-to-right data writer
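
A minimal usage sketch under stated assumptions: the import path and the 128-segment (4096-byte) chunk size are assumptions not given on this page, and error handling is illustrative only. It constructs a TreePool, pulls a Hasher with New, sets the span, writes the data and reads the root hash with Sum.

package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"

	"github.com/ethersphere/bmt/legacy" // import path is an assumption
)

func main() {
	// A pool of trees shared by hashers: 128 segments of 32 bytes matches the
	// 4096-byte Swarm chunk size mentioned in the overview (an assumption here).
	pool := legacy.NewTreePool(sha3.NewLegacyKeccak256, 128, legacy.PoolSize)

	h := legacy.New(pool)
	data := []byte("hello bmt")

	// The span prefix records the length of data subsumed under the chunk.
	if err := h.SetSpan(int64(len(data))); err != nil {
		panic(err)
	}
	if _, err := h.Write(data); err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", h.Sum(nil))
}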

Index

Constants

const (
	// PoolSize is the maximum number of BMT trees used by the hashers, i.e.,
	// the maximum number of concurrent BMT hashing operations performed by the same hasher
	PoolSize = 8
)

Variables

var (
	ZeroSpan = make([]byte, 8)
)

Functions

func LengthToSpan

func LengthToSpan(length int64) []byte

LengthToSpan creates a binary data span size representation. It is required for calculating the BMT hash.
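
A sketch of how such a span could be produced, assuming an 8-byte little-endian encoding of the length (the 8-byte ZeroSpan variable suggests the width; the byte order and the helper name are assumptions, the package's own LengthToSpan is authoritative). It uses encoding/binary.

// lengthToSpanSketch mirrors what LengthToSpan is assumed to do: encode the
// length as an 8-byte little-endian unsigned integer.
func lengthToSpanSketch(length int64) []byte {
	span := make([]byte, 8)
	binary.LittleEndian.PutUint64(span, uint64(length))
	return span
}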

Types

type BaseHasherFunc

type BaseHasherFunc func() hash.Hash

BaseHasherFunc is a hash.Hash constructor function used for the base hash of the BMT. It is implemented by Keccak256 SHA3 (sha3.NewLegacyKeccak256).
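
For illustration, any func() hash.Hash satisfies this type; sha3.NewLegacyKeccak256 from golang.org/x/crypto/sha3 can be assigned directly (reusing the imports from the sketch in the Overview).

// NewLegacyKeccak256 has the signature func() hash.Hash, so it can be used
// as the base hash constructor when building a TreePool.
var base legacy.BaseHasherFunc = sha3.NewLegacyKeccak256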

type Hasher

type Hasher struct {
	// contains filtered or unexported fields
}

Hasher is a reusable hasher for fixed maximum size chunks representing a BMT. It reuses a pool of trees for amortised memory allocation and resource control, and supports order-agnostic concurrent segment writes and section (double segment) writes as well as sequential read and write.

The same hasher instance must not be called concurrently on more than one chunk.

The same hasher instance is synchronously reusable.

Sum gives the tree back to the pool and is guaranteed to leave the tree and itself in a state reusable for hashing a new chunk.

func New

func New(p *TreePool) *Hasher

New creates a reusable BMT Hasher that pulls a new tree from a resource pool for hashing each chunk.

func (*Hasher) BlockSize

func (h *Hasher) BlockSize() int

BlockSize returns the optimal write size to the Hasher

func (*Hasher) Capacity

func (h *Hasher) Capacity() int

Capacity returns the maximum number of bytes that will be processed by this hasher implementation.

func (*Hasher) GetZeroHash

func (h *Hasher) GetZeroHash() []byte

GetZeroHash returns the zero hash of the full depth of the Hasher instance.

func (*Hasher) Reset

func (h *Hasher) Reset()

Reset prepares the Hasher for reuse

func (*Hasher) SetSpan

func (h *Hasher) SetSpan(length int64) error

SetSpan sets the span length value prefix in numeric form for the current hash operation.

func (*Hasher) SetSpanBytes added in v0.1.2

func (h *Hasher) SetSpanBytes(b []byte) error

SetSpanBytes sets the span length value prefix in bytes for the current hash operation.
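
A sketch of the assumed equivalence between SetSpan and SetSpanBytes paired with LengthToSpan; the helper name is hypothetical and the identifiers reuse the imports from the Overview sketch.

// setSpanFromLength sets the span prefix from a numeric length via its
// byte representation; assumed equivalent to calling SetSpan(length).
func setSpanFromLength(h *legacy.Hasher, length int64) error {
	return h.SetSpanBytes(legacy.LengthToSpan(length))
}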

func (*Hasher) Size

func (h *Hasher) Size() int

Size returns the digest size of the hash

func (*Hasher) Sum

func (h *Hasher) Sum(b []byte) (s []byte)

Sum returns the BMT root hash of the buffer. Using Sum presupposes sequential synchronous writes (io.Writer interface).
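
A sketch of reusing one Hasher for several chunks, calling Reset between hashes as the Hasher documentation above allows; the helper name and error handling are illustrative only, and the legacy import is the assumption from the Overview sketch.

// hashChunks hashes each chunk sequentially with a single reusable Hasher.
func hashChunks(h *legacy.Hasher, chunks [][]byte) ([][]byte, error) {
	roots := make([][]byte, 0, len(chunks))
	for _, chunk := range chunks {
		h.Reset()
		if err := h.SetSpan(int64(len(chunk))); err != nil {
			return nil, err
		}
		if _, err := h.Write(chunk); err != nil {
			return nil, err
		}
		roots = append(roots, h.Sum(nil))
	}
	return roots, nil
}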

func (*Hasher) Write

func (h *Hasher) Write(b []byte) (int, error)

Write calls sequentially add to the buffer to be hashed; with every full segment, writeSection is called in a goroutine.

NOTE: This legacy implementation has no error handling for the writer. Use with caution.

func (*Hasher) WriteSection

func (h *Hasher) WriteSection(idx int, data []byte) error

WriteSection is not in use for this implementation.

type TreePool

type TreePool struct {
	SegmentSize  int // size of leaf segments, stipulated to be = hash size
	SegmentCount int // the number of segments on the base level of the BMT
	Capacity     int // pool capacity, controls concurrency
	Depth        int // depth of the bmt trees = int(log2(segmentCount))+1
	Size         int // the total length of the data (count * size)
	// contains filtered or unexported fields
}

TreePool provides a pool of trees used as resources by the BMT Hasher. A tree popped from the pool is guaranteed to have a clean state ready for hashing a new chunk.

func NewTreePool

func NewTreePool(hasher BaseHasherFunc, segmentCount, capacity int) *TreePool

NewTreePool creates a tree pool with hasher, segment size, segment count and capacity. On Hasher.getTree it reuses free trees or creates a new one if capacity is not reached.

func (*TreePool) Drain

func (p *TreePool) Drain(n int)

Drain drains the pool until it has no more than n resources.
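
A sketch of pool lifecycle management: Drain(0) releases every pooled tree once the pool is no longer needed. The withPool helper and the 128-segment count are assumptions for illustration, reusing the imports from the Overview sketch.

// withPool runs work with a freshly created pool and drains it afterwards.
func withPool(work func(p *legacy.TreePool)) {
	pool := legacy.NewTreePool(sha3.NewLegacyKeccak256, 128, legacy.PoolSize)
	defer pool.Drain(0) // keep no more than 0 resources, i.e. release them all
	work(pool)
}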
