codec

package
Published: Sep 28, 2018 License: MIT Imports: 21 Imported by: 295


Documentation

Overview

Package codec provides a High Performance, Feature-Rich Idiomatic Go 1.4+ codec/encoding library for binc, msgpack, cbor, json.

Supported Serialization formats are:

  • msgpack
  • binc
  • cbor
  • json
  • simple

To install:

go get github.com/ugorji/go/codec

This package carefully uses 'unsafe' for performance reasons in specific places. You can build without any use of unsafe by passing the 'safe' or 'appengine' build tag, e.g. 'go install -tags=safe ...'. Note that unsafe is only supported for the last 3 Go SDK versions; e.g. when the current Go release is go 1.9, unsafe use is supported from go 1.7 onwards. This is because supporting unsafe requires knowledge of implementation details.

For detailed usage information, read the primer at http://ugorji.net/blog/go-codec-primer .

The idiomatic Go support is as seen in other encoding packages in the standard library (i.e. json, xml, gob, etc.).

Rich Feature Set includes:

  • Simple but extremely powerful and feature-rich API
  • Support for go1.4 and above, while selectively using newer APIs for later releases
  • Excellent code coverage ( > 90% )
  • Very High Performance. Our extensive benchmarks show us outperforming Gob, Json, Bson, etc by 2-4X.
  • Carefully selected use of 'unsafe' for targeted performance gains. A 100% safe mode exists where 'unsafe' is not used at all.
  • Lock-free (sans mutex) concurrency for scaling to hundreds of cores
  • Coerce types where appropriate e.g. decode an int in the stream into a float, decode numbers from formatted strings, etc
  • Corner Cases: Overflows, nil maps/slices, nil values in streams are handled correctly
  • Standard field renaming via tags
  • Support for omitting empty fields during an encoding
  • Encoding from any value and decoding into pointer to any value (struct, slice, map, primitives, pointers, interface{}, etc)
  • Extensions to support efficient encoding/decoding of any named types
  • Support encoding.(Binary|Text)(M|Unm)arshaler interfaces
  • Support IsZero() bool to determine if a value is a zero value. Analogous to time.Time.IsZero() bool.
  • Decoding without a schema (into an interface{}). Includes options to configure what specific map or slice type to use when decoding an encoded list or map into a nil interface{}
  • Mapping a non-interface type to an interface, so we can decode appropriately into any interface type with a correctly configured non-interface value.
  • Encode a struct as an array, and decode struct from an array in the data stream
  • Option to encode struct keys as numbers (instead of strings) (to support structured streams with fields encoded as numeric codes)
  • Comprehensive support for anonymous fields
  • Fast (no-reflection) encoding/decoding of common maps and slices
  • Code-generation for faster performance.
  • Support binary (e.g. messagepack, cbor) and text (e.g. json) formats
  • Support indefinite-length formats to enable true streaming (for formats which support it e.g. json, cbor)
  • Support canonical encoding, where a value is ALWAYS encoded as the same sequence of bytes. This mostly applies to maps, where iteration order is non-deterministic.
  • NIL in data stream decoded as zero value
  • Never silently skip data when decoding. The user decides whether to return an error or silently skip data when keys or indexes in the data stream do not map to fields in the struct.
  • Detect and error out when encoding a cyclic reference (instead of a stack overflow shutdown)
  • Encode/Decode from/to chan types (for iterative streaming support)
  • Drop-in replacement for encoding/json. `json:` key in struct tag supported.
  • Provides an RPC Server and Client Codec for the net/rpc communication protocol.
  • Handle unique idiosyncrasies of codecs e.g.
      - For messagepack, configure how ambiguities in handling raw bytes are resolved
      - For messagepack, provide rpc server/client codec to support the msgpack-rpc protocol defined at: https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md

Extension Support

Users can register a function to handle the encoding or decoding of their custom types.

There are no restrictions on what the custom type can be. Some examples:

type BitSet   []int
type BitSet64 uint64
type UUID     string
type MyStructWithUnexportedFields struct { a int; b bool; c []int; }
type GifImage struct { ... }

As an illustration, MyStructWithUnexportedFields would normally be encoded as an empty map because it has no exported fields, while UUID would be encoded as a string. However, with extension support, you can encode any of these however you like.
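For instance, here is a minimal sketch of registering an InterfaceExt for the UUID type above on a CborHandle (the uuidExt helper and the tag number 78 are illustrative, and the reflect package is assumed to be imported):

type uuidExt struct{}

// ConvertExt converts a UUID into a plain string for the format to encode.
func (uuidExt) ConvertExt(v interface{}) interface{} { return string(v.(UUID)) }

// UpdateExt populates a *UUID from the decoded string.
func (uuidExt) UpdateExt(dst interface{}, src interface{}) {
    *dst.(*UUID) = UUID(src.(string))
}

var ch codec.CborHandle
if err := ch.SetInterfaceExt(reflect.TypeOf(UUID("")), 78, uuidExt{}); err != nil {
    // handle the registration error
}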

Custom Encoding and Decoding

This package maintains symmetry in the encoding and decoding halves. We determine how to encode or decode by walking this decision tree:

  • is type a codec.Selfer?
  • is there an extension registered for the type?
  • is format binary, and is type an encoding.BinaryMarshaler and BinaryUnmarshaler?
  • is format specifically json, and is type a encoding/json.Marshaler and Unmarshaler?
  • is format text-based, and type an encoding.TextMarshaler?
  • else we use a pair of functions based on the "kind" of the type e.g. map, slice, int64, etc

This symmetry is important to reduce chances of issues happening because the encoding and decoding sides are out of sync e.g. decoded via very specific encoding.TextUnmarshaler but encoded via kind-specific generalized mode.

Consequently, if a type only defines one-half of the symmetry (e.g. it implements UnmarshalJSON() but not MarshalJSON() ), then that type doesn't satisfy the check and we will continue walking down the decision tree.

RPC

RPC Client and Server Codecs are implemented, so the codecs can be used with the standard net/rpc package.

Usage

The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent modification.

The Encoder and Decoder are NOT safe for concurrent use.

Consequently, the usage model is basically:

  • Create and initialize the Handle before any use. Once created, DO NOT modify it.
  • Multiple Encoders or Decoders can now use the Handle concurrently. They only read information off the Handle (never write).
  • However, each Encoder or Decoder MUST NOT be used concurrently.
  • To re-use an Encoder/Decoder, call Reset(...) on it first. This allows you to re-use state maintained on the Encoder/Decoder.

Sample usage model:

// create and configure Handle
var (
  bh codec.BincHandle
  mh codec.MsgpackHandle
  ch codec.CborHandle
)

mh.MapType = reflect.TypeOf(map[string]interface{}(nil))

// configure extensions
// e.g. for msgpack, define functions and enable Time support for tag 1
// mh.SetExt(reflect.TypeOf(time.Time{}), 1, myExt)

// create and use decoder/encoder
var (
  r io.Reader
  w io.Writer
  b []byte
  h = &bh // or mh to use msgpack
)

dec = codec.NewDecoder(r, h)
dec = codec.NewDecoderBytes(b, h)
err = dec.Decode(&v)

enc = codec.NewEncoder(w, h)
enc = codec.NewEncoderBytes(&b, h)
err = enc.Encode(v)

//RPC Server
go func() {
    for {
        conn, err := listener.Accept()
        rpcCodec := codec.GoRpc.ServerCodec(conn, h)
        //OR rpcCodec := codec.MsgpackSpecRpc.ServerCodec(conn, h)
        rpc.ServeCodec(rpcCodec)
    }
}()

//RPC Communication (client side)
conn, err = net.Dial("tcp", "localhost:5555")
rpcCodec := codec.GoRpc.ClientCodec(conn, h)
//OR rpcCodec := codec.MsgpackSpecRpc.ClientCodec(conn, h)
client := rpc.NewClientWithCodec(rpcCodec)

Running Tests

To run tests, use the following:

go test

To run the full suite of tests, use the following:

go test -tags alltests -run Suite

You can use the tag 'safe' to run tests or build in safe mode, e.g.

go test -tags safe -run Json
go test -tags "alltests safe" -run Suite

Running Benchmarks

Please see http://github.com/ugorji/go-codec-bench .

Caveats

Struct fields matching the following are ignored during encoding and decoding:

  • struct tag value set to -
  • func, complex numbers, unsafe pointers
  • unexported and not embedded
  • unexported and embedded and not struct kind
  • unexported and embedded pointers (from go1.10)

Every other field in a struct will be encoded/decoded.

Embedded fields are encoded as if they exist in the top-level struct, with some caveats. See Encode documentation.
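As an illustration of the rules above (the field names here are hypothetical), only the exported, non-skipped fields of a struct like this take part in encoding and decoding:

type account struct {
    Name    string `codec:"name"` // encoded/decoded under key "name"
    Age     int                   // encoded/decoded under key "Age"
    Secret  string `codec:"-"`    // ignored: struct tag value set to -
    Notify  func()                // ignored: func kind
    balance int                   // ignored: unexported and not embedded
}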

Index

Constants

const (
	CborStreamBytes  byte = 0x5f
	CborStreamString      = 0x7f
	CborStreamArray       = 0x9f
	CborStreamMap         = 0xbf
	CborStreamBreak       = 0xff
)

These define some in-stream descriptors for manual encoding e.g. when doing explicit indefinite-length
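For example, a minimal sketch of decoding a manually assembled indefinite-length CBOR byte string built with these descriptors (the chunk bytes are illustrative):

// 0x42 and 0x43 are CBOR headers for definite-length byte-string chunks of length 2 and 3.
in := []byte{codec.CborStreamBytes, 0x42, 'h', 'e', 0x43, 'l', 'l', 'o', codec.CborStreamBreak}
var ch codec.CborHandle
var out []byte
err := codec.NewDecoderBytes(in, &ch).Decode(&out) // out == []byte("hello")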

const GenVersion = 8

GenVersion is the current version of codecgen.

Variables

var GoRpc goRpc

GoRpc implements Rpc using the communication protocol defined in the net/rpc package.

Note: network connection (from net.Dial, of type io.ReadWriteCloser) is not buffered.

For performance, you should configure WriterBufferSize and ReaderBufferSize on the handle. This ensures we use an adequate buffer during reading and writing. If not configured, we will internally initialize and use a buffer during reads and writes. This can be turned off via the RPCNoBuffer option on the Handle.

var handle codec.JsonHandle
handle.RPCNoBuffer = true // turns off attempt by rpc module to initialize a buffer

Example 1: one way of configuring buffering explicitly:

var handle codec.JsonHandle // codec handle
handle.ReaderBufferSize = 1024
handle.WriterBufferSize = 1024
var conn io.ReadWriteCloser // connection got from a socket
var serverCodec = GoRpc.ServerCodec(conn, handle)
var clientCodec = GoRpc.ClientCodec(conn, handle)

Example 2: you can also explicitly create a buffered connection yourself, and not worry about configuring the buffer sizes in the Handle.

var handle codec.Handle     // codec handle
var conn io.ReadWriteCloser // connection got from a socket
var bufconn = struct {      // bufconn here is a buffered io.ReadWriteCloser
    io.Closer
    *bufio.Reader
    *bufio.Writer
}{conn, bufio.NewReader(conn), bufio.NewWriter(conn)}
var serverCodec = GoRpc.ServerCodec(bufconn, handle)
var clientCodec = GoRpc.ClientCodec(bufconn, handle)
var MsgpackSpecRpc msgpackSpecRpc

MsgpackSpecRpc implements Rpc using the communication protocol defined in the msgpack spec at https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md .

See GoRpc documentation, for information on buffering for better performance.

Functions

func GenHelperDecoder

func GenHelperDecoder(d *Decoder) (gd genHelperDecoder, dd genHelperDecDriver)

GenHelperDecoder is exported so that it can be used externally by codecgen.

Library users: DO NOT USE IT DIRECTLY. IT WILL CHANGE CONTINUOUSLY WITHOUT NOTICE.

func GenHelperEncoder

func GenHelperEncoder(e *Encoder) (ge genHelperEncoder, ee genHelperEncDriver)

GenHelperEncoder is exported so that it can be used externally by codecgen.

Library users: DO NOT USE IT DIRECTLY. IT WILL CHANGE CONTINUOUSLY WITHOUT NOTICE.

Types

type BasicHandle deprecated

type BasicHandle struct {

	// TypeInfos is used to get the type info for any type.
	//
	// If not configured, the default TypeInfos is used, which uses struct tag keys: codec, json
	TypeInfos *TypeInfos

	RPCOptions

	DecodeOptions

	EncodeOptions
	// contains filtered or unexported fields
}

BasicHandle encapsulates the common options and extension functions.

Deprecated: DO NOT USE DIRECTLY. EXPORTED FOR GODOC BENEFIT. WILL BE REMOVED.

func (*BasicHandle) AddExt deprecated

func (o *BasicHandle) AddExt(rt reflect.Type, tag byte,
	encfn func(reflect.Value) ([]byte, error),
	decfn func(reflect.Value, []byte) error) (err error)

AddExt registers an encode and decode function for a reflect.Type. To deregister an Ext, call AddExt with nil encfn and/or nil decfn.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

func (*BasicHandle) Intf2Impl

func (o *BasicHandle) Intf2Impl(intf, impl reflect.Type) (err error)

Intf2Impl maps an interface to an implementing type. This allows us to support inferring the concrete type and populating it when passed an interface, e.g. var v io.Reader can be decoded as a bytes.Buffer, etc.

Passing a nil impl will clear the mapping.
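A minimal sketch (the Shape and Circle types are hypothetical, and the reflect package is assumed to be imported):

type Shape interface{ Area() float64 }
type Circle struct{ R float64 }

func (c Circle) Area() float64 { return 3.14159 * c.R * c.R }

// After this mapping, a value declared as Shape can be decoded by
// instantiating a Circle and populating it from the stream.
var jh codec.JsonHandle
err := jh.Intf2Impl(reflect.TypeOf((*Shape)(nil)).Elem(), reflect.TypeOf(Circle{}))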

func (*BasicHandle) SetExt deprecated

func (o *BasicHandle) SetExt(rt reflect.Type, tag uint64, ext Ext) (err error)

SetExt will set the extension for a tag and reflect.Type. Note that the type must be a named type, and specifically not a pointer or interface. An error is returned if that is not honored. To deregister an ext, call SetExt with a nil Ext.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

type BincHandle

type BincHandle struct {
	BasicHandle

	// AsSymbols defines what should be encoded as symbols.
	//
	// Encoding as symbols can reduce the encoded size significantly.
	//
	// However, during decoding, each string to be encoded as a symbol must
	// be checked to see if it has been seen before. Consequently, encoding time
	// will increase if using symbols, because string comparisons have a clear cost.
	//
	// Values:
	// - 0: default: library uses best judgement
	// - 1: use symbols
	// - 2: do not use symbols
	AsSymbols uint8
	// contains filtered or unexported fields
}

BincHandle is a Handle for the Binc Schema-Free Encoding Format defined at https://github.com/ugorji/binc .

BincHandle currently supports all Binc features with the following EXCEPTIONS:

  • Only integers up to 64 bits of precision are supported. Big integers are unsupported.
  • Only IEEE 754 binary32 and binary64 floats are supported (i.e. Go float32 and float64 types). Extended-precision and decimal IEEE 754 floats are unsupported.
  • Only UTF-8 strings are supported. Unicode_Other Binc types (UTF16, UTF32) are currently unsupported.

Note that these EXCEPTIONS are temporary and full support is possible and may happen soon.

func (*BincHandle) AddExt deprecated

func (o *BincHandle) AddExt(rt reflect.Type, tag byte,
	encfn func(reflect.Value) ([]byte, error),
	decfn func(reflect.Value, []byte) error) (err error)

AddExt registers an encode and decode function for a reflect.Type. To deregister an Ext, call AddExt with nil encfn and/or nil decfn.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

func (*BincHandle) Intf2Impl

func (o *BincHandle) Intf2Impl(intf, impl reflect.Type) (err error)

Intf2Impl maps an interface to an implementing type. This allows us to support inferring the concrete type and populating it when passed an interface, e.g. var v io.Reader can be decoded as a bytes.Buffer, etc.

Passing a nil impl will clear the mapping.

func (*BincHandle) Name

func (h *BincHandle) Name() string

Name returns the name of the handle: binc

func (*BincHandle) SetBytesExt

func (h *BincHandle) SetBytesExt(rt reflect.Type, tag uint64, ext BytesExt) (err error)

SetBytesExt sets an extension

func (*BincHandle) SetExt deprecated

func (o *BincHandle) SetExt(rt reflect.Type, tag uint64, ext Ext) (err error)

SetExt will set the extension for a tag and reflect.Type. Note that the type must be a named type, and specifically not a pointer or interface. An error is returned if that is not honored. To deregister an ext, call SetExt with a nil Ext.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

type BytesExt

type BytesExt interface {
	// WriteExt converts a value to a []byte.
	//
	// Note: v is a pointer iff the registered extension type is a struct or array kind.
	WriteExt(v interface{}) []byte

	// ReadExt updates a value from a []byte.
	//
	// Note: dst is always a pointer kind to the registered extension type.
	ReadExt(dst interface{}, src []byte)
}

BytesExt handles custom (de)serialization of types to/from []byte. It is used by codecs (e.g. binc, msgpack, simple) which do custom serialization of the types.
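A minimal sketch of a BytesExt for a hypothetical named integer type, registered on a MsgpackHandle (the wireID type and tag number 1 are illustrative; encoding/binary and reflect are assumed to be imported):

type wireID uint64

type wireIDExt struct{}

// WriteExt serializes a wireID as 8 big-endian bytes.
func (wireIDExt) WriteExt(v interface{}) []byte {
    var b [8]byte
    binary.BigEndian.PutUint64(b[:], uint64(v.(wireID)))
    return b[:]
}

// ReadExt deserializes those bytes back into the *wireID destination.
func (wireIDExt) ReadExt(dst interface{}, src []byte) {
    *dst.(*wireID) = wireID(binary.BigEndian.Uint64(src))
}

var mh codec.MsgpackHandle
mh.WriteExt = true // emit msgpack extension tags for registered extensions
err := mh.SetBytesExt(reflect.TypeOf(wireID(0)), 1, wireIDExt{})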

type CborHandle

type CborHandle struct {
	BasicHandle

	// IndefiniteLength=true means that we encode using indefinite lengths.
	IndefiniteLength bool

	// TimeRFC3339 says to encode time.Time using RFC3339 format.
	// If unset, we encode time.Time using seconds past epoch.
	TimeRFC3339 bool
	// contains filtered or unexported fields
}

CborHandle is a Handle for the CBOR encoding format, defined at http://tools.ietf.org/html/rfc7049 and documented further at http://cbor.io .

CBOR is comprehensively supported, including support for:

  • indefinite-length arrays/maps/bytes/strings
  • (extension) tags in range 0..0xffff (0 .. 65535)
  • half, single and double-precision floats
  • all numbers (1, 2, 4 and 8-byte signed and unsigned integers)
  • nil, true, false, ...
  • arrays and maps, bytes and text strings

None of the optional extensions (with tags) defined in the spec are supported out-of-the-box. Users can implement them as needed (using SetExt), including spec-documented ones:

  • timestamp, BigNum, BigFloat, Decimals,
  • Encoded Text (e.g. URL, regexp, base64, MIME Message), etc.

func (*CborHandle) AddExt deprecated

func (o *CborHandle) AddExt(rt reflect.Type, tag byte,
	encfn func(reflect.Value) ([]byte, error),
	decfn func(reflect.Value, []byte) error) (err error)

AddExt registers an encode and decode function for a reflect.Type. To deregister an Ext, call AddExt with nil encfn and/or nil decfn.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

func (*CborHandle) Intf2Impl

func (o *CborHandle) Intf2Impl(intf, impl reflect.Type) (err error)

Intf2Impl maps an interface to an implementing type. This allows us to support inferring the concrete type and populating it when passed an interface, e.g. var v io.Reader can be decoded as a bytes.Buffer, etc.

Passing a nil impl will clear the mapping.

func (*CborHandle) Name

func (h *CborHandle) Name() string

Name returns the name of the handle: cbor

func (*CborHandle) SetExt deprecated

func (o *CborHandle) SetExt(rt reflect.Type, tag uint64, ext Ext) (err error)

SetExt will set the extension for a tag and reflect.Type. Note that the type must be a named type, and specifically not a pointer or interface. An error is returned if that is not honored. To deregister an ext, call SetExt with a nil Ext.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

func (*CborHandle) SetInterfaceExt

func (h *CborHandle) SetInterfaceExt(rt reflect.Type, tag uint64, ext InterfaceExt) (err error)

SetInterfaceExt sets an extension

type DecodeOptions

type DecodeOptions struct {
	// MapType specifies type to use during schema-less decoding of a map in the stream.
	// If nil (unset), we default to map[string]interface{} iff json handle and MapStringAsKey=true,
	// else map[interface{}]interface{}.
	MapType reflect.Type

	// SliceType specifies type to use during schema-less decoding of an array in the stream.
	// If nil (unset), we default to []interface{} for all formats.
	SliceType reflect.Type

	// MaxInitLen defines the maximum initial length that we "make" a collection
	// (string, slice, map, chan). If 0 or negative, we default to a sensible value
	// based on the size of an element in the collection.
	//
	// For example, when decoding, a stream may say that it has 2^64 elements.
	// We should not automatically provision a slice of that size, to prevent an Out-Of-Memory crash.
	// Instead, we provision up to MaxInitLen, fill that up, and start appending after that.
	MaxInitLen int

	// MaxDepth defines the maximum depth when decoding nested
	// maps and slices. If 0 or negative, we default to 100.
	MaxDepth int

	// ReaderBufferSize is the size of the buffer used when reading.
	//
	// if > 0, we use a smart buffer internally for performance purposes.
	ReaderBufferSize int

	// If DecodeUnknownFields, then when decoding a map from a
	// codec stream into a struct implementing
	// UnknownFieldHandler, and no matching struct field is found,
	// keep track of the unknown fields and pass them into
	// CodecSetUnknownFields().
	DecodeUnknownFields bool

	// If ErrorIfNoField, return an error when decoding a map
	// from a codec stream into a struct, and no matching struct field is found.
	ErrorIfNoField bool

	// If ErrorIfNoArrayExpand, return an error when decoding a slice/array that cannot be expanded.
	// For example, the stream contains an array of 8 items, but you are decoding into a [4]T array,
	// or you are decoding into a slice of length 4 which is non-addressable (and so cannot be set).
	ErrorIfNoArrayExpand bool

	// If SignedInteger, use int64 (not uint64) during schema-less decoding of unsigned values.
	SignedInteger bool

	// MapValueReset controls how we decode into a map value.
	//
	// By default, we MAY retrieve the mapping for a key, and then decode into that.
	// However, especially with big maps, that retrieval may be expensive and unnecessary
	// if the stream already contains all that is necessary to recreate the value.
	//
	// If true, we will never retrieve the previous mapping,
	// but rather decode into a new value and set that in the map.
	//
	// If false, we will retrieve the previous mapping if necessary e.g.
	// the previous mapping is a pointer, or is a struct or array with pre-set state,
	// or is an interface.
	MapValueReset bool

	// SliceElementReset: on decoding a slice, reset the element to a zero value first.
	//
	// concern: if the slice already contained some garbage, we will decode into that garbage.
	SliceElementReset bool

	// InterfaceReset controls how we decode into an interface.
	//
	// By default, when we see a field that is an interface{...},
	// or a map with interface{...} value, we will attempt decoding into the
	// "contained" value.
	//
	// However, this prevents us from reading a string into an interface{}
	// that formerly contained a number.
	//
	// If true, we will decode into a new "blank" value, and set that in the interface.
	// If false, we will decode into whatever is contained in the interface.
	InterfaceReset bool

	// InternString controls interning of strings during decoding.
	//
	// Some handles, e.g. json, typically will read map keys as strings.
	// If the set of keys are finite, it may help reduce allocation to
	// look them up from a map (than to allocate them afresh).
	//
	// Note: Handles will be smart when using the intern functionality.
	// Not every string should be interned.
	// An excellent use-case for interning is struct field names,
	// or map keys where key type is string.
	InternString bool

	// PreferArrayOverSlice controls whether to decode to an array or a slice.
	//
	// This only impacts decoding into a nil interface{}.
	// Consequently, it has no effect on codecgen.
	//
	// *Note*: This only applies if using go1.5 and above,
	// as it requires reflect.ArrayOf support which was absent before go1.5.
	PreferArrayOverSlice bool

	// DeleteOnNilMapValue controls how to decode a nil value in the stream.
	//
	// If true, we will delete the mapping of the key.
	// Else, just set the mapping to the zero value of the type.
	DeleteOnNilMapValue bool
}

DecodeOptions captures configuration options during decode.
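For example, a minimal sketch of configuring a few of these options on a JsonHandle before creating Decoders (the values chosen are illustrative; reflect is assumed to be imported):

var jh codec.JsonHandle
jh.MapType = reflect.TypeOf(map[string]interface{}(nil)) // schema-less maps decode as map[string]interface{}
jh.MaxDepth = 64                                         // cap nesting depth of maps and slices
jh.ErrorIfNoField = true                                 // error if a stream key has no matching struct field

var r io.Reader // any reader
dec := codec.NewDecoder(r, &jh)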

type Decoder

type Decoder struct {
	// contains filtered or unexported fields
}

A Decoder reads and decodes an object from an input stream in the codec format. Decoder is not goroutine-safe.

func NewDecoder

func NewDecoder(r io.Reader, h Handle) *Decoder

NewDecoder returns a Decoder for decoding a stream of bytes from an io.Reader.

For efficiency, users are encouraged to pass in a memory buffered reader (e.g. bufio.Reader, bytes.Buffer).

func NewDecoderBytes

func NewDecoderBytes(in []byte, h Handle) *Decoder

NewDecoderBytes returns a Decoder which efficiently decodes directly from a byte slice with zero copying.

func (*Decoder) Decode

func (d *Decoder) Decode(v interface{}) (err error)

Decode decodes the stream from reader and stores the result in the value pointed to by v. v cannot be a nil pointer. v can also be a reflect.Value of a pointer.

Note that a pointer to a nil interface is not a nil pointer. If you do not know what type of stream it is, pass in a pointer to a nil interface. We will decode and store a value in that nil interface.

Sample usages:

// Decoding into a non-nil typed value
var f float32
err = codec.NewDecoder(r, handle).Decode(&f)

// Decoding into nil interface
var v interface{}
dec := codec.NewDecoder(r, handle)
err = dec.Decode(&v)

When decoding into a nil interface{}, we will decode into an appropriate value based on the contents of the stream:

  • Numbers are decoded as float64, int64 or uint64.
  • Other values are decoded appropriately depending on the type: bool, string, []byte, time.Time, etc
  • Extensions are decoded as RawExt (if no ext function registered for the tag)

Configurations exist on the Handle to override defaults (e.g. for MapType, SliceType and how to decode raw bytes).

When decoding into a non-nil interface{} value, the mode of decoding is based on the type of the value. When a value is seen:

  • If an extension is registered for it, call that extension function
  • If it implements BinaryUnmarshaler, call its UnmarshalBinary(data []byte) error
  • Else decode it based on its reflect.Kind

There are some special rules when decoding into containers (slice/array/map/struct). Decode will typically use the stream contents to UPDATE the container.

  • A map can be decoded from a stream map, by updating matching keys.
  • A slice can be decoded from a stream array, by updating the first n elements, where n is length of the stream.
  • A slice can be decoded from a stream map, by decoding as if it contains a sequence of key-value pairs.
  • A struct can be decoded from a stream map, by updating matching fields.
  • A struct can be decoded from a stream array, by updating fields as they occur in the struct (by index).

When decoding a stream map or array with length of 0 into a nil map or slice, we reset the destination map or slice to a zero-length value.

However, when decoding a stream nil, we reset the destination container to its "zero" value (e.g. nil for slice/map, etc).

Note: we allow nil values in the stream anywhere except for map keys. A nil value in the encoded stream where a map key is expected is treated as an error.

func (*Decoder) MustDecode

func (d *Decoder) MustDecode(v interface{})

MustDecode is like Decode, but panics if unable to Decode. This provides insight to the code location that triggered the error.

func (*Decoder) NumBytesRead

func (d *Decoder) NumBytesRead() int

func (*Decoder) Reset

func (d *Decoder) Reset(r io.Reader)

Reset the Decoder with a new Reader to decode from, clearing all state from last run(s).

func (*Decoder) ResetBytes

func (d *Decoder) ResetBytes(in []byte)

ResetBytes resets the Decoder with a new []byte to decode from, clearing all state from last run(s).

type EncodeOptions

type EncodeOptions struct {
	// WriterBufferSize is the size of the buffer used when writing.
	//
	// if > 0, we use a smart buffer internally for performance purposes.
	WriterBufferSize int

	// ChanRecvTimeout is the timeout used when selecting from a chan.
	//
	// Configuring this controls how we receive from a chan during the encoding process.
	//   - If ==0, we only consume the elements currently available in the chan.
	//   - if  <0, we consume until the chan is closed.
	//   - If  >0, we consume until this timeout.
	ChanRecvTimeout time.Duration

	// StructToArray specifies to encode a struct as an array, and not as a map
	StructToArray bool

	// Canonical representation means that encoding a value will always result in the same
	// sequence of bytes.
	//
	// This only affects maps, as the iteration order for maps is random.
	//
	// The implementation MAY use the natural sort order for the map keys if possible:
	//
	//     - If there is a natural sort order (ie for number, bool, string or []byte keys),
	//       then the map keys are first sorted in natural order and then written
	//       with corresponding map values to the stream.
	//     - If there is no natural sort order, then the map keys will first be
	//       encoded into []byte, and then sorted,
	//       before writing the sorted keys and the corresponding map values to the stream.
	//
	Canonical bool

	// CheckCircularRef controls whether we check for circular references
	// and error fast during an encode.
	//
	// If enabled, an error is received if a pointer to a struct
	// references itself either directly or through one of its fields (iteratively).
	//
	// This is opt-in, as there may be a performance hit to checking circular references.
	CheckCircularRef bool

	// RecursiveEmptyCheck controls whether we descend into interfaces, structs and pointers
	// when checking if a value is empty.
	//
	// Note that this may make OmitEmpty more expensive, as it incurs a lot more reflect calls.
	//
	// This is also available as the tag 'omitemptyrecursive', so
	// that one can turn it on only for certain fields. This can
	// be for performance reasons or control reasons; for example
	// one might only want to recursively descend into pointers
	// for some fields.
	RecursiveEmptyCheck bool

	// Raw controls whether we encode Raw values.
	// This is a "dangerous" option and must be explicitly set.
	// If set, we blindly encode Raw values as-is, without checking
	// if they are a correct representation of a value in that format.
	// If unset, we error out.
	Raw bool

	// If EncodeUnknownFields, then when encoding a struct
	// implementing UnknownFieldHandler, also encode the unknown
	// fields.
	EncodeUnknownFields bool
}

EncodeOptions captures configuration options during encode.
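For example, a minimal sketch that turns on canonical output and cycle checking on a MsgpackHandle:

var mh codec.MsgpackHandle
mh.Canonical = true        // sort map keys so equal values always encode to identical bytes
mh.CheckCircularRef = true // fail fast on self-referencing pointers instead of overflowing the stack

var out []byte
err := codec.NewEncoderBytes(&out, &mh).Encode(map[string]int{"b": 2, "a": 1})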

type Encoder

type Encoder struct {
	// contains filtered or unexported fields
}

An Encoder writes an object to an output stream in the codec format.

func NewEncoder

func NewEncoder(w io.Writer, h Handle) *Encoder

NewEncoder returns an Encoder for encoding into an io.Writer.

For efficiency, users are encouraged to pass in a memory buffered writer (e.g. bufio.Writer, bytes.Buffer).

func NewEncoderBytes

func NewEncoderBytes(out *[]byte, h Handle) *Encoder

NewEncoderBytes returns an encoder for encoding directly and efficiently into a byte slice, using zero-copying to temporary slices.

It will potentially replace the output byte slice pointed to. After encoding, the out parameter contains the encoded contents.

func (*Encoder) Encode

func (e *Encoder) Encode(v interface{}) (err error)

Encode writes an object into a stream.

Encoding can be configured via the struct tag for the fields. The key (in the struct tags) that we look at is configurable.

By default, we look up the "codec" key in the struct field's tags, and fall back to the "json" key if "codec" is absent. The value for that key in the struct field's tag is the key name, followed by an optional comma and options.

To set an option on all fields (e.g. omitempty on all fields), you can create a field called _struct, and set flags on it. The options which can be set on _struct are:

  • omitempty: so all fields are omitted if empty
  • toarray: so struct is encoded as an array
  • int: so struct key names are encoded as signed integers (instead of strings)
  • uint: so struct key names are encoded as unsigned integers (instead of strings)
  • float: so struct key names are encoded as floats (instead of strings)

More details on these below.

Struct values "usually" encode as maps. Each exported struct field is encoded unless:

  • the field's tag is "-", OR
  • the field is empty (empty or the zero value) and its tag specifies the "omitempty" option.

When encoding as a map, the first string in the tag (before the comma) is the map key string to use when encoding. ... This key is typically encoded as a string. However, there are instances where the encoded stream has mapping keys encoded as numbers. For example, some cbor streams have keys as integer codes in the stream, but they should map to fields in a structured object. Consequently, a struct is the natural representation in code. For these, configure the struct to encode/decode the keys as numbers (instead of string). This is done with the int,uint or float option on the _struct field (see above).

However, struct values may encode as arrays. This happens when:

  • StructToArray Encode option is set, OR
  • the tag on the _struct field sets the "toarray" option

Note that omitempty is ignored when encoding struct values as arrays, as an entry must be encoded for each field, to maintain its position.

Values with types that implement MapBySlice are encoded as stream maps.

The empty values (for omitempty option) are false, 0, any nil pointer or interface value, and any array, slice, map, or string of length zero.

Anonymous fields are encoded inline except:

  • the struct tag specifies a replacement name (first value)
  • the field is of an interface type

Examples:

// NOTE: 'json:' can be used as struct tag key, in place 'codec:' below.
type MyStruct struct {
    _struct bool    `codec:",omitempty"`   //set omitempty for every field
    Field1 string   `codec:"-"`            //skip this field
    Field2 int      `codec:"myName"`       //Use key "myName" in encode stream
    Field3 int32    `codec:",omitempty"`   //use key "Field3". Omit if empty.
    Field4 bool     `codec:"f4,omitempty"` //use key "f4". Omit if empty.
    io.Reader                              //use key "Reader".
    MyStruct        `codec:"my1"`          //use key "my1".
    MyStruct                               //inline it
    ...
}

type MyStruct struct {
    _struct bool    `codec:",toarray"`     //encode struct as an array
}

type MyStruct struct {
    _struct bool    `codec:",uint"`        //encode struct with "unsigned integer" keys
    Field1 string   `codec:"1"`            //encode Field1 key using: EncodeInt(1)
    Field2 string   `codec:"2"`            //encode Field2 key using: EncodeInt(2)
}

The mode of encoding is based on the type of the value. When a value is seen:

  • If a Selfer, call its CodecEncodeSelf method
  • If an extension is registered for it, call that extension function
  • If implements encoding.(Binary|Text|JSON)Marshaler, call Marshal(Binary|Text|JSON) method
  • Else encode it based on its reflect.Kind

Note that struct field names and keys in map[string]XXX will be treated as symbols. Some formats support symbols (e.g. binc) and will properly encode the string only once in the stream, and use a tag to refer to it thereafter.

func (*Encoder) MustEncode

func (e *Encoder) MustEncode(v interface{})

MustEncode is like Encode, but panics if unable to Encode. This provides insight to the code location that triggered the error.

func (*Encoder) Reset

func (e *Encoder) Reset(w io.Writer)

Reset resets the Encoder with a new output stream.

This accommodates using the state of the Encoder, where it has "cached" information about sub-engines.

func (*Encoder) ResetBytes

func (e *Encoder) ResetBytes(out *[]byte)

ResetBytes resets the Encoder with a new destination output []byte.

type Ext

type Ext interface {
	BytesExt
	InterfaceExt
}

Ext handles custom (de)serialization of custom types / extensions.

type Handle

type Handle interface {
	Name() string
	// contains filtered or unexported methods
}

Handle is the interface for a specific encoding format.

Typically, a Handle is pre-configured before first time use, and not modified while in use. Such a pre-configured Handle is safe for concurrent access.

type InterfaceExt

type InterfaceExt interface {
	// ConvertExt converts a value into a simpler interface for easy encoding
	// e.g. convert time.Time to int64.
	//
	// Note: v is a pointer iff the registered extension type is a struct or array kind.
	ConvertExt(v interface{}) interface{}

	// UpdateExt updates a value from a simpler interface for easy decoding
	// e.g. convert int64 to time.Time.
	//
	// Note: dst is always a pointer kind to the registered extension type.
	UpdateExt(dst interface{}, src interface{})
}

InterfaceExt handles custom (de)serialization of types to/from another interface{} value. The Encoder or Decoder will then handle the further (de)serialization of that known type.

It is used by codecs (e.g. cbor, json) which use the format to do custom serialization of types.

type JsonHandle

type JsonHandle struct {
	BasicHandle

	// Indent indicates how a value is encoded.
	//   - If positive, indent by that number of spaces.
	//   - If negative, indent by that number of tabs.
	Indent int8

	// IntegerAsString controls how integers (signed and unsigned) are encoded.
	//
	// Per the JSON Spec, JSON numbers are 64-bit floating point numbers.
	// Consequently, integers > 2^53 cannot be represented as a JSON number without losing precision.
	// This can be mitigated by configuring how to encode integers.
	//
	// IntegerAsString interprets the following values:
	//   - if 'L', then encode integers > 2^53 as a json string.
	//   - if 'A', then encode all integers as a json string
	//             containing the exact integer representation as a decimal.
	//   - else    encode all integers as a json number (default)
	IntegerAsString byte

	// HTMLCharsAsIs controls how to encode some special characters to html: < > &
	//
	// By default, we encode them as \uXXXX
	// to prevent security holes when served from some browsers.
	HTMLCharsAsIs bool

	// PreferFloat says that we will default to decoding a number as a float.
	// If not set, we will examine the characters of the number and decode as an
	// integer type if it doesn't have any of the characters [.eE].
	PreferFloat bool

	// TermWhitespace says that we add a whitespace character
	// at the end of an encoding.
	//
	// The whitespace is important, especially if using numbers in a context
	// where multiple items are written to a stream.
	TermWhitespace bool

	// MapKeyAsString says to encode all map keys as strings.
	//
	// Use this to enforce strict json output.
	// The only caveat is that a nil value is ALWAYS written as null (never as "null")
	MapKeyAsString bool

	// RawBytesExt, if configured, is used to encode and decode raw bytes in a custom way.
	// If not configured, raw bytes are encoded to/from base64 text.
	RawBytesExt InterfaceExt
	// contains filtered or unexported fields
}

JsonHandle is a handle for JSON encoding format.

Json is comprehensively supported:

  • decodes numbers into interface{} as int, uint or float64 based on how the number looks and some config parameters e.g. PreferFloat, SignedInteger, etc.
  • decode integers from float formatted numbers e.g. 1.27e+8
  • decode any json value (numbers, bool, etc) from quoted strings
  • configurable way to encode/decode []byte . by default, encodes and decodes []byte using base64 Std Encoding
  • UTF-8 support for encoding and decoding

It has better performance than the json library in the standard library, by leveraging the performance improvements of the codec library.

In addition, it doesn't read more bytes than necessary during a decode, which allows reading multiple values from a stream containing json and non-json content. For example, a user can read a json value, then a cbor value, then a msgpack value, all from the same stream in sequence.

Note that, when decoding quoted strings, invalid UTF-8 or invalid UTF-16 surrogate pairs are not treated as an error. Instead, they are replaced by the Unicode replacement character U+FFFD.
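A minimal configuration sketch (the values chosen are illustrative):

var jh codec.JsonHandle
jh.Indent = 2            // pretty-print with 2-space indentation
jh.IntegerAsString = 'L' // quote integers larger than 2^53 to preserve precision
jh.MapKeyAsString = true // force all map keys to encode as strings
jh.HTMLCharsAsIs = true  // leave < > & unescaped

var out []byte
err := codec.NewEncoderBytes(&out, &jh).Encode(map[string]uint64{"big": 1 << 60})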

func (*JsonHandle) AddExt deprecated

func (o *JsonHandle) AddExt(rt reflect.Type, tag byte,
	encfn func(reflect.Value) ([]byte, error),
	decfn func(reflect.Value, []byte) error) (err error)

AddExt registers an encode and decode function for a reflect.Type. To deregister an Ext, call AddExt with nil encfn and/or nil decfn.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

func (*JsonHandle) Intf2Impl

func (o *JsonHandle) Intf2Impl(intf, impl reflect.Type) (err error)

Intf2Impl maps an interface to an implementing type. This allows us to support inferring the concrete type and populating it when passed an interface, e.g. var v io.Reader can be decoded as a bytes.Buffer, etc.

Passing a nil impl will clear the mapping.

func (*JsonHandle) Name

func (h *JsonHandle) Name() string

Name returns the name of the handle: json

func (*JsonHandle) SetExt deprecated

func (o *JsonHandle) SetExt(rt reflect.Type, tag uint64, ext Ext) (err error)

SetExt will set the extension for a tag and reflect.Type. Note that the type must be a named type, and specifically not a pointer or interface. An error is returned if that is not honored. To deregister an ext, call SetExt with a nil Ext.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

func (*JsonHandle) SetInterfaceExt

func (h *JsonHandle) SetInterfaceExt(rt reflect.Type, tag uint64, ext InterfaceExt) (err error)

SetInterfaceExt sets an extension

type MapBySlice

type MapBySlice interface {
	MapBySlice()
}

MapBySlice is a tag interface that denotes that the wrapped slice should encode as a map in the stream. The slice contains a sequence of key-value pairs. This affords storing a map in a specific sequence in the stream.

Example usage:

type T1 []string         // or []int or []Point or any other "slice" type
func (_ T1) MapBySlice() {}  // T1 now implements MapBySlice, and will be encoded as a map
type T2 struct { KeyValues T1 }

var kvs = []string{"one", "1", "two", "2", "three", "3"}
var v2 = T2{ KeyValues: T1(kvs) }
// v2 will be encoded like the map: {"KeyValues": {"one": "1", "two": "2", "three": "3"} }

The support of MapBySlice affords the following:

  • A slice type which implements MapBySlice will be encoded as a map
  • A slice can be decoded from a map in the stream
  • It MUST be a slice type (not a pointer receiver) that implements MapBySlice

type MsgpackHandle

type MsgpackHandle struct {
	BasicHandle

	// RawToString controls how raw bytes are decoded into a nil interface{}.
	RawToString bool

	// NoFixedNum says to output all signed integers as 2-bytes, never as 1-byte fixednum.
	NoFixedNum bool

	// WriteExt flag supports encoding configured extensions with extension tags.
	// It also controls whether other elements of the new spec are encoded (ie Str8).
	//
	// With WriteExt=false, configured extensions are serialized as raw bytes
	// and Str8 is not encoded.
	//
	// A stream can still be decoded into a typed value, if an appropriate value
	// is provided, but the type cannot be inferred from the stream. If no appropriate
	// type is provided (e.g. decoding into a nil interface{}), you get back
	// a []byte or string based on the setting of RawToString.
	WriteExt bool
	// contains filtered or unexported fields
}

MsgpackHandle is a Handle for the Msgpack Schema-Free Encoding Format.

func (*MsgpackHandle) AddExt deprecated

func (o *MsgpackHandle) AddExt(rt reflect.Type, tag byte,
	encfn func(reflect.Value) ([]byte, error),
	decfn func(reflect.Value, []byte) error) (err error)

AddExt registers an encode and decode function for a reflect.Type. To deregister an Ext, call AddExt with nil encfn and/or nil decfn.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

func (*MsgpackHandle) Intf2Impl

func (o *MsgpackHandle) Intf2Impl(intf, impl reflect.Type) (err error)

Intf2Impl maps an interface to an implementing type. This allows us to support inferring the concrete type and populating it when passed an interface, e.g. var v io.Reader can be decoded as a bytes.Buffer, etc.

Passing a nil impl will clear the mapping.

func (*MsgpackHandle) Name

func (h *MsgpackHandle) Name() string

Name returns the name of the handle: msgpack

func (*MsgpackHandle) SetBytesExt

func (h *MsgpackHandle) SetBytesExt(rt reflect.Type, tag uint64, ext BytesExt) (err error)

SetBytesExt sets an extension

func (*MsgpackHandle) SetExt deprecated

func (o *MsgpackHandle) SetExt(rt reflect.Type, tag uint64, ext Ext) (err error)

SetExt will set the extension for a tag and reflect.Type. Note that the type must be a named type, and specifically not a pointer or interface. An error is returned if that is not honored. To deregister an ext, call SetExt with a nil Ext.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

type MsgpackSpecRpcMultiArgs

type MsgpackSpecRpcMultiArgs []interface{}

MsgpackSpecRpcMultiArgs is a special type which signifies to the MsgpackSpecRpcCodec that the backend RPC service takes multiple arguments, which have been arranged in sequence in the slice.

The Codec then passes it AS-IS to the rpc service (without wrapping it in an array of 1 element).
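A minimal sketch (the service method name and argument values are illustrative), reusing a client created as in the Usage section above:

// The two values are passed as separate positional msgpack-rpc parameters,
// rather than being wrapped in a single 1-element array argument.
args := codec.MsgpackSpecRpcMultiArgs{"user-1", 42}
var reply string
err := client.Call("Service.Method", args, &reply)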

type RPCOptions

type RPCOptions struct {
	// RPCNoBuffer configures whether we attempt to buffer reads and writes during RPC calls.
	//
	// Set RPCNoBuffer=true to turn buffering off.
	// Buffering can still be done if buffered connections are passed in, or
	// buffering is configured on the handle.
	RPCNoBuffer bool
}

RPCOptions holds options specific to rpc functionality

type Raw

type Raw []byte

Raw represents raw formatted bytes. We "blindly" store it during encode and retrieve the raw bytes during decode. Note: it is dangerous during encode, so we may gate the behaviour behind an Encode flag which must be explicitly set.
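A minimal sketch (the envelope type is illustrative); note that the Raw option on EncodeOptions must be set for this to succeed:

type envelope struct {
    ID   int       `codec:"id"`
    Body codec.Raw `codec:"body"` // pre-encoded fragment, copied to the output as-is
}

var jh codec.JsonHandle
jh.Raw = true // explicitly allow blind copying of Raw values

var out []byte
err := codec.NewEncoderBytes(&out, &jh).Encode(envelope{ID: 1, Body: codec.Raw(`{"x":1}`)})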

type RawExt

type RawExt struct {
	Tag uint64
	// Data is the []byte which represents the raw ext. If nil, ext is exposed in Value.
	// Data is used by codecs (e.g. binc, msgpack, simple) which do custom serialization of types
	Data []byte
	// Value represents the extension, if Data is nil.
	// Value is used by codecs (e.g. cbor, json) which leverage the format to do
	// custom serialization of the types.
	Value interface{}
}

RawExt represents raw unprocessed extension data. Some codecs will decode extension data as a *RawExt if there is no registered extension for the tag.

Only one of Data or Value is nil. If Data is nil, then the content of the RawExt is in the Value.

type Rpc

type Rpc interface {
	ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec
	ClientCodec(conn io.ReadWriteCloser, h Handle) rpc.ClientCodec
}

Rpc provides a rpc Server or Client Codec for rpc communication.

type Selfer

type Selfer interface {
	CodecEncodeSelf(*Encoder)
	CodecDecodeSelf(*Decoder)
}

Selfer defines methods by which a value can encode or decode itself.

Any type which implements Selfer will be able to encode or decode itself. Consequently, during (en|de)code, this takes precedence over (text|binary)(M|Unm)arshal or extension support.

Note: *the first set of bytes of any value MUST NOT represent nil in the format*. This is because, during each decode, we first check whether the next set of bytes represents nil, and if so, we just set the value to nil.
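A minimal hand-written sketch (the point type is illustrative; in practice these methods are usually generated by codecgen), which delegates to nested MustEncode/MustDecode calls on the same Encoder/Decoder:

type point struct{ X, Y int }

// CodecEncodeSelf writes the point as a fixed 2-element array.
func (p *point) CodecEncodeSelf(e *codec.Encoder) {
    e.MustEncode([2]int{p.X, p.Y})
}

// CodecDecodeSelf reads the 2-element array back into the struct.
func (p *point) CodecDecodeSelf(d *codec.Decoder) {
    var a [2]int
    d.MustDecode(&a)
    p.X, p.Y = a[0], a[1]
}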

type SimpleHandle

type SimpleHandle struct {
	BasicHandle

	// EncZeroValuesAsNil says to encode zero values for numbers, bool, string, etc as nil
	EncZeroValuesAsNil bool
	// contains filtered or unexported fields
}

SimpleHandle is a Handle for a very simple encoding format.

simple is a simplistic codec similar to binc, but not as compact.

  • Encoding of a value is always preceded by the descriptor byte (bd)
  • True, false, nil are encoded fully in 1 byte (the descriptor)
  • Integers (intXXX, uintXXX) are encoded in 1, 2, 4 or 8 bytes (plus a descriptor byte). There are positive (uintXXX and intXXX >= 0) and negative (intXXX < 0) integers.
  • Floats are encoded in 4 or 8 bytes (plus a descriptor byte)
  • Lengths of containers (strings, bytes, array, map, extensions) are encoded in 0, 1, 2, 4 or 8 bytes. Zero-length containers have no length encoded. For others, the number of bytes is given by pow(2, bd%3)
  • maps are encoded as [bd] [length] [[key][value]]...
  • arrays are encoded as [bd] [length] [value]...
  • extensions are encoded as [bd] [length] [tag] [byte]...
  • strings/bytearrays are encoded as [bd] [length] [byte]...
  • time.Time are encoded as [bd] [length] [byte]...

The full spec will be published soon.

func (*SimpleHandle) AddExt deprecated

func (o *SimpleHandle) AddExt(rt reflect.Type, tag byte,
	encfn func(reflect.Value) ([]byte, error),
	decfn func(reflect.Value, []byte) error) (err error)

AddExt registers an encode and decode function for a reflect.Type. To deregister an Ext, call AddExt with nil encfn and/or nil decfn.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

func (*SimpleHandle) Intf2Impl

func (o *SimpleHandle) Intf2Impl(intf, impl reflect.Type) (err error)

Intf2Impl maps an interface to an implementing type. This allows us to support inferring the concrete type and populating it when passed an interface, e.g. var v io.Reader can be decoded as a bytes.Buffer, etc.

Passing a nil impl will clear the mapping.

func (*SimpleHandle) Name

func (h *SimpleHandle) Name() string

Name returns the name of the handle: simple

func (*SimpleHandle) SetBytesExt

func (h *SimpleHandle) SetBytesExt(rt reflect.Type, tag uint64, ext BytesExt) (err error)

SetBytesExt sets an extension

func (*SimpleHandle) SetExt deprecated

func (o *SimpleHandle) SetExt(rt reflect.Type, tag uint64, ext Ext) (err error)

SetExt will set the extension for a tag and reflect.Type. Note that the type must be a named type, and specifically not a pointer or interface. An error is returned if that is not honored. To deregister an ext, call SetExt with a nil Ext.

Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.

type TypeInfos

type TypeInfos struct {
	// contains filtered or unexported fields
}

TypeInfos caches typeInfo for each type on first inspection.

It is configured with a set of tag keys, which are used to get configuration for the type.

func NewTypeInfos

func NewTypeInfos(tags []string) *TypeInfos

NewTypeInfos creates a TypeInfos given a set of struct tag keys.

This allows users to customize the struct tag keys which contain the configuration of their types.
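For example, a minimal sketch that makes a hypothetical "mytag" struct tag key take precedence over the default keys:

var jh codec.JsonHandle
jh.TypeInfos = codec.NewTypeInfos([]string{"mytag", "codec", "json"})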

type UnknownFieldHandler

type UnknownFieldHandler interface {
	// CodecSetUnknownFields is called exactly once during
	// decoding with the set of all unknown fields encountered.
	CodecSetUnknownFields(UnknownFieldSet)
	// CodecGetUnknownFields is called exactly once during
	// encoding to get the set of unknown fields to include in the
	// encoding. Encoding must be done with the same handle type
	// as what was used when decoding.
	CodecGetUnknownFields() UnknownFieldSet
}

UnknownFieldHandler defines methods by which a value can store unknown fields encountered during decoding.

type UnknownFieldSet

type UnknownFieldSet struct {
	// contains filtered or unexported fields
}

An UnknownFieldSet holds information about unknown fields encountered during decoding. The zero value is an empty set.

type UnknownFieldSetHandler

type UnknownFieldSetHandler struct {
	// contains filtered or unexported fields
}

UnknownFieldSetHandler is an implementation of UnknownFieldHandler that uses an underlying UnknownFieldSet, so you can just embed it in a struct type and it will automatically preserve unknown fields.
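A minimal sketch (the User type is illustrative): embed the handler and enable the corresponding options on the handle so that unknown keys survive a decode/encode round trip:

type User struct {
    codec.UnknownFieldSetHandler        // captures stream keys with no matching field
    Name string `codec:"name"`
}

var jh codec.JsonHandle
jh.DecodeUnknownFields = true // collect unmatched keys during Decode
jh.EncodeUnknownFields = true // write them back out during Encode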

func (UnknownFieldSetHandler) CodecGetUnknownFields

func (ufsh UnknownFieldSetHandler) CodecGetUnknownFields() UnknownFieldSet

func (*UnknownFieldSetHandler) CodecSetUnknownFields

func (ufsh *UnknownFieldSetHandler) CodecSetUnknownFields(other UnknownFieldSet)

Directories

Path Synopsis
codecgen generates codec.Selfer implementations for a set of types.
