Imandrakit_twine.Decode
val create : Imandrakit_bytes.Byte_slice.t -> t
val of_string : string -> t
type 'a decoder = t -> int -> 'a
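Since a decoder is just a function from a decoder state and an offset to a value, decoders compose with ordinary function plumbing. A minimal hedged sketch (the map helper below is hypothetical, not part of the library; it assumes the primitives listed further down, such as int_truncate):

```ocaml
(* Hypothetical helper: turn an 'a decoder into a 'b decoder by
   post-processing its result. Not part of Imandrakit_twine itself. *)
let map (f : 'a -> 'b) (dec : 'a decoder) : 'b decoder =
  fun st off -> f (dec st off)

(* e.g. decode an integer and render it as a string: *)
let int_as_string : string decoder = map string_of_int int_truncate
```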
val failf : ('a, unit, string, 'b) format4 -> 'a
module Value : sig ... end
module Array_cursor : sig ... end
module Dict_cursor : sig ... end
val deref_rec : int decoder
Given any value, follow pointers until a non-pointer value is reached, and return its address.
val null : unit decoder
val bool : bool decoder
val int_truncate : int decoder
val int64 : int64 decoder
val float : float decoder
val string_slice : Imandrakit_bytes.Byte_slice.t decoder
val string : string decoder
val blob_slice : Imandrakit_bytes.Byte_slice.t decoder
val blob : string decoder
val array : Array_cursor.t decoder
val dict : Dict_cursor.t decoder
val tag : (int * int) decoder
val cstor : (cstor_index * Array_cursor.t) decoder
val get_entrypoint : t -> int
Offset of the entrypoint (the toplevel value).
val decode_string : 'a decoder -> string -> 'a
Caching is used to reflect the sharing of values embedded in a Twine slice into the decoded values. This means that, for a given type, if values of that type are encoded with sharing (e.g. a graph-heavy term representation), then with caching we can decode them to OCaml values that also share structure.
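The idea behind this caching can be illustrated with a self-contained sketch (stdlib only; this is not the Imandrakit_twine API, just the mechanism): memoize decoding by offset, so decoding the same offset twice yields the very same OCaml value, preserving sharing.

```ocaml
(* Illustration only: an offset-keyed cache for decoded values. *)
let cache : (int, string) Hashtbl.t = Hashtbl.create 16

let decode_at (decode : int -> string) (off : int) : string =
  match Hashtbl.find_opt cache off with
  | Some v -> v (* already decoded at this offset: reuse, keeping sharing *)
  | None ->
    let v = decode off in
    Hashtbl.add cache off v;
    v

let () =
  (* a toy "decoder" that builds a fresh string from an offset *)
  let decode off = String.make 3 (Char.chr (65 + off)) in
  let a = decode_at decode 0 in
  let b = decode_at decode 0 in
  assert (a == b) (* physical equality: the decoded value is shared *)
```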
val create_cache_key : unit -> _ cache_key
Generate a new (generative) cache key for a type.
NOTE: this should be called only at module toplevel, as a constant, not dynamically inside a function: let key : foo value_pack.Deser.cache_key = value_pack.Deser.create_cache_key ();;. Indeed, create_cache_key is generative, so creating multiple keys for the same type will result in sub-par performance or non-existent caching.
val with_cache : 'a cache_key -> 'a decoder -> 'a decoder
with_cache key dec is the same decoder as dec, but it uses key to retrieve values directly from an internal table for entries/values that have already been decoded in the past. This means that a value that was encoded with a lot of sharing (e.g. in a graph, or a large string using Ser.add_string) will be decoded only once.
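Putting the two together, a hedged usage sketch (the type foo is hypothetical, and the exact module paths are assumptions based on the signatures above, not verified against the library):

```ocaml
open Imandrakit_twine

type foo = { x : int } (* hypothetical user type *)

(* Created once, at module toplevel, as the note above requires. *)
let foo_key : foo Decode.cache_key = Decode.create_cache_key ()

(* A plain decoder for foo, built from a primitive decoder. *)
let foo_dec : foo Decode.decoder =
  fun st off -> { x = Decode.int_truncate st off }

(* Same decoder, but values already decoded at a given offset are
   reused, so sharing in the Twine slice survives decoding. *)
let cached_foo_dec : foo Decode.decoder =
  Decode.with_cache foo_key foo_dec
```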