Imandrakit_twine.Decode

val create : Imandrakit_bytes.Byte_slice.t -> t
val of_string : string -> t

type 'a decoder = t -> int -> 'a

val failf : ('a, unit, string, 'b) format4 -> 'a

module Value : sig ... end
module Array_cursor : sig ... end
module Dict_cursor : sig ... end

val deref_rec : int decoder
    Given any value, follow pointers until a non-pointer value is reached, and
    return its address.
val null : unit decoder
val bool : bool decoder
val int_truncate : int decoder
val int64 : int64 decoder
val float : float decoder
val string_slice : Imandrakit_bytes.Byte_slice.t decoder
val string : string decoder
val blob_slice : Imandrakit_bytes.Byte_slice.t decoder
val blob : string decoder
val array : Array_cursor.t decoder
val dict : Dict_cursor.t decoder
val tag : (int * int) decoder
val cstor : (cstor_index * Array_cursor.t) decoder

val get_entrypoint : t -> int
    Offset of the entrypoint (the toplevel value).
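As a sketch of how these pieces fit together (hedged: only the signatures listed above are assumed, and twine_data stands for bytes produced by the matching encoder):

```ocaml
(* Sketch only: [twine_data] is assumed to be a string produced by the
   corresponding Twine encoder; every function used below appears in the
   signatures above. *)
open Imandrakit_twine

let read_toplevel_int64 (twine_data : string) : int64 =
  let st = Decode.of_string twine_data in
  (* Offset of the toplevel value in the slice *)
  let off = Decode.get_entrypoint st in
  (* Follow any pointer indirections to reach the actual value *)
  let off = Decode.deref_rec st off in
  (* Decode the value at that offset as an int64 *)
  Decode.int64 st off
```

The pattern is always the same: build a decode state, find an offset, then apply a typed decoder at that offset.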
val decode_string : 'a decoder -> string -> 'a

Caching is used to reflect the sharing of values embedded in a Twine slice
into the decoded values. This means that, for a given type, if values of that
type are encoded with sharing (e.g. a graph-heavy term representation), then
with caching we can decode them into OCaml values that also share structure.
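For one-shot decoding, decode_string combines the setup steps into a single call. A minimal sketch, assuming twine_data encodes a single float:

```ocaml
(* Sketch: [decode_string] runs a decoder directly on a string holding a
   full Twine slice, starting from its entrypoint. *)
let read_float (twine_data : string) : float =
  Imandrakit_twine.Decode.(decode_string float twine_data)
```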
val create_cache_key : unit -> _ cache_key
    Generate a new (generative) cache key for a type.
    NOTE: this should be called only at module toplevel, as a constant, not
    dynamically inside a function:
    let key : foo value_pack.Deser.cache_key = value_pack.Deser.create_cache_key ();;
    Indeed, this function is generative, so creating multiple keys for the
    same type will result in sub-par performance or no caching at all.
with_cache key dec is the same decoder as dec, except that it uses key to
retrieve values directly from an internal table for entries that have already
been decoded in the past. This means that a value encoded with a lot of
sharing (e.g. in a graph, or a large string added via Ser.add_string) will be
decoded only once.
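A sketch of the intended usage, under assumptions: term_decoder is a stand-in for a user-written decoder for a share-heavy type, and with_cache is assumed to have the shape implied by the text above, 'a cache_key -> 'a decoder -> 'a decoder.

```ocaml
open Imandrakit_twine

(* Hypothetical share-heavy type for illustration *)
type term = Var of string | App of term * term

(* Create the key once, at module toplevel: create_cache_key is generative,
   so a fresh key per call would defeat the cache. *)
let term_key : term Decode.cache_key = Decode.create_cache_key ()

(* Placeholder decoder body; a real one would inspect the value via
   [cstor] and the cursor modules. *)
let term_decoder : term Decode.decoder = fun _st _off -> Var "todo"

(* Shared sub-terms in the encoded slice are now decoded only once. *)
let term_decoder_cached : term Decode.decoder =
  Decode.with_cache term_key term_decoder
```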