Getting started with Viper¶
You should start with Getting Started with DSM.
This tutorial introduces the APIs of the various systems implemented by Viper. It's a technical introduction: most of the APIs described here are used by high-level tools and by the generated code.
So, if you want to understand the principles, you are in the right spot.
Then, if you want to understand how it works, Viper Internal is the next stop...
Introduction¶
Viper is a runtime designed around heavy use of metadata to implement many features: types, values, functions, collections, serialization, function calls, database persistence, plugins, remote procedure calls, network services, remote databases, Python scripting and Web interoperability with JSON and BSON.
Viper was designed to simplify the way scripting languages and C++ work together through metadata. The dynamic aspect of a script interpreter uses the dynamic aspect of the metadata defined in Viper for types, values and functions to implement strongly typed data access and function calls in a fast and lazy way.
P_Viper (the Python C/API binding of Viper) is a safe, strongly typed layer over Python required for C++ interoperability. P_Viper raises an exception immediately when you use the wrong type, but it is smart enough to accept native Python values and collections as input, since the metadata drives everything in Viper.
Due to the "metadata everywhere" principle, the bidirectional bridge (Python object/Viper Value) is automatic and seamless. You use native Python objects and collections as input, and it just works if the object is consistent with the metadata description of the Viper type. For example, a Python dictionary {"field_name": value, ...} is a partial form of a Viper ValueStructure, and that is enough for initialization.
In P_Viper, each bound function uses the metadata to drive the encoding/decoding of its parameters and return value.
The seamless importer is smart enough to do type promotion/adaptation. A Python object implementing the sequence protocol is enough to fill a Viper ValueVector.
A Viper collection is a strongly typed version of a Python collection with almost the same API. Conversion between the two worlds is only required when you need a native Python object to compute an expression or assign a new value.
dsviper and Python use the same object reference model.
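The sequence-protocol claim can be illustrated in plain Python (no dsviper required): any object exposing `__len__` and `__getitem__` can be consumed as a sequence. `PointRow` below is a hypothetical class, and the `Value.create` call in the final comment is only a sketch of how the importer would consume it.

```python
class PointRow:
    """A minimal object implementing the Python sequence protocol."""
    def __init__(self, *values):
        self._values = values

    def __len__(self):
        return len(self._values)

    def __getitem__(self, index):
        return self._values[index]

# Anything consuming the sequence protocol can iterate it.
row = PointRow(1.0, 2.0, 3.0)
as_list = list(row)

# The seamless importer needs nothing more; a call such as
# Value.create(TypeVector(Type.DOUBLE), row) could consume 'row'
# directly (sketch only: PointRow is a hypothetical class).
```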
Viper / P_Viper / dsviper¶
The flavors of the Viper technology:
- Viper is the C++ library that implements the type and value system and all derived features.
- P_Viper is the Python C/API binding of Viper; it implements the seamless bidirectional bridge.
- dsviper is the Python extension module of P_Viper, distributed as a wheel.
Learning dsviper means learning Viper. The documentation dsviper to Viper explains how to transfer your dsviper knowledge to the Viper C++ API.
The type and value system¶
- The type system allows you to dynamically construct types.
- The value system instantiates a value from a type with the universal value constructor Value.create(type[, initial_value]).
- There is always a corresponding value class for a type class. When you create an instance with a TypeFloat() you get a ValueFloat...
$ python3
>>> from dsviper import *
>>> type(Value.create(TypeFloat()))
<class 'dsviper.ValueFloat'>
# or use Type.<constant>
>>> type(Value.create(Type.STRING))
<class 'dsviper.ValueString'>
>>> type(Value.create(TypeMap(Type.UUID, Type.STRING)))
<class 'dsviper.ValueMap'>
# or use value constructor
>>> type(ValueMap(TypeMap(TypeUUId(), TypeString())))
<class 'dsviper.ValueMap'>
...
We create a type vector of string.
# Define a type vector<string>
>>> type_vector_of_string = TypeVector(Type.STRING)
# Description representation
>>> type_vector_of_string.description()
'vector<string>'
# The element type T of vector<T>
>>> type_vector_of_string.element_type()
string
We create a value of type vector of string with Value.create(...).
# Instantiate a vector<string>
>>> v = Value.create(type_vector_of_string)
>>> type(v)
<class 'dsviper.ValueVector'>
>>> v.append("hello")
>>> v.append("world")
>>> v
['hello', 'world']
# The Type of the Value
>>> v.type()
vector<string>
# A More descriptive representation
>>> v.description()
"['hello', 'world']:vector<string>"
We use the seamless bridge to extend v with a Python list.
>>> v.extend(["from", "python"])
>>> v
['hello', 'world', 'from', 'python']
Since the seamless bridge knows the type of v from the metadata v.type(), it can consume the Python list from a strongly typed perspective and create the appropriate Viper values.
We can see the dynamic type checker in action.
>>> v.append(3)
File "<python-input-20>", line 1, in <module>
v.append(3)
~~~~~~~~^^^
dsviper.ViperError: [pid(9443)@mac.home]:P_Viper:P_ViperDecoderErrors:0:expected type 'str', got 'int' while decoding 'checkValue.string'.
We now know the essentials of the type and value system.
Data model, definitions and types¶
Viper was designed to realize an ecosystem of data models expressible as immutable Definitions.
A Definitions is a way to create, document and share the types of a data model. The type system has specific types for modeling a relational model with surrogate keys and foreign keys at various levels of abstraction (see Digital Substrate Model).
The type system supports standard primitive types, standard generic containers and specific types for modeling the data model.
Types and values¶
We construct a value with Value.create(type[, initial_value]). If initial_value
is not specified, the value is initialized with the zero of the type; otherwise,
initial_value is consumed from a strongly typed perspective to initialize the value.
Bool¶
- DSM: bool
- Type: TypeBool
- Value: ValueBool
>>> Value.create(Type.BOOL)
false
>>> Value.create(Type.BOOL, True)
true
>>> Value.create(Type.BOOL, 4)
Traceback (most recent call last):
File "<python-input-23>", line 1, in <module>
Value.create(Type.BOOL, 4)
~~~~~~~~~~~~^^^^^^^^^^^^^^
dsviper.ViperError: [pid(9443)@mac.home]:P_Viper:P_ViperDecoderErrors:0:expected type 'bool', got 'int' while decoding 'create'.
Integers¶
- DSM: uint8|16|32|64, int8|16|32|64
- Type: TypeUInt8|16|32|64, TypeInt8|16|32|64
- Value: ValueUInt8|16|32|64, ValueInt8|16|32|64
>>> Value.create(Type.INT8)
0
>>> Value.create(Type.INT8, 42)
42
>>> Value.create(Type.INT8, 256)
Traceback (most recent call last):
File "<python-input-27>", line 1, in <module>
Value.create(Type.INT8, 256)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^
dsviper.ViperError: [pid(9443)@mac.home]:P_Viper:P_ViperDecoderErrors:2:value is not in the range of 'int8' while decoding 'create.int8'.
Reals¶
- DSM: float, double
- Type: TypeFloat, TypeDouble
- Value: ValueFloat, ValueDouble
>>> Value.create(Type.FLOAT)
0.0
>>> Value.create(Type.FLOAT, 42)
42.0
>>> Value.create(Type.FLOAT, 1e300)
Traceback (most recent call last):
File "<python-input-31>", line 1, in <module>
Value.create(Type.FLOAT, 1e300)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
dsviper.ViperError: [pid(9443)@mac.home]:P_Viper:P_ViperDecoderErrors:2:value is not in the range of 'float' while decoding 'create.float'.
String¶
- DSM: string
- Type: TypeString
- Value: ValueString
Viper strings must be UTF-8 encoded.
>>> Value.create(Type.STRING)
''
>>> Value.create(Type.STRING, "a string")
'a string'
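A plain-Python illustration of what "UTF-8 encoded" requires: every Python str encodes to valid UTF-8 bytes, but arbitrary bytes may not decode back.

```python
# A Python str always encodes to valid UTF-8 bytes.
data = "héllo".encode("utf-8")

# Arbitrary bytes are not necessarily valid UTF-8: the byte 0xFF can
# never appear in a well-formed UTF-8 sequence.
try:
    bytes([0xFF, 0x68, 0x69]).decode("utf-8")
    is_valid_utf8 = True
except UnicodeDecodeError:
    is_valid_utf8 = False

print(is_valid_utf8)  # False
```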
UUID¶
- DSM: uuid
- Type: TypeUUId
- Value: ValueUUId
ValueUUId.create(...) generates a new ValueUUId or creates a ValueUUId from a uuid_string.
A ValueUUId is a multiplatform implementation of a UUID4.
>>> uuid = ValueUUId.create()
>>> uuid = ValueUUId.create("c98ddeec-0496-494c-b1e9-b470ff204be3")
>>> uuid = ValueUUId.create("c98ddeec-0496-494c-b1e9-b470ff204be334")
Traceback (most recent call last):
File "<python-input-36>", line 1, in <module>
uuid = ValueUUId.create("c98ddeec-0496-494c-b1e9-b470ff204be334")
dsviper.ViperError: [pid(9443)@mac.home]:Viper.UUId:UUIdErrors:1:The literal 'c98ddeec-0496-494c-b1e9-b470ff204be334' is a malformed UUId.
>>> uuid = Value.create(Type.UUID)
>>> uuid = Value.create(Type.UUID, "c98ddeec-0496-494c-b1e9-b470ff204be3")
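The behavior is analogous to Python's standard uuid module: generate a random UUID4, parse a canonical literal, and reject a malformed one.

```python
import uuid

# Generate a new random UUID (version 4).
u = uuid.uuid4()

# Parse a canonical UUID literal.
parsed = uuid.UUID("c98ddeec-0496-494c-b1e9-b470ff204be3")

# A malformed literal (two extra hex digits) is rejected.
try:
    uuid.UUID("c98ddeec-0496-494c-b1e9-b470ff204be334")
    well_formed = True
except ValueError:
    well_formed = False

print(u.version, well_formed)  # 4 False
```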
BlobId¶
- DSM: blob_id
- Type: TypeBlobId
- Value: ValueBlobId
A ValueBlobId is like a UUID but has specific semantics for the persistence layer: the value of a blob_id must be a reference to an existing blob in the persistence layer.
A ValueBlobId is computed from the layout of the blob and the content of the blob.
>>> ValueBlobId(BlobLayout(), ValueBlob(bytes([1,2,3,4])))
6b8f3ca756046be29244d9bdb6b5ca5c00468ad5
>>> ValueBlobId.try_parse("6b8f3ca756046be29244d9bdb6b5ca5c00468ad5")
6b8f3ca756046be29244d9bdb6b5ca5c00468ad5
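The exact digest computed by Viper is not described here, but the principle is plain content addressing: hash the layout together with the content, so identical blobs always produce the identical id. A sketch with hashlib (`blob_id` below is a hypothetical helper, not the dsviper API, and the real digest construction may differ):

```python
import hashlib

def blob_id(layout, content):
    """Hypothetical helper: content-addressed id from layout + bytes.

    The real Viper digest construction is not documented here; this
    only demonstrates the content-addressing principle."""
    h = hashlib.sha1()
    h.update(layout.encode("utf-8"))
    h.update(content)
    return h.hexdigest()

# Identical layout and content always yield the identical id, so a
# blob_id is a stable reference to one immutable blob.
a = blob_id("uchar-1", bytes([1, 2, 3, 4]))
b = blob_id("uchar-1", bytes([1, 2, 3, 4]))
c = blob_id("uchar-1", bytes([1, 2, 3, 5]))
print(a == b, a == c)  # True False
```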
Vec¶
- DSM: vec<T, n> where T is numeric and n is a constant
- Type: TypeVec
- Value: ValueVec
Numeric types are integers and reals.
# DSM vec<float, 3>
>>> t = TypeVec(Type.FLOAT, 3)
>>> Value.create(t)
(0.0, 0.0, 0.0)
The value of any container can be initialized by the optional parameter
initial_value of Value.create(...).
>>> Value.create(t, (1,2,3)) # initialized from Python tuple.
(1.0, 2.0, 3.0)
Mat¶
- DSM: mat<T, columns, rows> where T is numeric and columns and rows are constants
- Type: TypeMat
- Value: ValueMat
# DSM mat<float, 2, 3>
>>> t = TypeMat(Type.FLOAT, 2, 3)
>>> Value.create(t)
[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
>>> Value.create(t, ((1,2,3), (4,5,6))) # initialized from Python tuple.
[(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
Vector¶
- DSM: vector<T>
- Type: TypeVector
- Value: ValueVector is compatible with the Python list API.
# DSM vector<string>
>>> Value.create(TypeVector(Type.STRING), ["this", "is", "the", "initial", "value"])
['this', 'is', 'the', 'initial', 'value']
Set¶
- DSM: set<T>
- Type: TypeSet
- Value: ValueSet is compatible with the Python set API.
# DSM set<int64>
>>> Value.create(TypeSet(Type.INT64), {1,2,3,2,1})
{1, 2, 3}
Map¶
- DSM: map<K, V>
- Type: TypeMap is a generic type.
- Value: ValueMap is compatible with the Python dict API.
# DSM map<string, int64>
>>> Value.create(TypeMap(Type.STRING, Type.INT64), {"1": 10, "2": 20})
{'1': 10, '2': 20}
Optional¶
- DSM: optional<T>
- Type: TypeOptional
- Value: ValueOptional is a container with a value or none.
TypeOptional<T> is not the Python union type T | None.
# An empty optional<string>
>>> v = Value.create(TypeOptional(Type.STRING))
>>> v.type()
optional<string>
>>> v
nil
# Try to unwrap ...
>>> v.unwrap()
Traceback (most recent call last):
File "<python-input-54>", line 1, in <module>
v.unwrap()
~~~~~~~~^^
dsviper.ViperError: [pid(9443)@mac.home]:Viper.ValueOptional:ContainerErrors:1:Try to unwrap empty optional<string> [unwrap].
# Set a value
>>> v.wrap("the value")
>>> v.unwrap()
'the value'
>>> Value.create(TypeOptional(Type.INT64), 42)
Optional(42)
Tuple¶
- DSM: tuple<T0, ...>
- Type: TypeTuple
- Value: ValueTuple is compatible with the Python tuple API.
>>> Value.create(TypeTuple([Type.STRING, Type.INT8]), ("a string", 42))
('a string', 42)
Variant¶
- DSM: variant<T0, ...>
- Type: TypeVariant
- Value: ValueVariant is a container with one value.
>>> v = Value.create(TypeVariant([Type.STRING, Type.INT8]), "a string")
>>> v.type()
int8|string
>>> v.wrap(42)
>>> v.wrap(42.0)
File "<python-input-64>", line 1, in <module>
v.wrap(42.0)
~~~~~~^^^^^^
dsviper.ViperError: [pid(9443)@mac.home]:P_Viper:P_ViperDecoderErrors:0:expected type 'string|int8', got 'float' while decoding 'wrap.Variant'.
XArray¶
- DSM: xarray<T>
- Type: TypeXArray
- Value: ValueXArray is like a ValueVector but is designed to keep the order of elements during collaborative mutations by replacing index with Position.
Any¶
- DSM: any
- Type: TypeAny supports all types.
- Value: ValueAny is a container with one value.
>>> v = Value.create(TypeVector(Type.ANY))
>>> v.append("a string")
>>> v.append(42)
>>> v.description()
"['a string':string:any, 42:int64:any]:vector<any>"
Type Deduction for Python Objects¶
The static method Value.deduce(...) creates a value with the appropriate type by recursively
mapping the Python type of the input object to a Viper type.
# list[int]
>>> v = Value.deduce([1,2,3])
>>> v.description()
'[1, 2, 3]:vector<int64>'
# a strange Python object?
>>> v = Value.deduce([1, 2, (3, "string"), {1.0, 2.0, 3.0}, {"a": 0, "b": 1}])
>>> v
[1, 2, (3, 'string'), {1.0, 2.0, 3.0}, {'a': 0, 'b': 1}]
>>> v.type()
vector<int64|map<string, int64>|set<double>|tuple<int64, string>>
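The recursive mapping behind Value.deduce(...) can be sketched in plain Python. This is an illustrative re-implementation of the idea only; the real dsviper deduction rules may differ in details such as the ordering of variant alternatives.

```python
def deduce_type(obj):
    """Illustrative sketch of recursive type deduction."""
    if isinstance(obj, bool):     # check bool before int (bool is a subclass)
        return "bool"
    if isinstance(obj, int):
        return "int64"
    if isinstance(obj, float):
        return "double"
    if isinstance(obj, str):
        return "string"
    if isinstance(obj, tuple):
        return "tuple<%s>" % ", ".join(deduce_type(e) for e in obj)
    if isinstance(obj, (list, set)):
        # Mixed element types deduce a variant, sketched here as the
        # sorted union of the deduced element types.
        inner = "|".join(sorted({deduce_type(e) for e in obj}))
        return ("vector<%s>" if isinstance(obj, list) else "set<%s>") % inner
    if isinstance(obj, dict):
        keys = "|".join(sorted({deduce_type(k) for k in obj}))
        vals = "|".join(sorted({deduce_type(v) for v in obj.values()}))
        return "map<%s, %s>" % (keys, vals)
    raise TypeError("cannot deduce a type for %r" % type(obj).__name__)

print(deduce_type([1, 2, 3]))        # vector<int64>
print(deduce_type({(1, 2): "One"}))  # map<tuple<int64, int64>, string>
```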
Data Model Types¶
To define a data model, dsviper introduces five fundamental notions.
- Namespace: a space to define types.
- Concept: an abstract thing (like an abstract type).
- Key: a way to identify the instantiation of a Concept (like a UUID<Concept>).
- Document: a piece of information expressible in the type system (like a struct Document {...}).
- Attachment: a way to associate a Document with a Key (like a map<Key, Document>).
A data model is a collection of concepts, clubs, structures, enumerations and attachments.
see Digital Substrate Model for the theory.
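As a mental model only, the five notions can be pictured with hypothetical plain-Python classes (the real dsviper types are metadata-driven and much richer):

```python
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Key:
    """Identifies one instantiation of a Concept (like a UUID<Concept>)."""
    concept: str
    id: uuid.UUID = field(default_factory=uuid.uuid4)

@dataclass
class Attachment:
    """Associates Documents with Keys (like a map<Key, Document>)."""
    name: str
    documents: dict = field(default_factory=dict)

    def set(self, key, document):
        self.documents[key] = document

# concept User;  attachment<User, Login> login;
login = Attachment("login")
user_key = Key("User")
login.set(user_key, {"nickname": "zoop", "password": ""})
```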
Definitions¶
A Definitions is a way to create, share and document the types of a data model for the ecosystem. In the ecosystem, each definition requires a Definition Identifier: for types we use the term TypeId, and for attachments the term AttachmentId.
We create a Definitions.
# Create a new collection of definitions
>>> defs = Definitions()
# Create a namespace
>>> ns = NameSpace(ValueUUId("f529bc42-0618-4f54-a3fb-d55f95c5ad03"), "Tuto")
>>> ns
Tuto::
# Create the Concept 'User'
# DSM concept User;
>>> t_User = defs.create_concept(ns, "User")
>>> t_User
Tuto::User
>>> t_User.type_name().name_space(), t_User.type_name().name()
(Tuto::, 'User')
>>> t_User.runtime_id()
bfb135a1-49e6-132a-e693-b84be57e9726
Structure¶
- DSM: struct
- Type: TypeStructure
- Value: ValueStructure
A Structure is a piece of information (an aggregate type) constructed from a
TypeStructureDescriptor.
# DSM: struct Login { string nickname; string password; };
>>> d_Login = TypeStructureDescriptor("Login")
>>> d_Login.add_field("nickname", Type.STRING)
>>> d_Login.add_field("password", Type.STRING)
>>> t_Login = defs.create_structure(ns, d_Login)
>>> t_Login
Tuto::Login
We use Value.create(...) to instantiate a new value for the type struct Login.
>>> Value.create(t_Login)
{nickname='', password=''}
# initialize one field from a dict[str, Any]
>>> Value.create(t_Login, {"nickname": "zoop"})
{nickname='zoop', password=''}
We can also specify the default value of a field.
>>> d_S = TypeStructureDescriptor("S")
>>> d_S.add_field("f", ValueFloat(1.77))
>>> d_S.add_field("v", Value.create(TypeVector(Type.INT64), [1,2,3]))
>>> t_s = defs.create_structure(ns, d_S)
>>> Value.create(t_s)
{f=1.77, v=[1, 2, 3]}
Enumeration¶
- DSM: enum
- Type: TypeEnumeration
- Value: ValueEnumeration
An Enumeration is a type that consists of a set of named constants, known as
enumeration cases. We use a TypeEnumerationDescriptor to add the cases.
>>> d_Skill = TypeEnumerationDescriptor("Skill")
>>> d_Skill.add_case("beginner")
>>> d_Skill.add_case("expert")
>>> t_Skill = defs.create_enumeration(ns, d_Skill)
>>> t_Skill.description()
'{.beginner, .expert}:Tuto::Skill'
We use Value.create(...) to create an enum.
# The default value is the first case.
>>> Value.create(t_Skill)
.beginner
>>> Value.create(t_Skill, ".expert")
.expert
Concept and Key¶
A Concept is generally an abstract term from the data domain to be modeled. Given its totally abstract nature, there is no concrete implementation of a Concept.
Only ValueKey exists, which uniquely identifies the instantiation of a Concept.
- DSM: key<T> where T is an abstraction (ex: concept, club, any_concept)
- Type: TypeConcept
- Value: ValueKey
see Digital Substrate Model for the theory.
A Material is a concept. Concepts can also be strongly related to each other: a MirrorMaterial, a MatteMaterial and a MultiLayerMaterial are also a Material.
We define this relation by passing the type of the parent concept when creating a concept.
# DSM: concept Material {...};
>>> t_Material = defs.create_concept(ns, "Material")
# MaterialMatte is a Material by passing t_Material
# DSM: concept MaterialMatte is a Material {...};
>>> t_MaterialMatte = defs.create_concept(ns, "MaterialMatte", parent=t_Material)
# Create an instance of a MaterialMatte
# DSM: key<MaterialMatte>
>>> mm_key = ValueKey.create(t_MaterialMatte)
>>> type(mm_key)
<class 'dsviper.ValueKey'>
>>> mm_key.description()
'710ee41f-0f1f-41de-91da-c2b27dcf8e41:key<Tuto::MaterialMatte>'
# Convert the key to the parent Key
# DSM: key<Material>
>>> m_key = mm_key.to_parent_key()
>>> m_key.description()
'710ee41f-0f1f-41de-91da-c2b27dcf8e41:key<Tuto::Material>(Tuto::MaterialMatte)'
# Convert back to a ConceptKey
>>> c_key = m_key.to_concept_key()
>>> c_key.description()
'710ee41f-0f1f-41de-91da-c2b27dcf8e41:key<Tuto::MaterialMatte>'
Club and Key¶
A club is used to group concepts when they are not related by abstraction. A concept becomes a member of a club by registering a membership. Given its totally abstract nature, there is no concrete implementation of a club.
Only ValueKey exists, which uniquely identifies the instantiation of a concept through the membership relation.
- DSM: club
- Type: TypeClub
- Value: ValueKey
see Digital Substrate Model for the theory.
# Unrelated Concepts
>>> t_GeometryLayer = defs.create_concept(ns, "GeometryLayer")
>>> t_AspectLayer = defs.create_concept(ns, "AspectLayer")
# A Club
# DSM: club ConfigurationTargetElement {...};
>>> t_ConfigurationTargetElement = defs.create_club(ns, "ConfigurationTargetElement")
# Create Membership
# DSM: membership ConfigurationTargetElement GeometryLayer;
>>> defs.create_membership(t_ConfigurationTargetElement, t_GeometryLayer)
>>> defs.create_membership(t_ConfigurationTargetElement, t_AspectLayer)
# Create a new instance of a GeometryLayer
>>> gl_key = ValueKey.create(t_GeometryLayer)
>>> gl_key.description()
'88e59c56-5258-4e85-9e99-33345d245bb0:key<Tuto::GeometryLayer>'
# Convert the key to a ClubKey
# DSM key<ConfigurationTargetElement>
>>> cte_key = gl_key.to_club_key(t_ConfigurationTargetElement)
>>> cte_key.description()
'88e59c56-5258-4e85-9e99-33345d245bb0:key<Tuto::ConfigurationTargetElement>(Tuto::GeometryLayer)'
# Convert back to a ConceptKey
>>> key = cte_key.to_concept_key()
>>> key.description()
'88e59c56-5258-4e85-9e99-33345d245bb0:key<Tuto::GeometryLayer>'
AnyConcept and Key¶
AnyConceptType accepts all defined Concepts. Given its totally abstract nature, there is no concrete implementation of an AnyConceptType.
Only ValueKey exists, which uniquely identifies the instantiation of a defined Concept.
- DSM: any_concept
- Type: TypeAnyConcept
- Value: ValueKey
see Digital Substrate Model for the theory.
# DSM: key<User>
>>> user_key = ValueKey.create(t_User)
# Convert to AnyConceptKey
# DSM: key<any_concept>
>>> ac_key = user_key.to_any_concept_key()
>>> ac_key.description()
'40ea5e86-3535-474a-8b16-5808c9da7c6e:key<any_concept>(Tuto::User)'
# Convert back to ConceptKey
>>> key = ac_key.to_concept_key()
>>> key.description()
'40ea5e86-3535-474a-8b16-5808c9da7c6e:key<Tuto::User>'
Attachment¶
An attachment describes an association of a key with a document.
- DSM: attachment<K, D> name
We create the Attachment.
# DSM attachment<User, Login> login;
>>> a_User_Login = defs.create_attachment(ns, "login", t_User, t_Login)
>>> a_User_Login
attachment<User, Login> Tuto::login
>>> a_User_Login.key_type()
Tuto::User
>>> a_User_Login.document_type()
Tuto::Login
We use the attachment API to create a new key and a new document.
>>> key = a_User_Login.create_key()
>>> key.description()
'ca78772c-bef5-4dde-bff4-43af82bb8288:key<Tuto::User>'
>>> doc = a_User_Login.create_document()
>>> doc.description()
"{nickname='':string, password='':string}:Tuto::Login"
However, we need a database to persist this association.
# Create a Database in memory
>>> db = Database.create_in_memory()
# Extend the definitions of the database
>>> db.extend_definitions(defs.const())
# Persist the association
>>> db.begin_transaction()
>>> db.set(a_User_Login, key, doc)
>>> db.commit()
# Retrieve the association
>>> login = db.get(a_User_Login, key)
>>> type(login)
<class 'dsviper.ValueOptional'>
>>> login.unwrap()
{nickname='', password=''}
Path¶
A Path locates a particular piece of information in a value. A Path is constructed component by component and is applicable on any value if it can traverse the value component by component to the end point.
A Path is used to get and set a value at the end point.
# Create two structures with the field named "f_a"
>>> defs = Definitions()
>>> ns = NameSpace(ValueUUId("f529bc42-0618-4f54-a3fb-d55f95c5ad03"), "Tuto")
>>> d_S1 = TypeStructureDescriptor("S1")
>>> d_S1.add_field("f_a", Type.STRING)
>>> t_S1 = defs.create_structure(ns, ValueUUId.create(), d_S1)
>>> d_S2 = TypeStructureDescriptor("S2")
>>> d_S2.add_field("f_a", Type.INT64)
>>> t_S2 = defs.create_structure(ns, d_S2)
# Instantiate the structures
>>> s1 = Value.create(t_S1)
>>> s2 = Value.create(t_S2)
# Create path for a field named "f_a"
>>> p = Path.from_field("f_a").const()
>>> p
.f_a
# Get the value for s1.f_a and s2.f_a
>>> p.at(s1)
''
>>> p.at(s2)
0
# Set the value for s1.f_a and s2.f_a
>>> p.set(s1, "a string")
>>> s1
{f_a='a string'}
>>> p.set(s2, 42)
>>> s2
{f_a=42}
# The type checker in action
>>> p.set(s2, "oops")
File "<python-input-76>", line 1, in <module>
p.set(s2, "oops")
~~~~~^^^^^^^^^^^^
dsviper.ViperError: [pid(9950)@mac.home]:P_Viper:P_ViperDecoderErrors:0:expected type 'long', got 'str' while decoding 'checkValue.int64'.
A Path component can be:
- a field name for a Structure,
- an unwrap directive for an Optional,
- an index for a Vector,
- a key for a Map,
- a Position for an XArray.
# A complicated path with field, index, unwrap and key.
>>> p = Path.from_field("addresses").index(0).unwrap().field("tags").key(("zoop", 42)).const()
>>> p
.addresses[0].tags[('zoop', 42)]
>>> p.components()
[
{type: Field, value: 'addresses':string},
{type: Index, value: 0:uint64},
{type: Unwrap, value: void:void},
{type: Field, value: 'tags':string},
{type: Key, value: ('zoop', 42):tuple<string, int64>}
]
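As a mental model, a path is an ordered list of components applied one by one until the end point. A plain-Python traversal sketch (illustrative only, using dicts and lists in place of Viper values):

```python
def path_at(value, components):
    """Illustrative traversal: apply each component in order."""
    for kind, arg in components:
        if kind == "unwrap":
            # Optional: None stands in for an empty optional ('nil').
            if value is None:
                raise ValueError("try to unwrap empty optional")
        else:
            # field (dict), index (list) and key (dict) all reduce to
            # a subscript in this plain-Python sketch.
            value = value[arg]
    return value

doc = {"addresses": [{"tags": {("zoop", 42): "home"}}]}
components = [("field", "addresses"), ("index", 0),
              ("unwrap", None), ("field", "tags"),
              ("key", ("zoop", 42))]
print(path_at(doc, components))  # home
```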
DSM Definitions¶
We know how to create a data model with the type system and a Definitions, but for many usages we can use the DSM Language to do the same thing.
see Digital Substrate Model for the theory.
Create and edit the content of the file model.dsm.
namespace Tuto {f529bc42-0618-4f54-a3fb-d55f95c5ad03} {
concept User;
struct Login {
string nickname;
string password;
};
struct Identity {
string firstName;
string lastName;
};
attachment<User, Login> login;
attachment<User, Identity> identity;
};
First we need to read and parse the file model.dsm to get a DSMDefinitions.
>>> from dsviper import *
>>> parse_report, dsm_definitions, defs = DSMBuilder.assemble("model.dsm").parse()
DSMDefinitions is the root class to recursively introspect definitions.
>>> dsm_definitions.concepts()
[Tuto::User]
>>> dsm_definitions.structures()
[Tuto::Login, Tuto::Identity]
>>> dsm_definitions.structures()[0].fields()
[nickname, password]
>>> field = dsm_definitions.structures()[0].fields()[0]
>>> field.name(), field.type()
('nickname', string)
>>> dsm_definitions.attachments()
[attachment<User, Login> Tuto::login, attachment<User, Identity> Tuto::identity]
>>> dsm_definitions.attachments()[0].key_type()
Tuto::User
>>> dsm_definitions.attachments()[0].document_type()
Tuto::Login
With the introspection API, we can easily write a script to convert a DSMDefinitions to other representations like Graphviz, UML, SQL, C++, Python...
We just convert the DSMDefinitions to a Definitions and retrieve the types with the TypeId (Definition Identifier).
>>> t_User = defs.query_types("User")[0]
>>> t_Login = defs.query_types("Login")[0]
>>> Value.create(t_Login)
{nickname='', password=''}
>>> ValueKey.create(t_User)
9c052fa7-f1b5-43bc-a6b2-01165368e5ae
DSMDefinitions types are static and descriptive; Definitions types are dynamic and instantiable.
JSON representation¶
Viper can generate a JSON representation for any value.
We deduce a complicated value.
>>> v = Value.deduce({(1,2):"One", (1,3):"Two"})
>>> v.description()
"{(1, 2): 'One', (1, 3): 'Two'}:map<tuple<int64, int64>, string>"
# Encode a json representation of a map<tuple<int64, int64>, string>
>>> js = Value.json_encode(v)
>>> js
'[[[1,2],"One"],[[1,3],"Two"]]'
We decode the complicated value from its type.
>>> dv = Value.json_decode(js, v.type(), Definitions().const())
>>> dv.description()
"{(1, 2): 'One', (1, 3): 'Two'}:map<tuple<int64, int64>, string>"
We can also encode a DSM Definitions.
# Create Definitions
>>> defs = Definitions()
>>> ns = NameSpace(ValueUUId("f529bc42-0618-4f54-a3fb-d55f95c5ad03"), "Tuto")
>>> defs.create_concept(ns, "User")
Tuto::User
>>> d_Identity = TypeStructureDescriptor("Identity")
>>> d_Identity.add_field("firstname", Type.STRING)
>>> d_Identity.add_field("lastname", Type.STRING)
>>> defs.create_structure(ns, d_Identity)
Tuto::Identity
# Convert to DSM Definitions
>>> dsm_definitions = defs.const().to_dsm_definitions()
# Encode a json representation
>>> js = dsm_definitions.json_encode(2)
>>> print(js)
{...}
# Decode the json representation
>>> d_dsm_definitions = DSMDefinitions.json_decode(js)
# print the Definitions in the DSM Language
>>> print(d_dsm_definitions.to_dsm())
namespace Tuto {f529bc42-0618-4f54-a3fb-d55f95c5ad03} {
concept User;
struct Identity {
string firstname;
string lastname;
};
}; // ns Tuto
See the documentation for BSON interop.
Binary representation¶
Viper uses a binary representation to transmit and persist DSM Definitions, Definitions, Value and Path.
>>> v = Value.deduce({(1,2):"One", (1,3):"Two"})
>>> v_e = Value.encode(v)
>>> v_e
blob(62)
>>> v_d = Value.decode(v_e, v.type(), Definitions().const())
>>> v_d
{(1, 2): 'One', (1, 3): 'Two'}
Python representation¶
We can project a value to a Python object with the dumps function.
>>> v = Value.deduce({(1,2):"One", (1,3):"Two"})
>>> v.type()
map<tuple<int64, int64>, string>
>>> p = Value.dumps(v)
>>> type(p)
<class 'dict'>
>>> p
{(1, 2): 'One', (1, 3): 'Two'}
and safely load a Python object from a type with the loads function.
>>> d_v = Value.loads(p, v.type(), Definitions().const())
>>> d_v.description()
"{(1, 2): 'One', (1, 3): 'Two'}:map<tuple<int64, int64>, string>"
Database¶
A Database is a Key-Value transactional database.
Assume we have those definitions in the file model.dsm.
namespace Tuto {f529bc42-0618-4f54-a3fb-d55f95c5ad03} {
concept User;
struct Login {
string nickname;
string password;
};
struct Identity {
string firstName;
string lastName;
};
attachment<User, Login> login;
attachment<User, Identity> identity;
};
Let's use our knowledge of DSM Definitions, Definitions and values.
First, we read, parse and convert the DSM definitions.
>>> parse_report, dsm_definitions, defs = DSMBuilder.assemble("model.dsm").parse()
We create a Database to persist data.
>>> db = Database.create_in_memory()
>>> db.extend_definitions(defs)
We retrieve the attachment<User, Login> login by injecting module constants
derived from the definitions.
>>> db.definitions().inject()
>>> TUTO_A_USER_LOGIN
attachment<User, Login> Tuto::login
>>> TUTO_T_USER
Tuto::User
We create a key and a document from the attachment API and commit the association in a transaction.
>>> key = TUTO_A_USER_LOGIN.create_key()
>>> doc = TUTO_A_USER_LOGIN.create_document()
>>> doc.nickname = "zoop"
>>> doc.password = "robust"
>>> db.begin_transaction()
>>> db.set(TUTO_A_USER_LOGIN, key, doc)
True
>>> db.commit()
We retrieve the document.
>>> db.get(TUTO_A_USER_LOGIN, key)
Optional({nickname='zoop', password='robust'})
We delete the association.
>>> db.begin_transaction()
>>> db.delete(TUTO_A_USER_LOGIN, key)
True
>>> db.commit()
We try to retrieve the deleted document.
>>> db.get(TUTO_A_USER_LOGIN, key)
nil
We close the database.
>>> db.close()
Commit database¶
A Commit database persists the history of all mutations of your data in a DAG of commits.
Assume we have those definitions in the file model.dsm.
namespace Tuto {f529bc42-0618-4f54-a3fb-d55f95c5ad03} {
concept User;
struct Login {
string nickname;
string password;
};
struct Identity {
string firstName;
string lastName;
};
attachment<User, Login> login;
attachment<User, Identity> identity;
};
Let's use our knowledge of DSM Definitions, Definitions, Path and values.
We read and parse the DSM definitions.
>>> parse_report, dsm_definitions, defs = DSMBuilder.assemble("model.dsm").parse()
We create a CommitDatabase to persist our data.
>>> db = CommitDatabase.create_in_memory()
We convert and extend the definitions embedded in the database.
>>> db.extend_definitions(defs)
We retrieve the attachment<User, Login> login by injecting
some constants derived from the definition in the main module.
>>> db.definitions().inject()
>>> TUTO_A_USER_LOGIN
attachment<User, Login> Tuto::login
>>> TUTO_T_USER
Tuto::User
We create a MutableState to get access to the CommitMutating interface
and mutate our data.
>>> mutable_state = CommitMutableState(db.initial_state())
>>> key = TUTO_A_USER_LOGIN.create_key()
>>> doc = TUTO_A_USER_LOGIN.create_document()
>>> doc.nickname = "zoop"
>>> doc.password = "robust"
>>> mutable_state.commit_mutating().set(TUTO_A_USER_LOGIN, key, doc)
>>> db.commit_mutations("First Commit", mutable_state)
07cb582de8589855153416d91101c6173645aaf6
We retrieve the document from the state associated with last_commit_id through
the CommitGetting interface.
>>> state = db.state(db.last_commit_id())
>>> state.commit_getting().get(TUTO_A_USER_LOGIN, key)
Optional({nickname='zoop', password='robust'})
Now, we want to only modify the field .nickname of the document.
>>> mutable_state = CommitMutableState(state)
>>> mutable_state.commit_mutating().update(TUTO_A_USER_LOGIN, key, Path.from_field("nickname").const(), "john")
>>> db.commit_mutations("Update nickname", mutable_state)
d7432dc9378b9af83aad445a072eb99c37c908de
We retrieve the document from the last commit.
>>> state = db.state(db.last_commit_id())
>>> state.commit_getting().get(TUTO_A_USER_LOGIN, key)
Optional({nickname='john', password='robust'})
We retrieve the document from the first commit.
>>> state = db.state(db.first_commit_id())
>>> state.commit_getting().get(TUTO_A_USER_LOGIN, key)
Optional({nickname='zoop', password='robust'})
We retrieve the document from the initial state.
>>> state = db.initial_state()
>>> state.commit_getting().get(TUTO_A_USER_LOGIN, key)
nil
>>> db.close()
Blob¶
A blob is used to encode a resource, like an image, a 3D mesh or any complicated binary structure.
ValueBlob, BlobLayout and ValueBlobId¶
A ValueBlob is an immutable blob. A BlobLayout describes the type of one element in the blob, so we can interpret a blob as an immutable array of elements.
The layout of the generic blob is 'uchar-1' by convention. For blobs which encode the positions, the normals, the uvs or the triangle indices of a 3D mesh, we can respectively interpret the content of each blob as an array with the appropriate layout; a float-3 is like a tuple<float, float, float>.
Since a ValueBlob is immutable, we can compute a ValueBlobId to uniquely identify a blob from its layout and its content.
>>> blob = ValueBlob(bytes([1,2,3,4]))
>>> blob_layout = BlobLayout()
>>> blob_layout
'uchar-1'
>>> ValueBlobId(blob_layout, blob)
6b8f3ca756046be29244d9bdb6b5ca5c00468ad5
ValueBlobEncoder and ValueBlobView¶
A ValueBlobEncoder is used to fill the content of a blob with an interface adapted
to the type described by the BlobLayout. We can use a Python tuple[float, float, float]
for a float-3 value.
The position attribute of the vertices is represented by a list[tuple[float, float, float]].
>>> positions = [(0.0,0.0,0.0), (0.0,1.0,1.0), (1.0,0.0,0.0), (1.0,1.0,0.0)]
>>> be = ValueBlobEncoder(BlobLayout('float', 3))
>>> for e in positions:
...     be.write(e)
...
>>> blob = be.end_encoding()
A ValueBlobView is used to interpret the blob from the BlobLayout perspective.
>>> d_positions = []
>>> bv = ValueBlobView(BlobLayout('float', 3), blob)
>>> for e in bv:
...     d_positions.append(e)
...
>>> d_positions
[(0.0, 0.0, 0.0), (0.0, 1.0, 1.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
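Under a 'float-3' layout, each element is three 32-bit floats, i.e. 12 bytes. The round trip above can be mirrored with Python's standard struct module; this only illustrates the byte layout, not the dsviper codec itself.

```python
import struct

positions = [(0.0, 0.0, 0.0), (0.0, 1.0, 1.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]

# Encode: each float-3 element packs to 3 * 4 = 12 bytes.
blob = b"".join(struct.pack("<3f", *p) for p in positions)

# Decode: iter_unpack walks the blob element by element, like a view.
decoded = list(struct.iter_unpack("<3f", blob))

print(decoded == positions)  # True
```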
StreamEncoder and StreamDecoder¶
We can also use the codec system to encode and decode a blob.
We encode a list of floats [1.0, 2.0, 3.0, 4.0, 5.0, 6.0].
>>> floats = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
>>> encoder = Codec.STREAM_BINARY.create_encoder()
>>> for e in floats:
... encoder.write_float(e)
>>> blob = encoder.end_encoding()
We decode the blob to a list[float].
>>> decoder = Codec.STREAM_BINARY.create_decoder(blob)
>>> sizer = Codec.STREAM_BINARY.create_sizer()
>>> count = int(len(blob) / sizer.size_of_float())
>>> d_floats = []
>>> for i in range(count):
... d_floats.append(decoder.read_float())
>>> print(d_floats)
[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ValueBlob supports the Python Buffer Protocol as a contiguous array of unsigned bytes.
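Supporting the buffer protocol means the blob's bytes can be viewed and reinterpreted without copying. With a plain bytes object standing in for a ValueBlob (both expose the buffer protocol), the pattern looks like:

```python
import struct

# A bytes object stands in for a ValueBlob here.
blob = bytes([1, 2, 3, 4, 5, 6, 7, 8])

mv = memoryview(blob)  # zero-copy view of the underlying bytes
assert mv.nbytes == 8
assert mv[0] == 1

# A float-layout reinterpretation: cast raw bytes to 32-bit floats.
floats = memoryview(struct.pack("<4f", 1.0, 2.0, 3.0, 4.0)).cast("f")
```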
BlobArray¶
A BlobArray is a contiguous, fixed-size array stored in a blob with a specific layout. It provides efficient storage for homogeneous data with random access capabilities through indexing.
Creating a BlobArray¶
Create a BlobArray by specifying the layout and element count:
# Create an array of 100 float-3 elements (e.g., 3D positions)
>>> layout = BlobLayout('float', 3)
>>> array = BlobArray(layout, 100)
>>> len(array)
100
Accessing and Modifying Elements¶
Access elements using standard Python indexing:
# Set a position at index 0
>>> array[0] = (1.0, 2.0, 3.0)
>>> array[0]
(1.0, 2.0, 3.0)
# Modify multiple elements
>>> for i in range(10):
... array[i] = (float(i), float(i*2), float(i*3))
# Access with negative indices
>>> array[-1] # Last element
(9.0, 18.0, 27.0)
BlobArray Properties¶
# Get the blob layout
>>> array.blob_layout()
float-3
# Get element and data counts
>>> array.count() # Number of elements
100
>>> array.data_count() # Total number of primitive values (100 * 3 = 300)
300
# Get byte information
>>> array.byte_count() # Total bytes (100 * 3 * 4 = 1200)
1200
>>> array.offset() # Offset in parent blob
0
# Get the underlying blob
>>> blob = array.blob()
>>> type(blob)
<class 'dsviper.ValueBlob'>
Buffer Protocol Support¶
BlobArray implements the Python Buffer Protocol for zero-copy interoperability:
# Create memoryview for direct memory access
>>> mv = memoryview(array)
>>> mv.nbytes
1200
# Convert to bytes
>>> data_bytes = bytes(array)
# Copy from bytes
>>> source = b'\x00\x01\x02\x03...'
>>> array.copy(source)
# NumPy integration (zero-copy)
>>> import numpy as np
>>> np_array = np.array(array, copy=False)
>>> np_array.shape
(100, 3)
>>> np_array.dtype
dtype('float32')
# Modifications through NumPy affect the original
>>> np_array[0] = [10.0, 20.0, 30.0]
>>> array[0]
(10.0, 20.0, 30.0)
Serialization¶
Serialize and deserialize a BlobArray:
# Serialize to blob
>>> blob = array.blob()
>>> len(blob)
1200
# Reconstruct from blob
>>> restored_array = BlobArray.from_blob(blob)
>>> restored_array[0]
(10.0, 20.0, 30.0)
BlobPack¶
A BlobPack organizes multiple named regions with different layouts into a single contiguous blob. This is ideal for complex data structures like 3D meshes with vertices, normals, UVs, and indices.
Creating a BlobPack¶
Use a BlobPackDescriptor to define the structure:
# Define the structure
>>> descriptor = BlobPackDescriptor()
>>> descriptor.add_region('vertices', BlobLayout('float', 3), 100) # 100 vec3
>>> descriptor.add_region('normals', BlobLayout('float', 3), 100) # 100 vec3
>>> descriptor.add_region('uvs', BlobLayout('float', 2), 100) # 100 vec2
>>> descriptor.add_region('indices', BlobLayout('uint', 1), 300) # 300 uint
# Create the pack
>>> pack = BlobPack(descriptor)
>>> len(pack)
4
Region Name Constraints¶
Region names have specific constraints:
# Maximum name length is 31 characters
>>> descriptor.add_region('a' * 31, layout, 10) # OK
>>> descriptor.add_region('a' * 32, layout, 10) # Exception: name too long
# Names must be unique (case-sensitive)
>>> descriptor.add_region('vertices', layout, 10) # OK
>>> descriptor.add_region('Vertices', layout, 10) # OK (different case)
>>> descriptor.add_region('vertices', layout, 20) # Exception: duplicate name
Accessing Regions¶
Access regions like a Python dictionary:
# Check if region exists
>>> 'vertices' in pack
True
>>> 'colors' in pack
False
# Get a region (raises ValueError if not found)
>>> vertices = pack['vertices']
>>> type(vertices)
<class 'dsviper.BlobPackRegion'>
# Query safely (returns None if not found)
>>> colors = pack.query('colors')
>>> colors is None
True
# Check (raises exception if not found)
>>> vertices = pack.check('vertices') # OK
>>> colors = pack.check('colors') # Exception: no such region
Iterating Over Regions¶
# Iterate over region names
>>> for name in pack:
... print(name)
vertices
normals
uvs
indices
# Get all region names
>>> list(pack)
['vertices', 'normals', 'uvs', 'indices']
# Get all region objects
>>> regions = pack.regions()
>>> len(regions)
4
Working with Regions¶
Each BlobPackRegion provides array-like access:
# Get region properties
>>> vertices = pack['vertices']
>>> vertices.name()
'vertices'
>>> vertices.count() # Number of elements
100
>>> vertices.data_count() # Total primitive values (100 * 3)
300
>>> vertices.byte_count() # Total bytes (100 * 3 * 4)
1200
>>> vertices.offset() # Offset in pack blob
80
>>> vertices.blob_layout()
float-3
# Access and modify elements
>>> vertices[0] = (1.0, 0.0, 0.0)
>>> vertices[0]
(1.0, 0.0, 0.0)
# Iterate over elements
>>> for i in range(3):
... vertices[i] = (float(i), 0.0, 0.0)
# Length
>>> len(vertices)
100
Buffer Protocol Support¶
Regions also support the Buffer Protocol:
# Create memoryview
>>> mv = memoryview(vertices)
>>> mv.nbytes
1200
# NumPy integration
>>> import numpy as np
>>> positions = np.array(vertices, copy=False)
>>> positions.shape
(100, 3)
# Modify through NumPy
>>> positions[0] = [10.0, 20.0, 30.0]
>>> vertices[0]
(10.0, 20.0, 30.0)
# Copy data in/out
>>> vertices.copy(source_bytes)
Memory Layout¶
BlobPack uses a specific memory layout:
[Header: 16 + (64 × num_regions) bytes][Region1][Padding?][Region2][Padding?]...
All regions share the same underlying blob:
# Get the pack's blob
>>> pack_blob = pack.blob()
>>> vertices_blob = pack['vertices'].blob()
>>> len(pack_blob) == len(vertices_blob) # Same underlying blob
True
# Regions are contiguous with padding for alignment
>>> vertices_end = vertices.offset() + vertices.byte_count()
>>> normals_start = pack['normals'].offset()
>>> normals_start >= vertices_end # No overlap
True
Serialization¶
Serialize and reconstruct a BlobPack:
# Populate the pack
>>> for i in range(100):
... pack['vertices'][i] = (float(i), 0.0, 0.0)
... pack['normals'][i] = (0.0, 1.0, 0.0)
... pack['uvs'][i] = (float(i)/100, 0.5)
# Serialize to blob
>>> blob = pack.blob()
# Reconstruct from blob
>>> pack2 = BlobPack.from_blob(blob)
>>> pack2['vertices'][0]
(0.0, 0.0, 0.0)
>>> len(pack2)
4
Validation and Error Handling¶
BlobPack validates blobs on reconstruction:
# Invalid blob (too small)
>>> tiny_blob = ValueBlob(b'\x00\x01')
>>> BlobPack.from_blob(tiny_blob) # Exception: blob too small
# Invalid magic
>>> wrong_blob = ValueBlob(b'XXXX' + b'\x00' * 12)
>>> BlobPack.from_blob(wrong_blob) # Exception: invalid magic
# Valid blob structure is preserved
>>> blob = pack.blob()
>>> pack_restored = BlobPack.from_blob(blob)
>>> pack_restored['vertices'].offset() == pack['vertices'].offset()
True
Example: 3D Mesh Storage¶
Complete example of storing mesh data:
# Define mesh structure
>>> descriptor = BlobPackDescriptor()
>>> descriptor.add_region('positions', BlobLayout('float', 3), 4)
>>> descriptor.add_region('normals', BlobLayout('float', 3), 4)
>>> descriptor.add_region('uvs', BlobLayout('float', 2), 4)
>>> descriptor.add_region('indices', BlobLayout('uint', 3), 2)
>>> mesh = BlobPack(descriptor)
# Fill positions (quad vertices)
>>> mesh['positions'][0] = (-1.0, -1.0, 0.0)
>>> mesh['positions'][1] = (1.0, -1.0, 0.0)
>>> mesh['positions'][2] = (1.0, 1.0, 0.0)
>>> mesh['positions'][3] = (-1.0, 1.0, 0.0)
# Fill normals (all pointing up)
>>> for i in range(4):
... mesh['normals'][i] = (0.0, 0.0, 1.0)
# Fill UVs
>>> mesh['uvs'][0] = (0.0, 0.0)
>>> mesh['uvs'][1] = (1.0, 0.0)
>>> mesh['uvs'][2] = (1.0, 1.0)
>>> mesh['uvs'][3] = (0.0, 1.0)
# Fill indices (two triangles)
>>> mesh['indices'][0] = (0, 1, 2)
>>> mesh['indices'][1] = (0, 2, 3)
# Serialize for storage/transmission
>>> mesh_blob = mesh.blob()
>>> mesh_blob_id = ValueBlobId(BlobLayout(), mesh_blob)
Database APIs¶
Database and CommitDatabase expose two APIs to register and retrieve blobs.
The first API is used for simple blobs (< 2GB).
We specify the layout and register a small blob.
>>> db = Database.create_in_memory()
>>> db.begin_transaction()
>>> blob_id = db.create_blob(BlobLayout(), ValueBlob(bytes([1,2,3,4])))
>>> db.commit()
We retrieve the info for the blob.
>>> info = db.blob_info(blob_id)
>>> info.blob_layout(), info.size()
('uchar-1', 4)
We retrieve the blob.
>>> db.blob(blob_id).encoded()
b'\x01\x02\x03\x04'
The second API is used to incrementally fill a blob with a BlobStream. The size of the
blob (a chunked blob) is only limited by the maximum size of the database.
We need to specify the BlobLayout and the size of the blob to allocate the chunks
in the database.
To illustrate the blob stream API, we use a "very huge" blob of 4 bytes. Appending more than 2GB is possible but is not recommended.
>>> db = Database.create_in_memory()
>>> db.begin_transaction()
>>> bs = db.blob_stream_create(BlobLayout(), 4)
>>> db.blob_stream_append(bs, ValueBlob(bytes([1,2])))
>>> db.blob_stream_append(bs, ValueBlob(bytes([3,4])))
>>> blob_id = db.blob_stream_close(bs)
>>> db.commit()
We incrementally retrieve the blob content with the method read_blob(blob_id, size, offset).
# Incrementally retrieve the blob by region
>>> db.read_blob(blob_id, 2, 0).encoded()
b'\x01\x02'
>>> db.read_blob(blob_id, 2, 2).encoded()
b'\x03\x04'
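The read_blob(blob_id, size, offset) calls above generalize to streaming a large blob in fixed-size chunks. A pure-Python sketch of that loop, with a bytes slice standing in for the database call:

```python
def read_chunk(data: bytes, size: int, offset: int) -> bytes:
    """Stand-in for db.read_blob(blob_id, size, offset)."""
    return data[offset:offset + size]

def iter_chunks(data: bytes, total_size: int, chunk_size: int):
    """Yield the blob content chunk by chunk, mirroring the incremental API."""
    offset = 0
    while offset < total_size:
        size = min(chunk_size, total_size - offset)
        yield read_chunk(data, size, offset)
        offset += size

blob = bytes(range(10))
chunks = list(iter_chunks(blob, len(blob), 4))
# Joining the chunks reconstructs the original blob.
```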
The database requires that a blob_id referenced by a document exists.
It's time to use everything we've learned so far about Viper.
# Minimal Definitions to illustrate the blob_id constraint
>>> defs = Definitions()
>>> ns = NameSpace(ValueUUId("f529bc42-0618-4f54-a3fb-d55f95c5ad03"), "Tuto")
>>> t_C = defs.create_concept(ns, "C")
>>> t_A = defs.create_attachment(ns, "usage", t_C, Type.BLOB_ID)
# Create a blob and get the blob_id
>>> db = Database.create_in_memory()
>>> db.extend_definitions(defs.const())
>>> db.begin_transaction()
>>> blob_id = db.create_blob(BlobLayout(), ValueBlob(bytes([1,2,3,4])))
# Create a document that uses the blob_id
>>> db.set(t_A, t_A.create_key(), blob_id)
>>> db.commit()
And now we violate the constraint by referencing an unknown blob_id ...
>>> db.begin_transaction()
>>> db.set(t_A, t_A.create_key(), ValueBlobId(BlobLayout(), ValueBlob()))
Traceback (most recent call last):
File "<python-input-12>", line 1, in <module>
db.set(t_A, t_A.create_key(), ValueBlobId(BlobLayout(), ValueBlob()))
~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
dsviper.ViperError: [pid(50230)@mac.home]:Viper.DatabaseSQLite:DatabaseErrors:3:Missing blob 031ee8071545c35ffdc0aa20da07ce2e36ce543f.
Stream Codec¶
The binary codecs available in Viper are StreamBinary and StreamTokenBinary.
Those codecs are used to store or transmit all sorts of data. Codecs are created from
their public name, StreamTokenBinary or StreamBinary, or are available through
static properties.
The codec StreamTokenBinary adds a type token before the value to detect a stream desynchronization as soon as possible.
We create a codec StreamTokenBinary.
>>> c = Codec.check('StreamTokenBinary')
or use the static property
>>> c = Codec.STREAM_TOKEN_BINARY
We create an encoder and use the write* family of methods to construct the stream of
binary data. end_encoding() returns the encoded stream as a ValueBlob and ends the encoder.
>>> e = c.create_encoder()
>>> e.write_int64(42)
>>> b = e.end_encoding()
>>> b
blob(9)
We can't append data to the stream after end_encoding().
>>> e.write_float(42)
Traceback (most recent call last):
File "<python-input-52>", line 1, in <module>
e.write_float(42)
~~~~~~~~~~~~~^^^^
dsviper.ViperError: [pid(10916)@mac.home]:Viper.StreamTokenBinaryEncoder:StreamErrors:1:Stream is ended [writeFloat].
We create a decoder with a ValueBlob and consume the binary stream.
>>> d = c.create_decoder(b)
>>> d.read_int64()
42
>>> d.has_more()
False
>>> d.read_int64()
Traceback (most recent call last):
File "<python-input-57>", line 1, in <module>
d.read_int64()
~~~~~~~~~~~~^^
dsviper.ViperError: [pid(10916)@mac.home]:Viper.StreamBinaryDecoder:StreamErrors:2:Prematurely reached the end of the stream [readInt64].
We can rewind the stream and try to read a bool.
>>> d.rewind()
>>> d.offset()
0
>>> d.read_bool()
Traceback (most recent call last):
File "<python-input-63>", line 1, in <module>
d.read_bool()
~~~~~~~~~~~^^
dsviper.ViperError: [pid(10916)@mac.home]:Viper.StreamTokenBinaryDecoder:StreamErrors:4:Expected token 'bool', got 'int64'.
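The token mechanism behind the 'Expected token' error above can be sketched in pure Python: write a one-byte type tag before each value and verify it on read. The actual token encoding used by StreamTokenBinary is an internal detail; the tags below are made up for illustration:

```python
import struct

TOKENS = {"int64": b"\x01", "float": b"\x02", "bool": b"\x03"}  # hypothetical tags

def write_int64(buf: bytearray, v: int) -> None:
    # 1 token byte + 8 value bytes = 9 bytes, matching blob(9) above
    buf += TOKENS["int64"] + struct.pack("<q", v)

def read_bool(buf: bytes, offset: int) -> tuple:
    tag = buf[offset:offset + 1]
    if tag != TOKENS["bool"]:
        got = next(k for k, t in TOKENS.items() if t == tag)
        raise ValueError(f"Expected token 'bool', got '{got}'")
    return bool(buf[offset + 1]), offset + 2

buf = bytearray()
write_int64(buf, 42)
try:
    read_bool(bytes(buf), 0)  # the type mismatch is caught immediately
except ValueError as e:
    err = str(e)
```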
Hasher¶
Viper hashers are compatible with the Python hashlib library API.
Available hashers are:
- HashMD5
- HashSHA1
- HashSHA256
- HashSHA3('224'|'256'|'384'|'512')
- HashCRC32
>>> b = ValueBlob(bytes([1,2,3,4]))
>>> HashMD5.hash(b)
'08d6c05a21512a79a1dfeb9d2a8f262f'
>>> h = HashMD5()
>>> h.update(b)
>>> h.hexdigest()
'08d6c05a21512a79a1dfeb9d2a8f262f'
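Since the hashers follow the hashlib API, the result can be cross-checked against Python's own hashlib; the digest of the raw bytes matches the HashMD5 output above:

```python
import hashlib

data = bytes([1, 2, 3, 4])

# Same update()/hexdigest() flow as the HashMD5 example above.
h = hashlib.md5()
h.update(data)
digest = h.hexdigest()
```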