VPP 0.8
A high-level modern C++ API for Vulkan
After constructing a rendering engine from building blocks like render graphs and pipelines, we need to think of providing some actual data to it for processing. There is also a question concerning how to retrieve the results - or redirect them to the screen to display.
There are two aspects of the problem to consider: how to supply input data to the rendering engine, and how to read the results back (or present them on screen). We will answer these questions in this chapter.
Buffers generally store one-dimensional, non-image data, which can however be structured in complex ways.
First we will start with simple, unstructured buffers. The most basic, elementary buffer class in VPP is vpp::Buf. It is not recommended to create these directly; however, vpp::Buf is the base class for the slightly more specific vpp::Buffer, and many functions and methods in VPP accept just vpp::Buf.
The difference between the two is that vpp::Buffer is not a plain class but rather a template, carrying the information about the buffer usage inside its type. Using vpp::Buffer allows for more type safety and error checking. vpp::Buf takes the usage as a runtime parameter, so all constructed vpp::Buf objects have identical type and it is possible to supply the wrong buffer in the wrong place.
Both types take the size (in bytes) and the target vpp::Device. In both cases the actual memory allocation is deferred: buffers do not allocate memory when constructed, but rather at the moment of memory binding, so the process has two steps.
Buffer classes are reference counted, and you may pass these objects around by value. For input arguments, a const reference is recommended for higher performance (reference counters in VPP are atomic, which adds a small overhead when passing by value).
The usage is determined by a bitwise OR of usage flags, which are taken from an enumeration inside the vpp::Buf class (in fact these constants are equal to their core Vulkan counterparts). See the docs for vpp::Buf::EUsageFlags and vpp::Buffer for a description.
You define a buffer usage by providing the bitwise OR result to the vpp::Buffer template as an argument. It is convenient to make a typedef for every buffer kind you use in your program.
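For example, such typedefs might look as follows. This is a sketch: only vpp::Buf::VERTEX appears verbatim in this chapter; the other flag names are assumed to mirror their core Vulkan counterparts in vpp::Buf::EUsageFlags.

```cpp
// Hypothetical convenience typedefs; flag names follow vpp::Buf::EUsageFlags.
typedef vpp::Buffer< vpp::Buf::VERTEX > VertexBuffer;
typedef vpp::Buffer< vpp::Buf::INDEX > IndexBuffer;
typedef vpp::Buffer< vpp::Buf::UNIFORM > UniformBuffer;

// Usage flags may be combined with bitwise OR, e.g. a vertex buffer
// that is also a transfer target:
typedef vpp::Buffer< vpp::Buf::VERTEX | vpp::Buf::TARGET > DeviceVertexBuffer;
```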
Since buffers do not allocate memory immediately when they are created, a separate allocation step is required. The function is named vpp::Buf::bindMemory(). It is actually a function template, taking a type parameter which can be one of two classes: vpp::DeviceMemory or vpp::MappableDeviceMemory.
The difference between these two is that vpp::MappableDeviceMemory can be mapped into the address space of the CPU, so its contents may be accessed from the CPU program. vpp::DeviceMemory does not have that capability, but in exchange it may be allocated inside GPU-only memory, which guarantees much better performance. In practice, use vpp::MappableDeviceMemory only if you really need to access the memory directly.
An example of buffer usage:
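A minimal sketch of the two-step construction, assuming hDevice is a valid vpp::Device; the bindMemory call follows the template interface described above:

```cpp
// Step 1: construct the buffer - no memory is allocated yet.
typedef vpp::Buffer< vpp::Buf::VERTEX > VertexBuffer;
VertexBuffer vertexBuffer ( 65536, hDevice );   // size in bytes, target device

// Step 2: bind memory. vpp::DeviceMemory selects GPU-only memory;
// vpp::MappableDeviceMemory would allow CPU-side mapping instead.
vertexBuffer.bindMemory< vpp::DeviceMemory > ( vpp::MemProfile::DEVICE_STATIC );
```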
An important question is what the vpp::MemProfile constants are. They specify what kind of memory we are requesting. See the docs for vpp::MemProfile for a description of these constants. In most scenarios either DEVICE_STATIC or HOST_STATIC will be appropriate.
Another way to bind memory to a buffer is to use the free functions vpp::bindDeviceMemory() and vpp::bindMappableMemory(). They take a buffer reference and a vpp::MemProfile constant, and return an object of type vpp::MemoryBinding, which holds references to both the buffer and the memory object bound to it. It is sometimes convenient to have a compound object like this. Also, some VPP functions require vpp::MemoryBinding objects as parameters, to ensure that the buffers they operate on are bound to memory.
An example:
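A sketch, with hDevice assumed to be a valid vpp::Device and FrameParams a user-defined struct:

```cpp
typedef vpp::Buffer< vpp::Buf::UNIFORM > UniformBuffer;
UniformBuffer buf ( sizeof ( FrameParams ), hDevice );

// Bind GPU-only memory; the result bundles the buffer and its memory
// (a vpp::MemoryBinding, possibly a template instance - hence auto).
auto binding = vpp::bindDeviceMemory ( buf, vpp::MemProfile::DEVICE_STATIC );
```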
While buffers are a low-level concept, basically offering ranges of unformatted memory, VPP also offers a much more advanced mechanism: vectors.
The vpp::gvector template is basically an STL-like vector operating on top of a buffer. The vector internally manages the buffer, as well as memory allocation and binding. From the user's perspective it behaves much like std::vector. The only limitation is that the maximum size must be specified in advance.
Vectors are quite versatile and can be used in all places where either a buffer (a vpp::Buf subclass) or a memory binding (vpp::MemoryBinding) is needed.
An example of using vpp::gvector to define vertices:
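A sketch, reusing the CVertexAttr structure defined later in this chapter (the constructor signature with capacity, memory profile and device is an assumption):

```cpp
// A vector of vertices living in a VERTEX-usage buffer.
typedef vpp::gvector< CVertexAttr, vpp::Buf::VERTEX > VertexAttrBuffer;

VertexAttrBuffer vertices ( 3, vpp::MemProfile::DEVICE_STATIC, hDevice );

CVertexAttr v;
v.m_pos = glm::vec4 ( 0.0f, 0.5f, 0.0f, 1.0f );
vertices.push_back ( v );   // STL-like interface
```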
You can find more information in vpp::gvector documentation.
Vectors can be used for any buffers, not just vertices. An example of defining a vector of frame parameters (matrices, etc.) as uniform buffer:
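A sketch along the same lines; CFrameParams is the CPU-side uniform structure defined later in this chapter, and the constructor signature is again an assumption:

```cpp
// One element is enough for per-frame parameters.
typedef vpp::gvector< CFrameParams, vpp::Buf::UNIFORM > FrameParamsBuffer;

FrameParamsBuffer frameParams ( 1, vpp::MemProfile::HOST_STATIC, hDevice );

CFrameParams fp;
fp.m_projection = glm::mat4 ( 1.0f );
frameParams.push_back ( fp );
```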
Buffer views are intermediate objects which participate in binding buffers to binding points in pipelines. The task of a buffer view is to define the slice of the buffer which will be visible from the shader.
Most buffer views are simple, throw-away objects which can be constructed temporarily, just to bind the actual buffer. The only exception is the vpp::TexelBufferView template, which defines a view at the core Vulkan level; this view must exist as long as the pipeline is using it.
There are several buffer view classes, intended to be used with different kinds of buffers:
All buffer views take a vpp::MemoryBinding object in constructors. That means you can use either a regular buffer bound to memory, or vpp::gvector.
Binding points are objects declared within your custom pipeline as class fields, whose purpose is to make external data accessible from shaders. The shaders are written in C++ but executed on the GPU. Binding points are a hybrid between CPU-level and GPU-level objects, acting as go-betweens for the two worlds.
On the GPU level, binding points are accessed from shaders just as regular class fields.
On the CPU level, binding points are bound to data buffers with the help of view objects and shader data blocks.
Binding points exist both for buffers and images. Here is the list of binding points for buffers:
See the documentation for respective classes for more information.
Any of them should be placed in your class derived from vpp::PipelineConfig or vpp::ComputePipelineConfig, as a field (usually private).
Note that we did not list the binding point for vertices (vpp::inVertexData) here, although it is also a binding point working with buffers. This is because vertex buffers are treated differently by Vulkan. For example, they are not part of the shader data blocks described below. Vertex binding points will be covered in a separate section.
A buffer (or image) binding is an association between some binding point and some buffer view. A shader data block is a container for such associations. It remembers the bindings for a number of binding points.
A shader data block must be selected for subsequent drawing operations in a rendering or computation command sequence. That means all affected binding points are bound at once. You can later switch to another shader data block, which has some (or all) binding points associated with different data. Switching shader data blocks is very fast.
Currently VPP offers shader data blocks which act on all binding points defined in a particular pipeline configuration (except vertex data). Therefore vpp::ShaderDataBlock takes a vpp::PipelineLayout as its constructor parameter.
In order to bind a buffer or some buffers to binding points in particular pipeline, first you need to have a vpp::ShaderDataBlock in which the binding will be stored. Create one or more shader data blocks right next to your vpp::PipelineLayout object. Also define a method in your vpp::PipelineConfig (or vpp::ComputePipelineConfig) subclass that takes a pointer to vpp::ShaderDataBlock as well as set of buffer and image views to bind.
In that method, call vpp::ShaderDataBlock::update() on the supplied data block. This method has a somewhat interesting interface, as it takes an assignment list, allowing multiple buffers and images to be bound at once.
An assignment list is, just as the name says, a list of assignments separated by commas. The entire list should be enclosed in an extra pair of parentheses, to avoid confusing the C++ parser (so that it does not look for an overload with multiple arguments). Each assignment on the list has the form: binding point = buffer view object.
The binding point is a class field (e.g. of type vpp::inUniformBuffer or another binding point type) and the buffer view object is a reference to one of the buffer view classes mentioned in the earlier subsection (e.g. a vpp::VertexBufferView).
Because we are inside a method defined in a vpp::PipelineConfig subclass, you access binding points directly as fields. The views must be provided as arguments.
An example:
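A sketch of such a method inside a vpp::PipelineConfig subclass; the field names (m_framePar, m_colorMap) and view types are illustrative:

```cpp
void setDataBuffers (
    const vpp::UniformBufferView& framePar,
    const TextureView& colorMap,
    vpp::ShaderDataBlock* pDataBlock )
{
    // One update call may bind several binding points at once.
    // Note the extra parentheses around the assignment list.
    pDataBlock->update ( (
        m_framePar = framePar,
        m_colorMap = colorMap
    ) );
}
```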
This example shows binding of both a buffer and an image. The rules are exactly the same for both types of resources.
A different question is when to call the setDataBuffers method. It is up to you, but it must be in a place where the vpp::ShaderDataBlock object as well as all buffer and image objects are already constructed. On the other hand, it must be done before the shader data block is actually used.
A shader data block is used when it is being bound to a pipeline inside command sequence.
As we have already mentioned, vertex (or more generally, geometry) data is treated in a special way by Vulkan. This kind of data is stored in buffers, but it is not bound via shader data blocks.
The binding point class for vertex data is vpp::inVertexData. It is a template which takes a somewhat unusual parameter: a structure definition. This is because you must define the format of your vertex data, so that Vulkan knows how to parse it. Vertex data is the only kind of user-defined data which is actually processed by fixed-function hardware on the GPU, therefore the format must be determined before the data is accessed. See the section below for instructions on how to define a vertex data structure and how to distinguish it from an instance data structure.
To bind a vertex buffer (or several of them) to vpp::inVertexData binding points, it is best to start by writing a helper method, just as for regular buffers in the previous section. This time it does not take any vpp::ShaderDataBlock, just the buffer references.
Instead of updating a shader data block, it calls a command vpp::PipelineConfig::cmdBindVertexInput(). This will bind the vertex buffer to the pipeline and make it the vertex source for subsequent draw commands.
Calling the command directly means that the bindVertices method itself must also be called directly from within the vpp::Process command sequence (that is, from within the lambda function). This is the same place where you issue draw commands, switch pipelines and shader data blocks. Vertex buffers are another element that can be switched multiple times. You can have e.g. a separate buffer for each mesh.
The vpp::PipelineConfig::cmdBindVertexInput() command accepts the same kind of assignment list as vpp::ShaderDataBlock::update(). You should assign only vpp::inVertexData binding points here, otherwise you will get a nasty C++ compiler error.
An example:
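A sketch of such a helper, assuming m_vertices is a vpp::inVertexData field of the pipeline configuration class:

```cpp
void bindVertices ( const VertexAttrBuffer& vertices )
{
    // Must be called from within the vpp::Process command sequence.
    cmdBindVertexInput ( ( m_vertices = vertices ) );
}
```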
You can also define more than one vpp::inVertexData binding point and have several vertex buffers. What needs to be well understood here is that all these buffers (imagine they are data tables organized in columns) define the same set of vertices. One buffer can hold positions, another one normals, yet another one UV coordinates – it is up to you to decide.
Generally it may be better for performance to pack all basic attributes into a single buffer (with a structure). However, sometimes certain data is optional. For example, anything other than positions is unused inside D-passes (depth-only rendering), so it might be better for performance to create one buffer with positions (used in the D-pass and the color pass) and another one with the rest of the attributes (for the color pass only).
VPP offers full flexibility here. It is up to you to experiment, benchmark and decide which layout is optimal in your scenario.
Index buffers are another kind of buffer that is not treated like regular data buffers. They do not even have an explicit binding point. Otherwise they are similar to vertex buffers and always go along with them (indices define the ordering of vertices and group them into primitives, e.g. triangles).
When you need to bind an index buffer, do it in the same bindVertices
method as the vertex buffer:
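A sketch; the name cmdBindIndexInput is an assumption, chosen by analogy to cmdBindVertexInput (check the vpp::PipelineConfig reference for the actual command):

```cpp
void bindVertices ( const VertexAttrBuffer& vertices, const vpp::Indices& indices )
{
    cmdBindVertexInput ( ( m_vertices = vertices ) );
    cmdBindIndexInput ( indices );   // hypothetical command name
}
```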
Index buffers contain packed 16-bit or 32-bit integer values. These indices point to individual vertices in the vertex buffer - and in fact, in all defined vertex buffers in parallel (as they define the same set of vertices). Because of that, only a single index buffer is needed, without any customizable structure. That is why there is no explicit binding point for index buffers - it would carry no useful information.
To create an index buffer, you can use a convenience class called vpp::Indices. This is a typedef for vpp::gvector, configured to contain 32-bit index values.
Although buffers transmit opaque data blobs, vertex data is obviously structured. Each vertex has a number of attributes. Typically one of them is the position in 3D or 4D homogeneous space. Other attributes might include a normal vector, a tangent vector, UV coordinates, etc. This closely resembles a C++ structure, and in fact a vertex buffer is an array of structures.
VPP offers a simple way to define vertex data structures, with syntax very similar to regular C++ structures and with all their benefits. Due to the very specific usage of these structures (they are accessed both on the CPU and the GPU), a plain C++ struct or class would not suffice without special support from the compiler, which VPP completely avoids. Therefore the structure definition is a template, and its fields are instantiations of templates.
A simple vertex structure definition looks like this:
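A sketch of such a definition, matching the CVertexAttr / GVertexAttr names used below; the vpp::Attribute field template and the TAG parameter convention are assumptions based on the description in this chapter:

```cpp
template< vpp::ETag TAG >
struct TVertexAttr : public vpp::VertexStruct< TAG, TVertexAttr >
{
    // Each field is a template instantiation parameterized by a vpp::format.
    vpp::Attribute< TAG, vpp::format< float, float, float, float > > m_pos;
    vpp::Attribute< TAG, vpp::format< float, float, float, float > > m_normal;
    vpp::Attribute< TAG, vpp::format< float, float > > m_uv;
};
```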
As you can see, the definition is very simple and is actually a regular C++ type definition, although templatized.
Note that you can also use inheritance if you wish, and/or define any methods with either CPU or GPU scope. Do not define a constructor or destructor, though.
What can we do with such a definition? First, it is always recommended to make at least the first two of the three typedefs shown below (preferably all three):
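A sketch of the three typedefs; the vpp::CPU and vpp::GPU tag names are assumptions:

```cpp
typedef TVertexAttr< vpp::CPU > CVertexAttr;    // CPU-side data structure
typedef TVertexAttr< vpp::GPU > GVertexAttr;    // GPU-side (shader) accessor
typedef vpp::gvector< CVertexAttr, vpp::Buf::VERTEX > VertexAttrBuffer;
```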
Now you have 3 types and can do some nice things with them:
- Use CVertexAttr as a regular C++ data structure with a layout compatible with the GPU side. For example, the m_pos field will be visible in the CVertexAttr structure and you will be able to read or assign a glm::vec4 to it. For the vpp::format-like fields, these fields will be equivalents of the vpp::format< ... >::data_type types.
- Use GVertexAttr as the way to access these fields on the GPU side. Accessing fields on the GPU is done by means of the indexing operator ([]), because the dot operator is not overloadable. To access e.g. the m_pos field, you write [ & GVertexAttr::m_pos ] in some place of your shader.
- Use VertexAttrBuffer to pass around vertex buffers and send them to the GPU. This is just an STL-styled vector holding vertices, which you can bind to the GPU pipeline with a single method call. Cool!

Instanced drawing is a method to efficiently render a large number of similar objects sharing the same mesh. Basically it is just drawing the same mesh over and over, with different parameters (e.g. a varying model matrix). You specify that your mesh has e.g. 2000 vertices and you want to draw it 100 times. The GPU will then repeat the drawing 100 times, each time drawing the same mesh of 2000 vertices but picking a new model matrix (which is used in the vertex shader and affects the transformation of the mesh).
Parameters for instances come from buffers quite similar to vertex buffers. They are called instance buffers; the difference is that a new item from an instance buffer is picked whenever the next instance is started.
Although instance buffers are implemented in almost the same way as vertex buffers in core Vulkan, VPP gives them recognition at the structure definition level. This is partly because of some complex internals, but also design logic: instance buffers conceptually hold very different data than vertex buffers and there is no sense in mixing them together.
An example:
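A sketch of an instance structure, following the same conventions as the vertex structure earlier (the field template and tag names are assumptions):

```cpp
template< vpp::ETag TAG >
struct TInstancePar : public vpp::InstanceStruct< TAG, TInstancePar >
{
    // Model matrix stored as four row vectors.
    vpp::Attribute< TAG, vpp::format< float, float, float, float > > m_model0;
    vpp::Attribute< TAG, vpp::format< float, float, float, float > > m_model1;
    vpp::Attribute< TAG, vpp::format< float, float, float, float > > m_model2;
    vpp::Attribute< TAG, vpp::format< float, float, float, float > > m_model3;
};

typedef TInstancePar< vpp::CPU > CInstancePar;
typedef vpp::gvector< CInstancePar, vpp::Buf::VERTEX > InstanceParBuffer;
```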
As you can see, apart from using vpp::InstanceStruct instead of vpp::VertexStruct everything is the same. Also vpp::Buf::VERTEX for the buffer type is fine.
Uniform buffers are another type of data buffer, designed to pass data to the rendering engine (or from it, as there are both read-only and read-write variants).
First let's look at read-only uniform buffers. They are suitable for passing the highest-level data to shaders: not vertex-scoped nor instance-scoped, but frame-scoped. This is the ideal place for all the data which does not change for an entire frame, like the projection matrix or the positions of light sources.
Defining the structure for uniform buffers is quite similar to vertex and instance buffers, with some small differences. They can be seen in the following example:
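A sketch of such a definition; vpp::UniformStruct and vpp::UniformFld are assumed here to follow the same pattern as the vertex structure templates:

```cpp
template< vpp::ETag TAG >
struct TFrameParams : public vpp::UniformStruct< TAG, TFrameParams >
{
    vpp::UniformFld< TAG, glm::mat4 > m_projection;
    vpp::UniformFld< TAG, glm::mat4 > m_view;
    vpp::UniformFld< TAG, glm::vec4 > m_lightPosition;
};

typedef TFrameParams< vpp::CPU > CFrameParams;
typedef TFrameParams< vpp::GPU > GFrameParams;
```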
The definition above is applicable to read-only buffers. For buffers that are potentially read-write, there is only one thing to change:
Images store data organized into 1, 2 or 3 geometric dimensions, and additionally can have multiple array layers and/or MIP maps. They have no user-defined structure, but rather a format, which is a single piece of information (an enumeration value in core Vulkan).
The counterpart of the generic vpp::Buf class for images is vpp::Img. It has all parameters of the image specified at runtime and is a universal image type, accepted by many VPP functions. Just as with vpp::Buf, it is better to avoid creating these explicitly. Instead use the vpp::Image template. This is a type-safe image definition which is suitable for dealing with image binding points in pipelines. The vpp::Image template carries enough information to enable accessing these images from C++ shaders (via binding points).
The definition of an image type is somewhat more complex than that of a buffer. It consists of several typedef declarations and is done in stages. Look at the following simple example:
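A sketch of such a staged definition; the exact vpp::ImageAttributes parameter list (format, purpose, dimensionality, usage mask, tiling, sample count, array/MIP flags) is an assumption, so treat it as illustrative:

```cpp
// Stage 1: the pixel format.
typedef vpp::format< float, float, float, float > TextureFmt;

// Stage 2: compile-time image attributes (usage: sampled + transfer target).
typedef vpp::ImageAttributes<
    TextureFmt, vpp::RENDER, vpp::IMG_TYPE_2D,
    vpp::Img::SAMPLED | vpp::Img::TARGET,
    VK_IMAGE_TILING_OPTIMAL, VK_SAMPLE_COUNT_1_BIT,
    false, false > TextureAttr;

// Stage 3: the image type and its view type.
typedef vpp::Image< TextureAttr > TextureImage;
typedef vpp::ImageViewAttributes< TextureImage > TextureViewAttr;
typedef vpp::ImageView< TextureViewAttr > TextureView;
```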
These typedefs define several data types. Two of them - TextureImage and TextureView - you will be using regularly to handle images and views of this type. For example, to create an image on the device, and a view for it:
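A sketch, assuming the constructor takes an extent, a memory profile and the device (the exact signature may differ):

```cpp
TextureImage textureImage (
    VkExtent3D { 1024, 1024, 1 },
    vpp::MemProfile::DEVICE_STATIC,
    hDevice );

TextureView textureView ( textureImage );
```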
As you can see, once all these type definitions are in place, creating actual images becomes trivial.
Another important thing is that image binding points are templates that require a view class to instantiate, like this:
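For example (the pipeline class and field names are illustrative):

```cpp
class MyPipeline : public vpp::PipelineConfig
{
    // ... shaders, other binding points ...

private:
    // The binding point is parameterized by the view type.
    vpp::inSampledTexture< TextureView > m_colorMap;
};
```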
This is the same TextureView as above. You can now bind a TextureView object to this binding point.
Sometimes it is unsuitable to specify the format as a template argument, because the format varies or is not known at compile time. This is a common case for textures loaded from external sources, which can have various compressed formats determined only when the texture file is examined.
VPP supports this scenario by introducing a special vpp::texture_format syntax. It is used as in the example below:
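A sketch, substituting vpp::texture_format for the concrete vpp::format in the attribute definition (the remaining vpp::ImageAttributes parameters are assumptions, as before):

```cpp
typedef vpp::ImageAttributes<
    vpp::texture_format, vpp::RENDER, vpp::IMG_TYPE_2D,
    vpp::Img::SAMPLED | vpp::Img::TARGET,
    VK_IMAGE_TILING_OPTIMAL, VK_SAMPLE_COUNT_1_BIT,
    false, false > VarTextureAttr;

typedef vpp::Image< VarTextureAttr > VarTextureImage;
typedef vpp::ImageView< vpp::ImageViewAttributes< VarTextureImage > > VarTextureView;
```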
The format defined in this way has some limitations. It is guaranteed to work only for read-only sampled textures. In other cases, support depends on actual rendering device capabilities (so assume there is none).
The format must ultimately be specified somewhere, and the proper place is an alternative version of the vpp::Image constructor:
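A sketch: the actual VkFormat (e.g. read from a compressed texture file header) is passed at construction time. The parameter order is an assumption:

```cpp
VkFormat fileFormat = VK_FORMAT_BC3_UNORM_BLOCK;   // determined at run time

VarTextureImage textureImage (
    fileFormat,
    VkExtent3D { 1024, 1024, 1 },
    vpp::MemProfile::DEVICE_STATIC,
    hDevice );
```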
vpp::ImageAttributes requires several parameters determining the type and possible usage of the image. The most important ones are the format and the usage mask.
Format must be a vpp::format instance. Please see the docs for vpp::format for more information about formats.
The image usage flags are the fourth parameter on the list. This is a very important parameter for core Vulkan, specifying how the image will be used. It should be a bitwise OR of flags defined in the vpp::Img class. See the docs for vpp::Img::EUsageFlags for a description of these flags.
Most frequently used values are:
If you are copying the data from or to an image, also include vpp::Img::SOURCE and/or vpp::Img::TARGET flags.
In case of intermediate nodes in render graphs, combine vpp::Img::INPUT with vpp::Img::COLOR (or vpp::Img::DEPTH).
Other flag combinations might result in suboptimal performance, or be unsupported on particular hardware. Best to avoid them unless you know what you are doing.
vpp::Img and vpp::Image constructors which have a vpp::MemProfile argument automatically allocate memory for the image according to the profile. Other constructors do not do this.
You can manually bind memory to an image by using vpp::bindDeviceMemory() and vpp::bindMappableMemory() functions, just the same as for buffers.
Retrieving data from textures is much more complex than just reading a pixel value from a bitmap at an (x,y) position. Actually, this simple scheme of data access is applicable to storage images (vpp::Img::STORAGE), but textures (vpp::Img::SAMPLED) follow a much more involved algorithm, called sampling.
Sampling is configurable by the user, which means a separate object is needed to hold the sampling parameters. This object is simply called a sampler. A sampler can be associated with a texture in one of three ways:
The first way is the fastest one but the last is the most versatile. More information on that topic is contained in the section Image binding points.
In sampling, the coordinates of the texel to retrieve are floating-point, not integer. The texture surface is "continuous" and "infinite" in some sense (the coordinates cover the entire range of the float type). There can however be several conventions regarding how the coordinates map onto the actual image surface.
First of all, VPP offers two kinds of samplers: normalized and unnormalized ones.
Normalized samplers have the primary coordinate range equal to [0, 1]. The value 0 maps to the left or top edge and 1 to the right or bottom edge. This is compatible with the UV maps that texture editing programs typically produce (UV maps associate a vertex in a mesh with some point in the texture).
Unnormalized samplers have a range starting from 0 but ending at the exact value of the image dimension (e.g. width, height, or depth in case of 3D images). This is according to the definition in section 15.1.1 of the official Vulkan specification. Sometimes this is useful – if you know the size of the texture, you have direct control over where the texels come from. Unnormalized samplers however have some limitations regarding the configuration options they support.
In VPP, normalized sampler configuration is defined by setting fields of the vpp::SNormalizedSampler structure. For unnormalized samplers, there is the vpp::SUnnormalizedSampler structure, which has generally fewer options.
These structures do not yet define the actual sampler objects. For that, use the vpp::NormalizedSampler and vpp::UnnormalizedSampler classes. They take the structures as construction parameters, as well as the device. Both classes represent a Vulkan sampler object.
Normalized samplers support the following features:
Unnormalized samplers are much more limited and they support:
Image views perform a role similar to buffer views. They are intermediate objects used when binding images to binding points. A view also allows you to define a slice of the image consisting of a subset of its array layers and MIP maps (but not a window within its pixel area). Use this e.g. to treat a selected array layer of an image as a single-layer image.
An image view is always a Vulkan object, and its lifetime must be at least as long as operations on it are being performed.
Image view classes are defined from vpp::ImageView template, like in the example below:
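For example (TextureImage as defined in the earlier image example):

```cpp
typedef vpp::ImageViewAttributes< TextureImage > TextureViewAttr;
typedef vpp::ImageView< TextureViewAttr > TextureView;
```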
The vpp::ImageViewAttributes intermediate template offers several additional parameters which are defined per view, rather than per image. All of these are optional.
The first one is the associated sampler type. There are normalized and unnormalized samplers, as explained above, and the view is configured for one of them. By default, if you do not specify this parameter, a normalized sampler is assumed. However, if you specify the vpp::SAMPLER_UNNORMALIZED constant, the view will allow unnormalized samplers (as well as normalized ones), at the cost of some limitations on the underlying image (it can only be 1D or 2D).
Example:
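A sketch of a view type configured for unnormalized samplers (the parameter position is an assumption):

```cpp
typedef vpp::ImageViewAttributes<
    TextureImage, vpp::SAMPLER_UNNORMALIZED > UnnTextureViewAttr;

typedef vpp::ImageView< UnnTextureViewAttr > UnnTextureView;
```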
The second optional parameter is called the aspect mask. This is a very rarely used parameter, taking a bitwise OR of the following values (usually just one):
- VK_IMAGE_ASPECT_COLOR_BIT
- VK_IMAGE_ASPECT_DEPTH_BIT
- VK_IMAGE_ASPECT_STENCIL_BIT
This parameter determines the kind of data accessed by this image view. Regular images contain color data. There are also depth images containing pixel Z coordinates, as well as stencil images with bitmasks of user-defined meaning. There can also be combined depth+stencil images, and this parameter is useful mostly for those: you can select which part (depth or stencil) you want to access.
In most cases simply do not specify anything for this parameter, as VPP will determine the aspect automatically (except for combined depth+stencil images).
Example:
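A sketch: selecting only the depth aspect of a combined depth+stencil image (DepthStencilImage is a hypothetical image type; the parameter positions are assumptions):

```cpp
typedef vpp::ImageViewAttributes<
    DepthStencilImage,
    vpp::SAMPLER_NORMALIZED,            // default sampler kind (assumed name)
    VK_IMAGE_ASPECT_DEPTH_BIT           // aspect mask override
> DepthViewAttr;

typedef vpp::ImageView< DepthViewAttr > DepthView;
```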
The third optional parameter is also very rarely used. It allows you to override the image format. This way you can access the image pixels as if they were in a different format than they really are, similar to the way reinterpret_cast or unions work in C++.
If the overriding format is specified, it should be a vpp::format template instance. In most cases, leave this parameter unset, and VPP will simply use the image's own format.
Image binding points are placed inside a vpp::PipelineConfig or vpp::ComputePipelineConfig derived class, just as other binding points. All of them take the image view type as the first argument (some may take more optional arguments). There are the following binding point classes:
There are also binding points for samplers. These are also templates, taking the sampler type (either vpp::NormalizedSampler or vpp::UnnormalizedSampler) as the only argument:
The binding points which have the word Const in their names are statically bound to samplers. This is faster and simpler, but the sampler parameters cannot be changed.
vpp::inSampledTexture and vpp::inSampler allow the sampler to be changed by binding: either by binding a different sampler to a shader data block, or by switching to a different shader data block. This allows a different sampler to be used for each draw command.
With vpp::inTexture, the image is combined with a sampler directly in the shader code. This allows the sampler to be changed even within a single draw call. It might be slower than the static or bound variants.
Images are bound to binding points in the pipeline in exactly the same way as buffers are. See the section Binding buffers to points for an introduction.
Images participate in the same shader data blocks as buffers; they are updated by means of the same vpp::ShaderDataBlock::update() method and can in fact be updated in the same call as buffers (by mixing buffers and images on the same assignment list). You can proceed either this way, or write a separate updating method for images (or some of them) - it is up to you.
An example of binding both a buffer and an image in a single call:
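A sketch of such a call (binding point and view names are illustrative):

```cpp
pDataBlock->update ( (
    m_framePar = framePar,       // buffer binding point = buffer view
    m_colorMap = colorMap        // image binding point = image view
) );
```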
A slightly more complicated form must be used when you bind an image and a sampler simultaneously. This can happen only for vpp::inSampledTexture binding points. You need to use the vpp::bind() function. As the first argument, give the image view as above. As the second one, specify the sampler (normalized or unnormalized). Example:
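A sketch, assuming m_colorMap is a vpp::inSampledTexture binding point (the sampler constructor argument order is an assumption):

```cpp
vpp::NormalizedSampler sampler ( hDevice, vpp::SNormalizedSampler() );

pDataBlock->update ( (
    m_colorMap = vpp::bind ( colorMapView, sampler )
) );
```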
Both forms accept one more optional parameter: the image layout. This is a Vulkan layout code specifying which layout the image is in when it is accessed in a shader. Usually it is not specified and VPP automatically detects a viable layout from the following allowed values:
If you have special needs regarding the layout, you can override it by specifying the layout as the final argument to the vpp::bind() function. VPP will pass it to the VkDescriptorImageInfo structure unchanged. This is advanced usage; do it only if you know what you are doing (it can result in validation or runtime errors).
Output attachments are images that are the results of particular process in the render graph. All cases listed below are possible:
Each output attachment should have a vpp::outAttachment binding point declared in the pipeline configuration subclass. This binding point requires passing a reference to the corresponding render graph node in the constructor. This can be either:
Output attachments binding points are not bound to actual images via shader data blocks, but rather through their corresponding nodes in render graph.
A set of such bindings for all output attachments in particular render pass (and graph) is called a framebuffer in Vulkan. It is somewhat analogous to shader data block, but it is not switchable during rendering.
One particular case is the permanent binding of images to render graph nodes. In that case you do not need to construct the framebuffer explicitly, as VPP maintains it internally. To use this simple variant, just pass image views to the constructors of vpp::Display and vpp::Attachment nodes.
In case of a display node, the role of the image view is performed by a vpp::Surface object. This way you can render directly to the screen or a window. This configuration is shown in the example below.
For vpp::Attachment node, pass an image view directly to the constructor, like in the following example:
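A sketch of a render graph with an attachment node permanently bound to an image view (the class and parameter conventions are illustrative):

```cpp
class MyRenderGraph : public vpp::RenderGraph
{
public:
    MyRenderGraph ( const TextureView& colorView ) :
        m_colorOutput ( colorView )       // permanent binding via constructor
    {}

public:
    vpp::Process m_render;
    vpp::Attachment< TextureFmt > m_colorOutput;
};
```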
As shown in both examples above, we can configure rendering without creating a Vulkan framebuffer explicitly. This is suitable for scenarios where we do not want to change rendering targets.
If the target images must not be permanently bound to render graph nodes, we can use vpp::FrameBuffer objects directly.