VPP  0.8
A high-level modern C++ API for Vulkan
Methods of supplying data

After constructing a rendering engine from building blocks like render graphs and pipelines, we need to think about supplying actual data for it to process. There is also the question of how to retrieve the results - or redirect them to the screen for display.

There are two aspects of the problem to consider:

  • What objects are available to store the data physically?
  • How to bind these objects to the rendering engine, composed of render graphs and pipelines?

We will answer these questions in this chapter.

Buffers

Buffers generally store one-dimensional, non-image data, which can nevertheless be structured in complex ways.

Basic buffer classes in VPP

First we will start from simple, non-structured buffers. The most basic, elementary buffer class in VPP is vpp::Buf. It is not recommended to create these directly; however, vpp::Buf is a base class for the slightly more specific vpp::Buffer, and many functions and methods in VPP accept just vpp::Buf.

The difference between the two is that vpp::Buffer is not a class but a template, carrying the information about the buffer usage inside its type. Using vpp::Buffer allows for more type safety and error checking. vpp::Buf takes the usage as a runtime parameter, but all constructed vpp::Buf objects have identical types, so it is possible to supply the wrong buffer in the wrong place.

Both types take the size (in bytes) and the target vpp::Device. In both cases the actual memory allocation is deferred: buffers do not allocate memory when constructed, but rather at the moment of memory binding, so the process is two-step.

Buffer classes are reference counted, and you may pass these objects around by value. For input arguments, a const reference is recommended for higher performance (reference counters in VPP are atomics, which adds some small overhead when objects are passed by value).
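
The cost difference can be illustrated with a plain C++ sketch (not VPP code): a hypothetical handle type that counts how many times it is copied, standing in for a reference-counted buffer whose every copy touches an atomic counter.

```cpp
#include <cassert>

// Hypothetical illustration (not actual VPP code): a handle type that
// counts how many times it is copied, standing in for a reference-counted
// buffer whose copies must update an atomic reference counter.
struct CountingHandle
{
    static int s_copies;
    CountingHandle() {}
    CountingHandle ( const CountingHandle& ) { ++s_copies; }
};

int CountingHandle :: s_copies = 0;

void takeByValue ( CountingHandle h ) { (void) h; }
void takeByConstRef ( const CountingHandle& h ) { (void) h; }

// Returns how many copies a single call costs with each passing style.
int copiesFor ( bool byValue )
{
    CountingHandle::s_copies = 0;
    CountingHandle h;
    if ( byValue ) takeByValue ( h ); else takeByConstRef ( h );
    return CountingHandle::s_copies;
}
```

Passing by const reference avoids the counter traffic entirely, which is why it is the recommended style for input arguments.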

Buffer types and usages

The usage is determined by a bitwise OR of usage flags, which are taken from an enumeration inside the vpp::Buf class (in fact these constants are equal to their core Vulkan counterparts). See the docs for vpp::Buf::EUsageFlags and vpp::Buffer for a description.

You define a buffer usage by providing the bitwise OR result to the vpp::Buffer template as an argument. For example:

typedef vpp::Buffer< vpp::Buf::SOURCE > StagingBuffer;

It is convenient to make a typedef for every buffer kind you are using in your program.

Memory allocation

Since buffers do not allocate memory immediately when they are created, a separate allocation function is required. The function is named vpp::Buf::bindMemory(). It is actually a function template, taking a parameter which can be one of these two classes:

  • vpp::DeviceMemory
  • vpp::MappableDeviceMemory

The difference between these two is that vpp::MappableDeviceMemory can be mapped into the address space of the CPU and its contents may be accessed from CPU program. vpp::DeviceMemory does not have that capability, but the advantage is that it may be allocated inside GPU-only memory, which guarantees much better performance. In practice, use vpp::MappableDeviceMemory only if you really need to access the memory directly.

An example of buffer usage:

typedef vpp::Buffer< vpp::Buf::VERTEX > MyVtxBuffer;

MyVtxBuffer createVertexBuffer ( unsigned int size, const vpp::Device& hDevice )
{
    using namespace vpp;

    // Create the buffer, no memory allocated yet.
    MyVtxBuffer hBuffer ( size, hDevice );

    // Allocate the memory and get the memory object.
    // This buffer will be allocated on the CPU side, because we requested
    // HOST_STATIC profile. This host-side memory is visible to GPU, but
    // slower than GPU-side memory.
    MappableDeviceMemory hMemory =
        hBuffer.bindMemory< MappableDeviceMemory > ( MemProfile::HOST_STATIC );

    // Map the memory to CPU address space.
    hMemory.map();

    // Get starting and ending addresses.
    unsigned char* pBufferBegin = hMemory.beginMapped();
    unsigned char* pBufferEnd = hMemory.endMapped();

    // fill the area between pBufferBegin, pBufferEnd
    // ... some code here

    // Ensure that writes are flushed.
    hMemory.syncToDevice();

    // Unmap the memory.
    hMemory.unmap();

    return hBuffer;
}
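
The "fill the area" step above can be sketched in plain C++, run against an ordinary byte range instead of a real mapped Vulkan allocation. Here pBufferBegin and pBufferEnd play the roles of the beginMapped()/endMapped() results; the fill pattern is arbitrary.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of the "fill the area" step from the listing above, using a plain
// byte vector as a stand-in for mapped device memory.
std::vector< unsigned char > fillVertexBytes ( unsigned int size )
{
    std::vector< unsigned char > storage ( size );

    unsigned char* pBufferBegin = storage.data();
    unsigned char* pBufferEnd = pBufferBegin + storage.size();

    // Write the vertex bytes - here just a recognizable test pattern.
    std::fill ( pBufferBegin, pBufferEnd, 0xAB );

    return storage;
}
```

With a real mapped buffer the same pointer arithmetic applies; only the source of the begin/end addresses differs.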

An important question is what the vpp::MemProfile constants are. They specify what kind of memory we are requesting. See the docs for vpp::MemProfile for a description of these constants. In most scenarios either DEVICE_STATIC or HOST_STATIC will be appropriate.

Another way to bind memory to a buffer is to use the external functions vpp::bindDeviceMemory() and vpp::bindMappableMemory(). They take a buffer reference and a vpp::MemProfile constant, and return an object of type vpp::MemoryBinding, which holds references to both the buffer and the memory object bound to it. It is sometimes convenient to have a compound object like this. Also, some VPP functions require vpp::MemoryBinding objects as parameters, to ensure that the buffers they operate on are bound to memory.

An example:

using namespace vpp;
typedef Buffer< Buf::SOURCE > StagingBuffer;
StagingBuffer stagingBuffer ( tex2D.size(), m_device );
// stagingBufferMemory will be of type MemoryBinding<...>
auto stagingBufferMemory = bindMappableMemory ( stagingBuffer, MemProfile::HOST_STATIC );
stagingBufferMemory.memory().map();
memcpy ( stagingBufferMemory.memory().beginMapped(), tex2D.data(), tex2D.size() );
stagingBufferMemory.memory().unmap();

Vectors

While buffers are a low-level concept, basically offering ranges of unformatted memory, VPP also offers a much more advanced mechanism: vectors.

The vpp::gvector template is basically an STL-like vector operating on top of a buffer. The vector internally manages the buffer, as well as memory allocation and binding. From the user's perspective it behaves much like std::vector. The only limitation is that the maximum size must be specified in advance.
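
A toy model (not VPP source) of that contract can make it concrete: STL-like push_back, but with the maximum size fixed at construction time, because the underlying Vulkan buffer cannot grow.

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Toy model of the gvector contract: grows like std::vector, but only up
// to a capacity fixed at construction time.
template< typename T >
class FixedCapacityVector
{
public:
    explicit FixedCapacityVector ( std::size_t maxSize ) :
        d_maxSize ( maxSize )
    {
        d_data.reserve ( maxSize );
    }

    void push_back ( const T& item )
    {
        // A real GPU-backed vector cannot reallocate, so exceeding the
        // declared maximum size is an error.
        if ( d_data.size() == d_maxSize )
            throw std::length_error ( "capacity exhausted" );
        d_data.push_back ( item );
    }

    std::size_t size() const { return d_data.size(); }
    std::size_t capacity() const { return d_maxSize; }

private:
    std::size_t d_maxSize;
    std::vector< T > d_data;
};
```
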

Vectors are quite versatile and can be used in all places where either a buffer (a vpp::Buf subclass) or a memory binding (vpp::MemoryBinding) is needed.

An example of using vpp::gvector to define vertices:

template< vpp::ETag TAG >
struct TVertexAttr : public vpp::VertexStruct< TAG, TVertexAttr >
{
    // Attribute fields, declared as GLM vectors via vpp::ext (see the
    // section on defining vertex structures below for details).
    vpp::Attribute< TAG, vpp::ext< glm::vec4 > > m_pos;
    vpp::Attribute< TAG, vpp::ext< glm::vec4 > > m_normal;
    vpp::Attribute< TAG, vpp::ext< glm::vec4 > > m_uv;
    vpp::Attribute< TAG, vpp::ext< glm::vec4 > > m_color;
};

typedef TVertexAttr< vpp::CPU > CVertexAttr;
typedef TVertexAttr< vpp::GPU > GVertexAttr;
typedef vpp::gvector< CVertexAttr, vpp::Buf::VERTEX > VertexAttrBuffer;

class MyRenderingEngine
{
public:
    MyRenderingEngine ( const vpp::Device& hDevice ) :
        m_device ( hDevice )
    {
        // m_init is a vpp::Preprocess node
        m_renderGraph.m_init << [ this ]()
        {
            // ...
            // This will copy the vector contents to the device before
            // rendering begins.
            m_pVertexAttrBuffer->cmdCommit();
        };
    }

    void loadMeshes ( const MeshLoader::Meshes& meshesVector )
    {
        unsigned int nTotalVertices = ...; // compute number of vertices in meshes

        // Create the vector. nTotalVertices is the maximum capacity,
        // for now the size is zero.
        m_pVertexAttrBuffer.reset ( new VertexAttrBuffer (
            nTotalVertices, MemProfile::DEVICE_STATIC, m_device ) );

        for ( uint32_t m = 0; m < meshesVector.size(); m++ )
        {
            const MeshLoader::Mesh& mesh = meshesVector [ m ];

            for ( uint32_t i = 0; i < mesh.m_vertices.size(); i++ )
            {
                const MeshLoader::Vertex& vtx = mesh.m_vertices [ i ];

                CVertexAttr vertex;
                vertex.m_pos = glm::vec4 ( vtx.m_pos, 1.0f );
                vertex.m_normal = glm::vec4 ( vtx.m_normal, 1.0f );
                vertex.m_uv = glm::vec4 ( vtx.m_tex, 0.0f, 0.0f );
                vertex.m_color = glm::vec4 ( vtx.m_color, 1.0f );

                // Insert the vertex into the vpp::gvector.
                m_pVertexAttrBuffer->push_back ( vertex );
            }
        }
    }

private:
    vpp::Device m_device;
    MyRenderGraph m_renderGraph;
    std::unique_ptr< VertexAttrBuffer > m_pVertexAttrBuffer;
    // ...
};

You can find more information in vpp::gvector documentation.

Vectors can be used for any buffers, not just vertices. An example of defining a vector of frame parameters (matrices, etc.) as uniform buffer:

template< vpp::ETag TAG >
struct TFrameParams : public vpp::UniformStruct< TAG, TFrameParams >
{
    // Example frame-scoped fields (names are illustrative).
    vpp::UniformFld< TAG, glm::mat4 > m_projection;
    vpp::UniformFld< TAG, glm::mat4 > m_view;
};

typedef TFrameParams< vpp::GPU > GFrameParams;
typedef TFrameParams< vpp::CPU > CFrameParams;

Buffer views

Buffer views are intermediate object classes which participate in binding buffers to binding points in pipelines. The task of a buffer view is to define a slice of the buffer which will be visible from the shader.

Most buffer views are simple, throw-away objects which can be constructed temporarily, only to bind the actual buffer. The only exception is the vpp::TexelBufferView template, which defines a view on the core Vulkan level; this view must exist as long as the pipeline is using it.

There are several buffer view classes, intended to be used with different kinds of buffers (for example, vpp::VertexBufferView for vertex buffers and vpp::TexelBufferView for texel buffers).

All buffer views take a vpp::MemoryBinding object in their constructors. That means you can use either a regular buffer bound to memory, or a vpp::gvector.

Buffer binding points

Binding points are objects declared within your custom pipeline as class fields, whose purpose is to make external data accessible from shaders. The shaders are written in C++ but executed on the GPU. Binding points are hybrids between CPU-level and GPU-level objects, acting as go-betweens for the two worlds.

On the GPU level, binding points are accessed from shaders just as regular class fields.

On the CPU level, binding points are bound to data buffers with the help of view objects and shader data blocks.

Binding points exist both for buffers and images. Buffer binding points include, for example, vpp::inUniformBuffer for uniform buffers.
See the documentation for respective classes for more information.

Any of them should be placed as a field (usually private) in your class derived from vpp::PipelineConfig or vpp::ComputePipelineConfig.

Note that we did not list the binding point for vertices (vpp::inVertexData) here, although it is also a binding point working with buffers. This is because vertex buffers are treated differently by Vulkan. For example, they are not part of shader data blocks described below. Vertex binding points will be covered in a separate section.

Shader data blocks

A buffer (or image) binding is an association between some binding point and some buffer view. A shader data block is a container for such associations. It remembers the bindings for a number of binding points.

A shader data block must be selected for subsequent drawing operations in a rendering or computation command sequence. That means all affected binding points are bound at once. You can later switch to another shader data block, which has some (or all) binding points associated with different data. Switching shader data blocks is very fast.

Currently VPP offers shader data blocks which act on all binding points defined in a particular pipeline configuration (except vertex data). Therefore vpp::ShaderDataBlock takes a vpp::PipelineLayout as its constructor parameter.

Binding buffers to points

In order to bind one or more buffers to binding points in a particular pipeline, first you need a vpp::ShaderDataBlock in which the binding will be stored. Create one or more shader data blocks right next to your vpp::PipelineLayout object. Also define a method in your vpp::PipelineConfig (or vpp::ComputePipelineConfig) subclass that takes a pointer to a vpp::ShaderDataBlock as well as the set of buffer and image views to bind.

In the method, call vpp::ShaderDataBlock::update() on the supplied data block. This method has a somewhat interesting interface, as it takes an assignment list, allowing multiple buffers and images to be bound at once.

An assignment list is, as the name says, a list of assignments separated by commas. The entire list should be enclosed in additional parentheses, to avoid confusing the C++ parser (so that it does not search for an overload with multiple arguments). Each assignment on the list has the form:

<binding point> = <buffer view object>

The binding point is a class field (e.g. of type vpp::inUniformBuffer or another binding point type) and the buffer view object is a reference to one of the buffer view classes mentioned in the earlier subsection (e.g. a vpp::VertexBufferView).

Because we are inside a method defined in a vpp::PipelineConfig subclass, you access the binding points directly as fields. The views must be provided as arguments.

An example:

typedef vpp::format< vpp::texture_format > TextureLoaderFmt;

typedef vpp::ImageAttributes<
    TextureLoaderFmt, vpp::RENDER, vpp::IMG_TYPE_2D,
    VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT,
    VK_IMAGE_TILING_OPTIMAL, VK_SAMPLE_COUNT_1_BIT,
    false, false > TextureLoaderAttr;

typedef vpp::Image< TextureLoaderAttr > TextureLoaderImage;
typedef vpp::ImageViewAttributes< TextureLoaderImage > TextureLoaderViewAttr;
typedef vpp::ImageView< TextureLoaderViewAttr > TextureLoaderView;

class MyPipeline : public vpp::PipelineConfig
{
public:
    // ...

    void setDataBuffers (
        const vpp::UniformBufferView& fpv,
        const TextureLoaderView& texv,
        vpp::ShaderDataBlock* pDataBlock );

private:
    // ...
};

void MyPipeline :: setDataBuffers (
    const vpp::UniformBufferView& fpv,
    const TextureLoaderView& texv,
    vpp::ShaderDataBlock* pDataBlock )
{
    // Never forget about extra parentheses!
    pDataBlock->update ((
        m_framePar = fpv,
        m_colorMap = texv
    ));
}

This example shows binding of both a buffer and an image. The rules are exactly the same for both types of resources.

A different question is when to call the setDataBuffers method. It is up to you, but it must be somewhere where the vpp::ShaderDataBlock objects as well as all buffer and image objects are already constructed. On the other hand, it must be done before the shader data block is actually used.

A shader data block is used when it is bound to a pipeline inside a command sequence.

Binding vertex buffers to pipelines

As we have already mentioned, vertex (or more generally, geometry) data is treated in a special way by Vulkan. This kind of data is stored in buffers, but it is not bound via shader data blocks.

The binding point class for vertex data is vpp::inVertexData. It is a template which takes a somewhat unusual parameter - a structure definition. This is because you must define the format of your vertex data, so that Vulkan knows how to parse it. Vertex data is the only kind of user-defined data that is actually processed by fixed-function hardware on the GPU; therefore the format must be determined before the data is accessed. See the sections below for instructions on how to define a vertex data structure and how to distinguish it from an instance data structure.
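
What "defining the format" boils down to can be shown in plain C++: the fixed-function vertex fetch needs a stride and a byte offset for each attribute. The struct below is a hypothetical example; VPP derives this information automatically from the template-based structure definitions shown later in this chapter.

```cpp
#include <cassert>
#include <cstddef>

// A hypothetical plain vertex layout, to show the stride/offset data the
// fixed-function vertex fetch stage requires.
struct PlainVertex
{
    float pos [ 4 ];    // 16 bytes at offset 0
    float normal [ 4 ]; // 16 bytes at offset 16
    float uv [ 2 ];     // 8 bytes at offset 32
};

// These are the numbers a Vulkan vertex input description needs.
const std::size_t vertexStride = sizeof ( PlainVertex );
const std::size_t posOffset = offsetof ( PlainVertex, pos );
const std::size_t normalOffset = offsetof ( PlainVertex, normal );
const std::size_t uvOffset = offsetof ( PlainVertex, uv );
```
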

To bind a vertex buffer (or several of them) to vpp::inVertexData binding points, the best way is to start by writing a helper method, just the same as for regular buffers in the previous section. This time it will not take any vpp::ShaderDataBlock, just the buffer references.

Instead of updating a shader data block, it calls the vpp::PipelineConfig::cmdBindVertexInput() command. This binds the vertex buffer to the pipeline and makes it the vertex source for subsequent draw commands.

Calling the command directly means that the bindVertices method must itself be called directly from within a vpp::Process command sequence (that is, from within the lambda function). This is the same place where you issue draw commands and switch pipelines and shader data blocks. Vertex buffers are another item that can be switched multiple times. You can have e.g. a separate buffer for each mesh.

The vpp::PipelineConfig::cmdBindVertexInput() command accepts the same kind of assignment list as vpp::ShaderDataBlock::update(). You should assign only vpp::inVertexData binding points here, otherwise you will get a nasty C++ compiler error.

An example:

class MyPipeline : public vpp::PipelineConfig
{
public:
    // ...

    void bindVertices ( const vpp::VertexBufferView& vert )
    {
        // Do not forget about extra parentheses!!!
        cmdBindVertexInput (( m_vertices = vert ));
    }

private:
    // ...
};

// ...

// In your rendering engine initialization routine...
{
    // ...

    m_renderGraph.m_render << [ this ]()
    {
        m_renderPass.pipeline ( 0, 0 ).cmdBind();
        m_dataBlock.cmdBind();

        // This calls the method defined above to bind the vertices.
        m_pipelineLayout.definition().bindVertices ( m_rectVertexBuffer );

        // This will read geometry from m_rectVertexBuffer.
        m_renderGraph.cmdDraw ( 4, 1, 0, 0 );
    };
}

You can also define more than one vpp::inVertexData binding point and have several vertex buffers. What needs to be well understood here is that all these buffers (imagine they are data tables organized in columns) define the same set of vertices. One buffer can hold positions, another one normals, yet another one UV coordinates – it is up to you to decide.

Generally it may be better for performance to pack all basic attributes into a single buffer (with a structure). However, sometimes certain data is optional. For example, anything other than positions is unused in D-passes (depth-only rendering), so it might be better for performance to create one buffer with positions (used in the D-pass and the color pass) and another one with the rest of the attributes (for the color pass only).

VPP offers full flexibility here. It is up to you to experiment, benchmark and decide which layout is optimal in your scenario.
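
A back-of-envelope helper can make the trade-off concrete: bytes the vertex fetch must stride over per vertex in a depth-only pass versus a color pass, for the two layouts. The attribute sizes are hypothetical examples, and the model is deliberately rough (it ignores caching details).

```cpp
#include <cassert>
#include <cstddef>

// Rough per-vertex fetch cost for the two layouts discussed above.
struct LayoutCost
{
    std::size_t depthPass; // bytes strided over when only positions are needed
    std::size_t colorPass; // bytes strided over when all attributes are needed
};

LayoutCost packedLayout ( std::size_t posBytes, std::size_t otherBytes )
{
    // One interleaved buffer: even the D-pass strides over the full vertex.
    return { posBytes + otherBytes, posBytes + otherBytes };
}

LayoutCost splitLayout ( std::size_t posBytes, std::size_t otherBytes )
{
    // Positions separated: the D-pass touches only the position stream.
    return { posBytes, posBytes + otherBytes };
}
```

For example, with 16-byte positions and 48 bytes of other attributes, splitting cuts the D-pass stride from 64 to 16 bytes per vertex, while the color pass cost stays the same.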

Binding index buffers to pipelines

Index buffers are another kind of buffer that is not treated like regular data buffers. They do not even have an explicit binding point. Otherwise they are similar to vertex buffers and always go along with them (indices define the ordering of vertices and group them into primitives, e.g. triangles).

When you need to bind an index buffer, do it in the same bindVertices method as the vertex buffer:

class MyPipeline : public vpp::PipelineConfig
{
public:
    // ...

    void bindVertices (
        const vpp::VertexBufferView& vert,
        const vpp::IndexBufferView& ind )
    {
        // Do not forget about extra parentheses!!!
        cmdBindVertexInput (( m_vertices = vert ));

        // This one accepts only a single buffer. No assignment list
        // and no extra parentheses required.
        cmdBindIndexInput ( ind );
    }

private:
    // ...
};

Index buffers contain packed 16-bit or 32-bit integer values. These indices point to individual vertices in the vertex buffer - and in fact, in all defined vertex buffers in parallel (as they define the same set of vertices). Because of that, only a single index buffer is needed, without any customizable structure. That is why there is no explicit binding point for index buffers - it would carry no useful information.

To create an index buffer, you can use a convenience class called vpp::Indices. This is a typedef for a vpp::gvector configured to contain 32-bit index values.
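
How an index buffer groups vertices into primitives can be illustrated in plain C++: for a triangle list, consecutive index triples select vertices from the (parallel) vertex buffers. The helper below uses 32-bit indices, as vpp::Indices does; it is an illustration, not VPP code.

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <vector>

// Illustration of triangle-list primitive assembly: every three consecutive
// indices form one triangle referencing vertices by position.
std::vector< std::array< std::uint32_t, 3 > >
    assembleTriangles ( const std::vector< std::uint32_t >& indices )
{
    std::vector< std::array< std::uint32_t, 3 > > tris;

    for ( std::size_t i = 0; i + 2 < indices.size(); i += 3 )
        tris.push_back ( { indices [ i ], indices [ i + 1 ], indices [ i + 2 ] } );

    return tris;
}
```

Note how index reuse (vertex 1 and 2 shared by two triangles of a quad) is exactly what makes indexed drawing more compact than repeating full vertices.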

Defining the structure for vertex buffers

Although buffers transmit opaque data blobs, vertex data is obviously structured. Each vertex has a number of attributes. Typically one of them is the position in 3D or 4D homogeneous space. Other attributes might include a normal vector, a tangent vector, UV coordinates, etc. This quite resembles a C++ structure, and in fact a vertex buffer is an array of structures.

VPP offers a simple way to define vertex data structures, with syntax very similar to regular C++ structures and with all their benefits. Due to the very specific usage of these structures (they are accessed both on the CPU and the GPU), a plain C++ struct or class would not suffice without special support from the compiler, which VPP completely avoids. Therefore the structure definition is a template, and its fields are instantiations of templates.

A simple vertex structure definition looks like this:

// The definition is always a template taking a vpp::ETag parameter.
// The parameter specifies whether we want a CPU or GPU version of the structure.
// The ETag enumeration has two public (non-internal) values: vpp::CPU and vpp::GPU.

template< vpp::ETag TAG >
struct TVertexAttr :
    // Always inherit from vpp::VertexStruct when defining a vertex structure.
    // Pass along the tag and our own template name (that is, VPP uses the
    // CRTP pattern here).
    public vpp::VertexStruct< TAG, TVertexAttr >
{
    // Now define the attributes. Pass the tag along. Also give the type of
    // the attribute, as seen on the CPU. VPP will infer the GPU type
    // automatically. The types can be either the same as in the vpp::format
    // template, or a single vpp::ext< ... > with an external data type
    // (e.g. a GLM matrix or vector).
    vpp::Attribute< TAG, vpp::ext< glm::vec4 > > m_pos;
    vpp::Attribute< TAG, vpp::ext< glm::vec4 > > m_normal;
    vpp::Attribute< TAG, vpp::ext< glm::vec4 > > m_uv;
    vpp::Attribute< TAG, vpp::ext< glm::vec4 > > m_color;
};

As you can see, the definition is very simple and is actually a regular C++ type definition, although templatized.

Note that you can also use inheritance if you wish, and/or define any methods with either CPU or GPU scope. Do not define a constructor or destructor, though.

What can we do with such a definition? First, it is always recommended to make at least the first two of the three typedefs shown below (preferably all three):

typedef TVertexAttr< vpp::CPU > CVertexAttr;
typedef TVertexAttr< vpp::GPU > GVertexAttr;
typedef vpp::gvector< CVertexAttr, vpp::Buf::VERTEX > VertexAttrBuffer;

Now you have 3 types and can do some nice things with them:

  • Use CVertexAttr as a regular C++ data structure with a layout compatible with the GPU side. For example, the m_pos field will be visible in the CVertexAttr structure and you will be able to read it or assign a glm::vec4 to it. Fields based on vpp::format will have the corresponding vpp::format< ... >::data_type types.
  • Use GVertexAttr to access these fields on the GPU side. Accessing fields on the GPU is done by means of the indexing operator ([]), because the dot operator is not overloadable. To access e.g. the m_pos field, you will write [ & GVertexAttr::m_pos ] somewhere in your shader.
  • Use VertexAttrBuffer to pass around vertex buffers and send them to the GPU. This is just an STL-styled vector holding vertices, which you can bind to the GPU pipeline with a single method call. Cool!
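
The [ &Struct::m_field ] spelling can be demystified with a toy illustration (not VPP internals): since the dot operator cannot be overloaded in C++, a class can instead overload operator[] to take a pointer-to-member.

```cpp
#include <cassert>

// Toy illustration of the accessor syntax: operator[] taking a
// pointer-to-member gives the [ &Struct::m_field ] spelling.
struct GpuStructStub
{
    float m_pos;
    float m_normal;

    // In real VPP such an operator would emit shader code; here it just
    // reads the field, so that the syntax can be demonstrated.
    template< typename Field >
    Field operator[] ( Field GpuStructStub::* pMember ) const
    {
        return this->*pMember;
    }
};
```
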

Defining the structure for instance buffers

Instanced drawing is a method to efficiently render a large number of similar objects sharing the same mesh. Basically it is just drawing the same mesh over and over, with different parameters (e.g. a varying model matrix). You specify that your mesh has e.g. 2000 vertices and you want to draw it 100 times. The GPU will then repeat the drawing 100 times, each time drawing the same mesh of 2000 vertices but picking a new model matrix (which is used in the vertex shader and affects the transformation of the mesh).
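
The arithmetic behind the example is simple and worth stating explicitly: the vertex shader runs for every vertex of every instance, while the per-instance data advances only once per mesh.

```cpp
#include <cassert>

// Simple accounting for an instanced draw call.
struct InstancedDrawStats
{
    unsigned int totalVertexShaderRuns;
    unsigned int instanceDataFetches;
};

InstancedDrawStats instancedDraw ( unsigned int nVertices, unsigned int nInstances )
{
    // Each instance re-runs the whole mesh; the instance index (and hence
    // the instance buffer entry) advances once per instance.
    return { nVertices * nInstances, nInstances };
}
```

So the 2000-vertex mesh drawn 100 times costs 200000 vertex shader invocations, but only 100 distinct model matrices.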

Parameters for instances come from buffers quite similar to vertex buffers. They are called instance buffers, and the difference is that a new item from an instance buffer is picked when the next instance is started.

Although instance buffers are implemented in core Vulkan almost the same way as vertex buffers, VPP gives them recognition at the structure definition level. This is partly because of complex internals, but also design logic: instance buffers conceptually hold very different data than vertex buffers, and there is no sense in mixing them together.

An example:

template< vpp::ETag TAG >
struct TInstanceStd : public vpp::InstanceStruct< TAG, TInstanceStd >
{
    // Example per-instance field: the model matrix.
    vpp::Attribute< TAG, vpp::ext< glm::mat4 > > m_model;
};

typedef TInstanceStd< vpp::CPU > CInstanceStd;
typedef TInstanceStd< vpp::GPU > GInstanceStd;

As you can see, apart from using vpp::InstanceStruct instead of vpp::VertexStruct, everything is the same. Also, vpp::Buf::VERTEX is the right usage flag for the buffer.

Defining the structure for uniform buffers

Uniform buffers are another type of data buffer, designed to pass data to the rendering engine (or from it, as there are both read-only and read-write variants).

First let's look at read-only uniform buffers. They are suitable for passing the highest-level data to shaders: not vertex-scoped nor instance-scoped, but frame-scoped. This is the ideal place for all the data which does not change for an entire frame, like the projection matrix or the positions of light sources.

Defining the structure for uniform buffers is quite similar to vertex and instance buffers, with some small differences. They can be seen in the following example:

// The template parameter is exactly the same as for vertex buffers.

template< vpp::ETag TAG >
struct TFrameParams :
    // Now we have a base class called UniformStruct, but otherwise it is
    // the same syntax.
    public vpp::UniformStruct< TAG, TFrameParams >
{
    // Fields are now defined with the UniformFld template. It is almost
    // the same, but:
    // - there is no option to use fields based on vpp::format< ... >,
    // - fields must be declared as custom math types (GLM, etc.),
    // - no vpp::ext< ... > is required.
    vpp::UniformFld< TAG, glm::mat4 > m_projection;
    vpp::UniformFld< TAG, glm::mat4 > m_view;
};

// These typedefs are identical as for vertex buffers.
typedef TFrameParams< vpp::GPU > GFrameParams;
typedef TFrameParams< vpp::CPU > CFrameParams;

// Here only the vector usage flag is different.
typedef vpp::gvector< CFrameParams, vpp::Buf::UNIFORM > FrameParamsBuffer;

The definition above is applicable to read-only buffers. For buffers that are potentially read-write, there is only one thing to change:

// The vector usage flag now specifies a STORAGE buffer. This is read-write.
typedef vpp::gvector< CFrameParams, vpp::Buf::STORAGE > FrameParamsBuffer;

Images

Images store data organized into 1, 2 or 3 geometric dimensions, and additionally can have multiple layers and/or MIP maps. They have no user-defined structure, but rather a format, which is a single piece of information (an enumeration value in core Vulkan).
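
The size of a full MIP chain follows directly from the image extent: each level halves the dimensions (rounding down) until a 1x1 level is reached. This helper shows the standard relation; it is a plain C++ illustration, not a VPP API.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Number of MIP levels in a full chain for a 2D image: halve the larger
// dimension (integer division) until it reaches 1, counting the levels.
std::uint32_t fullMipLevelCount ( std::uint32_t width, std::uint32_t height )
{
    std::uint32_t levels = 1;
    std::uint32_t size = std::max ( width, height );

    while ( size > 1 )
    {
        size /= 2;
        ++levels;
    }

    return levels;
}
```
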

Basic image classes in VPP

The counterpart of the generic vpp::Buf class for images is vpp::Img. It has all image parameters specified at runtime and is a universal image type, acceptable by many VPP functions. Just as with vpp::Buf, it is better to avoid creating these explicitly. Instead use the vpp::Image template. This is a type-safe image definition, suitable for dealing with image binding points in pipelines. The vpp::Image template carries enough information to enable accessing these images from C++ shaders (via binding points).

The definition of an image type is somewhat more complex than for a buffer. It consists of several typedef declarations and is done in stages. Look at the following simple example:

// First, name your format. This particular format has one component
// per pixel, which is an 8-bit unsigned value mapped linearly onto the
// [0.0, 1.0] floating point range.
// With this format, you will see 'unsigned char' (0..255) on the CPU side,
// but Float on the GPU side.
typedef vpp::format< vpp::unorm8_t > TextureFmt;

// The ImageAttributes template is always the next step after the format.
// It gives all needed information about the image.
typedef vpp::ImageAttributes<
    // Format type, just defined.
    TextureFmt,
    // For typical usages, always put vpp::RENDER here. Other values
    // currently have only internal use.
    vpp::RENDER,
    // Dimensionality of the image. Use 1D, 2D, 3D, or CUBE_2D.
    vpp::IMG_TYPE_2D,
    // Image usage flags. See next section.
    VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT,
    // Image tiling mode. See next section.
    VK_IMAGE_TILING_OPTIMAL,
    // Image sample count. For multisampled anti-aliasing. For now, use 1 bit
    // (no MSAA).
    VK_SAMPLE_COUNT_1_BIT,
    // Whether the image is MIP-mapped. Typically enabled for textures.
    false,
    // Whether the image is arrayed. Useful in various scenarios.
    false
>
TextureAttr; // our attributes type name, used later.

// These typedefs are purely mechanical. This one defines the image type
// from the given attributes.
typedef vpp::Image< TextureAttr > TextureImage;

// This one defines the view attributes from the given image. It can take
// some extra parameters, not shown here.
typedef vpp::ImageViewAttributes< TextureImage > TextureViewAttr;

// Finally, this one defines an image view type.
typedef vpp::ImageView< TextureViewAttr > TextureView;
These typedefs define several data types. Two of them - TextureImage and TextureView - you will use regularly to handle images and views of this type. For example, to create an image on the device, and a view for it:

TextureImage myTexImg (
    VkExtent3D { width, height, 1 },
    MemProfile::DEVICE_STATIC,
    m_device );

TextureView myTexView ( myTexImg );

As you can see, after all these type definitions are being made, creating actual images becomes trivial.

Another important thing is that image binding points are templates that require a view class to instantiate, like this:

vpp::inSampledTexture< TextureView > m_colorMap;

This is the same TextureView as above. You can now bind a TextureView object to this binding point.

Images with dynamic format

Sometimes it is unsuitable to specify the format as a template argument, because the format varies or is not known at compilation time. This is a common case for textures loaded from external sources, which can have various compressed formats, determined only when the texture file is examined.

VPP supports this scenario by introducing a special vpp::texture_format syntax. It is used as in the example below:

typedef vpp::format< vpp::texture_format > TextureFmt;

typedef vpp::ImageAttributes<
    TextureFmt, vpp::RENDER, vpp::IMG_TYPE_2D,
    VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT,
    VK_IMAGE_TILING_OPTIMAL, VK_SAMPLE_COUNT_1_BIT,
    false, false > TextureAttr;

typedef vpp::Image< TextureAttr > TextureImage;

A format defined in this way has some limitations. It is guaranteed to work only for read-only sampled textures. In other cases, support depends on the actual rendering device capabilities (so assume there is none).

The format must ultimately be specified somewhere, and the proper place is an alternative version of the vpp::Image constructor:

TextureImage myTexImg (
    VK_FORMAT_BC3_UNORM_BLOCK,
    VkExtent3D { width, height, 1 },
    MemProfile::DEVICE_STATIC,
    m_device );

TextureView myTexView ( myTexImg );

Image types and usages

vpp::ImageAttributes requires several parameters determining the type and possible usage of the image. The most important ones are the format and the usage mask.

Format must be a vpp::format instance. Please see the docs for vpp::format for more information about formats.

The image usage flags are the fourth parameter on the list. This is a very important parameter for core Vulkan, specifying how the image will be used. It should be a bitwise OR of flags defined in the vpp::Img class. See the docs for vpp::Img::EUsageFlags for a description of these flags.

Most frequently used values are:

  • vpp::Img::SAMPLED - for images accessed in shaders through samplers (textures),
  • vpp::Img::STORAGE - for images accessed in shaders like plain pixel arrays,
  • vpp::Img::COLOR - for color attachments (render targets),
  • vpp::Img::DEPTH - for depth attachments.

If you are copying data from or to an image, also include the vpp::Img::SOURCE and/or vpp::Img::TARGET flags.

In case of intermediate nodes in render graphs, combine vpp::Img::INPUT with vpp::Img::COLOR (or vpp::Img::DEPTH).

Other flag combinations might result in suboptimal performance, or be unsupported on particular hardware. Best to avoid them unless you know what you are doing.

Memory allocation

vpp::Img and vpp::Image constructors which have a vpp::MemProfile argument automatically allocate memory for the image according to the profile. Other constructors do not do this.

You can manually bind memory to an image by using vpp::bindDeviceMemory() and vpp::bindMappableMemory() functions, just the same as for buffers.

Samplers

Retrieving data from textures is much more complex than just reading a pixel value from a bitmap at an (x,y) position. That simple scheme of data access is applicable to storage images (vpp::Img::STORAGE), but textures (vpp::Img::SAMPLED) follow a much more involved algorithm, called sampling.

Sampling is configurable by the user, which means a separate object is needed to hold the sampling parameters. This object is simply called a sampler. A sampler can be associated with a texture in one of three ways:

  • permanently,
  • by binding,
  • in shader code on the GPU.

The first way is the fastest one but the last is the most versatile. More information on that topic is contained in the section Image binding points.

In sampling, the coordinates of the texel to retrieve are floating-point, not integer. The texture surface is "continuous" and "infinite" in some sense (the coordinates cover the entire range of the float type). There can however be several conventions for how the coordinates map onto the actual image surface.

First of all, VPP offers two kinds of samplers: normalized and unnormalized ones.

Normalized samplers have the primary coordinate range equal to [0, 1]. The 0 value maps to the left or top point and 1 to right or bottom. This is compatible with UV maps that texture editing programs typically produce (UV maps associate a vertex in a mesh with some point in the texture).

Unnormalized samplers have a range starting at 0 but ending at the exact value of the image dimension (e.g. width, height, or depth in case of 3D images). This follows the definition in section 15.1.1 of the official Vulkan specification. Sometimes this is useful - if you know the size of the texture and want direct control over where the texels come from. Unnormalized samplers however have some limitations regarding the configuration options they support.
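
The relationship between the two conventions is a simple scaling: a normalized coordinate in [0, 1] maps onto the unnormalized range [0, extent] by multiplying with the image dimension. A minimal sketch:

```cpp
#include <cassert>

// Mapping between the two coordinate conventions described above: scale a
// normalized [0,1] coordinate by the image extent along that axis.
float normalizedToUnnormalized ( float u, unsigned int extent )
{
    return u * static_cast< float > ( extent );
}
```
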

In VPP, a normalized sampler configuration is defined by setting the fields of the vpp::SNormalizedSampler structure. For unnormalized samplers there is the vpp::SUnnormalizedSampler structure, which generally has fewer options.

These structures do not define the actual sampler object yet. For that, use the vpp::NormalizedSampler and vpp::UnnormalizedSampler classes. They take the structures as construction parameters, as well as the device. Both classes represent a Vulkan sampler object.

Normalized samplers support the following features:

  • Multiple modes of coordinate wrapping (e.g. repeated, mirrored). This determines what to do when a coordinate falls outside the primary range.
  • Several modes for border color (used for some of coordinate wrapping modes).
  • Threshold comparisons (aka depth comparisons or shadow mapping) instead of regular texel read.
  • Separate filtering (interpolation) configuration for upscaling, downscaling and MIP-mapping.
  • Anisotropic filtering.
  • LOD bias.
  • LOD clamping.

Unnormalized samplers are much more limited and they support:

  • Smaller selection of coordinate wrapping modes.
  • Same modes for border color.
  • A single filtering mode shared by upscaling and downscaling; the MIP-mapping filter is fixed.
  • LOD bias.
  • No depth comparisons, anisotropic filtering, LOD clamping.

Image views

Image views perform a similar role to buffer views. They are intermediate objects used while binding images to binding points. A view also allows you to define a slice of the image, consisting of a subset of its array layers and MIP levels (but not a window within its pixel area). Use this e.g. to treat a selected array layer of the image as a single-layer image.

An image view is always a Vulkan object and its lifetime must be at least as long as operations on it are being performed.

Image view classes are defined from vpp::ImageView template, like in the example below:

typedef vpp::format< vpp::unorm8_t > TextureFmt;

typedef vpp::ImageAttributes<
    TextureFmt, vpp::RENDER, vpp::IMG_TYPE_2D,
    VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT,
    VK_IMAGE_TILING_OPTIMAL, VK_SAMPLE_COUNT_1_BIT,
    false, false > TextureAttr;

typedef vpp::Image< TextureAttr > TextureImage;
typedef vpp::ImageViewAttributes< TextureImage > TextureViewAttr;
typedef vpp::ImageView< TextureViewAttr > TextureView;

The vpp::ImageViewAttributes intermediate template offers several additional parameters which are defined per view, rather than per image. All of these are optional.

The first one is the associated sampler type. There are normalized and unnormalized samplers, as explained above, and the view is configured for either one. By default, if you do not specify any parameter, a normalized sampler is assumed. However, if you specify the vpp::SAMPLER_UNNORMALIZED constant, the view will allow unnormalized samplers (as well as normalized ones), at the cost of some limitations on the underlying image (it can only be 1D or 2D).

Example:

// Default - normalized samplers only.
typedef vpp::ImageViewAttributes< TextureImage > TextureViewAttr;

// Allow unnormalized samplers.
typedef vpp::ImageViewAttributes<
    TextureImage, vpp::SAMPLER_UNNORMALIZED > UnnormTextureViewAttr;

// Normalized samplers only (specified explicitly).
typedef vpp::ImageViewAttributes<
    TextureImage, vpp::SAMPLER_NORMALIZED > NormTextureViewAttr;

The second optional parameter is called the aspect mask. This is a very rarely used parameter, taking a bitwise or of the following values (usually just one):

  • VK_IMAGE_ASPECT_COLOR_BIT
  • VK_IMAGE_ASPECT_DEPTH_BIT
  • VK_IMAGE_ASPECT_STENCIL_BIT

This parameter determines the kind of data accessed by this image view. Regular images contain color data. There can also be depth images containing pixel Z coordinates, as well as stencil images with bitmasks of user-defined meaning. Finally, there can be combined depth+stencil images. This parameter is useful mostly for the latter: you can select which part (depth or stencil) you want to access.

In most cases just do not specify anything for this parameter, as VPP will determine the aspect automatically (except for combined depth+stencil images).

Example:

typedef vpp::ImageViewAttributes<
    DepthTextureImage,
    vpp::SAMPLER_NORMALIZED,
    VK_IMAGE_ASPECT_DEPTH_BIT > DepthTextureViewAttr;

The third optional parameter is also very rarely used. It allows you to override the image format. This way you can access the image pixels as if they were in a different format than the one they are really stored in, similar to the way reinterpret_cast or unions work in C++.

If the overriding format is specified, it should be a vpp::format template instance. In most cases, leave this parameter unset and VPP will simply use the image's own format.

Image binding points

Image binding points are placed inside a vpp::PipelineConfig or vpp::ComputePipelineConfig derived class, just like other binding points. All of them take the image view type as the first template argument (some may take more optional arguments). There are the following binding point classes:

  • vpp::ioImage - for read/write unsampled (storage) images.
  • vpp::inTexture - for read-only sampled images (textures) combined with a sampler on the GPU level (from the shader code).
  • vpp::inSampledTexture - for read-only sampled images (textures) combined with a sampler on the CPU level, during binding.
  • vpp::inConstSampledTexture - for read-only sampled images (textures) combined with a sampler on the CPU level, during construction.

There are also binding points for samplers. These are also templates, taking the sampler type (either vpp::NormalizedSampler or vpp::UnnormalizedSampler) as the only argument:

  • vpp::inSampler - for samplers required to be bound to a shader data block,
  • vpp::inConstSampler - for samplers defined statically, during construction.

Binding points with the Const word in their names are statically bound to samplers. This is faster and simpler, but the sampler parameters cannot be changed.

vpp::inSampledTexture and vpp::inSampler allow you to change the sampler by binding – either by binding a different sampler to a shader data block, or by switching to a different shader data block. This allows using different samplers for each draw command.

A vpp::inTexture image is combined with a sampler directly in the shader code. This allows you to change the sampler even within a single draw call. It might be slower than the static or bound variants.
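The texture and sampler binding points described above can be declared as in the following sketch (the member names are illustrative, and TextureView is assumed to be a vpp::ImageView instance defined elsewhere):

class MyPipeline : public vpp::PipelineConfig
{
    // ...
private:
    // Texture with a sampler supplied during binding.
    vpp::inSampledTexture< TextureView > m_colorMap;

    // Sampler bound dynamically via a shader data block.
    vpp::inSampler< vpp::NormalizedSampler > m_sampler;
};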

Binding images to points

Images are bound to binding points in the pipeline in exactly the same way as buffers are. See the section Binding buffers to points for an introduction.

Images participate in the same shader data blocks as buffers, are updated by means of the same vpp::ShaderDataBlock::update() method, and can in fact be updated in the same method call as buffers (by mixing buffers and images on the same assignment list). You can proceed either this way, or write a separate updating method for images (or some of them) - it is up to you.

An example of binding both a buffer and image in single call:

typedef vpp::format< vpp::texture_format > TextureLoaderFmt;

typedef vpp::ImageAttributes<
    TextureLoaderFmt, vpp::RENDER, vpp::IMG_TYPE_2D,
    VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT,
    VK_IMAGE_TILING_OPTIMAL, VK_SAMPLE_COUNT_1_BIT,
    false, false > TextureLoaderAttr;

typedef vpp::Image< TextureLoaderAttr > TextureLoaderImage;
typedef vpp::ImageViewAttributes< TextureLoaderImage > TextureLoaderViewAttr;
typedef vpp::ImageView< TextureLoaderViewAttr > TextureLoaderView;
class MyPipeline : public vpp::PipelineConfig
{
public:
    // ...
    void setDataBuffers (
        // FrameParView is an assumed buffer view type for the
        // m_framePar buffer binding point (defined elsewhere).
        const FrameParView& fpv,
        const TextureLoaderView& texv,
        vpp::ShaderDataBlock* pDataBlock );

private:
    // ...
};

void MyPipeline :: setDataBuffers (
    const FrameParView& fpv,
    const TextureLoaderView& texv,
    vpp::ShaderDataBlock* pDataBlock )
{
    // Never forget about extra parentheses!
    pDataBlock->update ((
        // this binds the buffer
        m_framePar = fpv,
        // this binds the image
        m_colorMap = texv
    ));
}

A slightly more complicated form must be used when you bind both an image and a sampler simultaneously. This can happen only for vpp::inSampledTexture binding points. You need to use the vpp::bind() function: as the first argument, give the image view as above; as the second one, specify the sampler (normalized or unnormalized). Example:

// ...

void MyPipeline :: setDataBuffers (
    // FrameParView is an assumed buffer view type for the
    // m_framePar buffer binding point (defined elsewhere).
    const FrameParView& fpv,
    const TextureLoaderView& texv,
    const vpp::NormalizedSampler& sampler,
    vpp::ShaderDataBlock* pDataBlock )
{
    // Never forget about extra parentheses!
    pDataBlock->update ((
        // this binds the buffer
        m_framePar = fpv,
        // this binds the image together with the sampler
        m_colorMap = vpp::bind ( texv, sampler )
    ));
}

Both forms accept one more optional parameter: the image layout. This is a Vulkan layout code specifying which layout the image is in when it is accessed in a shader. Usually it is not specified and VPP automatically detects a viable layout from the following allowed values:

  • VK_IMAGE_LAYOUT_GENERAL (selected for storage images),
  • VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL (selected for color textures and input attachments),
  • VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL (selected for depth/stencil textures and input attachments).

If you have special needs regarding the layout, you can override it by specifying the layout as the final argument to the vpp::bind() function. VPP will pass it to the VkDescriptorImageInfo structure unchanged. This is advanced usage - do it only when you know what you are doing, as a wrong layout can result in validation or runtime errors.
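An explicit layout override can be sketched as a variation of the previous example (whether VK_IMAGE_LAYOUT_GENERAL is actually appropriate depends on how the image is used in your render graph):

pDataBlock->update ((
    // Bind the image and sampler, forcing the general layout.
    m_colorMap = vpp::bind ( texv, sampler, VK_IMAGE_LAYOUT_GENERAL )
));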

Binding images to output attachments

Output attachments are images that receive the results of a particular process in the render graph. All of the cases listed below are possible:

  • Single output attachment (most typical).
  • Multiple output attachments. There can be several output attachments per process, but this number is limited by the hardware. The minimum limit is 4 output attachments; mobile GPUs typically offer only that minimum, while currently all desktop GPUs allow at least 8.
  • Zero output attachments. Images can be generated also by other means, e.g. storage images bound via vpp::ioImage binding points.

Each output attachment should have a vpp::outAttachment binding point declared in the pipeline configuration subclass. This binding point requires passing a reference to the corresponding render graph node in the constructor. This can be either a vpp::Attachment or a vpp::Display node.

Output attachment binding points are not bound to actual images via shader data blocks, but rather through their corresponding nodes in the render graph.

A set of such bindings for all output attachments in a particular render pass (and graph) is called a framebuffer in Vulkan. It is somewhat analogous to a shader data block, but it is not switchable during rendering.

One particular case is permanent binding of images to render graph nodes. In such a case, you do not need to construct the framebuffer explicitly, as VPP maintains it internally. To use this simple variant, just pass image views to the constructors of vpp::Display and vpp::Attachment nodes.

In case of a display node, the role of the image view is performed by a vpp::Surface object. This way you can render directly to the screen or a window. This configuration is shown in the example below.

class MyPipeline : public vpp::PipelineConfig
{
public:
    MyPipeline (
        const vpp::Process& pr,
        const vpp::Device& dev,
        const vpp::Display& outImage ) :
            vpp::PipelineConfig ( pr ),
            // Binds the binding point to the render graph node.
            m_outColor ( outImage ),
            // ...
    {}

    void fFragmentShader ( vpp::FragmentShader* pShader )
    {
        // Writing to the output attachment is very simple - just use
        // the assignment operator.
        // This example simply fills the entire image with red color.
        m_outColor = vpp::Vec4 ( 1.0f, 0.0f, 0.0f, 1.0f );
    }

private:
    // ...
};
class MyRenderGraph : public vpp::RenderGraph
{
public:
    MyRenderGraph ( const vpp::Surface& hSurface ) :
        // This binds the actual image (display surface) to the output node.
        m_display ( hSurface )
    {
        // Registers the output node for the process.
        m_render.addColorOutput ( m_display );
    }

public:
    vpp::Process m_render;
    vpp::Display m_display;
};
class MyRenderingEngine
{
public:
    MyRenderingEngine ( ... ) :
        // ...
        m_renderGraph ( m_surface ),
        // This passes the m_display node to the pipeline, where it will be
        // bound to the m_outColor output attachment binding point.
        m_pipelineLayout ( m_renderGraph.m_render, m_device, m_renderGraph.m_display ),
        // ...
    {
        // ...
    }

private:
    // ...
    MyRenderGraph m_renderGraph;
    // ...
};

For a vpp::Attachment node, pass an image view directly to the constructor, as in the following example:

// This example shows two images: a color display and a depth (Z-buffer)
// off-screen image.

// Definitions below define image and view types for the Z-buffer.
typedef vpp::format< float, vpp::DT > FormatDepth;

typedef vpp::ImageAttributes<
    FormatDepth, vpp::RENDER, vpp::IMG_TYPE_2D, vpp::Img::DEPTH,
    VK_IMAGE_TILING_OPTIMAL, VK_SAMPLE_COUNT_1_BIT,
    false, false > DepthBufferAttr;

typedef vpp::Image< DepthBufferAttr > DepthBufferImage;
typedef vpp::ImageViewAttributes< DepthBufferImage > DepthBufferViewAttr;
typedef vpp::ImageView< DepthBufferViewAttr > DepthBufferView;

// Render graph
class MyRenderGraph : public vpp::RenderGraph
{
public:
    MyRenderGraph (
        // Target surface for rendering.
        const vpp::Surface& hSurface,
        // A view pointing to the image used as the Z-buffer.
        const DepthBufferView& depthBufferView ) :
            // Initialize the display node with the surface.
            m_display ( hSurface ),
            // Initialize the attachment node with the view.
            m_depthBuffer ( depthBufferView )
    {
        // Register outputs in the rendering process.
        m_render.addColorOutput ( m_display );
        m_render.setDepthOutput ( m_depthBuffer, 1.0f );
    }

public:
    vpp::Process m_render;
    // Output image node.
    vpp::Display m_display;
    // Z-buffer image node.
    vpp::Attachment< FormatDepth > m_depthBuffer;
};

As shown in both examples above, we can configure rendering without creating a Vulkan framebuffer explicitly. This is suitable for scenarios where we do not want to change rendering targets.

If the target images must not be bound permanently to render graph nodes, we can use vpp::FrameBuffer objects directly.

Swapchains, surfaces and displays

Binding images to input attachments

vpp::inAttachment

Other data

Push constants

- vpp::inPushConstant

Indirect drawing buffers

Queries

Timestamps