what is manifold CLOUD?

manifold CLOUD is live production software that runs on COTS FPGA Programmable Acceleration Cards (PACs).

At its core, manifold CLOUD is service-oriented software built on an on-demand, configurable pool of shared resources allocated within a private cloud environment.

Hardware resources (PACs) are pooled into clusters, each of which can be thought of as a Virtual Private Cloud or, simply, a broadcast production.

Typically, a cluster has a fixed purpose for a set period of time, such as the "6 o'clock news" or the "Sunday football game."

Multiple clusters can be operated simultaneously, each with different services utilizing a shared hardware resource pool from one or more data centers.

Users operate manifold CLOUD services through a secure, single-sign-on web UI that facilitates access to clusters.


manifold MULTIVIEWER

manifold MULTIVIEWER is a live production multiviewer service built on our unique Distributed Multi-viewer (DMV) technology, allowing up to 512 PIPs per head from any source (including UHD) with no more than one frame/field of delay from input to output. Layouts are easily created in the manifold CLOUD web UI.

The manifold MULTIVIEWER service generates a 3G or UHD mosaic as an ST 2110-20 stream, which is then available as a new source in the cluster.
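Conceptually, a mosaic layout is just a list of PIP placements on the output raster. The sketch below illustrates that idea in Python; all names and fields are hypothetical and are not the actual manifold CLOUD layout format.

```python
# Hypothetical sketch of a multiviewer mosaic layout as plain data.
# Field names are illustrative, not the actual manifold CLOUD API.

def make_pip(source, x, y, w, h):
    """One picture-in-picture window on the mosaic, in pixel coordinates."""
    return {"source": source, "x": x, "y": y, "w": w, "h": h}

def quad_split(sources, mosaic_w=1920, mosaic_h=1080):
    """Lay out up to four sources in a 2x2 grid on an HD mosaic."""
    w, h = mosaic_w // 2, mosaic_h // 2
    return [
        make_pip(src, (i % 2) * w, (i // 2) * h, w, h)
        for i, src in enumerate(sources[:4])
    ]

layout = quad_split(["CAM-1", "CAM-2", "CAM-3", "GFX"])
print(len(layout))   # 4 PIPs
print(layout[3])     # bottom-right window
```

A real layout would carry more per-PIP state (labels, tallies, audio meters), but the data-driven shape is the point: the web UI can build and save such descriptions without the operator touching hardware.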

manifold UDX

The manifold UDX service takes an ST 2110-20 video source and performs video format conversion between HD (720p/1080i), Full HD (3G), and UHD (12G) with high-quality deinterlacing.

The output is an ST 2110-20 video stream in the selected format, which becomes available as a new source in the cluster.
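To get a feel for what up- and down-conversion means in bandwidth terms, here is a back-of-envelope calculation of uncompressed ST 2110-20 active-video payload rates, assuming 10-bit 4:2:2 sampling (20 bits per pixel) and ignoring packet overhead:

```python
# Approximate active-video payload rates for uncompressed ST 2110-20,
# assuming 10-bit 4:2:2 sampling (20 bits/pixel); header overhead ignored.
def payload_gbps(width, height, fps, bits_per_pixel=20):
    return width * height * fps * bits_per_pixel / 1e9

formats = {
    "HD 720p60":   (1280, 720, 60),
    "HD 1080i (30 full frames/s)": (1920, 1080, 30),
    "3G 1080p60":  (1920, 1080, 60),
    "UHD 2160p60": (3840, 2160, 60),
}
for name, (w, h, fps) in formats.items():
    print(f"{name}: {payload_gbps(w, h, fps):.2f} Gbps")
```

The results line up with the SDI-class labels used above: roughly 1.1–1.2 Gbps for HD, about 2.5 Gbps for 3G, and just under 10 Gbps for UHD, which is why a UDX conversion changes the bandwidth a source consumes in the cluster.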

manifold CLOUD
All manifold CLOUD services run with guaranteed sub-frame latency, just as live event operators expect.

Service Focus

Simple configuration. Focus on the what, not the how. Easily save productions as code.
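"Productions as code" means a production can be captured as plain data, versioned, and re-deployed. A minimal sketch of that idea using JSON serialization; the field names and schema here are purely illustrative, not the actual manifold CLOUD configuration format:

```python
import json

# Hypothetical "production as code" sketch: a cluster definition captured as
# plain data so it can be versioned and re-deployed. Field names are
# illustrative, not the actual manifold CLOUD schema.
production = {
    "name": "6-oclock-news",
    "services": [
        {"type": "multiviewer", "heads": 2, "pips_per_head": 16},
        {"type": "udx", "input": "CAM-1", "output_format": "UHD"},
    ],
}

saved = json.dumps(production, indent=2)   # save alongside your other code
restored = json.loads(saved)               # load it back for the next show
print(restored["services"][0]["type"])     # multiviewer
```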


Power savings

COTS FPGA accelerators use 90% less power than comparable CPU-based servers.

If you're serious about reducing greenhouse gas emissions, you can't find a more efficient solution!


manifold CLOUD services scale linearly, in direct relation to the total aggregate connected network capacity. Just create your services and they are automatically placed on available compute in your cluster.

1.6 Tbps

per rack-unit

Utilizing the latest-generation COTS FPGA Programmable Acceleration Cards offers up to 1.6 Tbps of processing per RU. That's enough to process up to 1024 HD signals!
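The per-rack-unit numbers check out with simple arithmetic: four cards at 400 Gbps each give 1.6 Tbps, which comfortably covers 1024 HD signals at roughly 1.5 Gbps apiece (an approximate HD-SDI-class rate used here for illustration):

```python
# Sanity check on the per-RU capacity figures quoted above.
cards_per_ru = 4
gbps_per_card = 400
total_gbps = cards_per_ru * gbps_per_card
print(total_gbps)                      # 1600 Gbps = 1.6 Tbps

hd_signal_gbps = 1.5                   # approximate rate per HD signal
print(total_gbps / hd_signal_gbps)     # ~1066 signals, so 1024 fits
```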

And because it's networked, if you need more, just add another server!



manifold CLOUD is built upon the time multiplexing of services using FPGA High Bandwidth Memory (HBM).

Unlike every other broadcast product on the market, manifold CLOUD does not have a fixed number of senders and receivers.

The only limitation is the amount of bandwidth allocated to the system!
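Time multiplexing here means many logical services share one physical device by taking turns within each frame period, with HBM buffering the in-flight video. The toy round-robin below illustrates the concept only; the real system does this in FPGA hardware, and the function and names are hypothetical:

```python
from collections import deque

# Toy illustration of time-multiplexing: several logical services share one
# physical processing resource by taking turns each frame period. Purely
# conceptual; the real system does this in FPGA hardware with HBM buffering.
def run_frame_period(services, slots_per_frame):
    """Fill the frame's processing slots round-robin across services."""
    queue = deque(services)
    schedule = []
    for _ in range(slots_per_frame):
        svc = queue.popleft()
        schedule.append(svc)
        queue.append(svc)       # back of the line for the next slot
    return schedule

# There is no fixed sender/receiver count: capacity is just slots
# (i.e. bandwidth), so adding a service only changes how slots are shared.
print(run_frame_period(["mv-1", "udx-1", "udx-2"], slots_per_frame=8))
```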


Current-generation COTS FPGA PACs offer 8× higher density than comparable CPU-based platforms, with up to 1.6 Tbps of usable processing per RU.


manifold CLOUD is inherently self-healing and will automatically recover from processing failures.


supported FPGA programmable accelerator cards

our technology partners

manifold CLOUD supports COTS FPGA Programmable Accelerator Cards from several vendors already, with more to be announced.

As of the current generation, these accelerators provide up to 400 Gbps of processing per card. With four cards per server, this allows up to 1.6 Tbps of media processing per RU.

Utilizing these accelerators, manifold CLOUD can, for example, process up to 512 × 3G multiviewer sources and 256 × 3G heads in one RU, and it continues to scale linearly as more compute is added!

More Information

Learn more about manifold CLOUD - The broadcast industry's first true live production cloud solution!
