Zebra Strengths

Fastest Inference

Zebra runs neural network inference at world-class speed, setting a new standard for performance.

Highly Scalable

Zebra runs on everything from the highest-throughput boards down to the smallest ones. This scaling delivers the required throughput in data centers, at the edge, or in the Cloud.

Straightforward to Use

Deploying Zebra is a plug-and-play process: no hardware details are exposed to users. Zebra conceals the FPGA from AI engineers and lets them simply enjoy faster execution.

Works with all Neural Networks

Zebra accelerates any neural network, including user-defined ones. It runs the same neural network trained on CPUs or GPUs, with the same accuracy and without any change.


Zebra can replace CPUs or complement GPUs in data centers for heavy loads. Its scalability and lower power consumption also make it ideal at the edge or just under a desk.


Zebra adapts the hardware cost to your needs. Don’t want to pay an upfront cost? Pay as you go by accelerating your neural networks in the Cloud.

Supports most popular Frameworks

Zebra works with major AI frameworks such as Caffe, Caffe2, MXNet, and TensorFlow. Just plug Zebra in and go.

No changes to the software environment

Zebra users don’t have to learn new languages, new frameworks, or new tools. Not a single line of application code needs to change.

Lower Precision, Same Quality of Result

Zebra computes neural networks using 16-bit, 8-bit, or lower precision without sacrificing the accuracy of the results.
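To illustrate the general idea behind low-precision inference (this is a generic sketch, not Zebra’s actual implementation), here is a minimal symmetric 8-bit quantization example in Python: weights are mapped to int8 and back, and the round-trip error stays small relative to the weight magnitudes, which is why accuracy is typically preserved.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float weights to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate float weights."""
    return q.astype(np.float32) * scale

# Quantize a random weight tensor and measure the round-trip error.
w = np.random.randn(1000).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# The worst-case error is half a quantization step, i.e. scale / 2.
rel_err = np.abs(w - w_hat).max() / np.abs(w).max()
```

In practice, accelerators combine schemes like this with per-layer or per-channel scales and calibration data so that end-to-end accuracy matches the original floating-point model.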

Low Cost of Ownership

FPGA boards are compact, use less power than other accelerators, and don’t require forced cooling. These advantages lead to higher reliability and a lower cost of ownership.

We partner with world-class companies to deliver Zebra on robust hardware. Contact Mipsology to get the Zebra hardware that matches your needs, or ask us to make Zebra run on your own hardware.

Want to learn more about Zebra? Fill out the form to receive our brochure!