Plug & Play AI inference computation
Zebra® software delivers high performance and flexibility while saving users the time often lost deploying neural networks after each training run.
Zebra works with the major frameworks (PyTorch, TensorFlow, and ONNX) and is easily deployed on best-in-class FPGA technology. Zebra is so easy to deploy that our users call it ‘plug & play.’
"Mipsology Zebra on Xilinx FPGAs Beats GPUs, ASICS for ML Inference Efficiency" - Semiconductor Digest
How does Zebra work?
With Zebra on FPGA hardware, you can take your current CPU- or GPU-trained model and deploy accelerated inference.
No need to change your neural network, no changes to your framework, no new tools to learn. Just ‘plug in’ Zebra and get immediate inference acceleration in the data center or at the edge.
Zebra computes neural networks at world-class speed, setting a new standard for performance. Plus, Zebra works in the data center or at the edge, and can withstand the harsh environments where real-world AI is deployed.
Easy to Use
Deploying Zebra is a ‘plug & play’ process: no hardware details are exposed to users. Zebra conceals the FPGAs from AI engineers and lets them simply enjoy faster execution.
Works with all Neural Networks
Zebra accelerates any neural network, including user-defined networks. It processes the same CPU/GPU-trained neural network, with the same accuracy, without any change.
Supports most popular Frameworks
Zebra works with major AI frameworks such as Caffe, Caffe2, MXNet, and TensorFlow. Just plug Zebra in and go.
No changes to the software environment
Zebra users don’t have to learn new languages, new frameworks, or new tools. Not a single line of application code needs to change.
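To illustrate what “no code changes” means in practice, here is a minimal, hypothetical sketch (the names below are stand-ins, not Zebra’s real API): the application’s inference call is written once and stays identical no matter which backend actually executes the model.

```python
# Hypothetical sketch: application inference code is backend-agnostic.
# Swapping the execution backend (CPU -> accelerator) requires no change
# to the application function itself.
from typing import Callable, Sequence

def classify(image: Sequence[float],
             run_model: Callable[[Sequence[float]], Sequence[float]]) -> int:
    """Application code: identical whether run_model executes on CPU or FPGA."""
    scores = run_model(image)
    # Return the index of the highest score (the predicted class).
    return max(range(len(scores)), key=scores.__getitem__)

# Two interchangeable stand-in backends: the same trained model,
# one executing on CPU, one transparently accelerated.
def cpu_backend(x):
    return [v * 2 for v in x]   # placeholder for the trained model on CPU

def fpga_backend(x):
    return [v * 2 for v in x]   # same model, accelerated execution

image = [0.1, 0.9, 0.3]
# Same inputs, same results, same accuracy -- only the backend changed.
assert classify(image, cpu_backend) == classify(image, fpga_backend)
```

The design point this sketches is that the accelerator sits below the framework, so the swap is invisible to the application layer.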
Low Cost of Ownership
FPGA boards are compact, use less power than other accelerators, and don’t require forced cooling. These advantages lead to higher reliability and a lower cost of ownership.
See Zebra in action!