Mipsology offers class-leading FPGA-based acceleration for Deep Learning, with no FPGA knowledge required. We leverage more than 20 years of experience designing high-performance FPGA-based systems for Linux to offer our users the best solutions for Deep Learning.

Mipsology's first product, Zebra, offers users class-leading FPGA-based acceleration for neural network inference. No FPGA knowledge is required, and not a single line of code needs to be written to use Zebra. Zebra runs user-defined neural networks just as they would run on a GPU or CPU; switching takes minutes.

Zebra – Faster Neural Network Inference

High Performance

FPGAs are hardware-programmable. They offer best-in-class computation throughput for neural networks. Zebra executes neural networks fast on one or multiple FPGAs.

No FPGA Knowledge Required

No FPGA details are exposed to our users. Zebra users focus on solving their AI problems; Zebra makes them run on FPGAs.

Fast Set-up

Select an FPGA board, plug it in, link to the Zebra library, and run. Or simply use Zebra in the cloud.

Same Silicon, Better Performance

Mipsology R&D works to continuously improve performance on the same FPGA. No need to wait years for new silicon to accelerate further or to support the latest neural network evolutions.

Compatible with Data Centers

Zebra runs on "standard" FPGA-based boards, which fit easily in data centers within usual power and cooling constraints.

FPGA Fits More than One Usage

Zebra runs on hardware that can handle different data-center loads, so the hardware investment is not tied to a single usage.

Supports All Neural Networks

Unlike typical FPGA-based solutions, Zebra runs user-defined neural networks just as other processing engines like CPUs and GPUs do.

Already Integrated into Your Environment

Zebra users don’t have to learn new languages, new infrastructures, or new tools: Zebra works with Caffe, MXNet, TensorFlow, etc.

Supports Multiple Precisions

Zebra supports 16-bit fixed point, 8-bit fixed point, and more.
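For illustration, fixed-point inference maps floating-point weights and activations to integers with a shared scale. The sketch below shows generic symmetric quantization to 8-bit and 16-bit ranges; it is a minimal illustration of the technique in general, not Zebra's actual implementation.

```python
def quantize(values, bits):
    """Symmetric fixed-point quantization to a signed integer range."""
    qmax = 2 ** (bits - 1) - 1  # e.g. 127 for int8, 32767 for int16
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) for v in values], scale

def dequantize(q_values, scale):
    """Map quantized integers back to approximate floating-point values."""
    return [q * scale for q in q_values]

weights = [0.31, -1.27, 0.05, 0.98]
q8, s8 = quantize(weights, 8)     # 8-bit: coarser, cheaper arithmetic
q16, s16 = quantize(weights, 16)  # 16-bit: finer, closer to float accuracy
```

Narrower fixed-point formats trade a little accuracy for higher throughput and lower power, which is why supporting several precisions matters.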

Fully Integrated

Zebra includes the FPGA contents and the SW stack. Using it is as simple as replacing a GPU with an FPGA. There is no long wait for FPGA compilation; Zebra runs immediately.

Low Power

Zebra runs on FPGA boards typically drawing less than 40 W.

Easily Portable

Zebra can run on any FPGA. With the latest FPGA generations offering higher computation bandwidth, Zebra users run the same content faster by simply installing a new board.

Zebra | Version 16-12

  • Supports the TUL-KU115 board (based on the Xilinx KU115).
  • AlexNet inference performance with int16: over 1,000 img/s.
  • Power: below 40 W.
  • Perf/W: approx. 30 img/s/W.
  • Caffe infrastructure.
  • Demonstration on demand.
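As a quick sanity check on the figures above, performance per watt is just throughput divided by power; taking the stated bounds (over 1,000 img/s, below 40 W) as a baseline:

```python
# Reported Zebra 16-12 figures (lower/upper bounds from the list above)
throughput_img_s = 1000  # "over 1,000 img/s" for AlexNet int16
power_w = 40             # "below 40 W"

# Perf/W at these bounds; the true ratio is higher since throughput
# exceeds 1,000 img/s and power stays under 40 W, consistent with ~30.
perf_per_watt = throughput_img_s / power_w
print(perf_per_watt)  # 25.0 img/s/W
```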

Zebra | Version 17-06

  • Large data-center deployments, or in-house PCIe boards.
  • Supports all NNs similar to AlexNet and GoogLeNet.
  • Caffe & MXNet infrastructures.
  • Expected AlexNet inference performance: over 2,400 img/s with int16, over 4,500 img/s with int8.
  • Supports KU115-based & VU9P-based boards (VU13P-based boards upon availability).
  • Targeted power: below 40 W, at 100 img/s/W.
  • Try it on AWS Marketplace (EC2 F1): just look for Zebra.

Find Out More

For pricing and availability, or to receive updates about our products, please sign up or contact us at contact@mipsology.com.