OKI Electric Industry Co., Ltd. (President: Takahiro Mori; office: Minato-ku, Tokyo, Japan; “OKI”), OKI IDS Co., Ltd. (President: Ienobu Takizawa; office: Takasaki-shi, Gunma, Japan; “OIDS”) and Mipsology SAS (CEO: Ludovic Larzul; office: France; “Mipsology”) have successfully developed technology, based on PCAS, that automates the entire process from AI model streamlining to implementation in FPGAs.
We are proud to partner with AMD Xilinx at the Embedded World show in Germany and showcase our Zebra AI inference acceleration software!
In case you missed the show, click here to demo Zebra or try Zebra for yourself.
Did you see the Zebra demo at the September AI Hardware Summit in Santa Clara?
Thank you to our partner AMD Xilinx for your support and for selecting Zebra as the preferred AI inference computation engine for the new VCK5000.
In case you missed the show, click here to demo Zebra or try Zebra for yourself.
“…the most useful AI can only happen in the field. This is where edge computing comes into play. Computing on the edge offers opportunities in all markets spanning vital areas – cars, surgery, security, retail, robotics, assembly lines and more.”
“…If I ever get my time machine working… one of the things I’m going to do — in addition to purchasing stock in Apple, Amazon, Facebook, and Google, along with splashing some cash on Bitcoins when they could be had for just 10 cents apiece — is to invest in companies like Mipsology.”
“Customers in our region and across the world continue to express a need for superior AI acceleration for edge and embedded AI applications,” said Arthur Chung, Senior Director, Supplier Management, Avnet Asia. “With the new Zebra IP solution from Mipsology, a leader in AI acceleration, we will be able to confidently satisfy customer demand in this area.”
“We are thrilled with the capabilities of Mipsology’s new Zebra IP product,” said MB Kim, CEO, Libertron, Co., LTD. “We can now deliver the best AI inference acceleration solution to our customers who are looking for superior acceleration for their edge and embedded applications.”
“We are very enthusiastic about Mipsology’s new Zebra IP capabilities. Zebra IP will deliver business critical capabilities for AI inference for our Edge and Embedded customers,” said Alex Su, director, E-Elements Technology.
“Zebra IP has the same features and behavior as our Zebra inference accelerator running on PCIe cards, but in a much more flexible format, so that developers can enhance their systems with additional capabilities,” said Ludovic Larzul, CEO and founder of Mipsology. “They can integrate complex functions around Zebra to create smart systems deployed everywhere. The possibilities are endless.”
Zebra FPGA IP is well positioned to power smart systems that use cameras to collect information. When paired with video decoders and ARM CPUs on an FPGA, Zebra FPGA IP can perform complex processing without causing latency. This FPGA solution fits in cameras, set-top boxes, mobile systems, boxes installed in streets, and all other compact formats, without the cooling requirements and limited life spans of GPUs. FPGAs are extremely reliable, requiring minimal maintenance for Edge-based solutions.
The IP takes TensorFlow, PyTorch, or ONNX convolutional neural network (CNN) models and maps them to the FPGA. It uses only part of the FPGA, however, enabling designers to integrate their own functions and fully utilize ARM CPUs/GPUs.
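For readers unfamiliar with that flow, the sketch below shows what the model-preparation side typically looks like: a stock PyTorch CNN exported to ONNX, one of the three formats named above. Only standard PyTorch/torchvision calls are used; the Zebra-specific ingestion step is not shown, since its exact interface is not described here.

```python
# Minimal sketch, assuming the Zebra FPGA IP toolflow consumes a standard ONNX file.
# Everything below is stock PyTorch/torchvision; nothing here is Mipsology-specific.
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True)   # any CNN; ResNet-50 used as an example
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # NCHW input shape the network expects
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",        # ONNX model file an FPGA toolflow could then ingest
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```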
Zebra is designed with edge applications in mind, such as industrial automation, security, autonomous vehicles, and smart cities, as well as for image super resolution enhancement.
“Mipsology’s solutions continue to lead the path for AI on FPGAs,” said Ramine Roane, vice president of AI Software, Xilinx. “When combined with ARM CPUs and GPU, as well as video decoding, the new Zebra FPGA IP product from Mipsology enables an AI platform, expanding our ability to deliver a superior AI inference solution for our edge and embedded customers.”
AI software innovator Mipsology announced the availability of Zebra FPGA IP, a solution that accelerates the development and deployment of FPGA and adaptive SoC-based machine learning systems. Zebra FPGA IP is optimized to power edge applications spanning industrial automation, security, autonomous vehicles, smart cities, super resolution and more. Zebra IP is currently being used by early access partners and is expected to ship in early Q3.
The combination of Zebra FPGA IP and the ability to program FPGAs at the hardware level provides a malleable platform for machine learning-based systems at the edge.
Zebra FPGA IP can power smart systems that use cameras to collect information. When paired with video decoders and ARM CPUs on an FPGA, Zebra FPGA IP can perform complex processing without causing latency.
This FPGA product fits in cameras, set-top boxes, mobile systems, boxes installed in streets, and all other compact formats.
To add to today’s earlier announcement from Xilinx on its Versal AI Edge family, Mipsology, a startup focused on the acceleration of deep learning inferences, also has something to talk about.
Today, Mipsology is introducing its Zebra FPGA IP (field-programmable gate array intellectual property). This FPGA IP is designed to expedite the development and deployment of both FPGA and SoC-based machine learning systems, hoping to ease the requirements for programming FPGAs by removing the need for a CPU or GPU.
Mipsology has announced the availability of Zebra FPGA IP, a solution that has been developed to accelerate the development and deployment of FPGA and adaptive SoC-based machine learning systems.
The company, an AI software innovator, said that the Zebra FPGA IP has been optimized to power edge applications spanning industrial automation, security, autonomous vehicles and smart cities. The Zebra IP is currently being used by early access partners and is expected to ship early in Q3.
Silicon Valley, Calif., April 14, 2021 — AI software innovator Mipsology today announced a design partnership with E-Elements, a Taiwan-based supplier of professional FPGA training, design, and technology services. E-Elements will bundle Xilinx solutions enhanced with Mipsology’s Zebra AI inference accelerator in products and services designed for the Asian medical, robotics, and autonomous transportation industries.
“Mipsology is providing a toolset for easy migration of existing AI applications from GPU-based architectures to the Alveo platform, as well as plug-and-play, high-performance AI inference acceleration.”
Mipsology Zebra neural network accelerating software has been integrated into the latest build of Xilinx’s Alveo U50 FPGA data center accelerator card. The card is a single-slot, low-profile, passively cooled adaptable accelerator operating within a 75 W maximum power limit and can compute convolutional neural networks. It supports PCIe Gen3 x16 or Gen4 x8 and has 8 GB of HBM2 and Ethernet networking capability. This plug-and-play FPGA solution provides broader application flexibility and a longer life span for deep learning inference acceleration than a CPU or GPU.
It is suited for both data center and large industrial AI applications including robotics, smart cities, imaging processing/video analytics, healthcare, retail, driver-assist cars and video surveillance.
Machine learning (ML) software innovator Mipsology and design and development company OKI IDS Co., Ltd. (OKI IDS) signed an agreement to join forces on hardware solutions for the Japanese high-speed image processing AI market.
OKI IDS will leverage Mipsology’s best-in-class Zebra ML inference accelerator for application designs targeting the fast-growing ML needs in the Japanese market. Japanese corporations are embracing the revolution of AI while wanting to guarantee the high quality and high performance of their products across all markets, including automotive, industrial robotics, smart cities, medical applications, and video monitoring.
“We are excited by our collaboration with OKI IDS to supply the Japanese demand for neural network computation that supports high-speed, low-latency image processing,” said Ludovic Larzul, CEO and founder of Mipsology. “OKI IDS and Mipsology have a natural synergy and mutual expertise in FPGA design and machine learning that will result in our delivering industry-leading products based on FPGAs.”
Machine learning software innovator Mipsology today announced that its Zebra AI inference accelerator achieved the highest efficiency based on the latest MLPerf inference benchmarking. Zebra on a Xilinx Alveo U250 accelerator card achieved more than 2x higher peak performance efficiency compared to all other commercial accelerators.
With a peak of 38.3 TOPS announced by Xilinx (peak TOPS being the standard measure of computation performance potential), the Zebra-powered Alveo U250 accelerator card significantly outperformed competitors in throughput per TOPS and ranks among the best accelerators available today. It delivers performance similar to an NVIDIA T4, based on the MLPerf v0.7 inference results, while having 3.5x fewer TOPS. In other words, Zebra running on the same number of TOPS as the GPU would deliver 3.5x more throughput, or 6.5x more than a TPU v3. This performance does not come at the cost of changing the neural network: Zebra was accepted in the demanding closed category of MLPerf, which requires no neural network changes, high accuracy, and no pruning or other methods requiring user intervention. Zebra achieves this efficiency while maintaining TensorFlow and PyTorch framework programmability.
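The efficiency argument above reduces to simple arithmetic, sketched below. The 38.3 peak TOPS figure and the roughly 3.5x TOPS gap versus an NVIDIA T4 come from the text; the absolute throughput number is a made-up placeholder, since MLPerf results vary by model and scenario.

```python
# Back-of-the-envelope illustration of "throughput per TOPS" efficiency.
u250_peak_tops = 38.3          # from the article (Xilinx figure for the Zebra-powered U250)
t4_peak_tops = 38.3 * 3.5      # article: the T4 has roughly 3.5x more peak TOPS

throughput_ips = 5000.0        # hypothetical images/second, assumed equal on both cards

u250_efficiency = throughput_ips / u250_peak_tops
t4_efficiency = throughput_ips / t4_peak_tops

# Equal throughput on 3.5x fewer TOPS means ~3.5x better throughput per TOPS.
print(f"U250 efficiency: {u250_efficiency:.1f} images/s per TOPS")
print(f"T4 efficiency:   {t4_efficiency:.1f} images/s per TOPS")
print(f"Ratio: {u250_efficiency / t4_efficiency:.1f}x")
```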
“We are very proud that our architecture proved to be the most efficient for computing neural networks out of all the existing solutions tested, and in MLPerf’s ‘closed’ category which has the highest requirements,” said Ludovic Larzul, CEO and founder, Mipsology. “We beat behemoths like NVIDIA, Google, AWS, and Alibaba, and extremely well-funded startups like Groq, without having to design a specific chip and by tapping the power of FPGA reprogrammable logic. Perhaps the industry needs to stop over-relying on only increasing peak TOPS. What is the point of huge, expensive silicon with 400+ TOPS if nobody can use the majority of it?”
Mipsology Zebra tool on Xilinx FPGA beats GPUs and ASICs for ML inference engine efficiency
Machine learning software developer Mipsology has released benchmarks for its Zebra FPGA-based AI accelerator on the MLPerf inference benchmarking.
Running on a Xilinx Alveo U250 accelerator card, Zebra achieved more than twice the peak performance efficiency of other commercial accelerators, whether GPUs or dedicated AI edge chips.
Machine learning (ML) software innovator Mipsology and design and development company OKI IDS Co., Ltd. (OKI IDS) signed an agreement on October 20 to join forces on hardware solutions for the Japanese high-speed image processing AI market.
Mipsology, the Californian machine learning software specialist, says that its Zebra AI inference accelerator achieved the highest efficiency based on the latest MLPerf inference benchmarking.
Zebra, on a Xilinx Alveo U250, achieved more than 2x higher peak performance efficiency than all other commercial accelerators, says Mipsology.
“We are very proud that our architecture proved to be the most efficient for computing neural networks out of all the existing solutions tested, and in MLPerf’s ‘closed’ category which has the highest requirements,” says Ludovic Larzul, CEO and founder, Mipsology. “We beat behemoths like NVIDIA, Google, AWS, and Alibaba, and extremely well-funded startups like Groq, without having to design a specific chip and by tapping the power of FPGA reprogrammable logic. Perhaps the industry needs to stop over-relying on only increasing peak TOPS. What is the point of huge, expensive silicon with 400+ TOPS if nobody can use the majority of it?”
Silicon Valley-based startup Mipsology announced today that its Zebra AI inference accelerator achieved the highest efficiency based on the MLPerf inference test.
The benchmark, which measures training and inference performance of ML hardware, software, and services, pitted Mipsology’s FPGA-based Zebra AI accelerator against venerable data center GPUs like the Nvidia A100, V100, and T4. Comparisons were also drawn with AWS Inferentia, Groq, Google TPUv3, and others.
Mipsology announced that its Zebra AI inference accelerator achieved the highest efficiency based on the latest MLPerf inference benchmarking. Per the company, the Zebra on a Xilinx Alveo U250 accelerator card achieved more than 2x higher peak performance efficiency compared to all other commercial accelerators.
Peak TOPS has long been the standard for measuring computation performance potential, and many assume that more TOPS equals higher performance. However, this fails to take into account the real efficiency of the architecture, and the fact that at some point there are diminishing returns. This effect, similar to “dark silicon” for power, occurs when circuitry cannot be used because of existing limitations. Zebra has proven to scale along with TOPS, maintaining the same high efficiency as peak TOPS grows.
TOKYO–(BUSINESS WIRE)–OKI IDS Co., Ltd., an OKI Group company providing firmware and hardware design and development services, has teamed with Mipsology SAS, the company providing the Zebra neural network machine learning accelerating software for use with FPGAs.*1 The partners today announced the signing of a technical partnership agreement on October 20, 2020, with the stated goal of entering the domestic Japanese high-speed ML-based image processing market. This agreement marks the launch of a design and development service intended to support faster ML processing based on FPGA technology in domestic Japanese markets for applications requiring high-speed image processing, including autonomous driving, remotely controlled robots, telemedicine, and video monitoring.
AI software innovator Mipsology announced that its Zebra neural network accelerating software has been integrated into the latest build of Xilinx’s Alveo U50 data center accelerator card, the industry’s first low profile adaptable accelerator with PCIe Gen 4 support. Zebra’s ease-of-use and high throughput enable the Alveo U50 to compute convolutional neural networks with zero effort. This is the latest in a series of Zebra-enhanced Xilinx boards that enable inference acceleration for a wide variety of sophisticated AI applications. Others include the Alveo U200 and Alveo U250 boards.
Real-time IDC Research® opinion on industry news, trends and events
Xilinx selects Mipsology Zebra Software to accelerate its Alveo U50 FPGA
By: Ashish Nadkarni, Group Vice President, Infrastructure Systems, Platforms and Technologies Group; Peter Rutten, Research Director, Infrastructure Systems, Platforms and Technologies Group; Sriram Subramanian
Xilinx recently tapped a startup for its AI accelerating software platform. In what ways might this platform add extra benefits for FPGAs in contrast to GPUs?
It’s no secret that artificial intelligence is rapidly unfolding and making its way into virtually every field. AI imposes a unique workload on its compute platforms, one where parallelization is crucial. For this reason, GPUs, which generally boast over 1000 cores, have become preferable to CPUs when running neural networks, according to Medium contributor Connor Shorten.
Mipsology has announced that its Zebra neural network accelerating software has been integrated into the latest build of Xilinx’s Alveo U50 data centre accelerator card.
The card, the industry’s first low profile adaptable accelerator with PCIe Gen 4 support, will benefit from Zebra’s high throughput capabilities that will enable the Alveo U50 to compute convolutional neural networks more efficiently.
The accelerator enables the Alveo U50 to compute convolutional neural networks with zero effort. This is the latest in a series of Zebra-enhanced Xilinx boards that enable inference acceleration for a wide variety of sophisticated AI applications. Others include the Alveo U200 and Alveo U250 boards.
SILICON VALLEY, Calif., June 24, 2020 — AI software developer Mipsology announced that its Zebra neural network accelerating software has been integrated into the latest build of Xilinx’s Alveo U50 datacenter accelerator card, the industry’s first low profile adaptable accelerator with PCIe Gen 4 support. Zebra’s ease-of-use and high throughput enable the Alveo U50 to compute convolutional neural networks with zero effort.
AI software innovator Mipsology today announced that its Zebra neural network accelerating software has been integrated into the latest build of Xilinx’s Alveo U50 data center accelerator card, the industry’s first low profile adaptable accelerator with PCIe Gen 4 support. Zebra’s ease-of-use and high throughput enable the Alveo U50 to compute convolutional neural networks with zero effort.
AI software startup Mipsology is working with Xilinx to enable FPGAs to replace GPUs in AI accelerator applications using only a single additional command. Mipsology’s “zero effort” software, Zebra, converts GPU code to run on Mipsology’s AI compute engine on an FPGA without any code changes or retraining necessary.
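As a rough illustration of the “no code changes” claim, the snippet below is an ordinary PyTorch GPU inference script of the kind the article says Zebra can redirect to an FPGA. It deliberately contains no Zebra-specific calls: per the article, the single activation command happens outside the application code, and its exact form is not given here.

```python
# Illustrative only: standard PyTorch inference code, left untouched.
# Assumption (per the article): Zebra takes over execution without any edits below;
# the redirection to the FPGA engine is done by a separate setup command, not shown.
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

batch = torch.randn(8, 3, 224, 224).to(device)   # a batch of input images

with torch.no_grad():
    logits = model(batch)                        # inference call left unchanged
print(logits.argmax(dim=1))                      # predicted class per image
```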
Mipsology has ported its Zebra neural network machine learning accelerating software to the latest version of the Xilinx Alveo U50 accelerator card.
European machine learning and AI software developer Mipsology has signed a deal to use its Zebra neural network accelerating software in the latest build of Xilinx’s Alveo U50 data centre accelerator card.
Global technology solutions provider Avnet Asia and AI software innovator Mipsology announced that Avnet will promote and resell Mipsology’s Zebra software to its APAC customer base. Zebra removes the technical complexity of FPGAs, making them plug-and-play with exceptionally fast performance.