Deep Learning News Series 2


Welcome to another installment of our ML and Deep Learning News series! This time, we touch on news from AWS, a way to get more efficiency out of your neural network, and a wrap-up of some AI advancements that took place this year. Let’s dive in!


AWS accelerates AI and ML in the public sector with the Rapid Adoption Assistance Initiative (via SiliconANGLE)


Amazon Web Services (AWS) recently announced the launch of the AI and ML Rapid Adoption Assistance Initiative, which aims to accelerate AI adoption across government agencies and the rest of the public sector. According to Amazon, the program has three phases:


  • The envisioning phase, which dives deep into the specific use case and the problem the partner is trying to solve with AI
  • The enablement phase, when AWS works with its partners’ technical teams to help train them on AI/ML operations
  • The build phase, when everything comes together and partners begin to build and roll out their projects

What do you think? Is this a good strategy for bringing AI to the public sector?

Common Assumptions on Machine Learning Malfunctions Could be Wrong (via Unite.AI)


University of Houston researchers believe that some of the common assumptions we hold about why ML malfunctions may actually be wrong. “According to Cameron Buckner, an associate professor of philosophy at UH, there must be an understanding of the failures brought on by ‘adversarial examples.’ These adversarial examples occur when a deep neural network system misjudges images and other data when it comes across information outside the training inputs that were used to develop the network.”

Buckner explains these anomalies, or artifacts, through the analogy of a lens flare in a photograph: the flare is caused not by a defect in the camera lens but by the interaction of light with the camera.
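To make the adversarial-example idea concrete, here is a minimal sketch of how a tiny, targeted perturbation can flip a classifier's decision. Everything below is illustrative — a toy logistic-regression model with made-up weights, not anything from the article or the UH research. The perturbation rule used is the well-known fast gradient sign method (FGSM): nudge the input a small step in the direction that most increases the loss.

```python
import numpy as np

# Toy logistic-regression "classifier" with illustrative weights.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y_true, eps=0.25):
    """Fast gradient sign method: perturb x by eps in the sign of the
    loss gradient. For logistic regression with cross-entropy loss, the
    gradient w.r.t. x is (p - y) * w, so no autograd is needed here."""
    grad = (predict(x) - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.5, -0.2])   # original input, correctly classified
y = 1.0                          # its true label
x_adv = fgsm(x, y)               # adversarially perturbed input

print(predict(x))     # confidently class 1
print(predict(x_adv)) # confidence collapses after a small perturbation
```

The point of the sketch is that the perturbed input is numerically close to the original — each feature moves by only 0.25 — yet the model's prediction flips, which is the kind of brittleness the article is discussing.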


Tapping Into Purpose-Built Neural Network Models For Even Bigger Efficiency Gains (via Semiconductor Engineering)


This is a great article from our friends over at Xilinx highlighting how “purpose-built” neural networks can result in substantial efficiency gains.


“Neural network architecture has a significant impact on performance, and the peak performance metric is of little value in the context of selecting an inference solution unless we can achieve high levels of efficiency for the specific workloads that we need to accelerate.”


We strongly recommend giving this article a read.


How AI and machine learning moved forward in 2020 (via SD Times)


As we close out the year, this is a great article highlighting the advancements in AI/ML that came to fruition in 2020. This year, we saw the beta launch of the GPT-3 language model created by OpenAI, legislation calling for the reform of facial recognition technology (resulting in companies like IBM sunsetting their facial recognition programs), and the slight growth of autonomous testing tools.


What do you think 2021 will have in store for the industry?


AI unveils patterns in Earth’s biological mass extinctions (via Engineering & Technology)


Scientists from the Earth-Life Science Institute (ELSI) at Tokyo Institute of Technology have applied machine learning to examine the co-occurrence of fossil species and determined that mass extinctions and radiations are rarely correlated. What does this mean? Well, first and foremost, it shows that ML can be used to visualize and understand the fossil record, but it also gives researchers a better perspective on how extinction events occur.
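For a rough feel for what examining co-occurrence in the fossil record can look like, here is a toy sketch. The data and method below are entirely made up for illustration — this is not the ELSI dataset or the researchers' actual machine-learning approach. Given a species-by-time-bin presence/absence matrix, we count how many species first appear (radiations/originations) and last appear (extinctions) in each bin, then check how correlated the two series are.

```python
import numpy as np

# Toy presence/absence matrix: rows are species, columns are time bins.
# A 1 means the species occurs in that bin (illustrative data only).
presence = np.array([
    [1, 1, 1, 0, 0, 0],   # goes extinct after bin 2
    [1, 1, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 1],   # originates in bin 3
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1, 1],
])

n_bins = presence.shape[1]
first = presence.argmax(axis=1)                               # first bin with a 1
last = n_bins - 1 - presence[:, ::-1].argmax(axis=1)          # last bin with a 1

originations = np.bincount(first, minlength=n_bins)   # new species per bin
extinctions = np.bincount(last, minlength=n_bins)     # last appearances per bin

# Pearson correlation between origination and extinction pulses.
r = np.corrcoef(originations, extinctions)[0, 1]
print(originations, extinctions, round(r, 2))
```

In this toy example the correlation comes out negative — extinction pulses and origination pulses fall in different bins — loosely echoing the study's finding that mass extinctions and radiations are rarely coupled, though the real analysis works at vastly larger scale with far more sophisticated methods.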