AWS_Getting_Started

Zebra AMI

Copyright MIPSOLOGY SAS © 2017

Contact us at zebra@mipsology.com

Visit us at www.mipsology.com

Getting Started with Zebra on AWS

Below you will find basic information on launching Zebra on AWS and a step-by-step guide to setting up your own Zebra instance.

You can also refer to the AWS online documentation:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html

 

1.    Create an AWS Account

If you don’t have an AWS account, you will first need to create one. Basic information is provided here, but we strongly advise you to consult the Amazon documentation.

Open your web browser and go to aws.amazon.com:

Select “Create a Free Account” and follow the procedure to create an account from this page:

Note that creating the account does not incur any cost. The credit card information you provide will only be used by Amazon to charge you based on your actual use of paid services.

2.    Launch Zebra

You can refer to this AWS page for more information:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html

a.     Go to the Console

Once you are logged in, select “EC2” from the AWS Services list, usually the first item in the “Compute” sub-section:

You are now on the EC2 Dashboard, which looks like the
following window (resources may differ from the picture):

b.     Create the EC2 instance:

Click the “Launch Instance” button to start a new instance. You will get “Step 1”:

c.     Select the Zebra AMI:

Select “AWS Marketplace” from the menu on the left and type “Zebra” in the search field:

d.     Select the Zebra AMI you want to run

In the displayed list, you will most likely see multiple Zebra AMIs from Mipsology.

Several of these AMIs are free (for non-commercial use) or offer a long free trial. If you are not sure which AMI to run, you can get more details by clicking “More info” and following the link to the AMI page on the Marketplace. You can then see the details:

Click “Select” on the AMI you want to run.

Note that if you already have an account with all parameters set up, you can go directly to the Marketplace page, search for Zebra, and start the AMI with the “Continue” button shown in the previous image.

IMPORTANT: if you selected the AMI named “ZEBRA on 1
FPGA (image classification)”, follow the instructions from:

http://www.mipsology.com/aws/Zebra_AWS_Getting_Started.170429f.html

e.     Select the instance type

In the long list of available instance types, the Zebra AMIs can only be executed on f1.2xlarge (and, for some of them, f1.16xlarge). Select the instance type you want (do not click “Review and Launch”).

f.       Select “Next: Configure Instance Details”

Select the VPC and Subnet you need for this instance. You can refer to the following page for more information:

http://docs.aws.amazon.com/workspaces/latest/adminguide/gsg_create_vpc.html

g.     Select “Next: Add Storage”

Nothing to do on this page

h.     Select “Next: Add Tags”

Nothing to do on this page

i.       Select “Next: Configure Security Group”

By default, the instance is accessible to anybody connected to the Internet. Most likely, you do not want that: you should create a security group and restrict access to your own IP address.

j.       Select “Review and Launch”

You are one step away from launching the instance. You need to create an SSH key pair (and download it) or select one you already have. This is important because this key “unlocks” access to the instance: if you lose it, you will not be able to reconnect to this instance.

k.     Select “Launch Instances”

Your instance is now launched and ready to be used.

3.    Run the Zebra example under Linux

You can find information on the AMI under the Mipsology/V2017.08.1/doc directory, particularly the readme file README.md.

The Zebra example shows the result of classifying images using a neural network running on the FPGA of the EC2 F1 instance. The following steps launch an image-classifier executable on the instance and display its results in a web browser. Alternatively, the results can also be viewed in text format.

Using Zebra in other applications does not require these steps. Zebra replaces a CPU or GPU to compute neural networks without changing a line of your application, and without any FPGA knowledge.

a.     Start an SSH tunnel

Type the following command in a shell:

ssh -i <Private Key> ubuntu@<Public IP/External DNS Hostname>
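Note that ssh refuses a private key file that is readable by other users. A minimal sketch of restricting the key permissions before connecting (the key file name my-key.pem is hypothetical; use the key pair you downloaded at launch):

```shell
# Hypothetical key file name; substitute the key pair downloaded at launch.
touch my-key.pem
# ssh rejects private keys with permissive modes; restrict to owner read-only.
chmod 400 my-key.pem
# Show the resulting permission bits.
stat -c '%a' my-key.pem
```

You can then connect with ssh -i my-key.pem ubuntu@<Public IP/External DNS Hostname>.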

b.     Start the image classifier server example

All examples are located under Mipsology/V2017.08.1/examples.
You can run various neural networks by launching the appropriate script:

·         run_demoWeb.sh: AlexNet under Caffe

·         run_caffe+GoogLeNet.sh: GoogLeNet under Caffe

·         etc.

To launch one of the examples, go to the examples directory and execute the script, for example:

cd Mipsology/V2017.08.1/examples

./run_demoWeb.sh

Then, to see the result of the image classification, launch a web browser on your local machine and connect to the instance (http://<Public IP/External DNS Hostname>). You should see something like the following window:

You can stop the example with a simple CTRL-C in the shell
window.

You can find information on the capabilities and limitations of this AMI under the Mipsology/V2017.08.1/doc directory, particularly the readme file README.md.

Note that the first launch of the example may take longer, as images are downloaded from the Internet to be used for the classification.

4.    Run the Zebra example under Windows

You can find information on the AMI under the Mipsology/V2017.08.1/doc directory, particularly the readme file README.md.

The Zebra example shows the result of classifying images using a neural network running on the FPGA of the EC2 F1 instance. The following steps launch an image-classifier executable on the instance and display its results in a web browser. Alternatively, the results can also be viewed in text format.

Using Zebra in other applications does not require these steps. Zebra replaces a CPU or GPU to compute neural networks without changing a line of your application, and without any FPGA knowledge.

a.     Start an SSH tunnel

Note that connecting to your instance from AWS requires the Java plugin to be installed in your web browser.

Select the instance you just launched and connect using the “Connect” button at the top of the instance page:

In the pop-up, select “A Java SSH Client directly from my browser (Java required)” and fill in the fields with the correct information:

The “user name” is ubuntu, and the private key path is the path to your SSH key file.

Then click the “Launch the SSH Client” button. The following window will be displayed:

Select “Run” to start the remote connection.

b.     Start the image classifier server example

All examples are located under
Mipsology/V2017.08.1/examples. You can run various neural networks by launching
the appropriate script:

·         run_demoWeb.sh: AlexNet under Caffe

·         run_caffe+GoogLeNet.sh: GoogLeNet under Caffe

·         etc.

To launch one of the examples, go to the examples directory and execute the script, for example:

cd Mipsology/V2017.08.1/examples

./run_demoWeb.sh

 

Then, to see the result of the image classification, launch a web browser on your local machine and connect to the instance (http://<Public IP/External DNS Hostname>). You should see something like the following window:

 

You can stop the example with a simple CTRL-C in the shell
window.

You can find information on the capabilities and limitations of this AMI under the Mipsology/V2017.08.1/doc directory, particularly the readme file README.md.

Note that the first launch of the example may take longer, as images are downloaded from the Internet to be used for the classification.

5.    Classify Your Own Images

You can classify your own images using the networks provided in the examples directory, following the steps described hereafter. A simple image-format conversion is included, converting JPEG images to the neural-network input required by Caffe. Note that only the JPEG format is supported. This step is not optimized and may take some time. If you are looking for professional-level image classification, requiring support for more formats and performance-oriented processing, we advise building such an application using one of the supported frameworks (currently this AMI supports Caffe; see the following sections); the Internet has resources to guide you. Zebra provides the means to execute the neural network but does not provide efficient image pre-processing.

a.     Import your images

In the same shell as the one you ran the example, type the
following commands:

·         First, create a directory to import your images:

mkdir user_images

·         Then copy your images into the directory:

cp <path to images>/*.jpeg user_images

Note that only the “jpeg” extension is allowed. If your files have the “jpg” extension, you will have to rename them to use the “jpeg” extension.
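Renaming can be scripted. A minimal sketch, assuming your files sit in the user_images directory created above (the sample file names are hypothetical):

```shell
# Create sample files to illustrate (hypothetical names).
mkdir -p user_images
touch user_images/cat.jpg user_images/dog.jpg
# Rename every .jpg file to the .jpeg extension expected by the converter.
for f in user_images/*.jpg; do
    mv "$f" "${f%.jpg}.jpeg"
done
# List the renamed files.
ls user_images
```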

b.     Launch the classification of your images

Classification of your images is achieved with the same
script as the example, with an extra option to indicate the file location:

./run.sh --user

This will launch Zebra on the FPGA and display the results in the same browser window as the examples above.

Notes and limitations:

·         As Zebra classifies many images every second, we advise adding many images. However, you may want to limit the size of the images to avoid long conversion times.

·         You must have run the example once before adding your own images (the example creates the files and directories required by this application).

·         The file extension must be “jpeg”.

·         The conversion does not support image-orientation metadata.

·         CaffeNet takes input images of size 227×227; you may prefer this size, as it avoids longer conversion.

6.    Use Zebra for your application

This AMI supports running neural-network inference on Zebra using the Caffe framework, with no effort to transition from a CPU or GPU. You can define your own network as you usually would when using a CPU or a GPU under Caffe (see Mipsology/V2017.08.1/doc/LIMITATIONS.txt for limitations). And of course, you can use your own parameters to configure the network. You do not need to change the application source code, as Zebra is fully integrated into Caffe. The complexity of using the FPGA to compute the inference is hidden, so you do not need to know anything about the FPGA.

a.     Supported Framework

This AMI supports Caffe. You can find information related to
Caffe at http://caffe.berkeleyvision.org.

b.     Setting the environment

The only requirement to move your application from a CPU/GPU to Zebra is to set up the environment. In the shell running on the instance, source the Zebra environment file matching your shell type:

$ source Mipsology/V2017.08.1/settings.sh

Or

$ source Mipsology/V2017.08.1/settings.csh

This will set two environment variables:

·         ZEBRA_INSTALL_DIR: the main directory of the Zebra release. Typically, you do not want to change this value, as there is only one Zebra version installed on this AMI. You may have to change it, however, if you obtain a new package from Mipsology.

·         LD_PRELOAD: this Linux variable defines the shared libraries that an application loads in priority. It must include, in first position, the path to the Zebra library, $ZEBRA_INSTALL_DIR/lib/libGpuToolsWrapper.so.
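For illustration, the effect of sourcing the settings file can be sketched with plain shell exports. This is a sketch only; the exact contents of settings.sh may differ, and normally you simply source the file:

```shell
# Illustrative sketch of what settings.sh sets up; paths assume the default install.
export ZEBRA_INSTALL_DIR="$HOME/Mipsology/V2017.08.1"
# The Zebra wrapper library must come first in LD_PRELOAD.
export LD_PRELOAD="$ZEBRA_INSTALL_DIR/lib/libGpuToolsWrapper.so"
# Verify the variables.
echo "$ZEBRA_INSTALL_DIR"
echo "$LD_PRELOAD"
```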

Once this is done, you can run your application as you would
on CPU/GPU.

Note: if you recompile Caffe on this AMI, you need to configure the compilation with GPU support enabled and select the GPU mode to run on Zebra.
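For reference, selecting GPU mode is done with standard Caffe settings, not anything Zebra-specific; for example, in a solver prototxt:

```text
# In the solver .prototxt, select GPU mode so the calls are routed to Zebra:
solver_mode: GPU
```

From pycaffe, the equivalent is calling caffe.set_mode_gpu() before running inference.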

c.     Supported precision

This AMI supports 16-bit integer precision for inference. CPUs and GPUs usually use 32-bit floating point for calculation, so using int16 can slightly impact the result accuracy. This accuracy difference is typically lower than the acceptable error for classifying images. The training can still be done using 32-bit floating point; the Zebra libraries automatically translate the inputs into appropriate values.

While we do test many of the standard networks and verify their accuracy when using 16-bit integers for calculation, we cannot guarantee this will always work with every neural network. Please contact Mipsology (aws_support@mipsology.com) for assistance if you have accuracy issues.

d.     Zebra log files

Specific information related to each Zebra run is saved in the directory $ZEBRA_INSTALL_DIR/log. For this log to be created, all users running Zebra must have write permission on this directory. The log files are named according to the following rule: <executed_binary_name>.YYYYMMDD-HHMMSS.log. Also, a link named zebra.log is created in the current directory the application is run from; it points to the log file containing the information on the last run executed from that location. Note that all logs older than 10 days are automatically erased. These log files can help in debugging issues related to Zebra and should be provided for support. You can also disable the log files by setting the ZEBRA_LOG_DIR environment variable to none.
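As an illustration of the naming rule, the following sketch creates a mock log directory and finds the most recent log file. The directory and file name here are illustrative, not the real AMI layout:

```shell
# Mock log directory for illustration; on the AMI this is $ZEBRA_INSTALL_DIR/log.
ZEBRA_INSTALL_DIR="$(mktemp -d)"
mkdir -p "$ZEBRA_INSTALL_DIR/log"
# Hypothetical log file following the <binary>.YYYYMMDD-HHMMSS.log rule.
touch "$ZEBRA_INSTALL_DIR/log/classify.20170801-120000.log"
# List logs newest first; the first entry is the latest run.
ls -t "$ZEBRA_INSTALL_DIR/log" | head -n 1
```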

The log file contains useful information on the application execution, such as the environment setup, execution time, system information, the Zebra environment variables used, configuration, warnings and errors, neural-network-related information, etc.

7.    Resources

Zebra on the AWS Marketplace: https://aws.amazon.com/marketplace/search/results?x=0&y=0&searchTerms=zebra&page=1&ref_=nav_search_box

Web site: http://www.mipsology.com/aws

Support: aws_support@mipsology.com

EULA: http://www.mipsology.com/aws/EULA

Third Party Licenses: http://www.mipsology.com/aws/THIRD_PARTY

Information you can find in the AMI under
/home/centos/Mipsology/V2017.08.1/ :

·         doc/README.md: Useful information on the selected AMI.

·         doc/LIMITATIONS.txt: What is supported and not supported in the selected AMI.

·         doc/VERSION.txt: Version information you can use for support.

·         doc/EULA.txt: End User License Agreement.

·         doc/THIRD_PARTY.txt: Third-party licenses for software used in the AMI.