Digest of White Paper on AI Chip Technologies 2018

The following is a digest of the 2018 white paper on AI chip technologies.

AI systems typically involve both training and inference, which have quite different computing-resource requirements. For artificial neural networks, training aims to minimize the error as a function of the parameters (i.e., the weights) of a given network structure; it can be performed offline or online, and with or without supervision. Inference, on the other hand, is usually performed online as a straightforward evaluation of the trained network.

Although a large number of parallel functional units, high memory bandwidth, and low-latency operations are generally desirable for any AI chip, training and inference place notably different demands on computing resources because of their distinct objectives.
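To make the distinction concrete, here is a minimal sketch (illustrative only, not taken from the white paper) that trains a one-layer linear network with NumPy. Training repeatedly runs a forward pass, a gradient computation, and a weight update to drive down the error, while inference is a single forward evaluation with fixed weights; the data, learning rate, and iteration count are arbitrary choices for the example.

```python
# Illustrative sketch of training vs. inference for a one-layer network.
# All values (data, learning rate, iteration count) are arbitrary examples.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 64 samples, 4 features.
x = rng.normal(size=(64, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = x @ true_w + 0.1 * rng.normal(size=64)

w = np.zeros(4)  # the parameters (weights) to be learned
lr = 0.1         # learning rate

# Training: minimize the squared error as a function of the weights.
# Each step is a forward pass plus a gradient computation and a weight
# update, which is why training demands far more compute than inference.
for _ in range(100):
    pred = x @ w                            # forward pass
    grad = 2.0 * x.T @ (pred - y) / len(y)  # gradient of the mean squared error
    w -= lr * grad                          # weight update

# Inference: a single straightforward evaluation of the trained network.
x_new = rng.normal(size=(1, 4))
y_hat = x_new @ w
print(y_hat)
```

The training loop touches every sample and every weight on each of its many iterations, whereas inference here is one matrix-vector product, which is the asymmetry the paragraph above describes.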
