New Optimization Chip Tackles Machine Learning, 5G Routing

        #News(IoTStack) [ via IoTForIndiaGroup ]


        The training of machine learning systems and a wide variety of other data-intensive work can be cast as a class of mathematical problems called constrained optimization. In it, you’re trying to minimize the value of a function under some constraints, explains Georgia Tech professor Arijit Raychowdhury. For example, training a neural net could involve seeking the lowest error rate under a constraint on the size of the neural network.
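
        As a concrete (purely illustrative) example of that setup, the small SciPy sketch below minimizes a least-squares loss subject to a budget on the size of the solution. The data, the loss, and the budget are hypothetical stand-ins, not anything from the Georgia Tech work.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        A = rng.standard_normal((30, 5))   # made-up data matrix
        b = rng.standard_normal(30)        # made-up targets

        def loss(x):
            # The function we are trying to minimize (squared error).
            return np.sum((A @ x - b) ** 2)

        # The constraint: keep the solution's L1 "size" within a budget of 1.
        budget = {"type": "ineq", "fun": lambda x: 1.0 - np.sum(np.abs(x))}

        result = minimize(loss, x0=np.zeros(5), constraints=[budget])
        print(result.x, loss(result.x))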

        “If you can accelerate [constrained optimization] using smart architecture and energy-efficient design, you will be able to accelerate a large class of signal processing and machine learning problems,” says Raychowdhury. A 1980s-era algorithm called alternating direction method of multipliers, or ADMM, turned out to be the solution. The algorithm solves enormous optimization problems by breaking them up and then reaching a solution over several iterations.

        “If you want to solve a large problem with a lot of data—say one million data points with one million variables—ADMM allows you to break it up into smaller subproblems,” he says. “You can cut it down into 1,000 variables with 1,000 data points.” Each subproblem is solved, and its result is combined with those of the other subproblems in a “consensus” step to reach an interim solution. With that interim solution fed back into the subproblems, the process is repeated until the algorithm arrives at the optimal solution.
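
        A toy version of that loop is easy to write down. The sketch below is a generic consensus-ADMM iteration for least squares split across a handful of blocks, following the standard textbook formulation rather than the chip’s actual implementation; all sizes and names (rho, z, u) are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n_blocks, rows_per_block, n_vars = 4, 50, 10

        # Each "block" holds its own slice of the data, standing in for one
        # subproblem; together they define: minimize ||A x - b||^2.
        A = [rng.standard_normal((rows_per_block, n_vars)) for _ in range(n_blocks)]
        x_true = rng.standard_normal(n_vars)
        b = [Ai @ x_true + 0.01 * rng.standard_normal(rows_per_block) for Ai in A]

        rho = 1.0                                        # ADMM penalty parameter
        x = [np.zeros(n_vars) for _ in range(n_blocks)]  # local solutions
        u = [np.zeros(n_vars) for _ in range(n_blocks)]  # scaled dual variables
        z = np.zeros(n_vars)                             # shared consensus solution

        for _ in range(50):
            # Local step: each block solves its own small regularized
            # least-squares subproblem, independently of the others.
            for i in range(n_blocks):
                lhs = A[i].T @ A[i] + rho * np.eye(n_vars)
                rhs = A[i].T @ b[i] + rho * (z - u[i])
                x[i] = np.linalg.solve(lhs, rhs)
            # Consensus step: average the local results into an interim solution.
            z = np.mean([x[i] + u[i] for i in range(n_blocks)], axis=0)
            # Dual update: push each block toward agreement with z, then repeat.
            for i in range(n_blocks):
                u[i] += x[i] - z

        print("distance from true solution:", np.linalg.norm(z - x_true))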

        In a typical CPU or GPU, ADMM is limited because it requires moving a lot of data between memory and processor. So instead, the Georgia Tech group developed a system with a “near-memory” architecture.

        “The ADMM framework as a method of solving optimization problems maps nicely to a many-core architecture where you have memory and logic in close proximity with some communications channels in between these cores,” says Raychowdhury.

        The test chip was made up of a grid of 49 “optimization processing units,” cores designed to perform ADMM, each containing its own high-bandwidth memory. The units were connected to each other in a way that speeds ADMM. Portions of data are distributed to each unit, and they set about solving their individual subproblems. Their results are then gathered, and the data is adjusted and resent to the optimization units to perform the next iteration. The network that connects the 49 units is specifically designed to speed this gather-and-scatter process.
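
        To get a feel for why that layout helps, here is a back-of-the-envelope sketch (my own numbers, not measurements from the paper): the raw data stays put in each unit’s local memory, and only the small consensus vector has to cross the network each iteration.

        n_units   = 49       # the grid of optimization processing units
        rows_each = 20_000   # hypothetical rows of data parked at each unit
        n_vars    = 1_000    # variables per subproblem, per the example above
        bytes_per = 8        # 64-bit values

        # Data that never moves: it sits in each unit's high-bandwidth memory.
        resident = n_units * rows_each * n_vars * bytes_per

        # Data that moves per iteration: each unit sends its local solution
        # (gather) and receives the updated consensus vector (scatter).
        traffic = 2 * n_units * n_vars * bytes_per

        print(f"resident near cores : {resident / 1e9:.1f} GB")
        print(f"network traffic/iter: {traffic / 1e6:.3f} MB")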

        The Georgia Tech team, which included graduate student Muya Chang and professor Justin Romberg, unveiled OPTIMO at the IEEE Custom Integrated Circuits Conference in Austin, Tex.


