Cerebras’ CS-2 brain-scale chip can power AI models with 120 trillion parameters
Tagged: FPGA_H3
August 25, 2021 at 10:15 pm #62373
#Discussion(General) [ via IoTGroup ]
Cerebras Systems said its CS-2 Wafer Scale Engine 2 processor is a “brain-scale” chip that can power AI models with more than 120 trillion parameters. That is why Cerebras believes its latest processor, which is built on a full wafer rather than individual chips, is going to be so powerful, founder and CEO Andrew Feldman said in an interview with VentureBeat.
“The industry is moving past trillion-parameter models, and we are extending that boundary by two orders of magnitude, enabling brain-scale neural networks with 120 trillion parameters,” Feldman said. The Cerebras CS-2 is powered by the Wafer Scale Engine 2 (WSE-2), the largest chip ever made and the fastest AI processor to date. The WSE-2 also has 123 times more cores and 1,000 times more high-performance on-chip memory than its graphics processing unit competitors. As noted, the largest AI hardware clusters to date reach roughly 1% of a human brain’s scale, or about 1 trillion synapse equivalents, i.e., parameters.
But Feldman said a single CS-2 accelerator, the size of a dorm-room refrigerator, can support models of over 120 trillion parameters in size. On top of that, he said Cerebras’ new technology portfolio contains four innovations: Cerebras Weight Streaming, a new software execution architecture; Cerebras MemoryX, a memory extension technology; Cerebras SwarmX, a high-performance interconnect fabric technology; and Selectable Sparsity, a dynamic sparsity harvesting technology.

Cerebras Weight Streaming can store model parameters off-chip while delivering the same training and inference performance as if they were on-chip. This new execution model disaggregates compute and parameter storage, allowing researchers to scale model size and speed independently, and eliminates the latency and memory-bandwidth issues that challenge large clusters of small processors. Cerebras MemoryX will provide the second-generation Wafer Scale Engine (WSE-2) with up to 2.4 petabytes of high-performance memory.
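To make the weight-streaming idea concrete, here is a minimal sketch of the general technique the article describes: parameters live in an external store (a MemoryX-like tier) and are streamed to the compute device one layer at a time, so on-device memory only ever holds the activations plus the current layer’s weights. All class and function names here are illustrative assumptions, not Cerebras’ actual API.

```python
# Illustrative sketch of a weight-streaming execution model (assumed names,
# not Cerebras' real software stack).
import numpy as np

class ExternalWeightStore:
    """Off-chip parameter store holding the weights for every layer."""
    def __init__(self, layer_sizes, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix per consecutive pair of layer sizes.
        self.weights = [rng.standard_normal((n_in, n_out)) * 0.1
                        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

    def stream_layer(self, i):
        # In a real system this would be a DMA/fabric transfer;
        # here it is just a lookup.
        return self.weights[i]

def forward(x, store, n_layers):
    """Forward pass that fetches one layer's weights at a time,
    so only a single layer is ever resident "on-chip"."""
    for i in range(n_layers):
        w = store.stream_layer(i)   # stream this layer's weights in
        x = np.maximum(x @ w, 0.0)  # ReLU layer
    return x

sizes = [8, 16, 16, 4]
store = ExternalWeightStore(sizes)
out = forward(np.ones((2, 8)), store, len(sizes) - 1)
print(out.shape)  # (2, 4)
```

The point of the sketch is the decoupling: the store can grow to an arbitrary number of parameters without changing the device-side memory footprint, which is the disaggregation of compute and parameter storage the article attributes to Weight Streaming.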