February 12, 2019 at 5:44 am #28718
#News(IoTStack) [ via IoTForIndiaGroup ]
At re:Invent 2018, AWS added many capabilities to Amazon SageMaker, a machine learning platform as a service. SageMaker Neo was announced as an extension of SageMaker that optimizes fully-trained ML models for various deployment targets. Neo-AI project turns SageMaker Neo into an open source project making it possible for hardware and software vendors to extend the platform.
Machine learning models have two distinct phases: training and inference. During training, data scientists and developers select the algorithm most appropriate for the business problem and apply it to the dataset to produce a trained model. Once the trained model is thoroughly tested, it moves to the inference phase, where it is used with new data points generated in production environments.
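To make the two phases concrete, here is a toy example in plain Python: a least-squares linear fit stands in for training, and applying the fitted model to a new point stands in for inference. The data and the model are invented for illustration and have nothing to do with SageMaker itself.

```python
# Minimal sketch of the training and inference phases using
# ordinary least squares on toy data (no ML framework assumed).

def train(xs, ys):
    """Training phase: fit slope and intercept to a dataset."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def infer(model, x):
    """Inference phase: apply the trained model to a new data point."""
    slope, intercept = model
    return slope * x + intercept

model = train([1, 2, 3, 4], [2, 4, 6, 8])   # training on historical data
prediction = infer(model, 5)                # inference on a new point
# prediction == 10.0
```

The split mirrors the article's point: training consumes the full dataset once, while inference repeatedly applies the frozen model to fresh inputs.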
Organizations are turning to the public cloud to train machine learning models at scale while deploying the trained models at the edge for inference. The ML models trained in the cloud need to be optimized for the edge before they are used in production.
Initial releases of Amazon SageMaker exclusively targeted the training phase of machine learning models. The platform made it simple for data scientists and developers to upload large datasets to the cloud and to run a distributed training job across multiple machines powered by high-end CPUs and GPUs. The trained model is uploaded to a destination S3 bucket, ready for inference. Amazon SageMaker's integration with Jupyter Notebook and Python made it a favorite ML PaaS among developers.
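The training workflow described above can be sketched as a SageMaker `create_training_job` request: a dataset in S3 goes in, a cluster of GPU instances runs the job, and the trained model artifact lands in a destination S3 bucket. The container image URI, role ARN, bucket paths, and instance sizes are placeholders, not values from the article.

```python
# Hypothetical SageMaker training job request.
# Image URI, role ARN, and S3 paths are illustrative placeholders.
training_job = {
    "TrainingJobName": "demo-training-job",                  # hypothetical job name
    "AlgorithmSpecification": {
        "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest",
        "TrainingInputMode": "File",
    },
    "RoleArn": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    "InputDataConfig": [{
        "ChannelName": "train",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/train/",                # large dataset uploaded to S3
        }},
    }],
    "OutputDataConfig": {"S3OutputPath": "s3://my-bucket/output/"},  # trained model lands here
    "ResourceConfig": {                                      # distributed job on GPU instances
        "InstanceType": "ml.p3.2xlarge",
        "InstanceCount": 2,
        "VolumeSizeInGB": 50,
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
}

# With AWS credentials configured:
# import boto3
# boto3.client("sagemaker").create_training_job(**training_job)
```

The actual submission is commented out since it needs live credentials; in practice most users drive the same API through the SageMaker Python SDK from a Jupyter notebook.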