RoboNet: A Dataset for Large-Scale Multi-Robot Learning


      Telegram SmartBoT
      Moderator
        @tgsmartbot

        #News(IoTStack) [ via IoTGroup ]


        Headings:
        RoboNet: A Dataset for Large-Scale Multi-Robot Learning
        Collecting RoboNet
        How can we use RoboNet?
        Final Thoughts

        Auto-extracted text:

        RoboNet: A Dataset for Large-Scale Multi-Robot Learning
        Motivated by the success of large-scale data-driven learning, we created RoboNet, an extensible and diverse dataset of robot interaction collected across four different research labs.
        Finally, we find that pre-training on RoboNet offers substantial performance gains compared to training from scratch in entirely new environments.
        Our goal is to pre-train reinforcement learning models on a sufficiently diverse dataset and then transfer knowledge (either zero-shot or with fine-tuning) to a different test environment.
        First, we pre-train visual dynamics models on a subset of data from RoboNet, and then fine-tune them to work in an unseen test environment using a small amount of new data.
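        To make that two-stage recipe concrete, here is a minimal, self-contained PyTorch-style sketch. The tiny model, the random tensors, and the hyperparameters are stand-ins chosen for illustration; they are not the actual RoboNet training code or data format.

        import torch
        from torch import nn, optim
        from torch.utils.data import DataLoader, TensorDataset

        class TinyDynamicsModel(nn.Module):
            """Stand-in for an action-conditioned visual dynamics model."""
            def __init__(self, frame_dim=32 * 32 * 3, action_dim=4, horizon=10):
                super().__init__()
                self.horizon = horizon
                self.net = nn.Sequential(
                    nn.Linear(2 * frame_dim + horizon * action_dim, 256),
                    nn.ReLU(),
                    nn.Linear(256, horizon * frame_dim),
                )

            def forward(self, context_frames, actions):
                b = context_frames.shape[0]
                x = torch.cat([context_frames.reshape(b, -1),
                               actions.reshape(b, -1)], dim=1)
                return self.net(x).reshape(b, self.horizon, -1)

        def make_loader(num_trajs, frame_dim=32 * 32 * 3, action_dim=4, horizon=10):
            """Random tensors standing in for (context, actions, future-frames) batches."""
            ctx = torch.rand(num_trajs, 2, frame_dim)
            act = torch.rand(num_trajs, horizon, action_dim)
            fut = torch.rand(num_trajs, horizon, frame_dim)
            return DataLoader(TensorDataset(ctx, act, fut), batch_size=16, shuffle=True)

        def train(model, loader, epochs, lr):
            opt = optim.Adam(model.parameters(), lr=lr)
            for _ in range(epochs):
                for ctx, act, fut in loader:
                    loss = nn.functional.mse_loss(model(ctx, act), fut)
                    opt.zero_grad()
                    loss.backward()
                    opt.step()
            return model

        model = TinyDynamicsModel()
        train(model, make_loader(num_trajs=256), epochs=5, lr=1e-4)  # "pre-train" on large, diverse data
        train(model, make_loader(num_trajs=32), epochs=5, lr=1e-5)   # "fine-tune" on a small new-environment set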
        The constructed test environments (one of which is shown in the original post) all include different lab settings, new cameras and viewpoints, held-out robots, and novel objects purchased after data collection concluded.
        Note that while Baxter robots are present in RoboNet, their data is not included during model pre-training.
        After tuning, we deploy the learned dynamics models in the test environment to perform control tasks, like picking and placing objects, using the visual foresight model-based reinforcement learning algorithm.
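        The control loop behind visual foresight is essentially sampling-based model-predictive control. The sketch below shows one common variant, cross-entropy-method planning against a goal image; the dummy predictor and the pixel-distance cost are simplifications assumed here for illustration, not the exact cost used in the paper.

        import numpy as np

        def dummy_predict(frame, actions):
            """Stand-in for the learned visual dynamics model: returns fake future frames."""
            horizon = actions.shape[0]
            return np.repeat(frame[None], horizon, axis=0) + 0.01 * actions.sum()

        def plan_action(current_frame, goal_frame, predict=dummy_predict,
                        horizon=10, action_dim=4, samples=200, elites=20, iters=3):
            mean = np.zeros((horizon, action_dim))
            std = np.ones((horizon, action_dim))
            for _ in range(iters):
                # Sample candidate action sequences, score predicted outcomes, refit the sampler.
                candidates = mean + std * np.random.randn(samples, horizon, action_dim)
                costs = [np.mean((predict(current_frame, acts)[-1] - goal_frame) ** 2)
                         for acts in candidates]
                elite = candidates[np.argsort(costs)[:elites]]
                mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
            return mean[0]  # execute the first action, observe, then replan (MPC)

        frame = np.random.rand(48, 64, 3)
        goal = np.random.rand(48, 64, 3)
        first_action = plan_action(frame, goal)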
        For these experiments, the target robot and environment were held out from RoboNet during pre-training.
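        Concretely, the held-out split amounts to filtering trajectories by their robot and lab attributes before pre-training; a toy version is sketched below. The metadata records and field names are hypothetical, not RoboNet's actual schema.

        # Hypothetical trajectory metadata; RoboNet's real schema differs.
        trajectories = [
            {"path": "traj_000.hdf5", "robot": "sawyer", "lab": "lab_a"},
            {"path": "traj_001.hdf5", "robot": "franka", "lab": "lab_b"},
            {"path": "traj_002.hdf5", "robot": "baxter", "lab": "lab_c"},
        ]

        held_out = {"robot": "franka", "lab": "lab_b"}  # hypothetical target robot/environment

        pretrain_set = [t for t in trajectories
                        if (t["robot"], t["lab"]) != (held_out["robot"], held_out["lab"])]
        target_set = [t for t in trajectories if t not in pretrain_set]

        print(len(pretrain_set), "pre-training trajectories;", len(target_set), "reserved for the target environment")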
        In each environment, we use a standard set of benchmark tasks to compare the performance of our pre-trained controller against the performance of a model trained only on data from the new environment.
        The results show that the fine-tuned model is ~4x more likely to complete the benchmark task than the one trained without RoboNet. Impressively, the pre-trained models can even slightly outperform models trained from scratch on significantly (5-20x) more data from the test environment.


