Why asking an AI to explain itself can make things worse


      Telegram SmartBoT
      Moderator
        @tgsmartbot

        #News(IoTStack) [ via IoTGroup ]

         

        Auto-extracted text…

        For Ehsan, who studies the way humans interact with AI at the Georgia Institute of Technology in Atlanta, the intended message was clear: “Don’t get freaked out—this is why the car is doing what it’s doing.” But something about the alien-looking street scene highlighted the strangeness of the experience rather than reassuring him.
        Ehsan is part of a small but growing group of researchers trying to make AIs better at explaining themselves, to help us look inside the black box.
        The aim of so-called interpretable or explainable AI (XAI) is to help people understand what features in the data a neural network is actually learning—and thus whether the resulting model is accurate and unbiased.
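        As a concrete illustration of what “understanding which features a model relies on” can look like in practice (the article does not name a specific tool, and the dataset and model below are stand-ins chosen purely for the example), here is a minimal sketch using scikit-learn’s permutation importance: each feature is shuffled in turn and the resulting drop in test accuracy is recorded.

        # Minimal sketch (assumptions: scikit-learn and a toy dataset, not the
        # tools or data from the article): permutation feature importance.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        X, y = load_breast_cancer(return_X_y=True, as_frame=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

        # Shuffle each feature in turn and measure how much the test score drops;
        # a large drop means the model leans heavily on that feature.
        result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                        random_state=0)

        ranked = sorted(zip(X.columns, result.importances_mean),
                        key=lambda pair: pair[1], reverse=True)
        for name, drop in ranked[:5]:
            print(f"{name}: mean accuracy drop {drop:.3f}")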
        “There are people in the community who advocate for the use of glassbox models in any high-stakes setting,” says Jennifer Wortman Vaughan, a computer scientist at Microsoft Research.
        In a 2018 study looking at how professional users interact with machine-learning tools, Vaughan found that transparent models can actually make it harder to detect and correct the model’s mistakes.
        The team took two popular interpretability tools that give an overview of a model via charts and data plots, highlighting things that the neural network picked up on most in training.
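        The excerpt does not say which two tools the study used; as one familiar example of this kind of chart-based overview, the sketch below draws a partial dependence plot with scikit-learn, showing how a model’s average prediction changes as a single feature is swept across its range (the model and dataset are placeholders for illustration).

        # Minimal sketch (assumptions: scikit-learn, matplotlib, toy diabetes data):
        # a partial dependence plot, one common chart-style overview of a model.
        import matplotlib.pyplot as plt
        from sklearn.datasets import load_diabetes
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.inspection import PartialDependenceDisplay

        X, y = load_diabetes(return_X_y=True, as_frame=True)
        model = GradientBoostingRegressor(random_state=0).fit(X, y)

        # Each panel sweeps one feature across its range and plots the model's
        # average prediction, giving a visual summary of what the model learned.
        PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp"])
        plt.tight_layout()
        plt.show()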
        Reasons help whether we understand them or not, says Ehsan: “The goal of human-centered XAI is not just to make the user agree to what the AI is saying—it is also to provoke reflection.”
        Riedl recalls watching the livestream of the tournament match between DeepMind’s AI and Korean Go champion Lee Sedol.
        (This is backed up by a new study from Howley and her colleagues, in which they show that people’s ability to understand an interactive or static visualization depends on their education levels.)
        Think of a cancer-diagnosing AI, says Ehsan…


        Read More..
        AutoTextExtraction by Working BoT using SmartNews 1.02976805238 Build 26 Aug 2019
