User Experience Trust Metrics

From IDESG Wiki
Revision as of 17:04, 26 September 2017 by Mary Hodder (talk | contribs) (→‎Problem: rewrite)

Introduction

User research should enable measurement of the evolving trust created by the work of the IDESG and the IDEF Registry.

The requirements for this page are based on a TFTM presentation of October 2014 and are intended as input to the TFTM process. The following questions are taken from that presentation.

What is the baseline?

Improving the security, privacy, usability, and interoperability of everyday online transactions

What benefits could the everyday consumer see if this baseline were established?

e.g., reduced account compromise through increased use of multifactor authentication; greater user control through notice, consent requirements; etc.

What do we know? (from the NSTIC Strategy Document)

By making online transactions more trustworthy and enhancing consumers' privacy, we will prevent costly crime; we will give businesses and consumers new confidence; and we will foster growth and innovation, online and across our economy — in some ways we can predict, and in other ways we can scarcely imagine. Ultimately, that is the goal of the NSTIC Strategy.

Context

As part of the process of building an IDEF Registry as the first of the IDESG Frameworks, potential users of the site have asked specifically what the benefits of joining such a framework might be.

Our Statement of Trust

The goal of any trust framework is a community of users and providers in which every person or Digital Entity feels more secure that their privacy, safety and money are not likely to be compromised while operating within the framework.

This is accomplished by:

  1. Verifying that every Digital Entity that is a member of the Framework has agreed to follow the principles and rules of the Framework.
  2. Enabling any Digital Entity to be assured that they are communicating with other entities that are currently part of the Framework.
  3. Ensuring that users can be authenticated by Identity Providers that will follow the user's consent when releasing any user Private Information to any other Digital Entity.
  4. Knowing that when a user visits any web site (Relying Party), that site will honor the stipulations that the user places on the entity that holds their User Private Information.

Problem: How to Make IDEF Attractive to Relying Parties

The uptake of any privacy initiative has always stalled at adoption of the privacy requirements by the Relying Party. The Relying Party needs to acquire customers and other users to attract business or ad revenue. Traditionally this has meant that many Identity Providers (IdPs) did not want to also act as Relying Parties, i.e., accept identities from other IdPs, and that entities wanted to show business growth and use via their "user numbers." But in order for individuals to gain more privacy in the system, by maintaining their identity information at a trusted IdP and then providing only limited data to RPs, RPs have to be willing to take less information from individuals. What is the RP's incentive to function strictly as an RP?

Until customers and users decide that privacy is important to them, the Relying Party will always view privacy initiatives as a cost with no benefit. And until RPs decide that holding less data is better, because less data can be hacked and stolen and they therefore carry less liability while still running their businesses effectively by selling their real products (not individuals' data), there will be fewer RPs. Additionally, until individuals both value privacy and have a choice that includes web sites that honor privacy, there will be no market forces to drive privacy, because individuals will not understand how to create more privacy or how to apply market pressure by giving their business to RPs that honor it.

Governmental efforts to enforce privacy preserving methods and technologies will have some impact on Relying Party adoption, but until both government and market forces work in concert to drive adoption, privacy methods and technology uptake will be limited. Governmental efforts include direct regulatory action and supporting class action lawsuits via the legal process.

The set of metrics provided here applies to assessing the UXC's requirements as they relate to the [http://idefregistry.org IDEF Registry]. The IDEF Registry is designed to help users weigh privacy, security and interoperability when selecting web sites and identity products that are most likely to serve their interests and meet their objectives on the web, for both functionality and the security of their money, the data they may provide to an entity online, their reputation and their privacy.

Metrics

The following are the points where user experience can be measured to determine if the baseline requirements are met. Note that the first metrics cover overall usability, moving into the trust measurements that will indicate compliance of a particular implementation with the terms of the IDESG or of the Framework.

How are UX metrics obtained?

Metrics are quantitative measures that can be tracked over time. They come from questions presented to users and from the evaluation of direct observations of users on particular sites. The Wikipedia entry on User Experience Evaluation defines these terms: "User experience (UX) evaluation or User experience assessment (UXA) refers to a collection of methods, skills and tools utilized to uncover how a person perceives a system (product, service, non-commercial item, or a combination of them) before, during and after interacting with it. It is non-trivial to assess user experience since user experience is subjective, context-dependent and dynamic over time."

Since the evaluation of the user depends on the specific UX presented on a specific site, the broad success of the whole ecosystem will only be measurable when multiple sites supporting the IDESG ecosystem are widely available and questions about the overall experience are possible. In the meantime, specific implementations of web sites supporting the IDESG are encouraged. The metrics provided here should act as a base set that allows comparison between different researchers' results. A common base set of metrics would allow the UXC to use independently generated research reports in the compilation of an ecosystem-wide report.

(INSERT to flesh out questions) Human-centered; possibility-driven; options-focused; expect to be wrong (portfolio approach); iterative.

All measurements below (except the verbatim data) should have qualitative and quantitative ways to measure success from users. Follow-up measurements should be included over time.

Measurements (Quantitative)

  1. Can the user accomplish the task they set out to accomplish? (A goal might be 90%; a minimum acceptable might be 60%.)
  2. What is the System Usability Scale score (John Brooke's SUS)?
  3. Is the Trustmark discoverable and self-describing? (90%, 70%)
  4. Does the user feel safer as a result of the appearance of the Trustmark? (99%, 80% of those answering in the affirmative above.)
  5. Does the user feel that the site is safe overall? (The metric is a comparison of the positives to the negatives.)
  6. Does the user understand the necessity of a strong identity in protecting their money, their reputation and their privacy?
  7. Does the user know whether the identity of the provider is strongly bound to a real-world, trusted entity?
  8. Collected verbatim responses used for site improvement.

Open question: does there need to be a separate list for a Relying Party and for an Identity Provider?
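The goal and minimum-acceptable percentages above can be checked mechanically once survey data is collected. The following is a minimal sketch of that scoring; the metric names and the exact threshold values are illustrative assumptions drawn from the examples in the list, not part of any IDESG specification.

```python
# Hypothetical sketch: classifying observed success rates against the
# (goal, minimum acceptable) thresholds suggested above. Thresholds
# are expressed as fractions of respondents.
THRESHOLDS = {
    "task_completion": (0.90, 0.60),
    "trustmark_discoverable": (0.90, 0.70),
    "feels_safer_with_trustmark": (0.99, 0.80),
}

def rate_metric(name, successes, respondents):
    """Return (rate, status) where status is 'goal met',
    'acceptable', or 'below minimum'."""
    goal, minimum = THRESHOLDS[name]
    rate = successes / respondents
    if rate >= goal:
        status = "goal met"
    elif rate >= minimum:
        status = "acceptable"
    else:
        status = "below minimum"
    return rate, status

if __name__ == "__main__":
    rate, status = rate_metric("task_completion", 72, 100)
    print(f"task_completion: {rate:.0%} ({status})")  # 72% (acceptable)
```

Tracking these classifications per release would give the follow-up measurements over time that the section calls for.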

The System Usability Scale (Survey)

The SUS is a 10-item questionnaire with five response options, from Strongly disagree (1) to Strongly agree (5).

  1. I think that I would like to use this framework frequently.
  2. I found the framework unnecessarily complex.
  3. I thought the sites that adhere to the framework are easy to use.
  4. I think that I would need the support of a technical person to be able to use this framework.
  5. I found the various functions in this web site were well integrated.
  6. I thought there was too much inconsistency in this web site.
  7. I would imagine that most people would learn to use this framework very quickly.
  8. I found the framework was more trouble than it was worth.
  9. I felt very confident using web sites that display the framework logo.
  10. I needed to learn a lot of things before I could get going with this framework.
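Brooke's standard SUS scoring converts each response to a 0-4 score (odd, positively worded items contribute the response minus 1; even, negatively worded items contribute 5 minus the response), then multiplies the sum by 2.5 to yield a 0-100 scale. A short sketch of that calculation:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from the ten
    item responses above, each on a 1-5 scale (1 = strongly disagree,
    5 = strongly agree)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded: score = response - 1.
        # Even items are negatively worded: score = 5 - response.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Note that a SUS score is not a percentage; scores from many respondents are typically averaged and compared against published benchmarks.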

Guidelines of Behavior for Designing Surveys and Evaluating Results

Usability researchers should adhere to IRB requirements for the treatment of users and user collected information.

Survey guidance
1. Consent to participate must be voluntarily given, as well as the option to decline participation in the survey.
2. Individuals should be able to exit the survey at any time.
3. Survey questions should be comprehensible by a wide audience.
4. Surveys should be concise and easy to answer.
5. Surveys should be open to all users to ensure the widest possible range of individual respondents.
6. Surveys should include an introductory explanation of how the user's personal information and responses will be treated, what level of confidentiality they can expect, and links to the privacy policy, trust frameworks and other policies that govern collection of personal information.
7. Any results of surveys should be aggregated and depersonalized to protect the participant's privacy.
8. Results of the survey could be made available as long as the privacy of participants can be ensured by the system.
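One common way to satisfy guidelines 7 and 8 is to publish only aggregated counts and to suppress answer groups too small to be anonymous. The sketch below illustrates that idea; the minimum group size of 5 is an illustrative assumption, not an IDESG rule.

```python
from collections import Counter

MIN_GROUP = 5  # hypothetical floor: suppress rarer answers

def aggregate(answers, min_group=MIN_GROUP):
    """Return counts per answer, replacing any count below
    min_group with the marker 'suppressed' so individual
    respondents cannot be singled out."""
    counts = Counter(answers)
    return {a: (n if n >= min_group else "suppressed")
            for a, n in counts.items()}

responses = ["agree"] * 12 + ["neutral"] * 6 + ["disagree"] * 2
print(aggregate(responses))
# {'agree': 12, 'neutral': 6, 'disagree': 'suppressed'}
```

Raw response records would stay with the researcher under the IRB-governed handling described above; only the aggregated, depersonalized table is released.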

User Interviews (Qualitative Research and Ethnographic Studies)

Qualitative Research generally involves direct in-person interviews with human subjects, and as such requires care with respect to personal information and user stories shared. NSTIC pilots are required to use formal academic IRB standards and certification, and generally erring on the side of personal information privacy and confidentiality is preferred when conducting direct interviews with subjects or collecting other personal information.

Qualitative Research is conducted to gather in-depth information about the reasons subjects choose one path or method over another, understand systems one way versus another, etc. It aims to get at the deeper issues in a system that cannot be understood just by reviewing usage logs or survey data, which rarely explain why or how people understand a system. Additionally, subjects often develop work-arounds for systems that don't work, and it is through watching them work that the original problem the work-around is meant to solve becomes apparent. This understanding, often not entirely clear even to the subject, is frequently discovered through interviews.

Guidelines of Behavior for Designing Qualitative Research

Qualitative Research can be conducted in person, through journal entry, via group discussion and observations in personal settings. Due to the personal nature of this research, some guidelines are included below:

Robert Wood Johnson Foundation Qualitative Research Guidelines: http://www.qualres.org/ and Evaluative Criteria http://www.qualres.org/HomeEval-3664.html
Cochrane Qualitative and Implementation Methods Group http://cqim.cochrane.org/

References and Coordination

  1. http://www.access-board.gov/guidelines-and-standards/communications-and-it/about-the-section-508-standards/other-resources-links
  2. http://www.section508.gov/best-practices