User Experience Guidelines Metrics
Introduction
UX Metrics should enable measurement of the evolving baseline for participation in the Identity Ecosystem. NOTE: prior work on this document can be found here: User_Experience_Trust_Metrics
The requirements for this page are based on a TFTM presentation of October 2014 and are intended as input to the TFTM process. The following questions are taken from that presentation.
What is the baseline?
Improving the security, privacy, usability, and interoperability of everyday online transactions
What benefits could the everyday consumer see if this baseline was established?
e.g., reduced account compromise through increased use of multifactor authentication; greater user control through notice, consent requirements; etc.
Content
The following are the points where user experience can be measured to determine whether the baseline requirements are met. Note that the first metrics address overall usability, followed by trust measurements that indicate whether a particular implementation complies with the terms of the IDESG or of the Framework.
How are UX metrics obtained?
Metrics are quantitative measures that can be tracked over time. They come from questions presented to users and from the evaluation of direct observations of users on particular sites. The Wikipedia entry on User Experience Evaluation (at this site [[1]]) defines these terms: "User experience (UX) evaluation or User experience assessment (UXA) refers to a collection of methods, skills and tools utilized to uncover how a person perceives a system (product, service, non-commercial item, or a combination of them) before, during and after interacting with it. It is non-trivial to assess user experience since user experience is subjective, context-dependent and dynamic over time."
Since the user's evaluation depends on the specific UX presented on a specific site, the broad success of the whole ecosystem will only be measurable when multiple sites supporting the IDESG ecosystem are widely available and questions about the overall experience become possible. In the meantime, specific implementations of web sites supporting the IDESG are encouraged. The metrics provided here should be able to act as a base set that allows comparison between different researchers' results. A common base set of metrics would allow the UXC to use independently generated research reports in the compilation of an ecosystem-wide report.
(INSERT to flesh out questions)
- Human-centered
- Possibility-driven
- Options-focused
- Expect to be wrong (portfolio approach)
- Iterative
Measurements (Quantitative)
- Can the user accomplish the task they set out to accomplish? (A goal might be 90%; a minimum acceptable rate might be 60%.)
- What is the System Usability Scale (John Brooke's SUS) score?
- Is the Trustmark discoverable and self-describing? (90%, 70%)
- Does the user feel safer as a result of the appearance of the Trustmark? (99%, 80% of those answering in the affirmative above.)
- Does the user feel that the site is safe overall? (The metric is a comparison of the positives to the negatives.)
- Does the user understand the necessity for a strong identity for their providers?
- Does the user know whether the identity of the provider is strongly bound to a real-world entity?
- Collected verbatim responses used for site improvement.
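The first metric above can be reported as a simple completion rate scored against the stated targets. The following sketch is illustrative only; the function names and sample data are hypothetical, and only the 90% goal and 60% minimum-acceptable thresholds come from the list above.

```python
# Hypothetical sketch: scoring observed task-completion results against
# the goal (90%) and minimum-acceptable (60%) thresholds suggested above.

def completion_rate(outcomes):
    """Return the fraction of sessions in which the user completed the task.

    `outcomes` is a list of booleans, one per observed session.
    """
    return sum(outcomes) / len(outcomes)

def rate_against_targets(rate, goal=0.90, minimum=0.60):
    """Classify a completion rate against the goal and minimum thresholds."""
    if rate >= goal:
        return "meets goal"
    if rate >= minimum:
        return "acceptable"
    return "below minimum"

# Example: 7 of 10 observed users completed the task.
sessions = [True, True, False, True, True, True, False, True, False, True]
rate = completion_rate(sessions)
print(f"{rate:.0%}: {rate_against_targets(rate)}")  # 70%: acceptable
```

A shared scoring convention like this is what would let the UXC compare completion rates across independently run studies.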
The System Usability Scale (Survey)
The SUS is a 10-item questionnaire with five response options per item.
- I think that I would like to use this system frequently.
- I found the system unnecessarily complex.
- I thought the system was easy to use.
- I think that I would need the support of a technical person to be able to use this system.
- I found the various functions in this system were well integrated.
- I thought there was too much inconsistency in this system.
- I would imagine that most people would learn to use this system very quickly.
- I found the system very cumbersome to use.
- I felt very confident using the system.
- I needed to learn a lot of things before I could get going with this system.
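The ten items above are scored with Brooke's standard rule: each response is on a 1-5 scale, odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch of that rule (the function name and example responses are illustrative):

```python
# Standard SUS scoring rule (Brooke): odd-numbered items contribute
# (response - 1), even-numbered items contribute (5 - response),
# and the sum is multiplied by 2.5 to give a 0-100 score.

def sus_score(responses):
    """Compute the SUS score from ten 1-5 responses, in questionnaire order."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        # i is 0-based: even i corresponds to odd-numbered (positive) items.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Example: a fairly positive respondent.
print(sus_score([5, 1, 5, 2, 4, 2, 5, 1, 4, 2]))  # 87.5
```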
Guidelines of Behavior for Designing Surveys and Evaluating Results
Usability researchers should adhere to IRB requirements for the treatment of users and of information collected from users.
Survey guidance
1. Consent to participate must be voluntarily given, and individuals must have the option to decline participation in the survey.
2. Individuals should be able to exit the survey at any time.
3. Survey questions should be comprehensible by a wide audience.
4. Surveys should be concise and easy to answer.
5. Surveys should be open to all users to ensure the widest possible range of individual respondents.
6. Surveys should include an introductory explanation of how the user's personal information and responses will be treated, what level of confidentiality they can expect, and links to the privacy policy, trust frameworks, and other policies that govern collection of personal information.
7. Any results of surveys should be aggregated and depersonalized to protect participants' privacy.
8. Results of the survey may be made available as long as the privacy of participants can be ensured by the system.
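Guidelines 7 and 8 above can be sketched as a simple pipeline step: strip direct identifiers from each response record and publish only aggregated counts. Everything in this sketch is hypothetical, including the field names; no IDESG data schema is assumed.

```python
# Illustrative sketch of guideline 7: drop direct identifiers and
# publish only aggregated counts per answer value.
from collections import Counter

def depersonalize_and_aggregate(responses, answer_field="answer",
                                identifier_fields=("name", "email", "ip")):
    """Drop identifier fields and return counts per answer value."""
    counts = Counter()
    for record in responses:
        cleaned = {k: v for k, v in record.items() if k not in identifier_fields}
        counts[cleaned[answer_field]] += 1
    return dict(counts)

raw = [
    {"name": "A. User", "email": "a@example.com", "answer": "agree"},
    {"name": "B. User", "email": "b@example.com", "answer": "agree"},
    {"name": "C. User", "email": "c@example.com", "answer": "disagree"},
]
print(depersonalize_and_aggregate(raw))  # {'agree': 2, 'disagree': 1}
```

Note that dropping named identifiers is only a first step; a real release process would also need to guard against re-identification through small cell counts or quasi-identifiers.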
User Interviews (Qualitative Research and Ethnographic Studies)
Qualitative Research generally involves direct in-person interviews with human subjects, and as such requires care with respect to personal information and the user stories shared. NSTIC pilots are required to use formal academic IRB standards and certification, and erring on the side of personal-information privacy and confidentiality is generally preferred when conducting direct interviews with subjects or collecting other personal information.
Qualitative Research is conducted to gather in-depth information about why subjects choose one path or method over another, understand a system one way versus another, and so on. It aims to reach the deeper issues in a system that cannot be understood just by reviewing usage logs or survey data, which rarely explain why or how people understand a system. Additionally, subjects often develop work-arounds for systems that don't work, and it is through watching them work that the original problem the work-around is meant to solve becomes apparent. This understanding, which is often not entirely clear even to the subject, is frequently uncovered through interviews.
Guidelines of Behavior for Designing Qualitative Research
Qualitative Research can be conducted in person, through journal entries, via group discussion, and through observations in personal settings. Due to the personal nature of this research, some guidelines are included below:
Robert Wood Johnson Foundation Qualitative Research Guidelines: http://www.qualres.org/ and Evaluative Criteria http://www.qualres.org/HomeEval-3664.html
Cochrane Qualitative and Implementation Methods Group http://cqim.cochrane.org/
Additional links to help research:
http://www.access-board.gov/guidelines-and-standards/communications-and-it/about-the-section-508-standards/other-resources-links
http://abledata.com/abledata.cfm?CFID=83403419&CFTOKEN=2eec2aeffb602fc3-C88E7ADB-9730-C534-AC45C274BDDACBCC
http://abledata.com/abledata.cfm?pageid=113709&ksectionid=19327
http://www.section508.gov/best-practices