Talk:User Experience Guidelines Metrics


Comments:

  • Trust? Can we use a different word? - Group meeting discussion

Comments on the document here

Noreen Whysel

Comments on current document at https://docs.google.com/document/d/1BSDV_Px4MTEKTH22qPKGRUhVKc5tBDA32diz0ZwRynA

Usability Testing Guidance applied to IDEF Usability Reqs (work in progress)

United States

Usability.gov
U.S. Digital Service Digital Playbook

United Kingdom

UK Government Digital Services Design Principles
UK Government Services Design Manual, which includes standards and user-evaluation guidance covering planning, initial design, alpha/beta releases, and ongoing development
UK GDS Good Practice Guide: Requirements for Secure Delivery of Online Public Services - Chapters 2 and 3 address user expectations
UK GDS Good Practice Guide: Annex A: Stakeholder Expectations

European Union

Usabilitynet.org (EU Funded guidelines)

Other Resources

Kantara: The Design Principles of Relationship Management V1.0 Report (identifies design principles for identity management - possible resource for prescriptive guidance)

NNGroup: When to use which Usability Method
"...qualitative methods are much better suited for answering questions about why or how to fix a problem, whereas quantitative methods do a much better job answering how many and how much types of questions."

Ellen Nadeau

Survey guidance
1. Individuals should be given the option to decline participation in the survey.
2. Consent to participate must be voluntarily given.
3. Individuals should be able to exit the survey at any time.
4. Survey questions should be comprehensible to a wide audience.
5. Surveys should be concise and easy to answer.
6. Surveys should be open to all users to ensure the widest possible range of individual respondents.

Quantitative
Quantitative research is the systematic empirical investigation of observable phenomena via statistical, mathematical, or computational techniques. It is beneficial because it provides precise, numerical data that is reliable and objective.
1. Set answer choices on a scale (e.g., 1-5 with 1 being strongly disagree and 5 being strongly agree).
2. Set a goal for the system as a whole (e.g., at least 60% of users respond with 4 or 5 – agree or strongly agree).
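
A minimal sketch (in Python, purely illustrative) of how the 1-5 scale and the 60% goal above could be checked; the function name, sample responses, and threshold default are assumptions for illustration, not adopted requirements.

  # Check whether enough respondents chose 4 or 5 ("agree"/"strongly agree")
  # on a 1-5 scale. Threshold and sample data are hypothetical.
  def meets_goal(responses, threshold=0.60, agree_values=(4, 5)):
      """Return True if the share of agree/strongly-agree answers meets the threshold."""
      if not responses:
          return False
      agree_count = sum(1 for r in responses if r in agree_values)
      return agree_count / len(responses) >= threshold

  # Example: 7 of 10 respondents answered 4 or 5, so 70% >= 60% and the goal is met.
  sample = [5, 4, 3, 4, 5, 2, 4, 5, 4, 1]
  print(meets_goal(sample))  # True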

Qualitative
-Ask open-ended questions to elicit a wider variety of responses.
-Provide sufficient space for thorough responses.

Ann Racuya-Robbins

I think it would be better to title this "User Experience Metric".

1. Baseline
1.1 Assisting the human user in understanding the user's evolving vulnerabilities in cyberspace, and how the evolving NSTIC-compliant certified "Trustmark" helps the user understand and make informed choices when establishing relationships within the IDEF identity ecosystem. Such choices should be informed as to the security, privacy, usability, and interoperability capabilities and protections provided by the NSTIC-compliant certified "Trustmark" presented by a given class or set of service providers, community of interest, or other IDEF-certified entity in the ecosystem (framework).

1.2 Benefits

2. Content
Measurement should begin by creating a metric of the human user's understanding of his or her vulnerabilities in cyberspace, and of the ability and satisfaction the human user has in making informed choices online. Once a human-user understanding baseline has been agreed upon and established, further metrics can be developed and folded into the measurement criteria regarding the human user's experience with other aspects of interacting and transacting with the IDEF Identity Ecosystem.
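
One hypothetical way to picture this (a Python sketch, not an agreed measurement model): establish baseline scores first, then blend in later metrics as they are agreed. All metric names, values, and weights below are invented for illustration.

  # Agreed baseline metrics (scores on a 0-1 scale); values are made up.
  baseline = {"understanding_of_vulnerabilities": 0.62,
              "informed_choice_satisfaction": 0.55}

  # Metrics folded in after the baseline is established; also hypothetical.
  additional_metrics = {"trustmark_recognition": 0.70,
                        "ecosystem_transaction_ease": 0.48}

  def combined_score(baseline, additional, baseline_weight=0.5):
      """Average each group of metrics, then blend the two averages."""
      base_avg = sum(baseline.values()) / len(baseline)
      add_avg = sum(additional.values()) / len(additional)
      return baseline_weight * base_avg + (1 - baseline_weight) * add_avg

  print(round(combined_score(baseline, additional_metrics), 2))  # 0.59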

tomj

I don't understand the reluctance to use the word "TRUST"; after all, we are doing this in support of NSTIC, the National Strategy for Trusted Identities in Cyberspace. Without measuring trust I cannot imagine how we would determine whether we have met our goals.

The sections on baseline were taken from a slide deck at plenary. If we want to change the plenary's position, it should be done at the plenary and not in this committee.

Content is the essence of this paper, but it is written as metrics, that is, in terms of measurable quantities. Clearly, making informed choices is the essence of what we want to measure, but we need something more specific to ask a user. Vague questions will not enable us to make clear decisions. Typically the best questions are comparative or success metrics, e.g.:

  1. Have you been able to establish an identity with an organization that you trust?
  2. Have you been able to use that identity on web sites that are important to you?
  3. Was this experience better than using an email provider like Hotmail or Gmail?
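
As a purely hypothetical sketch (Python), yes/no answers to comparative questions like these could be tallied into per-question success rates; the question keys and sample answers are invented for illustration and are not survey instruments adopted by the committee.

  # Each dict is one respondent's yes/no answers to the three questions above.
  answers = [
      {"established_trusted_identity": True,  "used_on_important_sites": True,  "better_than_email_login": False},
      {"established_trusted_identity": True,  "used_on_important_sites": False, "better_than_email_login": True},
      {"established_trusted_identity": False, "used_on_important_sites": False, "better_than_email_login": False},
  ]

  def success_rates(responses):
      """Return the fraction of 'yes' answers for each question."""
      questions = responses[0].keys()
      return {q: sum(r[q] for r in responses) / len(responses) for q in questions}

  for question, rate in success_rates(answers).items():
      print(f"{question}: {rate:.0%}")  # e.g. established_trusted_identity: 67%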


One question raised by this concerns our overall goals here. The question from the plenary was whether the UXC could feed suggestions into TFTM. The tone of this comment is more about understanding the broad scope of users in cyberspace. That first goal is achievable; the second goal is far beyond our budget of money or volunteer time.

Mary Hodder

What about "privacy protection metrics" instead of trust metrics? Would that better convey the issues?

tomj

Not really - privacy protection is a function, not a UX component.

Whether the user's expectation of privacy protection was met is one element we could measure.

But at the end of the process (aka interaction) we need to measure the level of trust that the user had in the protection of their data.

Let us not forget that we need to also measure the user's trust in the identity of the provider.