Hannah Kane: Teach.webmaker.org: Initial Card Sorting Results

Saturday, January 17, 2015, 02:56

This past week I conducted a small user research project to help inform the IA of the new teach.webmaker.org site.

I chose a card sorting activity, which is a common research method for IA projects. In a card sorting activity, you give members of your target audience a stack of cards, each of which has one of the site content areas printed on it. You ask the participants to group items together and explain their thought process. In this way, you gain an understanding of the participants' mental models. This is helpful for avoiding a common pitfall in site design, which is organizing content in a way that makes sense to you but not to your users.
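Open card-sort data is often analyzed by counting how frequently each pair of cards lands in the same group across participants; pairs with high co-occurrence are good candidates to live together in the IA. The post doesn't describe its analysis method, but a minimal sketch of such a tally (with invented card names and group labels) might look like:

```python
from itertools import combinations
from collections import Counter

def cooccurrence(sorts):
    """Count how often each pair of cards was placed in the same group.

    `sorts` is a list with one entry per participant; each entry maps
    that participant's own group label to the list of cards in it.
    Pairs are stored in sorted order so (a, b) and (b, a) collapse.
    """
    counts = Counter()
    for sort in sorts:
        for cards in sort.values():
            for a, b in combinations(sorted(cards), 2):
                counts[(a, b)] += 1
    return counts

# Hypothetical data: two participants' open sorts (labels are their own).
sorts = [
    {"Events": ["Maker Party", "Host an event"],
     "Learning": ["Web Literacy", "Teaching Kits"]},
    {"Get Involved": ["Maker Party", "Host an event", "Teaching Kits"]},
]

counts = cooccurrence(sorts)
# Both participants grouped these two cards together:
print(counts[("Host an event", "Maker Party")])  # prints 2
```

The resulting counts can feed a similarity matrix or hierarchical clustering, which is a common next step when moving from open-sort data to a proposed navigation structure.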

Big Giant Caveat

This study was flawed in a couple of ways. First, Jakob Nielsen (who is generally considered to be a real smartypants when it comes to usability and user research) recommends that you do card sorting with 15 users. I’ve only been able to get 11 to do the activity so far, though I think a few more are pending.

Another flaw is that I deviated from a common best practice of running these activities in person. A lot of the insights are gained by listening to the person think aloud. There are some tools for running an online card sorting activity, but they’re largely for what’s called “closed” card sorts, where you pre-determine the categories and the person’s task is to sort cards within those categories. Since one of my goals with this activity was to generate a better understanding of what terminology to use, I wanted to do an “open” sort, where the participants name their groupings themselves.

All that’s to say that we shouldn’t take these results or my analysis as gospel. I do think the participant responses will be useful as we move forward with designing some wireframes to user test in the next heartbeat.

Participant Demographics and Background Information

There were a range of ages and locations represented in the study.

Four participants are between 18 and 24 years old, three are between 25 and 34, two between 35 and 44, one between 45 and 54, and one between 55 and 64.

Four participants are from the United States, three from India, and one each from Colombia, Bangladesh, Canada, and the United Kingdom.

Participants were asked to rate their level of familiarity with the Webmaker Mentors program on a scale of 1 to 5, with 5 being the most familiar. Again, there was a range. Four participants rated themselves a 5, two a 4 or 4.5, two a 3, one a 2, and two a 1.

Initial Findings

The participants in the study used a range of different mental models to organize the content. Those models were:

  1. Grouping by program offering—that is, organizing by specific programs, concepts, or offerings, typically expressed as nouns (e.g. Web Literacy, Teaching Kits, Webmaker Clubs, Trainings, Activities, Resources, Social, Learning, Philosophy, Mentoring, Research, Events, Supportive Team). Five participants used a model like this as their primary model. The average familiarity level with Webmaker Mentoring for these participants matches the average for the entire sample (3.7 on a five-point scale).
  2. Grouping by functional area—that is, actions that a user might take, typically expressed as verbs (e.g. participate, learn, market/promote, meet others, do, lead, get involved, collaborate, organize, develop yourself, teach, experiment, host, attend). Four participants used a model like this as their primary model. Notably, all of these participants are from the United States, Canada, or the United Kingdom, and their average familiarity with Webmaker Mentoring is below the average of the entire sample (2.75 as compared to 3.67).
  3. Grouping by role or identity—some study participants organized the content by the type of user who would be interested in it (e.g. Learner, Mentor). One participant used this as their primary model. Another made a distinction between Learning and Teaching, but it was framed more like the functional areas described above. One more used “Learning Geeks” as a topic area.
  4. Grouping by level of expertise—in this model, there is a pathway through the content based on level of expertise (e.g. intermediate, advanced). One participant used this as their primary model.

Other patterns, themes, and notable terminology:

  • Seven participants grouped together content related to hosting or attending events, and three participants made references to face-to-face communication. Of the seven who grouped content into the “Events” topic, five of them included the one item that referenced “Maker Party” (including two participants who rated their level of familiarity with the program at a 1), indicating a strong understanding of “Maker Party” as a type of event.
  • Five participants made references to the broader community. Three of them are from the United States, one from Canada, and one from India. (The specific terms used were “Meet others,” “Social,” “Webmaker Community,” “Collaborate,” and “Supportive team.”)
  • Four participants used the word “Webmaker” in their groupings, which gives us some insight into how they understand the brand. In each case, participants seem to connect the term to either teaching and teaching kits, or to the community of interested people.
  • Three participants used the term “Leading.”
  • One participant referenced a particular context (“Webmaker for Schools”).
  • One participant distinguished Mozilla-produced content (as “Mozilla Outputs”).
  • We included the term “Peer Learning Networks” in the content list to represent Hives (we assumed the meaning of “Hive” would be difficult to intuit for those unfamiliar). While we can’t draw any conclusions based on this data, it’s notable that this term was grouped into a wide variety of topics, including community (“Meet others,” “Social,” and “Collaborate”), “Get Involved,” “Intermediate,” “Mozilla Outputs,” and “Learning Geeks.” Three participants felt it didn’t fit under any category.
  • We tested both “Professional Development” and “Trainings” to see if we could understand how people interpret those terms. The results are fairly ambiguous. Both terms were associated with “Activities for teachers & mentors,” “Leading,” “Get Involved,” and “Research (things you learn on your own).” “Professional Development” was also associated with “Learning,” “Develop Yourself,” and “Learning Geeks.” “Trainings” was associated with “Intermediate,” “Mentoring,” “Organize in person events,” and “Supportive team.” For each term, three participants were unable to categorize it.

Let me know if you’re interested in seeing the raw data.


http://hannahgrams.com/2015/01/16/teach-webmaker-org-initial-card-sorting-results-and-analysis/


 
