Learning analytics in e-learning can be utilized in virtual lab scenarios to help teachers analyze students’ interactions effectively and efficiently. We are in the middle of a pandemic where the new normal is online learning. E-learning is here to stay, and teachers therefore need tools to judge how students are using the online materials.
A solution is presented here to show how xAPI (Experience API) can be used to generate details about student interaction. A dashboard is also built, following a rigorous design thinking methodology, to provide a good user experience. This gives an overview of how students are interacting with the online material at their disposal.
Let us now turn to the design process. A thorough design approach is followed, and all the stages and the techniques used in them are described in the subsections of this part.
The design of the interactive dashboard is developed in accordance with the following two sets of guidelines in design thinking: the Ten Golden Rules of Interface Design and the Gestalt laws. Now let us look at the process phase by phase.
Empathize Phase: This is the phase in which to empathize with the user, which means understanding the user’s problems from their perspective. At a high level, this work aims to solve the problem by fetching student interactions from virtual labs using xAPI. But to display the data, a dashboard is needed that presents the relevant information to the user efficiently. What problems does a user face while analyzing data? Answering this helped me understand the teachers’ general way of thinking when they try to analyze student data. This part of the work can be summarized as a literature review. The different approaches followed are given below:
1. Participation: This technique helped to analyze the problem at hand from the perspective of the user and to uncover cause-and-effect relationships. I was given access to the RFID lab course for this work, from which I deduced that it is a case-based learning course: after reading a case, an experiment has to be performed in the virtual reality environment of the lab. The learning goal of the lab is that a student needs to find the activation temperature of RFID transponders. There were videos available on Moodle, along with quizzes and a separate lab portal.
The difficulties I deduced are as follows:
a. Students can study or learn from any source, so all the learning activities of a student need to be linked. xAPI is the solution to this and is an integral part of my work.
b. An interactive dashboard is needed to display the data fetched by xAPI.
2. Newspaper Headline of the Future: Headlines from the future are created to describe the vision of how a current problem will be solved. In this case, it is the interactive dashboard for viewing the JSON data fetched by xAPI that is to be created, and this is an integral part of the solution to the problem statement of this work.
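The interactions mentioned above are recorded by xAPI as statements with a fixed actor-verb-object structure. The following is a minimal, hypothetical sketch of what one such recorded interaction might look like; the actor, the activity URI, and the result values are invented placeholders, not actual data from the RFID lab, while the `completed` verb URI is the standard one registered by ADL:

```python
import json

# Illustrative xAPI statement for one virtual-lab interaction.
# Actor mailbox, activity id, and result values are hypothetical.
example_statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Student",
        "mbox": "mailto:student@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.org/virtual-lab/rfid/experiment-1",
        "definition": {
            "name": {"en-US": "RFID Activation Temperature Experiment"}
        },
    },
    "result": {"success": True, "duration": "PT25M"},
    "timestamp": "2021-05-01T10:15:00Z",
}

print(json.dumps(example_statement, indent=2))
```

Even this trimmed-down statement spans well over a dozen lines of JSON, which already hints at why raw statements are unsuitable for direct display to teachers.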
Define Phase: In the Define phase, all the information gathered in the previous phase, i.e., the Empathize phase, is structured and sorted. The goal of this phase is to define the core problem(s) documented in the observational records. This is done by reorganizing, structuring, and clustering the information collected so far. Setting pieces of information in relation to each other helped to understand the pain points of a user. The techniques followed in this phase are as follows:
1. Creation of a User Persona: By definition, a persona represents a group of users with similar behaviors, goals, and motivations. It is a fictive character with a name, photo, and further attributes, and it helps to create common ground about the target users, who are teachers in this case. The user persona for a professor teaching the RFID lab course is drawn below. This helped to narrow down the pain points a user faces, and it motivated keeping the user interface as simple as possible, so that even older professors who are not in close touch with today’s technology can use the dashboard with ease.
2. User Journey Map: The purpose of this map is to understand the customer’s or user’s needs and to visualize the customer’s experience in a chronological and gapless way, listed as actions. Qualitative research methods are used to investigate what a user actually experiences.
3. Storyboards: To make better products, developers must comprehend what is happening in the user’s reality and see how their products can improve the user’s life. That is where storyboards come in. Storyboards ensure that the design approach is human-focused: they put individuals at the core of the design procedure and force the designer to place himself in the user’s shoes and see the product in the same light. This helps designers understand existing situations better and organize what is significant. Storyboards also reveal what does not need to be spent on, so a great deal of pointless work can be avoided. A short comic about the product to be developed is drawn in this phase; the main aim is to depict it in as many relevant situations as possible, with each picture accompanied by a short paragraph explaining what is happening. Storyboards are used to critically reflect on the idea or concept of the system as well as to explain a product in its context of use. The storyboard of the user is shown in the figure below. This helped to visualize the daily problem faced by the user, so the design could be narrowed down to target this very problem.
4. Empathy Map: This helps in building empathy with the end-users, who are teachers in this case, by visualizing their attributes and behaviors. It is based on qualitative research. The map has four quadrants: Says, Thinks, Feels, Does, with the user shown in the center. Knowledge about the user is filled into the four quadrants as per relevance. The empathy map of Prof. Max Neuer is drawn below. This helped to structure the problems that were identified in the previous phase.
Ideate Phase: Now that the pain points of the user have been determined and the user characteristics examined, ideas were needed to solve the user’s problems. In the backend, xAPI is used to fetch the data and store it in JSON format; in the frontend, an interactive dashboard is needed to display this data even to teachers who are not technically adept. As determined in the previous phase, getting data from the LMS portal is viable and possible, but the problem at hand involves a lab portal in addition to the LMS portal where students interacted. Moreover, there are teachers who are older and have taught their lab courses face to face all their lives; for them, suddenly switching to a virtual lab is a daunting task, and it is even more challenging for teachers who want to give constructive feedback to improve students’ performance. Of the various techniques available in this phase, the following were followed:
1. Negative Brainstorming: The idea behind this method is to negate the challenge into its extreme opposite. For instance, “How can the world become a better place?” converts to “How can I create a hell on earth?”. After brainstorming about the negated challenge, the brainstormed ideas are then transformed back into solutions for the original challenge. In this case, the challenge “How well can the data fetched by xAPI be displayed?” is converted to “How badly can the data fetched by xAPI be displayed?”. The ideas generated were:
a. Display the xAPI statements fetched from student interactions directly: This would mean thousands of interactions and thousands of statements for the teacher to read per student, effectively a whole book about the interactions of a single student. A good idea for displaying the data badly.
b. Display the raw JSON behind these xAPI statements to the teacher: Teachers from a non-technical background would not be able to read this data. Moreover, one single interaction is stored in around 15 lines of JSON, so with thousands of interactions the server could be overwhelmed.
c. One visualization per dashboard: This would mean multiple dashboards with multiple URLs or web applications for the user or teacher to handle. Not viable at all.
2. Lotus Blossom or Idea Tree: In this process, the topic is kept in the middle of a blossom or at the root of the tree, in this case how to visualize students’ interactions, and different ideas are generated as petals or branches. After that, each petal or branch is taken as the center of a new blossom. This process helped to evolve ideas in a structured way around the idea clusters. Out of all the options shown below, a web app is chosen because of the flexibility it provides to the user and also because the data is stored as JSON.
3. Value-Effort Analysis: This is done to analyze whether the effort the developer must put into building a solution is justified by the value it generates for the user. This analysis was done for all the ideas generated above, with the following result:
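The lesson from the negative brainstorming, that raw statements must be collapsed into a few per-student figures before display, can be sketched as follows. The statements below are invented minimal examples carrying only the standard actor and verb fields that the summary needs; the actual dashboard works on the full statements stored in the LRS:

```python
from collections import defaultdict

def summarise(statements):
    """Collapse raw xAPI statements into one summary row per student."""
    summary = defaultdict(lambda: {"interactions": 0, "completed": 0})
    for s in statements:
        student = s["actor"]["mbox"]
        summary[student]["interactions"] += 1
        if s["verb"]["id"].endswith("/completed"):
            summary[student]["completed"] += 1
    return dict(summary)

# Hypothetical statements, trimmed to the fields the summary needs.
statements = [
    {"actor": {"mbox": "mailto:a@example.org"},
     "verb": {"id": "http://adlnet.gov/expapi/verbs/launched"}},
    {"actor": {"mbox": "mailto:a@example.org"},
     "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"}},
    {"actor": {"mbox": "mailto:b@example.org"},
     "verb": {"id": "http://adlnet.gov/expapi/verbs/launched"}},
]

print(summarise(statements))
```

A teacher then reads two numbers per student instead of thousands of JSON lines, which is exactly the value the dashboard visualizations deliver.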
Prototype Phase: This is the most important phase of the design thinking process, in which the ideas generated in the previous phase are implemented as working models. As a rule, one should always start with cheap, easy, and fast tools, such as paper prototyping, so as not to lose too much time on faulty concepts. These early prototypes are of low fidelity (lo-fi prototypes). Usually, designers need many iterations through previous and following phases until a concept is worth being taken to the next fidelity level: from low to medium and finally high fidelity. Each prototype is evaluated by the user or by designers with UX experience. The prototype phase here consists of three prototypes of increasing fidelity, the high-fidelity prototype being the working model that can be handed over to the end-user. Now let us look at the different fidelity levels of the dashboard prototypes.
1. Low-Fidelity Prototype: This is a low-cost or zero-cost prototype, hand-drawn or digitally drawn. It can be non-interactive or interactive, depending on the idea generated in the previous phase. It helps to establish a clear structure and spacing for the web application. The focus is more on the substance than on the details, which helps to communicate the idea to the user more effectively.
The low-fidelity prototypes are digitally drawn for all the pages in our web application. The designs were then evaluated by a group of UX designers, who gave valuable input to improve them further. More than one design was developed, but the figures above show the final selected design. Minor improvements advised by the evaluators have been taken into consideration for the next-level prototype, i.e., the medium-fidelity prototype. As far as good design principles are concerned, the software should be user-friendly enough that users do not require any outside help to use it; this was kept in mind while designing the UI. The “Dashboard” page has an overview of the data in graphical format. The “Experiment” page has an in-depth view of the experiment-related data in graphical format. The “Lab Data” page contains the report generated by xAPI in chart format. And the “About” page gives a theoretical overview of this dashboard. The evaluation report is available in the “Evaluation” chapter of this document.
2. Medium-Fidelity Prototype: This is the next level of prototyping, built using a software tool called Figma. The prototype is completely interactive in nature, except that it is not coded, and hence the real-life JSON data relevant to this RFID lab fetched by xAPI is not taken into account.
The Figma prototype connections are shown below:
3. High-Fidelity Prototype: This is the ultimate working application. A technology stack and an architecture are selected that work in the backend of the application, and real-life data is fetched and displayed on the frontend.
The size of the bubbles is dependent on the time taken for the experiment. The working dashboard is available as open-source software, for the end-user and can be used for analyzing the learning experiences of the students.
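As a sketch of how such bubble sizes could be derived: xAPI records the time taken in the `result.duration` field as an ISO 8601 duration, which can be parsed into seconds and mapped to a bubble whose area grows in proportion to the time. The scaling factor and the sample durations below are illustrative assumptions, not values from the actual dashboard:

```python
import re

def duration_seconds(iso):
    """Parse a simple ISO 8601 duration like 'PT1H25M30S' into seconds."""
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?", iso)
    if not m:
        raise ValueError(f"unsupported duration: {iso}")
    h, mins, s = (float(g) if g else 0.0 for g in m.groups())
    return h * 3600 + mins * 60 + s

def bubble_size(seconds, scale=2.0):
    """Make the bubble *area* proportional to time taken, so the
    radius grows with the square root of the duration."""
    return scale * seconds ** 0.5

durations = ["PT15M", "PT25M", "PT1H"]  # hypothetical experiment times
sizes = [round(bubble_size(duration_seconds(d)), 1) for d in durations]
print(sizes)
```

Scaling the area rather than the radius avoids visually exaggerating long experiments, a common convention in bubble charts.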
Evaluation or Testing Phase of Lo-Fi and Me-Fi Prototypes: This is the last phase of the design thinking process, in which the previously built prototypes are tested. There is a broad variety of testing methods involving end-users, design experts, stakeholders, or other user groups. The target is to find out whether the previous work, starting from empathizing, is correct and complete, or whether some features were missed. In this section, the evaluation results of the low-fidelity and medium-fidelity prototypes are documented. For the evaluation result of the working dashboard, please go to chapter 7, which details the evaluation phase of the prototype and the xAPI profiles created; this is because, for the high-fidelity prototype, the backend with JSON data from xAPI first had to be created for this RFID lab. Here, an overview of the most important advice is jotted down; an in-depth evaluation report is available in chapter 7 of this document.
Low-Fidelity Prototype: “An option to share the graphs will be good” was important advice received on the design. Along with it, to help users understand which page they are on, it could be useful to grey out or otherwise highlight the corresponding button.
Medium-Fidelity Prototype: A very important recommendation from the user was to add key information in a box format at the top of the page, just above the graphs, such as the total number of students. This is vital information, as it gives an overview of the data at a quick glance at the web app. This recommendation was taken into consideration and implemented in the final working dashboard.
High-Fidelity Prototype: This is the final working prototype, which was evaluated and liked by the designers from all points of view, as it adhered to the important principles of design; as per their testimonies, the UI is simple, sleek, and easy to use, with vital information available at first glance.
They also liked the interactivity of the dashboard, whereby they could easily zoom in or out, save the graphs as PNG, and use many more interactive features. The main aim of the evaluation phase is to find out whether the previous work, starting from empathizing, is correct and complete, or whether any important features were missed. Design thinking is not a linear process, so iterations can happen between any stages. Most iterations take place on the low-fidelity and medium-fidelity prototypes because of the low cost associated with them; once the high-fidelity prototype, i.e., the working model, is developed, any iteration is costly. The different evaluation methods used are as follows:
- Heuristic Evaluation: Conducted by a group of UI/UX experts, who determine whether the design follows Nielsen’s 10 usability heuristics.
- Think-Aloud: In this method, the user uses the system and verbalizes their movement through it. This way, a designer can understand whether the design is easy enough for the user to navigate.
- Questionnaire: A set of questions put to the users regarding the design, to understand its effectiveness.
Not all three methods mentioned above are used for every fidelity level; the methods are chosen depending on their effectiveness.
The methods chosen for evaluating the Learning Analytics Dashboard are a combination of Heuristic Evaluation, Think-Aloud, and a Retrospective Test.
User Group: Given that the evaluation methodology and the dataset used to generate the models are fixed, the user group for the evaluation process and the process of recruiting users must be decided. Number of users for evaluation: It is important to decide how many users will evaluate the prototype. As Jakob Nielsen notes, people believe that usability testing is expensive and complex and that user tests should be done only for big-budget projects with a luxurious time plan. But that is not true: elaborate usability tests are a waste of resources, and the best results come from testing no more than 5 users. From the graph below, I can deduce that zero users give no insights. Testing with one single user already yields about a third of all the insights needed about the software. With a second user, their process overlaps a lot with the first user’s, but new insights emerge as well, because people differ from each other. The same goes for the third user, who will do similar things as the first and second users but will add a few new insights. As more users are added, fewer new things are learned.
The cost-benefit analysis of user testing gives an ideal number of around 3 to 5 users, depending on the style of testing.
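The curve described above follows the Nielsen-Landauer model, in which the share of usability problems found with n users is 1 - (1 - L)^n, where L is the probability that a single user exposes a given problem (around 0.31 in their data). A small sketch of the diminishing returns:

```python
def share_of_problems_found(n_users, l=0.31):
    """Nielsen-Landauer model: fraction of usability problems
    uncovered by n test users, where l is the chance that one
    user exposes any given problem (~0.31 in their data)."""
    return 1 - (1 - l) ** n_users

# One user finds ~31%, five users ~84%; beyond that the
# gain per additional user shrinks rapidly.
for n in (1, 3, 5, 15):
    print(n, round(share_of_problems_found(n), 2))
```

This is why five evaluators, as used later in this chapter, already uncover the large majority of usability problems.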
The UI/UX Expert category user profile is given below:
For the teacher module, the participants of the study had to fulfill the characteristics below as closely as possible:
For the evaluation, a handout for the teachers and a handout for the UI/UX experts were prepared (Appendix A, Evaluation Handout). The first page provided a brief overview of the project and the functionalities of the Learning Analytics Dashboard. The information regarding the prerequisites for being in the user group was collected with the following questions:
For UI/UX experts and teachers participating in evaluating the Software Design module,
- Have you taken any Design-related courses? / Do you have any knowledge about learning analytics?
- Describe your experience in designing software. / Describe your experience of using learning analytics.
Task Lists: After deciding on the participant list, it is important to decide the tasks to be carried out by the users on the high-fidelity prototype. The tasks need to be carried out in a goal-driven manner. The task list of the Software Design module is as follows:
- Task 1 (ST1): Interact with the graphs (zoom in or out, select an area to analyze, download charts, etc.).
- Task 2 (ST2): Check the data generated in the form of a table and perform operations such as searching for a name and sorting any of the numeric columns as per the user’s wish.
- Task 3 (ST3): Determine the transponder and substrate selected for the experiment from the graph.
- Task 4 (ST4): Review the visual statistical data that gives an overview of the data fetched by xAPI.
Recruiting Users: For recruiting users for the evaluation process of both modules, an email-based invitation process was used, which included a brief overview of the project. It was sent to both teachers and UI/UX experts with whom contacts existed from previous working relationships or collaborations. From the respondents, three UI/UX experts and five teachers were chosen based on the prerequisites mentioned earlier. Among the five teachers, two participated in evaluating both modules. As per Nielsen’s recommendation, 3–4 users are sufficient, but five users were recruited since the evaluation was carried out mostly online. Each of them was given a copy of the Evaluation Handout (Appendix A), which explained the origins of the software and the xAPI profiles and listed the tasks discussed previously, with provisions for the user to write their responses.
For each task, individual ratings were collected, and at the end, the average rating of the overall system from the 5 users is given below:
After the evaluation of both deliverables, I took into consideration all the valuable feedback received and changed both accordingly. The evaluation was a very important step for understanding the shortcomings, since as the developer I could have been biased in evaluating my own work. Understanding my work from someone else’s perspective was therefore extremely vital.