Sunday, 24 November 2013

Realigning Higher Education for the 21st Century Learner through Multi-Access Learning



Valerie Irvine

Assistant Professor of Educational Technology and
Co-Director, Technology Integration and Evaluation Research Lab
Faculty of Education
University of Victoria
Victoria, BC V8W 3N4 CANADA
virvine@uvic.ca

Jillianne Code
Assistant Professor of Educational Technology and
Co-Director, Technology Integration and Evaluation Research Lab
Faculty of Education
University of Victoria
Victoria, BC V8W 3N4 CANADA
jcode@uvic.ca

Luke Richards
Doctoral Student and Graduate Research Assistant
Faculty of Education
University of Victoria
Victoria, BC V8W 3N4 CANADA
lukejr@uvic.ca

Abstract

Twenty-first-century learners have expectations that are not met within the current model of higher education. With the introduction of online learning, the anytime/anywhere mantra taken up by many postsecondary institutions was a first step toward meeting learner needs for flexibility; however, the choice and determination of delivery mode still reside with the institution and course instructors. Recently, the massive open online course (MOOC) movement has emerged as an undeniable force in higher education, and the authors argue that it is distracting leadership from focusing on alternative options for supporting the needs of learners who demand both personalization and real access to learning opportunities. The key element of the MOOC movement is its openness, which enables student access to education. In this article, the authors present the multi-access learning framework, which envelops the MOOC phenomenon and merges course access modes to enable student choice and agency. The authors report results from a pilot study on one type of multi-access course, in which students were able to choose their mode of access. In this case, remote students accessed the course via webcam and joined their on-campus classmates and instructor, who were together face-to-face. Implications for multi-access learning in relation to the MOOC movement are discussed.

Keywords: massive open online course (MOOC), hybrid learning, open education, learner agency, learner access, learner choice, multi-access learning

Introduction

Twenty-first-century learners have expectations that are not met within the traditional model of mainstream higher education (Castle & McGuire, 2010; Jean-Louis, 2011; Siemens, 2005). Further, as cutbacks to educational budgets continue, and centralized professional development opportunities decrease along with them, it will be difficult for universities to keep up with the expectations and demands of students. Dialogue has begun to emerge on the future of higher education. Within this context, the massive open online course (MOOC) has been introduced as a movement that threatens to fragment higher education (Daniel, 2012; Friedman, 2013; Harden, 2013; Kolowich, 2013). This dialogue and administrative attention directed toward MOOCs may be distracting higher education leadership from focusing on alternative options for supporting the needs of learners who demand both personalization and access to learning opportunities. "Universities and academics are, as always, faced with choices about how to change, and these choices need to be better informed about the kinds of students that are entering [our] institutions" (Jones, Ramanau, Cross, & Healing, 2010, p. 731).

Postsecondary institutions (PSIs) are moving toward learner-centered designs, shifting focus to process rather than product. For example, the 47 European nations that are members of the Bologna Process have adopted the Budapest–Vienna Declaration on the European Higher Education Area, calling for reform and cooperation among European PSIs and, most importantly, calling upon institutions to "foster student-centered learning as a way of empowering the learner in all forms of education" (European Higher Education Area, 2010, p. 2). Further, at the program level, instructional approaches such as problem-based learning (PBL) (Jurewitsch, 2012; Klegeris & Hurren, 2011; Salvatori, 2000) and inductive teaching and learning (Prince & Felder, 2006) foster the development of problem-solving and inquiry skills in real-world contexts. Medical schools in Canada were among the first to use PBL as a core instructional approach, and since then, "the PBL methodology has spread to a variety of different content areas ... and is practiced in many universities and colleges around the globe" (Klegeris & Hurren, 2011, p. 403). In a recent review of the literature, Spronken-Smith and Walker (2010) found that inquiry-based learning was gaining in popularity across many academic disciplines. Although these instructional approaches foster critical-thinking and problem-solving development in students, not enough has been done to move the locus of control to learners when it comes to how they access their courses – potentially one of the many reasons why MOOCs are so popular.

One of the salient features of the MOOC phenomenon is how learners can engage in a large open course community where they often learn more from peers, while being supported by a team of one or more university professors and teaching assistants. However, among the most challenging aspects of MOOCs are attrition and accreditation, as the majority of learners will either drop out of the course entirely or complete the course without any transferable postsecondary credits (Daniel, 2012; Hill, 2013; Jordan, 2013). Some argue that despite these issues, the clarion call of MOOCs is "disregard the dropouts and celebrate giving huge numbers of people access to free, high-quality, education" (Gee, 2012, para. 19).

An examination of current university offerings reveals a dichotomy in the ways students are able to access courses: (1) on-campus face-to-face; or (2) online using a mixture of synchronous and asynchronous technologies. Dissatisfaction, lack of incentives for developing and teaching online courses, and the perception of online courses as poor quality are commonplace in brick-and-mortar universities (Parry, 2009; Seaman, 2009). A recent study of 10,700 faculty members across the United States conducted by the Sloan National Commission found that "over 80 percent of faculty with no online teaching or development experience believe that the learning outcomes for online are 'inferior' or 'somewhat inferior' to those for face-to-face instruction" (Seaman, 2009, p. 6), despite considerable evidence in the literature refuting these beliefs (Tallent-Runnels et al., 2006; Ward, Peters, & Shelley, 2010). As a result, most brick-and-mortar PSIs continue to offer a majority of their courses face-to-face, limiting access to individuals within the geographical and temporal regions surrounding the institution offering those courses. Further, for those PSIs that do offer online options, those courses are often one-off courses within the context of a traditional program, thus contributing to the inflexibility faced by learners, who are unable to access the full array of courses required to earn a credential (Parry, 2009). These programs and their institutions often claim high hidden costs associated with offering a complete online program, a lack of administrative support structure, and the changing role of faculty members (Neely & Tucker, 2010). According to Neely and Tucker, "significant per course costs that are often unaccounted for in university budgets ... include leadership and support ... in coordinating the design, development, and implementation of new courses" (p. 28).

A finding also supported by earlier work by Robinson (2005) is that "there are significant labor and technology costs in building a quality online course. In some cases, this initial cost can be enough to dissuade a real commitment to quality online delivery" (p. 180). These upfront design and development costs can be substantial, as was the case when the University of California Online took out a single $6.9 million USD loan to launch its program (Farr, 2013). These factors are arguably why the online courses offered at brick-and-mortar PSIs tend to be those with higher demand, since it is easier to justify the cost of design, development, implementation, and support of such courses. These factors may also limit the likelihood of niche and highly specialized courses required for students to complete credentials being offered online, because those courses often have lower potential enrollment. Because of this dichotomy in offerings, students often lack the flexibility they require to complete credentials, which risks restricting access to postsecondary education to those who can either afford full-time schooling or have minimal other commitments. For example, in Canada, the typical college student is between the ages of 17 and 27, representing over 75% of the student body, and over 90% of postsecondary students are under the age of 40 (Dale, 2010). Given these demands, this research explores a way in which it might be possible to offer specialized course offerings while maintaining quality, keeping costs down, and enabling access to remote learners, thus accommodating students with varying temporal, familial, monetary, and geographic characteristics.

Customization of higher education for personalization of course delivery has much more potential for disrupting the status quo on campuses than currently recognized. The authors posit that the brick-and-mortar PSI is a "sleeping giant" in the world of online learning as evidenced by the popularity of MOOCs being offered by more traditionalist universities (e.g., Harvard University, Stanford University, University of Toronto) through various means. The authors provide a theoretical foundation for why learners are choosing MOOCs and why it is important for PSIs to support this type of learner. A multi-access framework for learner choice is introduced as a method to provide traditional PSIs a means to support a variety of learners and provide the opportunity to "open" their on-campus courses to the MOOC phenomenon. Findings from a multi-access pilot study are shared, providing preliminary evidence for the framework as a method of supporting learner choice in access to educational opportunities.

Theoretical Framework

During the course of this [the 20th] century, there have been major changes in the aims and goals of education. Whereas the early goal was to produce graduates who possessed basic literacy skills, more recently the stakes have been increased to emphasize higher levels of literacy, greater understanding of traditional subject matters and technology, and the capacity to learn and adapt to changing workforce demands. (Brown & Campione, 1994, p. 289)

The quote above comes from a chapter by Ann L. Brown and Joseph C. Campione in an edited volume entitled Innovations in Learning: New Environments for Education. The 20th century saw great strides and changes in understandings and theories of how people learn, from behaviorist beginnings in animal research to the cognitive revolution. The recent emergence of numerous ubiquitous technologies enabling students' choice in personalizing their learning experiences has encouraged students to become more active agents in their own learning. As a result, research on how people learn in learning communities is finally becoming embedded into mainstream practice.

Fostering a Community of Learners

Environments that foster lasting learning in collaboration with others in the community, whose interactions are as much a matter of collective understanding and shared experience, comprise the Fostering a Community of Learners (FCL) model (Brown, 1994; Brown & Campione, 1994). FCL research has particular relevance in framing the popularity of MOOCs as a medium for learning. Although much of the work that Brown and Campione did in FCL was with children, if one subtracts the what of learning (the domain of study), aspects of this model remain quite salient when speaking about MOOCs. Of particular relevance is the emphasis on the where and how of learning, the situation – the collaborative culture within which learning takes place. FCL environments are designed with the intention to promote critical thinking and reflection underlying the multiple forms of higher literacy such as writing, argumentation, and technological sophistication. FCL environments are by nature a "system of interacting activities" that engage students in principles of research, in order to share information so they can perform a consequential task (Brown & Campione, 1994, p. 293). Based on this approach one may also draw considerable parallels to the later work of Wenger (1998) and Lave (1996) on communities of practice. Of course, this research–share–perform process cannot be done without considering the specific learning principles underlying FCL environments that research has shown will support successful experiences for students.

Jerome Bruner summarizes four critical ideas underlying FCL in his landmark work, The Culture of Education (Bruner, 1996): agency, reflection, collaboration, and culture. The first of these is agency, where learners take control of their own cognitive activity. The second is reflection, where students attempt to make what they learn make sense, often referred to in more recent literature as a critical feature of self-regulated learning (Boekaerts, Pintrich, & Zeidner, 2000; Brown, 1987; Winne & Hadwin, 1998; Zimmerman, 2000). The third is collaboration, where individuals work together in the teaching and learning context. Finally, the fourth is culture – the way that we construct, negotiate, institutionalize, and make it "reality" (Bruner, 1996, p. 87). Each of the aforementioned ideas, or principles, is actively being explored in the research literature across multiple domains and contexts. However, as Bruner argues, agency, where learners take control of their activity, is the first and most critical feature of FCL.

Agency for Learning

As agents and social beings, humans make decisions and enact them on themselves and their environment. Thus, agency arises within social structures and contexts, and once emergent may exert influence capable of altering social, cultural, and structural contexts (Bandura, 2001, 2006). A model of agency for learning (Code, 2010) proposes that the emergence of learner agency is manifested in student choice and abilities to interact with personal, behavioral, environmental, and social factors specifically relevant in the learning context. Within this context, learner agency can be enacted through three different modalities: personal, proxy, and collective agency. Personal agency is the ability of students to choose to originate action (Bandura, 2001). Proxy agency is a socially mediated mode of agency through which individuals choose to have others act on their behalf to secure outcomes they desire (Bandura, 2001, 2006). Collective agency relies on people's shared beliefs in their collective power to attain desired outcomes; it enables people to act together on a shared belief through interactive, coordinated, and dynamic means (Bandura, 2001, 2006). Building upon the ideas of FCL, where environments are intended to foster lasting collaboration with others in the community, it is critical to this model whether learning environments enable students to choose how to best meet their learning needs through self-directed, proxy, or collective forms of teaching and learning. The authors hypothesize that learner agency is the primary reason why enabling student choice is critical for 21st-century learners. Further, the authors also argue that learner agency is likely one of the reasons why the MOOC phenomenon has garnered so much attention.

Multi-Access Learning

Multi-access learning is an opportunity to meet both student needs for access to learning experiences and faculty needs for graduate student recruitment (Irvine, 2009; Irvine & Code, 2011, 2012; Irvine & Richards, 2013). Irvine defines multi-access learning as a framework for enabling students in both face-to-face and online contexts to personalize learning experiences while engaging as a part of the same course. Multi-access learning is different from blended learning because it places the student, rather than the instructor or the institution, at the center of the learning experience. Further, "blended learning" is a problematic term due to its multiple interpretations in the literature and in daily practice, leaving one to ask, "Who controls the blend?" In blended learning settings, when and where the face-to-face sessions occur, and when and how the online synchronous or asynchronous sessions occur, are typically controlled by the institution or instructor, no matter the configuration. Multi-access learning, however, has the learner at the center, with the ability to choose how he/she wants to access the course. The core principle of the multi-access framework is one of enabling student choice in terms of the combination of course delivery methods through which the learning environment is accessed; that is, each individual learner decides how he/she wishes to take the course (e.g., face-to-face or online) and can then participate with other students and the instructor – each of whom may have different modality preferences – at the same time (see Table 1). To illustrate, each configuration of the multi-access framework will be discussed in detail relative to supporting the various preferences illustrated in Table 1.

Table 1. Matrix of learner access by course delivery mode

Note. F2F = face-to-face; "Blended" refers to a mix of consecutive face-to-face and online activities; BOL = blended online (mixing synchronous and asynchronous online activities).

Tier 1 Access: Face-to-Face

The trademark of so many PSIs is the face-to-face classroom; that it is synchronous only needs no explanation. Face-to-face learning comes in many configurations, ranging from small seminars and computer labs to large classrooms and lecture halls. The synchronous, on-campus, face-to-face nature of this delivery is the central focus of most brick-and-mortar PSIs and is represented as the "core" in Figure 1. Since it is highly unlikely that this "traditional" model will become obsolete, and brick-and-mortar campuses may be unlikely to offer parallel online programs in this period of fiscal restraint, expanding these classrooms to support different modes of access is a viable way to support the demands of remote students and to increase flexibility. The simplest way to do this is to enable the multi-access framework through a combination of tiers, as described below.


Figure 1. Tiers of the multi-access framework

Tier 2 Access: Synchronous Online

Tier 2 multi-access learning adds synchronous connectivity for online learner access. Learners on campus are together in a multi-access enabled classroom and have the instructor present in the classroom with them. Remote learners participate by joining in via Internet webcam, and content can be exchanged between participants using desktop sharing. Additional sites of face-to-face student "pods" may also be included in this design, provided the technology is sufficient to capture group video, to zoom in as required to points of interest, and to share content as required. The next layer of Figure 1 represents Tier 2 synchronous online access. The pilot study reported in this article involved a multi-access learning design including Tiers 1 and 2. Future research will examine how the third, asynchronous access option can be incorporated without compromising the quality of the learning experience. If students are required to access the course outside of scheduled meeting times, the multi-access framework may be expanded to include asynchronous technologies, adding a third tier to the access framework.

Tier 3 Access: Asynchronous Online

Tier 3 multi-access courses include those students who access the course asynchronously. The third layer of Figure 1, the asynchronous online layer, represents Tier 3 of the multi-access framework. Asynchronous access to archived synchronous learning events may be a suitable alternative for those who are unable to attend synchronously. Recent research suggests that students who only listened to asynchronous recordings of lectures (podcasts) were able to achieve significantly higher exam results than those who attended classes face to face (McKinney, Dyck, & Luber, 2009). That said, watching archives is often considered a poor form of learning design, so careful attention will need to be paid to how multi-access is extended into the third tier. To move the asynchronous group beyond simply watching archived class videos, learning design will need to address student collaboration and co-construction of meaning. For example, a Tier 3 design may give students a mix of synchronous (Tiers 1 and 2) and asynchronous (Tier 3) course access, creating an overlap so that students across the entire class can participate; separate personalized synchronous student-led or teaching assistant-led sessions could also be established for a pod within a different time zone. Extending further upon this combination of tiers, Tier 4 encompasses opening enrollment to global participation; Tier 4 is represented in Figure 1 as Open Learning.

Tier 4 Multi-Access: Open Learning and the MOOC

Tier 4 multi-access is the extension of enrollment to non-credit students, effectively "globalizing" the learning experience to encompass open learning and MOOCs. The interactivity levels required in each of the tiers will influence the type of MOOC that is feasible in the four-tier mode. For example, a course that is more content- or lecture-oriented is analogous to an on-campus large lecture course, where content mastery is the main learning outcome; this would be referred to as an xMOOC in the open education domain (Rodriguez, 2012; Siemens, 2012). By contrast, a course that requires more student–student interaction around a subject domain (a seminar or small class) is more connectivist and community-focused, and may suit more student-centered classroom designs; this would be referred to as a cMOOC (Rodriguez, 2012; Siemens, 2012). Other possibilities for creating educational opportunities for open learners include a "fishbowl" design, in which learners engaged in discussion are observed by other surrounding students (Sutherland, Reid, Kok, & Collins, 2012), with a certain number of spots reserved for open learners; joining or viewing live streams; accessing class archives; breakout audio or video rooms; and participation through wikis, blogs, Twitter, or text chat.

Purpose of the Study

The purpose of this research is to examine how a course involving Tier 1 and Tier 2 multi-access affects learners' perceptions of the quality of the learning experience, the importance they place on choice in selecting a mode of access when taking a course, and their preferences for learning delivery options.

Specifically, the following research questions were addressed:

1. What was the rank order of learner preference for mode of access?
2. How important was it for learners to have choice in selecting access modes?
3. How did the experience of participating in a multi-access course affect learner perceptions of quality of learning?

Methods

A mixed methods approach was selected for this research because "the combination of quantitative and qualitative approaches provides a better understanding of research problems than either approach alone" (Creswell & Plano Clark, 2007, p. 8). An explanatory mixed methods design was chosen because the study focused on quantitative results, while using qualitative data to explain or build on the initial findings and determine whether any issues outside of the survey items influenced learner acceptance of multi-access delivery. The follow-up explanations model was employed (Creswell & Plano Clark, 2007), as the qualitative data were collected after the initial quantitative phase. Data were collected using a web-based survey, with additional student comments collected through an open response question at the end of the survey. Although follow-up interviews were conducted, for the purposes of this article only survey and open response data are reported.

Context

In Spring 2012, a petition was signed by approximately 50 students requesting that the final course prior to their practicum be offered online. That course, however, was an intensive discussion seminar on ethics in teaching, taken before learners embarked on a three-month teaching practicum. The idea of an asynchronous online course, or even one with the typical voice-over slide deck, was simply not considered an option by the instructional team. The instructors were initially reluctant to offer the course online because they were skeptical about the quality of an online course in that format, not unlike the findings from the Seaman (2009) report cited earlier. However, they accepted once the Tier 2 multi-access synchronous mode was provided, as it accommodated the seminar-like design the instructors desired. An emphasis on some face-to-face students being present, mixed with video and conversation flow from those remote, was of key importance and was supported through the Tier 1 (face-to-face) and Tier 2 (synchronous online) configuration provided. After finding this common ground, the course was offered to a cohort of 26 pre-service teachers in a secondary education program in Western Canada. Seventeen students participated via desktop video conferencing, while the instructor and nine students were in an on-campus classroom enabled with room and cloud-based video conferencing. On the last day of the course, the learners were invited by email to participate in the study, which was approved by a university human research ethics board. A follow-up reminder was sent at the conclusion of their practicum. The invitation contained a link to an online consent form and survey. Participants were compensated $50 CAD for participation in the study.

Although this sample size is limited, it serves the purpose of providing important feedback and evidence on the design of a specific multi-access course. Further, it should be noted that an additional limitation of this study was that the course ran using a student cohort, so the learners already had familiarity with each other.

Instruments

Students completed a 17-question online survey that included demographic information on age, gender, education program, and teaching area. Items also gathered information on student experience with online courses, preferences for course access mode, and open-ended questions. Following the administration of the online survey, additional qualitative data was obtained through multiple interviews conducted with a subsample of the participants and separately with the instructor.

Participant Profile

From this class of 26 secondary education students, 16 gave consent to participate in this study, including 11 women (41%) and five men (19%) (mean age = 28 years, age range 23-45). Ten of the participants were in the remote group (62.5%), while six were in the face-to-face group (37.5%). Of the 15 participants who provided valid responses, eight (53.3%) had taken an online course before (seven out of 10 women, or one out of five men). Despite the small sample size, the results support further investigation. A larger study examining six sections of the same course (N = 180) is scheduled for the 2013-2014 academic year.

Results

Learner Preference for Course Modality

Students were asked to rank their preference for course modality if they were in the same city as the university they were attending and able to attend classes face to face. The options provided were: (1) face-to-face; (2) online; (3) multi-access in the face-to-face group; (4) multi-access in the remote group; and (5) blended. Of the 15 responses, five responses (or 33.3%) indicated they would choose "multi-access in the face-to-face group" as their first choice. The next most popular first-choice rank was "multi-access in the remote group," with 25% (or four) of the responses. Combining these responses, nine out of 15 or 60% of responses indicated multi-access as the preferred modality. Blended came in third with three responses, face-to-face came in fourth with two responses, and online came in last with one response.

In contrast, the lowest-ranked choice out of the five options was online, with nine responses (56.3%). The second least popular modality was face-to-face, with four responses (25%). The remaining lowest-rank responses went to blended (one response) and multi-access remote (one response). It is interesting to note the contrast between the first-ranked and lowest-ranked course delivery modes. The lowest-ranked modalities represented essentially opposite views, with the most responses divided between face-to-face and online. These opposite views converge in the first-rank results, where the top two favorite modalities were the multi-access options – diverse preferences (online and face-to-face) accommodated within one mode. Fourteen out of 15 learners (93.3%) ranked one of the multi-access options (face-to-face or remote) as either their first or second choice. The first- through fifth-place rankings of the five delivery modes, analyzed using the Friedman test for non-parametric related samples (Field, 2005), appear in Table 2. The small sample size is a limitation to be noted with the Friedman test and likely had an effect on the significance of the findings (p = .094).
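The Friedman statistic reported in Table 2 can be reproduced from raw rankings with the standard rank-sum formula, without statistical software. The sketch below is a minimal Python illustration; the sample rankings are hypothetical, not the study data:

```python
def friedman_chi_sq(rankings):
    """Friedman chi-square statistic for related samples.

    rankings[i][j] is respondent i's rank (1 = first choice) for
    condition j; assumes no tied ranks within a respondent.
    """
    n = len(rankings)        # number of respondents
    k = len(rankings[0])     # number of conditions (delivery modes)
    col_sums = [sum(r[j] for r in rankings) for j in range(k)]
    chi_sq = (12.0 / (n * k * (k + 1))) * sum(s * s for s in col_sums) \
        - 3 * n * (k + 1)
    return chi_sq, k - 1     # statistic and degrees of freedom

# Hypothetical rankings over five delivery modes (NOT the study data):
# columns = [F2F, online, MA-f2f, MA-remote, blended]
sample = [
    [4, 5, 1, 2, 3],
    [3, 5, 1, 2, 4],
    [4, 5, 2, 1, 3],
]
stat, df = friedman_chi_sq(sample)
```

With k = 5 delivery modes the test has df = 4, matching Table 2; the statistic itself of course depends on the actual rankings collected.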

Table 2. Rank order of the five modes of course access using the Friedman test

Multi-access in the face-to-face group
Multi-access in the remote group

Note. N = 15, χ² = 7.947, df = 4, p = .094. 1 = first choice to 5 = last choice.

Descriptive statistics were used to examine first-choice preferences by multi-access group. In the remote group of 10 participants, seven (70%) indicated a multi-access option as their first choice: three (30%) again chose the multi-access remote group, while four (40%) chose the multi-access face-to-face group. The face-to-face group's first-choice responses were spread almost equally across all delivery options except face-to-face, which received zero responses.

Importance of Choice of Delivery

Fifteen learners in the multi-access course were asked about the importance of choice of delivery using a 5-point scale, with 5 being "Very important" and 1 being "Very unimportant." The descriptive results are found in Table 3 and Table 4. A cross-tabulation, which summarizes categorical data, revealed that learners who had not taken an online course before were solely responsible for the lower responses of neutral or somewhat important, while all responses by the learners who had taken an online course before ranked the choice of delivery as very important. Interestingly, a strong majority of both remote (seven of 10) and face-to-face groups (four of five) of the multi-access class reported choice of delivery as very important.
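A cross-tabulation of this kind takes only a few lines of Python. The records below are hypothetical placeholders (not the study data), chosen to mirror the reported pattern that every rating below "Very important" came from a learner with no prior online-course experience:

```python
from collections import Counter

# Hypothetical survey records (NOT the study data): each tuple pairs a
# respondent's prior online-course experience with their importance
# rating on the 5-point scale (5 = Very important).
responses = [
    (True, 5), (True, 5), (True, 5), (True, 5),
    (False, 5), (False, 5), (False, 4), (False, 3),
]

# Cross-tabulate experience against rating: counts per (cell) pair
crosstab = Counter(responses)

# The reported pattern: every rating below 5 comes from a learner
# with no prior online-course experience
below_five_all_novice = all(
    not experienced
    for (experienced, rating) in responses
    if rating < 5
)
```

`Counter` over (experience, rating) tuples gives the same cell counts as a formal contingency table, which is sufficient for the descriptive comparison made here.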

Table 3. Descriptive statistics for the importance of choice of delivery

Note. N = 15. 5-point scale (5 = Very important, 4 = Somewhat important, 3 = Neutral, 2 = Somewhat unimportant, 1 = Very unimportant).

Table 4. Frequencies for the importance of choice of delivery

Note. N = 15.

Quality of Learning

Fourteen responses were received to an open-ended question asking how the learners perceived, with all else being equal (same instructor, same course), the impact of multi-access on the quality of teaching and learning. Eight responses (57%) reported they perceived the quality as increased and six responses (42.9%) reported the quality as being the same. A cross-tabulation showed no pattern between response and group membership (face-to-face or remote). Quotations from student comments that serve as exemplars are provided below.

Comments from remote learners. The following response from a remote learner captures the sentiment of those who reported they perceived the quality as the same:
"I think the quality of teaching and learning was not affected by the course being online. The instructor was effective in delivering the material and giving appropriate wait times after asking questions. It was a very interactive course which I believe would have the same impact if the course was fully F2F. We are going towards an online community, and it is great to know that there are already professors out there that are equipped with the skills and knowledge to effectively teach in any setting. Great experience. I wish more people this year had had the same opportunity." (Student 1)

The following response from a remote student is representative of those who felt the quality of learning was increased in the multi-access mode:
"I would say that it enhanced it. I felt like I was in the class with live video and audio feeds, but at the same time I had access to review the teaching materials on my own computer and expand with my own research during the class without disrupting the flow of the lesson. For a long class (3 hours +) the opportunity to access from home was a huge advantage because the comfortable setting allowed me to hold focus and breaks were more refreshing." (Student 2)

Another remote student also pointed out the freedom to conduct deeper inquiries into topics during the sessions as a strength:
"I really enjoyed the multi-access experience. I had ongoing conversations on instant messenger with a classmate whilst listening and taking in a presentation for example. If you're in a face-to-face class you can't just pull out your laptop and start typing because it's rude, but when you're using multi-access, you can immediately check out any thought tangents online whilst keeping up with the presenter. This makes the learning experience fuller, because you can check things out as you think of them instead of forgetting them and not getting around to it after the class is done. I did feel part of the class as well. I also experienced the class from the other side of the monitor, and I have to say, it feels better on the technology. I felt the pace of the class was much slower when I was in the classroom F2F." (Student 3)

A remote learner who had taken online courses before reported her personal opinion of an online course versus a multi-access one as a remote participant:
"Hmmmmm. Personally I am an auditory learner so this was exponentially better than any previous online learning courses I have taken." (Student 4)

One remote student, who reported that the multi-access remote mode was no different in learning quality from a face-to-face class, went on to explain the significance of its convenience and her views on expanding the offering:
"It was fairly neutral, overall. I didn't feel like it was any better or worse in terms of learning quality, but I did feel that it was lightyears more convenient for me. Grow this opportunity! Offer these kinds of course mediums as often as possible! They really do make the grade, and it makes life for people in rural areas so much easier and more affordable!" (Student 5)

One remote learner even reported that he would choose the multi-access remote mode when living only 45 minutes from campus:
"If I lived very close to campus year round, I think I would have preferred to be in a F2F class or a multi-access class in which I was in the room. However, I lived in [a town on the outskirts] and avoiding the 45 minute drive saved me a lot of money and valuable time that I could spend being more productive. On top of that, the flexibility that the multi-access course provided allowed me to move to another city to prepare for my practicum much further ahead of schedule than a F2F course would have permitted. I went to my practicum city 3 weeks before my start day; while a F2F class would have given me a long weekend to pack up and move, meet with teachers, supervisor, and admin, and plan my lessons with no time to observe." (Student 7)

The point of choice and student agency was raised many times, as described by this student, who valued the ability to make choices as a remote learner:
"I think it contributes to the quality of learning because it's differentiated instruction. By having a multi-access course, students can choose how to participate. I felt like my needs were met and the video enhanced the quality of the teaching and learning. Without video, I wouldn't be able to concentrate for 3 hours." (Student 8)

Comments from face-to-face learners. Many students in the face-to-face group shared the opinion that the quality of learning increased: "I commend the individuals who designed and implemented this course. It was extremely successful, and accommodated many students who would have otherwise faced serious challenges regarding their living situations" (Student 6).

In this offering, only students with practica outside of the city or province were eligible to sign up as remote learners, but some learners within the city expressed an interest in participating remotely: "I would have also appreciated the opportunity to choose whether I would be an online or F2F student, even though I reside in [University's city]" (Student 9). No one reported frustration with accommodating those participating remotely, as this face-to-face learner indicates: "I know that the remote group benefitted from the online aspect of the class for monetary reasons, which I fully support. University is expensive, saving money any way that individuals can, should" (Student 15). In fact, one face-to-face learner stressed the importance of making the multi-access mode available for the professional development of practicing teachers: "I think [multi-access] would be ESPECIALLY important for professional development courses that full-time teachers would want to take" (Student 10).

Students in the face-to-face group also reported their perceptions of the quality of learning:

"Multi-access allowed me to talk and discuss with students and hear their actual voices and their thoughts rather than just written comments. From other online classes I've taken there was very little student–student participation, with this class I felt like these peers were right there with us. It enhanced the experience." (Student 11)

Another student concurred, then questioned whether the same would hold for students who were not in a cohort program and not previously acquainted:
"The multi access did not take away from the course – I found it to be more open for everyone – but how much of that has to do with the fact that we have worked together in separate courses. I was in the face-to-face component, but having taken online programs, I think the multi-access component made people feel more linked to one another." (Student 12)

Another student explained the benefit of attending a few classes remotely when life interruptions would otherwise have caused her to miss class entirely: "This course was amazing. It allowed for freedom of life – the ability to participate online and face-to-face was essential in life as a parent, caregiver for an ailing parent and a full time student." (Student 13)

One face-to-face student who valued her own learning experience was concerned about the absence of face-to-face interaction with the instructor for the remote group. She did, however, note the benefit the remote students derived from connecting with him via multi-access as opposed to a regular online course. Having taken online courses before, she had the background to compare the experiences:

"In terms of our instructor, I am glad that I was a F2F student. I am grateful to have gotten to know [the instructor] and interact with him in a way that I do not think was possible or convenient for the online students. In this aspect, I think that multi-access detracted from the learning experience. However, without multi-access, these students would not have been able to interact with our class or with [him] at all. In this way, it is a positive thing. I definitely feel that regular online classes are nowhere near as effective as our multi-access experience, and I would advocate for multi-access over regular online learning anytime." (Student 14)

Discussion

With an increasing variety of access options to postsecondary courses, from online courses to massive open online courses, and increased pressure to recruit students, future research is needed on learning designs that incorporate student choice of how, and why, they access learning experiences. In this study, all of the students who had taken an online course in the past ranked choice in delivery mode as very important, unlike their counterparts. Overall, 11 of the 15 students ranked the importance of choice in delivery as very important, adding evidence to the claim that, given the opportunity to be agents in their own learning, students will make choices aligned with other personal, social, and environmental factors (i.e., flexibility is required for a variety of reasons). Although illustrative, this finding should be explored further to examine whether the experience of taking online courses strengthens learner agency by providing choice, or whether those who already have a strong sense of personal agency and a need for choice tend to take online courses.

With regard to learning designs, varying combinations of multi-access learning should be tested with different types of course enrollments on campuses, including those with open registration, to determine the impact on enrollment numbers and, more importantly, on the quality of the learning experience from the perspective of the learner. Since there is limited research on the relationship between MOOC enrollment and the type of group (class versus open learners) and type of community, an examination of learning designs that foster a community of learners, and of their effects on student learning and retention, is recommended.

Conclusion

Higher education institutions need to refocus on realigning their educational mandate to support increased access to courses for the 21st-century learner via alternative means. Improved access to learning experiences is one of the characteristics shared by MOOCs and multi-access learning. However, the multi-access framework's unique approach allows institutions to take student choice and preferences of access into account, thus enabling postsecondary institutions (PSIs) to extend their current offerings. As a result, the multi-access framework can help PSIs potentially increase enrollments through the different tiers of course access. MOOCs are but one of these tiers and would be particularly useful for high-demand courses; most importantly, however, the other tiers will be critical for expanding access by opening up more niche courses, and thus entire programs, for credit or open access. The authors posit that the multi-access framework offers PSIs a better business model. A question to investigate is the extent to which multi-access courses are scalable in Tier 2 (synchronous online) and Tier 3 (asynchronous online) when combined with a Tier 1 face-to-face course.

The multi-access framework includes various tiers but may be better understood from the perspective of the learner as: (1) face-to-face registered; (2) online synchronous registered; (3) online asynchronous registered; and (4) open learner. The authors argue that the multi-access framework is an alternative to the MOOC design for those who want access to higher learning. This raises the question: given this choice of access mode and its parallel to "traditional" course offerings, could the framework be extended to allow those who register in MOOCs to also gain PSI credit? With most brick-and-mortar universities dabbling in MOOCs, the results of this study should be of interest.

With the postsecondary student population shrinking as the echo generation leaves the common age range of postsecondary students, together with the increase in the number of postsecondary institutions and the decrease in funding, new models of learning must be considered. Since university enrollments face a declining student population for various reasons (The Association of Universities and Colleges of Canada, 2011; Means, Toyama, Murphy, Bakia, & Jones, 2010; Vedder, 2012), it is critical that PSIs deliver what they promise in offering distributed learning: choices for access to learning and learner-centered designs. The face-to-face, blended, online, and flexible models currently promoted remain limited in achieving learner-centered designs because the institution, not the learner, controls the modality or the blend. With the advent of online learning in the late 1990s, the anytime/anywhere mantra taken up by many PSIs was a first step toward meeting learner needs for flexibility, considering the student, for the first time, as a locus of control in the learning environment. The marketing "lingo" of promotional materials for such programs often promises "anytime, anywhere ..." learning, yet PSIs have missed a critical component: learners also want to connect in "any way."

References

The Association of Universities and Colleges of Canada. (2011). Trends in higher education: Volume 1 – enrolment. Ottawa, Canada. Retrieved from http://www.aucc.ca/wp-content/uploads/2011/05/trends-2011-vol1-enrolment-e.pdf

Bandura, A. (2001). Social cognitive theory: An agentic perspective. Annual Review of Psychology, 52(1), 1-26. doi:10.1146/annurev.psych.52.1.1

Bandura, A. (2006). Towards a psychology of human agency. Perspectives on Psychological Science, 1(2), 164-180. doi:10.1111/j.1745-6916.2006.00011.x

Boekaerts, M., Pintrich, P. R., & Zeidner, M. (2000). Self-regulation: Directions and challenges for future research. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 749-768). San Diego, CA: Academic Press. doi:10.1016/B978-012109890-2/50030-5

Brown, A. L. (1987). Metacognition, executive control, self-regulation, and other more mysterious mechanisms. In F. E. Weinert & R. H. Kluwe (Eds.), Metacognition, motivation, and understanding (pp. 65-116). Hillsdale, NJ: Erlbaum.

Brown, A. L. (1994). The advancement of learning. Educational Researcher, 23(8), 4-12. doi:10.3102/0013189X023008004

Brown, A. L., & Campione, J. C. (1994). Psychological theory and the design of innovative learning environments: On procedures, principles, and systems. In L. Schauble & R. Glaser (Eds.), Innovations in learning: New environments for education (pp. 289-325). Mahwah, NJ: Erlbaum.

Bruner, J. (1996). The culture of education. Cambridge, MA: Harvard University Press.

Castle, S., & McGuire, C. (2010). An analysis of student self-assessment of online, blended, and face-to-face learning environments: Implications for sustainable education delivery. International Education Studies, 3(3), 36-40. Retrieved from http://www.ccsenet.org/journal/index.php/ies/article/download/5745/5308

Code, J. R. (2010). Assessing agency for learning (Doctoral dissertation, Simon Fraser University, Burnaby, Canada). Retrieved from http://summit.sfu.ca/system/files/iritems1/11308/etd6068_JCode.pdf

Creswell, J. W., & Plano Clark, V. L. (2007). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage.

Dale, M. (2010). Trends in the age composition of college and university students and graduates. Education Matters: Insights on Education, Learning and Training in Canada, 7(5). Retrieved from http://www.statcan.gc.ca/pub/81-004-x/2010005/article/11386-eng.htm

Daniel, J. (2012). Making sense of MOOCs: Musings in a maze of myth, paradox and possibility. Journal of Interactive Media in Education, 2012(3). Retrieved from http://jime.open.ac.uk/article/2012-18/html

European Higher Education Area. (2010). Budapest–Vienna declaration of the European higher education area. Retrieved from http://www.ehea.info/Uploads/news/Budapest-Vienna_Declaration.pdf

Farr, C. (2013, January 8). UC spends big to market its online courses – but reaches only one person [Web log post]. Retrieved from http://www.venturebeat.com/2013/01/08/uc-spends-big-to-market-its-online-courses-reaches-one-user/

Field, A. (2005). Discovering statistics using SPSS (2nd ed.). Thousand Oaks, CA: Sage.

Friedman, T. (2013, March 5). The professors' big stage. The New York Times. Retrieved from http://www.nytimes.com/2013/03/06/opinion/friedman-the-professors-big-stage.html?_r=2&

Gee, S. (2012, June 16). MITx – the fallout rate [Web log post]. Retrieved from http://www.i-programmer.info/news/150-training-a-education/4372-mitx-the-fallout-rate.html

Harden, N. (2013). The end of the university as we know it. The American Interest, 8(3), 54-62. Retrieved from http://www.the-american-interest.com/article.cfm?piece=1352

Hill, P. (2013). The most thorough summary (to date) of MOOC completion rates [Web log post]. Retrieved from http://www.mfeldstein.com/the-most-thorough-summary-to-date-of-mooc-completion-rates/

Irvine, V. (2009). The emergence of choice in "multi-access" learning environments: Transferring locus of control of course access to the learner. In G. Siemens & C. Fulford (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2009 (pp. 746-752). Chesapeake, VA: Association for the Advancement of Computing in Education. Available from EdITLib Digital Library. (31583)

Irvine, V., & Code, J. (2011, January). The 21st century university. Online presentation delivered to the Change11 MOOC. Retrieved from http://change.mooc.ca/week16.htm

Irvine, V., & Code, J. (2012, May). The 21st century university: Implications and benefits of choice of learner access and openness. Presentation delivered at the BCNET and HPCS Conference 2012, Vancouver, Canada. Abstract retrieved from http://2012.hpcs.ca/program/campus-it-solutions/the-21st-century-university-implications-and-benefits-of-choice-of-learner-access-and-openness/

Irvine, V., & Richards, L. (2013, January). Multi-access learning: Overview and preliminary project data. Online presentation delivered to the Canadian Institute of Distance Education Research. Retrieved from http://cider.athabascau.ca/CIDERSessions/irvine2013/sessiondetails

Jean-Louis, M. (2011). Final report: Engagement process for an Ontario Online Institute. Retrieved from http://www.tcu.gov.on.ca/pepg/publications/ooi_may2011.pdf

Jones, C., Ramanau, R., Cross, S., & Healing, G. (2010). Net generation or digital natives: Is there a distinct new generation entering university? Computers & Education, 54(3), 722-732. doi:10.1016/j.compedu.2009.09.022

Jordan, K. (2013). MOOC completion rates: The data. Retrieved April 2, 2013, from http://www.katyjordan.com/MOOCproject.html

Jurewitsch, B. (2012). A mixed-methods systematic review of online versus face-to-face problem-based learning. The Journal of Distance Education, 26(2). Retrieved from http://www.jofde.ca/index.php/jde/article/view/787/1399

Klegeris, A., & Hurren, H. (2011). Impact of problem-based learning in a large classroom setting: Student perception and problem-solving skills. Advances in Physiology Education, 35(4), 408-415. doi:10.1152/advan.00046.2011

Kolowich, S. (2013). The professors behind the MOOC hype. The Chronicle of Higher Education. Retrieved from http://www.chronicle.com/article/The-Professors-Behind-the-MOOC/137905/

Lave, J. (1996). Teaching, as learning, in practice. Mind, Culture, and Activity, 3(3), 149-164. doi:10.1207/s15327884mca0303_2

McKinney, D., Dyck, J. L., & Luber, E. (2009). iTunes University and the classroom: Can podcasts replace professors? Computers & Education, 52(3), 617-623. doi:10.1016/j.compedu.2008.11.004

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC: U.S. Department of Education. Retrieved from http://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf

Neely, P. W., & Tucker, J. P. (2010). Unbundling faculty roles in online distance education programs. The International Review of Research in Open and Distance Learning, 11(2), 20-32. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/798/1543

Parry, M. (2009, August 31). Professors embrace online courses despite qualms about quality. The Chronicle of Higher Education. Retrieved from http://www.chronicle.com/article/Professors-Embrace-Online/48235/

Prince, M. J., & Felder, R. M. (2006). Inductive teaching and learning methods: Definitions, comparisons, and research bases. Journal of Engineering Education, 95(2), 123-138. doi:10.1002/j.2168-9830.2006.tb00884.x

Robinson, R. (2005). The business of online education: Are we cost competitive? In J. Bourne & J. C. Moore (Eds.), Elements of quality online education: Engaging communities (pp. 173-181). Needham, MA: The Sloan Consortium.

Rodriguez, C. O. (2012). MOOCs and the Al-Stanford like courses: Two successful and distinct course formats for massive open online courses. European Journal of Open, Distance and E-Learning, 2012(2). Retrieved from http://www.eurodl.org/?p=archives&year=2012&halfyear=2&article=516

Salvatori, P. (2000). Implementing a problem-based learning curriculum in occupational therapy: A conceptual model. Australian Occupational Therapy Journal, 47(3), 119-133. doi:10.1046/j.1440-1630.2000.00216.x

Seaman, J. (2009). Online learning as a strategic asset. Volume II: The paradox of faculty voices: Views and experiences with online learning. Washington, DC and Babson Park, MA: Association of Public and Land-grant Universities and Babson Survey Research Group. Retrieved from http://www.aplu.org/document.doc?id=1879

Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2(1), 3-10. Retrieved from http://www.itdl.org/Journal/Jan_05/article01.htm

Siemens, G. (2012, July 25). MOOCs are really a platform [Web log post]. Retrieved from http://www.elearnspace.org/blog/2012/07/25/moocs-are-really-a-platform/

Spronken-Smith, R., & Walker, R. (2010). Can inquiry-based learning strengthen the links between teaching and disciplinary research? Studies in Higher Education, 35(6), 723-740. doi:10.1080/03075070903315502

Sutherland, R., Reid, K., Kok, D., & Collins, M. (2012). Teaching a fishbowl tutorial: Sink or swim? The Clinical Teacher, 9(2), 80-84. doi:10.1111/j.1743-498X.2011.00519.x

Tallent-Runnels, M. K., Thomas, J. A., Lan, W. Y., Cooper, S., Ahern, T. C., Shaw, S. M., & Liu, X. (2006). Teaching courses online: A review of the research. Review of Educational Research, 76(1), 93-135. doi:10.3102/00346543076001063

Vedder, R. (2012). Five reasons college enrollments might be dropping. Retrieved from http://www.bloomberg.com/news/2012-10-22/five-reasons-college-enrollments-might-be-dropping.html

Ward, M. E., Peters, G., & Shelley, K. (2010). Student and faculty perceptions of the quality of online learning experiences. The International Review of Research in Open and Distance Learning, 11(3), 57-77. Retrieved from http://www.irrodl.org/index.php/irrodl/article/download/867/1611

Wenger, E. (1998). Communities of practice: Learning as a social system. The Systems Thinker, 9(5), 2-3.

Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in theory and practice (pp. 277-304). Mahwah, NJ: Erlbaum.

Zimmerman, B. (2000). Attaining self-regulation: A social cognitive perspective. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13-39). San Diego, CA: Academic Press. doi:10.1016/B978-012109890-2/50031-7

Acknowledgments

This research was supported by the Canada Foundation for Innovation, the British Columbia Knowledge Development Fund, Knowledge North, BCNET, and the Government of Canada Social Sciences and Humanities Research Council (File No. 410-2010-0451).



Learner Participation and Engagement in Open Online Courses: Insights from the Peer 2 Peer University



June Ahn

Assistant Professor
College of Information Studies and College of Education
University of Maryland, College Park
College Park, MD 20740 USA
juneahn@umd.edu

Brian S. Butler
Associate Professor
College of Information Studies and Robert H. Smith School of Business
University of Maryland, College Park
College Park, MD 20740 USA
bsbutler@umd.edu

Alisha Alam
Graduate Research Assistant
College of Information Studies
University of Maryland, College Park
College Park, MD 20740 USA
alisha@umd.edu

Sarah A. Webster
Graduate Research Assistant
College of Information Studies
University of Maryland, College Park
College Park, MD 20740 USA
websters@umd.edu

Abstract

Recent developments in massive open online courses (MOOCs) have brought renewed attention to online learning, but most of the attention has been centered on courses offered through platforms such as Coursera and edX. This paper focuses on the importance of examining alternative, large-scale learning initiatives that promote more participatory modes of education production and delivery. It presents a comprehensive description of the Peer 2 Peer University (P2PU), a social computing platform that promotes peer-created, peer-led, online learning environments. Using log data from the P2PU platform, the ecosystem of this learner-generated education platform is described. The descriptive analysis highlights P2PU's growth in terms of the participatory learning environments that have been created – such as online study groups, courses, and challenges – and also describes the participation patterns of P2PU members. This paper provides one of the first empirical descriptions of an emerging open learning platform and illuminates how log data from the platform, particularly in relation to open courses, open badges, and learning tasks embedded in courses, can be used to track development of courses and engagement in learning activities across P2PU. The analyses reported here are aimed at helping researchers understand the P2PU ecosystem and identify potential areas for future study as P2PU works to open its data for public analysis.

Keywords: massive open online course (MOOC), open education, peer-to-peer learning, participatory learning, crowdsourced education

Introduction

While recent platforms for massive open online courses (MOOCs) such as Coursera and edX have garnered substantial public attention, MOOCs were originally conceived of as participatory environments where groups of learners (from small to massive) collaborate to aggregate self-created learning experiences (McAuley, Stewart, Siemens, & Cormier, 2010). In this model of peer-based, open learning, a course might organize topics and guide the schedule. However, individual learners coordinate the learning activities by contributing blog posts, links to resources, and other media. This model foregrounds a bottom-up, crowd-generated approach to delivering online education, in contrast to the top-down approach implemented in popular platforms. Yet in spite of their differences, learner participation and engagement are a concern in both types of MOOCs (Waters, 2012).

At a basic level, any learning platform requires participant engagement to achieve its goals. Individuals must participate in the provided activities in order to have the learning experiences that are the focus of the platform. Beyond this basic form of participation, crowd-generated learning platforms require learners to engage in many other activities such as creating courses and learning materials, joining online courses, contributing comments and discussion, and persisting in the group's learning activities. Creating a participatory MOOC environment requires both more coordinated work (Butler & Ahn, 2013) and more types of participation than top-down MOOCs that rely primarily on individual learner engagement (Ahn, Weng, & Butler, 2013; Kafai & Peppler, 2011). However, in spite of its importance, there has been little empirical work done on conceptualizing and measuring participation in large scale, open learning platforms.

In the following study, the patterns of participation and engagement present in a prominent participatory, open online learning community, the Peer 2 Peer University (P2PU), are examined. An outline is first provided of how P2PU is an example of a peer-generated open online course platform that speaks to the original notion of participatory MOOCs. Then, statistics derived from a comprehensive dataset about P2PU courses and participant activity are presented, with the aim of exploring the question of how learners have participated and engaged with open online courses in P2PU.

The data in this study describe the course ecology that exists on the P2PU platform using metrics such as the number of member-created courses, types of courses, patterns of course enrollment, and active contributions to these peer-to-peer courses over time. A descriptive analysis of the diverse learning environments that have arisen in this participatory, open education platform is also offered.

This paper makes several contributions to research on open online learning communities. Recent developments in top-down MOOCs (Coursera, edX, etc.) have brought renewed energy and attention to online learning. This paper highlights the importance of examining alternative, and equally exciting, open learning initiatives such as P2PU. It provides one of the first empirical descriptions of an emerging open learning platform, illuminating how log data from P2PU, particularly in relation to open courses, open badges, and learning tasks embedded in courses, can be used to track the development of courses and engagement in learning activities across the platform. Different flavors of online education are developing at a rapid pace, bringing numerous research questions and concerns to the fore. The descriptions presented in this paper are aimed at helping researchers understand the P2PU ecosystem and identify potential areas for future study as P2PU works to open its data for public analysis.

Related Work

Understanding the different definitions of the term MOOC is vital to developing a nuanced understanding of how educational experiences are developed and delivered in various online platforms. Recent, high-profile MOOC examples such as Coursera, edX, and Udacity represent what some term "xMOOCs," with their institutional, top-down, content-delivery-driven models (Siemens, 2012). Courses in these platforms are typically designed as a weekly syllabus of video lectures followed by quizzes or other assignments that evaluate whether students understood the content. Discussion boards are also often provided for students to ask questions and seek clarification on information related to the course. These courses mirror the structure seen in most earlier online education platforms. The major innovations have been these platforms' ability to curate and freely deliver content from elite universities and to garner "massive" enrollments of thousands and tens of thousands of learners.

In contrast to this traditional model of teaching and learning is what educators have referred to as connectivist MOOCs or "cMOOCs" (McAuley et al., 2010; Siemens, 2012). This original form of MOOC was viewed as an experiment that leveraged the distributed, networked information available on the Internet with the learners themselves generating most of the learning activities. Learners might write their own reflections about what they are learning on blogs and other social media platforms, share these back with their peers, create social networks with others in the MOOC, share information, and aggregate their ideas to create an emergent learning experience.

This form of collective learning differs from the xMOOC model in several significant ways (Kafai & Peppler, 2011). First, the modes of production differ dramatically. In xMOOCs, elite institutions such as Stanford University, Harvard University, Massachusetts Institute of Technology, and partner universities develop courses. In cMOOCs, the underlying philosophy is that any individual should be empowered to design a learning experience. Second, course implementation and the role of learners are conceptualized very differently. In an xMOOC, learner responsibilities focus on consuming the course content and completing evaluations to assess understanding of that content. In a cMOOC, learners may utilize the course-provided content, but as one resource among many that they may access through their own online searching and research. Learners take on the dual role of learners and teachers, as they help their peers through the learning process they are also undertaking. In a cMOOC, the content of the course is user-generated and emergent, arising from the persistent contributions of the learners themselves. Leadership is distributed and the learning experience is severely limited if there are only a few students generating ideas and information.

Connectivist or collectivist MOOC environments build on broader trends in computing and technology-mediated social interaction. These versions of MOOCs leverage the capabilities of social computing, where networked media and technology enable individuals to easily share information, communicate with others, and drive online content-creation (Parameswaran & Whinston, 2007). Thus, instead of treating learning as information transfer from instructor to student, learning is conceptualized as arising from social interaction and information sharing via networked tools. A collectivist philosophy of online education also represents a particular instantiation of crowdsourcing. The term crowdsourcing describes the use of technology to outsource a task typically done by a particular agent or person by distributing the task to a large group of people to do collectively (Howe, 2006). In the MOOC context, one could observe the process of course creation as a crowdsourcing example. In xMOOC platforms, course creation is crowdsourced, in a sense, to individual universities and their instructors, but aggregated in a central platform. Yet at a micro-level, courses are still designed and created by a single source, the instructor. In a cMOOC context, the course content and design are further crowdsourced to the learners themselves. Rather than one source of information and content, learners form online social networks in which they share and aggregate content across multiple channels. Through the use of social computing platforms, education could potentially be a crowdsourced endeavor, where learners create, manage, and collectively implement large-scale, online courses.

The Peer 2 Peer University

P2PU is a non-profit organization and online platform that began in 2009. The online community promotes experimentation in open learning and peer-led education. The platform allows any member to design and create an educational course, which can then be taken by any other member in the online community. In P2PU, learning environments come in a variety of formats (with courses being only one instantiation). The general term used in P2PU is learning "project," and this term will be used in the present paper to speak generally to different P2PU learning situations. More specific terms refer to particular learning environments. In P2PU, members can create study groups to bring together learners around topics of interest for a set period of time. Members can also create more formal courses that are structurally similar to study groups. The most recent iteration of P2PU adds challenges, which are asynchronous courses that remain persistent and act as guides for learners through an educational experience (rather than being time-bound). Learners in P2PU can join, complete, and leave challenges at any moment. P2PU has also introduced new forms of collective assessment through open badges. Learners can earn badges associated with different learning tasks and courses, which tie into the Mozilla Open Badges framework (The Mozilla Foundation & P2PU, 2012). Study groups, courses, challenges, and badges are all important constructs that P2PU provides to organize peer-generated learning experiences, and each will be considered in the analysis presented below.

In many ways, P2PU is a social computing platform that crowdsources the creation and implementation of informal online education. Instead of formal teachers or established institutions acting as the sole developers of online courses, any stakeholder can create a course in P2PU. In addition, the mission and ethos of the P2PU community encourages a social process of learning. Courses are not designed merely as content delivery mechanisms, but as spaces that ask members to engage in active learning projects, share their progress and resources, start and participate in discussions, and collaborate to gain knowledge of a topic area. Similarly, P2PU's experiments with open badges are a way to observe how social assessments of learning can be recognized and credited in open, informal learning environments. As a crowdsourced education platform, the success of the community is critically dependent on each member's active and persistent participation.

In this study, the authors sought to better understand the notions of learner participation and engagement in open online courses, and explore how researchers could observe these activities using log and trace data from P2PU. The goal was to take the raw data provided by P2PU, and use descriptive analytics to understand the broader ecosystem of the P2PU community. The main research question guiding this study was:

How have learners participated and engaged with open online learning in P2PU?

In a participatory, open education setting like P2PU, the learning environments themselves are not givens. There is a substantial amount of member effort needed to create the basic learning contexts (e.g., courses and challenges) and assessments (e.g., badges) required for the platform to function. As Butler and Ahn (2013) observe, a great amount of cooperative work and coordination is needed to create materials, mobilize peers, and collaborate in productive ways, before learning can even occur. Thus, the descriptive analysis of the P2PU log data began with an exploration of questions such as:

How many courses and challenges comprise the P2PU community?
What patterns of participation can be observed in these learning contexts?
What are the characteristics of learning environments within P2PU?
How can learners' participation and engagement in these contexts be measured?

These descriptive analyses of the P2PU community can then inform future researchers seeking to explore complex relationships between learning context characteristics, learner participation, and engagement.

Method

The study reported here was part of an ongoing partnership with P2PU, in which the main goals are to: (1) create policies and processes to publicly share datasets of the P2PU platform; and (2) conduct analyses of this widely used open education platform to shed light on the challenges of supporting large-scale, participatory, online learning. The project includes substantial data work to transform raw P2PU log data into usable variables and datasets for researchers in education and other related fields. In addition, the research team is exploring various learning analytics techniques for illuminating the learning processes that occur in this online community (Bienkowski, Feng, & Means, 2012).

In October 2012, P2PU provided the research team with a raw dataset containing a complete record of all elements of the platform (courses, challenges, users, learning activities, etc.). This dataset covered the entire history of P2PU from its beginning in late 2009 to October 2012. The data was a copy of the back-end mySQL database that is the operational foundation for P2PU.org and consisted of 82 tables. In an initial phase of data exploration, the research team developed a high-level mapping of the tables that corresponded to the primary components of the P2PU platform and their relationships to one another (Figure 1). This study utilized log data from those tables that described P2PU Schools, Projects (e.g., the different types of learning contexts), Badges, Users, and user Content contributions.


Figure 1. High-level description of the P2PU relational database structure
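A minimal sketch of this first exploration phase is shown below, using Python's standard-library sqlite3 as a stand-in for the actual MySQL backend; the table names here are illustrative, not the actual 82-table P2PU schema.

```python
import sqlite3

# In-memory SQLite database standing in for a local copy of the P2PU dump.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Toy tables mimicking two of the high-level components shown in Figure 1
# (names are hypothetical, not the real P2PU table names).
cur.execute("CREATE TABLE projects_project (id INTEGER, name TEXT, category TEXT)")
cur.execute("CREATE TABLE badges_badge (id INTEGER, name TEXT, project_id INTEGER)")

# First exploration pass: enumerate the tables so they can be mapped to
# platform components (Schools, Projects, Badges, Users, Content).
tables = [row[0] for row in cur.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['badges_badge', 'projects_project']
```

Against the real dump, the same query pattern (via a MySQL client rather than sqlite3) yields the full table inventory from which the mapping in Figure 1 was built.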

Although the database theoretically provided a rich, fine-grained record of P2PU schools, projects, users, their activity, and badges, the raw log data was structured in terms of operational transactions and entity status records, which did not readily correspond to the constructs most likely to be of interest to researchers and analysts. To overcome this mismatch, the research team first conducted a phase of data exploration and descriptive analytics to construct a holistic understanding of various levels of the P2PU community. On this basis, a project-level dataset was then created that characterized the nature, history, and activity associated with every project (e.g., study group, course, challenge) ever created on the P2PU platform. This analysis dataset included a number of variables for each project, which are described in turn below.

Project name: Name of the P2PU project.
Language: What language the project uses (e.g., English, Spanish).
Project category: What type of project is it (study group, course, or challenge)?
Under development: A binary flag that designates whether a given project is under development or "live" and available to the public.
Deleted: A binary flag designating whether a project has been deleted.
Archived: A binary flag designating whether a project is no longer publicly available and has been archived.
Project pages: A count of how many website pages have been created for a given project.
Organizer count: A P2PU project may have one or more creators or leaders (known as organizers). This variable indicates how many unique organizers are associated with a project.
Participant count: P2PU members can join study groups and courses as active "participants." This variable is a count of how many members have joined and self-designated as participants.
Follower count: Members can also join study groups and courses as "followers," indicating that they will not actively participate but are interested in receiving notices about activity within the course. This variable is a count of how many members have joined and self-designated as followers.
Total number of tasks: Each challenge consists of a series of tasks to be completed by learners. As they complete them, members record their progress by checking off each task. This variable indicates how many tasks are associated with each challenge.
Adopter count: In challenges, members can join as active participants by working on tasks in order to complete the challenge. This variable indicates how many members have begun working on the tasks associated with a challenge.
Completers: In challenges, members can check off tasks as they work to achieve the challenge. When all the tasks are marked as completed, the challenge is said to have been completed. This variable is a count of how many members have completed a challenge.
Average tasks completed: While some individuals complete all the tasks associated with a challenge, many do not. This variable captures the average number of tasks completed by the adopters of each challenge, providing a metric of the overall progress of the members participating in it.
Total number of badges: Some challenges and courses are associated with badges that members can earn if they complete certain tasks. This variable is a count of how many badges were associated with a given project.
Average badges earned: As with tasks, individuals can vary with respect to how far they have progressed on the badges associated with a project. This variable captures the average number of badges completed (from those associated with the project) among the participating members.

These variables were constructed based on an examination of the P2PU interface, consultation with P2PU developers, and detailed analysis of the P2PU platform code, which is available through an open source license. The constructed measures were also validated with SQL queries against multiple combinations of tables in P2PU's backend database.
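One possible shape for such a project-level analysis record is sketched below in Python; the field names are illustrative and do not reflect the actual P2PU schema or variable names.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ProjectRecord:
    """One row of a hypothetical project-level analysis dataset."""
    name: str
    category: str            # "study group", "course", or "challenge"
    under_development: bool  # binary flag: still in development, not live
    deleted: bool            # binary flag: removed from the platform
    organizer_count: int
    participant_count: int
    total_tasks: int
    tasks_completed: list    # tasks completed by each adopter

    @property
    def avg_tasks_completed(self) -> float:
        # Derived variable: average number of tasks completed per adopter.
        return mean(self.tasks_completed) if self.tasks_completed else 0.0

# Illustrative record for a single (hypothetical) challenge.
rec = ProjectRecord("Intro to Webmaking", "challenge", False, False,
                    organizer_count=1, participant_count=12,
                    total_tasks=6, tasks_completed=[6, 3, 2, 1])
print(rec.avg_tasks_completed)  # 3.0
```

Derived measures such as the per-adopter average are computed from the raw per-member logs rather than stored directly, mirroring the transformation from transactional records to analysis variables described above.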

Findings

The P2PU Ecosystem

To explore how learners have participated and engaged with open online learning in P2PU, the authors examined several platform-level metrics that provide a glimpse into P2PU's history, its ecosystem of learning projects, and the members of the community. The primary component of the P2PU learning ecosystem is the project. Projects are bounded collections of dynamic and static content, organizers, participants, and infrastructure. There were 2,034 project records in the P2PU database, broken down into three different learning environment types: 1,135 study groups, 506 courses, and 393 challenges.

Study groups bring groups of informal learners together around educational tasks and discussions. Courses are similar in structure to study groups, but represent more formal learning projects that are open to anyone in P2PU. Courses organize around tasks that learners complete (and are typically structured as weekly tasks), in addition to threaded discussion pages where learners can participate in dialogue and share thoughts. In the recent iteration of P2PU, another learning environment called challenges was introduced. Challenges utilize a similar structure as courses but are asynchronous and persistent (whereas courses ran for a set period of time and learners dispersed). A challenge is a series of tasks and discussion spaces, where a learner can enter and exit at any point. Badges can also be associated with these challenges, which learners earn by completing various tasks. In approximately two years, P2PU has supported the creation of just over 2,000 distinct learning environments that combine a set of common elements to provide rich learning experiences for participants.

While there are numerous projects in P2PU, they vary significantly with respect to their level of development. The majority of the over 2,000 projects in P2PU were tagged as "under development," which means that members experimented with creating these learning environments, but never implemented them live in the community. Most projects appear to have started development but were never finished and released live, and others were deleted from the site over time. Of the total, there were 368 projects (18.09%) that went live and were not deleted from the P2PU platform. Of these projects 159 were challenges, 132 were study groups, and 77 were courses. About 85% of these projects were in English and 12% in Spanish (with less than 3% in other languages).
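These reported counts can be reproduced from the analysis dataset with a simple sanity check; the figures below are taken directly from the numbers above.

```python
# Breakdown of live (implemented, non-deleted) projects by type,
# as reported in the text.
total_projects = 2034
live = {"challenge": 159, "study group": 132, "course": 77}

live_total = sum(live.values())               # total live projects
live_pct = 100 * live_total / total_projects  # share of all project records

print(f"{live_total} live projects ({live_pct:.2f}%)")  # 368 live projects (18.09%)
```

The same arithmetic applied per language (about 85% English, 12% Spanish) gives the linguistic breakdown of the live projects.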

At the same time, it was also observed that in P2PU's nascent history, there has been a general trend of 10 to 25 new learning projects being developed and publicly released each month, with a sharp rise in July 2012 due to a special initiative that month (see Figure 2 below). Together, these statistics suggest that as a platform P2PU is able to successfully encourage individuals to experiment with creating a diverse array of participatory learning environments, but that a relatively small percentage of those projects are ever implemented as operational. Future research should consider the issues related to the development of learning projects in P2PU, strategies for fostering more complete learning project development over time, and analysis of whether this ratio of experimentation to "published" projects is optimal. In addition, there are rich opportunities to understand the barriers that individuals may face in the creation of projects within such a participatory platform. Factors such as a lack of understanding of the P2PU platform, prior knowledge in topic areas, motivation, or lack of community involvement may influence the systematic production and availability of learning projects over time.


Figure 2. Projects created by month ("live" courses only)

Along with this steady development of peer-created learning environments, there were 41,281 registered members in P2PU at the time of writing of this paper. Such membership represents a truly massive group of potential learners, and positions P2PU as an example of a large-scale participatory learning environment. P2PU does not describe itself as a cMOOC environment, but rather a platform to promote open, informal learning. However, taking a broad view of the entire platform, it is an online community of over 40,000 potential learners who can share and create their own participatory learning environments and experiences. This large pool of potential learners is notable as an experiment in participatory, peer-led learning; nonetheless, participation and engagement issues remain, just as in any other online community. The P2PU database provides time-stamped fields for when a member created their P2PU account and the last date they were active in the community (when they logged on). Using these fields, it is possible to create various overall participation metrics, such as how many P2PU members ever returned to the site after first creating their account. It was found that 6,483 members returned to P2PU at least once after account creation (approximately 16% of all registered members). This implies that the majority of members (approximately 84%) created accounts and never again engaged with the community. While this pattern of overall engagement (or relative lack thereof) is a potential problem for P2PU, it is consistent with participation patterns observed in other online environments (Cummings, Butler, & Kraut, 2002). Nevertheless, having over 6,000 active members still represents a substantial pool of learners and contributors in this participatory learning community.
Furthermore, as P2PU has continued to establish itself as an online community, one can also observe that it has consistently attracted 300-400 new members per month (who return to the site) over the course of 2012 (see Figure 3). Overall, these descriptive measures of P2PU's growth and history show an online community that is in an early stage of development, but steadily growing both in terms of individual users and different types of participatory learning environments.


Figure 3. Member accounts created by month (for 6,483 members who have returned at least once to P2PU)
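The return metric described above reduces to comparing two timestamps per member: account creation and last activity. A minimal illustration with toy data (the member records here are invented for the example, not drawn from the actual log):

```python
from datetime import datetime

# Illustrative member records as (account_created, last_active) pairs.
# A member "returned" if their last-active timestamp is later than
# their account-creation timestamp.
members = [
    (datetime(2011, 3, 1), datetime(2011, 3, 1)),   # never returned
    (datetime(2012, 1, 5), datetime(2012, 6, 9)),   # returned
    (datetime(2012, 2, 2), datetime(2012, 2, 2)),   # never returned
    (datetime(2012, 4, 4), datetime(2012, 10, 1)),  # returned
]

returned = sum(1 for created, last in members if last > created)
print(f"{returned}/{len(members)} returned ({100 * returned / len(members):.0f}%)")
```

Applied to all 41,281 registered members, this comparison yields the 6,483 returning members (approximately 16%) reported above.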

To explore the relationship between these two interdependent elements of the P2PU ecology, metrics that describe how members participated and engaged within the context of study groups, courses, and challenges were examined. Members in P2PU can relate to projects in various ways. They can organize and create projects, or they can join in on projects as participants or followers. Of the 6,483 members of P2PU who logged in at least once more after creating an account, 4,730 (73%) have signed up to be an organizer, participant, or follower of at least one learning project. This suggests that of P2PU members who do return to the site, a substantial number attempt to become active members in projects by organizing a project or signing up to be a participant.

There was also significant variation in the number of members participating in different learning environments both within and across the different types (see Table 1). For example, study groups and courses in P2PU typically had 1-2 organizers who created the learning group and initiated activities. The majority of these learning groups were small. The median number of participants in study groups was three members and the median in courses was five members. However, some study groups and courses grew into quite large learning environments. Study groups reached up to 370 participants and 1,167 followers (individuals who followed along but did not actively participate). The largest courses grew to 173 members and 319 followers.

Table 1. Number of members who participated in P2PU study groups, courses, and challenges (live projects only)

Membership in P2PU challenges is organized a little differently. Learners choose to adopt the challenge and the platform also keeps track of who has completed a given learning challenge (completers). Again, a majority of challenges are small, with the median number of adopters being two members – although the distribution is skewed with some challenges successfully having up to 158 (current) adopters and 694 completers. These descriptive statistics highlight the wide diversity of participatory learning environments that arise in P2PU. Many study groups, courses, and challenges attract only a few like-minded individuals. However, some projects attract hundreds of learners to create larger-scale learning environments. Again, while the skewed distribution of participation potentially presents a challenge for P2PU and other open learning environments, it is consistent with the distribution of group and forum sizes that has been observed in other online contexts (Butler, 2001; Butler & Wang, 2012; Jones, Ravid, & Rafaeli, 2004). Future research that explores the relationships between support for development of focused learning environments, characteristics of the created learning environments, and the resulting patterns of participation would make substantial contributions to understanding how open learning communities such as P2PU can function more effectively.
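The medians and maxima reported here and in Table 1 are straightforward to derive from per-project participant counts; the sketch below uses illustrative (not actual) count lists chosen so that the summary statistics match the values reported in the text.

```python
from statistics import median

# Hypothetical per-project participant counts, grouped by project type.
# The real analysis computes the same statistics over the 368 live projects.
participants = {
    "study group": [2, 3, 3, 8, 370],   # median 3, max 370 (as reported)
    "course": [4, 5, 5, 20, 173],       # median 5, max 173 (as reported)
    "challenge": [1, 2, 2, 7, 158],     # median 2 adopters, max 158 (as reported)
}

for kind, counts in participants.items():
    print(kind, "median:", median(counts), "max:", max(counts))
```

The gap between each median and maximum makes the skew visible: most projects are small, while a handful grow into large-scale learning environments.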

Tasks and Badges: Opportune Measures of Engagement

As the P2PU platform has developed, support for different types of learning environments has been added. Initially, study groups (time-bounded, unstructured group interaction spaces) were supported, then courses (time-bounded, guided exploration of topics, with associated group interaction). Most recently, challenges were introduced as ways to organize learning. Challenges are organized as an informal syllabus, consisting of a checklist of tasks. P2PU members join challenges and, when they complete various learning tasks, they check off these tasks as completed. One unique affordance of this feature is that P2PU logs a record of challenges, tasks, adopters, and when these adopters checked off tasks as complete. Using these logs, measures of the persistence and engagement of participants in P2PU challenges as of October 2012 could be calculated.

The detailed logging of tasks provides some nuanced ways to conceptualize and observe persistence and engagement. For example, challenges in P2PU had an average of 4.67 learning tasks that members were asked to complete (Table 2). On average, members reported completing roughly half of the available tasks (2.24 average number of tasks completed per adopter). Interesting future research streams emerge from this descriptive data. For example, using the P2PU data it is possible to examine longitudinal trends of when P2PU members complete tasks and how other variables influence this task completion. In addition, future research might examine how characteristics of challenges, tasks, and P2PU members and their social networks relate to the completion of tasks over time.

Table 2. Tasks and badges associated with challenges in P2PU

Measures reported: average number of tasks completed per adopter; average number of badges earned per member.

Note. N = 159 active challenges in P2PU.
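The averages in Table 2 can be computed directly from the task logs. The sketch below uses toy challenge records standing in for the 159 live challenges; the values are illustrative and do not reproduce the reported figures of 4.67 tasks per challenge and 2.24 tasks completed per adopter.

```python
from statistics import mean

# Toy challenge records: (number of tasks defined, tasks completed per adopter).
challenges = [
    (6, [6, 3, 1]),  # 6 tasks; three adopters completed 6, 3, and 1 tasks
    (4, [2, 2]),     # 4 tasks; two adopters completed 2 tasks each
    (3, [3]),        # 3 tasks; one adopter completed all 3
]

# Average number of tasks defined per challenge.
avg_tasks = mean(n for n, _ in challenges)

# Average number of tasks completed, pooled across all adopters.
avg_completed = mean(c for _, done in challenges for c in done)

print(round(avg_tasks, 2), round(avg_completed, 2))
```

The same two aggregations over the real log tables yield the Table 2 values, and grouping completions by timestamp instead would support the longitudinal analyses proposed above.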

In addition to tasks, P2PU had recently begun piloting an open badge system based on the Mozilla Open Badge Framework (The Mozilla Foundation & P2PU, 2012). The idea behind open badges is to provide publicly visible credentials for informal learning, and there is a rich area of future research to examine the varied social, cultural, and educational functions of online badges and informal credentialing (Abramovich, Schunn, & Higashi, 2013; Antin & Churchill, 2011). As with study groups, courses, and challenges, members of P2PU can create badges that other members can earn by completing different learning tasks and challenges. Badges are also logged in the P2PU system and provide a way to measure and understand learner achievements and their engagement with learning over time.

As of October 2012, the implementation of badges was in an early stage. As a result, many challenges did not have associated badges (i.e., challenges had an average of less than one badge associated with them). Some challenges had up to eight associated badges, with some activity in members earning badges (Table 2). In future research, some interesting avenues could include examining learners' trajectories of badge earning and the different social dynamics at play in P2PU in relation to badge earning (e.g., to better understand who creates and earns badges, as well as what the predictors of badge earning are).

Conclusion

One lesson from the recent rise of MOOCs (Pappano, 2012) is that new forms of online education will most likely endure, evolve, and become a vital part of how education is delivered in the future. Much of the initial excitement and attention to MOOCs has arisen from a particular version of online learning, the xMOOC. This paper has highlighted how alternative, participatory forms of education production and delivery can develop within and be supported by social platforms such as P2PU. In such participatory, collective cMOOC contexts, participation and engagement take on varied forms. Courses must be developed, assessments created, fellow learners recruited, and learning environments sustained over time through engagement and participation. These different functions bring about new challenges to sustaining robust, large-scale, participatory learning environments. These challenges illuminate potentially high-impact research areas that could make a substantial contribution to improving and evolving large-scale online learning alternatives to the high-profile examples that are popular today (Coursera, etc.).

Through the descriptive analysis of P2PU presented in this paper, some of these challenges have been highlighted and potential research directions for future work illuminated. P2PU is clearly an early-stage, but growing, online community. From the log data, it is evident that steady streams of members are joining the community over time, and that these members are actively experimenting with creating their own learning environments online. However, a substantial number of learning projects on P2PU are never released live and implemented. Interesting future questions remain about the experiences of learners who take on the role of course creators, including the following:

What are their motivations?
What challenges do they face in creating participatory learning environments?
What skills are needed to create effective courses, such as instructional design, and, more importantly, how can the learning of these skills be "scaled out" to everyday individuals to foster more effective crowdsourced course development?
What factors are related to courses actually being implemented in P2PU?

Research in this stream would shed light on issues such as new forms of organization, instructional design, and production of education in a participatory, open, social computing context.

The analysis also highlights the wide variability present in the online learning environments that are present in P2PU. In a participatory setting such as P2PU, the majority of learning projects serve small groups of informal learners who come together around specific topics. These learning arrangements represent the long-tail of the ecosystem of learning contexts in P2PU with many courses that serve a few individuals around niche interests (Brown & Adler, 2008). A small number of P2PU projects successfully garnered the participation of hundreds and thousands of interested learners, and could be viewed as a form of cMOOC. Individual P2PU projects that garner thousands of students introduce new issues of educational practice at scale. However, the aggregation of hundreds and thousands of smaller learning arrangements in a platform such as P2PU also combines to create an alternative, and uniquely large-scale, ecology of education opportunities. Future questions remain concerning how to foster robust learning communities from this ecological perspective (Butler & Ahn, 2013). How many courses and members are needed to foster participation and learning activities? What factors within courses or learning projects are related to fostering engagement over time (e.g., Ahn, Weng, & Butler, 2013)? How can platforms such as P2PU foster the development of participatory learning behaviors such as social network formation, knowledge sharing, and active learning opportunities?

Finally, the P2PU platform also affords opportunities to examine concepts such as participation and engagement through new artifacts such as online learning tasks and open badges. It has been demonstrated here how log data of learning tasks in P2PU can show the variability of learning activity conducted by members. Future work is possible to examine longitudinal patterns of the learning tasks P2PU members undertake, which may provide indicators of sustained participation and engagement. In addition, research that can examine the relationship between factors such as course design, social interaction, and networks with the completion of learning tasks over time can make a substantial contribution to fundamental questions around supporting online learners to persist and not drop out of courses over time. P2PU's early piloting of open badges, and their logging of badge creation and earning behaviors, can also serve as an exciting context to explore the social, cultural, and educational functions of badges in online learning. These research areas promise to rise in importance as badges, and other artifacts of informal educational credentialing, become a major element of online learning in the future.

Overall, P2PU provides a rich environment for addressing many fundamental questions about the development, operation, and use of large scale, open, online learning platforms. While this work is still in its early stages, it provides an initial look at the way that focused learning environments, such as P2PU study groups, courses, and challenges, are developed (or not), are adopted (or not), and are engaged by learners (or not). Realizing the potential of MOOCs and other forms of large scale, technology-enabled learning environments will depend not only on understanding how to provide appropriate experiences for individual learners, but also on our ability to design platforms that provide the affordances necessary to support development of diverse populations of focused learning environments.

References

Abramovich, S., Schunn, C., & Higashi, R. M. (2013). Are badges useful in education? It depends upon the type of badge and expertise of learner. Educational Technology Research & Development, 61(2), 217-232. doi:10.1007/s11423-013-9289-2

Ahn, J., Weng, C., & Butler, B. S. (2013). The dynamics of open, peer-to-peer learning: What factors influence participation in the P2P University? In R. H. Sprague, Jr. (Ed.), Proceedings of the 46th Annual Hawaii International Conference on System Sciences (pp. 3098-3107). Los Alamitos, CA: IEEE Computer Society. doi:10.1109/HICSS.2013.515

Antin, J., & Churchill, E. F. (2011, May). Badges in social media: A social psychological perspective. Paper presented at the ACM SIGCHI Conference on Human Factors in Computing Systems, Vancouver, Canada. Retrieved from http://labs.yahoo.com/files/Antin%20&%20Churchill%20-%20Badges%20in%20Social%20Media.pdf

Bienkowski, M., Feng, M., & Means, B. (2012). Enhancing teaching and learning through educational data mining and learning analytics: An issue brief. Washington, DC: U.S. Department of Education, Office of Educational Technology. Retrieved from http://www.ed.gov/edblogs/technology/files/2012/03/edm-la-brief.pdf

Brown, J. S., & Adler, R. P. (2008). Minds on fire: Open education, the long tail, and Learning 2.0. EDUCAUSE Review, 43(1), 16-32. Retrieved from http://www-cdn.educause.edu/ir/library/pdf/ERM0811.pdf

Butler, B. S. (2001). Membership size, communication activity, and sustainability: A resource-based model of online social structures. Information Systems Research, 12(4), 346-362. doi:10.1287/isre.12.4.346.9703

Butler, B. S., & Ahn, J. (2013, February). Ecological perspectives on creating and sustaining open learning environments. Paper presented as part of the Workshop on CSCW and Education at the 16th ACM Conference on Computer Supported Cooperative Work, San Antonio, TX. Retrieved from http://www.ahnjune.com/wp-content/uploads/2012/12/CSCW-Position-Paper-_final.pdf

Butler, B. S., & Wang, X. (2012). The cross-purposes of cross-posting: Boundary reshaping behavior in online discussion communities. Information Systems Research, 23(3), 993-1010. doi:10.1287/isre.1110.0378

Cummings, J. N., Butler, B., & Kraut, R. (2002). The quality of online social relationships. Communications of the ACM, 45(7), 103-108. doi:10.1145/514236.514242

Howe, J. (2006, June 2). Crowdsourcing: A definition [Web log post]. Retrieved from http://crowdsourcing.typepad.com/cs/2006/06/crowdsourcing_a.html

Jones, Q., Ravid, G., & Rafaeli, S. (2004). Information overload and the message dynamics of online interaction spaces: A theoretical model and empirical exploration. Information Systems Research, 15(2), 194-210. doi:10.1287/isre.1040.0023

Kafai, Y. B., & Peppler, K. A. (2011). Beyond small groups: New opportunities for research in computer-supported collective learning. In H. Spada, G. Stahl, N. Miyake, & N. Law (Eds.), Connecting computer-supported collaborative learning to policy and practice. Proceedings of the Ninth International Conference on Computer-Supported Collaborative Learning (CSCL 2011) (Vol. 3, pp. 17-24). Atlanta, GA: International Society of the Learning Sciences. Retrieved from http://www.gerrystahl.net/proceedings/cscl2011/cscl2011proceedingsIII.pdf

McAuley, A., Stewart, B., Siemens, G., & Cormier, D. (2010). The MOOC model for digital practice. Charlottetown, Canada: University of Prince Edward Island. Retrieved from http://www.elearnspace.org/Articles/MOOC_Final.pdf

The Mozilla Foundation, & Peer to Peer University (with The MacArthur Foundation). (2012). Open badges for lifelong learning. Retrieved from https://wiki.mozilla.org/images/b/b1/OpenBadges-Working-Paper_092011.pdf

Pappano, L. (2012, November 2). The year of the MOOC. The New York Times, ED26. Retrieved from http://www.nytimes.com/2012/11/04/education/edlife/massive-open-online-courses-are-multiplying-at-a-rapid-pace.html

Parameswaran, M., & Whinston, A. B. (2007). Social computing: An overview. Communications of the Association for Information Systems, 19, 762-780. Available from the AIS Electronic Library. (http://aisel.aisnet.org/cais/vol20/iss1/1)

Siemens, G. (2012, March 5). MOOCs for the win! [Web log post]. Retrieved from http://www.elearnspace.org/blog/2012/03/05/moocs-for-the-win/

Waters, A. (2012, July 23). Dropping out of MOOCs: Is it really ok? [Web log post]. Retrieved from http://www.insidehighered.com/blogs/hack-higher-education/dropping-out-moocs-it-really-okay



Patterns of Engagement in Connectivist MOOCs



Colin Milligan

Research Fellow
Caledonian Academy
Glasgow Caledonian University
Glasgow G4 0BA UK
colin.milligan@gcu.ac.uk

Allison Littlejohn
Professor and Director
Caledonian Academy
Glasgow Caledonian University
Glasgow G4 0BA UK
allison.littlejohn@gcu.ac.uk

Anoush Margaryan
Senior Lecturer
Caledonian Academy
Glasgow Caledonian University
Glasgow G4 0BA UK
anoush.margaryan@gcu.ac.uk

Abstract

Connectivist massive open online courses (cMOOCs) represent an important new pedagogical approach ideally suited to the network age. However, little is known about how the learning experience afforded by cMOOCs is suited to learners with different skills, motivations, and dispositions. In this study, semi-structured interviews were conducted with 29 participants on the Change11 cMOOC. These accounts were analyzed to determine patterns of engagement and factors affecting engagement in the course. Three distinct types of engagement were recognized – active participation, passive participation, and lurking. In addition, a number of key factors that mediated engagement were identified including confidence, prior experience, and motivation. This study adds to the overall understanding of learning in cMOOCs and provides additional empirical data to a nascent research field. The findings provide an insight into how the learning experience afforded by cMOOCs suits the diverse range of learners that may coexist within a cMOOC. These insights can be used by designers of future cMOOCs to tailor the learning experience to suit the diverse range of learners that may choose to learn in this way.

Keywords: massive open online course (MOOC), connectivist massive open online course (cMOOC), connectivism, lurking, networks, active participation, passive participation

Introduction

Pedagogical models of learning online have been extensively theorized, particularly with respect to the interrelationship of technology and pedagogy (Anderson & Dron, 2010; Garrison, 1997; Kanuka & Anderson, 1999). As we approach near ubiquity of networked connections between people, content, and tools (reflecting the "networked society" described by Castells, 1996), researchers such as George Siemens have proposed new pedagogical approaches based on the principles of connectivism (Siemens, 2005), through which learning is viewed as residing in the connections that exist between people and digital artifacts within this ubiquitous network. One example of connectivist pedagogy in action is the massive open online course (MOOC) format pioneered by Siemens along with his colleague Stephen Downes, first in the Connectivism and Connected Knowledge 2008 (CCK08) course (Downes, 2008) and thereafter in many subsequent courses including the Change11 course that forms the basis of this study. These MOOCs, known as connectivist or cMOOCs, focus on knowledge creation and generation rather than "knowledge duplication" (Siemens, 2012, para. 3). In cMOOCs, the learners take a greater role in shaping their learning experiences than in traditional online courses, while facilitators focus on fostering a space for learning connections to occur. While cMOOCs can empower learners to take control of their learning, there remains a question about how the learning experience afforded by these cMOOCs is suited to learners with different skills, motivations, and dispositions. As cMOOCs are a relatively new phenomenon, there are few studies that have explored these issues (e.g., Kop & Fournier, 2010; Mackness, Mak, & Williams, 2010). The overall aim of this study is to address this lack of empirical data.

This study focuses on two research questions:

1. What patterns of engagement exist within the Change11 cMOOC?
2. What principal factors mediate this engagement?

By gaining a deeper insight into the patterns of engagement in cMOOC courses, this study seeks to show how future cMOOCs can be designed to better support the learning needs and expectations of the wide range of learners that coexist within them. The paper begins with a review of relevant literature, focusing on existing research examining the learning experience in cMOOCs. Next, the Change11 course context is described, followed by a description of the methodology adopted in this study and the sample studied. The study findings are then presented and discussed. Finally, recognizing the limitations of this study, the implications of these findings for research and practice are discussed, particularly in relation to the design of future cMOOC learning experiences.

Literature Review

Connectivist MOOCs emerged initially as an instantiation of the pedagogic principles of connectivism developed by Siemens and Downes, and their first MOOC, CCK08, naturally explored the topic of connectivism as it attracted participation from learning researchers and practitioners who had been following the evolution of these ideas. Subsequent large-scale cMOOCs, such as Personal Learning Environments Networks and Knowledge (PLENK2010) and Connectivism and Connected Knowledge 2011 (CCK11), have continued in the same vein, exploring similar topics and attracting participants eager to experience the cMOOC format as well as the course content itself. This has led to the emergence of a rather unusual research base in which a small amount of empirical research, published in niche journals and peer-reviewed conferences, is supplemented by a large body of more anecdotal and reflective work published outside the traditional peer-reviewed journal system. To date, the key empirical research has been carried out by three groups of researchers: Fini (2009), Mackness et al. (2010), and Kop and her colleagues (Kop, 2011; Kop & Fournier, 2010; Kop, Fournier, & Mak, 2011), exploring the cMOOCs listed above.

Software tools for discovery, connection, and co-creation are a fundamental component of cMOOCs, together constituting the platform through which the course is delivered. In his study, Fini (2009) focused on the technological dimension of CCK08, exploring participants' perceptions of the course toolset as well as their views on the course. The research showed that one course component, The Daily newsletter, was valued by participants who used it as a tool to filter and organize their participation in the course, but that other tools, such as the discussion forums provided by Moodle, were viewed less positively. In addition, Fini's research highlighted digital literacy and English language proficiency as key skills needed to participate effectively in the course. The study also provided evidence of different behavior patterns being exhibited by course participants based on their personal objectives, backgrounds, and levels of engagement. As the first cMOOC, CCK08 can be seen as a prototype; and it is interesting to note that subsequent cMOOCs designed by Siemens and Downes adopted different toolsets, specifically ending the use of Moodle as a locus for discussion forums. This had the effect of moving discussion and interaction to blog posts and other spaces controlled by the learners, following the concept of the personal learning environment introduced by Wilson et al. (2007).

Mackness et al. (2010) also studied the CCK08 course, focusing on the four key characteristics of connectivist online courses suggested by Downes (2009): autonomy, diversity, openness, and connectedness/interactivity. Mackness et al.'s study, conducted by survey and e-mail interview, identified inherent tensions between these characteristics. The presence of these tensions raises the question of whether all four characteristics can be accommodated within the concept of a course in the traditionally understood sense, where the term creates expectations of structure, moderation, and support.

Research by Kop and associated researchers focused on the PLENK2010 cMOOC and carried on to examine the CCK11 cMOOC. This research focuses on the learning experience, and in particular on self-directed learning, asking whether cMOOCs facilitate the occurrence of self-directed learning. Kop and Fournier (2010) used quantitative (surveys) and qualitative (ethnography) data, supplemented by social network analysis, to develop an understanding of autonomous learning within the PLENK2010 cMOOC. The research used Bouchard's (2009) four-dimensional model of learner control (conative, algorithmic, semiotic, and economic) to explore how motivation and confidence, the structure of learning, the delivery environment, and the perceived value of learning influenced the learning strategies used by participants within the course. The study highlights additional factors necessary for learning in cMOOC-type environments, principally the critical literacies essential to efficiently evaluate the large quantities of information present in a cMOOC – including an open mindset, the ability to learn cooperatively with others, and heightened critical analysis skills. In another paper based on the same study, Kop (2011) focuses on activities typical of cMOOC (or network)-based learning: aggregation, relation, creation, and sharing. The active and collaborative nature of these activities, similar to other classifications such as the 4C learning behaviors (consume, connect, create, and contribute) identified by Littlejohn, Milligan, and Margaryan (2011), emphasizes the importance of learning as a participatory process (Sfard, 1998) in the context of networked informal learning. Kop (2011) argues that to learn effectively in connectivist environments, learners must have key critical literacies such as those outlined above to be able to engage and participate, and have the confidence and competence to use the tools that mediate the key learning interactions that occur.

The Kop (2011) study also highlights the different types of learning approaches observed in the PLENK2010 cMOOC, recognizing the large number of lurkers (Rovai, 2000) who were present in these courses. In a third paper based on PLENK2010 but also incorporating data from the CCK11 cMOOC, Kop et al. (2011) return to the question of the underlying technology and explore whether specific learning environment designs can support effective learning in cMOOCs, arguing that structures must, at least to some extent, be emergent, and owned by the participants rather than imposed by the facilitators.

These studies have begun to provide a base of empirical data on the learning that occurs in cMOOCs, but there is widespread recognition that more empirical research is urgently needed, particularly given the current widespread interest in the broad spectrum of MOOCs (Daniel, 2012). The available data remain limited, drawn from too few courses and a narrow range of methodologies. This study seeks to build on this work by examining a different course, the Change11 cMOOC, using a different method (interview combined with a survey instrument), to develop a greater understanding of the different patterns of engagement that coexist within cMOOC courses. Of particular interest are the nature of the networks these participants utilize to support their learning and the different factors that affect participation and engagement.

Course Context

The Change11 course was a large-scale cMOOC running from September 2011 to May 2012, organized and facilitated by George Siemens, Stephen Downes, and Dave Cormier. Over 35 weeks, participants were introduced to the work of a range of instructional design researchers and practitioners. Registration was open, and course delivery was supported through a variety of technologies, principally a daily e-mail newsletter and online synchronous seminars delivered via the Elluminate platform. The course attracted more than 2,300 participants. The e-mail newsletter, The Daily, communicated course announcements and content from presenters, while also aggregating blog posts and tweets from participants marked with the hashtag "#change11." This hashtag, along with self-organized spaces such as a Change11 group on Facebook and a multi-author blog associated with the course, provided a universal means of discovering course-related blog posts and content.

Method

The findings reported here represent one component of a larger study examining self-regulated learning behavior within the Change11 cMOOC. Participants for the study were recruited via an invitation and study description included as the first item in The Daily e-mail sent to everyone registered for the Change11 cMOOC during Week 17 of the course. Thirty-five individuals from a total of 2,300 registered learners agreed to participate. Study participants were initially invited to complete a short online survey. The survey used items adapted from a number of existing self-report self-regulated learning (SRL) instruments (Barnard, Lan, Paton, & Lai, 2009; Gijbels, Raemdonck, & Vervecken, 2010; Maclellan & Soden, 2006; Pintrich, Smith, García, & McKeachie, 1991; Schraw & Dennison, 1994; Toering, Elferink-Gemser, Jonker, van Heuvelen, & Visscher, 2012) to enable the researchers to derive SRL profiles for each participant. The survey also collected demographic information (country of residence, employer, and discipline/field) and asked respondents about their experience of previous MOOCs. From the initial sample of 35, 29 study participants were subsequently able to take part in a one-hour semi-structured interview via Skype, which explored various aspects of participation including motivation, goal-setting, and planning strategies, as well as study participants' existing and emergent learning networks, their use of tools to support their learning, and their perceptions of their own participation in the course. Interviews were transcribed and stripped of identifying information. Transcripts were analyzed, and a combination of predefined and emergent codes was used to categorize the data. Ethical standards for the study were adopted in accordance with local regulations: all participants were provided with information about the study and how their data would be used, and were assured confidentiality and anonymity in any published work. Participants were asked to formally indicate their consent and were free to withdraw at any point.

Sample

Twenty-nine course participants out of 2,300 registrants were interviewed in this study. Although small as a proportion of the total registrants, the sample is robust and appropriate for a qualitative study, where a minimum sample of 12 is considered acceptable (Guest, Bunce, & Johnson, 2006). Everyone registered for the course received the e-mail that included the invitation to participate, so all registrants who were still active at Week 17 (midway through the course) would have seen the invitation. One limitation of the sampling procedure is that anyone who had stopped actively following the course by this point would not have participated in the study. This limitation will be discussed in the results and discussion sections. The overarching topic of the cMOOC was instructional design, and all participants were either learning professionals (24, employed as lecturers, teachers, or instructional designers) or graduate students (five). Twelve of the 24 learning professionals were from the higher education sector, 11 were drawn from the K-12 or community college sector, and one was employed to support learning in a workplace outside the education sector. Eighteen of the 29 participants were female and 11 were male. Twenty of the 29 participants had studied in previous cMOOCs.
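As a rough consistency check, the sample composition reported above can be tallied programmatically. This is a minimal sketch using only the counts stated in the text; the category labels and variable names are ours, not the authors'.

```python
# Tally of the reported sample composition (counts taken from the text above).
from collections import Counter

roles = Counter({"learning professional": 24, "graduate student": 5})
sectors = Counter({"higher education": 12,            # of the 24 professionals
                   "K-12/community college": 11,
                   "non-education workplace": 1})
gender = Counter({"female": 18, "male": 11})

total = sum(roles.values())
assert total == 29                                    # 24 + 5 interviewees
assert sum(sectors.values()) == roles["learning professional"]  # 12 + 11 + 1
assert sum(gender.values()) == total                  # 18 + 11

# Share of the 2,300 registrants who were interviewed.
print(f"Interviewed: {total} of 2,300 registrants ({total / 2300:.1%})")
# prints: Interviewed: 29 of 2,300 registrants (1.3%)
```

The breakdowns are internally consistent: each way of partitioning the sample (role, sector, gender) sums to the same 29 interviewees.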

Results

This study addressed two research questions: "What patterns of engagement exist within the Change11 cMOOC course?" and "What principal factors mediate this engagement?" These questions are now addressed in turn.

Patterns of Engagement

The semi-structured interview questions explored the topic of engagement in the course in a number of ways. Questions regarding motivation to do the course were complemented by questions exploring participation behavior. Finally, some questions were designed to probe the makeup of each respondent's personal learning network (their primary networks). Analyzing the responses to these questions allowed the authors to detect a number of different patterns of engagement with the course that are presented in this section.

Three distinct groups of participants were identified in this study: active participants, lurkers, and passive participants. Within these three groups, one key internal difference was observed: the location of each individual's primary network, which might be internal or external to the course. Table 1 shows the number of participants in each category.

Table 1. Engagement and primary networks for the Change11 MOOC

The different categories are described in turn below.

Active participants. The first group identified by this analysis (12 of 29) was described as active participants. These participants had adapted well to the connectivist pedagogy of cMOOCs, maintaining active blogs and Twitter accounts and regularly discussing the course. All of these active participants had formed wholly or primarily internal networks, connecting with other learners through Twitter and blogs. The live Elluminate sessions represented a key opportunity to make connections and widen their networks. One participant, from South America, described the buzz of the chat accompanying the live Elluminate sessions as follows: "You can read the comments of people who are participating from different places and they give links to things that they are doing or they think while you hear what is happening" (Participant 20). Not all active participants attended the live sessions (for reasons such as impractical scheduling in their time zone), but these sessions did seem to be an important focus for those who could attend.

Active participants were highly motivated to persist with the course and were able to overcome challenges that might have proved a barrier to participation for others. For example, one participant described how the live Elluminate sessions were sometimes confusing, and how she overcame this challenge:

"Yes I do, I found that when I was coming in cold to those speakers I was totally lost. So I had to come up with some short, sharp, effective strategies. First one being I'll check out the person on YouTube, I need to be able to get into the same space about what they're talking about, so at least I've got some sense of who they are and what they're going on about. So that was one strategy I used." (Participant 9)

Most active participants were energetic bloggers and Twitter users, using these tools as their main mechanism of communication. Active participants recognized that full participation entailed more than merely broadcasting ideas (creating tweets and blog posts) and had developed strategies to encourage connection with other participants through commenting on other blogs. One participant described how the majority of her contributions were made as responses to other posts: "I have no idea how scattered I am across this MOOC, I have no idea how many contributions I've made, 30? 50? I've got a lot of replies" (Participant 5). This participant went on to describe how she sought to encourage interaction: "So I usually end a reply on an open end. That's one of my, you know, I structure it that way" (Participant 5).

Aside from writing and commenting on blogs and following Twitter, active participants with primarily internal networks described other spaces that were particularly conducive to connection and collaboration. These spaces, established by the participants rather than facilitators, became the most vibrant community spaces for the course. One such space was Facebook, as described by one participant: "I found the Facebook group was where I really engaged with the course and teamed up with a few people there and it was good" (Participant 8). The same participant highlighted the value of some particularly active participants who could be inspirational to the rest of the cohort: "Oh there's some people who are everywhere you turn in the Change11 MOOC: there's this group of people who are inspirational, just phenomenal the way they just keep going and they know their way around it" (Participant 8).

While the majority (nine of 12) of active participants appeared to segregate their internal and external networks (effectively treating Change11 as a closed course), a small number (three of 12) seemed to have merged these networks to some extent – recognizing the artificiality of trying to separate them. One participant described how she actively sought to bring the two networks together when it was appropriate to do so:

"I try to think about taking what I find relevant in the [MOOC] and specifically advocating for it within [my external network] as well. So I do cross-pollinate like that. ... I hope that's helpful to people who normally wouldn't see this kind of stuff." (Participant 25)

The same participant summed up the value of being active in the course as follows, referring both to her motivation and to the development of her digital skills:
"I've gone further with the tools than I had in the past, so getting really familiar with Twitter, with Facebook, with LinkedIn, with Diigo for social bookmarking, for blogging, for setting up a blog ... and I remember when I crossed the threshold of having 1000 views ... yeahhh! It felt good to see the attraction, to have my work referred elsewhere." (Participant 25)

While a connectivist MOOC should cater to all types of participants, active participants represent the ideal learners for this type of course design: not just consuming content, but connecting with others, creating new content, and contributing these new resources back into the course for others to utilize. In fact, without a critical mass of active participants, a connectivist course would fail.

Lurkers. The largest category of engagement identified in this analysis was lurkers (13 of 29). These participants were actively following the course but did not actively engage with other learners within it. These participants were by no means disengaged from the course, or unhappy with their position. Instead, lurking was an active choice for them. One participant offered the view that "lurking is actually hugely beneficial" (Participant 18), before going on to describe how the course content was effectively new knowledge filtered by the course organizers and therefore had "more value than something I randomly come across on the Internet" (Participant 18).

As before, different types of lurking behavior were seen, based on the networks (external or internal) these lurkers engaged with. We saw three subcategories: (1) those who felt they did not engage with any network at all (four of 13); (2) those who engaged with networks external to the Change11 course but not with the internal networks (four of 13); and finally, (3) those who silently participated in internal networks (five of 13). Four of the participants did not engage with any network at all, and had a simple explanation for their behavior: that they were not interested in engaging with others to learn. One explained: "I guess I tend to be a loner and I've done more lurking and I'm quite happy lurking, I think it's an honourable profession" (Participant 21), while another simply stated: "I have not created any links with people, I have not said who I am and what I'm doing" (Participant 15). Even though this group had not engaged with others, the fact that they were still following the course after almost five months shows that the course format is compatible with their needs.

Four participants maintained a complex position whereby they were inactive within the course, but actively shared ideas from the course externally. As one participant reported:

"I'm going out to the MOOC and lurking and getting lots of great interesting ideas and I'm bringing these back to some of my home based networks, both ones within my institution but also ones within my network that I've built up professionally." (Participant 1)

A second participant made a similar point:
"I'm more or less like what do you call? A lurker and not very active ... I'm always invisible and the reason is that the way I've been using the MOOC is to put into things that I'm doing. Like I said, to be a network mentor." (Participant 17)

Both these participants had a clear understanding of why they wanted to participate in Change11. They wanted to apply the new knowledge they had gained to improve their own practice. Similarly, a third lurker in this group had shared her experience in the Change11 MOOC with colleagues at her own institution, while showing little engagement with the course community.

Finally, a group of five participants (all of whom self-identified as lurkers) silently participated in internal networks but did not contribute to the course in any way. Their behavior appeared to be motivated by a lack of confidence. For example, one participant provided the following explanation for her lurking:

"No, because I basically, I got caught up in my own learning and I didn't feel ... [it was worthwhile to contribute my] limited knowledge about what was being discussed. Beyond saying 'oh that resonates with me' well how many times have MOOC'ers said that! And I know it does resonate, but beyond that I couldn't add anything new." (Participant 18)

Others saw silent participation as a step toward fuller participation in future courses, acknowledging that in this course they did not quite have the confidence to participate actively. As one participant remarked: "I did write a blog post but I conveniently wrote it somewhere no one would read [it] and I wrote it in [not English], so more like cathartic to me, rather than me putting it out in the open" (Participant 4). These silent lurkers saw a connection between their level of participation and their success as learners, as exemplified by this quote from one participant: "I would have felt I accomplished more if I had personally networked and participated more" (Participant 33).

The lurker category is somewhat complex, as it spans a spectrum of participants from those who lacked the confidence to participate to those who were so confident that they did not need to participate in the course in what they might regard as "the traditional manner." What links all these lurkers is that the cMOOC format works for them – they have the skills to leverage what they want from the course, on their terms.

Passive participants. The final group identified was the passive participant group. This category of four participants was united by their apparent frustration or dissatisfaction with the course. For one participant, the connectivist nature of the course just did not seem appropriate, as this extensive quote illustrates:
"I wrangled with the whole issue of connectivism, not the concept so much, as my ability to do that, to connect with others, and so I think I was looking inward a little bit and had some difficulty there as to whether or not I could succeed in that aspect as a learner, to be able to really make connections with other people on a deep level. I mean, sure, I can read other people's blogs and that's not a problem, and I comment occasionally, but as far as really putting my ideas out there in the open in my own blog to be trampled on, you know there's a bit of fear there I think that I have and so that has been difficult for me, to really put my ideas out there, which I know is good from the aspect of I could get feedback and learn from that, but there's still something in me that says what do I know? Who am I to contribute my thoughts to the world? I'm just this little person over here." (Participant 12)

Another member of this passive group failed to see the inherent value of learning through the network but instead seemed to be looking for a more formal course:
"I selected the MOOC because it was being led by a well-known, well established, very experienced names [sic] and that's actually the secret, I realize now that what I'm looking for when I'm looking to make a jump in development is actually more guidance rather than freeform learning. The whole point is I can do the freeform anywhere, anytime, 24 hours a day on the net." (Participant 23)

A third member expressed her frustration with not connecting: "I'm not really sure how to find a group of people online who really want to learn about what I most want to learn about" (Participant 13).

The invitation to participate in this study was sent out in Week 17 of the course. It is therefore surprising that these passive participants had persisted with the course to this point; perhaps they could be categorized as dissatisfied lurkers (in contrast to the satisfied lurkers identified above). Despite extensive efforts by the course organizers to accommodate learners of all types in the Change11 cMOOC, via videos and guidance in the orientation week of the course, it seems that the cMOOC format is not suitable for every learner. It is also important to acknowledge that it is likely that many course participants had dropped out by Week 17, and that it is this category of passive participants – whose needs are clearly not being met – who are most likely to have dropped out and are therefore underrepresented in this study.

Factors Affecting Engagement

Our second research question asked, "What principal factors mediate engagement?" From the accounts of engagement in the Change11 cMOOC that were collected, three key factors affecting engagement can be identified:

Confidence. One key factor evident in the responses of both passive participants and lurkers relates to their confidence levels. Here, for example, a participant describes how her lack of confidence discouraged her from sharing her bookmarks socially:
"I actually did sign up with Diigo thinking right, this is it, I'm going to throw myself in. But once I was in the MOOC and running the idea, I actually felt almost as though I would be, instead of sharing with others, I was going to be simply showing my ignorance, even in terms of what I was selecting and why I was selecting it." (Participant 23)

Elsewhere, numerous participants described writing blog posts but not publishing them – in this way they gained the benefit of working through ideas without having to expose those ideas to a potentially critical audience.

Prior experience. A second important factor was participants' prior experience. Twenty of the 29 participants had previously participated in a cMOOC. As shown in Table 2, all but one of the active participants had previously participated in another cMOOC, while none of the four passive participants had. Learning in a cMOOC is fundamentally different from learning in a formal course, and requires some adjustment.

Table 2. Participant group by previous cMOOC experience

Motivation. Finally, motivation was identified as an important determinant of engagement. Several of the active participants described a clear aim associated with their participation in the Change11 cMOOC. For example, this participant had clear ambitions to change his/her practice: "The ultimate aim of participating in this MOOC is to see how I can completely change the way that I teach and illustrate through example how others could do the same" (Participant 32). In contrast, passive participants had less well-formed aims, as illustrated by this quote:
"Maybe that's part of my problem that I didn't maybe have a strong enough particular aim, like I'm interested in the idea of understanding better how they work, but I'm actually more interested in learning about other things that are nothing to do with online learning." (Participant 23)

Discussion and Conclusion

Active participants represent the key group in a cMOOC. A successful course becomes the content that these participants create and share, far more so than the live presentations and course readings. Moreover, the network of connections with other learners is something that can persist long after a cMOOC has ended. The more active or experienced members of the group provide a model for those who are less experienced, and are instrumental in creating the emergent spaces supporting connectedness and interactivity (Downes, 2009), which Kop et al. (2011) have argued are essential to successful connectivist learning environments. One key observation from this study is that all but one of the active participants had previously participated in at least one other cMOOC, while none of the four passive participants had participated in a cMOOC. It seems clear that learners must learn how to learn in a cMOOC.

In contrast to active participants, lurkers can potentially gain all the benefits of the course, but apparently contribute nothing in return. Discussing lurking in traditional online courses, Rovai (2000) describes lurkers as "learners who are bystanders to course discussions, lack commitment to the community, and receive benefits without giving anything back" (p. 291). This negative perception of lurkers was shared by some active participants who felt that being a lurker was incompatible with the concept of a cMOOC course, reliant as it is on participation and the activities of aggregation, relation, creation, and sharing identified by Kop (2011). However, cMOOCs must accommodate learners of all types to satisfy Downes' (2009) diversity and openness criteria. In practice, as long as there is a balance of these different types of learners, then lurkers can be accommodated, and the evidence from this study is that lurkers can learn effectively in connectivist environments: taking the knowledge they acquire to their own external networks.

In this study, the distinction between lurkers and passive participants reflects the observation that lurkers (most, but not all, of whom self-identified as lurkers) were content with their participation in the cMOOC. In contrast, passive participants seemed frustrated with the course, and their behavior indicates that they did not want the autonomy to choose where, when, how, and with whom to learn (Downes, 2009). Learner confidence was also a factor in this group, with some participants indicating that they felt they did not have anything useful to say. Others seemed to lack the critical literacies needed to learn effectively in a connectivist course.

The findings presented here are part of a larger study examining self-regulation of learning in cMOOCs. While the whole study contributes a substantial body of new empirical data about the learning experience afforded by cMOOCs, there are some limitations that must be acknowledged and considered. First, the sampling strategy used (recruitment via a call for participation included in the e-mail newsletter in Week 17 of the course) would not have reached participants who had already dropped out by this stage. Second, the authors noted that participants in this cMOOC were as interested in the cMOOC process as in the Change11 content, and therefore it may be difficult to generalize the findings of this study to other courses. Careful design of future studies could avoid these limitations. Future studies should seek to compare the learning experience offered by different cMOOCs, target specific types of learners, and could attempt to follow learners across different cMOOCs to gain a better understanding of how critical literacies for learning in cMOOCs develop.

Understanding the nature of learners and their engagement is critical to the success of any online education provision, especially those where there is an expectation that the learner should self-motivate and self-direct their learning. Massive courses, by their very nature, bring in learners with a range of backgrounds, previous experience, and skill levels, and it is therefore incumbent on course organizers to design a learning experience that accommodates these diverse learner profiles. The three factors affecting engagement identified in this study (prior cMOOC experience, confidence, and motivation) provide an insight into how organizers of future cMOOCs might address this design challenge. Those participants who have not previously studied on connectivist courses can easily be identified. These participants may be given additional induction, or could be paired with a more experienced student who could act as a mentor. An approach like this would have helped Participant 13, who felt unable to find similar learners in the course. Those participants who lacked confidence could be paired with learners of similar experience to act as "buddies." Finally, learners could be encouraged to identify and articulate clear aims and goals for the course to increase motivation (Locke & Latham, 2002). Goals could also be used as a social object through which learners could find others with similar interests and aspirations (Milligan, Margaryan, & Littlejohn, 2012).

References

Anderson, T., & Dron, J. (2011). Three generations of distance education pedagogy. The International Review of Research in Open and Distance Learning, 12(3), 80-97. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/890/1663

Barnard, L., Lan, W. Y., To, Y. M., Paton, V. O., & Lai, S.-L. (2009). Measuring self-regulation in online and blended learning environments. The Internet and Higher Education, 12(1), 1-6. doi:10.1016/j.iheduc.2008.10.005

Bouchard, P. (2009). Pedagogy without a teacher: What are the limits? International Journal of Self-Directed Learning, 6(2), 13-22. Retrieved from http://www.sdlglobal.com/IJSDL/IJSDL6.2-2009.pdf

Castells, M. (1996). The rise of the network society. Oxford, UK: Blackwell.

Daniel, J. (2012). Making sense of MOOCs: Musings in a maze of myth, paradox and possibility. Journal of Interactive Media in Education, 2012(3). Retrieved from http://jime.open.ac.uk/article/2012-18/html

Downes, S. (2008). Places to go: Connectivism & connective knowledge. Innovate: Journal of Online Education, 5(1). Retrieved from http://www.innovateonline.info/pdf/vol5_issue1/Places_to_Go-__Connectivism_&_Connective_Knowledge.pdf

Downes, S. (2009, February 24). Connectivist dynamics in communities [Web log post]. Retrieved from http://halfanhour.blogspot.co.uk/2009/02/connectivist-dynamics-in-communities.html

Fini, A. (2009). The technological dimension of a massive open online course: The case of the CCK08 course tools. The International Review of Research in Open and Distance Learning, 10(5). Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/643/1402

Garrison, D. R. (1997). Computer conferencing: The post-industrial age of distance education. Open Learning: The Journal of Open and Distance Learning, 12(2), 3-11. doi:10.1080/0268051970120202

Gijbels, D., Raemdonck, I., & Vervecken, D. (2010). Influencing work-related learning: The role of job characteristics and self-directed learning orientation in part-time vocational education. Vocations and Learning, 3(3), 239-255. doi:10.1007/s12186-010-9041-6

Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59-82. doi:10.1177/1525822X05279903

Kanuka, H., & Anderson, T. (1999). Using constructivism in technology-mediated learning: Constructing order out of the chaos in the literature. Radical Pedagogy, 1(2). Retrieved from http://www.radicalpedagogy.org/Radical_Pedagogy/Using_Constructivism_in_TechnologyMediated_Learning__Constructing_Order_out_of_the_Chaos_in_the_Literature.html

Kop, R. (2011). The challenges to connectivist learning on open online networks: Learning experiences during a massive open online course. The International Review of Research in Open and Distance Learning, 12(3), 19-38. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/882/1689

Kop, R., & Fournier, H. (2010). New dimensions to self-directed learning in an open networked learning environment. International Journal of Self-Directed Learning, 7(2), 1-20. Retrieved from http://www.sdlglobal.com/IJSDL/IJSDL7.2-2010.pdf

Kop, R., Fournier, H., & Mak, S. F. J. (2011). A pedagogy of abundance or a pedagogy for human beings: Participant support on massive open online courses. The International Review of Research in Open and Distance Learning, 12(7), 74-93. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/1041/2025

Littlejohn, A., Milligan, C., & Margaryan, A. (2011). Collective learning in the workplace: Important knowledge sharing behaviours. International Journal of Advanced Corporate Learning, 4(4), 26-31. doi:10.3991/ijac.v4i4.1801

Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal setting and task motivation. American Psychologist, 57(9), 705-717. doi:10.1037/0003-066X.57.9.705

Mackness, J., Mak, S. F. J., & Williams, R. (2010). The ideals and reality of participating in a MOOC. In L. Dirckinck-Holmfeld, V. Hodgson, C. Jones, M. de Laat, D. McConnell, & T. Ryberg. (Eds.), Proceedings of the Seventh International Conference on Networked Learning (pp. 266-275). Lancaster, UK: University of Lancaster. Retrieved from http://www.lancaster.ac.uk/fss/organisations/netlc/past/nlc2010/abstracts/PDFs/Mackness.pdf

Maclellan, E., & Soden, R. (2006). Facilitating self-regulation in higher education through self-report. Learning Environments Research, 9(1), 95-110. doi:10.1007/s10984-005-9002-4

Milligan, C., Margaryan, A., & Littlejohn, A. (2012). Supporting goal formation, sharing and learning of knowledge workers. In A. Ravenscroft, S. Lindstaedt, C. Delgado Kloos, & D. Hernández-Leo (Eds.), 21st century learning for 21st century skills: Proceedings of the Seventh European Conference on Technology Enhanced Learning (EC-TEL 2012), Lecture Notes in Computer Science, Vol. 7563 (pp. 519-524). Heidelberg, Germany: Springer. doi:10.1007/978-3-642-33263-0_53

Pintrich, P. R., Smith, D. A. F., García, T., & McKeachie, W. J. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ). Ann Arbor, MI: University of Michigan, National Center for Research to Improve Postsecondary Teaching and Learning. Available from ERIC database. (ED338122)

Rovai, A. P. (2000). Building and sustaining community in asynchronous learning networks. The Internet and Higher Education, 3(4), 285-297. doi:10.1016/S1096-7516(01)00037-9

Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educational Psychology, 19(4), 460-475. doi:10.1006/ceps.1994.1033

Sfard, A. (1998). On two metaphors for learning, and the dangers of choosing just one. Educational Researcher, 27(2), 4-13. doi:10.3102/0013189X027002004

Siemens, G. (2005). Connectivism: A learning theory for the digital age. International Journal of Instructional Technology and Distance Learning, 2(1), 3-10. Retrieved from http://www.itdl.org/Journal/Jan_05/article01.htm

Siemens, G. (2012). MOOCs are really a platform [Web log post]. Retrieved from http://www.elearnspace.org/blog/2012/07/25/moocs-are-really-a-platform

Toering, T., Elferink-Gemser, M. T., Jonker, L., van Heuvelen, M. J. G., & Visscher, C. (2012). Measuring self-regulation in a learning context: Reliability and validity of the Self-Regulation of Learning Self-Report Scale (SRL-SRS). International Journal of Sport and Exercise Psychology, 10(1), 24-38. doi:10.1080/1612197X.2012.645132

Wilson, S., Liber, O., Beauvoir, P., Milligan, C., Johnson, M., & Sharples, P. (2007). Personal learning environments: Challenging the dominant design of educational systems. Journal of e-Learning and Knowledge Society, 3(2), 27-38. Retrieved from http://www.je-lks.org/ojs/index.php/Je-LKS_EN/article/download/247/229

Acknowledgments

The authors would like to thank all those participants who gave their time for this study. They would also like to thank Lou McGill for conducting the interviews and Susan Houston for transcribing the audio of those interviews.

