
Evaluating technologies for education: case of ePortfolio

Abstract

The purpose of this paper is to develop a decision model to evaluate and select a campus-wide ePortfolio solution using hierarchical decision modeling. A case study from Portland State University is used to validate the model. By utilizing a hierarchical structure, the business requirements and candidate solutions were filtered down to a manageable set of objectives, factors, criteria, and alternatives. The model was validated by a panel of subject-matter experts, and the importance of the factors and criteria was ranked and then analyzed through pairwise comparisons. The final outcome of the model is a decision and recommendation on the best ePortfolio platform for PSU based on the selected alternatives.

Background

Portland State University (PSU) is in the process of “ReThinking” the “changing models of educational delivery, declining state funding, alternative credentialing, demographic shifts in student populations, questions concerning the relevancy of the curriculum, increased cost, and increasing legislative scrutiny.” ReThink PSU (2014) is a program developed by Provost Dr. Sona Andrews, and the Provost Challenge is the pilot effort in this overarching process. This pilot program offered $3 million in funding for new and innovative ideas in three “challenge” categories: acceleration, reframing, and inspiration. The campus community presented over 160 unique programs, tools, or productivity enhancements, from which 24 projects were selected that provide “innovative faculty-staff activities to accelerate online learning and the use of innovative technology in educational delivery and to improve student success and graduation.” Reframing challenge #169 is the focus of this paper: an ePortfolio initiative to transform learning and assessment at PSU.

This challenge is divided into several phases. The initial phase is to explore requirements and select an ePortfolio and assessment platform for use at PSU. The following phases involve bringing the solution online and designing it around the pedagogical and functional needs of the campus schools and colleges. For this paper, our focus was entirely on the requirements gathering, classification, and selection of an ePortfolio solution. As the reframing challenge has evolved, overarching goals have become clearer. The primary goal of the ePortfolio program is to focus on students and learning; the secondary goal is centered on assessment.

With an online ePortfolio solution, the idea centers on providing an electronic portfolio in which students can present their learning artifacts and showcase their academic work to prospective schools, programs, or job opportunities. The solution gives students a place and method for designing a professional collection of works that can show a student’s experience, value, and skills more concretely than an academic transcript. In this way, by providing a platform in which to share a student’s knowledge, experience, and ideas, the ePortfolio solution provides the student with a tangible asset.

By engaging students inside and outside of the classroom, learning can be significantly enhanced. When students create their learning artifacts, others can then explore them. The artifacts are no longer a stack of papers for only the professor to examine and appreciate; they become part of the environment in which each student is learning, enhancing the engagement and community of the learning space. It is this communalization and community of knowledge that can lead to an enhanced learning environment for all students.

Lastly, assessment is a core requirement from a multitude of angles. From the student’s perspective, they need to be assessed to receive a grade. From a programmatic standpoint, assessment is necessary to understand trends and effectiveness, find blind spots in student learning, and understand where and how to set future goals. At an institutional level, assessment procedures are a mandatory requirement for remaining accredited. Programs across the university rely on a myriad of different tools to conduct these various levels of assessment. The goal of this project is to provide a platform that is malleable, robust, and efficient enough to support these multiple layers of assessment. There are core areas that this assessment will engage with at first, namely the University Studies ePortfolio assessments, but there is a strong upwelling of need and interest in a solution that could be leveraged across many other academic units and contexts.

Case analysis

At PSU there is a diverse need for a portfolio platform and, sometimes viewed separately, for an assessment platform. Many colleges and programs have been using freely available tools, such as Google Sites, or have purchased small licenses for program-specific needs. This diversity has resulted in platform sprawl without a single source that can meet the needs of each constituent. Some of the larger stakeholders, in University Studies and the School of Business, are key sponsors of this project. By evaluating the needs of the majority and narrowing the focus to the two key goals above, a single platform choice will offer a centralized, useful, well-maintained service to the entire campus community and greatly reduce cost in comparison to what the present decentralized model affords.

As part of this analysis, several platforms had to be selected for evaluation. In the official evaluation process, a request for information and a request for quote will be sent to various providers, who will respond in detail with their platform capabilities and costs. Until that process is completed, it is impossible to know which companies will respond and which products will be evaluated under final review. Instead, we relied on expert opinion and research to determine which platforms to evaluate. This initial appraisal delivered three platforms for consideration.

Desire2Learn’s ePortfolio tool is an additional module to the Desire2Learn (D2L) learning management system (LMS), a product that PSU already owns and operates as its primary online learning system. Tk20 is an assessment-focused tool that also creates electronic portfolios that directly integrate into its assessment frameworks. Lastly, Digication (2014) is an ePortfolio platform that was launched in 2004 out of the Rhode Island School of Design as a means to share rich media created by students.

These three platforms also represent three overarching product categories that are often prevalent in platform selections such as this: “integrated within an existing large product platform (LMS),” “an assessment tool with portfolios added on,” and “portfolios with assessment added on.” Evaluations of these platforms will likely show key differences between the methods used to implement each solution and design decisions made by each platform creator in balancing their priorities.

Literature review

Technology in education has been studied in many fields. Researchers have tried to understand computer anxiety and its impact (Tekinarslan 2008; Cazan et al. 2016), how context plays a role (Hallajow 2016), and how academic performance and stress are affected (Cerretani et al. 2016). The use and adoption of more advanced platforms have also been a focus (Boswell 2016; Farid et al. 2015; Pipes and Wilson 1996).

Evaluating technologies requires comprehensive approaches (Tran and Daim 2008; Daim and Kocaoglu 2008, 2009). Adoption is one of the most studied areas in technology management. Studies on the adoption of information technologies, including enterprise systems (Kerimoglu et al. 2008; Basoglu et al. 2007), mobile platforms (Basoglu et al. 2014; Kargin et al. 2009), personal software (Tanoglu et al. 2010), and online services (Seneler Ozen et al. 2010; Seneler et al. 2009a, b), indicate that there are multiple factors important to consider.

One of the widely used methods for evaluating technologies is hierarchical decision modeling (HDM). It is a robust method that has been used in areas including energy (Daim et al. 2009; Daim and Intarode 2009; van Blommestein and Daim 2013; Daim et al. 2013; Wang et al. 2010), design (Hallum and Daim 2009), and human resource management (Harrel and Daim 2010). Studies exploring the interface between education and computers have also used HDM (Tseng 2010; Huang et al. 2011; Lin 2010; Wu and Lin 2012; Shee and Wang 2008; Bhuasiri et al. 2012).

There is a copious amount of research and documentation covering hundreds of methods that can be used for decision-making when choosing a technology or project. Caution is warranted when deciding which model to use, because each model can only produce certainty within its own parameters. Two types of models were considered when constructing the HDM strategy: numeric and nonnumeric models.

The nonnumeric models evaluated include, but are not limited to, the Sacred Cow, Operating and Competitive Necessity, the Comparative Benefit Model, Cognitive Modeling, and Expert Judgment. The Sacred Cow model appears to be how many corporations choose projects: the most senior and powerful executive of the company is usually the one who makes the final decision. This method can be taxing on a company because the project will, in most cases, be pursued until the leader of the organization pulls the plug, which can cost the company substantial funds and relationships.

In the Comparative Benefit Model, proposals for a project are reviewed against criteria determined by the organization, along with various goals and benchmarks. Another type of nonnumeric modeling is cognitive modeling, in which management learns the procedure so that the process can be replicated to make similar decisions. The last nonnumeric model discussed is expert judgment, in which experts' opinions are used to weigh the probabilities and outcomes of the decisions being considered.

Methodology

The complexity of the model proved to be a challenge when evaluating the ePortfolio product. The steps considered in preparing the HDM were hierarchy development, validation of the hierarchy, portfolio creation, and sensitivity analysis. Breaking the hierarchical model down, the first two levels reflect the mission statement and goals, while the criteria and factors used for evaluating the choices comprise the next levels.

The sole mission of this project is to identify the business requirements across the PSU campus for the creation of a campus-wide ePortfolio solution. These requirements were used to generate an HDM, which was weighted by appropriate experts. The data were analyzed, and the outcome is a recommendation to PSU about which ePortfolio platform best fits its actual needs. Our objectives and criteria focus on service delivery, cost, functional, and technical aspects. These criteria are discussed further in the next section; the structure of the model is shown in the Appendix. The ultimate goal is to select the best ePortfolio platform for use at PSU. The selected alternatives are D2L, Digication, and Tk20. The factors we consider are the genre/specification categories rated by unit- or functional-level experts. The specific platform choices, rated by researchers or experts, drive the model and dictate which alternative should be chosen.

In order to make the best decision for the previously described problem, a hierarchical decision model (HDM) is employed. The HDM is frequently used in academia as well as in real-world applications. It relies on a hierarchical structure to analyze strategic decisions: objects in the higher levels influence those on the lower levels. Once the model is built, it relies on objective data and subjective expert ratings. The data are entered in the form of pairwise comparisons for each level. The number of comparisons that need to be made is \(N(N-1)/2\), so each pair of objects only needs to be compared once. After the comparisons are entered, the judgments are quantified using the constant-sum model. Because the data entered into the model come from human judgment, inconsistencies will occur; for this reason, the constant-sum model has to be applied for all possible matrix orientations.
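To make the mechanics concrete, the sketch below derives relative weights from constant-sum pairwise judgments. It is a minimal illustration, not the HDM software used in this study: the point allocations are hypothetical, and the weights are computed as normalized geometric means of the ratio matrix rows, whereas the published tool aggregates judgments across matrix orientations and across experts.

```python
from math import prod

def constant_sum_weights(names, judgments):
    """Derive relative weights from constant-sum pairwise judgments.

    judgments[(a, b)] is the number of points (out of 100) an expert
    allocates to `a` when comparing it with `b`. Ratios are assembled into
    a full matrix and weights are normalized geometric means of the rows.
    """
    n = len(names)
    assert len(judgments) == n * (n - 1) // 2  # N(N-1)/2 comparisons per level

    ratio = {(a, a): 1.0 for a in names}
    for (a, b), pts in judgments.items():
        ratio[(a, b)] = pts / (100.0 - pts)
        ratio[(b, a)] = (100.0 - pts) / pts

    geo = {a: prod(ratio[(a, b)] for b in names) ** (1.0 / n) for a in names}
    total = sum(geo.values())
    return {a: geo[a] / total for a in names}

criteria = ["service delivery", "cost", "functional", "technical"]
judgments = {  # hypothetical point allocations, not the panel's actual data
    ("service delivery", "cost"): 65, ("service delivery", "functional"): 40,
    ("service delivery", "technical"): 50, ("cost", "functional"): 25,
    ("cost", "technical"): 35, ("functional", "technical"): 60,
}
print(constant_sum_weights(criteria, judgments))
```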

The HDM can be tested for robustness via multiple approaches. First, the disagreement and inconsistency levels can be determined. The disagreement level describes how well different experts agree with each other, while the inconsistency level describes how consistent an individual expert is in their own ratings. For example, if an expert rates A better than B and B better than C, then A must also be rated better than C. For both measures, a score lower than 0.1 is regarded as acceptable. Lastly, the model can undergo sensitivity analysis to offset inconsistency and accuracy concerns.
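The transitivity idea behind the inconsistency check can be sketched as a simple ordinal screen. This is not the HDM inconsistency measure itself (which is computed from the constant-sum weights across matrix orientations); it only flags judgment triples that violate the "A beats B and B beats C implies A beats C" rule described above, using hypothetical data.

```python
from itertools import permutations

def ordinal_violations(names, judgments):
    """Return triples (a, b, c) where a beats b and b beats c, but a does not beat c.

    judgments[(a, b)] holds the points (out of 100) allocated to `a`
    against `b`; "beats" means receiving more than 50 of the 100 points.
    """
    def beats(a, b):
        if (a, b) in judgments:
            return judgments[(a, b)] > 50
        return judgments[(b, a)] < 50

    return [(a, b, c) for a, b, c in permutations(names, 3)
            if beats(a, b) and beats(b, c) and not beats(a, c)]

# Hypothetical, intentionally inconsistent judgments: A > B, B > C, but C > A
example = {("A", "B"): 60, ("B", "C"): 70, ("A", "C"): 40}
print(ordinal_violations(["A", "B", "C"], example))  # three violating triples
```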

Model

Organizing the committee’s requirements led to four key “criteria” categories, the first being service delivery. Service delivery centers on the ePortfolio platform attributes that correlate to providing and supporting the instance over time. Further collation of the data resulted in six specific factors under this criterion: accountability for quality; support responsiveness and availability; lifecycle management, upgrades, and change management; user community; redundancy, backup, and recovery capability; and, lastly, track record and feedback from peers.

Accountability for quality is the solution provider’s ability to respond to the platform's shortcomings: to fix bugs, resolve service outages, and address feature requests. A provider at the top end of this metric will guarantee, in writing, a high degree of uptime, will provide remedies for failure to comply with the contract, and will be proactive and transparent in the tracking and remediation of both bugs and feature requests.

Support responsiveness and availability describes the need to have knowledgeable support personnel available 24/7/365. It also means that support requests are handled quickly and professionally, and that higher-order or more complex problems are escalated in a timely manner with appropriate and consistent feedback, including prioritization and timelines for resolution wherever possible.

Lifecycle management, upgrades, and change management describes a firm's maturity in providing a well-communicated upgrade and maintenance plan for their platform. A top contender in this category will have highly transparent and well-documented testing and development practices, as well as detailed change logs that are consistently published for clients.

Modern platform providers have found that creating a user community around their products, with self-help and user-driven content, can markedly enhance collaboration and coordination in product uptake. Any contender strong in this factor will have a highly active, robust, public arena for the community of users to share and collaborate within.

Redundancy, backup, and recovery capabilities focus on a given platform’s ability to recover from minor or catastrophic failure. For on-premises solutions, this means the ability to restore functionality from backups in a timely manner in case of hardware failure, database corruption, or other unforeseen circumstances. For hosted solutions, this also includes the possibility of service fail-over to redundant hosting facilities in case of localized data center outages.

Track record and feedback from peers means that the company maintains a superb reputation for their product(s), their service, their responsiveness, and their product’s development and evolution over time. We find value in knowing whether their customers are happy.

Cost

Costs associated with the implementation range from the highly specific to the more abstract. The choice will have specific functional cost requirements as well as ongoing, incidental costs associated with the support and adoption of the platform. The cost factors defined are:

Initial cost, or buy-in cost, is what the university can expect to spend up-front to purchase the platform. With software solutions, this is generally the largest single expenditure over the life of the product, though ongoing maintenance costs can easily add up to more over the life of the product stack.

Maintenance cost is the ongoing, generally yearly, price paid to maintain access to support and product updates and, for hosted solutions, to house and provide the infrastructure that delivers the platform to your audience.

Any complex system will require some amount of internal support cost. This is usually in the form of a subject matter expert or platform specialist; a technician or analyst in charge of monitoring, maintaining, and upgrading the system over its lifespan. These can range from a partial full-time equivalent employee (FTE) up to a whole team depending on the complexity, depth, and usage rate of a given system.

Licensing costs are another ongoing cost associated with many software and sometimes hardware products. In some cases, licensing is per-user, per-connection, or per-processor. With some platforms, this is incorporated into the ongoing maintenance costs; in others, it is a specific billable item, also typically collected yearly.

Training costs, as described for this model, refer to the start-up training required to build expertise and utility in a given platform across the appropriate user base. This ranges from system administrator training (highly technical, small audience) to end-user training (less technical, larger audience). Although there may be ongoing costs associated with employee turnover, the majority of this cost is borne up-front to get the incumbent staff up to speed.

Ongoing professional development costs are the more abstract, incidental costs associated with supporting a thriving platform. This includes travel and fees associated with staff development in the use and effectiveness of the platform. This would be similar to the training costs, but ascribed in an ongoing manner rather than the one-time cost of initial training.

Functional

The functional criterion is where the majority of the interaction with the platform choice occurs. This criterion describes the intended use, the primary features, and the robustness of its integration potential with other learning and online systems.

Privacy can be interpreted broadly but encompasses the user’s ability to control the availability of their learning artifacts. As some artifacts might be private or sensitive in nature, a robust solution will provide artifact-level permissions and sharing controls so that students and educators can choose precisely what is revealed to which audience.

Customization and personalization capabilities describe the ability to tailor the platform in two ways: for the user and for the institution. The user will want to create a place that reflects their personality and experience, as this will drive adoption and engagement. The institution needs to be able to customize the interface with branding and with templates for programs or courses.

Students are increasingly using social media platforms as an extension of their physical lives; to allow the extension of their academic lives into this digital realm would be a powerful tool for engagement and potentially lead to all kinds of new ways of learning. Integration that crosses various social media and rich media platforms will bridge the gap between an academic-only system and the public, life-long portfolio experience.

While creating and displaying content is the initial, primary goal, a system that provides workflow capabilities such that automation mechanisms can be developed around learning modules or assessment would greatly improve adoption and potentially be a huge time-saver. One example would be to automate the random selection of portfolios for assessment and an assessment process that simplifies and reduces the time to analyze each portfolio.

Assessment, analysis, and reporting is a broad category describing a large set of tools. Assessment is the secondary goal of the project, and the system should allow for assessment of any arbitrary collection of artifacts presented within the ePortfolio environment. Analysis adds a layer of cataloging, and reporting provides a means to assimilate that analysis into useful, meaningful, and repeatable records.

Accessibility is a key attribute for today’s online systems, as they must be flexible enough to interface with the various technologies designed to provide consistent access regardless of physical or mental disability or impairment.

As part of the primary goal of learning, pedagogical utility is the platform's ability to deliver on this promise. Are there elements that present a roadblock to learning and engagement? What design elements are novel, useful, and approachable enough to assist in this end?

Technical

The final criterion category encompasses the technical requirements for a large, university-wide system such as this. The factors involved are as follows.

Identity, data, and access management has to do with properly identifying users, getting them logged in, and ensuring that they have access only to that which they are provisioned. This category starts with supporting single sign-on, so that the existing account infrastructure and lifecycle can be leveraged. It also includes tools and techniques built into the system to manage and realize the types of permission granularity described under privacy.

System and application administration and management capabilities include the constellation of tools used for administering and managing the environment. A strong system will have anticipated the flexible needs and nature of such a system and designed its tools to minimize the amount of work necessary to implement both systematic and novel changes, at both small and large scale.

Security is the systematic, thoughtful incorporation of security into the design and implementation of the product. Larger solutions generally provide a white paper describing both the development methodology and the service delivery design considerations and mechanisms used to create, maintain, monitor, and respond to security concerns. Any viable product offering will provide this type of robust, transparent communication regarding its security infrastructure.

Separate from the functional integration is the technical integration. Whereas the functional factor focused on uses, this integration factor refers to integration with existing tools and services owned by the university. This includes our student information system, our LMS, Google Apps for Education, and others. A successful platform should have vehicles for tying these together with minimal effort on the part of our organization.

Interoperability is increasingly expected in the modern era, but it is not necessarily common or robust when it comes to larger platform providers. A winning solution should clearly communicate its capabilities across a diverse set of user devices (mobile devices, tablets, browsers) and be as device-agnostic as possible. The provider should also show a track record of adoption as new technologies arise, as this will correlate with the nimbleness and adaptability of both the provider and the platform.

For this project, we had direct access to several of the key-players involved in the ePortfolio platform process. From this population, we formed five distinct panels for conducting our model validation and pairwise comparisons: criterion level, service delivery factors, cost factors, functional factors, and technical factors.

The interview process was fruitful, as each panelist provided valuable feedback for our model development, and each individual’s background aligned well with the model. This is reflected further in the analysis section, as the inconsistency and disagreement values were all extremely low.

Results and analysis

The following section summarizes the results of the HDM from our analysis. First, the previously mentioned experts rated the criteria level of the model. The results show that the experts value functional requirements for ePortfolio platforms as most important (relative weight of 0.38). This is followed by technical and service delivery considerations, both with a relative weight of 0.25. The experts rated cost requirements as least important (relative weight of 0.13). It can clearly be seen that cost is considerably less important than service delivery, functional, and technical requirements. This might be attributed to the fact that the money needed to fund this project has already been acquired. Nonetheless, cost still plays an important role when deciding which ePortfolio platform best fits the needs of PSU (see Fig. 1).

Fig. 1 Relative weights criteria level

As described by the HDM, each of the level-2 criteria is split into various sub-categories. For this case, five to seven sub-categories were chosen for each of the categories in the second level. Various experts in the different fields were again responsible for performing pairwise comparisons of these. First, a closer look is taken at the different service delivery requirements. The relative weights attributed to this category vary between 0.12 and 0.21. Overall, the experts deem “support” to be the most important aspect of service delivery (relative weight of 0.21), closely followed by “user community” (relative weight of 0.2). On the other hand, “lifecycle management” (relative weight of 0.12) and “track record” (relative weight of 0.14) are not as important. Generally speaking, there is little variation within this category, with a standard deviation of only 0.035. Figure 2 shows the results for this subcategory.

Fig. 2 Relative weights factor level (service delivery)

The second criterion that was evaluated at the factor level is cost. Again, little variance between the different categories can be observed. The standard deviation is only 0.031, and the relative weights vary between 0.14 (for “initial cost” and “training cost”) and 0.21 (for “ongoing professional development cost”). It has already been observed that cost is not seen as equally important as the other three criteria, possibly because the money has already been acquired. This is underlined by the fact that “initial cost” is viewed by the experts as among the least important factors. The results for the relative weights at the factor level for the criterion “cost” are summarized in Fig. 3.

Fig. 3 Relative weights factor level (cost)

The third criterion that was evaluated in further depth is the functional requirements, split into seven sub-criteria. The results for this factor are a little more spread out, with the standard deviation reaching 0.048. The lowest-scoring factor is “accessibility,” with a relative weight of only 0.08. This is considerably low, especially compared to the two highest-scoring factors, “assessment” and “pedagogical utility” (with relative weights of 0.2 and 0.21, respectively). One possible explanation for the wider spread is that seven sub-criteria exist for this factor. Besides the two highest- and one lowest-scoring criteria, the other four (“privacy,” “customization,” “integration,” and “workflow capabilities”) score fairly evenly, with relative weights between 0.1 and 0.14. The results for all relative weights at the factor level for the functional requirements are outlined in Fig. 4.

Fig. 4 Relative weights factor level (functional)

Lastly, technical requirements are analyzed at the factor level. This was the smallest category, with only five sub-categories. Not surprisingly, this level sees the two highest relative weights, with 0.22 for “identity data” and 0.33 for “integration capabilities.” However, these are only relative weights, and they cannot be compared with the other relative weights before normalizing them into global weights. Due to the small number of sub-categories, this factor level also has the highest standard deviation among the sub-criteria, at 0.079. This can be explained by looking at the lowest-scoring category (“system and application”): its score of 0.14 is less than half of the highest relative weight. Again, the relative weights for each of the sub-criteria can be found in Fig. 5.

Fig. 5 Relative weights factor level (technical)

After having analyzed the relative weights for all four criteria at the factor level, it is important to be able to compare the sub-criteria against each other. In order to do so, the global weights were determined by combining the relative weights of the criteria level with the relative weights of each of the factor-level categories. Overall, 24 global weights can be calculated at the factor level. Three factors clearly stand out and score the highest global weight of 0.08: “assessment, analysis and reporting,” “pedagogical utility,” and “integration capabilities.” Interestingly enough, these come from only two of the criteria: the former two are part of the functional category, while the latter is a technical requirement. This can partially be explained by the fact that the functional criterion is the highest rated on the second level. Next, the four lowest-scoring sub-categories can be identified. These are “initial cost,” “licensing cost,” “maintenance cost,” and “training cost,” with a global weight of 0.02 each. As observed throughout this analysis, cost is the least important criterion; hence, the least important global weights all come from the cost criterion. In conclusion, there is small variation between the 24 factors (standard deviation = 0.0182). The global weights are summarized in Fig. 6 and visualized in Fig. 7, where more important factors are represented in darker colors and less important factors in lighter colors. This scheme visualizes the great importance of technical and especially functional factors, while cost is considered inferior.
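The combination step can be sketched directly from the reported numbers: a factor's global weight is its local (relative) weight multiplied by the relative weight of its parent criterion. The values below are the ones reported above for a few of the factors; the full model repeats this for all 24 factors.

```python
criterion_weight = {"service delivery": 0.25, "cost": 0.13,
                    "functional": 0.38, "technical": 0.25}

local_weight = {  # factor -> (parent criterion, local weight), from the text above
    "assessment, analysis and reporting": ("functional", 0.20),
    "pedagogical utility": ("functional", 0.21),
    "accessibility": ("functional", 0.08),
    "integration capabilities": ("technical", 0.33),
    "system and application": ("technical", 0.14),
    "initial cost": ("cost", 0.14),
}

# Global weight = criterion weight x local factor weight
global_weight = {factor: round(criterion_weight[parent] * local, 2)
                 for factor, (parent, local) in local_weight.items()}
print(global_weight)
# functional (0.38) x assessment (0.20) ~ 0.08 and cost (0.13) x initial cost
# (0.14) ~ 0.02, matching the highest and lowest reported global weights
```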

Fig. 6 Global weights factor level

Fig. 7 Global weights factor level (visualization)

Selection of the best ePortfolio platform

After establishing the global importance of all categories, a review of the possible ePortfolio platform options was conducted next. Namely, Tk20, Digication, and D2L ePortfolio were evaluated. Each of these platforms was scored separately by an expert on a 7-point Likert scale, with 7 being the best possible performance. In the next step, these values were normalized appropriately. The results are summarized in Fig. 8. A closer look already points in the direction of D2L ePortfolio as the dominant choice, because its values exceed those of the other two options for many of the factors. However, this impression alone is not conclusive. Hence, the results were aggregated and are summarized in Fig. 9, which shows that D2L ePortfolio scores the highest overall weight, followed by Digication and Tk20. This leads to the recommendation that D2L ePortfolio should be implemented at PSU.

Fig. 8 Normalized utility values

Fig. 9 Alternative comparison
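A minimal sketch of this scoring step is shown below, with hypothetical Likert ratings for three of the factors (the study rates all 24 with expert input). Ratings are normalized within each factor and then combined with the global weights into an overall score per alternative.

```python
likert = {  # factor -> {alternative: hypothetical 7-point Likert rating}
    "assessment, analysis and reporting": {"D2L": 6, "Digication": 5, "Tk20": 6},
    "pedagogical utility": {"D2L": 6, "Digication": 5, "Tk20": 4},
    "integration capabilities": {"D2L": 7, "Digication": 4, "Tk20": 3},
}
global_weight = {"assessment, analysis and reporting": 0.08,
                 "pedagogical utility": 0.08,
                 "integration capabilities": 0.08}

# Normalize ratings within each factor so they sum to 1 across alternatives
normalized = {f: {alt: r / sum(ratings.values()) for alt, r in ratings.items()}
              for f, ratings in likert.items()}

# Overall weight per alternative: sum of global weight x normalized utility
alternatives = ["D2L", "Digication", "Tk20"]
overall = {alt: sum(global_weight[f] * normalized[f][alt] for f in likert)
           for alt in alternatives}
print(sorted(overall.items(), key=lambda kv: -kv[1]))
```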

We conducted a sensitivity analysis for level 2 (criteria) and level 3 (factors). To do so, the relative weight (local weight) of the examined criterion or factor was changed until a “tipping point” was reached. A tipping point is defined here as a value that changes the order of the alternatives; in the case of this project, that is the order in which the three ePortfolio platforms are ranked by the HDM methodology. After changing the local weight of the examined criterion or factor, the local weights were re-normalized and the global weights of all criteria were re-calculated with these changed local weights. Only one weight at a time was changed (ceteris paribus). Afterwards, the results were compared to the original values. Figure 10 illustrates the approach:

Fig. 10 Utilized sensitivity model
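The tipping-point search can be sketched as follows. This is an illustrative re-implementation under the same assumptions as the earlier sketches, with tiny hypothetical weights and utilities rather than the study's actual data: one criterion's weight is increased step by step, the remaining criteria are re-normalized so the weights still sum to 1, the overall scores are re-computed, and the first increase that changes the ranking is reported.

```python
def rank(crit_w, factor_w, util, alts):
    """Rank alternatives given criterion weights, local factor weights, and utilities."""
    overall = {alt: 0.0 for alt in alts}
    for crit, factors in factor_w.items():
        for factor, local in factors.items():
            for alt in alts:
                overall[alt] += crit_w[crit] * local * util[factor][alt]
    return sorted(alts, key=lambda a: -overall[a])

def tipping_point(criterion, crit_w, factor_w, util, alts, step=0.01):
    """Percentage increase in `criterion`'s weight (others re-normalized,
    ceteris paribus) at which the ranking first changes; None if it never does."""
    base, w0 = rank(crit_w, factor_w, util, alts), crit_w[criterion]
    for k in range(1, 1001):
        w = w0 * (1 + k * step)
        if w >= 1.0:
            break
        scale = (1 - w) / (1 - w0)  # shrink the other criteria proportionally
        trial = {c: (w if c == criterion else cw * scale) for c, cw in crit_w.items()}
        if rank(trial, factor_w, util, alts) != base:
            return k * step * 100
    return None

# Tiny hypothetical example: two criteria, three factors, three alternatives
crit_w = {"functional": 0.75, "cost": 0.25}
factor_w = {"functional": {"assessment": 0.5, "pedagogy": 0.5},
            "cost": {"initial cost": 1.0}}
util = {"assessment":   {"D2L": 0.40, "Digication": 0.35, "Tk20": 0.25},
        "pedagogy":     {"D2L": 0.40, "Digication": 0.35, "Tk20": 0.25},
        "initial cost": {"D2L": 0.25, "Digication": 0.30, "Tk20": 0.45}}
alts = ["D2L", "Digication", "Tk20"]
print(rank(crit_w, factor_w, util, alts))                    # base ranking
print(tipping_point("cost", crit_w, factor_w, util, alts))   # % increase flipping 2nd/3rd
```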

The analysis shows that the results are not sensitive to changes in the weights of “service delivery” and “functionality.” Increasing the local importance of “cost” by more than 44.3 % swaps 2nd and 3rd place (Digication and Tk20). Increasing the local importance of “technical” by more than 77.6 % also swaps 2nd and 3rd place. As the original values are based on expert judgment and the required increases of the corresponding weights are high (44.3 and 77.6 %), a scenario that would lead to this change is unlikely to occur. Moreover, a change of 1st place (D2L ePortfolio) never occurs. The model is therefore very robust to changes at the criteria level (level 2).

After analyzing the weights of the factors (level 3), the same can be concluded. The 2nd and 3rd place of the alternatives would change if:

  • The local importance of “User community” is increased more than 104 %*

  • The local importance of “Ongoing professional development” is increased more than 919 %

  • The local importance of “Assessment, analysis, and reporting” is increased more than 74 %

  • The local importance of “Identity data, and access management” is increased more than 89 %

The results are not sensitive to any other changes (ceteris paribus). Further analysis shows why the model is so robust: D2L ePortfolio has the highest utility values in most categories (level 4), followed by Digication and Tk20.

In conclusion, the decision of which platform to choose is not sensitive to changes in the current scenario.

Conclusions

The model selected the D2L ePortfolio solution as the optimal choice based on the inputs and model design.

ePortfolio “Functionality” was found to be the most important criterion in selecting a solution, followed equally by “Service delivery” and “Technical.” “Cost” was the lowest-ranked criterion; however, given that a budget has already been allocated to the program, this weight may be artificially low, highlighting a potential bias among the experts with this knowledge.

The highest globally ranked factors were “Pedagogical utility,” “Integration,” and “Assessment, analysis and reporting,” all achieving global weights of 0.08 each. The four lowest globally ranked factors were the “Cost” sub-criteria, achieving global weights of 0.02 each.

The model has been shown to be robust against significant changes to the criteria or factors. The output decision is not altered under the sensitivity analysis when significant changes are made to any of the factors or criteria. The ranking of the 2nd and 3rd alternatives was found to be sensitive to a significant increase in the importance of the criteria “Cost” or “Technical,” where a normalized importance increase of ~44 or ~77 %, respectively, would change the ranking. However, such an increase is unrealistic given that 25 % is the largest delta between the maximum- and minimum-rated criteria. The sensitivity analysis also shows that no change in any of the factors could impact the ranking of the 1st alternative, and only 4 of the 24 factors could change the outcome of the 2nd and 3rd ranked alternatives.

The inconsistency values of the experts’ weightings are quite low, ranging from 0 to 0.03. The expert panel disagreement values are also very low, ranging from 0.01 to 0.03. These facts add to the model’s validity and robustness.

References

  • Basoglu N, Daim T, Kerimoglu O (2007) Organizational adoption of enterprise resource planning systems: a conceptual framework. J High Technol Manage Res 18(1):73–97

  • Basoglu N, Daim T, Polat E (2014) Exploring adaptivity in service development: case of mobile platforms. J Prod Innov Manage 31(3):501–515

  • Bhuasiri W, Xaymoungkhoun O, Zo H, Rho JJ, Ciganek AP (2012) Critical success factors for e-learning in developing countries: a comparative analysis between ICT experts and faculty. Comput Educ 58(2):843–855

  • Boswell SS (2016) Ratemyprofessors is hogwash (but I care): effects of ratemyprofessors and university-administered teaching evaluations on professors. Comput Hum Behav 56:155–162

  • Cazan AM, Cocoradă E, Maican CI (2016) Computer anxiety and attitudes towards the computer and the internet with Romanian high-school and university students. Comput Hum Behav 55(Part A):258–267

  • Cerretani PI, Iturrioz EB, Garay PB (2016) Use of information and communications technology, academic performance and psychosocial distress in university students. Comput Hum Behav 56:119–126

  • Daim T, Intarode N (2009) A framework for technology assessment: case of a Thai building material manufacturer. Energy Sustain Dev 13(4):280–286

  • Daim T, Kocaoglu D (2008) Exploring technology acquisition in Oregon, Turkey and in the U.S. electronics manufacturing companies. J High Technol Manage Res 19(1):45–58

  • Daim T, Kocaoglu D (2009) Exploring the role of technology evaluation in the competitiveness of US electronics manufacturing companies. Int J Technol Manage 48(1):77–94

  • Daim T, Yates D, Peng Y, Jimenez B (2009) Technology assessment for clean energy technologies. Technol Soc 31(3):232–243

  • Daim T, Bhatla A, Mansour M (2013) Site selection for a data center—a multi criteria decision making model. Int J Sustain Eng 6(1):10–22

  • Digication (2014) http://en.wikipedia.org/wiki/Digication. Accessed 03 Jan 2014

  • Farid S, Ahmad R, Niaz IA, Arif M, Shamshirband S, Khattak MD (2015) Identification and prioritization of critical issues for the promotion of e-learning in Pakistan. Comput Hum Behav 51(Part A):161–171

  • Hallajow N (2016) The interplay of technology and context in Syrian university students’ electronic literacy practices. Comput Hum Behav 55(Part A):178–189

  • Hallum D, Daim T (2009) A hierarchical decision model for optimum design alternative selection. Int J Decis Sci Risk Manage 1(s1-2):2–23

  • Harrel G, Daim T (2010) HDM modeling as a tool to assist management with employee motivation. Eng Manage J 22(1):23–33

  • Huang YM, Chiu PS, Liu TC, Chen TS (2011) The design and implementation of a meaningful learning-based evaluation method for ubiquitous learning. Comput Educ 57(4):2291–2302

  • Wu HY, Lin HY (2012) A hybrid approach to develop an analytical model for enhancing the service quality of e-learning. Comput Educ 58(4):1318–1338

  • Kargin B, Basoglu N, Daim T (2009) Factors affecting the adoption of mobile services. Int J Serv Sci 1(2):29–52

  • Kerimoglu O, Basoglu N, Daim T (2008) Organizational adoption of information technologies: case of enterprise resource planning systems. J High Technol Manage Res 19(1):21–35

  • Lin HF (2010) An application of fuzzy AHP for evaluating course website quality. Comput Educ 54(4):877–888

  • Pipes RB, Wilson JM (1996) A multimedia model for undergraduate education. Technol Soc 18(3):387–394

  • ReThink PSU (2014) http://www.pdx.edu/oai/provosts-challenge. Accessed 03 Jan 2014

  • Seneler Ozen C, Basoglu N, Daim T (2010) An empirical analysis of the antecedents of adoption of online services: a prototype-based framework. J Enterp Inf Manage 23(4):417–438

  • Seneler C, Basoglu N, Daim T (2009a) Interface feature prioritization for web services: case of online flight reservations. Comput Hum Behav 25:862–877

  • Seneler CO, Basoglu N, Daim T (2009b) Exploring the contribution of the design characteristics of information systems’ user interface to adoption process. Int J Bus Inf Syst 4(5):489–508

  • Shee DY, Wang YS (2008) Multi-criteria evaluation of the web-based e-learning system: a methodology based on learner satisfaction and its applications. Comput Educ 50(3):894–905

  • Tanoglu I, Basoglu N, Daim T (2010) Exploring technology diffusion: case of information technologies. Int J Inf Technol Decis Mak 9(2):195–222

  • Tekinarslan E (2008) Computer anxiety: a cross-cultural comparative study of Dutch and Turkish university students. Comput Hum Behav 24(4):1572–1584

  • Tran T, Daim T (2008) A taxonomic review of methods and tools applied in technology assessment. Technol Forecast Soc Chang 75(9):1396–1405

  • Tseng ML (2010) Implementation and performance evaluation using the fuzzy network balanced scorecard. Comput Educ 55(1):188–201

  • van Blommestein K, Daim T (2013) Residential energy efficient device adoption in South Africa. Sustain Energy Technol Assess 1(1):13–27

  • Wang B, Kocaoglu D, Daim T, Yang J (2010) A decision model for energy resource selection in China. Energy Policy 38(11):7130–7141


Authors’ contributions

All authors contributed to this project equally from the inception to the end. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information


Corresponding author

Correspondence to Tugrul U. Daim.

Appendix


Model

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Daim, T.U., Boss, V., Thomas, J. et al. Evaluating technologies for education: case of ePortfolio. Technol Innov Educ 2, 4 (2016). https://doi.org/10.1186/s40660-016-0010-8

