First Study
Study title: A comparison of student outcomes with and without teacher
facilitated computer-based instruction
Conducted by: Jack V. Powell, Victor G. Aeby Jr., Tracy Carpenter-Aeby
Published in 2003
This study is a comparative study related to learners' perception and performance. Its purpose was to determine whether there were significant differences in academic and psychosocial outcomes for disruptive students assigned to an alternative school, by comparing student outcomes with and without teacher-facilitated computer-based instruction (CBI).
The evaluation instrument used in this study was student grades, that is, documentation review, which was the main data collection strategy. The purpose of this data collection methodology was to obtain the final results of both study groups and compare them: since the study is comparative, the authors needed to compare the outcomes of two groups, one receiving CBI with teacher facilitation and the other receiving CBI without teacher facilitation. The participants were around 215 students, the study monitored student results over a period of two academic years, and the evaluation consisted of comparing the students' results (grades) across those two years.
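To give a concrete picture of how such an outcomes comparison could be carried out, here is a minimal sketch in Python. The paper does not publish its raw grade data, and we are only assuming an independent-samples t-test style comparison of final grades; the numbers below are invented placeholders.

# Illustrative sketch only: compares final grades of two groups of students,
# one taught with teacher-facilitated CBI and one with CBI alone.
# The grade lists below are made-up placeholders, not data from the study.
from scipy import stats

facilitated_cbi = [78, 85, 90, 72, 88, 81, 95, 77]      # hypothetical final grades
unfacilitated_cbi = [70, 75, 68, 80, 72, 65, 74, 71]    # hypothetical final grades

# Independent-samples t-test: is the difference in mean grades significant?
t_stat, p_value = stats.ttest_ind(facilitated_cbi, unfacilitated_cbi)

print(f"Mean (facilitated): {sum(facilitated_cbi) / len(facilitated_cbi):.2f}")
print(f"Mean (unfacilitated): {sum(unfacilitated_cbi) / len(unfacilitated_cbi):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would indicate a significant difference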
Advantages and Disadvantages of this evaluation methodology
Basically, documentation review is a great instrument when evaluating over a long period of time, as it gives the overall results of the program being evaluated. It also provides accurate information, since the documents were created for their original purpose rather than for the evaluation; for example, student grades are original grades not affected by student and/or teacher perception. Many other instruments can be affected by socially related response bias; a survey, for instance, can be affected by the student's readiness or attitude toward completing it (did the student complete it because he had to, or because he really wanted to give accurate information for the study?). The benefits of this evaluation methodology are, first, that it provides accurate data, and second, that it does not interfere with the program being evaluated; for example, the evaluator does not need to attend the class in order to evaluate. Turning to the disadvantages of this instrument: first, data may exist but not be available for research purposes because of administrative issues; second, it needs a longer time to collect data, for example when student results/outcomes have to be gathered over an extended period.
Finally, the results were as follows: the group with teacher-facilitated CBI scored significantly higher than the group without teacher-facilitated CBI, with results measured in terms of student achievement.
Second Study
Study title: Multi-dimensional students’ evaluation of e-learning systems in the higher education context: An empirical investigation
By: Sevgi Ozkan, Refika Koseler
Informatics Institute, Middle East Technical University, Ankara, Turkey
This study is a non-comparative study. The purpose of the evaluation is to develop a comprehensive e-learning assessment model, using the existing literature as a base and incorporating concepts from both the information systems and education disciplines. It attempts to propose an e-learning evaluation model comprising a collective set of measures associated with an e-learning system. The study used a survey applied to 84 graduate and post-graduate students at Brunel University, UK. This survey was used to collect data from students about their perceptions of the blended learning environment and the LMS with regard to their benefits and satisfaction level. The responses to the questionnaire were analyzed using the Statistical Package for the Social Sciences (SPSS), and reliability was evaluated by assessing the internal consistency of the items representing each factor using Cronbach's alpha.
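For readers unfamiliar with Cronbach's alpha, the sketch below shows how the internal-consistency calculation works. It is our own illustration with made-up ratings, not the authors' SPSS output.

# Illustrative sketch of Cronbach's alpha (internal consistency).
# The ratings matrix is invented for demonstration; it is not data from the study.
import numpy as np

# rows = respondents, columns = survey items measuring the same factor (e.g. 1-5 Likert scores)
ratings = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])

k = ratings.shape[1]                               # number of items
item_variances = ratings.var(axis=0, ddof=1)       # variance of each item
total_variance = ratings.sum(axis=1).var(ddof=1)   # variance of the summed scale

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # values around 0.7 or higher are usually read as acceptable reliability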
The advantages of this instrument are that it measures the learner's perspective, instructor attitude and information content quality; that it makes it easy to collect information and results; and that it supports different types of questions.
There are also some disadvantages: it needs more time to create and to administer to all students, especially as the study sample included two types of participants (graduate and post-graduate students), and it does not measure the usability of the e-learning system.
The results of this study show a positive, statistically significant relationship between learners' attitudes and overall learner satisfaction; the attitudes of learners towards U-Link are also positively related to the learners' past LMS experience. For instructor quality, there is a strong relationship between the instructor's quality and learners' perceived satisfaction. For system quality, the results show a highly positive relationship between the system quality of the LMS and overall learner satisfaction, and the results for information (content) quality show a strong positive relationship between the information quality of the LMS and overall learners' perceived satisfaction.
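As a hedged illustration of what a "positive statistically significant relationship" means in practice, the snippet below computes a Pearson correlation between two made-up score columns; the numbers are placeholders and not the study's data.

# Illustrative sketch: correlation between learner attitude scores and
# overall satisfaction scores. Both lists are invented placeholders.
from scipy.stats import pearsonr

attitude = [3.2, 4.1, 2.8, 4.5, 3.9, 4.8, 3.5, 2.5]
satisfaction = [3.0, 4.3, 2.9, 4.6, 3.7, 4.9, 3.8, 2.7]

r, p = pearsonr(attitude, satisfaction)
print(f"r = {r:.2f}, p = {p:.4f}")  # r > 0 with p < 0.05 -> positive, statistically significant relationship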
TECH4102
Tuesday, May 10, 2011
Comparative and non-comparative evaluation in educational
CALL Software and Website Evaluation Forms
Evaluating software and websites for teaching and learning foreign languages
Target audience:
It is for language teachers already in service, although parts of the website are suitable for teachers undergoing initial training and for teachers following short intensive courses.
Tools used:
Two questionnaires:
• Software Evaluation Form
• Website Evaluation Form
The link:
http://www.ict4lt.org/en/evalform.doc
Usability Evaluation
Saturday, April 9, 2011
Evaluation Of Online Learning
https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B1UCGaHYk-U2OWQ0ODMwODktNWMzMC00ZWZhLWFjMWMtZTZhMTI3NTJkOGRj&hl=en
Basically, this document discusses and evaluates three online learning environments: a blog, an online discussion board, and an instrument for evaluating online grade books.
Please use the link above if the embedded document is not easy to read or appears too small in your browser.
Sunday, March 13, 2011
Evaluation of States of Matter (Simulation software)
Short Description of the software:
Basically, States of Matter is a simple web-based and downloadable piece of software that provides the learner with an interactive simulation environment for learning about atoms and their relation to other factors such as heat, pressure and so on.
Learning goals (as listed on their website)
1. Describe a molecular model for solids, liquids, and gases.
2. Extend this model to phase changes.
3. Describe how heating or cooling changes the behavior of the molecules.
4. Describe how changing the volume can affect temperature, pressure, and state.
5. Relate a pressure-temperature diagram to the behavior of molecules.
6. Interpret graphs of interatomic potential.
7. Describe how forces on atoms relate to the interaction potential.
8. Describe the physical meaning of the parameters in the Lennard-Jones potential, and how this relates to the molecule behavior.
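Goal 8 refers to the Lennard-Jones potential. For readers who have not met it, the short sketch below evaluates the standard form V(r) = 4ε[(σ/r)^12 − (σ/r)^6], where ε is the depth of the potential well and σ the distance at which the potential crosses zero; the parameter values used here are arbitrary placeholders, not taken from the simulation.

# Illustrative sketch of the Lennard-Jones interatomic potential mentioned in goal 8.
# epsilon (well depth) and sigma (zero-crossing distance) are arbitrary example values.
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Potential energy between two atoms separated by distance r."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# At r = sigma the potential is zero; the minimum (-epsilon) lies at r = 2**(1/6) * sigma.
for r in (0.95, 2 ** (1 / 6), 1.5, 2.5):
    print(f"r = {r:.2f} -> V(r) = {lennard_jones(r):+.3f}")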
Here is the direct link to the software:
http://phet.colorado.edu/en/simulation/states-of-matter
Click on "Run Now" to run the software; note that you must have Java installed on your machine.
Basically, here we need to evaluate the learner control offered by this software, so we have developed a survey to measure how much control the software provides to the learner.
Please participate with us in this research and fill in the survey through the following link:
http://lc.surveyconsole.com/
We appreciate your participation.
Regards
Azeer 82920 & Abdullah 61513
Tuesday, March 1, 2011
Revisiting Kirkpatrick’s model – an evaluation of an academic training course
Authors: P. Rajeev, M. S. Madan and K. Jayarajan
The purpose of this study is to analyze theories of training evaluation in general and to illustrate them with a case study of the evaluation of an academic training course.
The model of evaluation used was a "Kirkpatrick-like model". Basically, the Kirkpatrick model is a four-level evaluation model that reviews the following components: reaction, learning, behavior and results. In the case study, however, the authors used a modified version of the Kirkpatrick model, a four-stage evaluation model with the following stages: training orientation, pre-training evaluation, concurrent evaluation, and post-training evaluation (knowledge gain).
Clear information about what is being evaluated:
Among other programmes, training courses are currently organized at the Institute in the fields of bioinformatics, biotechnology and biochemistry. The trainees are postgraduate students in the life sciences, and the objective of the course is to impart knowledge of concepts and methods in these fields and skills in using various scientific tools. The curriculum involves hands-on training in skills related to analytical techniques in biochemistry such as GLC, GC-MS and HPLC; isolation of proteins, enzymes, DNA and RNA; plant tissue culture and micropropagation; DNA markers; preparation of molecular maps; and molecular approaches to the detection and isolation of plant pathogens. The course duration is 30 days, and trainees are selected based on acceptable standards of performance in their regular courses in order to ensure homogeneity as far as possible.
Tools used at each evaluation stage/level and why each specific tool is used:
Training orientation session:
Open discussion was used to find out trainee expectations from the training program.
Pre-training session:
A comprehensive, quiz-type knowledge test is administered to assess the initial level of knowledge.
Concurrent evaluation session:
Performance-oriented tests are given to reflect performance and soft skills, and to measure knowledge as well as skills (the behavioral component).
Post-training session/knowledge gain:
The knowledge test, similar to the one given during the pre-training session, is repeated with the purpose of measuring the knowledge gain.
Results at each stage/level:
Pre-training session:
The mean pre-training knowledge test score was as low as 8.67 out of 25.
Concurrent evaluation session:
The performance test result was 11.78 out of a maximum possible score of 20.
Post-training session/knowledge gain:
After completing the training, the result on the same knowledge test given as the pretest rose to 17.45.
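To make the knowledge-gain figures above concrete, here is a small arithmetic sketch. The normalized-gain formula is a common way to express such pre/post improvements; it is our own illustrative addition, not a metric reported by the authors.

# Sketch of the pre/post knowledge-gain arithmetic from the case study.
# The normalized gain (Hake-style) is our own illustrative addition.
pre_score, post_score, max_score = 8.67, 17.45, 25.0

absolute_gain = post_score - pre_score                      # 8.78 points
normalized_gain = absolute_gain / (max_score - pre_score)   # share of the possible improvement achieved

print(f"Absolute gain: {absolute_gain:.2f} of {max_score} points")
print(f"Normalized gain: {normalized_gain:.2%}")            # roughly 54% of the possible gain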