Steps in the Process of Data Collection:
1. Determining the participants to study
2. Obtaining the permissions needed from several individuals and organizations
3. Considering what types of information to collect from the several sources available to the quantitative researcher
4. Locating and selecting the instruments to use
5. Administering the data collection process to collect the data
Population and Sample
Population: a group of individuals who have the same characteristics.
Sample: a subgroup of the target population that the researcher plans to study for generalizing about the target population.
1. Probability Sampling: the researcher selects individuals from the population who are representative of that population (a Python sketch of the four strategies follows this list).
a. Simple random sampling: the researcher selects participants for the sample so that any individual has an equal probability of being selected from the population.
b. Systematic sampling: the researcher chooses every nth individual or site in the population until reaching the desired sample size.
c. Stratified sampling: the researcher divides (stratifies) the population on some specific characteristic and then, using simple random sampling, samples from each stratum of the population.
d. Multistage cluster sampling: the researcher chooses a sample in two or more stages, because the researcher either cannot easily identify the population or the population is extremely large.
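For a concrete picture, here is a minimal Python sketch of the four probability sampling strategies above. The population, the "urban"/"rural" stratum tag, the cluster size, and the sample size of 50 are all invented for illustration; they are not part of the original text.

```python
import random

# Hypothetical population of 1,000 numbered individuals, each tagged with a
# stratum ("urban"/"rural") so the stratified example has something to split on.
population = [{"id": i, "stratum": "urban" if i % 3 else "rural"} for i in range(1000)]
n = 50  # desired sample size

# a. Simple random sampling: every individual has an equal chance of selection.
simple_random = random.sample(population, n)

# b. Systematic sampling: pick every kth individual after a random start.
k = len(population) // n
start = random.randrange(k)
systematic = population[start::k][:n]

# c. Stratified sampling: split the population on the characteristic, then
#    draw a simple random sample from each stratum (proportional to its size).
strata = {}
for person in population:
    strata.setdefault(person["stratum"], []).append(person)
stratified = []
for members in strata.values():
    share = round(n * len(members) / len(population))
    stratified.extend(random.sample(members, share))

# d. Multistage cluster sampling: first sample clusters (e.g., schools),
#    then sample individuals within the chosen clusters.
clusters = [population[i:i + 100] for i in range(0, 1000, 100)]  # 10 clusters of 100
chosen_clusters = random.sample(clusters, 2)
multistage = [p for cluster in chosen_clusters for p in random.sample(cluster, n // 2)]
```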
2. Non-probability Sampling: the researcher selects individuals because they are available, convenient, and represent some characteristic the investigator seeks to study.
a. Convenience sampling: the researcher selects participants because they are willing and available to be studied.
b. Snowball sampling: the researcher asks participants to identify others to become members of the sample.
Sample size: the larger the sample, the smaller the potential error that the sample will be different from the population (illustrated by the short simulation below).
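The sample-size claim can be illustrated with a small simulation. The population of normally distributed "test scores" below is purely hypothetical: repeated sample means spread less and less around the population mean as the sample size grows (roughly in proportion to 1/sqrt(n)).

```python
import random
import statistics

# Hypothetical population of 100,000 test scores (mean 100, SD 15).
random.seed(1)
population = [random.gauss(100, 15) for _ in range(100_000)]
true_mean = statistics.mean(population)

for n in (10, 100, 1000):
    # Draw 200 samples of size n and look at how much their means vary.
    sample_means = [statistics.mean(random.sample(population, n)) for _ in range(200)]
    spread = statistics.stdev(sample_means)  # approximates sigma / sqrt(n)
    print(f"n={n:5d}  typical error of the sample mean = {spread:.2f}")
```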
What permissions will the researcher need?
1. The purpose of the study
2. The amount of time the researcher will be at the site collecting data
3. The time required of participants
4. How the researcher will use the data or results
5. The specific activities that will be conducted
6. The benefits to the organization or the individuals because of the study
Reliability and Validity
Reliability means that scores from an
instrument are stable and consistent. Scores should be nearly the same when
researchers administer the instrument multiple times at different times. Also,
scores need to be consistent. When an individual answers certain questions one
way, the individual should consistently answer closely related questions in the
same way.
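As a concrete illustration of stability, test-retest reliability is usually reported as the correlation between two administrations of the same instrument. The scores below are hypothetical, and statistics.correlation assumes Python 3.10 or later.

```python
import statistics

# Hypothetical scores from the same 8 participants on two administrations
# of the same instrument, several weeks apart.
time1 = [12, 15, 9, 20, 14, 17, 11, 18]
time2 = [13, 14, 10, 19, 15, 16, 12, 18]

# Pearson correlation between the two administrations (Python 3.10+);
# values near 1.0 indicate stable scores across time.
r = statistics.correlation(time1, time2)
print(f"test-retest reliability r = {r:.2f}")
```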
Validity is the development of sound
evidence to demonstrate that the test interpretation (of scores about the
concept or construct that the test is assumed to measure) matches its proposed
use.
Types of Reliability

| No | Form of Reliability | Number of Times Instrument Administered | Number of Different Versions of the Instrument | Number of Individuals Who Provide Information |
|----|---------------------|-----------------------------------------|-----------------------------------------------|-----------------------------------------------|
| 1 | Test-retest reliability | Twice, at different time intervals | One version of the instrument | Each participant in the study completes the instrument twice. |
| 2 | Alternate forms reliability | Each instrument administered once | Two different versions of the same concept or variable | Each participant in the study completes each instrument. |
| 3 | Alternate forms and test-retest reliability | Twice, at different time intervals | Two different versions of the same concept or variable | Each participant in the study completes each instrument. |
| 4 | Interrater reliability | Instrument administered once | One version of the instrument | More than one individual observes behavior of the participants. |
| 5 | Internal consistency reliability | Instrument administered once | One version of the instrument | Each participant in the study completes the instrument. |
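Internal consistency reliability (row 5 above) is commonly summarized with Cronbach's alpha, although the table does not name a specific coefficient. The item scores below are invented; this is only a sketch of the standard formula.

```python
import statistics

# Hypothetical item scores: rows are participants, columns are the items
# of a single instrument administered once.
scores = [
    [4, 5, 4, 3],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
]

k = len(scores[0])  # number of items
item_vars = [statistics.pvariance(col) for col in zip(*scores)]   # variance of each item
total_var = statistics.pvariance([sum(row) for row in scores])    # variance of total scores

# Cronbach's alpha: a standard index of internal consistency
# (not named in the table, but a common way to report it).
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```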
Sources of Validity Evidence and Examples

| No | Validity Evidence | Types of Tests or Instruments to Which Validity Evidence Is Applicable | Type of Evidence Sought | Examples of Evidence |
|----|-------------------|------------------------------------------------------------------------|-------------------------|----------------------|
| 1 | Evidence based on test content | Achievement tests, credentialing tests, and employment tests | Evidence of an analysis of the test's content (e.g., themes, wording, format) and the construct it is intended to measure | Examine logical or empirical evidence (e.g., syllabi, textbooks, teachers' lesson plans); have experts in the area judge |
| 2 | Evidence based on response processes | Tests that assess cognitive processes, rate behaviors, and require observations | Evidence of the fit between the construct and how individuals taking the test actually performed | Interviews with individuals taking tests to report what they experienced/were thinking; interviews or other data with observers to determine if they are all responding to the same stimulus in the same way |
| 3 | Evidence based on internal structure | Applicable to all tests | Evidence of the relationship among test items, test parts, and the dimensions of the test | Statistical analysis to determine if the factor structure (scales) relates to theory; correlation of items |
| 4 | Evidence based on relations to other variables | Applicable to all tests | Evidence of the relationship of test scores to variables external to the test | Correlations of scores with tests measuring the same or different constructs (convergent/discriminant validity); correlations of scores with some external criterion (e.g., performance assessment, test-criterion validity); correlations of test scores and their prediction of a criterion based on cumulative databases (meta-analysis, validity generalization) |
| 5 | Evidence based on the consequences of testing | Applicable to all tests | Evidence of the intended and unintended consequences of the test | Benefits of the test for positive treatments in therapy, for placement of workers in suitable jobs, for prevention of unqualified individuals from entering a profession, for improvement of classroom instructional practices, and so forth |
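As a rough illustration of row 4 (relations to other variables), convergent and discriminant evidence are often reported as correlations between instruments completed by the same participants. All scores below are invented, the "anxiety scale" and "reading test" are hypothetical instruments, and statistics.correlation assumes Python 3.10 or later.

```python
import statistics

# Hypothetical scores for the same 8 participants on three instruments:
# a new anxiety scale, an established anxiety scale (same construct),
# and a reading achievement test (different construct).
new_anxiety = [22, 30, 18, 27, 35, 25, 20, 33]
established_anxiety = [24, 29, 17, 26, 36, 27, 19, 31]
reading_test = [72, 75, 68, 80, 71, 79, 74, 70]

# Convergent evidence: strong correlation with a test of the same construct.
# Discriminant evidence: weak correlation with a test of a different construct.
convergent = statistics.correlation(new_anxiety, established_anxiety)
discriminant = statistics.correlation(new_anxiety, reading_test)
print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```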