]>
have_kind_ofed
have_kind_of
have_kind_ofs
is_used_in
is_useded_in
is_useds_in
describes
describe
described
use_toolsed
use_tools
use_toolses
used_by
used_bied
used_bies
use_specific_methods
use_specific_methodses
use_specific_methodsed
Dependant_variable
Dependant_variables
Experimental_designs
Experimental_design
Extraneous_variables
Extraneous_variable
Historical_comparison
Historical_comparisons
IRM
IRM
IRMs
Independant_variable
Independant_variables
Methodology_Tools
Methodology_Toolses
Methodology tools are all the different tools and artefacts that can be used in research, such as an IRM (MRI scanner), a questionnaire, etc.
It can be extended depending on need (consider whether it is important to externalise it).
Methods
Methodses
Methods_techniqueses
Methods_techniques
Qualitative_methods
Qualitative_methodses
Quantitative_methods
Quantitative_methodses
Secondary_datas
Secondary_data
Statistical_analyses
Statistical_analysis
Survey
Surveys
Types_of_variables
Types_of_variableses
Variable
Variables
content_analysis
content_analyses
content_analysis
direct_interviews
direct_interview
direct_interview
directif_interview
directif_interviews
directif_interview
existing_data
existing_datas
existing_data
focus_group
focus_groups
focus_group
implementations
implementation
implementation
interview
interviews
interview
large_scale_datas
large_scale_data
large_scale_data
methods
non_directif_interview
non_directif_interview
non_directif_interviews
on_line_survey
on_line_surveys
on_line_survey
paper_questionnaires
paper_questionnaire
paper_questionnaire
participant_observatories
participant_observatory
participant_observatory
story_life
story_lives
story_life
Descriptive_statisticses
Descriptive_statistics
Descriptive_statistics
Statistical_inferences
Statistical_inference
statistic_type
statistic_types
Cronbach's alpha
Cronbach
IRM
IRM
IRMs
Accidental sampling (sometimes known as grab, convenience or opportunity sampling) is a type of non-probability sampling which involves the sample being drawn from that part of the population which is close to hand. That is, a sample population is selected because it is readily available and convenient, as researchers draw on relationships or networks to which they have easy access. A researcher using such a sample cannot scientifically make generalizations about the total population, because the sample would not be representative enough. For example, if an interviewer were to conduct such a survey at a shopping center early in the morning on a given day, the people he or she could interview would be limited to those present there at that time, and would not represent the views of other members of society in the area as they might if the survey were conducted at different times of day and several times per week. This type of sampling is most useful for pilot testing. The credibility of results obtained by convenience sampling depends on convincing the reader that the sample chosen corresponds to a large degree to the population from which it is drawn. (http://en.wikipedia.org/wiki/Accidental_sampling)
A quota is established (say 65% women) and researchers are free to choose any respondent they wish as long as the quota is met. (http://en.wikipedia.org/wiki/Non-probability_sampling)
In the theory of finite population sampling, Bernoulli sampling is a sampling process where each element of the population that is sampled is subjected to an independent Bernoulli trial which determines whether the element becomes part of the sample during the drawing of a single sample. An essential property of Bernoulli sampling is that all elements of the population have equal probability of being included in the sample during the drawing of a single sample.
Bernoulli sampling is therefore a special case of Poisson sampling. In Poisson sampling each element of the population may have a different probability of being included in the sample. In Bernoulli sampling, the probability is equal for all the elements.
Because each element of the population is considered separately for the sample, the sample size is not fixed but rather follows a binomial distribution. (http://en.wikipedia.org/wiki/Bernoulli_sampling)
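As a minimal sketch of the process described above (the function name and plain-list data layout are illustrative, not from the source): every element undergoes an independent trial with the same inclusion probability, so the resulting sample size is Binomial(N, p) rather than fixed.

```python
import random

def bernoulli_sample(population, p, seed=None):
    """Bernoulli sampling: each element is included independently with the
    same probability p, so the sample size follows a Binomial(N, p) law."""
    rng = random.Random(seed)
    return [x for x in population if rng.random() < p]
```

With per-element probabilities instead of a single p, the same loop would implement the more general Poisson sampling the text mentions.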
In the theory of probability and statistics, a Bernoulli trial (or binomial trial) is a random experiment with exactly two possible outcomes, "success" and "failure", in which the probability of success is the same every time the experiment is conducted.[1] The mathematical formalization of the Bernoulli trial is known as the Bernoulli process. This article offers an elementary introduction to the concept, whereas the article on the Bernoulli process offers a more advanced treatment. (http://en.wikipedia.org/wiki/Bernoulli_trial)
dichotomous variable
binary_variable
The research is limited to one group, often with a similar characteristic or of small size.(http://en.wikipedia.org/wiki/Non-probability_sampling)
Cluster sampling is a sampling technique used when "natural" but relatively homogeneous groupings are evident in a statistical population. It is often used in marketing research. In this technique, the total population is divided into these groups (or clusters) and a simple random sample of the groups is selected. Then the required information is collected from a simple random sample of the elements within each selected group. This may be done for every element in these groups or a subsample of elements may be selected within each of these groups. A common motivation for cluster sampling is to reduce the total number of interviews and costs given the desired accuracy. Assuming a fixed sample size, the technique gives more accurate results when most of the variation in the population is within the groups, not between them. (http://en.wikipedia.org/wiki/Cluster_sampling)
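The one-stage variant described above (select clusters at random, then take every element within each selected cluster) can be sketched as follows; the dict-of-lists representation of clusters is an assumption for illustration.

```python
import random

def cluster_sample(clusters, m, seed=None):
    """One-stage cluster sampling: draw a simple random sample of m clusters,
    then keep every element inside each selected cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), m)  # list(clusters) = cluster names
    return [elem for c in chosen for elem in clusters[c]]
```

For the two-stage variant, one would additionally subsample elements within each chosen cluster instead of keeping all of them.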
content_analysis
content_analyses
content_analysis
continuous_variable
The demon algorithm is a Monte Carlo method for efficiently sampling members of a microcanonical ensemble with a given energy. An additional degree of freedom, called 'the demon', is added to the system and is able to store and provide energy. If a drawn microscopic state has lower energy than the original state, the excess energy is transferred to the demon. For a sampled state that has higher energy than desired, the demon provides the missing energy if it is available. The demon can not have negative energy and it does not interact with the particles beyond exchanging energy. Note that the additional degree of freedom of the demon does not alter a system with many particles significantly on a macroscopic level.(http://en.wikipedia.org/wiki/Demon_algorithm)
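A toy sketch of one demon update, assuming an ideal-gas system with energy E = Σ v²/2 (the system, step size, and function name are illustrative): the demon absorbs energy released by a move, pays for a move that costs energy, and is never allowed to go negative, so total energy is conserved.

```python
import random

def demon_step(velocities, demon, delta=0.1, rng=random):
    """One demon-algorithm update: propose a small velocity change; accept it
    if the demon can pay the energy difference (it always can when dE < 0)."""
    i = rng.randrange(len(velocities))
    v_new = velocities[i] + rng.uniform(-delta, delta)
    dE = (v_new ** 2 - velocities[i] ** 2) / 2.0
    if dE <= demon:          # demon pays dE (or absorbs -dE when dE < 0)
        velocities[i] = v_new
        demon -= dE
    return demon
```

Repeating this step samples microstates of the system at (approximately) fixed total energy, i.e. a microcanonical ensemble.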
explained variable
experimental variable
response variable
output variable
outcome variable
regressand
measured variable
dependent_variable
Get cases that substantially differ from the dominant pattern (a special type of purposive sample). (http://en.wikipedia.org/wiki/Non-probability_sampling)
direct_interviews
direct_interview
direct_interview
directif_interview
directif_interviews
directif_interview
Distance sampling is a widely used group of closely related methods for estimating the density and/or abundance of populations. The main methods are based on line transects or point transects.[1] In this method of sampling, the data collected are the distances of the objects being surveyed from these randomly placed lines or points, and the objective is to estimate the average density of the objects within a region.[2] (http://en.wikipedia.org/wiki/Distance_sampling)
existing_data
existing_datas
existing_data
In probability theory, an experiment or trial (see below) is any procedure that can be infinitely repeated and has a well-defined set of possible outcomes, known as the sample space.[1] An experiment is said to be random if it has more than one possible outcome, and deterministic if it has only one. A random experiment that has exactly two (mutually exclusive) possible outcomes is known as a Bernoulli trial.[2](http://en.wikipedia.org/wiki/Experiment_(probability_theory))
experimental_design
control variable
controlled variable
extraneous_variable
focus_group
focus_groups
focus_group
A gradsect or gradient-directed transect is a low-input, high-return sampling method where the aim is to maximise information about the distribution of biota in any area of study. Most living things are rarely distributed at random, their placement being largely determined by a hierarchy of environmental factors. For this reason, standard statistical designs based on purely random sampling or systematic (e.g. grid-based) systems tend to be less efficient in recovering information about the distribution of taxa than sample designs that are purposively directed instead along deterministic environmental gradients. (http://en.wikipedia.org/wiki/Gradsect)
historical_comparison
implementations
implementation
implementation
input variable
regressor
feature
exogenous variable
exposure variable
explanatory variable
manipulated variable
predictor variable
controlled variable
risk factor
independent_variable
interview
interviews
interview
The researcher chooses the sample based on who they think would be appropriate for the study. This is used primarily when there is a limited number of people that have expertise in the area being researched. (http://en.wikipedia.org/wiki/Non-probability_sampling)
The Kish grid or Kish selection table is a method for selecting members within a household to be interviewed. It uses a pre-assigned table of random numbers to find the person to be interviewed. It was developed by statistician Leslie Kish in 1949.[1] (http://en.wikipedia.org/wiki/Kish_grid)
large_scale_datas
large_scale_data
large_scale_data
LHS
Latin hypercube sampling (LHS) is a statistical method for generating a sample of plausible collections of parameter values from a multidimensional distribution. The sampling method is often used to construct computer experiments. (http://en.wikipedia.org/wiki/Latin_hypercube_sampling)
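A minimal sketch of LHS on the d-dimensional unit cube (names illustrative): each axis is cut into n equal strata, each stratum is used exactly once per dimension, and the strata are paired across dimensions by independent random permutations.

```python
import random

def latin_hypercube(n, d, seed=None):
    """Latin hypercube sample of n points in [0, 1)^d: one point per
    stratum per axis, strata matched by a random permutation per axis."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)                       # which stratum each point uses
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))                     # n points of d coordinates
```

For a non-uniform target distribution, each coordinate would be pushed through the corresponding inverse CDF.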
In statistics, line-intercept sampling (LIS) is a method of sampling elements in a region whereby an element is sampled if a chosen line segment, called a “transect”, intersects the element.[1](http://en.wikipedia.org/wiki/Line-intercept_sampling)
methodology_technics
Multistage sampling is a complex form of cluster sampling. Cluster sampling is a type of sampling which involves dividing the population into groups (or clusters). Then, one or more clusters are chosen at random and everyone within the chosen cluster is sampled. (http://en.wikipedia.org/wiki/Multistage_sampling)
nominal categorical
nominal_categorical_variable
nominal_variable
non_directif_interview
non_directif_interview
non_directif_interviews
Sampling is the use of a subset of the population to represent the whole population. Probability sampling, or random sampling, is a sampling technique in which the probability of getting any particular sample may be calculated. Nonprobability sampling does not meet this criterion and should be used with caution. Nonprobability sampling techniques cannot be used to infer from the sample to the general population.
The advantage of nonprobability sampling is its lower cost compared to probability sampling. However, one can say much less on the basis of a nonprobability sample than on the basis of a probability sample. Of course, research practice appears to belie this claim, because many analysts draw generalizations (e.g., propose new theory, propose policy) from analyses of nonprobability sampled data. One must ask, however, whether those published works are publishable because tradition makes them so, or because there really are justifiable grounds for drawing generalizations from studies based on nonprobability samples. (http://en.wikipedia.org/wiki/Non-probability_sampling)
on_line_survey
on_line_surveys
on_line_survey
ordinal categorical
ordinal_categorical_variable
paper_questionnaires
paper_questionnaire
paper_questionnaire
participant_observation
participant_observatories
participant_observatory
participant_observatory
In the theory of finite population sampling, Poisson sampling is a sampling process where each element of the population that is sampled is subjected to an independent Bernoulli trial which determines whether the element becomes part of the sample during the drawing of a single sample.(http://en.wikipedia.org/wiki/Poisson_sampling)
population
statistical population
qualitative_methods
quantitative_methods
Quota sampling is a method for selecting survey participants. In quota sampling, a population is first segmented into mutually exclusive sub-groups, just as in stratified sampling. Then judgment is used to select the subjects or units from each segment based on a specified proportion. For example, an interviewer may be told to sample 200 females and 300 males between the ages of 45 and 60. This means that individuals can put a demand on who they want to sample (targeting).
This second step makes the technique non-probability sampling. In quota sampling, there is non-random sample selection and this can be unreliable. For example, interviewers might be tempted to interview those people in the street who look most helpful, or may choose to use accidental sampling to question those closest to them, for time-keeping's sake. The problem is that these samples may be biased because not everyone gets a chance of selection. This non-random element is a source of uncertainty about the nature of the actual sample, and quota versus probability has been a matter of controversy for many years.[1]
Quota sampling is useful when time is limited, a sampling frame is not available, the research budget is very tight or when detailed accuracy is not important. Subsets are chosen and then either convenience or judgment sampling is used to choose people from each subset. The researcher decides how many of each category are selected.
Quota sampling is the non-probability version of stratified sampling. In stratified sampling, subsets of the population are created so that each subset has a common characteristic, such as gender. Random sampling chooses a number of subjects from each subset with, unlike a quota sample, each potential subject having a known probability of being selected. (http://en.wikipedia.org/wiki/Quota_sampling)
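The two steps above (segment the population, then fill each segment's quota non-randomly) can be sketched as follows; the stream-of-respondents model and first-come acceptance rule are illustrative stand-ins for the interviewer's judgment.

```python
def quota_sample(stream, quotas, key):
    """Quota sampling: accept respondents from a stream until each segment's
    quota is met. Selection within a segment is non-random (here: first-come),
    which is exactly what makes this a non-probability method."""
    counts = {seg: 0 for seg in quotas}
    sample = []
    for person in stream:
        seg = key(person)
        if seg in counts and counts[seg] < quotas[seg]:
            counts[seg] += 1
            sample.append(person)
        if counts == quotas:        # all quotas met
            break
    return sample
```

Replacing the first-come rule with a known-probability draw per segment would turn this into stratified sampling.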
secondary_data
In statistics, a simple random sample is a subset of individuals (a sample) chosen from a larger set (a population). Each individual is chosen randomly and entirely by chance, such that each individual has the same probability of being chosen at any stage during the sampling process, and each subset of k individuals has the same probability of being chosen for the sample as any other subset of k individuals.[1] This process and technique is known as simple random sampling, and should not be confused with systematic random sampling. A simple random sample is an unbiased surveying technique.
Simple random sampling is a basic type of sampling, since it can be a component of other more complex sampling methods. The principle of simple random sampling is that every object has the same probability of being chosen. For example, suppose N college students want to get a ticket for a basketball game, but there are only X < N tickets for them, so they decide to have a fair way to see who gets to go. Then, everybody is given a number in the range from 0 to N-1, and random numbers are generated, either electronically or from a table of random numbers. Numbers outside the range from 0 to N-1 are ignored, as are any numbers previously selected. The first X numbers would identify the lucky ticket winners.(http://en.wikipedia.org/wiki/Simple_random_sample)
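The ticket example above amounts to drawing X of N students without replacement so that every subset of size X is equally likely. A minimal sketch using the standard library (names illustrative):

```python
import random

def simple_random_sample(population, x, seed=None):
    """Simple random sample without replacement: every subset of size x has
    the same probability of being chosen as any other subset of size x."""
    return random.Random(seed).sample(population, x)
```

`random.sample` already implements the equal-probability draw, so the helper exists only to fix the seed for reproducibility.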
In sociology and statistics research, snowball sampling[1] (or chain sampling, chain-referral sampling, referral sampling[2][3]) is a non-probability sampling technique where existing study subjects recruit future subjects from among their acquaintances. Thus the sample group appears to grow like a rolling snowball (similarly to breadth-first search (BFS) in computer science). As the sample builds up, enough data is gathered to be useful for research. This sampling technique is often used in hidden populations which are difficult for researchers to access; example populations would be drug users or sex workers. As sample members are not selected from a sampling frame, snowball samples, analogously to BFS samples,[4][5] are subject to numerous biases. For example, people who have many friends are more likely to be recruited into the sample. (http://en.wikipedia.org/wiki/Snowball_sampling)
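The recruit-your-acquaintances process above is structurally a breadth-first search over the acquaintance graph, which can be sketched as follows (the adjacency-dict graph and `max_size` cutoff are illustrative assumptions):

```python
from collections import deque

def snowball_sample(graph, seeds, max_size):
    """Snowball (chain-referral) sampling as BFS: start from seed subjects
    and let each sampled person recruit their acquaintances."""
    sampled, queue = set(seeds), deque(seeds)
    while queue and len(sampled) < max_size:
        person = queue.popleft()
        for friend in graph.get(person, []):
            if friend not in sampled and len(sampled) < max_size:
                sampled.add(friend)
                queue.append(friend)
    return sampled
```

The sketch also makes the bias mentioned in the text visible: well-connected people have more chances to be recruited.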
Square root biased sampling is a sampling method proposed by William H. Press, a computer scientist and computational biologist, for use in airport screenings. It is the mathematically optimal compromise between simple random sampling and strong profiling that most quickly finds a rare malfeasor, given fixed screening resources.[1][2]
Using this method, if a group is n times as likely as the average to be a security risk, then persons from that group will be square root of n times as likely to undergo additional screening.[1] For example, if someone from a profiled group is nine times more likely than the average person to be a security risk, then when using square root biased sampling, people from the profiled group would be screened three times more often than the average person.(http://en.wikipedia.org/wiki/Square_root_biased_sampling)
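The square-root rule above reduces to a one-line computation per group (the dict-based interface is illustrative):

```python
import math

def screening_weights(risk_ratios):
    """Square root biased sampling: a group n times as likely as average to
    be a security risk is screened sqrt(n) times as often as average."""
    return {group: math.sqrt(n) for group, n in risk_ratios.items()}
```

For the worked example in the text, a group nine times as likely as average yields a screening rate three times the average.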
statistical_analysis
story_life
story_lives
story_life
survey
Systematic sampling is a statistical method involving the selection of elements from an ordered sampling frame. The most common form of systematic sampling is an equal-probability method. In this approach, progression through the list is treated circularly, with a return to the top once the end of the list is passed. The sampling starts by selecting an element from the list at random, and then every kth element in the frame is selected, where k, the sampling interval (sometimes known as the skip), is calculated as k = N/n, with n the sample size and N the population size.[1] (http://en.wikipedia.org/wiki/Systematic_sampling)
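The procedure above (random start, then every kth element with k = N/n, wrapping circularly) can be sketched as follows; rounding the fractional positions is one common way to handle a non-integer interval, and the names are illustrative.

```python
import random

def systematic_sample(frame, n, start=None, seed=None):
    """Equal-probability systematic sample: random start, then every k-th
    element of the ordered frame, treated circularly, with k = N / n."""
    N = len(frame)
    k = N / n                                   # sampling interval ("skip")
    if start is None:
        start = random.Random(seed).randrange(N)
    return [frame[round(start + i * k) % N] for i in range(n)]
```

With N = 10 and n = 5 the interval is k = 2, so a start of 0 selects positions 0, 2, 4, 6, 8.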
Descriptive_statisticses
Descriptive_statistics
Descriptive_statistics
statistical_induction
statistical_inference