Current research results plus useful links and downloads on the topic of marketing and research.
Target audience: social science students, colleagues, journalists.
Results of the Christmas survey 2016
Results of the Christmas survey 2015
Consumer research: how values influence our purchases
Using market response functions in practice
The advantages of multisensory communication in flagship stores
Success factors of social media strategies
Work-life balance has a negative effect on employer attractiveness
Christmas gifts already planned: FOM Hochschule surveys over 7,680 people in Germany
FOM study shows: occupational health management strengthens employee retention
So much for feeling attached to home: Germans prefer to travel abroad
Xenius (Arte): Consumption - I buy, therefore I am (Arte, broadcast 18.08.2016 and 24.10.2015)
Christmas gift-giving 2015 (Zeit, 10.12.2015, p. 48)
A market researcher explains the milieu types (BR Fernsehen, 30.03.2015)
I buy, therefore I am (BR Fernsehen, 30.03.2015)
Psychology: on buying and being (BR Fernsehen, 22.12.2014)
Connoisseurs and others (SZ, 17.11.2014)
How ideals shape shopping behavior (SZ, 17.11.2014)
For students of the FOM Hochschule für Oekonomie & Management:
To search for journal articles in business administration, you can conveniently use the full-text search via the FOM Online-Campus.
Under the link: FOM/Meine Hochschule/Tools/Literaturrecherche/EHIS
you can start a full-text search by selecting the relevant databases and then running a simple or an advanced search. The search functions work much like Google.
If a journal is not available via the FOM, you can alternatively use the online access of the
Bayerische Staatsbibliothek (BSB).
For external access to the full-text search you only need a STABI library card (note: you must be a Bavarian citizen). The card is issued on site only.
Then, under the literature search, go to either the BSB catalog, the Datenbank-Infosysteme (DBIS) or the Elektronische Zeitschriftenbibliothek (EZB).
Via DBIS, good results for the social sciences can be found in the following databases:
- Wiso (German-language) or
- SciVerse or ScienceDirect (English-language)
- Periodicals Archive Online (English-language; note: select all databases)
If you go via the EZB, you must know in advance in which journal you want to search.
To make informed decisions, marketing managers must find out what they need to know to develop marketing objectives, select a target market, position (or reposition) their product, and develop product, price, promotion, and place strategies. The information they rely on must be accurate, up to date, and relevant, and marketing research is the systematic way to obtain it. Typically, marketing research is an ongoing process: a series of steps marketing managers take repeatedly to learn about the market. A company can carry out the research itself or commission another company to do so. The research process usually has seven phases (Solomon et al. 2017):
The problem refers to the overall questions for which the firm needs answers. There are three questions to answer.
The strategic triangle can be used to analyze the situation if the problem is not immediately apparent within the company. For example, it may be clear that the company's revenues are declining. The analysis of strengths and weaknesses and the contribution margin analysis may then show, for example, that the company's sales program is obsolete. This phase usually ends with the need for further information (through market research) in order to create a new concept comprising objectives, strategies and measures. The link between the strategic triangle (figure 1) and the concept pyramid consists of information on the three corner points of the strategic triangle and the analysis and forwarding of the relevant information for the creation of a new company concept (Gansser 2014a).
In the case of an inadequate definition of the population, a systematically biased selection of respondents, or careless application of the selection criteria, the results of the investigation may deviate significantly from the true circumstances in the population of interest. This must be avoided.
The analysis of exogenous influences leads to an analysis of the environment. In this context, the opportunities and risks arising from the general environment (analysis of the general environment), from the relevant markets, i.e. the sales markets, the procurement markets, the capital market and the labor market (market analysis), and from the relevant industry as a whole are examined. Internal factors as the cause of a crisis are generally analyzed at the level of a company's strategic business units. Here, the performance-creating areas as well as the area of corporate management are considered (Gansser 2014a).
Figure 1. The role of market research in interaction with company concept and situation analysis
The research design concretizes what information marketers collect and what kind of study they perform. Research designs fall into two categories: secondary research and primary research. Not all research problems need the same research techniques, and marketers solve many problems most effectively with a combination of different techniques. As a first step, market researchers must always ask whether the information they need for the decision already exists. Data originally collected for a purpose other than the problem at hand are called secondary data; information gathered specifically for the current problem is called primary data. Primary data include demographic and psychological information about customers and prospects, customer attitudes and opinions about products and competing products, as well as their awareness or knowledge of a product and their beliefs about the people using these products.
Figure 2. Research designs (based on Solomon et al. 2017)
Primary data collection methods are described as either survey or observation. The degree of structuring is very high for quantitative data collection methods; qualitative data collection techniques are unstructured or only partly structured. It should be emphasized that the data collection tools are the same for both qualitative and quantitative research. Figure 3 illustrates this. Characteristically, three factors influence the quality of research results: validity, reliability, and representativeness. Validity is the extent to which research measures what it was intended to measure. Reliability is the extent to which research measurement techniques are free of errors. Representativeness is the extent to which the consumers in a study resemble the larger group in which the organization has an interest; it requires that every element of the population has a predictable chance of falling into the sample.
Figure 3. Qualitative versus quantitative research
Market researchers usually collect most of their data from a sample of the population of interest. Based on the answers from this sample, they generalize the results to the population. Whether such conclusions are accurate depends on the nature and quality of the sample. There are two main types of samples: probability and nonprobability samples. A probability sample is a sample in which each member of the population has a known chance of being included. A nonprobability sample is a sample in which personal judgment is used to select respondents; a convenience sample is a nonprobability sample composed of individuals who just happen to be available when and where the data are being collected.
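The difference between the two sample types can be sketched in a few lines of Python; the sampling frame of 1,000 consumers below is invented purely for illustration.

```python
import random

random.seed(42)
population = list(range(1000))  # hypothetical sampling frame of 1,000 consumers

# Probability sample: every member has a known (here: equal) chance
# of being included, because selection is random.
probability_sample = random.sample(population, k=100)

# Convenience sample: whoever happens to be available first (here simply
# the first 100 entries of the frame); inclusion chances are unknown.
convenience_sample = population[:100]
```

The probability sample supports statistical inference about the whole frame; the convenience sample does not, because its inclusion probabilities are unknown.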
Primary data can be collected in many ways; surveys, physical measurements and observation are three of the possibilities. The quality of our inferences is only as good as the data. The same logic applies to the people who collect the data: the quality of the research results is only as good as the worst interviewer in the study. Therefore, interviewers must be trained and supervised.
The basic prerequisite for data analysis is determining the measurement level of the available data. As a rule, particularly powerful statistical methods may only be applied if certain measurement levels are present. Measurement is the assignment of numbers or other symbols to characteristics of objects according to certain prespecified rules. Scaling is the generation of a continuum upon which measured objects are located. There are four primary scales of measurement: nominal, ordinal and metric (interval and ratio). A nominal scale is a scale whose numbers serve only as labels or tags for identifying and classifying objects, with a strict one-to-one correspondence between the numbers and the objects. An ordinal scale is a ranking scale in which numbers are assigned to objects to indicate the relative extent to which some characteristic is possessed; thus, it is possible to determine whether an object has more or less of a characteristic than some other object. An interval scale is a scale in which numbers are used to rank objects such that numerically equal distances on the scale represent equal distances in the characteristic being measured. A ratio scale is the highest scale: it allows the researcher to identify or classify objects, rank-order them, and compare intervals or differences, and it is also meaningful to compute ratios of scale values (Malhotra 2009).
This refers to how normally the data are distributed. The techniques discussed first are sensitive to the linearity, normality and equal-variance assumptions of the data. Examining the distribution, skewness and kurtosis is helpful here. It is also important to understand the extent of missing values in the observations and to decide whether to ignore them or to impute values for the missing observations. Another data quality issue is outliers; it is important to decide whether outliers should be retained or removed. If retained, they can distort the data; if eliminated, they can help meet the normality assumptions. The key is to try to understand what the outliers represent (Diez et al. 2014).
One of the tasks of statistical methods is to summarize data on many individual cases (for example, consumers or companies). Statistical measures and representations in the form of tables and graphs are used for this purpose.
In market research, marketers must deal with complex relationships between numerous variables. Aspects of consumer behavior (e.g. brand selection, type of needs) can hardly be explained by one variable, and the success or failure of a product never depends on a single factor (e.g. advertising budget or price). For marketers, therefore, multivariate analytical methods, which are suitable for analyzing many variables simultaneously, play an important role (Malhotra 2009).
There are procedures designed to explain a dependent variable by a certain number of independent variables, for example the market share of a product by advertising budget, price, purchasing power of the target group, relative product quality, etc. Common methods are analysis of variance, regression analysis and conjoint measurement.
In other multivariate procedures, connections between a larger number of variables are the focus. The variables are not classified as dependent or independent; rather, the whole set of interdependent relationships is examined. Common methods are principal component analysis, exploratory factor analysis and cluster analysis.
The last step in market research is to report on the research results and to document the results. In general, a research report must be clear and concise. The readers - top management, clients, creative departments and many others - must be able to easily understand the results of the research.
The purpose of this chapter is to provide a comprehensive understanding of common multivariate analytical methods in market research, leading to an understanding of the appropriate uses of each technique. We do not discuss the underlying statistics of each method. Rather, this chapter is a field guide for understanding the types of research questions that can be formulated, and the capabilities and limitations of each technique in answering them. The essential market research methods, such as regression analysis, logistic regression analysis, factor analysis, structural equation models, conjoint analysis and cluster analysis, are described.
The analysis of variance (ANOVA) can be used to check whether there is a significant difference between two or more mean values. Variance analysis is particularly suitable for comparisons between groups, which explains its application in the evaluation of experiments, where measured values from experimental and control groups must be compared. The number of factors (grouping variables) determines whether it is a one-factorial analysis (one factor) or a multi-factorial analysis (several factors). In variance analysis, a distinction is made between the explained and the unexplained variance of the dependent (metric) variable. The influence of the independent variables (group membership) is assessed based on the relation between explained and unexplained variance. One of the central ideas of variance analysis is to compare the variance of the dependent variable within the groups with the variance between the groups (deviations of the group means from the overall mean). If the variance between the groups is large compared to the variance within the groups, this indicates a clear influence of the independent variable that determines group membership.
The error term must be normally distributed, which implies a normal distribution of the measured values in the population.
The variance of the error term must be equal, i.e. homogeneous, across the groups.
The measured values must be independent of each other.
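A one-factorial ANOVA along these lines can be sketched with scipy; the three groups of ratings below are made up for illustration.

```python
# One-factorial ANOVA: do the mean ratings of three groups differ?
from scipy import stats

group_a = [5.1, 4.8, 5.3, 5.0, 4.9]   # e.g. control group
group_b = [4.2, 4.0, 4.5, 4.1, 4.3]   # e.g. experimental group 1
group_c = [5.9, 6.1, 5.8, 6.0, 6.2]   # e.g. experimental group 2

# F compares the variance between groups with the variance within groups;
# a large F indicates a clear influence of group membership.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
```

A large F statistic with a small p-value indicates that at least two group means differ significantly.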
The central idea of this method is that the different values of a dependent variable (target variable) are traced back to one or more other (independent) variables (influencing variables). In this sense, the dependent variable is explained by the independent or explanatory variables. Regression analysis is a method for analyzing associative relationships between a metric dependent variable (target variable) and one or more independent variables (influencing variables). It can be used in the following ways:
To determine whether the independent variables explain a significant variation in the dependent variable: whether a relationship exists.
To determine how much of the variation in the dependent variable can be explained by the independent variables: strength of the relationship.
To determine the structure or form of the relationship: the mathematical equation relating the independent and dependent variables.
To predict the values of the dependent variable.
To control for other independent variables when evaluating the contributions of a specific variable or set of variables.
Formulation of the regression model:
Based on theoretical and empirical findings as well as previous experience, it must be determined which independent variables could explain the dependent variable of interest.
For the estimation of the regression model, some assumptions are necessary, which can be inferred from the relevant literature.
A regression model in the bivariate case looks like
Y = b0 + b1*X
and in the multiple (multivariate) case
Y = b0 + b1*X1 + b2*X2 + ... + bn*Xn
where Y = dependent variable, X = independent variable, b0 = intercept and b1, ..., bn = slope coefficients of the variables.
The most commonly used technique for fitting such a function is the least squares procedure. It determines the best-fitting line by minimizing the sum of the squared vertical distances of all observations from the line. The estimated parameters (regression coefficients) describe the relationship between the independent and the dependent variables for the examined data set. Using these parameters and the values of the independent variables, the value of the dependent variable can be estimated for each case.
An important measure for assessing a regression model is the proportion of the variance explained by the model relative to the total variance. The strength of association is measured by the square of the multiple correlation coefficient R2, also called the coefficient of determination. R2 lies between 0 and 1; the extreme values mean that the model explains no variance at all or that the dependent variable is explained completely.
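A bivariate least-squares fit and its R2 can be computed directly with numpy; x and y below are invented data (say, advertising budget and market share).

```python
# OLS fit of the bivariate model Y = b0 + b1*X and its R^2, on illustrative data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # e.g. advertising budget
y = np.array([2.1, 4.2, 5.9, 8.1, 9.8])   # e.g. market share

# Least-squares estimates of slope b1 and intercept b0.
b1, b0 = np.polyfit(x, y, 1)

y_hat = b0 + b1 * x
ss_res = np.sum((y - y_hat) ** 2)          # unexplained variance
ss_tot = np.sum((y - y.mean()) ** 2)       # total variance
r_squared = 1 - ss_res / ss_tot            # explained share of total variance
```

Here R2 is close to 1 because the invented data lie almost exactly on a line.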
With logistic regression, the influences on a dependent nominally scaled variable can be examined. It is assumed that the dependent variable is dichotomous, that is, it can take only two values (0 and 1). With the help of several independent variables, the probabilities for the values of the dependent variable (an "event") are estimated. In a multinomial logistic regression, categorical dependent variables with more than two categories can also be analyzed. Since no linear relationship is tested, the regression coefficients are not interpreted exactly as in linear regression; only the direction of the influence can be interpreted directly.
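A minimal sketch with scikit-learn, using made-up data on store visits and a binary purchase outcome; as noted above, only the sign of the coefficient is interpreted directly.

```python
# Logistic regression: probability of a binary "event" (purchase yes/no)
# as a function of one independent variable, on invented data.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1], [2], [3], [4], [5], [6], [7], [8]])  # e.g. store visits
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])                  # 1 = purchase

model = LogisticRegression().fit(X, y)

# Direction of influence: a positive coefficient means more visits raise
# the estimated probability of the event.
direction_positive = model.coef_[0][0] > 0
p_event = model.predict_proba([[7]])[0][1]  # estimated P(purchase | 7 visits)
```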
Data often have many variables, or dimensions, and it is beneficial to reduce them to a smaller number of variables (or dimensions). Relationships between constructs can then be identified more clearly. There are two common methods to reduce the complexity of multivariate metric data by reducing the number of dimensions in the data (Chapman and Feit 2014):
Principal component analysis (PCA) attempts to find uncorrelated linear combinations that capture the maximum variance in the data. The direction of view is from the data to the components.
Exploratory factor analysis (EFA) attempts to model the variance based on a small number of dimensions, while at the same time trying to make the dimensions interpretable in terms of the original variables. It is assumed that the data correspond to a factor model. The direction of view is from the factors to the data.
Reasons for the need for data reduction:
In the technical sense of dimensional reduction, we can use the factor/component values instead of variable sets (e.g. for mean value comparisons, regression analysis and cluster analysis).
We can reduce uncertainty. If we believe that a construct is not clearly measurable, the uncertainty can be reduced with a variable set.
We can simplify the data acquisition effort by focusing on variables that are known to make a significant contribution to the factor/component of interest. If we find that some variables are not important for a factor, we can eliminate them from the record.
The PCA calculates a set of variables (components) in the form of linear equations that capture the linear relationships in the data. The first component captures as much variance as possible from all variables as a single linear function. The second component captures as much as possible of the variance that remains after the first component, while being uncorrelated with the first component. This continues until there are as many components as variables. A scree plot is usually used to determine the number of components: it shows, in the order of the principal components, the variance explained by each of them. The task is to find the point from which the variances of the principal components become markedly smaller; the smaller the variance, the less that component explains. The elbow criterion retains all principal components that lie to the left of the kink point in the scree plot. If there are several kinks, the principal components to the left of the rightmost kink are selected. If there is no kink, the scree plot does not help.
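The variance-per-component view behind the scree plot can be sketched with scikit-learn; the data below are simulated so that a single latent dimension drives four observed variables.

```python
# PCA on simulated data: explained variance per component, the quantity
# plotted in a scree plot for the elbow decision.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
# Four observed variables driven mostly by one latent dimension plus noise.
data = np.hstack([latent + 0.1 * rng.normal(size=(200, 1)) for _ in range(4)])

pca = PCA().fit(data)
explained = pca.explained_variance_ratio_  # sorted descending, sums to 1
```

Plotting `explained` against the component index gives the scree plot; here the first component dominates, so the elbow suggests a one-component solution.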
EFA is a method to assess the relationship of constructs (concepts, that is, factors) to variables. The factors are considered latent variables, which cannot be observed directly. Instead, they are observed empirically through several variables, each of which is an indicator of the underlying factor. These observed values are referred to as manifest variables or indicators. The EFA tries to determine the degree to which the factors account for the observed variance of the manifest variables.
The result of the EFA is similar to that of the PCA: a matrix of factors (similar to the PCA components) and their relationship to the original variables (loadings of the factors on the variables). In contrast to the PCA, the EFA tries to find solutions that are interpretable in terms of the manifest variables. In general, it seeks solutions in which, for each factor, a small number of loadings is very high while the other loadings are small. If this is possible, the factor can be interpreted through this set of variables. Within a PCA, interpretability can likewise be increased via a rotation (e.g. varimax).
First, the number of factors to be estimated must be determined. Two common methods are the elbow criterion and the eigenvalue criterion. The eigenvalue is a metric for the proportion of variance explained: the eigenvalue of a factor indicates how much of the total variance this factor explains. According to the eigenvalue criterion, only factors with an eigenvalue greater than 1 are extracted. Eigenvalues can also be displayed graphically in a scree plot. In figure 4, a kink in the scree plot can be found at the fifth component; the scree plot thus suggests a 5-factor solution.
Figure 4. Eigenvalues of 19 components of human values
Then the factor analysis is estimated, whereby the number of factors to be extracted must be specified. By default, a varimax rotation is performed in the EFA (the coordinate system of the factors is rotated in such a way that the variables are optimally assigned to the factors). With varimax, the factors remain uncorrelated. If correlations between the factors are to be permitted, the oblimin rotation is recommended.
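The eigenvalue criterion and a varimax-rotated EFA can be sketched as follows; the data are simulated with two latent factors, and the `rotation` argument assumes scikit-learn 0.24 or newer.

```python
# Eigenvalue criterion (Kaiser) plus an EFA with varimax rotation,
# on data simulated from two latent factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
f1 = rng.normal(size=(300, 1))
f2 = rng.normal(size=(300, 1))
# Six manifest variables: three load on factor 1, three on factor 2.
data = np.hstack([f1, f1, f1, f2, f2, f2]) + 0.3 * rng.normal(size=(300, 6))

# Kaiser criterion: extract factors whose correlation-matrix eigenvalue > 1.
eigenvalues = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
n_factors = int(np.sum(eigenvalues > 1))

efa = FactorAnalysis(n_components=n_factors, rotation="varimax").fit(data)
loadings = efa.components_.T  # variables x factors loading matrix
```

With the simulated structure, the criterion recovers the two factors, and each rotated factor loads highly on its own three indicators.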
The simplest measure of internal consistency is split-half reliability: the items are divided into two halves, and the resulting scores should be similar in their characteristics. High correlations between the halves indicate high internal consistency. The problem is that the results depend on how the items are split. A common approach to solving this problem is the coefficient alpha (Cronbach's alpha). Coefficient alpha is the average of all possible split-half coefficients resulting from the different ways of splitting the items. It varies from 0 to 1; formally, it is a corrected average correlation coefficient. Rules of thumb for evaluating Cronbach's alpha: below 0.6 is poor, between 0.6 and 0.7 questionable, between 0.7 and 0.8 acceptable, between 0.8 and 0.9 good, and greater than 0.9 excellent.
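The coefficient can be computed directly from its standard definition; the 5 x 3 score matrix below is invented.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the sum).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Three items intended to measure the same construct, scored by 5 respondents:
scores = np.array([[4, 5, 4],
                   [2, 2, 3],
                   [5, 5, 5],
                   [3, 3, 2],
                   [4, 4, 5]])
alpha = cronbach_alpha(scores)
```

For these invented scores alpha is about 0.92, i.e. in the "excellent" range of the rule of thumb above.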
MDS is a method that can also be used to find low-dimensional representations of data. Instead of extracting components or latent factors, as the PCA or EFA do, the MDS works with distances (or similarities). The MDS tries to find a low-dimensional map that best preserves all observed similarities between objects. Information provided by MDS has been used for a variety of marketing applications (Malhotra 2009): image measurement, market segmentation, new product development, assessing advertising effectiveness, pricing analysis, channel decisions and attitude scale construction.
In a study by the FOM in 2016 to explore behavioral types, 22,131 people were asked on a scale from 1 to 7 about their values and their purchasing behavior. With PCA, 13 principal components were formed (5 dimensions of human values and 8 dimensions of consumer behavior).
To represent these 13 dimensions in a two-dimensional space, a metric MDS was calculated. To calculate the MDS, the individual dimensions were first correlated with each other (product-moment correlation). With this approach, the dimensions are represented as points in the multidimensional space such that the distances between the points reflect the intercorrelations of the dimensions.
Figure 5: Perception space with 13 Dimensions of human values and consumer behavior
For non-metric data such as rankings or categorical variables, an MDS algorithm is used that does not assume metric distances (Chapman and Feit 2014).
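A metric MDS on intercorrelations can be sketched with scikit-learn; the 4 x 4 correlation matrix below is hypothetical, and converting correlations to distances via 1 - r is one simple choice.

```python
# Metric MDS: embed dimensions in a 2-D perception space so that
# point distances reflect the (inverted) intercorrelations.
import numpy as np
from sklearn.manifold import MDS

# Hypothetical correlation matrix between four dimensions:
corr = np.array([[1.0, 0.8, 0.1, 0.0],
                 [0.8, 1.0, 0.2, 0.1],
                 [0.1, 0.2, 1.0, 0.7],
                 [0.0, 0.1, 0.7, 1.0]])
distances = 1 - corr  # high correlation -> small distance

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(distances)  # one 2-D point per dimension
```

Highly correlated dimensions (the first two, and the last two) end up close together in the resulting map.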
For practical application, it is important to arrive at a plausible interpretation of the perception space created by the MDS. In this context, the additional integration of independent assessment dimensions into the perception space with the aid of a vector model can be a valuable interpretation aid. In a pre-study on the behavioral types in 2014 (n = 15,563), 40 values were mapped by MDS into a perception space. For interpretation, feature vectors in the form of the dimensions of purchasing behavior are then included in the configuration of the MDS analysis.
Figure 6: Perception space with an integrated vector model (Gansser 2014b)
First, the 6 dimensions of purchasing behavior were correlated with the individual items of the value orientations.
Subsequently, a linear regression analysis was carried out for each of the dimensions of the purchasing behavior.
The two coordinates of the values in the MDS are the independent variables which are used to explain the variance of the purchasing behavior (here the correlation with the values).
For the vector model, the beta coefficients of the two dimensions are of interest. These are used as coordinates for the vector to be drawn.
The vector runs in the diagram as a straight line through the origin and the point defined by these two coordinates, namely as an arrow in the direction of the point.
Cluster analysis is used to find groups (usually of observations) within the data that are as homogeneous as possible internally and as heterogeneous as possible between the groups. Market segmentation is a typical application of cluster analysis. To determine the similarity of observations, different distance measures can be used. For metric features, the Euclidean metric is often used, that is, similarity and distance are determined based on the Euclidean distance. Other distances such as Manhattan or Gower are also possible; these have the advantage that they can be used not only for metric data but also for mixed variable types.
Selection of the variables to be used for group formation (e.g. sociodemographic features, setting variables, lifestyle characteristics).
Quantification of similarities or dissimilarities of objects based on a so-called proximity measure and determination of a distance or similarity matrix.
Grouping of the objects into homogeneous clusters based on the values of the proximity measure using a fusion algorithm.
There are two families of cluster analysis methods: hierarchical methods and partitioning methods. In hierarchical clustering procedures, observations are merged successively: at first, each observation is a separate cluster, and clusters are then combined step by step according to the similarity measure. Partitioning procedures start from a given grouping of the objects and rearrange the objects between the groups until an optimal value of a target function is reached (e.g. minimizing the sum of squared deviations of the observations in a cluster from the cluster center).
A typical approach to cluster analysis is to identify the number of clusters with a hierarchical method in a first step. This number then serves as input for a partitioning method, which in a second step optimizes the assignment of the objects to the clusters. In a study by the FOM in 2016, 22,131 people were asked about their values and their purchasing behavior. By means of PCA, 13 principal components were formed (5 values and 8 purchasing behavior dimensions). A cluster analysis following the above procedure revealed 7 clusters, which are graphically represented in figure 7 for a random sample of n = 200. The cluster plot shows the cluster assignment by color and ellipses against the first two principal components of the predictors.
Figure 7: Cluster plot for the seven behavioral types
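The two-step approach (a hierarchical method to find the number of clusters, then a partitioning method to optimize the assignment) can be sketched as follows; the three well-separated respondent groups are simulated.

```python
# Two-step clustering: Ward's hierarchical method to choose k,
# then k-means to optimize the assignment. Data are simulated.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three well-separated groups of 50 respondents each, in two dimensions.
data = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
                  for c in ([0, 0], [4, 0], [0, 4])])

# Step 1: hierarchical clustering; cut the dendrogram into 3 clusters.
tree = linkage(data, method="ward")
hier_labels = fcluster(tree, t=3, criterion="maxclust")

# Step 2: k-means with the number of clusters found in step 1.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
```

In practice the number of clusters is read off the dendrogram or a fusion-level plot rather than being known in advance, as it is in this simulation.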
Structural equation models are widespread, especially in scientific research, whereas the practical application of these models is rarer, though not insignificant.
SEM combines the methods of factor analysis and regression analysis and makes it possible to estimate both models simultaneously (Kline 2017). In SEM, conclusions are drawn about the dependencies between constructs (latent variables or factors) from the variances and covariances of indicators (manifest variables). The advantage is that a larger number of interdependent relationships can be analyzed and latent variables can be included in these relationships at the same time. Ultimately, it is a matter of examining theories about the existence of latent variables and their connections. Figure 8 illustrates the analysis of multiple dependencies: Gansser and Krol (2017) investigate the influence factors on the behavioral intention to use location-based services, and the numerous direct and indirect dependencies between the constructs under consideration are immediately apparent.
Figure 8: SEM to explore the potential of location-based services for market research (Gansser/Krol 2017)
The measurement of latent variables has already been described in the section on dimension reduction. The setup and analysis of an SEM essentially proceeds in three steps: formulation of the model, model estimation and evaluation of the model:
Structural equation models are formulated based on theoretical considerations. The development of the model for the acceptance of location-based services (figure 8) can be derived, for example, from four basic theories on behavioral research in technology use: Theory of Reasoned Action (TRA), Theory of Planned Behavior (TPB), Technology Acceptance Model (TAM) and Theory of Acceptance and Use of Technology (UTAUT). The relationships between the latent variables ultimately represent the hypotheses to be examined.
Figure 9: Reflective and formative measurement models
For the operationalization of the latent constructs, it must be decided whether a reflective or a formative specification of the measurement models is appropriate. In reflective measurement models, it is assumed that the values of the observable indicators are caused by the latent construct; a change in the latent construct would be reflected in a change in all the indicators assigned to it. In formative measurement models, the relationship between indicators and latent construct is exactly the opposite: here, the observable indicators cause the latent construct.
The parameters of the model can be estimated in different ways; basically, there are two approaches. For most researchers, SEM is equivalent to covariance-based SEM (CB-SEM). The goal of CB-SEM is theory testing, theory confirmation, or the comparison of alternative theories, and the parameters of the model are estimated simultaneously. The second approach is variance-based SEM (partial least squares SEM, PLS-SEM). Its goal is to predict key target constructs or to identify key driver constructs. Here, the factor values are first determined successively for the measurement models and then used in a second step as measured values for the latent variables in a regression analysis.
The results are evaluated with various quality criteria and inferential statistical tests. A distinction is made between global and local quality criteria: while global quality criteria allow an assessment of the consistency of the overall model with the collected data, local quality criteria allow checking the measurement quality of individual indicators and latent variables. This also depends on the type of parameter estimation. At this point, we refer to the extensive specialized literature on SEM.
Among the large number of forecasting instruments in financial management and controlling, only the three most important methods for predicting the behavior of market participants are presented here: conjoint analysis, marketing intelligence, and Monte Carlo simulation.
Conjoint analysis is a widely used and established method for measuring preferences. In practice, it is mainly used for price estimation, new product planning, and customer segmentation. Compared with other methods, conjoint measurement is considered a more realistic form of preference measurement with higher validity. Depending on the procedure, one or several product concepts are submitted for assessment. The products are defined by features, each of which takes on a certain set of levels; from the responses, part-worth utilities can be identified for each level of a feature. Based on the measured preferences, a prognosis can be generated as to which product is preferred and likely to be bought in the future.
Conjoint analysis has been used in marketing for a variety of purposes, including the following (Malhotra 2009):
Determining the relative importance of attributes in the consumer choice process.
Estimating market share of brands that differ in attribute levels.
Determining the composition of the most preferred brand.
Segmenting the market based on similarity of preferences for attribute levels.
To calculate these predictions, purchase decision models are applied. For various hypothetical product alternatives, total utility values are estimated and subsequently transformed into choice probabilities. In all models, the choice follows the principle of utility maximization, so alternatives with higher utility are preferred to alternatives with lower utility. Using such decision models for forecasting is problematic when there is no information on the real purchasing decision processes, so that the market researcher must make an individual selection of the decision model. This disadvantage exists in the group of traditional conjoint analyses, in which the assessed alternatives are placed in a preference ranking by the respondents or are evaluated by means of rating scales.
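The transformation of total utilities into choice probabilities is typically done with a logit model, in which the probability of choosing an alternative equals its exponentiated utility divided by the sum of the exponentiated utilities of all alternatives in the choice set. A minimal sketch with purely hypothetical utility values:

```python
import numpy as np

def logit_choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(u_i) / sum_j exp(u_j)."""
    u = np.asarray(utilities, dtype=float)
    expu = np.exp(u - u.max())  # subtract the max for numerical stability
    return expu / expu.sum()

# Hypothetical total utilities for a choice set:
# alternative A, alternative B, and the no-choice option
probs = logit_choice_probabilities([1.2, 0.4, -0.5])
print(probs.round(3))
```

The alternative with the highest total utility receives the highest choice probability, which is exactly the utility-maximization principle described above.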
Choice-based conjoint analysis eliminates this disadvantage: here, respondents select the most attractive alternative from different choice sets. A choice set consists of two hypothetical alternatives and a non-selection option. It can therefore be assumed that the natural buying behavior of the person is analyzed. Choice-based conjoint analysis is a probabilistic method for measuring preference structures: the part-worth utilities of the individual attribute levels are estimated from the total utility, and the assessment objects are constructed based on experimental designs.
Procedure for choice-based conjoint analysis (Gansser and Füller 2015):
Selection of the characteristic expressions
Definition of experimental design
Creation of orthogonally fractionated choice sets
Presentation of the stimuli to the survey participants
Estimation of the utility function
Figure 10: First choice set of a seminar concept (Gansser and Füller 2015)
The result of the conjoint analysis is a set of odds ratios. In the case of nominal characteristics, the odds ratio (exp(coef)) of a variable indicates the odds of one level of a feature relative to the base category, i.e. a ratio of chances. For each feature, the level with the highest odds ratio relative to its base category can then be interpreted. For example, the odds ratio of 5.7 for the location of the training and the level "exclusively presence seminar" (T.1) signals that the odds of a booking are about 5.7 times higher for the presence seminar than for the base category of online seminars.
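The relationship between an estimated logit coefficient and the reported odds ratio is simply exponentiation. A short numerical illustration (the coefficient of 1.74 is a hypothetical value chosen to reproduce the odds ratio of 5.7 from the example):

```python
import math

# Hypothetical logit coefficient for the level "presence seminar"
# relative to the base category "online seminar"
coef = 1.74

# exp(coef) is the odds ratio: the factor by which the booking odds
# change relative to the base category
odds_ratio = math.exp(coef)
print(round(odds_ratio, 1))
```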
Figure 11: Results of the parameter estimation of the utility function (Gansser and Füller 2015)
Finally, conjoint analysis can be used to answer which offer people prefer and are likely to buy in the future. In addition to the odds of a feature level relative to a base level, the method also allows the relevance, and thus the importance, of the different features to be measured.
A marketing intelligence system comprises a set of procedures for maintaining everyday information about developments in the marketing environment. The purpose of collecting this information is accurate and confident decision-making when determining the marketing concept (goals, strategies, and activities). Once the data are collected (manually or automatically), the analysis is usually carried out with software-based systems. The core of the intelligence approach is that sources of information of very different natures are brought together in a single environment after capture. The goal is the integrated collection and visualization of internal and external data sources. This makes it possible to view current key performance indicators (KPIs) in real time, or as fast as the data can be captured, and to analyze trends. The term "business intelligence" (BI) has become established as an umbrella concept for all methods of analyzing business performance. Accordingly, a company has different areas of the intelligence approach, each aimed at analyzing and optimizing a partial performance. In addition to marketing intelligence, sales intelligence has also become established; both share the requirement of integrated efficiency measurement across departments. The trend is for classic controlling tasks to be transferred from the central controlling department to the operating divisions. As a closed system, BI includes all the capabilities that can be used to capture, analyze, and improve business information.
Analyses in the context of intelligence approaches should meet the following requirements:
Evaluations must be possible in real time.
Accessibility inside and outside the company with web applications (also mobile).
The data are available in standardized and consolidated form. They do not have to be collected, consolidated and evaluated by the user.
Users can customize the analyses and reports to their individual requirements.
Menus for creating clear and meaningful charts are available for the analyses.
Multi-dimensional analyses (OLAP) and data integration from all business areas lead to new findings.
To efficiently capture data from the environment and the strategic triangle, specific data sources have been developed that are particularly suitable for marketing intelligence. These data can usually be collected by the company itself; in the case of less sensitive data, an external agency can also be commissioned:
The sales force as free market researchers: Field staff are the people closest to the customer. They can observe most easily, and without complex market research, how customers use the products. Ideas for new products can be generated this way; in addition, information about competitors and retailers can be recorded.
Mystery shopping in the retail store: Using undercover test shoppers and their observations, the quality of advice and the competence of the sales staff are assessed. The method is not undisputed and should be given an extended focus that also covers the quality of the store facilities. Companies can also use mystery shoppers to assess the quality of the customer experience.
Competition analysis: This can be done by purchasing the competitor's products, reviewing the advertising campaigns, press reporting, reading their published reports, and so on. Competitive intelligence must be legal and ethical.
Customer community: Suitably identified customers (with regard to group size, demand, and representativeness) can provide valuable information on the product, product use, and sales channels as participants in a community (online or offline) or a panel. This enables them to be actively involved in the company's improvement processes. Online platforms such as chat rooms, blogs, discussion forums, and customer review boards can be used to generate customer feedback, allowing the company to understand customer experiences and impressions.
Official data: Governments from almost all countries publish reports on population development, demographic characteristics, agricultural production and other data. These country-specific basic data can be helpful when planning business concepts.
Marketing planning attempts to capture the future in numbers. Based on assumptions, future values are determined and target values are calculated. For the calculation of the target variable, a model with cause-effect relationships is formed. Since the target variables are uncertain, a risk analysis identifies the risks associated with various scenarios.
The Monte Carlo method works with simulated random numbers. At the core of the simulation is a random number generator that produces random numbers whose distribution corresponds to the probability distribution under consideration. In each simulation run, a value is determined by random selection for each influencing variable, and any number of runs can be carried out. The result value is then calculated from the realizations of all influencing variables. In other words, Monte Carlo simulation generates distributions of possible results. After a sufficiently large number of runs, the probability distribution and the relevant characteristic numbers such as mean, variance, and confidence interval can be calculated. An application-oriented introduction to Monte Carlo simulation is provided by Robert and Casella (2009).
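The procedure can be sketched in a few lines. Assume a simple planning model, contribution = quantity × (price − unit cost), where quantity and unit cost are uncertain influencing variables (all distributions and parameter values below are hypothetical illustrations, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
runs = 100_000

# Random draws for the uncertain influencing variables
quantity = rng.normal(loc=10_000, scale=1_500, size=runs)  # units sold
unit_cost = rng.uniform(low=4.0, high=6.0, size=runs)      # cost per unit
price = 9.0                                                # fixed price

# One result value per simulation run
contribution = quantity * (price - unit_cost)

# Characteristic numbers of the resulting distribution
mean = contribution.mean()
std = contribution.std()
ci_low, ci_high = np.percentile(contribution, [2.5, 97.5])
print(f"mean: {mean:,.0f}, std: {std:,.0f}, "
      f"95% interval: [{ci_low:,.0f}, {ci_high:,.0f}]")
```

Instead of a single point forecast, the planner obtains a full distribution of the target variable and can read off how likely unfavorable scenarios are.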
Study on the validation of the personality traits adventurousness and routine behavior
(including a self-test: How much of an adventurer is in you?)
Marketing planning as an instrument for overcoming crises
(all steps for creating a marketing plan)
Freely available materials on statistics and data analysis at OpenIntro
Installing R Markdown
Guide to installing R Markdown
Getting used to R, RStudio, and R Markdown
Useful links on quantitative data analysis:
Tips and tricks in RStudio
R for Data Science
Valuable tips on statistics and questionnaire design
corrplot in R (graphical visualization of correlations)
Boxplot with mean and standard deviation in ggplot2
Mediator effects in regression analysis
Where can I get data?
Academic Journal Guide
I want to conduct a survey:
For conducting online surveys, I recommend the free online survey tool Soscisurvey
Where can I find questionnaires or parts for my questionnaire?
A compilation and selection of social science items and scales
Software is also available for the data analysis of qualitative studies, e.g. here ...