Methodological Triangulation Using Neural Networks for Business Research

DOI: 10.1155/2012/517234

Abstract:

Artificial neural network (ANN) modeling methods are becoming more widely used as both a research and application paradigm across a much wider variety of business, medical, engineering, and social science disciplines. The combination, or triangulation, of ANN methods with more traditional methods can facilitate the development of high-quality research models and also improve output performance for real-world applications. Prior methodological triangulation that utilizes ANNs is reviewed, and a new triangulation of ANNs with structural equation modeling and cluster analysis for predicting an individual's computer self-efficacy (CSE) is presented to empirically analyze the effect of methodological triangulation, at least for this specific information systems research case. A new construct, engagement, is identified as a necessary component of CSE models, and the subsequent triangulated ANN models achieve 84% CSE group prediction accuracy.

1. Introduction

Artificial neural networks (ANNs) have been a popular research and implementation paradigm in multiple domains for several decades [1–9]. Recent literature advocates the further use of ANNs as a research methodology, especially in previously untried or underutilized domains [10, 11]. However, because of the early premise that ANNs are black boxes (i.e., that it is difficult to evaluate the contribution of the independent variables), demonstrating the rigor and generalizability of results from neural network research has been problematic. Similarities between ANNs and various statistical methods (which have been shown to be both rigorous and generalizable) have been described for potential adopters [10, 12]. A common research paradigm for ANN researchers is to compare results obtained using an ANN with those of more traditional statistical methods, including regression [13–16], discriminant analysis [17–21], other single statistical methods [22–24], and multiple statistical methods [25–28]. Of the 16 articles just referenced, the majority report ANNs performing either similarly to (2 articles) or better than (12 articles) the compared statistical methods within the specific application domain. Although ANNs now have an established, if short, history, their black-box nature has led to adoption resistance in numerous business-related disciplines [29]. Methodological triangulation may help to overcome these adoption and usage reservations as well as provide a means for improving the overall efficacy of ANN applications. Methodological triangulation is the utilization of multiple methods on …
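The comparison paradigm described above, in which an ANN and a traditional statistical model are fitted to the same data and their predictive accuracy is contrasted, can be illustrated with a minimal sketch. The paper does not give implementation details, so the following Python example uses scikit-learn with synthetic data standing in for a survey sample; the feature set, model settings, and group labels are illustrative assumptions, not the study's actual CSE data or configuration.

```python
# Minimal sketch of the ANN-versus-traditional-statistics comparison paradigm.
# Synthetic data and model settings are illustrative only; they are not the
# study's actual CSE survey data or model configuration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Stand-in for survey predictors and a binary group label (e.g., high/low CSE).
X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Scale inputs: helps MLP training and does not hurt logistic regression.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Traditional statistical baseline: logistic regression.
logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ANN: a small feedforward multilayer perceptron trained by backpropagation.
ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

# Triangulation-style comparison: same data, two methods, one holdout metric.
for name, model in [("logistic regression", logit), ("ANN (MLP)", ann)]:
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: holdout accuracy = {acc:.2%}")
```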

References

[1]  R. Dybowski and V. Gant, “Artificial neural networks in pathology and medical laboratories,” The Lancet, vol. 346, no. 8984, pp. 1203–1207, 1995.
[2]  E. Y. Li, “Artificial neural networks and their business applications,” Information and Management, vol. 27, no. 5, pp. 303–313, 1994.
[3]  S. H. Liao and C. H. Wen, “Artificial neural networks classification and clustering of methodologies and applications—literature analysis from 1995 to 2005,” Expert Systems with Applications, vol. 32, no. 1, pp. 1–11, 2007.
[4]  G. Montague and J. Morris, “Neural-network contributions in biotechnology,” Trends in Biotechnology, vol. 12, no. 8, pp. 312–324, 1994.
[5]  K. A. Smith and J. N. D. Gupta, “Neural networks in business: techniques and applications for the operations researcher,” Computers and Operations Research, vol. 27, no. 11-12, pp. 1023–1044, 2000.
[6]  B. Widrow, D. E. Rumelhart, and M. A. Lehr, “Neural networks: applications in industry, business and science,” Communications of the ACM, vol. 37, no. 3, pp. 93–105, 1994.
[7]  B. K. Wong, T. A. Bodnovich, and Y. Selvi, “Neural network applications in business: a review and analysis of the literature (1988-95),” Decision Support Systems, vol. 19, no. 4, pp. 301–320, 1997.
[8]  B. K. Wong, V. S. Lai, and J. Lam, “A bibliography of neural network business applications research: 1994–1998,” Computers and Operations Research, vol. 27, no. 11-12, pp. 1045–1076, 2000.
[9]  F. Zahedi, “A meta-analysis of financial applications of neural networks,” International Journal of Computational Intelligence and Organization, vol. 1, no. 3, pp. 164–178, 1996.
[10]  K. B. DeTienne, D. H. DeTienne, and S. A. Joshi, “Neural networks as statistical tools for business researchers,” Organizational Research Methods, vol. 6, no. 2, pp. 236–265, 2003.
[11]  G. P. Zhang, “Avoiding pitfalls in neural network research,” IEEE Transactions on Systems, Man and Cybernetics Part C: Applications and Reviews, vol. 37, no. 1, pp. 3–16, 2007.
[12]  B. Warner and M. Misra, “Understanding neural networks as statistical tools,” American Statistician, vol. 50, no. 4, pp. 284–293, 1996.
[13]  A. Bansal, R. J. Kauffman, and R. R. Weitz, “Comparing the modeling performance of regression and neural networks as data quality varies: a business value approach,” Journal of Management Information Systems, vol. 10, no. 1, pp. 11–32, 1993.
[14]  U. A. Kumar, “Comparison of neural networks and regression analysis: a new insight,” Expert Systems with Applications, vol. 29, no. 2, pp. 424–430, 2005.
[15]  S. W. Palocsay and M. M. White, “Neural network modeling in cross-cultural research: a comparison with multiple regression,” Organizational Research Methods, vol. 7, no. 4, pp. 389–399, 2004.
[16]  S. Sakai, K. Kobayashi, S. I. Toyabe, N. Mandai, T. Kanda, and K. Akazawa, “Comparison of the levels of accuracy of an artificial neural network model and a logistic regression model for the diagnosis of acute appendicitis,” Journal of Medical Systems, vol. 31, no. 5, pp. 357–364, 2007.
[17]  S. Ghosh-Dastidar, H. Adeli, and N. Dadmehr, “Mixed-band wavelet-chaos-neural network methodology for epilepsy and epileptic seizure detection,” IEEE Transactions on Biomedical Engineering, vol. 54, no. 9, pp. 1545–1551, 2007.
[18]  K. E. Graves and R. Nagarajah, “Uncertainty estimation using fuzzy measures for multiclass classification,” IEEE Transactions on Neural Networks, vol. 18, no. 1, pp. 128–140, 2007.
[19]  R. C. Lacher, P. K. Coats, S. C. Sharma, and L. F. Fant, “A neural network for classifying the financial health of a firm,” European Journal of Operational Research, vol. 85, no. 1, pp. 53–65, 1995.
[20]  R. Sharda, “Neural networks for the MS/OR analyst: an application bibliography,” Interfaces, vol. 24, no. 2, pp. 116–130, 1994.
[21]  V. Subramanian, M. S. Hung, and M. Y. Hu, “An experimental evaluation of neural networks for classification,” Computers and Operations Research, vol. 20, no. 7, pp. 769–782, 1993.
[22]  J. Farifteh, F. Van der Meer, C. Atzberger, and E. J. M. Carranza, “Quantitative analysis of salt-affected soil reflectance spectra: a comparison of two adaptive methods (PLSR and ANN),” Remote Sensing of Environment, vol. 110, no. 1, pp. 59–78, 2007.
[23]  B. A. Jain and B. N. Nag, “Performance evaluation of neural network decision models,” Journal of Management Information Systems, vol. 14, no. 2, pp. 201–216, 1997.
[24]  L. M. Salchenberger, E. M. Cinar, and N. A. Lash, “Neural networks: a new tool for predicting thrift failures,” Decision Sciences, vol. 23, no. 4, pp. 899–916, 1992.
[25]  I. Kurt, M. Ture, and A. T. Kurum, “Comparing performances of logistic regression, classification and regression tree, and neural networks for predicting coronary artery disease,” Expert Systems with Applications, vol. 34, no. 1, pp. 366–374, 2008.
[26]  T. S. Lim, W. Y. Loh, and Y. S. Shih, “Comparison of prediction accuracy, complexity, and training time of thirty-three old and new classification algorithms,” Machine Learning, vol. 40, no. 3, pp. 203–228, 2000.
[27]  K. Y. Tam and M. Y. Kiang, “Managerial applications of neural networks: the case of bank failure predictions,” Management Science, vol. 38, no. 3, pp. 926–947, 1992.
[28]  G. K. F. Tso and K. K. W. Yau, “Predicting electricity energy consumption: a comparison of regression analysis, decision tree and neural networks,” Energy, vol. 32, no. 9, pp. 1761–1768, 2007.
[29]  L. Yang, C. W. Dawson, M. R. Brown, and M. Gell, “Neural network and GA approaches for dwelling fire occurrence prediction,” Knowledge-Based Systems, vol. 19, no. 4, pp. 213–219, 2006.
[30]  J. Mingers, “Combining IS research methods: towards a pluralist methodology,” Information Systems Research, vol. 12, no. 3, pp. 240–259, 2001.
[31]  A. Tashakkori and C. Teddlie, Mixed Methodology: Combining Qualitative and Quantitative Approaches, Sage, London, UK, 1998.
[32]  S. C. Petter and M. J. Gallivan, “Toward a framework for classifying and guiding mixed method research in information systems,” in Proceedings of the 37th Hawaii International Conference on System Sciences, pp. 4061–4070, IEEE Computer Society, Los Alamitos, Calif, USA, 2004.
[33]  P. M. Podsakoff, S. B. MacKenzie, J. Y. Lee, and N. P. Podsakoff, “Common method biases in behavioral research: a critical review of the literature and recommended remedies,” Journal of Applied Psychology, vol. 88, no. 5, pp. 879–903, 2003.
[34]  P. M. Podsakoff and D. W. Organ, “Self-reports in organizational research: problems and prospects,” Journal of Management, vol. 12, no. 4, pp. 531–554, 1986.
[35]  S. Walczak, “Evaluating medical decision making heuristics and other business heuristics with neural networks,” in Intelligent Decision Making an AI Based Approach, G. Phillips-Wren and L. C. Jain, Eds., chapter 10, Springer, New York, NY, USA, 2008.
[36]  K. Hornik, “Approximation capabilities of multilayer feedforward networks,” Neural Networks, vol. 4, no. 2, pp. 251–257, 1991.
[37]  K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
[38]  H. White, “Connectionist nonparametric regression: multilayer feedforward networks can learn arbitrary mappings,” Neural Networks, vol. 3, no. 5, pp. 535–549, 1990.
[39]  N. R. Swanson and H. White, “A model-selection approach to assessing the information in the term structure using linear models and artificial neural networks,” Journal of Business & Economic Statistics, vol. 13, no. 3, pp. 265–275, 1995.
[40]  S. A. Billings and G. L. Zheng, “Radial basis function network configuration using genetic algorithms,” Neural Networks, vol. 8, no. 6, pp. 877–890, 1995.
[41]  J. N. D. Gupta, R. S. Sexton, and E. A. Tunc, “Selecting scheduling heuristics using neural networks,” INFORMS Journal on Computing, vol. 12, no. 2, pp. 150–162, 2000.
[42]  M. Rocha, P. Cortez, and J. Neves, “Evolution of neural networks for classification and regression,” Neurocomputing, vol. 70, no. 16-18, pp. 2809–2816, 2007.
[43]  R. Sexton, “Identifying irrelevant variables in chaotic time series problems: using the genetic algorithm for training neural networks,” Journal of Computational Intelligence in Finance, vol. 6, no. 5, pp. 34–42, 1998.
[44]  L. Yi-Hui, “Evolutionary neural network modeling for forecasting the field failure data of repairable systems,” Expert Systems with Applications, vol. 33, no. 4, pp. 1090–1096, 2007.
[45]  A. Tahai, S. Walczak, and J. T. Rigsby, “Improving artificial neural network performance through input variable selection,” in Applications of Fuzzy Sets and The Theory of Evidence to Accounting II, P. Siegel, K. Omer, A. deKorvin, and A. Zebda, Eds., pp. 277–292, JAI Press, Stamford, Conn, USA, 1998.
[46]  Z. Hua, Y. Wang, X. Xu, B. Zhang, and L. Liang, “Predicting corporate financial distress based on integration of support vector machine and logistic regression,” Expert Systems with Applications, vol. 33, no. 2, pp. 434–440, 2007.
[47]  R. J. Kuo, Y. L. An, H. S. Wang, and W. J. Chung, “Integration of self-organizing feature maps neural network and genetic K-means algorithm for market segmentation,” Expert Systems with Applications, vol. 30, no. 2, pp. 313–324, 2006.
[48]  R. J. Kuo, L. M. Ho, and C. M. Hu, “Integration of self-organizing feature map and K-means algorithm for market segmentation,” Computers and Operations Research, vol. 29, no. 11, pp. 1475–1493, 2002.
[49]  T. Marwala, “Bayesian training of neural networks using genetic programming,” Pattern Recognition Letters, vol. 28, no. 12, pp. 1452–1458, 2007.
[50]  D. Dancey, Z. A. Bandar, and D. McLean, “Logistic model tree extraction from artificial neural networks,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 37, no. 4, pp. 794–802, 2007.
[51]  K. L. Hsieh and Y. S. Lu, “Model construction and parameter effect for TFT-LCD process based on yield analysis by using ANNs and stepwise regression,” Expert Systems with Applications, vol. 34, no. 1, pp. 717–724, 2008.
[52]  R. Setiono, J. Y. L. Thong, and C. S. Yap, “Symbolic rule extraction from neural networks: an application to identifying organizations adopting IT,” Information and Management, vol. 34, no. 2, pp. 91–101, 1998.
[53]  T. G. Dietterich, “Ensemble methods in machine learning,” in Proceedings of the 1st International Workshop on Multiple Classifier Systems, Lecture Notes in Computer Science, pp. 1–15, Springer, Cagliari, Italy, 2000.
[54]  L. K. Hansen and P. Salamon, “Neural network ensembles,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 10, pp. 993–1001, 1990.
[55]  G. P. Zhang and V. L. Berardi, “Time series forecasting with neural network ensembles: an application for exchange rate prediction,” Journal of the Operational Research Society, vol. 52, no. 6, pp. 652–664, 2001.
[56]  D. Chetchotsak and J. M. Twomey, “Combining neural networks for function approximation under conditions of sparse data: the biased regression approach,” International Journal of General Systems, vol. 36, no. 4, pp. 479–499, 2007.
[57]  Z. H. Zhou and Y. Jiang, “NeC4.5: neural ensemble based C4.5,” IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 6, pp. 770–773, 2004.
[58]  S. Walczak and M. Parthasarathy, “Modeling online service discontinuation with nonparametric agents,” Information Systems and e-Business Management, vol. 4, no. 1, pp. 49–70, 2006.
[59]  R. Pakath and J. S. Zaveri, “Specifying critical inputs in a genetic algorithm-driven decision support system: an automated facility,” Decision Sciences, vol. 26, no. 6, pp. 749–771, 1995.
[60]  M. Smith, Neural Networks for Statistical Modeling, Van Nostrand Reinhold, New York, NY, USA, 1993.
[61]  S. Walczak and N. Cerpa, “Heuristic principles for the design of artificial neural networks,” Information and Software Technology, vol. 41, no. 2, pp. 107–117, 1999.
[62]  H. Z. Huang, R. Bo, and W. Chen, “An integrated computational intelligence approach to product concept generation and evaluation,” Mechanism and Machine Theory, vol. 41, no. 5, pp. 567–583, 2006.
[63]  S. Walczak, “An empirical analysis of data requirements for financial forecasting with neural networks,” Journal of Management Information Systems, vol. 17, no. 4, pp. 203–222, 2001.
[64]  A. Durand, O. Devos, C. Ruckebusch, and J. P. Huvenne, “Genetic algorithm optimisation combined with partial least squares regression and mutual information variable selection procedures in near-infrared quantitative analysis of cotton-viscose textiles,” Analytica Chimica Acta, vol. 595, no. 1-2, pp. 72–79, 2007.
[65]  F. D. Davis, “Perceived usefulness, perceived ease of use, and user acceptance of information technology,” MIS Quarterly: Management Information Systems, vol. 13, no. 3, pp. 319–339, 1989.
[66]  G. M. Marakas, M. Y. Yi, and R. D. Johnson, “The multilevel and multifaceted character of computer self-efficacy: toward clarification of the construct and an integrative framework for research,” Information Systems Research, vol. 9, no. 2, pp. 126–163, 1998.
[67]  J. J. Martocchio, “Effects of conceptions of ability on anxiety, self-efficacy, and learning in training,” Journal of Applied Psychology, vol. 79, no. 6, pp. 819–825, 1994.
[68]  R. T. Christoph, G. A. Schoenfeld Jr, and J. W. Tansky, “Overcoming barriers to training utilizing technology: the influence of self-efficacy factors on multimedia-based training receptiveness,” Human Resource Development Quarterly, vol. 9, no. 1, pp. 25–38, 1998.
[69]  M. E. Gist, C. Schwoerer, and B. Rosen, “Effects of alternative training methods on self-efficacy and performance in computer software training,” Journal of Applied Psychology, vol. 74, no. 6, pp. 884–891, 1989.
[70]  J. E. Mathieu, J. W. Martineau, and S. I. Tannenbaum, “Individual and situational influences on the development of self-efficacy: implications for training effectiveness,” Personnel Psychology, vol. 46, no. 1, pp. 125–147, 1993.
[71]  R. Agarwal, V. Sambamurthy, and R. M. Stair, “Research report: the evolving relationship between general and specific computer self-efficacy—an empirical assessment,” Information Systems Research, vol. 11, no. 4, pp. 418–430, 2000.
[72]  W. Hong, J. Y. L. Thong, W. M. Wong, and K. Y. Tam, “Determinants of user acceptance of digital libraries: an empirical examination of individual differences and system characteristics,” Journal of Management Information Systems, vol. 18, no. 3, pp. 97–124, 2001.
[73]  V. Venkatesh, “Determinants of perceived ease of use: integrating control, intrinsic motivation, and emotion into the technology acceptance model,” Information Systems Research, vol. 11, no. 4, pp. 342–365, 2000.
[74]  M. Igbaria and J. Iivari, “The effects of self-efficacy on computer usage,” Omega, vol. 23, no. 6, pp. 587–605, 1995.
[75]  D. R. Compeau and C. A. Higgins, “Computer self-efficacy: development of a measure and initial test,” MIS Quarterly: Management Information Systems, vol. 19, no. 2, pp. 189–210, 1995.
[76]  B. Hasan, “The influence of specific computer experiences on computer self-efficacy beliefs,” Computers in Human Behavior, vol. 19, no. 4, pp. 443–450, 2003.
[77]  R. D. Johnson and G. M. Marakas, “Research report: the role of behavioral modeling in computer skills acquisition—toward refinement of the model,” Information Systems Research, vol. 11, no. 4, pp. 402–417, 2000.
[78]  R. W. Stone and J. W. Henry, “The roles of computer self-efficacy and outcome expectancy in influencing the computer end-user's organizational commitment,” Journal of End User Computing, vol. 15, no. 1, pp. 38–53, 2003.
[79]  S. Taylor and P. Todd, “Assessing IT usage: the role of prior experience,” MIS Quarterly: Management Information Systems, vol. 19, no. 4, pp. 561–568, 1995.
[80]  R. Torkzadeh, K. Pflughoeft, and L. Hall, “Computer self-efficacy, training effectiveness and user attitudes: an empirical study,” Behaviour and Information Technology, vol. 18, no. 4, pp. 299–309, 1999.
[81]  J. B. Thatcher and P. L. Perrewé, “An empirical examination of individual traits as antecedents to computer anxiety and computer self-efficacy,” MIS Quarterly: Management Information Systems, vol. 26, no. 4, pp. 381–396, 2002.
[82]  S. Taylor and P. A. Todd, “Understanding information technology usage: a test of competing models,” Information Systems Research, vol. 6, no. 2, pp. 144–176, 1995.
[83]  D. S. Staples, J. S. Hulland, and C. A. Higgins, “A self-efficacy theory explanation for the management of remote workers in virtual organizations,” Organization Science, vol. 10, no. 6, pp. 758–776, 1999.
[84]  R. Agarwal and J. Prasad, “A conceptual and operational definition of personal innovativeness in the domain of information technology,” Information Systems Research, vol. 9, no. 2, pp. 204–215, 1998.
[85]  J. Webster and H. Ho, “Audience engagement in multimedia presentations,” Data Base for Advances in Information Systems, vol. 28, no. 2, pp. 63–76, 1997.
[86]  J. Webster and J. J. Martocchio, “Microcomputer playfulness: development of a measure with workplace implications,” MIS Quarterly: Management Information Systems, vol. 16, no. 2, pp. 201–224, 1992.
[87]  F. D. Davis, R. P. Bagozzi, and P. R. Warshaw, “Extrinsic and intrinsic motivation to use computers in the workplace,” Journal of Applied Social Psychology, vol. 22, no. 14, pp. 1111–1132, 1992.
[88]  J. F. Hair, W. C. Black, B. J. Babin, and R. E. Anderson, Multivariate Data Analysis, Prentice Hall, Upper Saddle River, NJ, USA, 7th edition, 2010.
[89]  W. W. Chin, “The partial least squares approach for structural equation modeling,” in Modern Methods for Business Research, G. A. Marcoulides, Ed., pp. 295–336, Lawrence Erlbaum Associates, Hillsdale, NJ, USA, 1998.
[90]  E. Barnard and L. Wessels, “Extrapolation and interpolation in neural network classifiers,” IEEE Control Systems, vol. 12, no. 5, pp. 50–53, 1992.
[91]  B. Efron, The Jackknife, the Bootstrap, and Other Resampling Plans, SIAM, Philadelphia, Pa, USA, 1982.
[92]  F. Höppner, F. Klawonn, R. Kruse, and T. Runkler, Fuzzy Cluster Analysis, Wiley, New York, NY, USA, 1999.
[93]  L. A. Zadeh, “Fuzzy sets,” Information and Control, vol. 8, no. 3, pp. 338–353, 1965.
[94]  M. Gist, “Self-efficacy: implications for organizational behavior and human resource management,” Academy of Management Review, vol. 12, no. 3, pp. 472–485, 1987.
[95]  T. H. Lee, H. White, and C. W. J. Granger, “Testing for neglected nonlinearity in time series models. A comparison of neural network methods and alternative tests,” Journal of Econometrics, vol. 56, no. 3, pp. 269–290, 1993.
[96]  D. Scarborough and M. J. Somers, Neural Networks in Organizational Research, American Psychological Association, Washington, DC, USA, 2006.
[97]  M. J. Somers, “Thinking differently: assessing nonlinearities in the relationship between work attitudes and job performance using a Bayesian neural network,” Journal of Occupational and Organizational Psychology, vol. 74, no. 1, pp. 47–61, 2001.
[98]  M. S. Hung, M. Y. Hu, M. S. Shanker, and B. E. Patuwo, “Estimating posterior probabilities in classification problems with neural networks,” International Journal of Computational Intelligence and Organization, vol. 1, no. 1, pp. 49–60, 1996.
[99]  R. Rustum and A. J. Adeloye, “Replacing outliers and missing values from activated sludge data using kohonen self-organizing map,” Journal of Environmental Engineering, vol. 133, no. 9, pp. 909–916, 2007.
