%0 Journal Article
%T Smaller & Smarter: Score-Driven Network Chaining of Smaller Language Models
%A Gunika Dhingra
%A Siddansh Chawla
%A Vijay K. Madisetti
%A Arshdeep Bahga
%J Journal of Software Engineering and Applications
%P 23-42
%@ 1945-3124
%D 2024
%I Scientific Research Publishing
%R 10.4236/jsea.2024.171002
%X With the continuous evolution and expanding applications of Large Language Models (LLMs), there has been a noticeable surge in the size of emerging models. The growth is not solely in model size, primarily measured by the number of parameters, but also in the accompanying escalation of computational demands and of hardware and software prerequisites for training, all culminating in a substantial financial investment. In this paper, we present techniques such as supervision, parallelization, and scoring functions to obtain better results from chains of smaller language models, rather than relying solely on scaling up model size. First, we propose an approach to quantify the performance of a Smaller Language Model (SLM) by introducing a corresponding supervisor model that incrementally corrects the errors it encounters. Second, we propose an approach that uses two smaller language models (in a network) performing the same task and retrieves the more relevant output of the two, ensuring peak performance for a specific task. Experimental evaluations establish quantitative accuracy improvements on financial reasoning and arithmetic calculation tasks from techniques such as supervisor models (in a network-of-models scenario), threshold scoring, and parallel processing, relative to a baseline study.
%K Large Language Models (LLMs)
%K Smaller Language Models (SLMs)
%K Finance
%K Networking
%K Supervisor Model
%K Scoring Function
%U http://www.scirp.org/journal/PaperInformation.aspx?PaperID=130661