%0 Journal Article
%T Parallel Inference for Real-Time Machine Learning Applications
%A Sultan Al Bayyat
%A Ammar Alomran
%A Mohsen Alshatti
%A Ahmed Almousa
%A Rayyan Almousa
%A Yasir Alguwaifli
%J Journal of Computer and Communications
%P 139-146
%@ 2327-5227
%D 2024
%I Scientific Research Publishing
%R 10.4236/jcc.2024.121010
%X Hyperparameter tuning is a key step in developing high-performing machine learning models, but searching large hyperparameter spaces requires extensive computation using standard sequential methods. This work analyzes the performance gains from parallel versus sequential hyperparameter optimization. Using scikit-learn's RandomizedSearchCV, this project tuned a Random Forest classifier for fake news detection via randomized search over a hyperparameter grid. Setting n_jobs to -1 enabled full parallelization across CPU cores. Results show the parallel implementation achieved over 5× faster CPU times and 3× faster total run times compared to sequential tuning. However, test accuracy dropped slightly from 99.26% sequentially to 99.15% with parallelism, indicating a trade-off between evaluation efficiency and model performance. Still, the significant computational gains allow more extensive hyperparameter exploration within reasonable timeframes, outweighing the small accuracy decrease. Further analysis could better quantify this trade-off across different models, tuning techniques, tasks, and hardware.
%K Machine Learning Models
%K Computational Efficiency
%K Parallel Computing Systems
%K Random Forest Inference
%K Hyperparameter Tuning
%K Python Frameworks (TensorFlow, PyTorch, Scikit-Learn)
%K High-Performance Computing
%U http://www.scirp.org/journal/PaperInformation.aspx?PaperID=130822