Research Article | Peer-Reviewed

An Alternative Way of Determining Biases and Weights for the Training of Neural Networks

Received: 22 July 2025     Accepted: 4 August 2025     Published: 18 August 2025
Abstract

The determination of biases and weights in neural networks is a fundamental aspect of their performance, and it is traditionally carried out with iterative methods such as steepest descent and stochastic gradient descent. While these supervised training approaches have proven effective, this technical note presents an alternative that removes randomness from the calculations altogether: biases and weights are obtained directly as the solution of a system of equations. This reduces computational demands and improves energy efficiency. By incorporating target values during training, the number of target values can be expanded within acceptable variance limits, so that square coefficient matrices can be formed. Input and output nodes are kept balanced through the generation of fictive data, particularly for the output nodes; these fictive values are treated as fuzzy sets whose variance around the neuron target values stays within permissible limits, which keeps the resulting error small. The generated data are intentionally minimal and can also be produced by random processes, so the network can still learn effectively. Unlike conventional techniques, the values of biases and weights are determined directly, making the process faster and less energy-intensive. The primary objective is to establish an efficient foundation for the training data of the neural network. The calculated values can also serve as initial parameters for other determination methods, including stochastic gradient descent and steepest descent. The note concludes by presenting the resulting algorithm, which is intended to improve the efficiency and effectiveness of neural network training.
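
(The full derivation is given in the article itself; the following is a minimal Python sketch of the idea summarized above: for a single linear layer, the training set is padded with a few fictive samples until the coefficient matrix is square, after which the biases and weights follow from one direct solve. The network size, the fictive-data scheme, and all variable names are assumptions made for illustration, not the paper's own formulation.)

import numpy as np

# Illustrative sketch only: it assumes a single layer with a linear (identity)
# activation, so that each training sample, with a constant 1 appended for the
# bias, yields one linear equation per output node:
#     [1, x] @ [b; W] = t
# When the number of samples equals the number of unknowns per output node,
# the coefficient matrix is square and the biases and weights come from a
# direct solve rather than from gradient descent.

rng = np.random.default_rng(0)

n_inputs, n_outputs = 3, 2
n_real = 2                        # available real training samples
n_unknowns = n_inputs + 1         # weights per output node plus one bias

X_real = rng.normal(size=(n_real, n_inputs))
T_real = rng.normal(size=(n_real, n_outputs))

# Fictive samples (an assumed scheme): random inputs whose targets are kept
# within a narrow variance band around the mean of the real targets.
n_fictive = n_unknowns - n_real
X_fict = rng.normal(size=(n_fictive, n_inputs))
T_fict = T_real.mean(axis=0) + 0.01 * rng.normal(size=(n_fictive, n_outputs))

X = np.vstack([X_real, X_fict])
T = np.vstack([T_real, T_fict])

# Augment the inputs with a column of ones so the bias is solved together
# with the weights, then solve the square system A @ [b; W] = T directly.
A = np.hstack([np.ones((n_unknowns, 1)), X])
params = np.linalg.solve(A, T)
b, W = params[0], params[1:]

print("bias:", b)                 # shape (n_outputs,)
print("weights:\n", W)            # shape (n_inputs, n_outputs)
print("max residual:", np.abs(A @ params - T).max())   # ~0 for the exact solve

# The resulting b and W can be used as-is, or as initial values for
# stochastic gradient descent or steepest descent.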

Published in American Journal of Artificial Intelligence (Volume 9, Issue 2)
DOI 10.11648/j.ajai.20250902.14
Page(s) 129-132
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

Neural Networks, Stochastic Gradient Descent, Steepest Descent, Training Data in Neural Networks, Initialization of Training Data

Cite This Article
  • APA Style

    Cekirge, H. M. (2025). An Alternative Way of Determining Biases and Weights for the Training of Neural Networks. American Journal of Artificial Intelligence, 9(2), 129-132. https://doi.org/10.11648/j.ajai.20250902.14


  • ACS Style

    Cekirge, H. M. An Alternative Way of Determining Biases and Weights for the Training of Neural Networks. Am. J. Artif. Intell. 2025, 9(2), 129-132. doi: 10.11648/j.ajai.20250902.14


  • AMA Style

    Cekirge HM. An Alternative Way of Determining Biases and Weights for the Training of Neural Networks. Am J Artif Intell. 2025;9(2):129-132. doi: 10.11648/j.ajai.20250902.14


  • @article{10.11648/j.ajai.20250902.14,
      author = {Huseyin Murat Cekirge},
  title = {An Alternative Way of Determining Biases and Weights for the Training of Neural Networks},
      journal = {American Journal of Artificial Intelligence},
      volume = {9},
      number = {2},
      pages = {129-132},
      doi = {10.11648/j.ajai.20250902.14},
      url = {https://doi.org/10.11648/j.ajai.20250902.14},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajai.20250902.14},
     year = {2025}
    }
    


  • TY  - JOUR
    T1  - An Alternative Way of Determining Biases and Weights for the Training of Neural Networks
    AU  - Huseyin Murat Cekirge
    Y1  - 2025/08/18
    PY  - 2025
    N1  - https://doi.org/10.11648/j.ajai.20250902.14
    DO  - 10.11648/j.ajai.20250902.14
    T2  - American Journal of Artificial Intelligence
    JF  - American Journal of Artificial Intelligence
    JO  - American Journal of Artificial Intelligence
    SP  - 129
    EP  - 132
    PB  - Science Publishing Group
    SN  - 2639-9733
    UR  - https://doi.org/10.11648/j.ajai.20250902.14
    VL  - 9
    IS  - 2
    ER  - 


Author Information