-
Research/Technical Note
Tuning the Training of Neural Networks by Using the Perturbation Technique
Huseyin Murat Cekirge*
Issue:
Volume 9, Issue 2, December 2025
Pages:
107-109
Received:
5 June 2025
Accepted:
21 June 2025
Published:
6 July 2025
Abstract: The biases and weights at the layers of a neural network are conventionally trained using the stochastic gradient descent method. To increase efficiency and performance, a perturbation scheme is introduced for fine-tuning these calculations; the aim is to bring perturbation techniques into the training of artificial neural networks. Perturbation methods obtain approximate solutions in terms of a small parameter ε. The perturbation technique can be combined with other training methods to minimize the data used, the training time, and the energy consumed. The perturbation parameter ε is selected according to the nature of the training data and can be determined through several trials. Stochastic gradient descent on its own increases training time and energy; its proper combination with the perturbation shortens training time. Both methods are widely used individually, but their combined use leads to optimal solutions. A suitable cost function can be employed for the optimum choice of ε. Shortening the training time also aids in determining the dominant inputs of the output values. Energy consumption, one of the essential problems of training, is thereby decreased by using hybrid training methods.
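The hybrid scheme described in the abstract can be illustrated with a minimal sketch: a coarse descent stage followed by an ε-perturbation stage that accepts a perturbed parameter vector only when the cost decreases. All data, the model, and the value of ε below are hypothetical choices for illustration, not the author's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (synthetic, for illustration only)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)

def loss(w):
    # Mean squared error serves as the cost function
    return float(np.mean((X @ w - y) ** 2))

# Stage 1: coarse training (plain gradient descent stands in for SGD here)
w = np.zeros(3)
for _ in range(50):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= 0.05 * grad

# Stage 2: perturbation fine-tuning with small parameter eps;
# a candidate w + eps*delta is kept only when the cost decreases
eps = 0.01  # chosen by trial, as the abstract suggests
for _ in range(200):
    delta = rng.normal(size=w.shape)
    candidate = w + eps * delta
    if loss(candidate) < loss(w):
        w = candidate
```

Because the perturbation stage only ever accepts improvements, the final cost is no worse than what the descent stage achieved; the size of ε controls how finely the neighborhood of the descent solution is explored.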
-
Research Article
Towards a Set of Morphosyntactic Labels for the Fulani Language: An Approach Inspired by the EAGLES Recommendations and Fulani Grammar
Zouleiha Alhadji Ibrahima,
Charles Moudina Varmantchaonala,
Dayang Paul*
,
Kolyang
Issue:
Volume 9, Issue 2, December 2025
Pages:
110-121
Received:
4 June 2025
Accepted:
27 June 2025
Published:
21 July 2025
Abstract: This paper details the development of a morphosyntactic label set for the Adamawa dialect of the Fulani language (Fulfulde), addressing the critical lack of digital resources and automatic processing tools for this significant African language. The primary objective is to facilitate the creation of a training corpus for morphosyntactic tagging, thereby aiding linguists and advancing Natural Language Processing (NLP) applications for Fulani. The proposed label set is meticulously constructed based on a dual methodological approach: it draws heavily from the well-established EAGLES (Expert Advisory Group on Language Engineering Standards) recommendations to ensure corpus reuse and cross-linguistic comparability, while simultaneously incorporating an in-depth analysis of Fulani grammatical specificities. This adaptation is crucial given the morphological richness and complex grammatical structure of Fulani, including its elaborate system of approximately 25 noun classes, unique adjective derivations, and intricate verbal conjugations. The resulting tagset comprises 15 mandatory labels and 54 recommended labels. While some EAGLES categories like "article" and "residual" are not supported, new categories such as "participle," "ideophone," "determiner," and "particle" are introduced to capture the nuances of Fulani grammar. The recommended tags further detail the mandatory categories, subdividing nouns into proper, common singular, and common plural; verbs based on voice and conjugation (infinitive active, middle, passive; conjugated active affirmative/negative, middle affirmative/negative, passive affirmative/negative); and adjectives and pronouns into more specific types based on demonstrative, possessive, subject, object, relative, emphatic, interrogative, and indefinite functions. Participles are divided into singular and plural, adverbs into time, place, manner, and negation, numbers into singular and plural, and determiners into singular and plural.
Particles are further broken down into dicto-modal, abdominal, interrogative, emphatic, postposed, and postposed negative. The categories of preposition, conjunction, interjection, unique, punctuation, and ideophone remain indivisible. This meticulously defined tag set was utilized to manually annotate 5,186 words from Dominique Noye’s Fulfulde-French dictionary, creating a valuable, publicly accessible resource for linguistic research and NLP development. Furthermore, the paper outlines a robust workflow for automatic morphosyntactic tagging of Fulfulde sentences, leveraging a Hidden Markov Model (HMM) in conjunction with the Viterbi algorithm. This approach, which extracts transition and emission probabilities from the annotated corpus, enables the disambiguation of morphosyntactic categories within context, considering the specific syntactic and lexical patterns of the Adamawa dialect. Ultimately, this work significantly contributes to the digitization and standardization of the Fulani language, enhancing the performance of linguistic tools and fostering its integration into digital technologies and multilingual systems.
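The HMM-plus-Viterbi workflow outlined in the abstract can be sketched in a few lines: transition and emission probabilities (here invented for a three-tag, three-word toy vocabulary; the sample words and all probabilities are illustrative placeholders, not values from the annotated corpus) are combined in log space to recover the most probable tag sequence.

```python
import numpy as np

# Hypothetical toy model: tags, words, and probabilities are invented
# for illustration; real values would be estimated from the corpus.
tags = ["NOUN", "VERB", "PART"]
start = np.array([0.6, 0.3, 0.1])                 # P(tag at position 0)
trans = np.array([[0.3, 0.5, 0.2],                # P(next tag | current tag)
                  [0.6, 0.2, 0.2],
                  [0.5, 0.4, 0.1]])
vocab = {"debbo": 0, "wari": 1, "na": 2}          # toy 3-word vocabulary
emit = np.array([[0.7, 0.1, 0.2],                 # P(word | tag)
                 [0.1, 0.8, 0.1],
                 [0.15, 0.15, 0.7]])

def viterbi(words):
    obs = [vocab[w] for w in words]
    n, k = len(obs), len(tags)
    score = np.zeros((n, k))      # best log-prob of any path ending in tag j
    back = np.zeros((n, k), int)  # backpointers for path recovery
    score[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, n):
        for j in range(k):
            cand = score[t - 1] + np.log(trans[:, j])
            back[t, j] = int(np.argmax(cand))
            score[t, j] = cand[back[t, j]] + np.log(emit[j, obs[t]])
    # Follow backpointers from the best final tag
    path = [int(np.argmax(score[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [tags[i] for i in reversed(path)]

print(viterbi(["debbo", "wari", "na"]))  # → ['NOUN', 'VERB', 'PART']
```

The log-space formulation avoids numerical underflow on longer sentences, which matters once the tagset grows to the paper's 15 mandatory and 54 recommended labels.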
-
Research Article
Research on Application Pathways and Development Trends of Generative Artificial Intelligence in 3D Modeling
Issue:
Volume 9, Issue 2, December 2025
Pages:
122-128
Received:
22 July 2025
Accepted:
31 July 2025
Published:
13 August 2025
Abstract: With the rapid advancement of Generative Artificial Intelligence (AIGC, Artificial Intelligence Generated Content) technology, 3D modeling, as a core component of digital content creation, is undergoing a profound transformation from "human-driven" to "intelligent generation." Traditional 3D modeling relies on specialized software and manual operations by modelers, characterized by complex workflows, inefficiency, and high skill barriers. In contrast, AIGC enables the automatic generation of 3D geometry, topological relationships, and texture mapping information through natural language prompts (Prompt), image inputs, or sketch instructions, significantly enhancing modeling efficiency and creative freedom. This paper systematically reviews the current primary pathways—Text-to-3D, Image-to-3D, and Sketch-to-3D—based on the technical principles of generative models. It conducts an in-depth analysis of the application characteristics of representative platforms such as MeshyAI, Kaedim, Tripo, and Hunyuan 3D. Through case studies, the feasibility and operational workflows of AIGC modeling in character asset generation, scene construction, and teaching practices are examined. Furthermore, the study comparatively analyzes the differences between AIGC and traditional modeling approaches in terms of efficiency, quality, and scalability, highlighting current challenges faced by AIGC, including precision control, limited editability, and copyright compliance. The research posits that AIGC is reconstructing the paradigm of 3D modeling, propelling 3D content production towards a new era of "intelligent collaboration" and "low-barrier generation." Future advancements are expected to be driven by the deep integration of AIGC with Digital Content Creation (DCC) toolchains, the evolution of multimodal large models, and enhanced semantic control capabilities of Prompts. 
This study aims to provide a systematic reference and trend analysis for the integration of AIGC modeling technology within higher education, industry practices, and AI development.
-
Research Article
An Alternative Way of Determining Biases and Weights for the Training of Neural Networks
Huseyin Murat Cekirge*
Issue:
Volume 9, Issue 2, December 2025
Pages:
129-132
Received:
22 July 2025
Accepted:
4 August 2025
Published:
18 August 2025
Abstract: The determination of biases and weights in neural networks is a fundamental aspect of their performance, traditionally carried out with methods such as steepest descent and stochastic gradient descent. While these supervised training approaches have proven effective, this technical note presents an alternative that eliminates randomness from the calculations altogether: biases and weights are obtained as exact solutions of a system of equations. This approach reduces computational demands and improves energy efficiency. By incorporating target values during training, the number of target values can be expanded within acceptable variance limits, enabling the formation of square matrices. Input and output nodes are kept balanced through the generation of fictive data, particularly for the output nodes. These fuzzy sets keep the variance of neuron target values within permissible limits, effectively minimizing error. The generated data is intentionally minimal and can also be produced by random processes, facilitating effective learning for the neural networks. Unlike conventional techniques, the values of biases and weights are determined directly, making the process both faster and less energy-intensive. The primary objective is to establish an efficient foundation for the training data of the neural network. Moreover, the calculated values serve as robust initial parameters for other determination methods, including stochastic gradient descent and steepest descent. The resulting algorithm is intended to improve the efficiency and effectiveness of neural network training.
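The core idea of matching the number of target equations to the number of unknowns so that a square system can be solved directly can be sketched as follows. This is a loose single-layer approximation under assumed synthetic data; the abstract's fictive-data and fuzzy-set constructions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-layer linear model y = X @ w + b, written with an
# augmented column of ones so the bias is solved jointly with
# the weights. All data is synthetic, for illustration only.
n = 4  # four target equations matched to four unknowns -> square system
X = rng.normal(size=(n, 3))
A = np.hstack([X, np.ones((n, 1))])   # augmented matrix [X | 1], shape (4, 4)
targets = rng.normal(size=n)

# Direct determination: one linear solve replaces iterative descent
params = np.linalg.solve(A, targets)
w, b = params[:3], params[3]

# The solved parameters reproduce the targets exactly (up to float error)
assert np.allclose(X @ w + b, targets)
```

Such directly solved parameters could also serve, as the abstract suggests, as initial values for a subsequent gradient-based refinement stage.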