OPTIMIZING PRODUCTION SCHEDULING WITH THE RAT SWARM SEARCH ALGORITHM: A NOVEL APPROACH TO THE FLOW SHOP PROBLEM FOR ENHANCED DECISION MAKING

Abstract: This paper examines the Rat Swarm Optimizer (RSO) algorithm as a potential remedy for the flow shop problem in manufacturing systems. The flow shop problem involves allocating jobs to different machines or workstations in a certain order to reduce execution time or resource use. After mapping the rat positions to job-processing sequences, the RSO method uses the objective function to optimize the results. Compared with other metaheuristic algorithms on diverse test instances, the RSO method successfully locates high-quality solutions to the flow shop problem. This research helps to improve the flexibility, lead times, quality, and efficiency of the production system. The paper introduces the RSO algorithm, creates a mapping strategy, redefines the mathematical operators, suggests a method to enhance solution quality, demonstrates the algorithm's effectiveness through simulations and comparisons, and uses statistical analysis to confirm its performance.


Introduction
Manufacturing systems (Zheng et al., 2022) are complex systems (Wang & Magron, 2022) that involve the production of materials with machines, tools, and labor. Ensuring the efficient operation of these systems is crucial to the success and profitability of a business. One of the main challenges facing manufacturing systems is the scheduling of tasks on machines, also known as the flow shop problem (Reza & Saghafian, 2005).
This problem involves finding the optimal sequence of operations to process a set of tasks on a set of machines. It arises in manufacturing systems where multiple machines or workstations are used to process a set of tasks, and the tasks must be processed in a specific order and cannot be processed simultaneously on different machines. The objective of the flow shop problem is to find the optimal sequence of operations to process the tasks to minimize the total execution time or to use resources efficiently.
Solving the flow shop problem is a complex optimization (Wang & Magron, 2022) task that requires consideration of multiple variables and constraints. Traditional optimization algorithms may not be sufficient to solve this problem, especially when dealing with large-scale, real-time systems. To address this challenge, researchers have turned to swarm intelligence optimization algorithms.
Swarm intelligence optimization algorithms (Ab Wahab et al., 2015) are a class of optimization algorithms inspired by the self-organizing and decentralized behavior of natural systems, such as flocks of birds (Alaliyat et al., 2014), ant colonies (Blum, 2005), and schools of fish. These algorithms have been widely studied and applied in various fields, including operations research, computer science, and engineering, due to their ability to find good solutions to complex optimization problems in a robust and efficient manner.
In recent years, swarm intelligence optimization algorithms have received increasing attention as a means of solving the flow shop-scheduling problem, which is a well-known problem in manufacturing systems. The flow shop problem involves scheduling a set of tasks on a set of machines to minimize the completion time of all tasks. Optimizing task scheduling can improve the effectiveness and efficiency of the manufacturing process, thereby reducing costs and increasing competitiveness.
In the continuous flow-scheduling problem, a set of tasks must be processed on a set of machines in a specific order. Each task consists of a sequence of operations, and each operation must be performed on a specific machine. The objective of the scheduling problem is to find a schedule that minimizes the execution time of all tasks. By finding the optimal schedule, manufacturing systems can improve their effectiveness, efficiency, and competitiveness.
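For a permutation flow shop, a schedule can be evaluated with the standard completion-time recurrence C(i, k) = max(C(i, k−1), C(i−1, k)) + p(i, k): a job starts on machine i only when the machine is free and the job has left machine i−1. The sketch below is a minimal illustration with a made-up 2-machine, 3-job instance (the function name and data are illustrative, not from the paper):

```python
def makespan(perm, p):
    """Completion time of the last job on the last machine.

    perm : processing order of the jobs
    p[i][j] : processing time of job j on machine i
    """
    prev = [0] * (len(perm) + 1)       # completion times on the previous machine
    for times in p:                    # one row of processing times per machine
        cur = [0]
        for k, j in enumerate(perm):
            # wait for the previous job on this machine and for this job upstream
            cur.append(max(cur[-1], prev[k + 1]) + times[j])
        prev = cur
    return prev[-1]

# illustrative instance: 2 machines, 3 jobs
p = [[3, 2, 4],
     [2, 3, 1]]
cmax = makespan([0, 1, 2], p)
```

Minimizing this value over all permutations is exactly the makespan variant of the flow shop problem discussed below.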
There are several variations of the flow shop problem, depending on the specific constraints and objective function. Some common variations include:
− Flow shop with no wait: the machines are assumed to be available for processing at all times, and there is no waiting time between the processing of different jobs (Smutnicki et al., 2022).
− Flow shop with total flow time minimization: the objective is to minimize the total processing time of all the jobs (Marichelvam et al., 2017).
− Flow shop with makespan minimization: the objective is to minimize the time it takes to complete all the jobs, also known as the makespan.
− Flow shop with machine availability constraints: the availability of the machines is taken into account, and the schedule must respect any constraints on the use of the machines (Smutnicki et al., 2022).
− Flow shop with job release times: the jobs are released at different times, and the schedule must take the release times of the jobs into account (Wu et al., 2022).
Solving the flow shop problem requires finding a schedule for the jobs that satisfies the specific constraints and objective function of the problem, which can be computationally demanding.

Related works
The flow shop problem is a scheduling problem that involves finding the optimal order of processing a set of tasks on a set of machines to minimize the total processing time. This problem is NP-hard (Tanaev et al., 1994), which means that it is difficult to solve using traditional optimization methods. However, metaheuristics and swarm intelligence algorithms can be used to develop more efficient solutions to the flow shop problem.
Swarm intelligence algorithms are a type of optimization algorithm that is inspired by the self-organizing and decentralized behavior of natural systems, such as flocks of birds, colonies of ants, and schools of fish. These algorithms have been widely studied and applied in various fields, including operations research, computer science, and engineering, due to their ability to find good solutions to complex optimization problems robustly and efficiently.
Popular swarm intelligence metaheuristics that have been used to solve the flow shop problem include ant colony optimization (ACO) (Blum, 2005), particle swarm optimization (PSO) (Zhang et al., 2010), bee colony optimization (BCO) (Huang & Lin, 2011), and the artificial fish swarm algorithm (AFSA) (Babaee et al., 2020). In addition to swarm intelligence algorithms, other types of metaheuristics have also been proposed and applied to solve the flow shop problem.
Iterative improvement-based metaheuristics generate solutions through iterative improvements. The IIGA algorithm (Pan et al., 2008) uses a constructive heuristic and an acceptance criterion to generate and select the best solution for the next iteration. The DPSOVND algorithm (Pan et al., 2008) is designed to minimize both the makespan and the total flow time for a shop floor scheduling problem. The TMIIG algorithm (Ding et al., 2015) is a modified version of the iterated greedy algorithm that incorporates a Tabu-based reconstruction strategy and a neighborhood search method involving insertion, permutation, and double-insertion moves to solve the no-wait job shop scheduling problem with a makespan criterion. The NEH (Nawaz, Enscore, and Ham) algorithm (Liang et al., 2022) is a heuristic method for minimizing the execution time in a continuous flow shop with infinite storage at each stage.
Hybrid metaheuristics combine several approaches to leverage individual strengths and overcome their weaknesses. The NEH-NGA algorithm (Liang et al., 2022) combines the NEH heuristic and the niche genetic algorithm into a hybrid optimization method for scheduling problems. The SSO algorithm (Kurdi, 2021) is based on the collaborative behavior of social spider colonies, which involves interactions between males and females performing various tasks. The SCE-OBL algorithm (Kurdi, 2021) combines the SCE algorithm with opposition-based learning. The CLS-BFO algorithm (Kurdi, 2021) combines chaotic local search with bacterial foraging principles to search for optimal solutions. The CSO algorithm (Li & Yin, 2013) combines cuckoo search with Lévy flights, a random search technique based on the probability distribution of Lévy flights observed in nature.
Nature has inspired many metaheuristic algorithms, such as the BAT algorithm (Bellabai et al., 2022), which mimics the echolocation system of bats. The HMSA algorithm (Marichelvam et al., 2017) combines elements of the Monkey Search algorithm with other techniques to solve the flow shop problem. The DWWO algorithm (Ding et al., 2015) is designed to solve the no-wait flow shop scheduling problem (NWFSP) with a focus on minimizing the makespan; it proceeds in five phases, with propagation and breaking operations based on insertion moves.
Evolution-inspired metaheuristics use the principles of natural selection and genetics to simulate the evolutionary process. The SGA algorithm (Liang et al., 2022) uses the principles of natural evolution, such as reproduction, mutation, and selection, to search for the optimal solution to a given problem. The GA algorithm (Arik, 2021) is another type of optimization algorithm that draws on the principles of natural evolution and genetics. These algorithms are often used to solve optimization problems, including the flow shop problem.

Flow shop problem
Flow shop scheduling is a well-known problem in the field of operations research and manufacturing systems. It can be formalized as an optimization problem whose objective is to minimize the total processing time of a set of tasks on a set of machines. The problem can be formulated as follows:

\min \; Z = \sum_{i=1}^{m} \sum_{j=1}^{n} p_{ij}\, x_{ij}

Subject to:

\sum_{i=1}^{m} x_{ij} = 1 \quad (j = 1, \ldots, n), \qquad \sum_{j=1}^{n} x_{ij} \le 1 \quad (i = 1, \ldots, m), \qquad x_{ij} \in \{0, 1\}

Where: n is the number of jobs, m is the number of machines, p_{ij} is the processing time for job j on machine i, and x_{ij} is a binary decision variable that is 1 if job j is processed on machine i and 0 otherwise. The first constraint ensures that each job is assigned to exactly one machine, and the second constraint ensures that each machine can only process one job at a time. The third constraint indicates that the decision variables are binary.
The objective of the optimization problem is to find the values of the decision variables (x_{ij}) that minimize the total processing time, subject to the constraints. This can be achieved using optimization algorithms, such as linear programming, mixed integer programming, or metaheuristics such as swarm intelligence algorithms.

Importance of solving the flow shop problem in manufacturing systems
The flow shop problem is a major challenge in manufacturing systems. It involves planning a sequence of operations for a set of tasks in a specific order through a series of machines. This problem requires effective planning and optimization to minimize production time, reduce costs, and improve productivity.
Solving the FSSP can have significant benefits for manufacturing systems, including:
1) Improved efficiency: by optimizing the production schedule, manufacturing systems can operate more efficiently, reducing production time and increasing output.
2) Reduced costs: an optimized production schedule can reduce the need for overtime, excess inventory, and other expenses, resulting in significant savings.
3) Increased competitiveness: manufacturing systems that can produce goods more efficiently and cost-effectively are more competitive in the marketplace.
4) Improved customer satisfaction: a well-optimized production schedule can help meet customer demand and ensure on-time delivery, which improves customer satisfaction.
Therefore, solving the flow shop problem is of great importance in manufacturing systems and can have significant benefits for companies. Figure 1 shows the Gantt chart for 5 tasks and 4 machines.

Proposed Rat swarm algorithm
Rat Swarm Optimization (RSO) (Mzili et al., 2022) is a metaheuristic algorithm inspired by the behavior of rat swarms and their ability to find food sources efficiently. In particular, the RSO algorithm is inspired by how rat swarms can adapt to changing environments and use their collective intelligence to locate and capture prey.
In the RSO algorithm, a population of "rats" is used to represent potential solutions to the optimization problem. Each rat is associated with a set of decision variables that represent a potential solution to the problem. The rats move through the search space, exploring different solutions and updating their position according to the quality of the solutions found.

Mathematical modeling of the RSO algorithm
The rat swarm optimization (RSO) algorithm consists of two main phases: exploration and exploitation.
To model the behavior of rats when they search for and capture prey, specific equations are used in the algorithm. These equations allow the rats to locate and capture prey effectively and efficiently while optimizing the position or solution of the prey in the search space.

• Pursuit of prey (Exploration phase)
During the pursuit phase, the rats update their positions according to the best position found so far by the best searcher in the group. Parameters A and C provide a balance between exploration and exploitation, allowing the rats to search for and capture prey efficiently. This behavior is described by the following equation:

P(t) = A · P(t−1) + C · (P_best(t) − P(t−1))

where P(t) represents the position of the rat at time t, P(t−1) represents its position at the previous time step, and P_best(t) represents the best position found at time t. The parameter A decreases linearly over the iterations,

A = R − t · (R / T_max),

where T_max is the maximum number of iterations, R is generated randomly between 1 and 5, and C is generated randomly between 0 and 2. Parameters A and C are therefore responsible for balancing exploration and exploitation during the iteration process, and the search is sensitive to a good balance between the two; this helps the rats effectively search for and capture their prey while optimizing the solution or position of the prey.

• Fighting prey (exploitation phase)
The rats attack the target prey detected in the previous phase. However, the prey often tries to escape from dangerous situations or to defend itself against this attack. In this case, a deadly battle ensues between the rats and the prey and, in some cases, ends with the death of some rats.
The fight between the rats and their prey is mathematically described by the formula below:

P(t+1) = |P_best(t) − P(t)|

This equation represents the exploitation phase of the rats, where they accept the position and evaluation of the prey that they have found and fought with. P(t+1) represents the updated position of the rat at the next time step, and P_best(t) represents the best position or solution found by the rats so far. The absolute value function ensures that the updated position of the rat is always a positive value, regardless of whether P_best(t) is greater or less than P(t).
The value of the prey is given by its evaluation,

F(t) = f(P(t)),

where f is the fitness (objective) function and P(t) is the position being evaluated. This value can be determined using a suitable evaluation function, such as the fitness function in an optimization problem, and can be used to update the personal and global best positions of the rats in the swarm.
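A minimal continuous sketch of these two phases, assuming the standard RSO parameter schedule (R drawn in [1, 5], C redrawn in [0, 2], and A decaying linearly over the iterations); the function and parameter names here are illustrative, not the paper's implementation:

```python
import random

def rso(f, dim, n_rats=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Minimal continuous Rat Swarm Optimizer sketch (minimization)."""
    rng = random.Random(seed)
    rats = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_rats)]
    best = min(rats, key=f)[:]
    R = rng.uniform(1, 5)               # exploration strength, drawn once
    for t in range(iters):
        A = R - t * (R / iters)         # decays linearly to balance the phases
        for rat in rats:
            C = rng.uniform(0, 2)
            for d in range(dim):
                # pursuit (exploration): move relative to the best rat
                P = A * rat[d] + C * (best[d] - rat[d])
                # fight (exploitation): settle around the prey
                rat[d] = abs(best[d] - P)
        cand = min(rats, key=f)
        if f(cand) < f(best):           # keep the best position ever seen
            best = cand[:]
    return best

sphere = lambda x: sum(v * v for v in x)
sol = rso(sphere, dim=3)
```

Since the best position is only replaced when a strictly better one is found, the returned fitness never exceeds that of the initial population's best rat.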

Using the RSO algorithm to solve the flow shop problem
Solving the flow shop-scheduling problem using the RSO algorithm requires the definition of a set of discrete operators that the rats can use to move through the search space. These operators can consist of swapping the position of two tasks in the calendar, inserting a new task into the calendar, or deleting a task from the calendar. The rats then use these operators to explore different scheduling configurations and update their positions based on the quality of the solutions found.
In RSO, a set of "virtual rats" search for an optimal solution by moving through the problem space and adjusting their movement according to the positions of other rats. The rats are guided by a "rat king", who is a virtual leader who guides the movement of the rats toward the optimal solution.
To use RSO to solve the flow shop problem, the following steps can be taken:
1) Define the problem: clearly define the problem to be solved, including the number of tasks, the number of machines, and any constraints or requirements that need to be addressed.
2) Initialize the population: create a population of rats that will represent potential solutions to the problem. Each rat is assigned a set of tasks to perform in a specific order.
3) Evaluate the fitness of each rat: calculate the fitness of each rat in the population by evaluating the effectiveness of its task order, based on measures such as total processing time, number of delays, and overall system efficiency.
4) Select the fittest rats: select the fittest rats from the population using the objective function. These rats are used to create the next generation.
5) Generate new rats: generate new rats from the fittest rats using equation (8), replacing the mathematical operators with discrete operators such as crossover and mutation. Since the original optimizer is designed to solve continuous optimization problems, it cannot be applied directly to discrete ones, and several modifications must be made.
6) Redefine the operators:
• Subtraction: the subtraction between two rat positions is redefined as the list of swaps to be performed on one job sequence to obtain the other.
• Multiplication: the multiplication of a real number in [0, 1] by a swap list is defined so as to truncate the list, reducing the number of swaps generated by the previous operation.
• Addition: the addition operation applies the final set of swaps to a job sequence. These changes are clarified in the example shown in Figure 2.
7) Apply the 2-opt local search algorithm to improve each solution: the 2-opt algorithm is primarily used to solve the traveling salesman problem (TSP); however, it can be adapted and extended to the flow shop problem (FSSP). The algorithm selects two non-adjacent edges in the schedule and reverses the order of the tasks between them. After the swap, the new makespan is calculated and, if it is smaller than that of the current solution, the updated solution is kept. This process is run iteratively several times to gradually refine the quality of the solution.
8) Evaluate the fitness of the new rats: calculate the fitness of the new rats and add them to the population.
9) Repeat the process: continue selecting the fittest rats, generating new rats, and evaluating their fitness until the optimal solution is found or a predetermined number of iterations has been reached.
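The discrete operators and the 2-opt refinement described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact implementation: the position-based swap construction, the truncation rule for multiplication, the first-improvement 2-opt strategy, and the toy instance are all assumptions.

```python
def subtract(target, source):
    """Position difference: the swap list that turns `source` into `target`."""
    s, swaps = list(source), []
    for i, job in enumerate(target):
        j = s.index(job)
        if i != j:
            s[i], s[j] = s[j], s[i]
            swaps.append((i, j))
    return swaps

def scale(r, swaps):
    """Multiply a real r in [0, 1] by a swap list: keep the first r-fraction."""
    return swaps[:int(r * len(swaps))]

def add(seq, swaps):
    """Apply a swap list to a job sequence."""
    s = list(seq)
    for i, j in swaps:
        s[i], s[j] = s[j], s[i]
    return s

def makespan(perm, p):
    """Flow shop makespan of job order `perm`, p[i][j] = time of job j on machine i."""
    prev = [0] * (len(perm) + 1)
    for times in p:                              # one row per machine
        cur = [0]
        for k, j in enumerate(perm):
            cur.append(max(cur[-1], prev[k + 1]) + times[j])
        prev = cur
    return prev[-1]

def two_opt(perm, p):
    """Step 7: reverse segments while the makespan improves (first improvement)."""
    best, best_cost = list(perm), makespan(perm, p)
    improved = True
    while improved:
        improved = False
        for a in range(len(best) - 1):
            for b in range(a + 2, len(best) + 1):
                cand = best[:a] + best[a:b][::-1] + best[b:]
                cost = makespan(cand, p)
                if cost < best_cost:             # keep only strict improvements
                    best, best_cost = cand, cost
                    improved = True
    return best, best_cost

# toy instance: 2 machines x 3 jobs
p = [[3, 2, 4],
     [2, 3, 1]]
moves = subtract([0, 1, 2], [2, 1, 0])           # "(0,1,2) - (2,1,0)" as swaps
order, cost = two_opt(add([2, 1, 0], scale(1.0, moves)), p)
```

The key identity the operators preserve is that adding the difference of two sequences to the second one reproduces the first, which is what lets equation (8) be reused in the discrete setting.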
The following is the description of the final algorithm.

Experimental results
The DRSO algorithm has been applied to more than 150 instances from the OR-Library, and the results are presented in Tables 2-7. These tables indicate the instance name ("Instance"), the number of tasks n and machines m for each instance ("n×m"), the best result reported by other algorithms ("BKS"), the best results obtained by the different methods ("Best"), and the average results ("Average"). The column "PDav(%)" indicates the percentage deviation of the average solution length from the best-known solution length, calculated using equation (9):

PDav(%) = 100 × (Average − BKS) / BKS    (9)

In the "PDav(%)" column, values of 0.00 are highlighted in bold when all solutions found in the 20 trials equal the length of the best-known solution. Values less than 0.00 are highlighted in bold and blue when the average of the solutions found in all trials is less than the length of the best-known solution. Table 1 shows the initial discrete RSO parameters.
To conduct a comprehensive evaluation of DRSO, it is necessary to compare it with other problem-solving algorithms or methods. A wide range of metaheuristics must be selected to ensure a thorough and detailed analysis of DRSO's strengths and weaknesses compared to other algorithms and to identify the situations in which DRSO performs best. The metaheuristics chosen for comparison include IIGA, DPSOVND, TMIIG, DWWO, BAT, TLBO, SGA, HMSA, NEH, NEH-NGA, SSO, SCE-OBL, CLS-BFO, ACGA, and CSO. This diverse set provides a comprehensive basis for comparison and helps to assess the effectiveness of DRSO relative to other optimization methods.
Figure 3 shows the convergence curves of the different algorithms on four instances (Ta001, Ta002, Ta021, and Ta031) of the production-scheduling problem. The curves represent the performance of each algorithm on the four instances.
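Equation (9) can be implemented directly; the example values below are illustrative, not taken from the tables:

```python
def pdav(average, bks):
    """Percentage deviation of the average solution from the best-known solution."""
    return (average - bks) / bks * 100.0

# hypothetical example: average makespan 1297.4 against a BKS of 1278
deviation = pdav(1297.4, 1278)
```

A value of 0.00 means every trial matched the best-known solution, and a negative value means the trials improved on it, which is the convention used in the tables.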
The horizontal axis in the Figure represents the number of iterations required to reach the optimal value of the objective function, while the vertical axis represents the value of the objective function.
Examination of the curves shows that the DRSO algorithm converges quickly compared to the other algorithms.

Comparison
In this section, we compare the DRSO algorithm with other metaheuristics based on the data provided by the authors of the compared methods. The objective is to evaluate and analyze the performance of these algorithms to determine the relative efficiency of DRSO.
The algorithms are evaluated according to three main criteria: the best solution found (Best), the average solution found (Average), and the percentage of deviation from the best-known solution (PDav).
The Holm-Šídák test is a multiple comparison method that controls the type I error rate when examining several hypotheses simultaneously.
The Wilcoxon test is a non-parametric test that compares the medians of two samples to determine if they are from the same population.
Each comparison is illustrated by a graph showing the PDav comparison curve or the best value obtained, to support the performance of the DRSO algorithm, as illustrated in Figures 4-9.

Comparison between DRSO, IIGA, DPSOVND, TMIIG, and DWWO
The results in Table 2 show that the DRSO algorithm reached the optimum for all instances (100%), with an average close or equal to the optimum in most cases. In contrast, the other algorithms (IIGA, DPSOVND, TMIIG, and DWWO) failed to find the optimum for any instance (0 out of 65), and their average results were also very high compared to the optimum found. Thus, the performance of DRSO is significantly better than that of the other algorithms.
In Figure 4, it can be observed that the curve of DRSO is significantly lower than the other curves, indicating that DRSO is more stable and has better overall performance than the other algorithms. Table 8 displays the results of the Holm-Šídák multiple comparison test comparing the performance of the DRSO method with that of the IIGA, DPSOVND, TMIIG, and DWWO methods. The test results indicate that the differences in performance between the DRSO method and the other methods are statistically significant.
Specifically, the negative mean difference for each comparison suggests that DRSO performs better than the other methods, and the adjusted P values for all comparisons are less than 0.0001, indicating a very high level of significance. In detail, the mean differences between DRSO and DPSOVND, between DRSO and TMIIG, and between DRSO and DWWO are all negative, reaching −917.2 for the comparison with DWWO. In all of these comparisons, the DRSO method shows superior performance, as indicated by the four stars (****) in the summary column, which denote a very high level of significance.

Comparison between DRSO, BAT, and TLBO
From Table 3, the comparison of the performance of the DRSO, BAT, and TLBO methods in solving the instances of the problem Ta shows significant differences in the percentage of success in finding the best solution equal to the best-known solution (BKS) as well as in the percentage of average deviation (PDav).
DRSO appears to be the best-performing method for finding the best solution equal to BKS for all instances. Conversely, BAT performs less well, reaching the best solution equal to BKS only 16.67% of the time. TLBO performs the worst of the three methods, with 8.33%.
In terms of percent average deviation (PDav), DRSO generally has a low deviation, indicating that the performance of this method is close to the best-known solution. In the majority of cases, BAT exhibits a higher percentage of mean deviation than DRSO, suggesting lower accuracy for this method. Similarly, TLBO has a higher average deviation percentage than DRSO in many cases and sometimes even higher than BAT, indicating that its performance is less accurate than the other two methods.
In addition, Figure 5 compares the performance curves of the algorithms. The curve of DRSO is significantly lower than those of BAT and TLBO, indicating better stability and overall superior performance for DRSO compared to the other algorithms. In Table 9, the results of the Holm-Šídák test indicate a significant difference in performance between DRSO and BAT, with a mean difference of -120.9 and an adjusted P value of 0.0240. The negative difference suggests that DRSO performs better than BAT; this comparison reaches a modest level of significance, as indicated by the single star (*) in the summary column.
In contrast, the comparison between DRSO and TLBO does not show a significant difference in performance. The mean difference is -218.0 and the adjusted P value is 0.0515, slightly above the 0.05 significance level. In this comparison, the summary indicates "ns" (not significant), which means that there is insufficient evidence to conclude that the performance of DRSO is significantly different from that of TLBO.

Comparison between DRSO, SGA, and HMSA
From Table 4, the comparison of the performance of the DRSO, SGA, and HMSA methods in solving the instances of the problem Ta shows significant differences in their ability to find the best solution equal to the best-known solution (BKS) as well as in the percentage of average deviation (PDav).
DRSO appears to be the best-performing method for finding the best solution equal to BKS for all instances. SGA and HMSA, on the other hand, are less effective in obtaining the best solution equal to BKS, with HMSA generally outperforming SGA.
In terms of percent average deviation (PDav), DRSO consistently shows a low deviation, indicating that the performance of this method is close to the best-known solution. In most cases, SGA exhibits a higher percentage of mean deviation than DRSO, suggesting lower accuracy for this method. Similarly, HMSA often has a higher average deviation percentage than DRSO, indicating that its performance is less accurate than DRSO, though generally more accurate than SGA.
Figure 6 illustrates the consistency of DRSO's performance compared to SGA and HMSA using a graphical representation. The curve highlights the low deviation and higher accuracy of DRSO, while SGA and HMSA show higher average deviation percentages. This Figure therefore supports the claim that DRSO is a more reliable and accurate algorithm than SGA and HMSA.
The results of the Holm-Šídák multiple comparison test in Table 10 show that the differences in performance between DRSO and the other two methods (SGA and HMSA) are statistically significant. The negative mean differences suggest that DRSO performs better than SGA and HMSA, with adjusted P values of less than 0.0001. The four stars (****) in the summary column indicate a very high level of significance for these comparisons.

Comparison between DRSO, NEH, and NEH-NGA
Comparing the results of the three methods (DRSO, NEH, and NEH-NGA) in Table 5 reveals that the DRSO method performs best in solving the scheduling problem. DRSO finds the best solution, equal to the best-known solution (BKS), for 42 out of 54 instances, a success rate of approximately 77.78%. In addition, this method has a very low percentage average deviation (PDav) of 0.037%, indicating higher accuracy than the other methods.
In comparison, the NEH method fails to find the best solution for any of the 54 instances, with a success percentage of 0%. Its average PDav is 2.82%, which shows a significant difference from BKS.
The NEH-NGA method, on the other hand, succeeds in finding the best solution for 12 of the 54 instances, with a success rate of about 22.22%. Its average PDav is 0.43%, which is a relatively small difference on average, but still higher than that of the DRSO method. Figure 7 shows a comparison of the PDav(%) values obtained by DRSO, NEH, and NEH-NGA. The Figure shows that the DRSO method has an exceptionally low percent mean deviation (PDav), which means higher accuracy than the other methods examined. The results of the Holm-Šídák multiple comparison tests, presented in Table 11, indicate statistically significant differences in performance between DRSO and the other two methods (NEH and NEH-NGA). The negative mean differences suggest that DRSO performs better than NEH and NEH-NGA. Adjusted P values less than 0.0001 for both comparisons, represented by the four stars (****), denote an exceptionally high level of significance.

Comparison between DRSO, SSO, SCE-OBL, CLS-BFO, and ACGA
Table 6 compares the performance of the DRSO algorithm to four other optimization methods (SSO, SCE-OBL, CLS-BFO, and ACGA) on 21 instances of a problem characterized by n×m matrices. The evaluation criterion for these algorithms is their capacity to achieve the best-known solution (BKS) for each instance. Remarkably, DRSO reaches the BKS in all 21 instances, a 100% success rate. In contrast, the other algorithms exhibit varying success levels in attaining the BKS, with SSO accomplishing it in merely 7 out of 21 instances, while the other three methods also fall short of DRSO's performance.
As indicated in Table 12, out of the 21 instances, DRSO surpasses the SSO algorithm in 11 instances (52.38%), SCE-OBL in 15 instances (71.43%), CLS-BFO in 17 instances (80.95%), and ACGA in 16 instances (76.19%).
Figure 8 shows the comparison of the best value obtained by DRSO, SSO, SCE-OBL, CLS-BFO, and ACGA. The curve for DRSO is significantly lower, indicating that DRSO systematically obtains better results in terms of the best value obtained, which underlines its superior performance and efficiency compared to the other algorithms.
Table 13 is used to compare the performance of DRSO against the other methods. The results show that DRSO performs significantly better than SSO, SCE-OBL, CLS-BFO, and ACGA in all cases studied, with mean differences of 40.52, 91.71, 112.1, and 50.38, respectively. Holm-Šídák adjustments were applied to control for type I errors. Adjusted P values were calculated for each comparison and were all less than 0.05, indicating a significant difference between the performance of DRSO and the other methods.
For the DRSO algorithm, the Best and Average values are either identical or very close, indicating consistent and stable convergence to the BKS. The PDav values for the DRSO algorithm are also very low, confirming its stability.
In contrast, the performance of the CSO algorithm varies from instance to instance. In some instances, its best values equal those obtained by DRSO (e.g., Ta001, Ta031, Ta035, Ta040, Ta061), while in others they are higher than those of DRSO (e.g., Ta011, Ta015, Ta021, Ta025, Ta041, Ta045, Ta051, Ta055, Ta065, Ta071, Ta075). The average values are generally higher than the optimal values, suggesting less stability in the convergence of the CSO algorithm. The PDav values for CSO are also higher than those for DRSO, reflecting its more variable performance.
The Wilcoxon test reported in Table 14 compares DRSO and CSO. The results indicate a P value of 0.0010 and a P value summary of ***, which means that the two groups are significantly different at the P < 0.05 level.

Table 14. Wilcoxon signed rank Comparisons Test Results for DRSO and CSO
Wilcoxon signed-rank test
P value: 0.0010
P value summary: ***
Significantly different (P < 0.05)? Yes
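For small paired samples the Wilcoxon signed-rank test can be computed exactly by enumerating the null distribution. The sketch below is a minimal pure-Python implementation; the per-instance makespans are illustrative placeholders, not the paper's data.

```python
from itertools import product

def wilcoxon_signed_rank(x, y):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples.
    Zero differences are dropped; tied |differences| share average ranks.
    Only suitable for small n, since all 2**n sign patterns are enumerated."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    by_abs = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                                 # assign average ranks to ties
        j = i
        while j + 1 < n and abs(diffs[by_abs[j + 1]]) == abs(diffs[by_abs[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[by_abs[k]] = (i + j) / 2 + 1
        i = j + 1
    total = sum(ranks)
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w = min(w_plus, total - w_plus)
    # exact p-value: enumerate all sign assignments under the null hypothesis
    count = 0
    for signs in product((0, 1), repeat=n):
        ws = sum(r for b, r in zip(signs, ranks) if b)
        if min(ws, total - ws) <= w:
            count += 1
    return w, count / 2 ** n

# Illustrative per-instance best makespans (NOT the paper's data):
drso = [1278, 1359, 2297, 1108, 1235, 2724, 1251, 1121]
cso  = [1278, 1374, 2310, 1115, 1250, 2740, 1262, 1133]
w, p = wilcoxon_signed_rank(drso, cso)
print(f"W = {w}, p = {p:.4f}")   # p < 0.05 -> significantly different
```

Here DRSO's value is lower on every instance with a nonzero difference, so the statistic W is 0 and the exact two-sided p-value is the smallest attainable for seven pairs.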

Evaluating DRSO Performance Using the Friedman Test
The performance of the DRSO optimization algorithm can also be assessed with the Friedman test at an alpha of 0.05 (95% confidence level). The Friedman test is a non-parametric statistical test that detects differences among three or more related groups, here the results of the competing algorithms over the same set of instances. With alpha set to 0.05, we can determine whether the differences in performance between DRSO and the other algorithms are statistically significant.
If the Friedman test shows that the difference in performance between DRSO and the other algorithms is statistically significant, with a p-value below 0.05, we can conclude that DRSO performs significantly better. In other words, we can be 95% confident that the observed differences in performance are not due to chance or random variation, but rather to the inherent superiority of the DRSO algorithm.
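The ranking mechanism behind the test can be sketched as follows: each algorithm is ranked per instance (lower makespan is better), the ranks are summed, and the Friedman chi-square statistic is computed from those rank sums. The data below are toy values, not the paper's results, and the closed-form p-value shown is valid only for k = 3 algorithms (chi-square with 2 degrees of freedom).

```python
import math

def friedman(scores):
    """Friedman chi-square statistic for scores[i][j]: the result of
    algorithm j on instance i (lower is better). Ties share average ranks."""
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: row[j])
        i = 0
        while i < k:                      # average ranks for tied values
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            for t in range(i, j + 1):
                rank_sums[order[t]] += (i + j) / 2 + 1
            i = j + 1
    chi2 = (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
           - 3.0 * n * (k + 1)
    return chi2, rank_sums

# Toy makespans: 3 algorithms over 4 instances (illustrative only).
scores = [[1200, 1260, 1302],
          [1108, 1140, 1180],
          [2297, 2340, 2410],
          [1235, 1250, 1299]]
chi2, sums = friedman(scores)
p = math.exp(-chi2 / 2)   # chi-square survival function; valid for df = k-1 = 2
print(chi2, sums, round(p, 4))
```

A small p-value leads to the post-hoc multiple comparisons of rank sums discussed next, which is where the per-algorithm "Significant?" verdicts come from.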
Based on the results of the Friedman test with an alpha of 0.05 and a 95% confidence interval, as presented in Table 9, DRSO outperforms the other optimization algorithms in finding the optimal solution. The multiple comparisons test shows that DRSO has a significantly lower rank sum than BAT, TLBO, TMIIG, DWWO, SGA, HMSA, NEH, NEH-NGA, CLS-BFO, and ACGA. This is indicated by "Yes" in the "Significant?" column, with adjusted p-values below 0.0001 for all these comparisons.
However, the results indicate that the rank sum of DRSO is not significantly lower than that of SSO or SCE-OBL, as indicated by "No" in the "Significant?" column, with adjusted p-values of more than 0.9999 for SSO and 0.0873 for SCE-OBL.
These results, as shown in Table 9, suggest that DRSO is a highly efficient optimization algorithm compared to the other algorithms tested, consistently finding optimal solutions with high accuracy.

Conclusion
In summary, the utilization of discrete rat swarm optimization in manufacturing systems shows significant promise for improving efficiency and productivity. Implementing this approach could contribute to considerable advancements in manufacturing processes, resulting in more streamlined and cost-effective operations.
The implementation of discrete rat swarm optimization has demonstrated its efficacy in addressing the flow shop scheduling problem, indicating its potential to enhance the efficiency of manufacturing systems. The ability of this method to identify optimal solutions with a high degree of accuracy positions it as a valuable tool for boosting manufacturing process productivity.
When compared to other optimization algorithms, such as BAT, TLBO, TMIIG, DWWO, SGA, HMSA, NEH, NEH-NGA, CLS-BFO, and ACGA, discrete rat swarm optimization consistently outperforms these techniques in obtaining the optimal solution. This evidence underscores the superiority of this approach for solving complex optimization challenges.
In future work, we will focus on several key aspects to advance the current state of research. First, we will continue to refine the performance of discrete rat swarm optimization for the flow shop scheduling problem to improve its efficiency and impact. Second, we will explore potential applications of this optimization technique in other optimization tasks, expanding its scope and influence across domains. Finally, we will develop hybrid optimization algorithms that combine the strengths of rat swarm optimization with those of other optimization techniques, which could lead to significant improvements in the overall efficiency of this method.

Data Availability Statement:
The data used to support the findings of this study are included within the article.

Conflicts of Interest:
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.