ARTIFICIAL RAT OPTIMIZATION WITH DECISION-MAKING: A BIO-INSPIRED METAHEURISTIC ALGORITHM FOR SOLVING THE TRAVELING SALESMAN PROBLEM

Abstract: In this paper, we present the Rat Swarm Optimization with Decision Making (HDRSO), a hybrid metaheuristic algorithm inspired by the hunting behavior of rats, for solving the Traveling Salesman Problem (TSP). The TSP is a well-known NP-hard combinatorial optimization problem with important applications in transportation, logistics, and manufacturing systems. To improve the search process and avoid getting stuck in local minima, we added a natural mating-and-selection mechanism to HDRSO by incorporating crossover and selection operators. In addition, we applied 2-opt and 3-opt heuristics to the best solution found by HDRSO. The performance of HDRSO was evaluated on a set of symmetric instances from the TSPLIB library, and the results demonstrated that HDRSO is a competitive and robust method for solving the TSP, matching the best-known solutions in some cases.


Introduction
Optimization, planning, and decision-making in real-time are essential in every aspect of our lives, from daily decision-making to the operations of large companies. However, these decisions can often be complex, with multiple factors and potential drawbacks. By looking at how large companies and mega-companies approach decision-making, we can gain insight into how to make better choices. These companies often face high stakes, with significant potential gains and losses. They use various methods and tools to address these complex optimization problems, which can be classified based on their computation time and solution quality. Some methods prioritize speed and may not always find the optimal solution, while others prioritize a very high solution quality, which may come at the cost of longer computational time. Ultimately, the performance and efficiency of these methods depend on both their optimality and the time required for implementation.
Combinatorial optimization problems (COPs) are an important area of study within operations research, with applications in various fields such as industry, urban management, biology, and technology (Peres & Castelli, 2021). When studying these problems, it is important to consider factors such as the available time and resources, the potential benefits of the study, and the available tools and computing power. To solve COPs, there are several classes of methods, including exact and deterministic methods (Chung & Freund, 2022). These methods typically involve enumerating the possible solutions in the search space, using techniques such as boundary calculations and heuristics to guide the search and improve efficiency. Traditional methods such as branch-and-bound (séparation et évaluation progressive, SEP) or backtracking algorithms fall under this category. While exact methods can be used to find optimal solutions for problems of moderate size, their computational time tends to increase exponentially with the size of the problem, making them less practical for larger applications.
When the need for an optimal solution is not as pressing, approximate approaches can provide an efficient solution for large optimization problems. These techniques, such as greedy approaches and iterative improvement, have been used by practitioners for many years and have proven effective in various contexts. For example, Lin and Kernighan's approach is widely considered one of the best heuristics for the traveling salesman problem. These approximate methods can balance computational time and solution quality for certain types of problems.
In recent years, significant progress has been made in developing powerful and general approximate methods known as metaheuristics. These methods, which include neighborhood approaches such as simulated annealing and tabu search (Prajapati et al., 2020) and evolutionary algorithms such as genetic algorithms (Sun, 2015) and evolutionary strategies (Slowik & Kwasnicka, 2020), have enabled the development of approximate solutions for large-scale classical optimization problems and previously unmanageable applications (Ezugwu et al., 2021). Metaheuristics have gained increasing attention in operations research and artificial intelligence in recent years.
There are several reasons why metaheuristics have become increasingly popular in recent years:
- They have strategies in place to guide the search for optimal solutions.
- They can efficiently explore the search space to find (near-)optimal solutions.
- The techniques that make up metaheuristic approaches range from simple local search algorithms to complex learning processes.
- They have mechanisms to avoid getting stuck in suboptimal regions of the search space.
- They can incorporate problem-specific heuristics into the search process, but a higher-level strategy controls these.
- They can use the experience gained during the search process to better guide the remainder of the search.

Table 1 provides a classification of several types of metaheuristics that can be distinguished.

Table 1. A classification of metaheuristics
Swarm-based algorithms: Particle Swarm Optimization (PSO) (Kennedy & Eberhart, 1995); Firefly Algorithm (FA) (Xu et al., 2022; Yang, 2009); Bat Algorithm (BA) (Saji & Riffi, 2016; Yang, 2010); Salp Swarm Algorithm (SSA) (S. Mirjalili et al., 2017); Grey Wolf Optimization (GWO) (Medjahed et al., 2016); Gorilla Troops Optimizer (GTO) (Ginidi et al., 2021); Grasshopper Optimization Algorithm (GOA) (S. Z. Mirjalili et al., 2018)
Physics-based algorithms: Simulated Annealing (SA) (Kirkpatrick et al., 1987); Lichtenberg Algorithm (LA) (Pereira et al., 2021); Gravitational Search Algorithm (GSA) (Rashedi et al., 2009); Black Hole Algorithm (BH) (Abualigah et al., 2022)
Evolutionary algorithms: Genetic Algorithm (GA) (Sun, 2015); Genetic Programming (GP) (Koza & Poli, 2005); Evolutionary Programming (EP) (Opara & Arabas, 2019); Biogeography-Based Optimizer (BBO) (Simon, 2008); Tree-Seed Algorithm (TSA) (Cinar et al., 2020)
Human-based algorithms: Harmony Search (HS) (Lee & Geem, 2004); Imperialist Competitive Algorithm (ICA) (Atashpaz-Gargari & Lucas, 2007); Tabu Search (TS) (Barbarosoglu & Ozgur, 1999); Thermal Exchange Optimization (TEO) (Kaveh & Dadras, 2017)

The study of optimization and NP-hard problem-solving, including metaheuristics, has been influenced by the behavior of animals in nature (Tanaev et al., 1994).
One well-known and extensively studied problem in this field is the traveling salesman problem (TSP) (Mzili et al., 2020), which belongs to the class of NP-hard optimization problems. The TSP involves finding the shortest route that visits a list of cities, passing through each city only once. While the problem may initially seem simple, no known algorithm can quickly find an exact solution for all cases. Furthermore, computational complexity increases exponentially with the number of cities, making it a useful test case for optimization techniques. The TSP has many practical applications, including in astronomy, logistics, transportation, telecommunications, and scheduling. Metaheuristic algorithms have successfully solved the TSP and other similar problems, demonstrating their versatility and effectiveness. These algorithms use search techniques to explore the search space efficiently, often focusing on specific areas of interest.
The contributions of this paper are as follows:
- Presentation of the Rat Swarm Optimizer (RSO), a robust optimizer inspired by wild rats' attack and hunting behavior, which outperforms many known metaheuristics and effectively solves the discrete traveling salesman problem.
- A proposed hybrid approach using RSO to solve a widely applicable and influential combinatorial problem with potential applications in various domains.
- A uniform crossover and mutation operator mechanism to improve performance in the exploration phase, thereby conserving information throughout the search space and balancing exploration and exploitation.
- Local Lin-Kernighan-style searches to increase efficiency, together with an acceptance and solution search strategy to avoid getting stuck in local optima.
- Introduction and testing of a new random parameter, T, to balance the workload of the auxiliary operators.
- Evaluation of the proposed algorithm on more than 26 instances of the TSPLIB library, using the parametric Student's t-test and the non-parametric Wilcoxon test to compare it to other models.
- Comparison of the proposed HDRSO algorithm with the baseline algorithm and five recently developed bio-inspired metaheuristics (DJAYA, RNN-SA, GGSC-SSA, DSSA, and DSOS) to demonstrate its superior performance.

This paper is structured as follows: Section 1 introduces the general topic. Section 2 presents some related works; Section 3 discusses the Traveling Salesman Problem; Section 4 presents the Rat Swarm Optimizer; Section 5 proposes an improved and hybrid Rat Swarm Optimizer; Section 6 presents the results and discussion, followed by the conclusion.

Related works
The Traveling Salesman Problem (TSP) is widely studied in optimization, with numerous algorithms being developed to solve it efficiently. The TSP seeks to find the shortest possible route for a salesman who must visit a set of cities exactly once and return to the starting city. The TSP is an NP-hard problem, meaning that its solution time grows exponentially as the size of the problem increases. Nevertheless, the vast literature on the TSP offers many approaches to solving it with various algorithms.
The Genetic Algorithm (GA) (Sun, 2015) is a popular optimization technique that mimics the process of natural selection; it has been used to solve the TSP, and several variants of GA have been proposed. Simulated Annealing (SA) is another optimization technique, which simulates the annealing process in metals. Tabu Search (TS) uses short-term memory to avoid revisiting recently explored solutions. Iterated Local Search (ILS) combines local search with perturbation. Variable Neighborhood Search (VNS) uses a sequence of neighborhoods to explore the search space. Finally, the Memetic Algorithm (MA) combines local and global searches. Each of these techniques has been applied to the TSP, and several variants of each have been proposed.
Recent research in the optimization field has focused on applying bio-inspired metaheuristics to solve real-world problems. Osaba et al. (2020) reviewed recent research on TSP and the application of bio-inspired metaheuristics in solving it. A.  provided a comprehensive overview of metaheuristic optimization techniques and their applications in engineering. A.  reviewed optimization techniques for petroleum engineering, while Uniyal et al. (2022) provided an overview of nature-inspired metaheuristic algorithms for optimization. A.  used nature-inspired optimization algorithms to optimize the availability-cost of a butter oil processing system, while Rawat et al. (2022) provided a state-of-the-art survey on the applications of the Analytical Hierarchy Process (AHP). Finally, J.  discussed the use of Multi-Criteria Decision-Making (MCDM).

Introducing the traveling salesman problem
The TSP is a classic NP-hard problem. It consists in finding the shortest route that visits each city in a list exactly once and returns to the starting city, helping the traveling salesman save time, money, and effort.
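Formally, a candidate solution is a permutation of the cities, and its cost is the length of the resulting closed tour. A minimal sketch in Python (the helper `tour_length` and the four-city example are illustrative, not from the paper):

```python
import math

def tour_length(tour, coords):
    """Total Euclidean length of a closed tour.

    tour   -- list of city indices, each visited exactly once
    coords -- sequence mapping city index -> (x, y)
    """
    total = 0.0
    for i in range(len(tour)):
        x1, y1 = coords[tour[i]]
        x2, y2 = coords[tour[(i + 1) % len(tour)]]  # wrap back to the start
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# Four cities on a unit square: visiting them in order gives a tour of length 4.
coords = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(tour_length([0, 1, 2, 3], coords))  # 4.0
```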

Importance of solving the TSP
The Traveling Salesman Problem (TSP) is a fundamental problem of finding the shortest possible route to visit a set of locations and return to the starting point. TSP has many applications in logistics, transportation, and manufacturing. Effective solutions to TSP can provide significant cost savings and efficiency improvements in these areas.
In logistics, TSP is essential for determining the most efficient delivery routes for vehicles, which reduces travel time and distance. As a result, this can result in cost savings and improved customer satisfaction through shorter delivery times.
Similarly, in transportation, TSP can help plan optimal routes for public transportation, including buses and trains, thereby reducing fuel consumption and emissions and improving the overall efficiency of the transportation system.
In manufacturing, TSP is used to optimize the order in which tasks are processed on a production line, reducing overall processing time and minimizing machine idle time. These benefits can translate into increased productivity and significant cost savings for companies. Thus, the ability to effectively resolve TSP can profoundly impact decision-making in various areas, leading to improved efficiency and substantial cost savings.

Presentation of rat swarm optimizer
The Rat Swarm Optimizer (RSO) is a metaheuristic algorithm that uses the collective behavior of rats in a swarm as a model for optimization. The algorithm is inspired by the hunting and aggressive behavior of rats in the wild, which is used to model the exploration and exploitation phases of the search for solutions to optimization problems. RSO effectively solves various continuous optimization problems and has been used in many different applications. The hunting and aggressive behavior of rats in the wild is the main inspiration for the RSO algorithm, which mathematically models this behavior to optimize solutions to difficult problems. This behavior is characterized by the social intelligence and territoriality of rats and their ability to engage in complex behaviors such as jumping and running.
The two main behaviors that form the basis of the RSO algorithm are:
- Swarm hunting behavior: when rats think they have located their prey, they designate a captain and follow him, allowing them to cover the entire search area.
- Fighting behavior with prey: to hunt their prey, rats enter into conflict with it. This conflict may result in the death of some rats, which is modeled in the algorithm as the cancellation of a particular solution.
These behaviors are used to guide the exploration and exploitation phases of the search for solutions in the RSO algorithm. Figure 1 shows the movement of rats around the prey in a 2D space.

Pursuit of prey (Exploration phase):
In this part of the rats' chasing and fighting behavior, the rats' exploration mechanism is described. Rats have powerful eyes that allow them to track and detect their prey, but sometimes the prey may not be visible.
Due to their social behavior, rats often hunt in groups, which makes them highly effective at locating and capturing their prey. To model this behavior, we assume that the best searcher knows where the prey is located, and the other searchers update their positions based on this information. This mechanism is described quantitatively using the following equations:

P = A · P_i(x) + C · (P_r(x) − P_i(x))  (1)

where P_r(x) denotes the best optimal solution and P_i(x) the location of the i-th rat at iteration x. The parameters A and C are determined as follows:

A = R − x · (R / Max_iteration), for x = 0, 1, 2, …, Max_iteration  (2)

C = 2 · rand  (3)

Therefore, throughout the iterations, parameters A and C are responsible for good exploration and exploitation, while R and C are random values in [1, 5] and [0, 2], respectively.
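Under the schedule in Eqs. (2)-(3), the two parameters can be computed per iteration as follows (a minimal Python sketch assuming the standard RSO formulation; the function name is illustrative):

```python
import random

def rso_parameters(x, max_iter):
    """Exploration/exploitation parameters for iteration x.

    A decays linearly from a random R in [1, 5] down to 0 over the run;
    C is drawn uniformly from [0, 2] (assumed standard RSO schedule).
    """
    R = random.uniform(1, 5)        # random factor in [1, 5]
    A = R - x * (R / max_iter)      # decreases linearly with the iteration count
    C = random.uniform(0, 2)        # random value in [0, 2]
    return A, C

A, C = rso_parameters(0, 100)       # early iterations favor exploration
```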

Fighting prey (exploitation phase):
The rats attack the target prey detected in the previous phase. However, the prey often tries to escape dangerous situations or defend itself against this attack.
In this case, a deadly battle ensues between the rats and the prey. In some cases, the battle ends with the death of some rats.
Therefore, the fight between the rats and their prey is mathematically described by the formula below:

P_i(x + 1) = |P_r(x) − P|  (4)

where P_i(x + 1) represents the most recently updated location of the rat. The ideal solution is saved, and the locations of the other search agents are changed relative to it.
In general, the RSO algorithm proceeds as follows:
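The continuous loop can be sketched in Python as below (a minimal sketch assuming the standard RSO formulation above; `rso_minimize`, the greedy acceptance rule, and the sphere test function are illustrative, not the authors' implementation):

```python
import random

def rso_minimize(f, dim, n_rats=20, max_iter=200, bounds=(-10, 10)):
    """Minimal continuous Rat Swarm Optimizer sketch: minimize f over a box."""
    lo, hi = bounds
    rats = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_rats)]
    best = min(rats, key=f)[:]                 # captain rat = best solution so far
    for x in range(max_iter):
        R = random.uniform(1, 5)
        A = R - x * (R / max_iter)             # exploration weight decays to 0
        for i, rat in enumerate(rats):
            C = random.uniform(0, 2)
            new = []
            for d in range(dim):
                P = A * rat[d] + C * (best[d] - rat[d])   # chase the captain
                nd = abs(best[d] - P)                     # fight the prey
                new.append(min(max(nd, lo), hi))          # keep inside the box
            if f(new) < f(rat):                           # greedy acceptance
                rats[i] = new
        best = min(rats + [best], key=f)[:]               # elitism: keep the best
    return best

sphere = lambda v: sum(t * t for t in v)   # simple convex test function
sol = rso_minimize(sphere, dim=2)
```

Because the best solution is retained across iterations, the returned objective value can only improve on the best initial random rat.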

Discrete RSO method to solve the TSP problem
The RSO (Rat Swarm Optimization) method is designed to solve continuous optimization problems. Therefore, it cannot be directly applied to solve discrete combinatorial optimization problems such as the traveling salesperson problem. To use the RSO method for the TSP, several modifications must be made to the algorithm to account for the discrete nature of the problem. These modifications may include changes to the search space, the evaluation function, and the exploration and exploitation strategies used by the algorithm.
In the context of the traveling salesperson problem, each possible route can be represented as a list of cities or as a graph, with each city represented as a vertex and the edges between cities representing the distances between them (as shown in Figure  2).

Figure 2. Example of a TSP trip
In the context of using the RSO method to solve the traveling salesman problem, each rat can be associated with a random path (sequence of cities) representing a potential solution. During the optimization process, each rat's movement can involve making small changes or permutations to the order of cities in the path, applying minimal modifications to the current solution. The "fighting with the prey" process can be defined as a verification of the solution, where the solution is accepted if the rat wins and ignored if it does not. This can help to ensure that the algorithm can explore the search space effectively and find good solutions to the TSP.
To adapt the RSO method for use with discrete combinatorial optimization problems such as the traveling salesperson problem, the continuous operators used in the original algorithm must be replaced with discrete counterparts. For example, the subtraction operator, which calculates the difference between two positions in the search space, can be defined as a set of permutations that can be performed on one of the positions to obtain a new position closer to the other position. This allows the algorithm to explore the space of possible solutions and make changes to the current solution in a way appropriate for the problem's discrete nature.
Example: the subtraction between two positions, P1 − P2, can be defined as the set of permutations to be performed on P2 to obtain the new position P1.
Similarly, the addition operator in the RSO method can be adapted for use with discrete optimization problems by defining it as a set of permutations that can be applied to a path (list of cities to visit) to modify the current position. This operator allows the algorithm to make changes to the current solution in a way that is appropriate for the problem's discrete nature and can help explore the space of possible solutions more effectively. For example, the operator could swap the positions of two cities in the path or insert a new city into the path at a specific position. Again, these changes can help the algorithm explore the search space and find good solutions to the TSP. Finally, the multiplication operator in the RSO method can be defined as an operator that allows reducing the number of permutations applied to a path. This operator can be applied between a real number and a permutation list, allowing the algorithm to make more targeted changes to the current solution and avoid making unnecessary permutations.
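The three discrete operators described above can be sketched as follows (a hypothetical Python discretization; the paper's exact operators may differ — here "subtraction" returns a swap sequence, "addition" applies it, and "multiplication" truncates it):

```python
def subtract(p_target, p_source):
    """'Subtraction' p_target - p_source: the swap sequence that, applied to
    p_source, transforms it into p_target."""
    src = p_source[:]
    swaps = []
    for i in range(len(src)):
        if src[i] != p_target[i]:
            j = src.index(p_target[i])
            src[i], src[j] = src[j], src[i]     # put the right city at position i
            swaps.append((i, j))
    return swaps

def add(path, swaps):
    """'Addition': apply a swap sequence (permutation list) to a path."""
    p = path[:]
    for i, j in swaps:
        p[i], p[j] = p[j], p[i]
    return p

def multiply(c, swaps):
    """'Multiplication' by a real c in [0, 1]: keep only the first c-fraction of
    the swaps, moving the path only part of the way toward the target."""
    return swaps[: int(c * len(swaps))]

a = [0, 1, 2, 3, 4]
b = [2, 0, 1, 4, 3]
assert add(b, subtract(a, b)) == a            # b + (a - b) == a
half_way = add(b, multiply(0.5, subtract(a, b)))
```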
To improve the quality of solutions to the traveling salesperson problem, various neighborhood search strategies can be used. One example of a reliable neighborhood search method for the TSP is the two-exchange function, which involves making small changes to the order of cities in the current solution to find a better solution.
In this work, we will present two versions of the rat swarm optimization (RSO) algorithm for solving the TSP: the basic DRSO (Discrete Rat Swarm Optimization) algorithm and the improved hybrid HDRSO (Hybrid Discrete Rat Swarm Optimization) algorithm. The basic DRSO algorithm uses the RSO method in its original form, while the HDRSO algorithm incorporates additional techniques and modifications to improve its performance.
Before presenting the developments and modifications to the RSO method, we will introduce the basic version of the algorithm and discuss its limitations.

The limits of the basic discrete rat swarm optimizer
The Discrete Rat Swarm Optimization (DRSO) algorithm, introduced by Mzili et al. (2022), emulates the behavior of rats in the wild, where they collaborate to search for prey. The population of the algorithm consists of rats, one of which is designated as the captain, responsible for leading the group to the prey. However, we found that the algorithm tends to converge to a local optimum after several iterations, which hurts its performance (as shown in Table 2).
This behavior is because the captain rat does not have accurate information about the position of the prey, which can lead the whole group to be trapped in a false location, as rats do in the wild. Unfortunately, the captain is not replaced periodically, and the whole group suffers from having to start the search from the beginning, resulting in considerable delays.
To address these limitations, we propose an improved hybrid version of the RSO algorithm that incorporates additional strategies and modifications to improve its performance. One strategy is to search for additional generations of rats during the prey search phase that can fight the prey and are more efficient, with varying information about the location of the prey through additional enhancement, growth, and selection heuristics. The rats work together to guide the population to the optimal solution, and if a captain is trapped, another rat can take over.
In addition, we suggest introducing a random mutation operator to allow the population to explore new solutions and escape from local optima. Applying this operator with a low probability avoids interfering with the algorithm's convergence. Moreover, we propose to incorporate a local search mechanism to refine the solutions found by the algorithm. This mechanism can be applied to the best solutions the algorithm finds to improve their quality further. The proposed hybrid version of the RSO algorithm combines multiple strategies to address its limitations, increase population diversity, improve exploration capabilities, and refine the algorithm's solutions.
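The low-probability random mutation described above can be sketched as a single random swap (the mutation rate `p_mut` is an assumed illustrative value, not from the paper):

```python
import random

def mutate(path, p_mut=0.05):
    """With low probability p_mut, exchange two random cities in the path.

    A rare random swap lets the population escape local optima without
    disrupting convergence."""
    p = path[:]
    if random.random() < p_mut:
        i, j = random.sample(range(len(p)), 2)  # two distinct positions
        p[i], p[j] = p[j], p[i]
    return p
```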

Proposed hybrid discrete rat swarm optimizer
To improve the basic version of the RSO algorithm and address its limitations, we propose incorporating a mechanism common in many animal species: mating and selection. To improve the next generation of any animal breed, animals (male or female) will choose their life partner based on specific characteristics that are distinctive for that type of animal to ensure the continuation of the desired qualities. For example, in wild animals such as lions and wolves, females will choose the strongest males, and sometimes this selection is made through mortal combat. On the other hand, in animals that value aesthetics to preserve the beauty of the offspring, females will choose the most beautiful, attractive, and elegant males.
We adopt this mechanism in our rat swarm optimization algorithm, where rats can mate after selecting the most intelligent and strongest elements that can find the position of the prey and successfully attack it without dying. This mechanism maintains the characteristics of the swarm and the group and generates stronger solutions. Additionally, at each iteration, we can incorporate basic local improvement heuristics such as 2-opt and 3-opt (Zhong, 2021). However, the algorithms derived from the k-opt algorithm have high complexity in terms of time (O(n^k)) and memory usage, so caution must be taken when using them. For this reason, we have associated these algorithms with a probability. A new random parameter T is added at this level, with a value in [0, 1]; based on this value, we choose which operator to call at each iteration. This allows us to balance the exploration and exploitation of the search space and improve the algorithm's overall performance.

Crossover and selection operators
Crossover operations were first introduced in genetic algorithms to create a new population. The idea behind crossover is to pass each parent's best qualities on to the new generation. Various crossover operators have been proposed in the literature to solve the traveling salesman problem. This paper adopts a modified uniform crossover to improve the search strategy of DRSO. The crossover is performed as follows:
1) A solution is matched with the best-found solution, also named Gbest, to generate another individual that can be better than both.
2) A and B, representing different solutions to the traveling salesman problem, are selected as parents for the crossover operation.
3) The modified uniform crossover operator is applied to A and B to generate a new individual, C, which combines the strengths of both parents.
4) C is then compared to the current Gbest, which represents the best solution found so far by the algorithm; if it is better, it becomes the new Gbest.
5) This process is repeated until a new population of solutions is generated, with each new solution produced through the RSO crossover operator.
The advantage of using the RSO crossover operator is that it allows a more efficient and effective search strategy for the traveling salesman problem. By combining the strengths of both parents, the algorithm can explore a wider range of solutions and find better solutions more quickly, leading to improved performance on the traveling salesman problem.
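A uniform-style crossover for permutations needs a repair step so the child remains a valid tour. The sketch below is one hypothetical realization (a mask keeps cities from one parent, and the gaps are filled in the other parent's order; the paper's exact "modified uniform crossover" may differ):

```python
import random

def uniform_crossover(parent, gbest):
    """Uniform-style crossover for permutations with repair.

    Positions kept from `parent` are chosen by a random binary mask; the
    remaining positions are filled with the missing cities in the order in
    which they appear in `gbest`, so the child is always a valid tour."""
    n = len(parent)
    mask = [random.random() < 0.5 for _ in range(n)]
    child = [parent[i] if mask[i] else None for i in range(n)]
    kept = set(c for c in child if c is not None)
    fill = iter(c for c in gbest if c not in kept)   # gbest order for the gaps
    return [c if c is not None else next(fill) for c in child]
```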

2-Opt move algorithm
The 2-opt algorithm is a local search algorithm commonly used to improve the quality of solutions to the traveling salesman problem. It works by iteratively removing and reconnecting pairs of edges in the current solution to reduce its cost; in particular, removing intersecting edges always shortens the tour, by the triangle inequality.
The 2-opt algorithm is applied in the last step of the RSO crossover operator, after a new population of solutions has been generated. This allows the algorithm to further improve solution quality by removing any remaining edge crossings. Overall, combining the RSO crossover operator with the 2-opt algorithm yields a more efficient and effective search strategy for the traveling salesman problem.
In Figure 3, the route <a; b; c; d> is changed to <a; c; b; d> by reversing the order of visiting cities b and c. This involves removing the edges (a, b) and (c, d) and reconnecting them as (a, c) and (b, d).
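This move generalizes to the following sketch (a standard textbook 2-opt implementation, not necessarily the authors' code; `dist` is assumed to be a precomputed distance matrix):

```python
import math

def two_opt_move(tour, i, j):
    """Reverse the segment tour[i..j]: removes edges (tour[i-1], tour[i]) and
    (tour[j], tour[j+1]) and reconnects them as (tour[i-1], tour[j]) and
    (tour[i], tour[j+1])."""
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def two_opt(tour, dist):
    """Apply improving 2-opt moves until no further improvement is possible."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % n]
                # Gain from replacing edges (a, b), (c, d) with (a, c), (b, d)
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d] - 1e-12:
                    tour = two_opt_move(tour, i, j)
                    improved = True
    return tour

# The example from Figure 3: reversing b..c turns <a, b, c, d> into <a, c, b, d>.
assert two_opt_move(['a', 'b', 'c', 'd'], 1, 2) == ['a', 'c', 'b', 'd']
```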

Figure 3. 2-Opt move
The 2-opt algorithm is a special case of the more general k-opt algorithm, where k is the number of edges removed and reconnected at each step. The 2-opt algorithm considers k = 2, meaning that only two edges are swapped at each step (Figure 4). While it is possible to generalize to higher values of k, such as the 3-opt algorithm (Figure 5), which considers k = 3, this is generally not necessary: increasing k leads to a more complex search space and a higher computational cost without necessarily significantly improving solution quality.

Modification and adjustment of basic parameters
Our approach has several variables that play important roles in the optimization process.
- C is a variable that helps to explore the search space of the discrete case correctly. It determines the number of permutations performed on a path (city sequence) to find a potentially optimal solution. If C < 1, we can generate a different path; if C = 1, we get the same path.
- δ is a variable that controls the adjustment of the objective function in the continuous case or the modification of the entire trajectory in the discrete case.
- T is a variable that plays a crucial role in balancing the use of auxiliary operators.
At each iteration, T takes on a random value.
• If T < 0.5, we perform permutations on the path according to the position-update equation.
• If 0.5 <= T <= 0.9, we select a solution for crossover using the crossover operator (a crossover between a solution and the best solution found so far).
• If T > 0.9, we apply the local heuristic 3-opt, which has high complexity in terms of time and memory but can quickly converge to a local optimum; we assign it a low probability due to its high complexity.
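The dispatch on T can be sketched directly from the thresholds above (a minimal Python sketch; the operator labels are illustrative):

```python
import random

def choose_operator(T=None):
    """Pick the operator for this iteration from the random parameter T in [0, 1],
    using the thresholds 0.5 and 0.9 described in the text."""
    if T is None:
        T = random.random()          # T is redrawn at each iteration
    if T < 0.5:
        return "permutation-move"    # equation-based path permutation
    elif T <= 0.9:
        return "crossover-with-gbest"  # crossover with the best tour so far
    else:
        return "3-opt"               # costly local search, low probability

assert choose_operator(0.2) == "permutation-move"
assert choose_operator(0.7) == "crossover-with-gbest"
assert choose_operator(0.95) == "3-opt"
```

Giving 3-opt only a 10% share of the iterations keeps its high cost from dominating the runtime while still exploiting its strong local improvement.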

Experimental results and comparison
The basic and improved hybrid algorithms, including HDRSO (Hybrid Discrete Rat Swarm Optimizer), are applied to solve the traveling salesperson problem. They are tested on several TSP instances (benchmarks) from TSPLIB, the public electronic library of TSP problems. Each TSP instance provides a list of cities with their coordinates (x and y) (as shown in Figure 6).

Figure 6. Example of berlin52 instance
The instance name is its TSPLIB name concatenated with the number of cities in the instance: the instance named st70 has 70 cities. Euclidean distances of these TSP instances are used for the experiments and comparisons. The basic and improved algorithms are implemented in C++ under the 64-bit Windows 10 operating system. The tests are performed on a Dell laptop with a 2.00 GHz Intel Core i5 processor and 16 GB of RAM. The parameter values of the proposed algorithm are chosen based on preliminary tests. The comparison is then made based on the following criteria:
1) The Best value, which designates the best solution obtained by each algorithm;
2) The Mean value, the average of the 20 solutions obtained after 20 executions of an algorithm;
3) The Worst value, the worst value obtained by an algorithm;
4) PDav(%), the percentage deviation of the average solution length from the optimal solution over 20 executions:
PDav(%) = ((average − opt) / opt) × 100  (7)
5) The STD, the standard deviation; and finally,
6) The Time value, the average time in seconds over the 20 executions.
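Eq. (7) can be computed directly from the per-run tour lengths (a minimal sketch; the `runs` values are hypothetical, while 7542 is the known optimum of the berlin52 instance):

```python
import statistics

def pdav(lengths, opt):
    """Percentage deviation of the average tour length from the optimum:
    PDav(%) = (average - opt) / opt * 100, as in Eq. (7)."""
    return (statistics.mean(lengths) - opt) / opt * 100

runs = [7542, 7598, 7544, 7560]   # hypothetical lengths from 4 runs on berlin52
print(round(pdav(runs, 7542), 3))  # small positive deviation from the optimum
```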
The proposed HDRSO algorithm is compared with the basic algorithm and five recently developed bio-inspired metaheuristics: DJAYA, RNN-SA, GGSC-SSA, DSSA, and DSOS. The DJAYA algorithm (Gunduz & Aslan, 2021) is a population-based approach proposed to solve constrained and unconstrained optimization problems. RNN-SA (Rahman & Parvez, 2021) is an extension of the well-known Nearest Neighbor algorithm, designed to build routes efficiently. GGSC-SSA (Wu et al., 2021) is inspired by the foraging behavior of sparrows, while DSSA (Bas & Ülker, 2021) is based on the behavior of spiders. Finally, DSOS (Ezugwu & Adewumi, 2017) is a metaheuristic algorithm that takes inspiration from the symbiotic interactions among organisms in nature.
The performance of these algorithms is compared on a set of TSP instances from the TSPLIB library using the Euclidean distance. By comparing the algorithms on this benchmark dataset, we can gain insights into their relative performance and efficiency for solving the traveling salesperson problem.

Comparison between the base DRSO result and the HDRSO
In this section, we will compare the basic version of DRSO with the HDRSO and examine the impact of altering parameters and introducing a new type of motion. Table 4 compares the results of the basic DRSO and the HDRSO.

Comparison between the HDRSO and other recently developed metaheuristics
This section will compare the hybrid HDRSO metaheuristic with several other techniques developed in 2020 to evaluate its ability to solve TSPLIB instances. To do this, we ran HDRSO on 26 TSPLIB instances with 20 independent runs and used a parametric test to analyze the results. It's worth noting that the experiments were conducted on different computers and platforms, so we will not be making runtime comparisons.
In Experiment 1, HDRSO is compared with the basic DRSO and DJAYA, RNN-SA, GGSC-SSA, DSSA, and DSOS on a set of 26 symmetric TSP instances. The results of this comparison are listed in Tables 4-9, which show the mean, best value, standard deviation, and average performance time for each algorithm. The best results and averages are highlighted in bold.
To determine whether the differences between the results are significant, we performed a Student's t-test for each algorithm compared to HDRSO. The t-values were computed from the standard deviation and mean of the 20 independent runs for each problem.
The t-test results can be found in the "T-value" and "Sig." columns of the tables.
To test for significant differences between HDRSO and the other techniques, we derived five significance levels using critical values at the 95% confidence level (t0.05 = 1.960) and the 99% confidence level (t0.01 = 2.576). (Figure 9 shows the average convergence curves of all comparison algorithms.) The "Sig." significance levels are defined as follows: t > 2.576: +++ (extremely significant); 1.960 < t ⩽ 2.576: ++ (significant); 0 < t ⩽ 1.960: + (slightly significant); t = 0: = (equal); t < 0: − (insignificant). The WSR column represents the sign of the difference between each pair of paired observations. The smaller of the two rank sums, one for positive differences (WSR+) and one for negative differences (WSR−), is used as the test statistic for the Wilcoxon hypothesis test.
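The t-statistic and its mapping to the significance labels above can be sketched as follows. This is a hedged illustration: the paper only states that the t-values come from the means and standard deviations of 20 runs, so the exact two-sample formula and the sign convention (positive t meaning the comparison algorithm's mean tour length is worse, i.e., larger) are our assumptions, and the function names are ours.

```python
# Sketch: two-sample t statistic from per-algorithm run statistics, and the
# mapping to the paper's "Sig." labels. Formula and sign convention assumed.
import math

def t_value(mean_other, std_other, mean_ours, std_ours, n=20):
    """Two-sample t statistic for equal sample sizes n (assumed formula)."""
    return (mean_other - mean_ours) / math.sqrt(std_other**2 / n + std_ours**2 / n)

def significance(t):
    """Map a t-value to the paper's significance labels."""
    if t > 2.576:
        return "+++"  # extremely significant (beyond the 99% critical value)
    if t > 1.960:
        return "++"   # significant (beyond the 95% critical value)
    if t > 0:
        return "+"    # slightly significant
    if t == 0:
        return "="    # equal
    return "-"        # insignificant

# Toy numbers: the comparison algorithm averages 7100 with STD 25,
# ours averages 7050 with STD 20, over 20 runs each.
label = significance(t_value(7100, 25, 7050, 20))
```

With these toy numbers the t-value exceeds 2.576, so the comparison would be marked +++ in the tables.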

Analysis and discussion
To further verify the effectiveness of the proposed algorithm, we conducted a comparative analysis with several state-of-the-art metaheuristic algorithms published from 2020 to 2022 using a non-parametric statistical test. This allowed us to evaluate the performance of our proposed algorithm and its improvements relative to these other approaches. The first comparison is with the basic RSO algorithm, to see the effects of the changes and improvements, followed by a complete comparison with the other metaheuristics.
The results in Tables 4-9 indicate that the hybridization and improvement strategy chosen was able to create a very robust new algorithm that could solve many other combinatorial problems.
In this section, we chose to make a statistical evaluation using the parametric Student's t-test by comparing the results of HDRSO and the basic DRSO. The t-test results presented in Table 4 show that HDRSO is superior to the basic DRSO in all test cases (11 out of 11 assessments). In addition, the t-test results between HDRSO and DSSA are presented in Table 5: HDRSO is extremely significant in 84.21% (16 out of 19 assessments) and either significantly better in 5.26% (1 out of 19 assessments) or slightly better in 10.53% (2 out of 19 assessments) of the cases, as reflected in the differences in results. Table 6 shows that HDRSO outperforms RNN-SA in 86.36% (19 out of 22 evaluations) of the cases and is either significantly better in 4.55% (1 out of 22 evaluations) or slightly better in 9.09% (2 out of 22) of the evaluations.
For the comparison with DJAYA, our algorithm is significantly better in almost all test cases (92.31%, or 12 out of 13 evaluations) and significantly better in the remaining 7.69% (1 out of 13 assessments). The algorithms for which there is not enough information to make a statistical comparison with the Student's t-test are compared based on the average time and their ability to reach the optimal value. Regarding the comparison between HDRSO and GGSC-SSA, HDRSO reached the optimum in 70.83% (17 tests out of 24) and improved on the best-known value in 16.67% (4 tests out of 24). On the other hand, GGSC-SSA reached the optimum in only 16.67% (4 tests out of 24), which is significantly weaker than HDRSO.
Finally, in the comparison between HDRSO and DSOS, HDRSO reached the optimum in 86.67% (13 tests out of 15), while DSOS reached the optimum in 60% (9 tests out of 15), a difference of 26.67% in favor of HDRSO. Furthermore, when we analyze the PDav(%) values, the PDav of HDRSO is lower than that of DSOS in all 15 tests (100%), which indicates that, for each test, the solutions of the 20 executions of HDRSO remain very close to the optimum, whereas those of DSOS do not.
These newly obtained values can serve as references for future research.
We also confirm our analysis and comparison with the non-parametric Wilcoxon signed-rank test (Fix & Hodges Jr, 1955) at a 95% confidence level (α = 0.05), comparing our optimizer with the other metaheuristics.
This test was applied to the difference between the best values obtained by each pair of algorithms in order to compare and rank them.
N denotes the number of test cases; W+ represents the sum of the ranks of the cases where the proposed algorithm performs better (sum of WSR+), while W− represents the sum of the ranks of the cases where the proposed algorithm performs worse than the comparative algorithm (sum of WSR−). The p-value is compared with the critical value α = 0.05 of the Wilcoxon signed-rank test: if the p-value ⩽ α, there is a significant difference between the performance of the two algorithms; if the p-value > α, there is no significant difference. In Table 10, the Wilcoxon test shows that the difference between HDRSO and the other metaheuristics is statistically significant. According to these evaluations, the proposed algorithm, which uses the hybridization mechanism, crossover operators, and the 2-opt and 3-opt local search algorithms, outperformed the other metaheuristics regarding solution quality and the ability to reach the optimum. The hybrid HDRSO algorithm is therefore a promising approach for solving the TSP and other combinatorial optimization problems.
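The rank sums W+ and W− can be sketched as follows. This is a minimal illustration of the signed-rank bookkeeping only (zero differences discarded, tied absolute differences given average ranks); the best values used below are toy numbers, not results from the tables, and computing the p-value from min(W+, W−) is left to a statistics library.

```python
# Sketch: Wilcoxon signed-rank sums W+ and W- for paired best values.
# Toy data; the smaller of the two sums is the test statistic.
def wilcoxon_rank_sums(best_a, best_b):
    """Return (W+, W-): rank sums of positive and negative differences b - a."""
    diffs = [b - a for a, b in zip(best_a, best_b) if b != a]  # drop zeros
    ranked = sorted(diffs, key=abs)
    ranks = {}
    i = 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and abs(ranked[j]) == abs(ranked[i]):
            j += 1                      # group ties in |d|
        avg = (i + 1 + j) / 2           # average of ranks i+1 .. j
        ranks.setdefault(abs(ranked[i]), avg)
        i = j
    w_plus = sum(ranks[abs(d)] for d in diffs if d > 0)
    w_minus = sum(ranks[abs(d)] for d in diffs if d < 0)
    return w_plus, w_minus

# Toy example: 5 paired best values
w_plus, w_minus = wilcoxon_rank_sums([10, 10, 10, 10, 10], [9, 12, 10, 8, 14])
```

In practice a library routine (e.g., a statistics package's Wilcoxon signed-rank test) would be used to obtain the p-value compared against α = 0.05.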

Conclusion
This paper proposes a new optimization algorithm called the hybridized and discrete rat swarm optimization (HDRSO) algorithm. This algorithm is an improved version of the standard rat swarm optimization (RSO) algorithm and has been adapted to solve the symmetric Traveling Salesman Problem (TSP), a combinatorial optimization problem. Our HDRSO algorithm uses new motion types, mathematical operators, and heuristics, such as basic genetic operators and K-opt local search, to reconstruct its population and introduce a new, more intelligent class of RSO. In addition, the algorithm is inspired by natural rat behavior, such as hunting and chasing prey, and has been discretized for improved performance.
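As an illustration of the K-opt refinement mentioned above, the following is a minimal 2-opt sketch (the simplest K-opt move, also used earlier alongside 3-opt); the distance matrix and tour below are toy values, not the paper's implementation.

```python
# Minimal 2-opt local search sketch: repeatedly reverse a tour segment
# whenever exchanging two edges shortens the cycle. Toy instance only.
import math

def two_opt(tour, dist):
    """Improve a cyclic tour by 2-opt moves until no improvement remains."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # skip the pair that would share a vertex through the wraparound
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # replace edges (a,b) and (c,d) with (a,c) and (b,d)
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# Unit square: the tour [0, 2, 1, 3] crosses itself; 2-opt untangles it.
cities = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.dist(p, q) for q in cities] for p in cities]
tour = two_opt([0, 2, 1, 3], dist)
```

A 3-opt move generalizes this by removing three edges instead of two, which escapes some local optima that 2-opt alone cannot.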
We compare the performance of our HDRSO algorithm to several recently developed metaheuristics, including DJAYA, DSSA, DSOS, RNN-SA, and GGSC-SSA. The comparison results show that our HDRSO algorithm is more efficient than the other methods in solving TSP instances. The main contributions of this work are the development of a new optimization strategy based on group behavior and other robust mechanisms, as well as the use of a local search heuristic to improve the quality of solutions. This new optimization strategy is applied to the traveling salesman problem, and experimental results show that it outperforms classical heuristics in terms of computational efficiency and solution quality. This method can be useful for real-time decision-making in high-volume logistics transportation, especially in complex and dynamic environments. It can help significantly reduce salesmen's working time and travel costs.
The Discrete Rat Swarm Optimization (DRSO) algorithm is effective in solving the Traveling Salesman Problem (TSP). It can be extended to solve a wide range of other combinatorial optimization problems, such as the Quadratic Assignment Problem (QAP), the Vehicle Routing Problem (VRP), the Job Scheduling Problem (JSSP), and the Knapsack Problem (KP).
DRSO offers several advantages that make it well-suited for these problems. First, it excels at handling discrete optimization problems with a large search space where other optimization methods may have difficulty finding optimal solutions. Second, it uses a natural mechanism that mimics rats' behavior in nature, allowing it to avoid local optima and identify promising solutions. Finally, DRSO can be easily adapted to different problems by adjusting its parameters, such as population size, crossover rate, and mutation rate. Therefore, it is a versatile algorithm that can be applied to various fields, such as logistics, transportation, manufacturing systems, artificial intelligence, and machine learning applications.
In future work, the proposed algorithm can be extended to more advanced discrete optimization problems, such as the Quadratic Assignment Problem (QAP), the Job Shop Scheduling Problem (JSSP), and the Vehicle Routing Problem (VRP), and generalized to a broader range of discrete optimization problems. Further studies will evaluate the algorithm's performance on these more complex problems and explore its potential applications in various fields.

Data Availability Statement:
The data used to support the findings of this study are included within the article.

Conflicts of Interest:
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.