
Table of Contents

    25 September 2025, Volume 34 Issue 9
    Theory Analysis and Methodology Study
    Research on Platform Supply Chain Financing and Production Strategies under Agent Mode
    LIU Ying, MU Yinping, GUO Xiaorui
    2025, 34(9):  1-8.  DOI: 10.12005/orms.2025.0268
    With the continuous development of e-commerce, more and more small and medium-sized enterprises (SMEs) are selling their products through e-commerce platforms and market competition is intensifying; at the same time, these SMEs often face financial constraints in production, which can break the supply chain and seriously affect the profitability of all its participants. Many e-commerce platforms already try to provide financing services for these SMEs, such as Alibaba’s Aliloan and JD’s JingBaoBei. The aim is to transform the uncontrollable risks of individual enterprises into controllable risks of the supply chain as a whole, so as to improve the overall efficiency of the supply chain.
    Based on this background, this paper establishes a two-level platform supply chain consisting of a capital-constrained manufacturer and a well-funded e-commerce platform. By building a Stackelberg game model, the manufacturer’s optimal output and the equilibrium financing decisions of the manufacturer and the e-commerce platform under different operation modes are derived, and the differences between the platform’s operation modes are analyzed through a cross-sectional comparison. The comparative analysis shows that the optimal output under bank financing is always smaller than that under platform guaranteed financing, while the ranking of the optimal output under platform direct financing relative to the other two modes depends on conditions. E-commerce platforms make similar financing-mode choices under the different operating models: manufacturers are encouraged to choose bank financing when the unit production cost is high but the unit revenue is low, and direct financing otherwise, so that the platform can adjust the financing rate more flexibly to obtain higher revenue. Manufacturers prefer guaranteed financing and direct financing under the different operating models, as both methods shift a portion of the default risk to the e-commerce platform, thereby increasing the manufacturer’s incentive to produce. From the perspective of the supply chain as a whole, there is little difference between the operating models: it is the uncertainty of market demand and the structure of trade finance, rather than the operating model, that fundamentally determine the revenues of the e-commerce platform and the manufacturer.
    Starting from the different operational modes of e-commerce platforms, this paper analyzes the optimal choices of the participants in the financing process, including the operational decisions of SMEs and the financing-mode selection of e-commerce platforms. It provides a precise analysis of the advantages and disadvantages of bank financing, platform guarantee financing, and platform direct financing, answering how SMEs and e-commerce platforms can choose financing modes in different situations to maximize their profits. Through a comparative analysis of the financing methods used by e-commerce platforms under different operational modes, the paper arrives at some common findings, indicating that offering direct financing as one of the options enriches the financing choices available to SMEs and plays a positive role in their development.
    The main innovations and contributions of this research are as follows: a clear analysis of the advantageous ranges of the different financing models in terms of production costs and financing interest rates; a clear analysis of the optimal choices for SMEs among bank financing, guarantee financing, and e-commerce platform direct financing in different situations; and a clear analysis of e-commerce platform preferences under different operational modes. Future research can expand in several directions: using various distribution functions, such as the normal, binomial, and Poisson distributions, to test the robustness of the results obtained in this study; exploring dual-channel supply chain financing for e-commerce platforms operating in a hybrid model that combines the agency and resale models, like JD, which acts as both a seller and a third-party sales channel; investigating the positive influence of e-commerce platforms, as core companies in the supply chain, providing direct financing to capital-constrained upstream manufacturers in a dual-channel supply chain financing scenario; and examining the impact of procurement prices and commission rates as exogenous variables in both the resale and agency models, while assuming fixed production costs.
    Sequential Relation Analysis Method for Group Evaluation
    GONG Chengju, FU Lei, ZHU Mengyao, PENG You
    2025, 34(9):  9-16.  DOI: 10.12005/orms.2025.0269
    Comprehensive evaluation refers to the process of making a holistic evaluation of an object by utilizing multi-dimensional index data. As evaluation issues become increasingly complex, it is becoming more difficult for a single expert to make accurate judgments. To ensure comprehensiveness and accuracy, more and more evaluation issues require the participation of multiple experts, thus forming group evaluation. Especially for complex systematic evaluation issues, adopting group evaluation has become a widely held consensus. One urgent problem in group evaluation is how to determine the weight coefficients based on the evaluation index preference information provided by multiple experts. As a representative subjective weighting method, the sequential relation analysis (G1) method has gained wide attention and extensive application since its introduction because it is simple, easy to operate, and requires no judgment matrix. The G1 method is gradually being applied to group evaluation, but there are still few studies on how to determine index weights with it when facing group evaluation issues.
    Based on existing research and aiming to ensure experts’ confidence in group evaluation results, this paper proposes a G1 method for group evaluation that follows the principle of “the minority is subordinate to the majority.” Firstly, an iterative algorithm is designed to determine the sequential relationship of evaluation indicators based on the evaluation information provided by experts. Secondly, the concept and measurement method of the ordered rate of sets are proposed to determine expert weights, and the evaluation indicator weights are solved by aggregating the individual evaluation indicator preference information of experts. Thirdly, a method is provided to determine the ratio of the importance of any two adjacent evaluation indicators in the group evaluation indicator sequence based on the evaluation indicator preference information provided by individual experts. The evaluation indicator weights are solved by aggregating the group’s evaluation indicator preference information. Then, two methods are presented to calculate the comprehensive weights of evaluation indicators for group evaluation. One is to solve the evaluation indicator weights based on the preferences of the evaluation demander from two perspectives, and the other is to maximize the overall differences between the evaluated objects by constructing a nonlinear programming model. Finally, an example is used to introduce the application process of the proposed method, and a comparative analysis is conducted with existing research results.
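    To make the underlying single-expert step concrete, the sketch below computes G1 weights from an indicator ranking and the importance ratios between adjacent indicators. It implements only the classical G1 formula that the group method builds on; the ratio values are illustrative, not taken from the paper.
```python
# A minimal sketch of the classical single-expert G1 weighting step.
def g1_weights(ratios):
    """ratios[k] = w_k / w_{k+1} for adjacent indicators in ranked order
    (length n-1 for n indicators). Returns weights in ranked order."""
    n = len(ratios) + 1
    # w_n = 1 / (1 + sum over k of prod_{i=k..n-1} ratios[i])
    total, prod = 0.0, 1.0
    for r in reversed(ratios):        # accumulate products r_k * ... * r_{n-1}
        prod *= r
        total += prod
    w = [0.0] * n
    w[-1] = 1.0 / (1.0 + total)
    for k in range(n - 2, -1, -1):    # back-substitute w_k = r_k * w_{k+1}
        w[k] = ratios[k] * w[k + 1]
    return w

# Example: three indicators ranked x1 > x2 > x3 with illustrative ratios.
print(g1_weights([1.4, 1.2]))         # weights sum to 1
```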
    The results show that: (1) The index weights calculated by the proposed method are very close to those obtained by other methods in the literature. In addition, because the proposed method does not require constructing a judgment matrix and its information aggregation is not limited to nonlinear weighting, it can be seen as a more generalized form of the other methods, which demonstrates its rationality to a certain extent. (2) By solving the evaluation indicator weights from both individual indicator preference information and group indicator preference information, while considering the preference of the evaluation demander, the proposed method takes more comprehensive information into account and improves the accuracy of, and satisfaction with, the evaluation results. (3) Different choices of preference coefficients by the evaluation demander lead to different final indicator weights, which indicates that the evaluation demander’s preference has a significant impact on weight determination and verifies the method’s effectiveness. Compared with existing research, the proposed method extends the G1 method itself to group evaluation situations, rather than simply using it as a tool to determine indicator weights within group evaluation. At the same time, it addresses the determination of indicator weights when the relative importance ratios between adjacent indicators in the group sequence are missing, which broadens the application scope of the G1 method.
    In future studies, the proposed method will be further extended to uncertain evaluation situations represented by fuzzy numbers, interval numbers, etc., and the application of the G1 method in group evaluation will be further explored.
    Research on Session-based Recommendation Method with Multi-attribute-aware Graph Neural Network
    LIANG Yuxin, GAN Mingxin, ZHANG Xiongtao
    2025, 34(9):  17-24.  DOI: 10.12005/orms.2025.0270
    In the era of the Internet information explosion, the efficiency of information acquisition has dropped sharply, leading to information overload. In this context, how to help users obtain valuable information from massive data has become a hot social concern. Recommender systems, as one type of decision support system, effectively alleviate the information overload problem and have been widely used in online service platforms such as social media and e-commerce websites. However, conventional recommendation methods rely on users’ long-term historical behaviors, so recommendation performance suffers when users’ identity information and historical behaviors are unavailable. To overcome this limitation, session-based recommendation models users’ short-term interests in real time and analyzes the current session sequences based on incomplete user information to provide dynamic recommendations. Hence, session-based recommendation has become popular.
    Because of their advantage in modeling complex transitions among items, graph neural networks have become a hot technology in session-based recommendation. However, existing studies have largely ignored the attribute information of items when learning item transitions, which results in inadequate session interest representation. Considering that category information is informative for understanding users’ session interests, some studies have combined category information to enrich item transitions and shown effective session-based recommendation performance. However, category information is only one type of item attribute; the multiple attributes of items (e.g., brand, price) are not explored effectively in learning item transitions, resulting in an inadequate representation of the session interest.
    To this end, we propose a novel multi-attribute-aware graph neural network, MASR for short, for session-based recommendation. First, MASR models attribute associations with a multi-head self-attention mechanism to optimize the representations of all attributes. Then, an attribute-aware graph neural network is designed to learn item transitions in the session, which effectively improves item representations by synthesizing the multi-attribute information. Finally, a soft attention mechanism integrates the item representations in the session to obtain the multi-attribute-aware session representation used for recommendation.
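    As a rough illustration of the final readout step, the following numpy sketch aggregates item representations in a session into one session vector with soft attention. The scoring form (a learned query against each item plus the last item) and the shapes are our assumptions for illustration; the paper’s exact parameterization of MASR is not reproduced.
```python
# A minimal numpy sketch of a soft-attention session readout.
import numpy as np

def soft_attention_readout(item_reprs, last_item, W1, W2, q):
    """item_reprs: (L, d) item representations after the attribute-aware GNN;
    last_item: (d,) representation of the most recent item; returns (d,)."""
    scores = np.tanh(item_reprs @ W1.T + last_item @ W2.T) @ q   # (L,)
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                                  # softmax weights
    return alpha @ item_reprs                                    # weighted sum

rng = np.random.default_rng(0)
L, d = 5, 8
s = soft_attention_readout(rng.normal(size=(L, d)), rng.normal(size=d),
                           rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                           rng.normal(size=d))
print(s.shape)  # (8,)
```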
    To validate the effectiveness and rationality of the proposed method, we perform a series of experiments on two publicly available benchmark datasets obtained from the Cosmetics site. The results of two ablation experiments confirm the effectiveness of considering both attribute associations and multi-attribute information in item transitions. Based on our parametric experiments, the optimal number of layers for the attribute-aware graph neural network on both datasets is 2. In addition, we conduct comparative experiments between MASR and four popular session-based recommendation models. The results confirm that the proposed method outperforms the mainstream models in terms of Precision, Hit Rate (HR), and Mean Reciprocal Rank (MRR) on both datasets. Specifically, our method achieves improvements of about 20% and 2% in MRR on the two datasets.
    Despite its superior performance in session-based recommendation, the proposed method has certain limitations. Specifically, it only concentrates on item transitions within the current session and does not yet effectively address item transitions across sessions, which may limit performance when the length of a session is short. In future research, we will combine multi-attribute information with item transitions across sessions to further improve the performance of session-based recommendation.
    Nonparametric DELPT Control Chart for Joint Monitoring of Location and Scale Parameters
    WANG Haiyu, WANG Sen
    2025, 34(9):  25-31.  DOI: 10.12005/orms.2025.0271
    Control charts play an important role in statistical process control and are widely used for process quality control in production and services. They are usually based on the premise that the process distribution is known, and the parameters of the process distribution are used to construct the monitoring chart; such charts are referred to as parametric control charts. Parametric control charts are suitable for mass production processes with ample historical data, but owing to fierce market competition and diverse customer demands, the modern production model has gradually shifted from mass production to multi-variety, small-lot production. In the small-lot production mode, there is often insufficient historical data to accurately infer the process distribution, so traditional control chart methods struggle to play an effective role. Therefore, when the process distribution cannot be determined, non-parametric control charts are often considered; they have become a research hotspot in process quality control because they do not require a known process distribution.
    Most existing non-parametric control charts monitor the location or scale parameter individually; fewer monitor them jointly, and there is room to improve monitoring efficiency. In practice, it is often difficult to determine in advance whether the location parameter or the scale parameter will shift abnormally, and both may even shift at the same time, so it is necessary to monitor both simultaneously. In order to effectively monitor shifts of different magnitudes in the location and scale parameters when the process distribution is unknown, a non-parametric dynamic exponentially weighted moving average (EWMA) control chart with a Lepage-type statistic, abbreviated as the DELPT (Dynamic EWMA of Lepage T2) chart, is constructed from the Wilcoxon rank sum statistic and the LOG statistic for joint monitoring of location and scale parameters. Larger process shifts are usually easy for all types of control charts to recognize, but charts differ greatly in their efficiency at detecting smaller shifts, so this paper uses a variable sampling interval (VSI) design to strengthen the detection of small shifts. To examine the influence of the chart parameters on monitoring efficiency, this paper analyzes the sensitivity of each parameter by Monte Carlo simulation. Since the sampling interval is not fixed, the conventional average run length (ARL) is no longer an appropriate performance measure, so the average time to signal (ATS) is used instead. Through 50,000 simulations, appropriate value ranges of the chart parameters are given for practical use, balancing efficiency and practicality. The steps for using the DELPT chart are then introduced through a real case study of a can encapsulation process, with can weight as the key quality indicator; compared with traditional control charts, the DELPT chart indeed improves monitoring efficiency and is more robust. Finally, to further measure the monitoring efficiency of the proposed nonparametric DELPT chart, a comparative analysis against several existing nonparametric control charts is carried out under symmetric and asymmetric distribution types. The optimal control charts under different shifts are also given, and the study shows that the proposed method has better monitoring performance for small and medium shifts in both location and scale parameters.
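    For intuition, the sketch below computes a Lepage-type joint statistic (a standardized Wilcoxon rank-sum term for location plus a standardized scale term, squared and summed) and smooths it with an EWMA. We substitute the Ansari-Bradley statistic for the paper’s LOG statistic and omit the VSI and dynamic features, so this is an illustrative skeleton rather than the DELPT chart itself.
```python
# A minimal sketch of a Lepage-type statistic fed into an EWMA.
import numpy as np
from scipy.stats import rankdata

def lepage_t2(reference, sample):
    x = np.concatenate([reference, sample])
    m, n = len(reference), len(sample)
    ranks = rankdata(x)
    w = ranks[m:].sum()                                  # Wilcoxon rank sum of the sample
    mu_w = n * (m + n + 1) / 2
    var_w = m * n * (m + n + 1) / 12
    # Ansari-Bradley scores: small for extreme ranks, large for central ranks
    ab = np.minimum(ranks, m + n + 1 - ranks)[m:].sum()
    N = m + n
    if N % 2 == 0:
        mu_a, var_a = n * (N + 2) / 4, m * n * (N + 2) * (N - 2) / (48 * (N - 1))
    else:
        mu_a, var_a = n * (N + 1) ** 2 / (4 * N), m * n * (N + 1) * (3 + N ** 2) / (48 * N ** 2)
    return ((w - mu_w) ** 2) / var_w + ((ab - mu_a) ** 2) / var_a

def ewma(stats, lam=0.1, start=2.0):
    z = start                                            # in-control mean of T2 is about 2
    for t in stats:                                      # Z_t = (1 - lam) Z_{t-1} + lam * T_t
        z = (1 - lam) * z + lam * t
        yield z

rng = np.random.default_rng(0)
ref = rng.normal(size=50)
stats = [lepage_t2(ref, rng.normal(size=10)) for _ in range(5)]
print([round(z, 2) for z in ewma(stats)])
```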
    In summary, when the small number of in-control samples makes it difficult to accurately infer the process distribution and when the location and scale parameters may shift simultaneously, the non-parametric DELPT chart is a better choice for quality monitoring. Because the nonparametric DELPT chart combines the idea of dynamic sampling with that of memory-type control charts, it has good monitoring performance for small and medium shifts.
    Research on Container Shuttle Transport Slot Allocation Considering Collecting-Distributing Transportation Balancing and Empty Container Repositioning
    JIN Zhihong, LI Mengyu, WANG Wenmin
    2025, 34(9):  32-38.  DOI: 10.12005/orms.2025.0272
    Due to factors such as the variability of traffic volume between ports and the uncertainty of transportation demand, problems such as an uneven distribution of empty containers and unbalanced collection and distribution have arisen, and the operating costs of liner companies have remained high. A reasonable allocation of container slots under limited resources and further improvement of the collecting-distributing transportation system will help liner companies improve resource utilization and reduce total cost.
    To this end, in view of the mode and characteristics of container shuttle transportation, this paper studies the optimization of container shuttle transport slot allocation with regard to the balance of collection and distribution and the repositioning of empty containers. To address the uncertainty of transportation demand, an integrated forecasting model is constructed to predict the demand for shuttle transport, and its effectiveness is verified on historical data. The integrated forecasting model consists of SARIMA (seasonal autoregressive integrated moving average), ASHW (additive seasonal Holt-Winters) and machine learning methods such as LSTM (long short-term memory network). On this basis, combining shippers’ transportation demand and liner companies’ operational demand, the focus is placed on the balance of collection and distribution and the demand for empty container transfer. A mixed integer programming model is constructed for different container types to formulate transportation strategies for the different collection and distribution directions, with the goal of minimizing total cost. The model distinguishes between loaded and empty container transportation: loaded containers are transported only between feeder ports and hub ports, while empty containers may be transferred both between feeder ports and hub ports and among feeder ports. The example is solved exactly with the CPLEX solver, and the solution is compared with two alternative scenarios: restricting empty container transportation to feeder-hub legs only, and ignoring empty container transfer altogether.
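    As a toy illustration of the slot-allocation logic (not the paper’s full mixed integer program), the sketch below fixes loaded flows between two feeder ports and a hub and chooses empty repositioning flows, which may also run feeder-to-feeder, to minimize cost under shared slot capacity. All ports, costs, and demands are invented, and CBC stands in for CPLEX.
```python
# A toy empty-container repositioning MIP under shared slot capacity.
import pulp

feeders, hub = ["F1", "F2"], "H"
# loaded flows (fixed by demand): feeder <-> hub only
loaded = {("F1", hub): 60, ("F2", hub): 40, (hub, "F1"): 30, (hub, "F2"): 70}
# empties may additionally move feeder-to-feeder
empty_legs = list(loaded) + [("F1", "F2"), ("F2", "F1")]
cap = 100
cost_empty = {l: 2 for l in empty_legs}

m = pulp.LpProblem("empty_repositioning", pulp.LpMinimize)
y = pulp.LpVariable.dicts("empty", empty_legs, lowBound=0, cat="Integer")
m += pulp.lpSum(cost_empty[l] * y[l] for l in empty_legs)
for l in empty_legs:
    m += loaded.get(l, 0) + y[l] <= cap          # loaded and empty share slots
for p in feeders + [hub]:
    out_l = sum(v for (i, j), v in loaded.items() if i == p)
    in_l = sum(v for (i, j), v in loaded.items() if j == p)
    # net empty inflow must cover each port's loaded-container deficit
    m += (pulp.lpSum(y[l] for l in empty_legs if l[1] == p)
          - pulp.lpSum(y[l] for l in empty_legs if l[0] == p)) == out_l - in_l
m.solve(pulp.PULP_CBC_CMD(msg=0))
print({l: int(y[l].value()) for l in empty_legs if y[l].value()})
```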
    The research results show that the integrated prediction model can provide more accurate transportation demand data and strong data support for the slot allocation research. In addition, compared with the traditional fixed loaded-container transportation mode, the total cost of the two routes is reduced by 14.62% on average, and compared with the transportation mode that ignores empty container transfer, the total cost is reduced by 21.75% on average. Moreover, the sensitivity analysis shows that the market demand of collecting-distributing transportation has a significant effect on the container slot allocation of liner companies. When market demand increases, Scenario 1 is more inclined to optimize slot utilization, empty container repositioning is considered more frequently, and the increase in total cost is reduced. Therefore, the slot allocation model constructed in this paper, which considers the balance of collection and distribution and the repositioning of empty containers, can realize the optimal allocation of empty container resources and promote coordinated, balanced cargo flow between feeder ports. It can reduce liner companies’ transportation costs to a certain extent while reducing the waste of resources.
    Scheduling of Vacuum Heat Treatment Furnaces with Job Rejection
    XU Jun, FENG Ge, ZHAO Yi, WANG Yan, HU Guofeng, FAN Guoqiang
    2025, 34(9):  39-45.  DOI: 10.12005/orms.2025.0273
    A vacuum heat treatment furnace is an innovative and environmentally friendly heat treatment technology that combines heat treatment with vacuum technology. It is widely used in manufacturing high-end metal components such as molds, fasteners, and precision coupling parts, primarily in industries such as automotive, high-speed rail, nuclear power, aerospace, and aviation. Compared with other processes, vacuum heat treatment has a longer processing time, and after treatment the metal components exhibit high quality and an extended lifespan. Due to high purchase costs and restricted installation space, the number of vacuum heat treatment furnaces is limited, and the furnace is the bottleneck machine in real production. When facing production anomalies such as machine failures or urgent rush orders, a common approach is to use rescheduling methods to quickly restore production. However, if the anomalies exceed a certain critical threshold, rescheduling strategies may be ineffective. Borrowing the idea of “sacrificing a knight to save the king”, it then becomes necessary to reject a portion of jobs to ensure the on-time delivery of the accepted ones.
    The scheduling of vacuum heat treatment furnaces is modeled as a mixed batch model. Multiple jobs can be processed on a mixed batch machine simultaneously, and when the number of jobs in a mixed batch is arbitrary, the model is called unbounded. The processing time of a mixed batch is the weighted sum of the maximum processing time in the batch and the sum of the processing times of all jobs in the batch. The mixed batch model combines the parallel batch model and the serial batch model, which increases the complexity of the scheduling problem. In addition, differing job release times usually reduce furnace utilization, making it necessary to reject some jobs. To enhance the utilization of vacuum heat treatment furnaces, the rejection of jobs with low cost-effectiveness is considered. This paper studies unbounded mixed batch scheduling with release dates and rejection, where each job is either accepted for processing or rejected at a rejection cost. The makespan and the total rejection cost are the objectives considered in this paper.
    Rejecting jobs usually reduces the makespan and increases the total rejection cost, so it is important to decide which jobs to reject and how to schedule the accepted ones. To balance the makespan and the total rejection cost, this paper studies three models: the linear weighted model, the constrained model, and the Pareto optimization model. In the linear weighted model, the objective function is a linear combination of the makespan and the total rejection cost. Based on whether a job is accepted or rejected and its contribution to the objective function, four scenarios are discussed, a dynamic programming algorithm is designed, and its time complexity is analyzed. In the constrained model, either the total rejection cost or the makespan must not exceed a given threshold, and the other objective is minimized; a dynamic programming algorithm is again proposed and its time complexity analyzed. In the Pareto optimization model, the iterative process repeatedly constrains the total rejection cost to a threshold and then optimizes the makespan. Based on the analysis of the four scenarios, a dynamic programming algorithm is derived, and after the iterations the Pareto frontier is obtained. Lastly, the time complexity of the algorithm is presented.
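    To make the mixed-batch objective concrete, the toy sketch below evaluates the linear weighted model on a tiny instance: rejected sets are enumerated, and accepted jobs are batched in release-date order with a simple dynamic program. The batch-time weights and job data are illustrative assumptions, and brute-force enumeration stands in for the paper’s dynamic programming algorithms.
```python
# A toy evaluation of the linear weighted mixed-batch model.
from itertools import combinations

def batch_time(ps, alpha=0.5, beta=0.5):
    # mixed batch: weighted sum of the max and the sum of processing times
    return alpha * max(ps) + beta * sum(ps)

def min_makespan(jobs):
    """jobs: list of (release, processing); consecutive batching DP in release order."""
    jobs = sorted(jobs)
    n = len(jobs)
    C = [0.0] * (n + 1)                     # C[i] = min makespan of the first i jobs
    for i in range(1, n + 1):
        C[i] = min(
            max(C[j], max(r for r, _ in jobs[j:i]))   # batch starts when free and released
            + batch_time([p for _, p in jobs[j:i]])
            for j in range(i)
        )
    return C[n]

def linear_weighted(jobs, rej_cost, lam=1.0):
    best, idx = None, range(len(jobs))
    for k in range(len(jobs) + 1):
        for R in combinations(idx, k):                # candidate rejected set
            acc = [jobs[i] for i in idx if i not in R]
            z = (min_makespan(acc) if acc else 0.0) + lam * sum(rej_cost[i] for i in R)
            best = z if best is None else min(best, z)
    return best

print(linear_weighted([(0, 4), (1, 3), (5, 6)], [3.0, 2.0, 4.0]))
```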
    The insights derived from unbounded mixed batch scheduling with release dates and rejection are as follows. When job arrivals concentrate in specific time periods, line balancing can be achieved by strategically rejecting certain jobs to optimize the workload on the various lines. Jobs with specific arrival times impose more constraints when grouping them for processing, and order changes or urgent insertions reduce scheduling flexibility. Strategically rejecting some jobs can alleviate production bottlenecks and enhance the overall flexibility of the schedule. Taking advantage of the production flexibility arising from job rejection, a more accurate and scientific schedule can be devised for customers whose jobs have different arrival times. This allows differentiated custom services, ultimately enhancing customer satisfaction. These insights highlight the importance of adaptability and strategic decision-making in dynamic production scenarios, emphasizing the potential benefits of rejecting certain jobs to optimize overall system performance and meet customer expectations more effectively.
    Enterprise Financial Risk Early Warning Research Based on Extended Functional Logistic Model
    WANG Deqing, XUE Shoucong, LU Zhihao, GUO Mengxia, HOU Yiwen
    2025, 34(9):  46-52.  DOI: 10.12005/orms.2025.0274
    With the development of economic globalization and ever-intensifying market competition, the potential factors leading to corporate financial distress are increasingly complex, and their influence mechanisms show continuous, time-varying characteristics. Early warning of financial distress is relevant to corporate crisis prevention, the protection of investors’ and creditors’ interests, and the effective supervision of securities and capital markets. It is of great theoretical value and practical significance to uncover the mechanism behind the formation of corporate financial distress and to provide timely early warning. For equity investors, early detection of distress signals and adjustment of investment strategies can minimize losses. For managers, early warning of financial distress risks can prevent financial crises and ensure the healthy development of the enterprise. Consequently, how to effectively identify the key warning indicators that lead to financial distress and establish an accurate early warning model is a key problem in corporate financial risk management.
    In essence, financial distress is the long-term cumulative result of deteriorating business conditions. Traditional warning methods based on discrete data measure the average effect of indicators from a static perspective, ignoring the continuous evolution of corporate financial distress. To address the continuous, time-varying nature of the warning effect of financial indicators, this paper systematically extends the functional logistic model in terms of both the financial indicator curves and variable selection, aiming to identify effective financial indicators from a process perspective and to measure their time-varying effects on the formation of corporate financial distress. First, the Karhunen-Loève expansion based on principal components is used to reconstruct the curves within the functional data framework. The set of indicators that affect financial distress is identified by combing the literature, and the number of basis functions is driven by the information in the discrete observations. Second, the logistic model is extended under functional data analysis, and the optimal functional variable selection method is determined. Considering the significance of indicators’ warning ability, this paper systematically expands the variable selection methods under the functional logistic model, including the Lasso, the adaptive Lasso (based on the CP statistic and on the GCV statistic), and random subspaces. Finally, based on the screened financial indicators, a functional logistic model is established to portray the continuous trajectory of each indicator’s early warning effect, and the relative advantages of the model are tested.
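    As a minimal illustration of the curve-reconstruction step, the sketch below performs a principal-component (Karhunen-Loève) reconstruction of discrete indicator observations. The 90% explained-variance rule and the random data are placeholders; the paper selects the number of basis functions from the information in the observations themselves.
```python
# A minimal Karhunen-Loeve (functional PCA) reconstruction sketch.
import numpy as np

def kl_reconstruct(X, explained=0.90):
    """X: (n_firms, n_times) discrete indicator observations on a common grid."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (len(X) - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), explained)) + 1
    scores = Xc @ Vt[:k].T                 # KL (principal component) scores
    return mean + scores @ Vt[:k], k       # reconstructed curves, basis size

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 12))              # 50 firms, 12 quarterly observations
Xhat, k = kl_reconstruct(X)
print(k, np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```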
    The empirical study finds that only 8 of the 20 candidate early warning indicators, such as return on assets, have significant early warning capability for financial distress. The early warning results further indicate that the warning model with variable selection is significantly and robustly better than the full-variable model in prediction accuracy. The early warning effects of the financial indicators also show significant differences: the total net asset margin and operating profit margin have significant effects on corporate distress throughout the sample period, while fixed asset turnover, total asset turnover, and net asset turnover have medium-term early warning ability 2-3 years before the discrimination year. In contrast, the return on assets ratio exhibits significant negative warning ability only near the warning year. In addition, compared with existing warning models, the functional logistic model incorporating random subspaces (RSFLR) has a better early warning effect, exhibiting higher warning accuracy and a lower missed-warning rate.
    In conclusion, compared with existing studies, this paper provides new ideas for financial distress early warning from a process perspective, continuously measures the trajectory of indicators’ warning effects, and enriches the screening methods for early warning indicators. The empirical results offer references for enterprise managers and regulators. This paper only uses corporate financial data to establish the model and does not consider the influence of other factors on financial distress. How to combine industry characteristics and economic factors to build a more effective functional logistic warning model is therefore an important issue for future research.
    Joint Optimization of Stochastic Project Scheduling and Component Ordering for Prefabricated Buildings
    WANG Jingjing, LIU Huimin, DONG Wenjie, WANG Zongxi
    2025, 34(9):  53-60.  DOI: 10.12005/orms.2025.0275
    In the traditional management of prefabricated building (PB) construction, project scheduling plans are critical to management performance. However, with the increasing application of PB projects, their scale and complexity continue to expand, greatly raising the demand for prefabricated components. A reliable and stable component ordering scheme therefore plays an increasingly important role in project scheduling and optimal management. Moreover, in real PB construction, the component ordering scheme and the project scheduling plan affect each other. On the one hand, the scheduling process inevitably involves the component assembly program, so a timely and reasonable ordering scheme is conducive to smooth scheduling. On the other hand, the arrangement of resources and activities in the schedule determines the ordering times of components, which in turn affects component ordering. Therefore, to improve project management performance and reduce the risk of project delays and cost overruns, the project scheduling and component ordering (PSCO) problems in PB projects are considered jointly.
    In this work, we first define the PSCO problem and propose an ordering strategy that jointly considers the ordering lead time and component consumption. The lead time is a key factor influencing the ordering plan: different lead times change the start times of activities, which further affects the project schedule, and the start times and ordering quantities in turn affect the scheduling plan. We therefore take these three variables as the decision variables and construct a resource-constrained time-cost trade-off model, which we define as the PSCO model. It determines when the project activities start and when and how many components to order, so that the duration and cost of the project are minimized simultaneously.
    Then, we design an improved non-dominated sorting genetic algorithm-II (INSGA-II) suitable for projects with complex multiple paths. Analyzing the construction characteristics of PB projects, we conclude that the increasing scale and complexity of projects lead to larger numbers of activities, for which traditional multi-objective algorithms may not be suitable. In the INSGA-II, the probability of generating a feasible initial population is improved by adding path identification and judgment operations to the population initialization stage. In addition, to improve optimization efficiency, the solution obtained by the INSGA-II is further refined with the multi-objective particle swarm optimization (MOPSO) algorithm.
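    For reference, the sketch below shows the fast non-dominated sorting at the heart of any NSGA-II variant, applied to (duration, cost) pairs, both minimized. The path-identification initialization and MOPSO refinement of the INSGA-II are not reproduced; the points are illustrative.
```python
# A minimal non-dominated sorting sketch for two minimized objectives.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# (duration, cost) pairs of candidate schedules
pts = [(10, 90), (12, 70), (11, 95), (15, 60), (10, 100)]
print(non_dominated_sort(pts))  # first front: indices 0, 1, 3
```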
    In the end, a case study taken from previous literature is used to illustrate the efficiency of the model and algorithm. The results show that considering the impact of the lead time on the project schedule helps reduce the project cost within the duration threshold: with the same parameters, the ordering strategy that accounts for the lead time reduces the cost by 28.43% compared with the strategy that does not. Moreover, the number of component orders affects the PSCO model: the more ordering times, the longer the project duration and the smaller the project cost. Meanwhile, we compare the comprehensive performance of the proposed INSGA-II with the multi-objective genetic algorithm (MOGA) and the MOPSO algorithm. As the number of iterations increases, the INSGA-II basically outperforms the traditional MOGA and MOPSO in optimization ability for multi-path complex projects. Evaluated by the multi-objective metrics of hypervolume (HV), inverted generational distance (IGD), and spacing (SP), the INSGA-II also performs better in the diversity, convergence, and distribution of solutions.
    From the research results, we can see that the model and algorithm are effective and efficient for solving the joint optimization of PB project scheduling and component ordering. They not only improve the stability and reliability of PSCO plans in real project management, but also provide project managers with a powerful tool to reduce the risk of project delay and cost overrun and to make better decisions for projects with complex multiple paths in uncertain environments.
    Disruption Management for Vehicle Routing Problem of Fresh Products with Simultaneous Pickup and Delivery in Change of Time Windows
    DING Qiulei, LIU Mukang, HU Xiangpei, JIANG Yang
    2025, 34(9):  61-69.  DOI: 10.12005/orms.2025.0276
    Since the onset of the COVID-19 pandemic, the fresh e-commerce sector has witnessed exponential growth in order volumes and transaction values, accompanied by a marked increase in customer-initiated cancellations. Accelerated urban lifestyles have further intensified vulnerabilities in cold chain last-mile delivery, where disruptions, particularly time window changes and demand fluctuations, frequently invalidate pre-generated routing plans and may even compromise cold chain integrity. Such disruptions not only provoke customer dissatisfaction due to untimely service (triggering customer attrition), but also exacerbate product spoilage and potentially jeopardize consumer safety. Consequently, rescheduling vehicle routes of fresh products with simultaneous pickup and delivery under time window changes, while balancing competing operational constraints, represents a critical research challenge. Specifically, this entails: (1) accommodating revised time window requests without diminishing service expectations for other customers; (2) dynamically readjusting vehicle routes; and (3) rigorously preserving product freshness throughout the delivery process.
    To address these challenges, this study integrates service quality perception theory from marketing with quantitative optimization methodologies from operations research within a disruption management framework. Firstly, an initial mathematical model is formulated by analyzing the key distribution costs: spoilage, transportation, refrigeration, and penalty costs. Secondly, the paper quantifies the impacts of disruption events on customers, vehicles, and products, specifically measuring the psychological perception gap between expected and actual service experiences in last-mile delivery. Building on both the original distribution objectives and the associated deviation costs, it develops a disruption management model for the vehicle routing problem of fresh products with simultaneous pickup and delivery under time window changes. Grounded in the initial routing objectives, this model minimizes perceived service gaps while mitigating the negative effects of time window deviations. Finally, an improved non-dominated sorting genetic algorithm-II (INSGA-II) is designed to solve the model. Diverging from NSGA-II’s stochastic initialization, the INSGA-II constructs the initial population via a savings algorithm to accelerate convergence. Furthermore, a queen-preserving order crossover (OX) operator effectively retains high-quality genetic traits from elite solutions, enhancing search efficiency. Concurrently, parent selection via population traversal maintains genetic diversity, facilitating robust global exploration.
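    To illustrate the crossover step, the sketch below implements a basic order crossover (OX): a slice of one parent route is preserved and the remaining customers are filled in the order they appear in the other parent. The fixed cut points and toy permutations are for clarity; the paper’s queen-preserving variant and random cuts are not reproduced.
```python
# A minimal order crossover (OX) sketch for route permutations.
def order_crossover(p1, p2, i, j):
    child = [None] * len(p1)
    child[i:j] = p1[i:j]                              # keep slice from parent 1
    fill = [g for g in p2 if g not in child[i:j]]     # parent 2 order for the rest
    k = 0
    for pos in list(range(0, i)) + list(range(j, len(p1))):
        child[pos] = fill[k]
        k += 1
    return child

print(order_crossover([1, 2, 3, 4, 5, 6], [6, 4, 2, 1, 5, 3], 2, 5))
# slice [3, 4, 5] kept; remainder filled as 6, 2, 1
```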
    Experimental results demonstrate that, compared with both executing the original plan and a global rescheduling strategy, the proposed approach achieves comparable total costs while significantly reducing customers’ perceived cost losses. This reduction in perceived service gaps is likely to influence subsequent purchasing decisions positively. Although the approach entails a marginal increase in routing costs, it substantially reduces spoilage costs and narrows the psychological discrepancy between expected and actual service experiences. Consequently, product freshness is better preserved, enhancing the potential for retaining latent customers. Additionally, in solving the vehicle routing problem of fresh products with simultaneous pickup and delivery and other complex optimization problems, the INSGA-II exhibits superior solution quality and greater Pareto solution diversity compared with the standard NSGA-II.
    This study combines the concept of service quality with disruption management theory, translating theoretical concepts into practical methods through quantitative analysis in operations research. It also represents an interdisciplinary convergence of logistics management and marketing, not only enriching the theory and methodology of disruption management but also enabling creative service quality practices. The proposed disruption management model for simultaneous pickup and delivery of fresh products under time window variations improves the capability of last-mile logistics distribution to handle disruptions, thereby helping to improve customer satisfaction. It can provide decision support for cold chain last-mile logistics planning in fresh product e-commerce enterprises.
    Flexible Two-dimensional Warranty Menu Design Method Based on Usage Rates
    ZHANG Zhaomin, FU Weifang, HE Shuguang
    2025, 34(9):  70-76.  DOI: 10.12005/orms.2025.0277
    Complex products, such as vehicles and large equipment, are usually sold with a two-dimensional warranty policy, whose warranty region is defined by an age limit and a cumulative usage limit. When designing the optimal two-dimensional warranty policy, the manufacturer is mainly concerned with the warranty cost, whereas customers are more concerned with the warranty region available to them given their usage rate, which leads to different perceptions of the same policy. Under competition, the manufacturer is motivated to provide a warranty policy attractive to customers with heterogeneous usage rates, so as to increase after-sales satisfaction and promote sales. Therefore, to satisfy the demands of customers with heterogeneous usage rates, a flexible two-dimensional warranty menu design method based on customers’ usage rates is proposed from the perspectives of both the manufacturer and the customer.
    Firstly, the expected warranty cost of the proposed flexible two-dimensional warranty menu is established under the assumption of minimal repair within the warranty period, with a non-homogeneous Poisson process used to model the expected number of warranty claims over the two-dimensional warranty region. Meanwhile, to describe the attractiveness of the warranty region to customers with heterogeneous usage rates, an attractiveness index model for the flexible two-dimensional warranty menu is proposed, built on the available warranty region.
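    As a numerical illustration of this step, the sketch below computes the expected number of minimal-repair claims for a customer with usage rate r, whose warranty expires at age min(W, U/r). The linear-in-usage-rate intensity and all parameter values are illustrative assumptions, not the paper’s fitted model.
```python
# A minimal expected-claims sketch under minimal repair (NHPP).
from scipy.integrate import quad

def expected_claims(r, W, U, th=(0.05, 0.02, 0.01, 0.005)):
    th0, th1, th2, th3 = th
    horizon = min(W, U / r)                      # effective warranty length in age
    lam = lambda t: th0 + th1 * r + (th2 + th3 * r) * t   # assumed intensity form
    n, _ = quad(lam, 0.0, horizon)               # E[claims] = integral of intensity
    return n

# Two-dimensional region: W = 3 years, U = 6 (10^4 km); light vs heavy users
for r in (1.0, 3.0):
    print(r, round(expected_claims(r, W=3.0, U=6.0), 3))
```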
    Secondly, both a uniform division strategy and an optimal division strategy for designing the flexible two-dimensional warranty menu are proposed, maximizing the total expected attractiveness index for customers under a given total expected warranty cost. Design principles for implementing the menu are given, the necessary conditions for obtaining the optimal nominal usage rates and dividing usage rates are discussed, and the design procedure is summarized.
    A case study of an automobile manufacturer illustrates the proposed method, and a sensitivity analysis of the effects of key parameters on the optimal two-dimensional warranty menu is conducted. The findings indicate that the optimal values under the uniform division strategy and under the optimal division strategy are approximately equal. When the mean of customers’ usage rates increases, the optimal nominal usage rates of the menu rise; when the variance of customers’ usage rates increases, the total expected attractiveness index of the optimal menu decreases.
    In summary, this paper designs a flexible two-dimensional warranty menu that accounts for the heterogeneous usage rates of customers. An attractiveness index model is established to describe customers’ differing perceptions of the same two-dimensional warranty policy, and the expected warranty cost of the menu is derived from the manufacturer’s perspective. A uniform division strategy and an optimal division strategy for the menu are proposed, and the design principles and necessary conditions for obtaining the optimal nominal and dividing usage rates are discussed. The effectiveness of the proposed method is verified through a case study of an automobile manufacturer implementing the flexible two-dimensional warranty menu.
    K-means Clustering Based on Improved Dung Beetle Optimizer
    MA Zhihai, LIU Sheng
    2025, 34(9):  77-83.  DOI: 10.12005/orms.2025.0278
    Cluster analysis is a data analysis method used to group similar data points into different categories or clusters. It is an unsupervised learning approach that does not require pre-defined class labels but rather automatically classifies data points based on their similarity. Through cluster analysis, similar samples are assigned to the same group, revealing similarities and differences among samples, and providing a preliminary classification of the data. Cluster analysis has been widely applied in various fields such as data mining, image processing, natural language processing, and market segmentation.
    The K-means clustering algorithm is the most commonly used algorithm in cluster analysis due to its simplicity, scalability, suitability for high-dimensional data, and robustness. However, K-means is highly sensitive to the initial selection of cluster centers, and improper initialization can lead to inaccurate or unstable clustering results. Swarm intelligence algorithms, stochastic search methods capable of escaping local optima, have been adopted by researchers to optimize clustering algorithms and have shown promising results. The dung beetle optimizer (DBO) is a swarm intelligence optimization algorithm proposed in 2022, inspired by the rolling, dancing, foraging, stealing, and reproduction behaviors of dung beetles. Compared with classical algorithms such as particle swarm optimization and the whale optimization algorithm, DBO exhibits better optimization performance. However, like other swarm intelligence algorithms, DBO may suffer from an uneven distribution and a lack of diversity when the population is initialized. Additionally, during the rolling phase, position updates rely solely on the worst value, resulting in weak global exploration capability.
    To overcome K-means clustering’s heavy reliance on initial cluster centers, a novel K-means clustering algorithm based on an improved dung beetle optimizer, called POTDBO-K-means, is proposed in this study. Firstly, the dung beetle optimizer is enhanced with a piecewise linear chaotic map (PWLCM) to improve population diversity, enhance solution accuracy, and accelerate convergence. Secondly, inspired by the position-recognition and fishing strategy of the osprey optimization algorithm, the rolling-stage strategy of the DBO is replaced with the osprey’s global exploration strategy, which compensates for the rolling stage’s reliance on only the worst value and its lack of communication with other dung beetles, thereby enhancing global exploration. Then, a dynamically selected adaptive t-distribution perturbation is introduced to improve both global exploitation and local search. The effectiveness and superiority of the improved dung beetle optimizer are verified on the CEC2017 test functions. Finally, the improved optimizer is combined with K-means clustering and compared, on six UCI datasets with different characteristics, with other K-means variants enhanced by swarm intelligence algorithms. The simulation results demonstrate that POTDBO-K-means converges faster and has stronger optimization ability and higher clustering accuracy.
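    As an illustration of the first improvement, the sketch below initializes a population with a piecewise linear chaotic map (PWLCM) and scales the chaotic sequence onto the search bounds. The control parameter p = 0.4, the seed, and the bounds are illustrative choices, not values from the paper.
```python
# A minimal PWLCM-based population initialization sketch.
import numpy as np

def pwlcm(x, p=0.4):
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)                 # map is symmetric about 0.5

def chaotic_init(pop_size, dim, lb, ub, seed=0.37):
    x, pop = seed, np.empty((pop_size, dim))
    for i in range(pop_size):
        for d in range(dim):
            x = pwlcm(x)
            pop[i, d] = lb + x * (ub - lb)   # scale chaotic value into [lb, ub]
    return pop

print(chaotic_init(5, 3, lb=-10.0, ub=10.0))
```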
    In future work, the proposed POTDBO-K-means clustering algorithm can be applied to challenging problems such as credit risk assessment, potential customer segmentation in the automotive industry, and user profiling for insurance products. Further research will also combine swarm intelligence algorithms with K-means clustering to improve its convergence speed and clustering accuracy.
    Integration Decision Analysis of Service Supply Chain from Perspective of Digital Intelligence
    LI Yue, WU Dan, YAO Fengmin, HU Yanling
    2025, 34(9):  84-91.  DOI: 10.12005/orms.2025.0279
    In the context of economic globalization and the digital transformation of enterprises, an increasing number of companies are prioritizing supply chain management to leverage its cost-efficiency, maximizing profit margins while minimizing resource allocation. With the rapid growth of the service industry, the service supply chain has become a crucial aspect of enterprise service operation and management, and the rise of service outsourcing has stimulated scholarly interest in it. At present, the service supply chain management sector in China is in its embryonic stage, characterized by a high degree of dispersion and limited concentration. Most service providers, particularly integrators, possess expertise in only one or a few industries, achieving significant success within their target sectors. However, the varying levels of supply chain development across industries significantly exacerbate the challenges of cross-industry competition, and few supply chain integrators serve multiple sectors concurrently. Consequently, exploring strategies to enhance service efficiency, minimize service costs, broaden service scope, and augment service competitiveness in China’s service supply chain is of paramount importance for optimizing the service supply chain overall and essential for the advancement of China’s service industry.
    As the evolution of digital intelligent technology accelerates, the service supply chain has garnered significant attention as a crucial platform for service operation management. The integration of service supply chains represents a novel trend and strategic direction in the evolution of the service industry. Omni-channel service supply chain integration plays a crucial role in facilitating the innovation of modern service supply chains and enhancing their value. The primary goal of service enterprises is to improve customer satisfaction by effectively addressing their needs. The integration of the service supply chain facilitates the optimal utilization of resources and demands, thereby fostering the progressive advancement of enterprises and establishing a virtuous cycle. This also emphasizes the concept of people-oriented, environmentally friendly, coordinated, and sustainable development.
    Focusing on the integration of the service supply chain from the perspective of digital intelligence, this paper constructs a three-level service supply chain model comprising two service providers, one service integrator, and one customer. It investigates two integration modes, horizontal integration and vertical integration, using Stackelberg game theory, backward induction, and comparative analysis, and thoroughly examines how the level of digital intelligence influences the integration decision, the profit of each entity, and the total profit of the service supply chain. The results show that digital intelligence enhances the efficiency and profitability of the service supply chain. For vertical integration, when the level of digital intelligence is low, service providers with weak competitiveness are more inclined to accept vertical integration, while service integrators tend to avoid it; when the level is high, service providers and integrators both actively opt for vertical integration, and more competitive service enterprises are encouraged to integrate with weaker ones. Both service providers have intentions to integrate horizontally. Therefore, the integration of the service supply chain holds immense significance for fostering the rapid development and enhancing the competitive edge of China’s service enterprises.
    A Generalized Gradient-based Method for Solving Combinatorial Optimization Problems
    ZHANG Han
    2025, 34(9):  92-98.  DOI: 10.12005/orms.2025.0280
    Combinatorial problems with linear objective functions have a wide range of applications in real life, and there are many highly efficient algorithms for finding their optimal solutions. With the continuous development of artificial intelligence, one can combine the rich feature extraction of powerful function approximators such as neural networks with efficient combinatorial solvers to obtain, end to end and without compromise, efficient approximate solutions to highly complex combinatorial problems. However, the solution of a combinatorial problem is piecewise constant with respect to the parameters of the problem instance, so it is difficult to optimize a model with gradient-based methods when the solution is used as the training criterion. Many authors have attempted to embed combinatorial optimization, and more broadly convex optimization, solvers into gradient-trained models, and several methods have been developed to differentiate the solution vectors of optimization problems. In most cases, however, we only need to differentiate the objective value (not the solution vector), and existing methods then introduce unnecessary extra computation.
    Here, we show how to perform gradient descent directly on the objective value of the solution to a combinatorial problem. Specifically, for efficiently solvable combinatorial problems that can be expressed as integer linear programs, the generalized gradients of the objective value with respect to the real-valued parameters of the problem exist and can be computed efficiently by a black-box combinatorial algorithm in a single run. Turning combinatorial solvers into differentiable building blocks of deep learning models in this way lets their internal algorithms run unmodified and efficiently. While preserving the generalization ability of combinatorial deep learning models, it resolves two difficulties: combinatorial solvers are hard to invoke directly inside gradient-based training, and their applicability under specific problem structures is hard to guarantee.
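    As a concrete illustration of the single-call idea, consider the linear assignment problem: the optimal value min_P⟨C,P⟩ is a concave, piecewise-linear function of the cost matrix C, and the optimal permutation matrix P* is a generalized gradient of that value with respect to C. The following minimal PyTorch sketch uses SciPy's Hungarian solver as the black box; it illustrates the principle rather than reproducing the authors' implementation.

```python
import torch
from scipy.optimize import linear_sum_assignment

class AssignmentObjective(torch.autograd.Function):
    """Objective value of a linear assignment problem as a differentiable loss.

    For min_P <C, P> over permutation matrices P, the optimal value is a
    concave piecewise-linear function of the cost matrix C, and the optimal
    permutation P* is a generalized gradient with respect to C, so a single
    solver call yields both the value and its gradient.
    """

    @staticmethod
    def forward(ctx, cost):
        rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
        perm = torch.zeros_like(cost)
        perm[torch.from_numpy(rows), torch.from_numpy(cols)] = 1.0  # solution P*
        ctx.save_for_backward(perm)
        return (cost * perm).sum()         # objective value <C, P*>

    @staticmethod
    def backward(ctx, grad_output):
        (perm,) = ctx.saved_tensors
        return grad_output * perm          # generalized gradient is P* itself

# The cost matrix would normally be produced by a neural network.
cost = torch.randn(5, 5, requires_grad=True)
loss = AssignmentObjective.apply(cost)
loss.backward()                            # cost.grad now holds P*
```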
    Moreover, we conduct two experiments: (1) weakly supervised image classification and (2) global sequence alignment with differentiable encoder-decoder architectures using Softmax or Gumbel-Softmax. The experimental results show that the proposed method provides differentiable combinatorial losses for both problems. Compared with other existing methods, it is more stable in training and more accurate and efficient in prediction. Specifically, in experiment (1), we use the class probability distribution that the model outputs for each feature vector, match the model's outputs for the feature vectors in a bag to the class labels, and use the Hungarian algorithm as the combinatorial solver to find the permutation. Compared with existing gradient-based methods, the proposed generalized-gradient method provides training signals for large neural networks very effectively and trains much faster than the current state-of-the-art methods. In experiment (2), we use Global Sequence Alignment (GSA) as the loss. Compared to training with the baseline loss, the proposed method achieves the best text summarization results on all three ROUGE evaluation metrics and is more accurate and effective.
    In the future, we will further study how to apply the proposed method. For example, DEtection TRansformer (DETR) was the first algorithm to apply the Transformer encoder-decoder architecture to object detection, and its architecture has become a building block of many transformer-based applications. However, DETR is trained with a set-based global loss computed through bipartite matching, which can leave the assignment cost inconsistent with the global loss. How to use the generalized gradient-based method proposed in this paper to improve the convergence speed and performance of DETR is a very valuable research direction.
    On Fairness Concerns in Bimatrix Games
    YI Wentao, FENG Zhongwei
    2025, 34(9):  99-105.  DOI: 10.12005/orms.2025.0281
    Game theory is mainly concerned with players' optimal strategies and has become a powerful tool for analyzing interactions among different players, with applications in diverse areas including economics, policy, psychology, the environment, and logistics. Bimatrix games, an important part of non-cooperative game theory, have played a vital role since the very beginning of the field. A classic bimatrix game involves two rational players who make decisions to maximize their own interests, following the "economic man" assumption. However, this assumption limits the application of bimatrix games in reality. Owing to complex and changeable environments, cognitive limitations, emotions, and preferences, people tend to exhibit bounded rationality, which makes their choices deviate from the predictions of classical theory. Recently, a large body of behavioral and economic research and many practical cases have shown that people care about fairness and will give up part of their own interests to achieve it when they are treated unfairly; that is, people display bounded rationality with fairness concerns. Therefore, to make theoretical predictions more realistic, this paper incorporates fairness concerns into the bimatrix game and analyzes the influence of players' fairness concerns on its Nash equilibrium.
    To describe players' fairness concerns accurately, scholars have put forward several models. The model proposed by Fehr and Schmidt (the F-S model), one of the most famous, focuses on the fair distribution of benefits. The F-S model points out that players care greatly about their own and others' benefits, and that the basis for judging fairness is mainly a comparison of one's own benefits with those of others; that is, a player's fairness reference point is the opponent's payoff. However, this may not be fully applicable in practice, because a player whose competitive power or contribution is comparatively larger would not want merely the same benefit as his opponent, and vice versa. Ewerhart therefore modified the F-S model to stress fairness rather than outright altruism, treating a common agreement reached by the two players as each player's fairness reference level. Accordingly, this paper uses Ewerhart's model to describe players' fairness concern behavior in the bimatrix game. By incorporating exogenous and fixed fairness reference points, it constructs a bimatrix game model with fairness concerns and explores the existence of its equilibrium.
    At present, some scholars have studied bimatrix games with boundedly rational behavior, but these studies do not consider the strategy choices of fair-minded players. Other scholars have discussed the influence of fairness concerns on game equilibrium, mainly in bargaining games, Bertrand games, the prisoner's dilemma, and so on, but none of them consider bimatrix games. It is therefore meaningful to analyze bimatrix games with fairness concerns. In view of this, this paper considers a bimatrix game in which only one player is fair-minded and his fairness reference point is exogenous and fixed. A bilinear programming method is applied to solve for the Nash equilibrium of this game. A complete analysis of 2×2 bimatrix games with fairness concerns is then conducted; specifically, three scenarios are considered to explore the influence of fairness concerns on the Nash equilibrium when the fairness reference point lies in different value ranges. Finally, a simple case is used for analysis and verification.
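    For intuition, the sketch below computes the equilibria of a small 2×2 bimatrix game after adjusting the second player's payoffs for fairness concerns. The fairness-adjusted utility, a penalty proportional to the shortfall below an exogenous reference payoff r, is an illustrative F-S/Ewerhart-style form rather than the paper's exact specification, and the nashpy support-enumeration routine stands in for the paper's bilinear programming method.

```python
import numpy as np
import nashpy as nash

# Material payoff matrices of a 2x2 bimatrix game (player 1 rows, player 2 columns).
A = np.array([[3, 1], [0, 2]])
B = np.array([[2, 1], [0, 3]])

lam, r = 0.5, 2.0                          # fairness concern degree, reference point
B_fair = B - lam * np.maximum(r - B, 0)    # penalize payoffs falling short of r

game = nash.Game(A, B_fair)
for sigma1, sigma2 in game.support_enumeration():
    print(sigma1, sigma2)                  # equilibrium mixed strategies
```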
    The results show that if a player exhibits fairness concern behavior, his opponent's equilibrium strategy and his own expected payoff are affected not only by his fairness concern but also by his fairness reference point. Specifically, when the fairness reference point lies in different value ranges, the opponent's equilibrium strategy and the player's own expected payoff change in different ways as the degree of fairness concern increases. In other words, a player's fairness concerns may benefit him, hurt him, or have no effect on him at all.
    Asymptotic Unbiasedness of Damping Accumulated GM(1,1) Model and its Extension: Based on the Perspective of Function Transformation
    CHEN Pengyu
    2025, 34(9):  106-112.  DOI: 10.12005/orms.2025.0282
    Accumulation generation is a key step in GM(1,1) modeling and has an important impact on model accuracy. Damping accumulated generation is a new accumulation method based on the principle of "new information priority", and the resulting damping accumulated GM(1,1) model (DAGM(1,1) model) can freely adjust the exponential growth rate of the predicted values. In the way the data are processed, damping accumulated generation is equivalent to a function transformation method with parametric variables, but the admissible ranges of the parameters differ. Which range of parameter values is more reasonable? This paper investigates this question. In addition, no study has examined whether damping accumulation, or the function transformation method with parametric variables, can achieve unbiased prediction of white exponential series, which is very important for effectively improving the fitting and prediction accuracy of the GM(1,1) model.
    In this paper, the DAGM(1,1) model is taken as the research object and the damping accumulation is regarded as a function transformation method. The effect of damping accumulation on simulation accuracy is analyzed in terms of background value error and restoring error, and it is proved that the DAGM(1,1) model is asymptotically unbiased and can achieve unbiased prediction of white exponential sequences within a negligible error range. For pure exponential series, this paper widens the admissible range of the damping coefficient. Social and economic data contain a large number of approximately exponential series that are not necessarily strictly increasing; for low-growth approximately exponential series in particular, which are affected by data fluctuations, the damping coefficient cannot be set by reference to pure exponential series. In this case, the role of the damping coefficient is to regulate the growth rate of the simulated series so as to obtain the best fitting accuracy, and optimization algorithms such as pattern search, genetic algorithms, or particle swarm optimization should be used to find the optimal damping coefficient; in this paper, Matlab programming combined with its optimization toolbox is used. Example applications show that widening the range of the damping coefficient not only effectively reduces the influence of the background value error on simulation accuracy but also adjusts the growth rate of the simulated sequences to attain the best simulation accuracy; the simulation and prediction accuracy of the DAGM(1,1) model is higher than that of the GM(1,1) model and the discrete GM(1,1) model, and comparable to that of the damping accumulated discrete GM(1,1) model.
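    The sketch below shows one common form of the damping-accumulation idea, assuming the recursive operator x1[k] = x0[k] + x1[k-1]/d (which reduces to the ordinary first-order accumulation when d = 1); the paper's exact operator and coefficient-search procedure may differ, and the input series is hypothetical.

```python
import numpy as np

def dagm11(x0, d=1.2, horizon=3):
    """GM(1,1) fitted to a damping-accumulated series.

    Damping accumulation: x1[k] = x0[k] + x1[k-1] / d, which reduces to the
    ordinary 1-AGO when d = 1 and damps old information when d > 1.
    """
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.empty(n)
    x1[0] = x0[0]
    for k in range(1, n):
        x1[k] = x0[k] + x1[k - 1] / d

    # Background values and least-squares estimate of the grey parameters (a, b).
    z = 0.5 * (x1[1:] + x1[:-1])
    B = np.column_stack([-z, np.ones(n - 1)])
    Y = x1[1:] - x1[:-1]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]

    # Time response of the whitened equation, then invert the accumulation.
    ks = np.arange(n + horizon)
    x1_hat = (x1[0] - b / a) * np.exp(-a * ks) + b / a
    x0_hat = np.empty_like(x1_hat)
    x0_hat[0] = x1_hat[0]
    x0_hat[1:] = x1_hat[1:] - x1_hat[:-1] / d
    return x0_hat            # fitted values followed by `horizon` forecasts

print(dagm11([102.0, 108.5, 116.3, 124.8, 134.1], d=1.15, horizon=2))
```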
    Like the GM(1,1) model, the DAGM(1,1) model applies only to approximately homogeneous exponential series, and it may fail to predict the approximately non-homogeneous exponential series that are widespread in social and economic data. To retain the advantages of damping accumulation while extending the applicable modeling series to non-homogeneous exponential series, this paper combines damping accumulation with a translational transformation to construct the DANGM(1,1) model for approximately non-homogeneous exponential series. Example applications show that the simulation and prediction accuracy of the DANGM(1,1) model is higher than that of the NDGM model and the ONGM(1,1,k,c) model, which verifies the validity of the proposed model.
    Social and economic data also contain a class of fluctuating series with seasonal characteristics, such as quarterly GDP and quarterly electricity consumption. For such seasonal fluctuation series the DAGM(1,1) model is even more likely to fail, and the seasonal GM(1,1) model (SGM(1,1) model) for seasonal fluctuation series can be adopted instead. However, the SGM(1,1) model retains the background value construction of the GM(1,1) model, which still limits fitting accuracy. For this reason, this paper combines damping accumulation with the SGM(1,1) model to construct the damping accumulated SGM(1,1) model (DASGM(1,1) model). Example applications show that the simulation and prediction accuracy of the DASGM(1,1) model is higher than that of the SGM(1,1) model and the GM(1,1,T) model, which verifies the validity of the proposed model.
    Application Research
    Cold Chain Logistics Distribution Center Location-allocation Problem with Uncertain Demand and Travel Time
    BAI Qinyang, YUAN Yuxiang, ZHOU Zhili
    2025, 34(9):  113-119.  DOI: 10.12005/orms.2025.0283
    With economic development and rising living standards, demand for fresh products is increasing. However, owing to the perishability, fragility, and time sensitivity of fresh products, a significant share of them is lost in the existing distribution network before reaching consumers, causing substantial losses for cold chain logistics companies. For example, fresh agricultural products in China incur losses of some 300 billion RMB every year due to inefficient logistics, and in the United States more than 40% of food, worth 218 billion USD, is wasted each year. The distribution center plays a crucial role in the cold chain logistics network, as it connects suppliers to retailers; locating distribution centers scientifically can improve distribution efficiency and reduce operational costs. Studying the distribution center location-allocation problem is therefore of great significance for optimizing the entire cold chain logistics network.
    A cold chain logistics network typically consists of suppliers, distribution centers, and retailers, with the distribution center serving as the key link between suppliers and retailers and determining the quality and efficiency of the entire network. To save costs, cold chain logistics companies often rent existing candidate cold storage facilities as their distribution centers. The design of a fresh logistics network thus involves two decisions: (1) which cold storage facilities are rented as distribution centers, and (2) what the allocation plan from distribution centers to retailers is. However, consumer demand and the travel time between network nodes are difficult to predict accurately, which makes both uncertain. This paper therefore focuses on the location-allocation problem of cold chain logistics distribution centers with uncertain demand and travel time.
    Existing research generally solves the cold chain location-allocation problem with uncertain parameters using stochastic optimization or robust optimization. However, stochastic optimization assumes exact knowledge of the distribution or scenarios of the uncertain parameters, which is rarely available in practice, while robust optimization assumes no distributional information at all and optimizes against the worst case, often leading to unnecessarily conservative decisions. Moreover, most existing research considers only uncertain demand and ignores uncertain travel time. In reality, traffic conditions are complex, and neglecting travel time uncertainty may lead to failed location and allocation decisions, resulting in delayed delivery of fresh products to customers, increased operating costs, and decreased customer satisfaction for the cold chain logistics company.
    Considering factors such as the difficulty of accurately predicting retailer demand and the complex, variable road network traffic, a demand and travel time ambiguity set based on the mean absolute deviation is constructed. On this foundation, a cost-minimizing distributionally robust optimization (DRO) model for cold chain logistics distribution center location-allocation is developed. To make the model tractable, auxiliary variables are introduced and the DRO model is reformulated as a mixed integer programming model using dual theory. Finally, the reformulated model is compared with a stochastic programming (SP) model on both in-sample and out-of-sample data. The results demonstrate that the DRO model is more competitive for cold chain distribution center location-allocation problems in which precise distributions of demand and travel time are hard to obtain, and that it is more resilient to extreme scenarios.
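    To convey what a mean-absolute-deviation ambiguity set buys, the one-dimensional sketch below evaluates a worst-case expected cost over all demand distributions with given support, mean, and MAD, using the classical Ben-Tal-Hochman three-point characterization. The paper's model instead dualizes the full location-allocation problem; the cost function and numbers here are hypothetical.

```python
def worst_case_expectation(f, lo, hi, mu, mad):
    """Worst-case E[f(demand)] over all distributions with support [lo, hi],
    mean mu, and mean absolute deviation mad (Ben-Tal-Hochman three-point
    characterization; requires mad small enough that the weights are valid)."""
    p_lo = mad / (2.0 * (mu - lo))
    p_hi = mad / (2.0 * (hi - mu))
    p_mu = 1.0 - p_lo - p_hi
    return p_lo * f(lo) + p_mu * f(mu) + p_hi * f(hi)

# Hypothetical shortage/holding cost at a distribution center with capacity 105.
cost = lambda d: 4.0 * max(d - 105, 0) + 1.0 * max(105 - d, 0)
print(worst_case_expectation(cost, lo=60, hi=140, mu=100, mad=10))
```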
    An Optimization Model and Algorithm of Distant Sea Fisheries Rescue Base Location Problem Based on Sea-Air Cooperation: A Case Study of Nansha Sea Area in China
    WU Di, LIU Jingyuan, WANG Nuo
    2025, 34(9):  120-126.  DOI: 10.12005/orms.2025.0284
    The Nansha Islands, situated at the southernmost end of China's maritime border in the South China Sea, one of China's three marginal seas, require an independent rescue support system owing to their distance from the mainland. Rich in resources, they are vital to the development of China's offshore fisheries. As competition for marine resources in the South China Sea intensifies and China's policies and global dynamics evolve, establishing maritime search and rescue bases and dynamic duty stations in the Nansha Islands holds significant theoretical and practical importance: shortening rescue times, responding promptly to emergencies, protecting China's maritime rights, and ensuring the safety of fishermen. Given the unique background of rescue-base site selection in the Nansha Islands, this paper primarily addresses two challenges: how to establish maritime rescue duty points to enhance coverage of fishing vessels, and how to determine the number and locations of rescue bases to minimize investment costs.
    Analyzing the issues reveals the dilemma we confront: Firstly, there is a matter of determining the number and locations of maritime rescue bases. Establishing one rescue base may reduce construction costs, but if the distance between the maritime duty points and the base is too great, it will inevitably inflate operational expenses. Secondly, there is a question of the quantity of maritime rescue duty points and helicopters. A shortage of rescue facilities will diminish rescue efficiency, while equipping more search and rescue facilities will escalate investment costs. Hence, an optimization model is devised to achieve maximum rescue coverage and the shortest rescue time within the constraints of limited rescue facilities and investment. Based on concepts from particle swarm optimization and evolutionary algorithms, a multi-objective P-MOEA algorithm tailored to this problem is designed. The fishing vessel locations are put into the fishery information database, which is developed and designed using GIS technology. Subsequently, the K-means clustering algorithm is applied to determine the locations of maritime rescue duty points. The results obtained are then put into the established integer optimization model to derive the search and rescue base locations and facility configuration plan. Finally, a cost-effectiveness method is employed to select the most economically viable solution.
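    The duty-point clustering step can be sketched as follows; the vessel coordinates are synthetic stand-ins for the GIS fishery database, and plain K-means treats latitude/longitude Euclidean distance as an approximation that is acceptable over a limited sea area.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic vessel positions (lat, lon); the paper draws real positions from
# its ArcGIS fishery information database.
rng = np.random.default_rng(0)
vessels = rng.uniform(low=[8.0, 111.0], high=[12.0, 117.0], size=(500, 2))

km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(vessels)
duty_points = km.cluster_centers_   # candidate maritime rescue duty points
assignment = km.labels_             # which duty point covers each vessel
print(duty_points)
```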
    This paper focuses on optimizing the construction of the fishery rescue system in the Nansha Islands of the South China Sea, using it as an application example for analysis and verification. Seven islands in the Nansha sea area, all with docks and landing sites for search and rescue helicopters, qualify to host fishery rescue bases; the paper treats these islands as candidate rescue bases and selects several of them to establish maritime search and rescue bases. By interpolating data from the South China Sea fishery resources survey of the Academy of Fisheries Sciences, a fishery information database platform is built on ArcGIS software, projecting the distribution of fishing vessels and island coordinates in the Nansha area. The K-means algorithm is used to cluster the positions of maritime rescue duty points in the Nansha Islands. These data are then fed into the established integer programming model, and the final site selection and search and rescue facility configuration scheme is computed with the P-MOEA algorithm. The results indicate that two maritime rescue bases should be established, on Huayang Island and Meiji Island, with seven maritime search and rescue duty points in total: three based on Huayang Island and the other four on Meiji Island. Each rescue base is equipped with a large search and rescue helicopter, and each duty point with suitable rescue ships to respond promptly to needs. To verify the performance of the proposed algorithm, it is compared with traditional evolutionary algorithms while increasing the number of fishing boats by 10%, 25%, and 50%, with a population size of 1,000 and 1,000 iterations. Over 20 runs, the Pareto frontier solutions of the examples show that the proposed algorithm outperforms traditional evolutionary algorithms in Pareto front optimization, and the cost improvement, uniformity, and diversity of its Pareto optimal solutions surpass those of traditional evolutionary algorithms: on the cost improvement index, the P-MOEA algorithm yields a more cost-effective total investment for the same rescue coverage rate; on the diversity metric, it generates a greater number of Pareto frontier solutions; and on the uniformity index, its Pareto frontier solutions are more broadly distributed. These comparisons demonstrate that the proposed algorithm can be applied to optimize offshore fishing rescue base site selection and rescue facility allocation, with outstanding performance.
    This study provides an effective analytical approach for selecting rescue base locations and scientifically allocating rescue resources in China’s offshore islands. However, there are still some limitations in this article. The research focuses on operational fishing vessels, yet in reality, rescue operations are also required for passing transport vessels, necessitating a comprehensive consideration. This will make the analysis process more complex, and determining how to model and optimize this problem will be the next step in further research.
    Multi-objective Optimization of Water Environmental Governance Program Based on Special Social Responsibility Fulfillment
    CHEN Xu, FENG Jingchun, FENG Hui, XU Hao, ZHAO Liangwei
    2025, 34(9):  127-132.  DOI: 10.12005/orms.2025.0285
    The report of the 20th National Congress of the Communist Party of China clearly stated the need to intensify the prevention and control of environmental pollution, make further efforts to keep waters clear, improve water resources, aquatic environments, and aquatic ecosystems, and strengthen the ecological conservation of major rivers, lakes, and reservoirs. Improving the environmental governance of important rivers is vital to the great rejuvenation and sustainable development of the Chinese nation. Aquatic environmental governance in China is generally undertaken by state-owned enterprises, usually in the form of programs. Although state-owned enterprises enjoy unique policy and resource advantages, the effect of aquatic environmental governance remains unsatisfactory. The fundamental reason is that aquatic environmental governance projects are largely public-welfare in nature: achieving the governance goals relies on the governance enterprises fulfilling special social responsibilities, which conflicts with their profit motive and, together with various uncertainties, results in low project profitability. It is therefore necessary to establish a scheduling method from the enterprise's perspective that fulfills certain special social responsibilities, increases profitability and risk resistance, and ensures that water environmental governance programs meet their governance standards. This is of great significance for the sound development of governance enterprises and the implementation of China's water environmental governance strategy.
    This paper derives the concept and connotation of the special social responsibility of water environmental governance enterprises from the general concept of social responsibility and the responsibilities undertaken by state-owned enterprises. Holding that fulfillment of special social responsibility is correlated with the effectiveness of water environmental governance, it proposes a solution from the perspective of project portfolio and scheduling and constructs a dual-objective optimization model for the net present value and robustness of programs. Because conventional algorithms struggle with such complex constraints, this paper puts forward an NSGA-III algorithm optimized by the Multi-Population Genetic Algorithm (MPGA), which evolves multiple populations in parallel instead of a single one and adds a migration operation in each iteration, so as to retain key information in the population and avoid overfitting. The algorithm uses a two-layer encoding structure: the first layer uses binary encoding to represent the proportion of special social responsibility fulfilled, while the second represents the start times of the different projects (see the sketch below). To verify the effectiveness of the model and algorithm, Program A is taken as an actual case study; its governance content is to renovate the water conservancy facilities of an entire city, with an improvement of the cross-section water-quality compliance rate by 5% as the basic goal and by 12.5% as the high-standard goal.
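    A minimal sketch of such a two-layer chromosome follows, assuming an 8-bit first layer mapped to a fulfillment proportion in [0,1] and one integer start period per project; the paper's exact encoding granularity may differ.

```python
import numpy as np

N_BITS, N_PROJECTS, HORIZON = 8, 5, 36   # assumed sizes for illustration

def decode(chrom):
    bits, starts = chrom[:N_BITS], chrom[N_BITS:]
    # Layer 1: binary string -> proportion of special social responsibility fulfilled.
    proportion = int("".join(map(str, bits)), 2) / (2 ** N_BITS - 1)
    # Layer 2: integer start period of each project within the planning horizon.
    return proportion, list(starts)

rng = np.random.default_rng(1)
chrom = list(rng.integers(0, 2, N_BITS)) + list(rng.integers(0, HORIZON, N_PROJECTS))
print(decode(chrom))
```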
    The study reaches four conclusions. (1) In terms of algorithmic performance, the NSGA-III algorithm optimized by the multi-population genetic algorithm adapts well to complex constraints and finds more Pareto solutions, outperforming Multi-Objective Particle Swarm Optimization (MOPSO), the standard NSGA-III algorithm, and the Multi-Objective Grey Wolf Optimizer (MOGWO). (2) The net present value and robustness of a program are approximately negatively correlated: the higher the program's net present value, the weaker its robustness, and vice versa. (3) The robustness of the scheduling scheme confers resistance to uncertain environments: the stronger the program's robustness, the smaller the adverse impact it suffers. At the same time, schemes with high net-present-value robustness are more compact, while more relaxed schedules bring greater net present value, contrary to conventional intuition. (4) Government subsidies or rewards for water environmental governance enterprises are crucial and conducive to improving governance outcomes; without rewards, enterprises may suffer losses, which in the long run dampens their enthusiasm and proves counterproductive.
    This paper explains the special social responsibility of water environmental governance enterprises, takes full account of the coordination of water environmental governance effect and enterprise profitability, and provides a new scheduling method for enterprises and some ideas for the government to formulate a reward and punishment system. Future studies will be devoted to further exploring the correlation of multiple objectives such as robustness, qualitative robustness, net present value, and governance effect of the scheduling solution, and developing a more suitable and efficient algorithm.
    Carbon Emission Prediction of Waste Air Conditioning Refrigerants Based on Seasonally Adjusted GRNN
    WANG Fang, CHENG Wenxin, YU Lean
    2025, 34(9):  133-140.  DOI: 10.12005/orms.2025.0286
    As one of the largest producers of Waste Electrical and Electronic Equipment (WEEE) in the world, China had a theoretical WEEE waste volume of 7.674 million tons in 2021, of which more than 75.762% came from five kinds of WEEE: waste televisions, refrigerators, washing machines, air conditioners (AC), and microcomputers. Of the five, waste AC has the largest carbon emission reduction potential: 22.06 tons of carbon emissions can be avoided for every ton of waste AC collected and treated, and this reduction is mainly attributable to the recovery of refrigerants. Predicting the carbon reduction potential of discarded AC refrigerants can provide data support for formulating national carbon emission reduction policies and for optimizing the emission reduction schemes of recycling and dismantling enterprises. Accurate carbon emission prediction in turn requires precise forecasting of the waste volume. However, existing waste-volume prediction models need improvement in several respects. First, methods that rely on empirical data tend to introduce subjectivity and uncertainty into the predictions. Second, existing research mainly uses annual data for forecasting and analysis, which keeps the government and enterprises from formulating more targeted policies and plans. Third, few studies consider socio-economic factors such as GDP (Gross Domestic Product) and average temperature when predicting WEEE carbon emissions.
    This paper proposes an X12-GRNN framework for predicting the carbon emissions of waste AC. The first step forecasts the quarterly sales of AC. Nine socio-economic factors that may affect AC sales are identified through market research and expert advice, and the Pearson correlation coefficients between the nine factor series Xi (i=1,2,…,9) and AC sales Y are calculated to select the important factor series. The X12 method is then used to decompose the seasonal factor series and the sales series into trend components (TC), seasonal factors (SF), and irregular components (IR). Three Generalized Regression Neural Network (GRNN) models are developed for TC, SF, and IR, yielding predictions TC(Y^), SF(Y^), and IR(Y^); integrating them gives the quarterly forecast series of AC sales Y^. The second step estimates the waste volume of AC: combining the quarterly sales Y with life-cycle parameters, the waste volume is predicted with the market supply A model, and the calculated average AC weight of 44.251 kg is applied to convert counts into weights of waste AC. The third step estimates the carbon emissions of waste AC refrigerants. The market shares of R22, R32, and R410a refrigerants between 2011 and 2025 are forecast with the Holt-Winters-No Seasonal (HWN) method; then, drawing on existing research, parameters such as the refrigerant filling volume of household AC, the annual leakage rate and recycling rate of refrigerant in waste AC, and the carbon emissions of other materials in the recovery process are set, and the carbon emissions of waste AC refrigerants are calculated.
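    The GRNN prediction step admits a compact sketch: Specht's GRNN is equivalent to Nadaraya-Watson kernel regression with a Gaussian kernel, whose smoothing factor sigma is the network's only free parameter. The inputs below are placeholders, not the paper's factor data.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma=0.5):
    """GRNN regression: a Gaussian-kernel weighted average of training targets
    (Specht's GRNN is equivalent to Nadaraya-Watson kernel regression)."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)      # summation/output layers

# Placeholder inputs, e.g. lagged trend-component values plus factor series.
rng = np.random.default_rng(3)
X = rng.random((40, 3))
y = X @ np.array([0.5, 1.0, -0.3]) + 0.05 * rng.standard_normal(40)
print(grnn_predict(X, y, X[:5]))
```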
    The main findings can be summarized in three points. First, the carbon emissions from discarded household AC are enormous: from the first discards of household AC in 2005 to the end of the calculation period in 2052, a cumulative 1.103×10^9 t CO2-eq would be emitted without refrigerant recovery. Second, standardized recovery of discarded household AC has a high carbon reduction potential: raising the refrigerant recovery rate from 5.000% to 10.000% over the calculation period would cut emissions by 5.161×10^7 t CO2-eq. Third, using environmentally friendly refrigerants is an effective measure for reducing carbon emissions from discarded household AC: over the calculation period, AC using R32 refrigerant emits far less carbon than AC using R22 or R410a, only about one-fifth as much as R22 and one-third as much as R410a. Several issues, such as integrating other prediction models into the X12-GRNN framework, including imported AC in the research scope, and setting the carbon emission parameters, still require further discussion and research.
    Research on a Wartime Joint Air Operation Task Planning Method
    ZHANG Hongbing, ZHAO Hong
    2025, 34(9):  141-147.  DOI: 10.12005/orms.2025.0287
    Before a war, planning the daily air task orders required for joint air operations often takes several days, ten days, or even dozens of days. How to rationally utilize aircraft and ammunition throughout a battle, adopt appropriate mounting schemes and flight profiles, and carry out air interception, air assault, and close-range aerial fire support tasks under varying weather conditions, aircraft wear, ammunition consumption, battlefield assessments, and other factors is an extremely complex and important optimization problem that directly affects combat efficiency. However, no proven optimization approach for this problem has yet emerged, which makes it all the more difficult to employ air forces optimally over the whole course of a battle. In-depth research on air task allocation is therefore urgently needed: given the types and quantities of combat aircraft, and based on the battlefield environment, tactical knowledge, and mission requirements, allocate one or a group of ordered tasks (target or spatial task points) to each combat aircraft so as to complete as many tasks as possible while achieving optimal overall efficiency in air combat operations. Joint air combat task allocation includes pre-war task allocation and dynamic task allocation. Pre-war task allocation is used for detailed planning before task execution; it generally involves larger problem sizes and more comprehensive considerations, is harder to solve, and usually relies on centralized solution methods. Dynamic task allocation is used to generate task instructions during wartime execution; it emphasizes real-time performance and usually relies on distributed solution methods.
    This article focuses on the dynamic task allocation problem in wartime joint air operations, with the research goal of dynamically generating task allocation instructions for fighter aircraft. After summarizing the advantages and disadvantages of existing methods, it addresses the difficulty that existing joint air combat planning models cannot be solved efficiently for large-scale air operations and highlights the wartime requirements of high reliability and timeliness. A "human-in-the-loop" wartime joint air combat mission planning method is proposed. The model first calculates the available aircraft sorties and takeoff time windows within the air task instruction cycle. By analyzing the task requirement nodes and fighter resource nodes of joint air operations, it computes a loss function based on the matching degree of "flight unit-operational task" and uses the Russell approximation matrix to construct the solution space. The initial air task allocation solution space is then mapped to a directed graph network, and regularizing the fighter sorties that each flight unit provides and those that each operational task requires yields a further optimized solution space. With strong human-machine adaptability, the planning model incorporates the commander's intentions and can therefore better absorb the impact of the commander's handling of unexpected situations on the generation of air task instructions within each instruction cycle.
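    Reading the "Russell approximation matrix" step as Russell's approximation method for transportation problems, a generic sketch looks as follows, with flight-unit sortie supplies, task sortie demands, and the loss matrix all hypothetical.

```python
import numpy as np

def russell_initial_allocation(cost, supply, demand):
    """Russell's approximation method for an initial transportation plan:
    repeatedly allocate on the cell with the most negative
    delta_ij = c_ij - u_i - v_j, where u_i and v_j are row/column maxima."""
    cost = np.asarray(cost, dtype=float)
    supply = np.asarray(supply, dtype=float).copy()
    demand = np.asarray(demand, dtype=float).copy()
    alloc = np.zeros_like(cost)
    rows, cols = supply > 0, demand > 0
    while rows.any() and cols.any():
        sub = cost[np.ix_(rows, cols)]
        u, v = sub.max(axis=1), sub.max(axis=0)
        i, j = np.unravel_index(np.argmin(sub - u[:, None] - v[None, :]), sub.shape)
        ri, cj = np.where(rows)[0][i], np.where(cols)[0][j]
        q = min(supply[ri], demand[cj])
        alloc[ri, cj] = q
        supply[ri] -= q
        demand[cj] -= q
        rows, cols = supply > 1e-9, demand > 1e-9
    return alloc

# Loss matrix rows = flight units, columns = operational tasks (hypothetical).
loss = [[4, 8, 6], [5, 3, 7], [9, 6, 4]]
print(russell_initial_allocation(loss, supply=[10, 8, 6], demand=[7, 9, 8]))
```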
    Following the basic idea of solving complex problems with simple methods, the proposed method can effectively compress the solution space in a relatively short time, countering the rapid growth of wartime planning complexity with the scale of air operations. At the same time, because humans remain in the planning loop, the resulting task allocation schemes are more reliable in the face of changing battlefield environments. It is worth noting that the purpose of allocating joint air mission instructions is to form orderly air mission instructions for each operational day and to assist commanders in directing joint air combat operations. Therefore, while computing and optimizing the "flight unit-operational task" solution space, factors such as each flight unit's pilot proficiency and experience in executing relevant combat tasks must also be considered comprehensively and reflected in the loss function.
    Research on the Decision Model for Railway Transportation Loading of Mobile Medical Service Team
    XU Yite, HAO Xiaoxiong, HUANG Zhaohui
    2025, 34(9):  148-153.  DOI: 10.12005/orms.2025.0288
    Railway transportation is the main way for the army to deploy troops and move equipment over long distances. With troops now conducting realistic combat training, cross-regional exercises have become routine, and railway transportation has gradually become the primary mode of cross-regional mobility for the army's mobile medical service teams, which are usually equipped with a wide variety of medical equipment and supplies. The work involved is complex, the transportation requirements are high, and logistics support is difficult, all of which complicates the organizational procedures. However, most mobile medical teams, built on the basis of army hospitals, lack professional military transportation personnel, and this lack of expertise among commanding officers can lead to unreasonable railway loading schemes and loading errors, delaying the scheduled departure of troops.
    To further enhance the medical support capability of mobile medical teams and optimize the organization and planning of railway transportation loading, this article applies integrated optimization methods to build a railway transportation loading decision-making model. The first part addresses the optimization of loading arrangements: it uses a first-fit algorithm, combined with the relevant railway transportation regulations, to construct an optimization model for loading arrangements (see the sketch below); users input the specific quantities and models of vehicles or equipment, and the model quickly computes the best loading arrangement while minimizing resource use. The second part addresses the loading demand for railway transportation, building on the arrangement optimization model of the first part: based on the mission requirements and operational characteristics of mobile medical units, calculation methods for train length, total traction weight, and the quantity and quality of the reinforcement equipment required for loading are outlined to derive a loading demand calculation model. The third part optimizes the railway loading workflow with the goal of minimizing loading time: the duration of each stage of loading is measured through field surveys, the critical path and total duration of the loading work are identified with integrated optimization analysis, and the workflow is optimized accordingly. The fourth part performs model calculations on the example of a hospital's mobile medical service team organizing long-distance railway transportation in 2021; through model optimization, the loading cost and time of this team are significantly reduced.
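    A minimal sketch of the first-fit idea in the loading-arrangement step, reduced to one dimension (deck length only); a real scheme must also respect the loading gauge, axle loads, and other railway regulations, and the flatcar length and vehicle lengths used here are hypothetical.

```python
def first_fit(vehicle_lengths, flatcar_length=16.4):
    """First-fit loading: place each vehicle on the first flatcar with enough
    remaining deck length; open a new flatcar when none fits.
    (One-dimensional simplification; flatcar_length is a hypothetical value.)"""
    remaining = []                    # remaining deck length per flatcar
    plan = []                         # flatcar index assigned to each vehicle
    for length in vehicle_lengths:
        for idx, rem in enumerate(remaining):
            if length <= rem:
                remaining[idx] -= length
                plan.append(idx)
                break
        else:
            remaining.append(flatcar_length - length)
            plan.append(len(remaining) - 1)
    return plan, len(remaining)

# Hypothetical vehicle deck lengths in meters (ambulances, trucks, trailers).
print(first_fit([7.2, 5.1, 9.0, 4.5, 6.8, 7.2, 3.9]))
```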
    The main focus of this article lies in the cost and time optimization of railway transportation loading. By comprehensively applying the first-fit algorithm and integrated optimization analysis methods, a decision-making model for railway transportation loading of mobile medical service teams is constructed. With the aid of Python programming tools, the model is validated using practical case studies. The model provides accurate and reasonable railway transportation loading solutions for the commanding officers of mobile medical service teams, serving as a valuable decision-making aid in their organizational planning work.
    Research on Optimal Classification of Small Enterprise Credit Rating Based on Default Probability Index Distribution
    ZHAO Zhichong, BAI Xuepeng, ZHANG Tong, ZHANG Yajing
    2025, 34(9):  154-161.  DOI: 10.12005/orms.2025.0289
    Credit rating is not simply a ranking of customers' creditworthiness; it involves further analysis of customers' default probabilities to determine how default risk is distributed among customers with different credit ratings. Credit rating is currently dominated by institutions such as Standard & Poor's and Moody's, while third-party credit rating agencies in China are still in their infancy, and their rating systems are not tailored to the characteristics of small businesses in China. If existing rating systems were used to assess small enterprises, commercial banks might reject many small enterprises with development potential because of low credit ratings, thereby constraining their growth.
    This paper therefore proposes a credit rating classification model in which the default probability follows a given exponential distribution. Examining the credit rating outcomes of authoritative agencies such as Standard & Poor's and Moody's, we observe an exponential pattern relating credit ratings to default probabilities, and we introduce an optimal classification method for credit ratings in which the default probability conforms to an exponential distribution. Credit ratings are ranked from high to low, and the objective function minimizes the sum of deviations between the default probability of each rating and its ideal default probability, ensuring that the classification results adhere to the exponential pattern. A genetic algorithm is employed to solve the resulting nonlinear classification model, overcoming the limitations of manual implementation and the tendency of general optimization algorithms to become trapped in local optima. This approach addresses the inadequacy of existing classification methods, which often overlook the systematic relationship between credit ratings and default probabilities.
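    The objective can be sketched as follows for a partition of customers, sorted by estimated default probability, into contiguous ratings, with an assumed ideal pattern ideal_i = p1·g^i; the functional form and parameters are illustrative rather than the paper's calibrated values.

```python
import numpy as np

def rating_deviation(pd_sorted, cuts, p1=0.001, g=2.5):
    """Deviation of each rating's mean default probability from an ideal
    exponential pattern ideal_i = p1 * g**i; a genetic algorithm would search
    over the cut points `cuts` to minimize this value."""
    bounds = [0, *cuts, len(pd_sorted)]
    total = 0.0
    for i in range(len(bounds) - 1):
        mean_pd = pd_sorted[bounds[i]:bounds[i + 1]].mean()
        total += abs(mean_pd - p1 * g ** i)
    return total

# 3,045 synthetic default probabilities, sorted from best to worst customer.
pds = np.sort(np.random.default_rng(2).beta(1, 30, size=3045))
print(rating_deviation(pds, cuts=[800, 1500, 2100, 2600, 2900]))
```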
    This study uses empirical data from a commercial bank in China, comprising 3,045 settled loan transactions of small enterprises distributed across 28 regions including Beijing, Tianjin, Dalian, and Chengdu. The research investigates the default patterns of different credit ratings within this bank and further compares the distribution patterns of credit ratings given by Standard & Poor’s and Moody’s across various regions, such as Asian enterprises, emerging market enterprises, and Chinese enterprises. The comparison of rating results under different distribution patterns validates the effectiveness and adaptability of the proposed model.
    This research presents a credit rating methodology tailored to the small-enterprise customers of commercial banks. Under the premise of controlling the bank's default risk, the method classifies customers into different credit ratings; by referencing the investment-grade and speculative-grade results of Standard & Poor's and Moody's, it evaluates the creditworthiness of small enterprises and helps determine which rating categories correspond to creditworthy customers and which require close monitoring.
    The default probability and loss given default parameters provided in the loan credit rating classification are core parameters for calculating default loss compensation in loan pricing. In the next step of the research, under the assumption that the default probability follows an exponential distribution, the distribution pattern of loss given default will also be considered. This will enable the investigation of the default probability and loss given default for small enterprises across different credit ratings.
    Deep Reinforcement Learning Portfolio Model Based on Variational Mode Decomposition
    GAO Ni, RAN Qili, HE Yiyue
    2025, 34(9):  162-168.  DOI: 10.12005/orms.2025.0290
    Portfolio selection is the process of dynamically allocating wealth among a group of assets, either maximizing long-term returns under a given risk tolerance or minimizing risk for a given expected return. With the development of computer technology, constructing portfolio selection models with machine learning has become a hot spot in intelligent financial investment research. However, traditional machine learning and deep learning typically use supervised learning to predict asset prices and cannot interact directly with the financial market, which leaves two defects in portfolio model construction. First, deep learning feature extraction emphasizes short-term returns at the expense of long-term returns, adding risk to the portfolio. Second, deep learning models cannot dynamically adjust their trading strategies as the market changes.
    Unlike other machine learning methods, deep reinforcement learning centers on the interaction of an agent with its environment. Learning from feedback signals with the goal of maximizing a reward function, it is well suited to nonlinear problems with delayed returns such as portfolio selection. Existing deep reinforcement learning methods for portfolio problems fall broadly into three types: value-based, policy gradient-based, and actor-critic. Value-based algorithms suffer from high bias and are typically used in discrete action spaces, so they are not suitable for portfolio problems with continuous action spaces. Policy gradient-based algorithms suffer from unstable training and slow policy convergence because of high variance and noisy gradients. Algorithms built on the actor-critic framework combine the two approaches and resolve the trade-off between high bias and high variance: the actor network generates strategies directly while the critic network evaluates them in real time, making this class well suited to portfolio problems. Proximal Policy Optimization (PPO) is an actor-critic algorithm and one of the state-of-the-art methods in reinforcement learning, so this paper uses PPO as the framework for a stock portfolio model. Furthermore, capital markets contain much short-term speculation and noise trading, so financial time series are noisy: in the short term, asset prices fluctuate irregularly under heavy speculative and noise trading, whereas in the long term prices revert to value by the law of value. High-frequency fluctuations of asset prices therefore carry more noise, while low-frequency fluctuations carry more valid information.
    To address these problems, this paper proposes the deep reinforcement learning portfolio model VMD-PPO. First, the model decomposes the stock price time series using VMD, with its parameters determined by the SSA algorithm, to obtain k intrinsic mode functions (IMFs) with different center frequencies. Second, it removes some high-frequency IMFs from the decomposition and reconstructs the remaining components with the gray correlation clustering method into high-frequency, low-frequency, and trend terms, reducing the noise in the financial time series. Third, a volatility feature extraction network is constructed to learn the multiscale features of the stock price series. Finally, the optimal portfolio model and the corresponding portfolio strategy are built on the PPO algorithm.
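    The decomposition-and-denoising step can be sketched as below, assuming the community vmdpy package and fixed VMD parameters in place of the SSA-based tuning; dropping the two highest-frequency modes stands in for the gray correlation clustering reconstruction.

```python
import numpy as np
from vmdpy import VMD

# Toy price series; real inputs are stock prices, and the paper tunes
# alpha and K with the SSA algorithm rather than fixing them as here.
t = np.linspace(0, 1, 512)
prices = np.sin(2 * np.pi * 3 * t) + 0.3 * np.random.default_rng(4).standard_normal(512)

alpha, tau, K, DC, init, tol = 2000, 0.0, 6, 0, 1, 1e-7
u, u_hat, omega = VMD(prices, alpha, tau, K, DC, init, tol)

order = np.argsort(omega[-1])           # modes sorted by final center frequency
denoised = u[order[:-2]].sum(axis=0)    # drop the two highest-frequency IMFs
```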
    To verify the validity of the model, 20 constituent stocks in 10 industries are randomly selected from the CSI300 and CSI500 for separate backtesting, with cumulative return, Sharpe ratio, maximum drawdown, and Calmar ratio as assessment indicators. Multiple experiments show that VMD-PPO effectively reduces the noise of financial time series and efficiently extracts their multi-timescale features, significantly outperforms the other control-group models, controls risk better in different market environments, and obtains more excess returns.
    Emission Reduction and Promotion Policies of Low-carbon Supply Chain Considering Overconfidence and Information Asymmetry
    XIA Liangjie, FENG Jinru, YANG Xinwen, LI Youdong
    2025, 34(9):  169-176.  DOI: 10.12005/orms.2025.0291
    To control carbon emissions and incentivize emission reduction among businesses, China officially implemented the “Administrative Measures for Carbon Emissions Trading (Trial Implementation)” in February 2021. The carbon cap-and-trade policy directly affects firms’ costs, profit composition, and operational decisions, standing as one of the most effective policy tools for emission reduction. Simultaneously, growing consumer awareness of low-carbon practices is leading to increased emphasis on products’ low-carbon attributes. Retailers actively engage in low-carbon marketing to highlight their products’ low-carbon attributes. Furthermore, information on market demand, costs, and quality significantly influences supply chain decisions. Companies often base their choices to share proprietary information on self-interest. Asymmetric cost information is a typical example of this phenomenon. Retailers possess proprietary information regarding their promotional costs, and whether they choose to share this information with manufacturers affects decision-making and utility throughout the supply chain. Moreover, market demand is typically uncertain, and cognitive biases can influence decision-makers’ judgment of market demand, thus affecting their decisions. Research indicates that overconfidence is one of the most prevalent cognitive biases. Decision-makers tend to exhibit overconfidence when faced with external uncertainty. This not only affects their own operational decisions but also influences other supply chain members. The retailer is prone to over-precision in judging consumers’ low-carbon awareness. Additionally, since the retailer’s low-carbon promotional costs are often private, the manufacturer may develop an overconfident bias in its assessment of these costs if the information is not shared.
    Therefore, this study investigates the game-theoretic problem of emission reduction and promotion in a low-carbon supply chain with overconfidence and information asymmetry. We model a supply chain with a single manufacturer and a single retailer under a carbon cap-and-trade policy. The model has the following key features: the retailer, as the supply chain leader, is overconfident about consumers' low-carbon awareness and holds private information on its promotional costs; the manufacturer, in turn, may become overconfident about these costs when information is asymmetric. We develop Stackelberg game models for three scenarios: (1) the retailer shares cost information; (2) the retailer hides cost information and the manufacturer remains rational; and (3) the retailer hides cost information and the manufacturer is overconfident. We analyze the equilibrium outcomes of the emission-reduction and promotion game, the retailer's information sharing strategy, and the impacts of overconfidence and the carbon trading price on decisions and firm profits.
    The results show that: (1) both the retailer's order quantity and the manufacturer's unit emission reduction are negatively correlated with the manufacturer's overconfidence level, and the more overconfident the retailer, the higher (lower) the order quantity of low-margin (high-margin) products; (2) the retailer's overconfidence affects only its own belief-expected profit, whereas the manufacturer's overconfidence affects both firms: the retailer always benefits from its own overconfidence, while under information asymmetry the manufacturer always suffers from its overconfidence; (3) the retailer's decision to share (conceal) promotional cost information is not always beneficial to the manufacturer (retailer); the strategy depends on its profitability and promotional efficiency.
    Future studies could introduce competition among manufacturers or retailers and explore the problem within more complex supply chain structures. The research framework could also be extended from a single carbon cap-and-trade policy to consider multiple carbon policies.
    Study of Evolutionary Game of Multi-agent Safety Behavior under Subcontracting Operation of Building Construction Labor
    CHENG Lianhua, WANG Chen, LI Shugang
    2025, 34(9):  177-183.  DOI: 10.12005/orms.2025.0292
    In order to enhance safety management at construction sites involving labor subcontracting, this study develops a tripartite evolutionary game model based on prospect theory, incorporating the general contractor, labor subcontractor, and workers. By integrating risk perception and mental accounting mechanisms, the decision-making behaviors of these three parties under uncertainty are systematically analyzed. A perceived benefit matrix is constructed to reflect the subjective evaluations of gains and losses, incorporating elements such as safety compliance benefits, risk compensation, and potential penalties. Furthermore, replicator dynamic equations are derived and solved to examine strategic interactions and evolutionary pathways among the multiple agents, identifying equilibrium conditions and convergence trends under varying constraints. Numerical simulations using MATLAB are conducted to evaluate how variations in key parameters such as safety management costs, penalty intensity, behavioral valence, and accident losses influence the system’s evolutionary trajectory and stability. These simulations also assess the sensitivity of each party’s decision-making to changes in regulatory and economic conditions.
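    The replicator dynamics can be sketched as a three-dimensional ODE system; the perceived payoff gaps below are hypothetical linear placeholders for the prospect-theoretic expressions derived in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def replicator(t, s):
    x, y, z = s   # shares choosing: active management, active management, safe operation
    # Perceived payoff gaps between the two strategies of each party;
    # the linear forms and coefficients are hypothetical placeholders.
    dUx = 2.0 * y + 1.5 * z - 2.2
    dUy = 1.8 * x + 1.2 * z - 1.9
    dUz = 1.0 * x + 1.4 * y - 1.1
    return [x * (1 - x) * dUx, y * (1 - y) * dUy, z * (1 - z) * dUz]

sol = solve_ivp(replicator, (0.0, 50.0), [0.3, 0.4, 0.5])
print(sol.y[:, -1])   # strategy shares at the end of the run
```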
    The results indicate that the stability of the safety management system is significantly affected by these factors. The strategy combination (active management, active management, and safe operation) is identified as the most favorable evolutionary stable strategy, which effectively aligns the interests of all participants while minimizing systemic risks. Enhancing the perceived value of penalties and accident losses for all parties proves effective in guiding the system toward this ideal state by increasing the psychological and financial weight of non-compliance. Additional measures include reducing the psychological reference point for safety investment costs through improved cost-sharing mechanisms and policy incentives, as well as adopting technological means to improve the efficiency and effectiveness of safety management. These interventions collectively strengthen risk control capabilities, enhance coordination among stakeholders, and reduce the probability of accidents.
    This research integrates behavioral economics with evolutionary game theory, providing theoretical insights and practical strategies for optimizing safety management in subcontracting-intensive construction environments. The findings highlight the importance of addressing perceptual and psychological factors in safety management systems and offer actionable recommendations for policymakers and project managers to enhance collaborative risk governance. By aligning economic incentives with safety objectives, this study contributes to the development of more resilient and adaptive safety management frameworks in complex construction projects.
    Design of Intra-city Metro Logistics Network Based on Distributionally Robust Optimization
    GUO Shihao, HU Qingmi
    2025, 34(9):  184-191.  DOI: 10.12005/orms.2025.0293
    The rapid growth of e-commerce has spurred the need for efficient urban logistics solutions. Conventional intra-city express delivery networks face challenges such as congestion, pollution, and susceptibility to disruptions. One potential solution is the Metro-based Underground Logistics System (MULS), which integrates metro infrastructure with logistics operations, promising cost reduction and operational streamlining. In many-to-many networks such as intra-city express delivery, aviation, and telecommunications, the hub-and-spoke structure has extensive application potential. However, current research mainly focuses on fully interconnected hub networks, termed the Hub Location Problem (HLP). Some scholars have noted that incomplete hub networks may offer advantages, forming, for example, circular or tree-like topologies. Beyond incomplete HLP over fully connected underlying networks, limited research exists on incomplete HLP within incomplete underlying networks such as metro and railway systems. Although some past studies have employed hub-and-spoke network designs for MULS, applying incomplete HLP methods based on incomplete underlying networks to MULS design remains relatively unexplored. Moreover, existing research mainly addresses deterministic express delivery demands. However, the random distribution of intra-city express delivery demand requires special consideration in MULS design: planning only for deterministic demand may prove inadequate in practice and lead to excessive resource wastage.
    This research aims to propose an intra-city metro logistics system utilizing both surface road and metro transportation modes to enhance intra-city express delivery efficiency. We determine metro hub locations from a potential hub set, establish routes between metro hubs, and assign customers to metro hubs. Unlike previous research, we define the Hub Location Problem in an incompletely connected network environment, termed the Incomplete Hub Location Problem (IHLP), to address intra-city metro logistics network planning. The IHLP model proposed in this paper can be directly applied to real metro network structures, enhancing its practicality. Furthermore, we not only consider service time constraints but also incorporate metro transfers into the decision-making process. To address uncertainty in express delivery demand, we develop a distributionally robust optimization model for the IHLP to enhance the robustness of the metro logistics network.
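    A minimal deterministic core of such a hub location and assignment model, sketched in docplex with toy data, is shown below; the full IHLP additionally restricts hub-to-hub links to existing metro edges and adds service-time and transfer constraints, which are omitted here:

# Minimal deterministic core of a hub location/assignment model (docplex).
# Sets and costs are toy placeholders, not the paper's data or full model.
from docplex.mp.model import Model

customers = range(6)
hubs = range(3)                     # candidate metro hub stations
assign_cost = [[(i - j) ** 2 + 1 for j in hubs] for i in customers]
open_cost = [10, 12, 8]
p = 2                               # number of hubs to open

mdl = Model(name="IHLP-core")
y = mdl.binary_var_dict(hubs, name="open")
x = mdl.binary_var_matrix(customers, hubs, name="assign")

mdl.add_constraint(mdl.sum(y[j] for j in hubs) == p)
for i in customers:
    mdl.add_constraint(mdl.sum(x[i, j] for j in hubs) == 1)   # single allocation
    for j in hubs:
        mdl.add_constraint(x[i, j] <= y[j])                   # only open hubs serve

mdl.minimize(mdl.sum(open_cost[j] * y[j] for j in hubs)
             + mdl.sum(assign_cost[i][j] * x[i, j]
                       for i in customers for j in hubs))
sol = mdl.solve()                   # requires a local CPLEX installation
print(sol.objective_value if sol else "no solution")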
    As the proposed distributionally robust optimization problem is a semi-infinite chance-constrained model, we investigate two classes of ambiguity sets. First, we consider probability distributions with zero-mean bounded perturbations and transform the ambiguous chance constraint into tractable form through safe approximation methods. Second, we explore Gaussian perturbation families with partial mean and variance information, which allows us to convert the ambiguous chance constraint into a deterministic equivalent. The models derived from these two ambiguity-set approximations are referred to as the Bounded Perturbation-DRO model and the Gaussian Perturbation-DRO model, respectively.
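    For illustration only, in standard notation rather than the paper’s: for a single Gaussian distribution \xi \sim \mathcal{N}(\mu, \Sigma) and risk level \varepsilon < 1/2, the chance constraint \Pr\{\xi^{\top}x \le b\} \ge 1 - \varepsilon admits the exact second-order cone equivalent

\mu^{\top}x + \Phi^{-1}(1-\varepsilon)\,\lVert \Sigma^{1/2}x \rVert_2 \le b,

where \Phi^{-1} is the standard normal quantile. The Gaussian Perturbation-DRO model can be read as taking the worst case of this form over partially specified (\mu, \Sigma), while the bounded-perturbation case replaces the quantile factor with a safe bound of the form \sqrt{2\ln(1/\varepsilon)}.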
    Finally, we conduct numerical experiments using a portion of the Shanghai metro network as the experimental platform, along with test cases generated from the AP dataset. All numerical experiments are solved with CPLEX called from Python. The optimal values obtained for the Bounded Perturbation-DRO and Gaussian Perturbation-DRO models are 178349.32 and 176817.28, respectively, with the Gaussian Perturbation-DRO model yielding the smaller value. Additionally, sensitivity analysis of parameters such as the maximum allowed service time, the discount factor, and the transfer cost reveals their significant impact on network configuration and optimal values; for instance, excessively compressing the maximum allowed service time may increase total operational costs. Furthermore, we compare the proposed distributionally robust optimization with classical robust optimization and stochastic optimization. The experimental results indicate that, compared with classical robust optimization, the distributionally robust method avoids overly conservative solutions, while compared with stochastic optimization it incurs only a small additional cost for describing the uncertainty of the probability distribution.
    CEO Alumni Network and Corporate Risk: From the Perspective of Relationship Embeddedness
    LU Jing, MIN Jian, SHEN Jun
    2025, 34(9):  192-198.  DOI: 10.12005/orms.2025.0294
    Under the pressure of a continuing international economic downturn, business activities have suffered strong external shocks, and the number of business failures has remained high in recent years. With the aim of reducing corporate risk, research on identifying the factors that influence corporate risk has received growing attention from academia and industry.
    Previous studies have shown that corporate risk is affected by internal factors such as management characteristics and corporate governance, as well as external factors such as macro policies and the financial environment. From the perspective of management characteristics, however, most research focuses on the natural attributes of executives, and little on the social connections of CEOs. China values connections, circles, and relationships; it is a typical relational society in which complex interpersonal ties interconnect to form a huge social network. Such social relationships are not only emotional bonds between people but also connections of interest, and alumni ties are a prime example. Shaped by the campus culture and shared history of the same university, alumni can form a strong sense of cohesion and identity, which can bring resources to an enterprise and affect its operations. It is worth considering, however, that although CEO alumni connections help enterprises exchange information, obtain funds, and develop stably, if one enterprise is hit by a shock, firms connected through the CEO’s network may also suffer through risk contagion. How CEO alumni connections affect corporate risk, and through what mechanism, therefore remains an open question.
    Based on the above analysis, this paper broadens the analysis of the antecedents of corporate risk. It constructs a CEO alumni network using a complex network model and analyzes its topological properties in depth. It selects China’s A-share listed companies from 2009 to 2019 as the initial research sample to explore the effect and mechanism of CEO alumni connections on corporate risk. Since the corporate risk measure is computed with a rolling standard deviation, the final panel sample period is 2009 to 2017. CEO alumni connection data are manually collected, while financial and other data mainly come from the CSMAR and WIND databases. We use a fixed-effects model for the regression analysis, controlling for industry and year fixed effects.
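    The empirical pipeline can be sketched as follows; the variable names (roa, alumni, size, lev) are placeholders, and the three-year forward window is an assumption consistent with the 2009 to 2017 estimation sample:

# Sketch: corporate risk as a rolling standard deviation of ROA, then OLS
# with industry and year fixed effects. All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv").sort_values(["firm", "year"])

# Risk_{i,t}: std. dev. of ROA over the forward window t..t+2, which is why
# a 2009-2019 raw sample yields a 2009-2017 estimation panel.
roll = df.groupby("firm")["roa"].rolling(window=3, min_periods=3).std()
df["risk"] = roll.reset_index(level=0, drop=True)
df["risk"] = df.groupby("firm")["risk"].shift(-2)   # re-anchor window at t

df2 = df.dropna(subset=["risk"])
res = smf.ols("risk ~ alumni + size + lev + C(industry) + C(year)",
              data=df2).fit(cov_type="cluster",
                            cov_kwds={"groups": df2["firm"]})
print(res.params["alumni"])   # a negative coefficient: alumni ties reduce risk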
    We find that CEO alumni connections significantly reduce corporate risk, and the conclusion holds after controlling for potential endogeneity and conducting robustness tests. Mechanism tests show that the CEO alumni network works through “information substitution” and “capital acquisition” effects. We also find a “hierarchy effect” in how the alumni network reduces corporate risk: a CEO’s indirect alumni connections are only shallowly embedded and do not significantly reduce corporate risk, whereas direct and close alumni connections are deeply embedded and can bring information and financial resources to the enterprise, thereby reducing corporate risk.
    The contributions of this paper are mainly as follows: from the perspective of CEO alumni connections, a social attribute, it enriches research on the factors affecting corporate risk; and by exploring the influencing mechanism, it opens the “black box” of how CEO alumni connections reduce corporate risk, offering practical implications for governments seeking to strengthen formal institutions and for enterprises seeking to use relational resources rationally to reduce risk.
    Can Diversity Partners of Cross-border Alliance Promote Disruptive Innovation of Latecomers?
    SONG Zeming, ZHANG Guangyu, DAI Haiwen
    2025, 34(9):  199-204.  DOI: 10.12005/orms.2025.0295
    With the rapid development of technological innovation, disruptive innovation has become an important means for enterprises to seize the strategic initiative. However, the disruptive innovation process exemplified by new energy vehicles shows that a single enterprise acting alone can hardly overcome the difficulties that arise in disruptive innovation. Therefore, to achieve disruptive innovation, latecomers have drawn on their own advantages to form innovation alliances that cross industry, field, disciplinary, and other boundaries, namely cross-border innovation alliances. Yet as the understanding and practice of disruptive innovation deepen and technological interdependence among entities grows, cross-border innovation alliances have also exposed problems in the disruptive innovation process. To better enhance the disruptive innovation efficiency of cross-border innovation alliances and establish a more open innovation ecosystem, this article takes the China EV100 as the research object to explore the threshold effect of relationship embeddedness on the link between the partner diversity of cross-border innovation alliances and the disruptive innovation of latecomers, as well as the moderating effect of enterprise size once the threshold is crossed.
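    The threshold estimation can be illustrated with a Hansen-style grid search on toy data; the variable names and the single-threshold setup are simplifications of the paper’s double-threshold panel specification (the second threshold is found by repeating the search with the first held fixed):

# Grid-search sketch for a single threshold: the effect of partner diversity
# on disruptive innovation switches regimes when relationship embeddedness
# crosses a threshold gamma. All data below are simulated toys.
import numpy as np

rng = np.random.default_rng(0)
n = 500
div = rng.uniform(0, 1, n)          # partner diversity
emb = rng.uniform(0, 1, n)          # relationship embeddedness (threshold variable)
y = 0.8*div*(emb > 0.4) + 0.2*div*(emb <= 0.4) + rng.normal(0, 0.1, n)

def ssr_at(gamma):
    # Regime-split regression; return the sum of squared residuals.
    X = np.column_stack([np.ones(n), div*(emb <= gamma), div*(emb > gamma)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

grid = np.quantile(emb, np.linspace(0.05, 0.95, 91))
gamma_hat = min(grid, key=ssr_at)   # threshold minimizing the SSR
print(round(gamma_hat, 3))          # recovers a value near the true 0.4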
    The research finds that: (1) The partner diversity of cross-border innovation alliances promotes the disruptive innovation of latecomers, with a double threshold of relationship embeddedness. When relationship embeddedness crosses the first threshold, partner diversity promotes latecomers’ disruptive innovation; when it crosses the second threshold, this promoting effect weakens. (2) The moderating effect of enterprise size on the relationship between partner diversity and latecomers’ disruptive innovation varies with the level of relationship embeddedness. After the first threshold is crossed, enterprise size positively moderates the promoting effect, meaning partner diversity does more to promote the disruptive innovation of large latecomers; after the second threshold is crossed, enterprise size negatively moderates the effect, meaning partner diversity better promotes the disruptive innovation of small and medium-sized latecomers. (3) Age, enterprise type, alliance level, and alliance experience are all important factors influencing how partner diversity promotes latecomers’ disruptive innovation.
    This article has three limitations to be explored in future research. First, this study only identifies and examines the partner diversity of cross-border innovation alliances, without exploring other alliance characteristics; future research can extend the analysis to alliance portfolio characteristics. Second, the data come from enterprises in the China EV100, and studying the automotive industry alone limits the generalizability of the conclusions; future research could cover multiple industries and combine questionnaire data with objective data to select more reasonable proxy indicators for comparative analysis. Finally, this study analyzes cross-border innovation alliances in the Chinese context only; future research can draw on typical foreign cases to explore the boundary conditions under which cross-border innovation alliances promote latecomers’ disruptive innovation in different contexts.
    Digital Economy Driving Low Carbon Development: Empirical Study Based on Dynamic QCA and Mediation Effect Modeling
    CAO Ze, CHEN Junwei, CUI Lizhi
    2025, 34(9):  205-210.  DOI: 10.12005/orms.2025.0296
    Low-carbon development represents a significant trajectory for China’s future economic growth, and the efficacy of the digital economy in driving it has garnered widespread attention in recent years. Based on configurational theory, this paper combines qualitative and quantitative analyses, employing NCA, dynamic QCA, and a mediation effect model to scrutinize the interplay between the digital economy and low-carbon development. It addresses three key questions: Can the digital economy propel low-carbon development? What are its underlying mechanisms? And what digital economy development patterns foster optimal low-carbon development?
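    The mediation component can be sketched with the classic three-regression procedure; the column names are placeholders for the paper’s provincial panel indices, and the NCA and dynamic QCA steps are not reproduced here:

# Baron-Kenny-style mediation sketch: does the digital economy (dige) affect
# low-carbon development (lowcarbon) partly through technological advancement
# (tech)? All column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("province_panel.csv")   # hypothetical province-year data

total = smf.ols("lowcarbon ~ dige + C(year)", data=df).fit()          # total effect c
path_a = smf.ols("tech ~ dige + C(year)", data=df).fit()              # X -> M
direct = smf.ols("lowcarbon ~ dige + tech + C(year)", data=df).fit()  # c' and b

indirect = path_a.params["dige"] * direct.params["tech"]              # a * b
print(f"total={total.params['dige']:.3f}, "
      f"direct={direct.params['dige']:.3f}, indirect={indirect:.3f}")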
    The study reveals several key findings: (1) The digital economy significantly drives low-carbon development, with the effect remaining robust across various tests and displaying regional and temporal variation. (2) Beyond its direct impact, the digital economy also fosters low-carbon development by promoting technological advancement and industrial structure upgrading. (3) From a configurational perspective, the NCA findings suggest that no single component of the digital economy is indispensable for achieving high levels of low-carbon development. (4) The dynamic QCA results reveal that informatization and digital transactions are key antecedent conditions, pivotal for maximizing the digital economy’s role in driving low-carbon development; at the same time, low levels of the antecedent conditions are not an obstacle. By giving play to an active government and an effective market, and by formulating digital and low-carbon development policies suited to local factor endowments and comparative advantages, regions can steadily accumulate value and, through incremental innovation, climb the value chain and ultimately achieve low-carbon transformation.
    Policy insights follow. First, boost investment in new-generation digital technology R&D, promote the construction of Digital China and Gigabit Cities, and leverage big data and AI to consolidate the digital economy’s low-carbon benefits. Second, address regional disparities in digital and low-carbon development by prioritizing informatization and digital transactions and tailoring policies to regional endowments and advantages. Third, use digital intelligence to integrate innovation resources, foster strategic and future-oriented industries, cultivate new quality productivity, and drive low-carbon development. Finally, maximize big data and smart technology for dual-gigabit network penetration to enhance technological exchange and industrial structure upgrading.
    Management Science
    Price War or Innovation War? Operational Decisions of EV under Background of Price Threshold Subsidy
    QIAN Zhifeng, YANG Shuli, CHAI Junwu
    2025, 34(9):  211-218.  DOI: 10.12005/orms.2025.0297
    As environmental problems become increasingly prominent, electric vehicles (EV) have grown more and more popular around the world. Using the selling price of an EV as the subsidy threshold has become a key criterion when governments formulate incentive subsidy policies, and it can change the existing competitive landscape of the EV market, potentially triggering intense price wars. The urgent research questions are therefore: should EV manufacturers restricted by a price threshold subsidy adjust their prices to obtain subsidies? And, given that battery recycling and innovation are two main features influencing manufacturers’ pricing decisions, will subsidy policies push EV manufacturers to increase their innovation efforts and recycling rates?
    This study explores in depth the competition between high-technology and low-technology EV manufacturers under a price threshold subsidy policy, addressing three main questions. First, it investigates how the policy influences EV manufacturers’ optimal pricing strategies in a competitive landscape characterized by technological differentiation. Second, it examines the policy’s impact on manufacturers’ operational decisions, including pricing, recycling, and innovation, under the three pricing strategies, aiming to uncover the optimal pricing strategy. Third, it explores how governments can set a reasonable price threshold and subsidy level.
    Based on the above background, this study develops four pricing models. First, we consider the pricing strategy in the absence of subsidies, which serves as the benchmark and is denoted Model N. We then consider a price threshold subsidy policy in which the government sets the threshold between the two product prices: the high-quality product is restricted by the threshold while the low-quality product is not, so the low-tech manufacturer is the policy beneficiary and the high-tech manufacturer is the policy-restricted manufacturer. The three pricing strategies under the policy are: Model C (price-retention strategy), in which the restricted manufacturer keeps its price unchanged in response to the policy; Model D (price-war strategy oriented towards market competition), in which the restricted manufacturer counters the beneficiary’s competitive advantage by adjusting its price while remaining above the threshold; and Model J (price-war strategy oriented towards subsidy acquisition), in which the restricted manufacturer cuts its price to the threshold so that both EV manufacturers qualify for the government subsidy. Finally, the Nash equilibrium of each model is obtained by backward induction.
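    The equilibrium computation for the duopoly pricing stage can be sketched symbolically; the linear substitutable demands and the way the subsidy s is passed to consumers are illustrative assumptions, not the paper’s exact specification:

# Illustrative duopoly pricing stage under the subsidy: the low-tech product
# qualifies, so consumers pay p2 - s. Linear substitutable demands; c1, c2
# are unit costs. These functional forms are assumptions for the sketch.
import sympy as sp

p1, p2 = sp.symbols('p1 p2', positive=True)
a, beta, s, c1, c2 = sp.symbols('a beta s c1 c2', positive=True)

d1 = a - p1 + beta*(p2 - s)        # high-tech demand (rival's effective price)
d2 = a - (p2 - s) + beta*p1        # low-tech demand, subsidy passed through
pi1 = (p1 - c1)*d1
pi2 = (p2 - c2)*d2

# Simultaneous first-order conditions give the Nash equilibrium prices.
eq = sp.solve([sp.diff(pi1, p1), sp.diff(pi2, p2)], [p1, p2], dict=True)[0]
print(sp.simplify(eq[p1]), sp.simplify(eq[p2]))
# Model J would additionally impose p1 <= threshold as a binding constraint
# and re-solve the remaining first-order condition.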
    Through a comparative analysis of the equilibrium results, we find that the restricted manufacturer can employ a combination of price-war and product-innovation strategies to enhance its market competitiveness. Specifically, the restricted manufacturer should implement the price-retention strategy when both the price threshold and the subsidy level are relatively low. Conversely, when the subsidy level or the price threshold is relatively high, the restricted manufacturer can flexibly adopt a price-war strategy oriented towards market competition or subsidy acquisition to mitigate profit losses. When the subsidy level is relatively low, the policy beneficiary can also benefit from the pricing strategy adopted by the restricted manufacturer, achieving a win-win situation. In summary, the results not only enrich the field of EV supply chain operations management but also serve as a reference for EV manufacturers formulating pricing, product innovation, and recycling decisions under a price threshold subsidy policy, while providing theoretical support for governments to formulate scientifically reasonable subsidy policies.
    Study of Vulnerability of Urban Bus-Metro Composite Network Based on Transfer Relationships
    DU Mijie, GUO Peng, ZHAO Jing
    2025, 34(9):  219-225.  DOI: 10.12005/orms.2025.0298
    Combining the metro’s high capacity with bus flexibility, the “metro-led, bus-assisted” model forms the backbone of urban transit. While transfer connectivity enhances travel convenience, frequent transit disruptions often cause localized impacts. Therefore, assessing bus-metro vulnerability based on transfer relationships is vital for improving urban safety, risk prevention, and emergency response.
    Existing studies typically evaluate the vulnerability of public transportation systems from both structural and functional perspectives. Structural vulnerability relies on network topology metrics, while functional vulnerability assesses operational characteristics such as passenger demand satisfaction and flow impact. Most studies measure vulnerability using accessibility and efficiency metrics: accessibility reflects the basic needs an urban transportation system must meet, while efficiency gauges overall operational effectiveness during disruptions. Although current research has addressed the impact of capacity disparities between bus and metro stations on system vulnerability, the fundamental differences between the two systems in capacity, operating speed, failure rules, and other aspects remain under-characterized. Additionally, the bus-metro composite network exhibits significant structural heterogeneity and differentiated station functions, yet existing studies mostly focus on overall system vulnerability and pay less attention to local effects. Furthermore, transfer relationships are the bridges connecting bus-metro composite networks; research on them covers transfer efficiency, transfer costs, transfer behavior, and the characteristics of transfer stations, but their impact on the vulnerability of public transportation systems needs deeper exploration.
    This study assesses the vulnerability of bus-metro composite networks based on transfer relationships, considering local effects and the dual disparities between the two modes. First, by introducing differences in capacity and speed attributes, it represents the structure of urban bus-metro systems in detail and builds local composite networks anchored on transfer relationships. It then defines differentiated failure rules for the bus and metro networks. Finally, accounting for passengers’ effective travel range and the capacity and speed differences between buses and metros, it establishes a multidimensional vulnerability measurement model of network efficiency comprising accessibility vulnerability, relative capacity-weighted network efficiency vulnerability, and relative distance-weighted network efficiency vulnerability.
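    The relative network-efficiency vulnerability measure can be sketched as follows; the toy graph, edge weights, and the node-removal failure rule stand in for the Xi’an data and the paper’s differentiated failure rules:

# Sketch: efficiency is the mean inverse shortest travel time, and a
# station's vulnerability is the relative efficiency drop after its failure.
import networkx as nx

def efficiency(G, weight="time"):
    n = G.number_of_nodes()
    total = 0.0
    for u, dist in nx.all_pairs_dijkstra_path_length(G, weight=weight):
        total += sum(1.0/d for v, d in dist.items() if v != u)
    return total / (n*(n - 1))

G = nx.Graph()
G.add_weighted_edges_from(
    [("m1", "m2", 2), ("m2", "m3", 2),          # metro edges: faster
     ("b1", "b2", 5), ("b2", "m2", 4),          # bus edges and transfer links
     ("b1", "m1", 4), ("b2", "m3", 6)],
    weight="time")

e0 = efficiency(G)
H = G.copy()
H.remove_node("m2")      # metro failure modeled here as full station removal
vul = (e0 - efficiency(H)) / e0
print(f"relative efficiency vulnerability of m2: {vul:.3f}")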
    The Xi’an bus-metro network, built on transfer relationships, exhibits an uneven structure. Local accessibility vulnerabilities for metro and bus stations are 4.8% and 5.4%, respectively, falling between the overall-network and sub-network levels. When capacity and speed disparities are accounted for, metro transfer-station failures intensify local efficiency vulnerability, while bus-station failures have a reduced impact. Notably, local networks with higher proportions of metro stations are more vulnerable. The analysis suggests that increasing network density and optimizing capacity and transfer relationships can mitigate vulnerability. These findings guide local-level optimization that considers scale, structure, and transfer counts, while future research should use real-world data to identify balanced network designs that maintain accessibility under varying traffic volumes.
    Research on Manufacturer-led Green Innovation Decisions of Supply Chain under Co-opetition
    LAI Zhixuan, LOU Gaoxiang, MA Haicheng
    2025, 34(9):  226-232.  DOI: 10.12005/orms.2025.0299
    With growing consumer awareness of environmental protection, product greenness has become an important factor in market competition, pushing enterprises toward green manufacturing innovation to develop green products. Although industry and academia have paid close attention to green manufacturing and good results have been achieved in practice, product greenness remains low in supply chains such as food and fashion, so the manufacturing industry urgently needs to improve green product development performance. On the one hand, green procurement is crucial to improving this performance; on the other hand, green marketing is another key lever. Green procurement and green marketing expand the concept of green innovation in the supply chain and make it more complex, as reflected in the interactive spillover effects among different green innovation practices. In addition, as green competition intensifies, many enterprises have decided to cooperate with their competitors to improve their green competitive advantage. However, the influence mechanism of competition and co-opetition on supply chain green innovation behavior remains unclear. Starting from the green innovation behavior of the supply chain, this paper therefore reveals how co-opetition strategies and cooperation dynamics influence green product development performance, which has important theoretical and practical value for guiding enterprises’ green innovation practice.
    This paper aims to contribute to the progress of co-opetition research by examining the following key questions: (1) What are the differences between green innovation decisions and profits under perfect competition and co-opetition? (2) Does co-opetition always lead to better economic and environmental performance? If the answer is no, what is the applicable scenario? (3) If the cooperation strategy is effective, how to balance the cooperation dynamics and improve the development performance of green products?
    The research shows that an increase in the difficulty of green innovation for any manufacturer or retailer reduces not only the green performance of its own supply chain but also that of the other supply chain. In the co-opetition scenario, a higher intensity of green competition is not conducive to the H-type manufacturer and its retailer, but may benefit the L-type manufacturer. When the H-type manufacturer sets lower green-purchasing standards, the L-type manufacturer will not choose to cooperate with it; when the H-type manufacturer sets higher green-purchasing standards, the L-type manufacturer will likewise abandon horizontal cooperation with it.
    This paper reveals the interaction of green innovation in supply chains from the perspective of competition and cooperation, and draws out the management value of the conclusions for supply chain members and the government. (1) For manufacturers: when implementing green innovation decisions, enterprises should consider not only how to reduce the difficulty of green innovation for themselves but also how to reduce it for supply chain members and competitors; when L-type and H-type manufacturers establish co-opetitive relations, an information-sharing mechanism should be established to help them find greener suppliers; and retailers should develop cost-sharing, benefit-sharing, and other coordination contracts within the supply chain to encourage H-type manufacturers to invest in green innovation. (2) For the government: to achieve environmental protection goals, policies and regulations should be introduced to moderate the intensity of green competition in the market; and when formulating such regulations, the government should fully consider the possible losses of L-type manufacturers and provide complementary subsidies or tax relief.
    Optimization of “Photovoltaic-Energy Storage-Charging Integration” Cold Chain Logistics Operations Considering Carbon Trading and Real-time Electricity Prices
    SHAO Juping, SHI Jin, SUN Yanan
    2025, 34(9):  233-239.  DOI: 10.12005/orms.2025.0300
    As an industry that is highly dependent on electricity and fuel, cold chain logistics faces a significant carbon emission challenge that cannot be overlooked. The consumption of these resources not only escalates operating costs but also exacerbates environmental degradation, posing a dual threat to the long-term viability of businesses and the stability of the global climate system. Thus, promoting the use of new energy logistics vehicles and renewable energy sources emerges as a crucial strategy not only to curtail carbon emissions and ease environmental pressure but also to foster a symbiotic enhancement of corporate economic gains and environmental health. This proactive approach is vital for forging a path towards sustainable industrial practices that benefit both the economy and the environment.
    To rigorously investigate the potential within this sector, this study targets cold chain logistics centers equipped with an advanced “photovoltaic-energy storage-charging integration” system. The system is designed to offer clean and sustainable energy for new energy logistics vehicles, thereby significantly reducing carbon emissions. The introduction of such systems has revitalized the cold chain logistics industry and opened up new avenues for research and optimization that could improve both financial performance and environmental outcomes.
    In this study, the impacts of carbon trading prices and real-time electricity prices on cold chain logistics operations are thoroughly assessed. These pricing factors matter not only for their direct influence on enterprises’ economic costs but also for their role in shaping the deployment and operational strategies of “photovoltaic-energy storage-charging” facilities. The intricate dynamic relationships among photovoltaic installations, energy storage equipment, new energy logistics vehicle charging stations, and the external power grid are studied in depth, with the aim of managing these elements to maximize revenue from electricity sales while minimizing operating costs. To this end, a mixed integer optimization model is constructed that combines cold chain logistics vehicle routing with “photovoltaic-energy storage-charging” energy management. The model seeks an operational framework that meets the logistical demands of transporting temperature-sensitive goods while leveraging renewable energy to minimize carbon emissions. To solve this complex model, the MOEA/D-IEpsilon algorithm is designed; compared with the traditional MOEA/D algorithm, it explores a wider array of feasible solutions in large-scale, multi-decision scenarios, enhancing both the quantity and quality of potential solutions.
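    The epsilon-level comparison at the heart of such an algorithm can be sketched as follows; the decay schedule and parameters are illustrative (published IEpsilon variants adapt the epsilon level to the population’s feasibility ratio), not the paper’s exact design:

# Epsilon constraint-handling sketch: mildly infeasible solutions are treated
# as feasible early on, widening exploration; the level tightens over time.
def better(fa, cva, fb, cvb, eps):
    """True if solution A beats B on a scalarized subproblem value f under
    the epsilon rule (cv = total constraint violation)."""
    if cva <= eps and cvb <= eps:      # both epsilon-feasible: compare objectives
        return fa < fb
    if cva == cvb:
        return fa < fb
    return cva < cvb                   # otherwise prefer the smaller violation

def update_eps(eps0, gen, max_gen, cp=2.0, tc=0.8):
    """Decay the epsilon level to zero by generation tc*max_gen."""
    cutoff = tc * max_gen
    return eps0 * (1 - gen / cutoff) ** cp if gen < cutoff else 0.0

# Example: early on (generation 10 of 200), a slightly infeasible but much
# better solution can replace a feasible incumbent in its neighborhood.
eps = update_eps(eps0=1.0, gen=10, max_gen=200)
print(better(fa=0.3, cva=0.4, fb=0.9, cvb=0.0, eps=eps))   # True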
    The efficacy of the model and algorithm is corroborated through a detailed case study of the tertiary cold chain operations of Company M in Jiangsu Province. The findings reveal that: (1) it is more economical to charge new energy logistics vehicles between 11:00 and 17:00; (2) by strategically planning vehicle routes and managing the charging behavior of new energy logistics vehicles, the economic costs of cold chain logistics can be significantly reduced while electricity sales revenue is boosted; (3) cold chain logistics companies should make full use of their sites for photovoltaic power generation and give priority to “self-use”; (4) energy storage facilities should charge when the electricity price is low and discharge when it is high, reducing economic costs.