
Table of Contents

    25 October 2025, Volume 34 Issue 10
    Theory Analysis and Methodology Study
    Research on Mass Emergency Warning Based on Causal Bayesian Network
    GU Wenjing, QIU Jiangnan, WANG Yalan, YUAN Hao
    2025, 34(10):  1-8.  DOI: 10.12005/orms.2025.0301
    Abstract   PDF (1867KB)
    Mass emergencies can be regarded as complex disaster systems, characterized by numerous system elements and intricate direct and indirect relationships among them. Furthermore, the situation of a mass emergency often evolves rapidly, takes diverse forms, and carries significant uncertainty in its development and impact. This system complexity and uncertainty pose challenges for event early warning, necessitating a method that can systematically represent all elements and their complex interrelationships while quantifying the uncertainty of these elements. To address this issue, this paper constructs an early warning model for mass emergencies based on causal Bayesian networks. The model can make effective use of incomplete data, accurately identify and represent the complex causal relationships between elements, and quantify the uncertainty of variables through conditional probabilities; through probabilistic inference, it then predicts event outcomes and warning levels. The model thus resolves both the system complexity and the uncertainty of mass emergencies, enhancing the accuracy, relevance, and interpretability of early warnings. Moreover, it not only outputs the event consequences and warning levels directly, answering the question of “what”, but also identifies the key causes leading to these outputs, answering the question of “why”, thereby providing more effective decision support for the emergency management of mass emergencies.
    To simplify the complexity of mass emergencies, this paper first introduces the disaster system theory of “disaster-prone environment-causative factors-vulnerable bodies” to construct a representation model for the disaster system of mass emergencies, which provides the initial structure of the causal Bayesian network. Subsequently, based on the representation model, the system elements of mass emergencies are identified to provide variables and data for constructing the early warning model. Finally, the greedy fast causal inference (GFCI) algorithm is selected for structure learning, to resolve the pseudo-causality problem introduced by unobservable latent variables and confounders, and the expectation-maximization (EM) algorithm is used for parameter learning to address missing data, ultimately yielding a causal Bayesian network (CBN) model for the early warning of mass emergencies. Through probabilistic inference with the CBN, the possible outcomes and warning levels of mass emergencies are output, and sensitivity analysis identifies the most critical causal variables leading to those outcomes.
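As a toy illustration of the probabilistic inference step, the sketch below enumerates a three-node discrete network with entirely hypothetical variables and probabilities (Trigger, Severity, and the warning levels are invented names, not variables from the paper's learned network):

```python
# Toy discrete Bayesian network, not the network learned in the paper:
# Trigger -> Severity -> WarningLevel, with P(variable | parents) as dicts.
P_severity = {  # P(severity | trigger)
    "dispute": {"low": 0.7, "high": 0.3},
    "land": {"low": 0.4, "high": 0.6},
}
P_warning = {  # P(warning level | severity)
    "low": {"I": 0.8, "II": 0.2},
    "high": {"I": 0.1, "II": 0.9},
}

def warning_posterior(trigger):
    """P(warning level | observed trigger), by enumerating over severity."""
    post = {"I": 0.0, "II": 0.0}
    for sev in ("low", "high"):
        for w in post:
            post[w] += P_severity[trigger][sev] * P_warning[sev][w]
    return post

print(warning_posterior("land"))
```

Observing the hypothetical trigger "land" shifts posterior mass toward warning level II (0.62 vs. 0.38), mimicking how a CBN propagates evidence toward warning levels.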
    This paper collects 1138 complete cases of mass emergencies from the Wise News database to construct a causal Bayesian network for the early warning of mass emergencies and validate its effectiveness. The results show that the overall accuracy of the model reaches 0.92, and various indicators demonstrate that the model achieves good results in the prediction and early warning of mass emergencies. Additionally, the sensitivity analysis shows that the most critical variable affecting the intensity of online public opinion is “road traffic congestion,” the key variable affecting casualty numbers is “assault on police officers,” and the most critical variable affecting economic loss is whether “key departments” are damaged. Emergency management departments are recommended to focus on these variables when making decisions. Furthermore, this paper analyzes the evolutionary mechanisms of different types of mass emergencies and finds that events related to interest demands and to land acquisition and demolition are more likely to result in “economic losses” and “casualties,” as they are directly related to people’s interests. In contrast, events related to social indignation and social disputes are more likely to produce intense online public opinion.
    This study primarily relies on historical case data of mass emergencies. Due to the difficulty of obtaining time attributes, the dynamic characteristics of mass emergencies are not considered. In the future, we will further enrich the case data with time attributes and construct an early warning model for mass emergencies based on dynamic causal Bayesian networks to achieve more efficient and accurate results.
    Modeling and Reliability Evaluation of Cellular Manufacturing System for Process Path Diversity
    WANG Xin, YE Zhenggeng, CAI Zhiqiang, ZHANG Shuai
    2025, 34(10):  9-16.  DOI: 10.12005/orms.2025.0302
    Abstract   PDF (1541KB)
    The manufacturing industry is undergoing rapid development and change. As technology advances and consumer demand diversifies, an increasing number of businesses are focusing on ways to raise production efficiency, lower costs, and enhance product quality. The cellular manufacturing system has garnered attention in this context because its remarkable flexibility and adaptability enable it to respond promptly to market changes and customized client requirements. The equipment manufacturing business has made extensive use of the cellular manufacturing system, a standard multi-variety, small-batch production organization with the manufacturing cell at its center. The cellular manufacturing system completes the production and processing tasks of equipment through cooperation among different manufacturing cells, and its manufacturing cells or equipment can be regarded as the components of the system, which affect the completion of system tasks. The system divides the factory into distinct areas, each with a single processing function, and different processing cells are placed in each area to perform related tasks; that is, the processing cells in the same area can substitute for one another. As a result, the processing paths of the cellular manufacturing system are diverse, and these uncertain processing paths complicate its reliability evaluation.
    This study first examines the differences in the deterioration process of a processing cell’s equipment under various conditions. When equipment is not used for processing parts, its degradation is determined only by its life parameters and elapsed time and can be expressed as a constant failure rate. During machining, processing time, processing accuracy, and other factors determine how much the machining activity contributes to equipment deterioration, and this contribution can be characterized by an incremental failure rate. The processing cell’s reliability function and cumulative failure function are thus determined, and a mixture failure rate model is developed. Subsequently, the various states that might occur inside the cellular manufacturing system are identified, and an assessment framework considering the variety of process routes is proposed to address the complexity of the cellular manufacturing system’s reliability evaluation. In this procedure, the probability formulae of the system under various states are obtained by modeling the reliability of a specific processing task with multi-state fault trees and multi-state binary decision diagrams. To remove imbalanced efficiency loss and overproduction between operations, a genetic algorithm is used to select the process path that achieves both the highest system reliability and the shortest job completion time. After the process path is obtained, a sampling approach is used to determine how the reliability of the processing cells and equipment varies over time, allowing the probability of the manufacturing system being in various states to be evaluated. The feasibility of the method is demonstrated by a case analysis, which obtains the probability of the system being in various states as well as the reliability of the internal equipment and processing cells.
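The idle-versus-machining degradation distinction can be sketched as a simple mixture failure rate, shown below with invented parameter values; the paper's actual model and cumulative failure function are richer than this:

```python
import math

# Illustrative sketch, not the paper's exact model: a constant baseline
# failure rate lam0 applies over the whole horizon, and an incremental
# rate dlam accrues only during machining time t_proc.
def cell_reliability(lam0, dlam, t_total, t_proc):
    """R(t) = exp(-(lam0*t_total + dlam*t_proc)), with t_proc <= t_total."""
    assert 0 <= t_proc <= t_total
    cumulative_hazard = lam0 * t_total + dlam * t_proc
    return math.exp(-cumulative_hazard)

# Reliability drops as machining load (t_proc) grows, all else equal.
idle = cell_reliability(0.001, 0.004, 1000, 0)
busy = cell_reliability(0.001, 0.004, 1000, 500)
```

With these invented rates, the idle cell retains reliability exp(-1) while the heavily loaded cell falls to exp(-3), capturing the qualitative effect the abstract describes.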
    The case analysis confirms that the number of machining equipment, processing time, machined parts, and degradation parameters inside a processing cell all affect the cell’s reliability. When several pieces of equipment process the same types of parts, the failure of one processing cell can leave multiple pieces of equipment unable to complete production; the risk of production plan failure can be reduced by analyzing the number of similar parts across different pieces of equipment and making adjustments. Finally, the correctness and effectiveness of the proposed method are verified by comparison with simulation-based results. Compared with the learning cost of simulation software, the model evaluation offers better practicability and convenience, and can support production management and real-time decision making in cellular manufacturing.
    Rumor Detection on Social Platforms Using Multi-modal Multi-layer Attention Networks
    ZHANG Yaozeng, MA Jing
    2025, 34(10):  17-23.  DOI: 10.12005/orms.2025.0303
    Abstract   PDF (1470KB)
    In today’s rapidly evolving information age, social media has become an indispensable platform for disseminating various types of information to broad and diverse audiences. However, the surge of content on these platforms has also led to negative consequences, particularly the chaos and misinformation caused by rumors. The content on these platforms often includes both text and images, and this multimodal nature makes it difficult for users to discern the authenticity of information, leading to the widespread dissemination and adoption of rumors, which threatens social stability. The emergence of large language models like ChatGPT has significantly lowered the barriers to generating and spreading information, making it easier to create rumors. Therefore, there is a pressing need to continuously advance rumor detection technology to mitigate the harmful impact of rumors and protect individuals from their influence. Traditionally, rumor detection technologies have primarily focused on identifying relevant features in text and images; however, the complex relationships among rumor writing styles, the potential for image tampering, and multimodal information remain critical areas that need attention. This study addresses these challenges by developing an advanced deep learning framework called the multi-modal multi-layer attention network (MMAN), which integrates multiple data modalities and utilizes multi-layer attention mechanisms to uncover the complex patterns inherent in deceptive content. The goal is to enhance the accuracy and efficiency of rumor detection systems, thereby reducing the harmful impact of misinformation on individuals and society.
    This study focuses on constructing the MMAN framework for rumor detection and conducting a multi-dimensional analysis of it. The framework uses the TF-IDF and PMI algorithms to build a text segment-word network, and then employs a graph convolutional network to capture writing style features associated with rumors or non-rumors. Additionally, error level analysis is used to detect tampered parts of images and extract corresponding features, in addition to traditional image semantic features. Inspired by transformer encoders, the study constructs multi-modal feature encoders to acquire high-dimensional features across different modalities. The model is trained with the AdamW optimizer, combined with early stopping to optimize computational resource usage. Hyperparameters are tuned meticulously through grid search to determine the best combination, ensuring optimal detection accuracy for rumor posts. The model’s performance is further validated on datasets from two major social platforms, with rigorous comparisons against baseline models to demonstrate its superiority. The study also visualizes the attention weight matrices at the end of the text and image feature extraction sub-networks to interpret the model, and uses t-SNE dimensionality reduction to visualize the feature sequences output by the core modules, allowing a detailed analysis of the model’s primary functions. Finally, the model’s robustness is strictly evaluated by introducing noisy data and combining the original data with noisy data from different modalities, comprehensively assessing its resilience against interference.
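The TF-IDF weighting used in the text segment-word network can be illustrated with a minimal plain-Python sketch (the PMI edges and the graph convolutional network that follow it in the paper are omitted here):

```python
import math
from collections import Counter

# Minimal TF-IDF sketch for the text-graph construction step; the toy
# documents below are invented for illustration.
def tfidf(docs):
    n = len(docs)
    df = Counter(w for doc in docs for w in set(doc))  # document frequency
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({
            w: (tf[w] / len(doc)) * math.log(n / df[w])
            for w in tf
        })
    return scores

docs = [["breaking", "news", "rumor"], ["verified", "news"]]
weights = tfidf(docs)
# "rumor" appears only in doc 0, so it gets a positive weight;
# "news" appears in every doc, so its idf (and hence weight) is 0.
```

In the paper these weights become edge weights between text segments and words, which the GCN then propagates to learn writing-style features.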
    This study provides a viable deep learning approach for rumor detection, successfully developing and validating a deep learning rumor detection model that outperforms baseline models. The experimental results clearly show that the model demonstrates high accuracy and efficiency in detecting rumors on two major social media platforms, both domestic and international. Ablation experiments, conducted by selectively removing various modules of the model, verify the unique contributions and roles of each module in handling different data types, showcasing the model’s strong generalization performance across various social platforms. Additionally, the robustness tests reveal that the model has a certain level of resistance to interference, but its performance declines significantly when dealing with noisy text data; this decline is attributed to its focus on rumor/non-rumor texts on social media platforms. In terms of application, deploying this model in a real-time rumor detection system has significant potential: it can enhance social media regulation by providing users with timely and accurate rumor alerts, thereby effectively curbing the spread of misinformation.
    This study provides a promising pathway for rumor detection; future work could explore more advanced feature extraction techniques and further optimize the model to enhance its performance and robustness. The text feature extraction part of the model may be overly focused on the specific domain of rumor detection, so introducing pre-trained models in the future could enhance its generalization ability and address its vulnerability to textual noise. Regarding the Weibo dataset, many images may not be directly related to the text content of posts, which could lead to poor image feature extraction in the initial stages; more sophisticated feature extraction methods could therefore be considered to extract more effective image features from the outset. We extend our heartfelt gratitude to the invaluable data sources used in this study and the pioneering contributions in the fields of rumor detection and deep learning. Additionally, we sincerely thank the expert reviewers and editors for their meticulous efforts, which have significantly improved the quality and rigor of this research.
    Location and Capacity Determination Method of Charging Station Considering Partial Charging Behaviors
    LU Xinhui, ZHOU Zongdian, ZHOU Kaile
    2025, 34(10):  24-30.  DOI: 10.12005/orms.2025.0304
    Abstract   PDF (1135KB)
    Due to excessive exploitation, the supply of fossil fuels is gradually becoming unable to meet the current huge demand, and their use often brings environmental pollution. Transportation is one of the main fields that consume fossil fuels. As an effective alternative to traditional vehicles fueled by fossil fuels, a large number of electric vehicles have been produced in recent years thanks to their clean and environmentally friendly characteristics. However, because of the limited battery capacity of electric vehicles, drivers may need to visit a charging station once or even multiple times during long-distance travel. Meanwhile, the current number of electric vehicle charging stations is relatively limited, making it difficult to meet the growing demand for charging. Therefore, the location of electric vehicle charging stations is an important issue.
    This article investigates the problem of locating electric vehicle charging stations on a highway network considering capacity limitations and partial charging behavior. Compared with the cost of electricity charged in scenarios such as home and workplace, the cost of electricity charged on highways is usually much higher. In addition, when electric vehicles are charged on highways, owners are often forced to wait, whereas in home and workplace charging they can engage in other activities such as work and leisure. Charging during long-distance travel therefore carries a higher time cost. In these situations, electric vehicle owners are often only willing to add enough electricity to complete the journey rather than fully charging. This partial charging behavior can effectively alleviate the capacity pressure on charging stations, and taking this alleviation into account when selecting sites can improve the utilization efficiency of charging stations to a certain extent.
    The first part of this article explains concepts such as deviation paths, charging behavior, coverage standards, and model assumptions. Given that the deviation paths of different OD pairs contain the coupling relationship between nodes and candidate nodes of the charging station, this article establishes a set of nodes included in the deviation paths of each OD pair to represent the corresponding relationship between the nodes included in the deviation path and the candidate nodes of the charging station.
    The second part proposes a mixed integer linear programming model to optimize the location problem of electric vehicle charging stations, in order to meet the travel needs of electric vehicles under limited budgets as much as possible. This model simulates the capacity from the perspective of the charging capacity of the charging pile, and completes the modeling of partial charging behavior by considering the logical relationship of electric vehicle electricity between nodes in the driving path. At the same time, a complete charging constraint is added to consider the charging strategy in this situation.
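To make the budgeted-coverage trade-off concrete, here is a brute-force toy sketch on invented data (the site names, costs, demands, and coverage sets are all hypothetical, and the paper solves a full mixed integer linear program with GUROBI rather than enumerating subsets):

```python
from itertools import combinations

# Hypothetical instance: which candidate sites to build, within budget,
# to cover the most OD-pair demand.
sites = {"A": 3, "B": 2, "C": 4}               # candidate site -> build cost
od_demand = {"od1": 50, "od2": 30, "od3": 20}  # OD pair -> demand vehicles
# Site subsets able to serve each OD pair (any listed subset suffices).
covers = {"od1": [{"A"}, {"B", "C"}], "od2": [{"B"}], "od3": [{"C"}]}
BUDGET = 5

def best_plan():
    best, best_cov = set(), 0
    names = list(sites)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            chosen = set(combo)
            if sum(sites[s] for s in chosen) > BUDGET:
                continue  # infeasible under the budget
            cov = sum(d for od, d in od_demand.items()
                      if any(req <= chosen for req in covers[od]))
            if cov > best_cov:
                best, best_cov = chosen, cov
    return best, best_cov
```

In this toy instance the budget forces a choice, and building sites A and B covers 80 of the 100 demand vehicles; the MILP in the article additionally models charger capacity, partial charging, and battery-level logic along deviation paths.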
    The third part of this article provides a case study for testing a simulated highway network in Anhui Province, China with 16 nodes and 24 edges. The GUROBI solver is used to solve the model, obtaining location schemes for charging stations considering two different charging behaviors: partial charging and full charging. The article compares and further analyzes the coverage of demand vehicles and OD pairs under these different charging behaviors. The results show that, under the same budget constraints, the location scheme considering partial charging behavior provides better coverage for demand vehicles and OD pairs.
    In summary, this article studies the location problem of electric vehicle charging stations under the conditions of considering partial charging behavior and capacity limitations. A mixed integer linear programming model is proposed, and the superior performance of the location scheme considering partial charging behavior is demonstrated through case analysis.
    Real-time Estimation Method of Travel Time Reliability of Urban Road Network
    WANG Jiawen, CHEN Chao, ZHAO Jing, LI Wenbo, HANG Jiayu
    2025, 34(10):  31-36.  DOI: 10.12005/orms.2025.0305
    Abstract   PDF (1125KB)
    Urban road networks are often affected by periodic or stochastic perturbations, leading to traffic issues such as supply chain disruptions or increased personal travel costs. As a probabilistic expression of system risk, network reliability is defined as the probability of the road network providing a satisfactory service level under random disturbances. Travel time reliability is an important indicator for measuring the reliability of the road network, and can be represented by the distribution of vehicle travel times. Existing studies mostly rely on simulation or historical data, and real-time estimation of network travel time reliability has not yet been achieved. This study proposes a methodological framework for estimating the threshold, expectation, and variance of road network travel time reliability using cross-sectional data as input, including a real-time estimation method for the expectation and variance of the probability distribution of vehicle delay travel time ratios. These methods reduce the constraints on data volume and data types, providing theoretical support for evaluating the performance of traffic control in different regions from a reliability perspective.
    Firstly, the statistical definition of real-time network travel time reliability is clarified, and its quantitative mathematical expression is given: the probability that the vehicle delay travel time ratio in the road network is less than a given threshold within a differential time period. On this basis, a quantitative model for network travel time reliability is established, and a framework for estimating the threshold, expectation, and variance of network travel time reliability using cross-sectional data as input is proposed. Then, the delay travel time ratio threshold is determined from the percentiles of the probability distribution of the delay travel time ratio, and the BPR function is used to estimate the expectation of the delay travel time ratio. The variance of the travel time ratio is estimated from the relationship between the gap between the ideal and actual total travel distance of vehicles in the macroscopic fundamental diagram and the traffic flow density distribution of the road network. Combined with practical application requirements, reliability estimation steps are provided. Finally, a microscopic simulation network is used as an example to verify the effectiveness of the proposed real-time reliability estimation method.
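The expectation step relies on the standard BPR volume-delay function; a minimal sketch follows, using the conventional default coefficients alpha=0.15 and beta=4, which are not necessarily the values calibrated in the paper:

```python
# Standard BPR (Bureau of Public Roads) volume-delay function.
def bpr_travel_time(free_flow_time, volume, capacity, alpha=0.15, beta=4):
    return free_flow_time * (1 + alpha * (volume / capacity) ** beta)

# Delay travel time ratio: extra travel time relative to free-flow time.
def delay_travel_time_ratio(free_flow_time, volume, capacity):
    t = bpr_travel_time(free_flow_time, volume, capacity)
    return (t - free_flow_time) / free_flow_time

# At volume == capacity the ratio equals alpha (0.15 here); it grows
# with the fourth power of the volume-to-capacity ratio.
```

Comparing this ratio against the percentile-based threshold is what yields the reliability probability in the framework above.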
    The estimation method described in this study yields average absolute errors of 0.0568, 0.0617, and 0.0759 when the signal cycle of the road network is 60s, 90s, and 120s, respectively, with an estimated reliability error of less than 10%. The estimation results are better when the network traffic flow is in a non-saturated state. In applications of this method in Qingpu District of Shanghai and Binjiang District of Hangzhou, a reliability error below 20% can meet the daily usage requirements of traffic management departments. This proves that dynamic estimation of network travel time reliability can be achieved with only partial cross-sectional traffic detection data from within the network, providing a new solution for estimating road network travel time reliability.
    This study can provide traffic information that reflects the current regional travel reliability for traffic participants: the higher the network travel time reliability, the greater the probability of completing the trip as planned in that area. For traffic managers, the research results can support the evaluation of traffic control effectiveness at the reliability level, providing theoretical support for the performance evaluation of traffic control in different regions. This study has not yet analyzed the actual impact of the delay travel time ratio threshold. To address this limitation, crowdsourced vehicle trajectory data will be collected to conduct an empirical sensitivity analysis in the future.
    Identifying Critical Line in Power Transmission Network for Connectivity and Cascading Failure
    DU Yongjun, HE Mingyu, CAI Zhiqiang, WANG Ning
    2025, 34(10):  37-43.  DOI: 10.12005/orms.2025.0306
    Abstract   PDF (1063KB)
    Large blackouts are typically caused by line cascading failures in a power transmission network. For some key institutions in a city, such as hospitals and command centers for disaster mitigation, there is an essential need for ensuring the connectivity of power lifelines between a power plant and a high-voltage substation in a specific area when the line cascading failures occur. Oriented to this need, there is a vital and challenging problem of determining some critical lines for maintaining the connectivity.
    Once these critical lines are determined, on the one hand, efforts can be focused on protecting these critical lines before the blackout occurs, in order to prevent the paralysis of the power transmission network; on the other hand, after the blackout occurs, limited resources can be focused on prioritizing the maintenance of these lines in order to restore the connectivity of the power lifeline as soon as possible.
    The current methods to identify critical lines have focused on the vulnerability of the line itself and its role in failure propagation or the serious consequences of the line failure, such as the magnitude of the load loss and the size of the blackout.
    However, the current methods fail to identify critical lines for maintaining the connectivity of power lifelines in the context of line cascading failures, so they cannot be used to scientifically guide the maintenance and optimization of power transmission networks. To this end, this paper proposes a method to identify critical lines by combining connectivity with the development mechanism of line cascading failures in a power transmission network.
    First, to characterize the cascading failure process of lines, a cascading model is developed in which the blackout size is measured by the number of failed lines. The blackout size is a random variable related to the initial disturbance, load threshold, and transfer load. To develop the probability distribution of the blackout size, we summarize a normalized cascading failure model in which the initial load for each link is a uniformly random variable distributed in the interval [0,1]. According to the normalized cascading failure model, we derive the probability distribution of the blackout size, which is a saturating quasibinomial distribution with parameters such as the initial disturbance, load threshold, and transfer load.
    Applying the saturating quasibinomial distribution, we develop a formula to calculate the reliability of the power transmission network, where the reliability is the probability that a power plant and a high-voltage substation can be connected by some operational lines. Based on the network reliability, a Bayesian importance measure is proposed to quantify the importance of lines for maintaining connectivity. The Bayesian importance measure depends on the distribution of the blackout size and the structure of the power transmission network, where the structure is quantified by the concept of the structural spectrum.
    Exactly calculating the structural spectrum is an NP-hard problem; thus, exactly calculating the Bayesian importance measure is difficult. Therefore, a numerical algorithm is constructed to approximately evaluate the Bayesian importance measure, so as to identify the critical lines. The greater the importance of the link is, the more critical the link is, and vice versa.
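A Bayesian importance measure of this general kind can be illustrated by exhaustive state enumeration on a tiny two-path network (the example below uses independent line failures with an invented probability q, not the paper's saturating quasibinomial distribution or its structural-spectrum approximation):

```python
from itertools import product

# Toy 4-node network: source "s" reaches terminal "t" via "a" or "b".
lines = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
q = 0.1  # per-line failure probability (illustrative only)

def connected(up_lines):
    """Is s connected to t using only working lines? (simple search)"""
    adj = {}
    for u, v in up_lines:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, stack = {"s"}, ["s"]
    while stack:
        for w in adj.get(stack.pop(), []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return "t" in seen

def bayesian_importance(i):
    """P(line i is failed | network is disconnected), by enumeration."""
    p_fail_sys = p_joint = 0.0
    for state in product([0, 1], repeat=len(lines)):  # 1 = line up
        prob = 1.0
        for s in state:
            prob *= (1 - q) if s else q
        if not connected([l for l, s in zip(lines, state) if s]):
            p_fail_sys += prob
            if state[i] == 0:
                p_joint += prob
    return p_joint / p_fail_sys
```

Enumeration is exponential in the number of lines, which is exactly why the paper resorts to a numerical approximation of the structural spectrum; in this symmetric toy network all four lines come out equally important.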
    Finally, a case study on the Taiwan power transmission network is presented to illustrate how the Bayesian importance measure can effectively assist in obtaining the criticality of lines with respect to the reliability of the power transmission network in the context of line cascading failures. A sensitivity analysis is conducted to determine the impact of changing the parameters of the saturating quasibinomial distribution on the Bayesian importance measure. The experimental results reveal that, given a fixed initial disturbance, when the load threshold is larger and the transfer load is smaller, the difference in the criticality degree of the lines becomes more significant. These numerical results show that the proposed method for identifying critical lines can aid decision-making when performing emergency maintenance of a power transmission network and ensuring the connectivity of power lifelines.
    In the context of cascading line failures, this paper proposes a method for identifying critical lines, which quantifies the effects of individual links on maintaining the connectivity of the power transmission network. However, this method fails to consider the joint effects of two links on network connectivity. Therefore, for future studies, we will concentrate on the interaction effects of two links on the connectivity of the power transmission network, so as to comprehensively identify critical lines based on these interaction effects.
    Quantum Game Analysis of Competition and Cooperation Relationship between Two Manufacturers Producing Alternative Products
    LI Yanhui, GAO Huan, YAO Qi, GUAN Xu
    2025, 34(10):  44-51.  DOI: 10.12005/orms.2025.0307
    Abstract   PDF (1229KB)
    With the development of information globalization and the modernization of information technology, the adjustment of competitive relationships and the dynamic integration of resources between enterprises have a profound impact on the improvement of enterprise performance. More and more enterprises gradually realize that, in order to survive and develop amid fierce competition, they need to break through their own limitations in technology and resources, explore new modes of cooperation, and fully exploit market opportunities by uniting external forces or establishing cooperative relationships with other enterprises. At the same time, various forms of cooperation, such as information cooperation, technology cooperation, and production cooperation, have emerged as enterprises explore new ways of cooperating. However, conflicts of individual interests often lead to contradictions and competition within inter-enterprise cooperation, and the degree of competition and cooperation differs significantly across stages, making it difficult to form stable cooperative relationships. How to maintain high-quality and stable supply chain cooperation remains a problem worthy of further research and discussion.
    This paper applies quantum game theory to study, via mathematical modeling, cooperative production between competing enterprises built around critical components. It aims to reveal how the quantum entanglement mechanism affects the stability of cooperation between rational individuals, thereby offering a new method of strategic choice for enterprises in different market positions and a new research approach to achieving “win-win” outcomes. The main research content and methods are as follows: (1) A quantum game analysis is carried out for the competitive-cooperative supply chain formed by manufacturers with independent production capacity around key components, and the game relationship between manufacturers' competitive and cooperative behavior and the decisions on each parameter are discussed. (2) Four combined competition-cooperation strategy models are constructed for manufacturers under wholesale competition and license competition, and the changes in quantum entanglement and their effects on revenue and equilibrium are analyzed. (3) Quantum entanglement is used to model the entanglement of interests in competitive relations; the optimal solutions of the classical Nash equilibrium and the quantum equilibrium are obtained, and the effects of different entanglement conditions on manufacturers' decision parameters and optimal strategies are compared.
    The results show that, by introducing manufacturer competition and cooperation into the quantum game paradigm and accounting for entangled states, manufacturers that actively seek cooperation do not have to bear the risk of “betrayal”. This resolves, to some extent, the “prisoner's dilemma” of manufacturer cooperation in classical games. In addition, appropriately enhancing the “entanglement” with cost-advantaged enterprises can narrow the gap in equilibrium payoffs between the two parties. Manufacturers with a cost advantage can use the wholesale price as a mechanism to influence the cooperation decisions of manufacturers without one. If a manufacturer holds a dominant position in the industry, its optimal quantum strategy is to sign a wholesale cooperation contract with its market competitors. To induce competitors to cooperate, cost-advantaged manufacturers must offer a lower wholesale price; thus, when seeking the best strategy, they face a price trade-off: even with a cost advantage, they need to reduce their wholesale price to attract competitors into cooperation.
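The entanglement mechanism behind these results can be illustrated with the Eisert-Wilkens-Lewenstein (EWL) quantum prisoner's dilemma, the standard scheme this line of work builds on. The sketch below is not the paper's wholesale/license model: the payoff values (3, 0, 5, 1), the maximal entanglement γ = π/2, and the strategy set {C, D, Q} are illustrative assumptions.

```python
import cmath, math

def kron2(A, B):
    # Kronecker product of two 2x2 matrices -> 4x4
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def dagger(M):
    return [[M[j][i].conjugate() for j in range(4)] for i in range(4)]

def U(theta, phi):
    # EWL two-parameter strategy; C = U(0,0), D = U(pi,0), Q = U(0,pi/2)
    return [[cmath.exp(1j * phi) * math.cos(theta / 2), math.sin(theta / 2)],
            [-math.sin(theta / 2), cmath.exp(-1j * phi) * math.cos(theta / 2)]]

C, D, Q = U(0, 0), U(math.pi, 0), U(0, math.pi / 2)

def payoffs(UA, UB, gamma=math.pi / 2, r=3, s=0, t=5, p=1):
    # Entangling gate J = cos(g/2) I + i sin(g/2) (D x D) of the EWL scheme
    DD = kron2(D, D)
    I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    J = [[math.cos(gamma / 2) * I4[i][j] + 1j * math.sin(gamma / 2) * DD[i][j]
          for j in range(4)] for i in range(4)]
    psi = matvec(J, [1, 0, 0, 0])        # entangle |CC>
    psi = matvec(kron2(UA, UB), psi)     # apply local strategies
    psi = matvec(dagger(J), psi)         # disentangle before measurement
    prob = [abs(a) ** 2 for a in psi]    # P(CC), P(CD), P(DC), P(DD)
    pa = r * prob[0] + s * prob[1] + t * prob[2] + p * prob[3]
    pb = r * prob[0] + t * prob[1] + s * prob[2] + p * prob[3]
    return round(pa, 6), round(pb, 6)
```

With γ = π/2, mutual defection still yields (1, 1), but the quantum strategy Q punishes a defector, so (Q, Q) with payoffs (3, 3) becomes an equilibrium and the classical dilemma dissolves; setting γ = 0 recovers the classical game.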
    There is considerable room to extend research on cooperative relationships from the quantum-strategy perspective. Beyond the competition and cooperation strategies around key components analyzed in this paper, characteristics such as quantum entanglement and quantum decoherence could be mapped onto the analysis of joint investment and resource development, knowledge and resource sharing, environmental protection, and other inter-enterprise decisions, helping us better understand strategic choices and outcomes in these complex relationships.
    Robust Optimization of Unmanned Aerial Vehicle Delivery Point Selection under Uncertainty
    JI Mingjun, HUANG Youfeng
    2025, 34(10):  52-58.  DOI: 10.12005/orms.2025.0308
    In recent years, frequent sudden-onset natural disasters and public health events have attracted widespread attention globally. These disasters, including earthquakes, floods, epidemics, and mudslides, not only pose a serious threat to people's lives and property but also severely damage public infrastructure and transportation systems, further aggravating rescue and emergency response difficulties. In the aftermath of a disaster, the rapid distribution of relief materials is crucial to mitigating the situation and saving lives. As an emerging technology, the unmanned aerial vehicle (UAV) can overfly complex terrain and is not subject to ground transportation interruptions, significantly improving the efficiency, safety, and stability of “last-mile” distribution; it shows unique advantages in relief material delivery in particular. However, the suddenness of disasters leads to uncertainty in relief distribution. Compared with traditional vehicle transportation, the energy consumption of UAV flight is easily affected by internal and external factors such as payload weight and weather. In addition, disaster-stricken areas are often accompanied by unpredictable factors such as population movement, infrastructure damage, and environmental changes, so actual material demand may differ from forecasts. In the humanitarian spirit, it is crucial to meet the demand for relief materials in disaster-stricken areas quickly; however, making decisions based only on forecast data and current weather conditions may lead to insufficient reserves at delivery points and limited UAV range during the subsequent transportation phase. Therefore, under uncertainty, scientifically and reasonably planning the layout of temporary UAV delivery points and the allocation of supplies in disaster-stricken areas has become an urgent problem. To improve distribution efficiency and reliability and to support post-disaster relief effectively, this paper studies the siting of temporary UAV delivery points, the allocation of UAV resources, and a demand-splittable delivery strategy, considering the uncertainty of both UAV power consumption and demand at disaster-affected sites, which is of theoretical and practical significance.
    In this paper, a two-stage robust optimization model is constructed to cope with the uncertainties of demand at post-disaster demand points and of UAV power consumption, with both characterized by budget uncertainty sets. The model minimizes total costs, including the delivery point siting cost, the drone configuration cost, the drone delivery cost, and the penalty cost for unmet demand. In the first stage, the most suitable locations are selected from a set of known candidate sites to build temporary delivery points, which serve both as landing platforms for UAVs and as warehouses for relief materials. Based on the UAVs' service radius constraints, the demand points served by each temporary delivery point are determined and an appropriate number of drones is allocated. In the second stage, once the uncertain demand and power consumption information is revealed, specific delivery tasks are assigned, fully accounting for the UAVs' load, energy, and capacity constraints. Given the limited carrying capacity of a UAV, demand-splittable delivery is considered: when a single delivery cannot satisfy all the demand at a point, other UAVs are allowed to complete the subsequent deliveries. To accommodate this scenario, the UAV's power consumption is approximated by a linear function. After transforming the model into its dual form and linearizing it, the column-and-constraint generation (C&CG) algorithm is used to solve the problem.
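As a small illustration of how a budget uncertainty set works (the function name and the numbers are made up; the paper's actual set also covers UAV power consumption, and the worst case is identified inside the C&CG loop rather than by this simple greedy): each uncertain demand may deviate from its nominal value by at most a deviation bound, and the budget Γ caps the total normalized deviation the adversary may spend.

```python
def worst_case_demand(nominal, deviation, gamma):
    # Budget uncertainty set: d_i = nominal_i + z_i * deviation_i,
    # with 0 <= z_i <= 1 and sum(z_i) <= gamma. For a demand-maximizing
    # adversary, spending the budget on the largest deviations is optimal.
    n = len(nominal)
    order = sorted(range(n), key=lambda i: deviation[i], reverse=True)
    z = [0.0] * n
    budget = gamma
    for i in order:
        z[i] = min(1.0, budget)
        budget -= z[i]
        if budget <= 0:
            break
    return [nominal[i] + z[i] * deviation[i] for i in range(n)]
```

With Γ = 0 the model reduces to the deterministic one; raising Γ makes the solution more conservative, which is the lever decision makers use to trade cost against protection.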
    This paper uses randomly generated data for experimental analysis. The results show that the proposed two-stage robust optimization model exhibits strong effectiveness, robustness, and flexibility compared to the deterministic model, the two-stage model, and the single-stage robust model: while keeping the increase in total cost slight, it significantly improves the ability to withstand uncertainty risk. In the demand-splittable delivery scenario in particular, the linear approximation of UAV power consumption proves highly practical and effective. To further verify the effectiveness and efficiency of the algorithm, the C&CG algorithm is compared with the Benders decomposition algorithm; the experiments show that C&CG outperforms Benders decomposition in solution efficiency and in handling large-scale problems. The experiments also find that demand uncertainty has a greater impact on the model than power consumption uncertainty. Decision makers can adjust the uncertainty level parameters and fluctuation coefficients in the model according to the actual situation to obtain solutions adapted to different uncertainty levels, thus enhancing the overall flexibility of the rescue plan. The research results provide flexible and practical decision support for post-disaster rescue operations, helping governments and rescue organizations improve response capability and efficiency and better cope with disaster events.
    Adjustable Robust Optimization Approach to Multi-project Scheduling with Uncertain Local Resources Supply
    ZHANG Haohua, BAI Sijun, LI Lubo
    2025, 34(10):  59-65.  DOI: 10.12005/orms.2025.0309
    The resource-constrained multi-project scheduling problem consists in constructing a schedule under which projects can be executed in parallel without violating resource and precedence constraints. The baseline schedule plays a crucial role in multi-project scheduling. However, efficient multi-project scheduling becomes more challenging due not only to the complex resource constraints and inter-dependencies between projects but also to the increasing size and complexity of projects. Moreover, in the real world, projects are subject to considerable uncertainty; two types are commonly studied: duration uncertainty and renewable resource uncertainty. A multi-project scheduling plan should be not only robust but also competitive in the market and in uncertain environments. There is ample research on robust optimization but little on adjustable robust optimization, which is more applicable than the traditional robust approach. As a newer method for uncertain optimization problems, adjustable robust optimization improves on the applicability of traditional robust optimization. The purpose of this paper is to determine multi-project scheduling schemes with different robustness levels.
    In this paper, we propose the multi-project adjustable robust scheduling problem in uncertain environments, where local resource supply uncertainty and activity duration uncertainty are considered simultaneously. Multi-project management is divided into a multi-project layer and a project layer, and a stochastic scheduling model and an adjustable robust optimization model based on resource flows are proposed. Unlike stochastic programming methods for multi-project scheduling, robust optimization requires no probabilistic information about uncertain parameters and assumes only that they lie in a given uncertainty set. We design a new adjustable robust optimization method to generate robust baseline schedules for multiple projects. The first stage is project scheduling based on priority rules, which finds feasible schedules satisfying local resource and precedence constraints; each project is simulated under different scenarios of uncertain local resources and activity durations. In the second stage, we propose a linear adjustable robust optimization model based on the global resource flow to decide the start times of the projects according to how long they may be delayed. Compared with existing adjustable robust optimization approaches, this approach not only performs robust optimization according to the manager's risk attitude but also generates baseline schedules with different robustness levels under each risk attitude. Furthermore, unlike existing adjustable robust optimization methods that can only solve small-scale problems, it integrates heuristics and exact algorithms and can efficiently solve large-scale problems.
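The priority-rule scheduling of the first stage can be sketched with a serial schedule-generation scheme, the standard building block for such heuristics. The instance below (three activities, one renewable resource with per-activity demand no larger than capacity, a made-up priority order) is purely illustrative; the paper's procedure additionally simulates uncertain durations and local resource supply across scenarios.

```python
def ssgs(durations, preds, demand, capacity, priority):
    # Serial schedule-generation scheme: take activities in priority order
    # (lower value first), respecting precedence, and start each one at the
    # earliest time where a single renewable resource capacity is not exceeded.
    horizon = sum(durations.values())
    usage = [0] * (horizon + 1)
    start, finish = {}, {}
    remaining = sorted(durations, key=lambda a: priority[a])
    while remaining:
        # next highest-priority activity whose predecessors are all scheduled
        a = next(x for x in remaining if all(p in finish for p in preds[x]))
        t = max((finish[p] for p in preds[a]), default=0)   # precedence
        while any(usage[u] + demand[a] > capacity
                  for u in range(t, t + durations[a])):      # resource check
            t += 1
        for u in range(t, t + durations[a]):
            usage[u] += demand[a]
        start[a], finish[a] = t, t + durations[a]
        remaining.remove(a)
    return start, finish

durations = {1: 3, 2: 2, 3: 2}
preds = {1: [], 2: [1], 3: []}
demand = {1: 2, 2: 1, 3: 2}
priority = {1: 0, 3: 1, 2: 2}   # lower value = scheduled earlier
start, finish = ssgs(durations, preds, demand, capacity=3, priority=priority)
```

Here activity 3 cannot run alongside activity 1 (combined demand 4 exceeds capacity 3), so it is pushed to time 3, where it coexists with activity 2; swapping the priority rule changes the baseline schedule, which is what the scenario simulation exploits.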
    Finally, numerical experiments are performed. Compared with the traditional robust optimization method, the two-stage adjustable robust optimization approach is more flexible and effectively avoids the over-conservatism of traditional robust methods, while providing different schedules according to the manager's risk attitude. In addition, the parameter analysis shows that changes in local resource supply and in the strength of precedence relations between projects have a more significant impact on the optimization results, whereas changes in stochastic activity durations have a relatively small impact. In multi-project scheduling under uncertainty, although a robust baseline schedule can account for various uncertainties, it can hardly resist all disturbances, such as global resource disruptions and changes in project network relationships. Future research therefore needs to design reactive scheduling methods to ensure that multiple projects are completed on time in uncertain environments.
    Optimization Algorithms for Solving Restricted Container Relocation Problem Based on Q-learning
    WANG Wenjie, HU Zhihua, TIAN Xidan
    2025, 34(10):  66-72.  DOI: 10.12005/orms.2025.0310
    The container relocation problem in container terminal yards greatly affects the efficiency of container terminal operations. Inefficient relocation strategies prolong customers' waiting times and increase operating costs in container yards. Additionally, to improve yard management performance, relocation schedules must be generated in real time, and the continuously increasing throughput of container terminals challenges both the speed and the quality of strategy generation. This paper focuses on the restricted container relocation problem with distinct priorities (RCRP): only the topmost container of a stack can be moved, relocations are restricted to the containers stacked above the one currently to be retrieved, and the objective is to minimize the number of relocations.
    To address this problem, a model-free heuristic Q-learning algorithm is devised based on the ε-greedy strategy. First, the RCRP is formulated as a Markov decision process, defining a state space, an action space, and a state transition process suitable for reinforcement learning. A method is developed to extract state features from the bay, reducing the original state space and avoiding excessive dimensionality, and a reward mechanism is tailored to the problem. An environment and an agent are then constructed using model-free heuristic Q-learning, where the Q-table is updated via off-policy temporal-difference learning. Action-selection strategies are established for each phase to enhance the training performance and decision-making capacity of the agent: an ε-greedy strategy improves exploration and convergence during learning, while heuristic rules prevent the agent from making random predictions under insufficient information during decision-making, enhancing its ability to obtain optimal solutions. An ablation experiment, a performance experiment, and a generalization experiment are conducted to evaluate the effectiveness, solution performance, and generalization capability of the algorithm using eight evaluation criteria.
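The core of the approach, the off-policy temporal-difference update with ε-greedy exploration, can be sketched on a deliberately tiny chain environment. The 4-state chain, the reward of 1 for reaching the goal, and the hyperparameters are invented for illustration; the paper's bay-state features, reward mechanism, and heuristic decision rules are far richer.

```python
import random
from collections import defaultdict

ACTIONS = ['left', 'right']

def step(state, action, n_states=4):
    # deterministic chain: walking right reaches the goal state for reward 1
    nxt = min(state + 1, n_states - 1) if action == 'right' else max(state - 1, 0)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward, nxt == n_states - 1

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=7):
    random.seed(seed)
    Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy behaviour policy: explore with probability eps
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # off-policy TD (Q-learning) update: target uses the greedy max
            Q[s][a] += alpha * (r + gamma * max(Q[s2].values()) * (not done) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy is read directly from the Q-table; the paper's heuristic rules intervene at exactly this read-out step when the table carries too little information.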
    Numerical experiments with 280 randomly generated test cases of different sizes show that, for the 40 instances of dimension 8×7, the average prediction time of the Q-learning algorithm with heuristic rules increases by 1.67 times compared to the Q-learning algorithm without them, while the relocation steps it generates amount to only 34% of those of the algorithm without heuristic rules; training times for the two algorithms do not differ significantly. Thus, at an acceptable increase in solution time, incorporating heuristic rules significantly improves the quality of the Q-learning solutions. The heuristic Q-learning algorithm and the branch-and-bound algorithm solve instances of various sizes using almost the same number of relocation steps (similarity between 0.91 and 0.99), with solution quality consistent with or slightly superior to branch-and-bound. In computing time, except for small and extremely large instances, the efficiency difference rate relative to branch-and-bound is approximately 36% to 86%. Agent training time is long for large-scale problems, ranging from 113 to 320 CPU time units, which is acceptable in practice. On instances with 18, 50, 64, and 85 containers, the generalization metric of the average number of relocation steps obtained by branch-and-bound and heuristic Q-learning ranges from 0.97 to 1.00, with no statistically significant difference between the two algorithms. The proposed Q-learning algorithm thus generalizes well.
    As a reinforcement learning method, the devised algorithm achieves satisfactory results when solving problems of the same dimensions, but instances of different dimensions require training separate agents. Additionally, more iterations during agent training are essential to achieving better prediction performance. Future research could design more suitable environment-state extraction methods to make the state representation more concise and improve computational performance, and could refine the action-selection strategies used during agent training and prediction to further enhance the agent's learning effectiveness.
    Robust Optimization Model for Deployment of Road Weather Information System Stations on Smart Highway
    SUN Hongyun, MIN Xudong, ZHANG Litao, YANG Jinshun
    2025, 34(10):  73-79.  DOI: 10.12005/orms.2025.0311
    Continuous global climate change has brought severe meteorological disasters and adverse weather events to every country in recent years, and China is no exception. Such meteorological hazards greatly affect road safety and traffic operation, so road weather management programs have been popularized in many western countries, and road weather information systems (RWIS) have been developed to monitor, predict, and warn of major meteorological events on the road network. Such a system is part of smart highway construction and intelligent meteorological support services, and its effectiveness depends greatly on the layout of RWIS stations. However, locating RWIS stations is far from straightforward, as it involves many internal and external factors. For example, decision-makers face uncertain average annual transport demand for each road segment over the next 5 to 15 years, inconsistent station spacing standards due to fuzzy weather information requirements for smart applications, and investment budget uncertainty stemming from weak world economic growth. These uncertainties increase the difficulty of optimizing the RWIS station layout. This study applies the robust optimization concept to the RWIS station siting problem by modeling uncertain transport demand as a polyhedral uncertainty set, and presents a case study of the Zibo highway network to demonstrate the effectiveness of the proposed methodology, which can also be applied elsewhere.
    In this study, a polyhedral uncertainty set is introduced to describe the uncertainty of highway transport demand, and a robust siting model for RWIS stations is proposed under three assumptions. The robust optimization model maximizes the total freight turnover serviced by the RWIS station network, and it includes two special constraints: a minimum station spacing tied to the smart highway level, and an investment budget reflecting weak investment willingness. Because the objective function contains a max operation over the uncertainty set, the equivalence between the inner max subproblem and its dual min subproblem is first proved; the original robust model is then transformed into an equivalent mixed integer programming problem in the manner of BERTSIMAS and SIM (2004). In addition, the nonlinear minimum station spacing constraint is converted to an equivalent linear constraint using logic constraints and the big-M method. As a result, an equivalent mixed integer linear programming (MILP) problem is derived from the original robust model. Because the CPLEX solver handles large-scale MILP problems efficiently, it is called from a master solving program written in Python (in PyCharm).
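One standard outcome of such a logic-constraint/big-M treatment is a set of pairwise conflict cuts of the form x_i + x_j ≤ 1 for candidate sites closer than the minimum spacing; whether this matches the paper's exact construction is an assumption. The toy sketch below (candidate kilometre posts, turnover values, costs, and the brute-force search are all invented) shows the cuts and a budget-constrained selection; the paper solves the full MILP with CPLEX instead.

```python
from itertools import combinations

def spacing_constraints(positions, min_spacing):
    # Linearized minimum-spacing rule: two candidate stations closer than
    # min_spacing along the highway cannot both be built: x_i + x_j <= 1.
    return [(i, j)
            for i in range(len(positions))
            for j in range(i + 1, len(positions))
            if abs(positions[i] - positions[j]) < min_spacing]

def best_layout(positions, turnover, cost, budget, min_spacing):
    # brute force over subsets (fine for a handful of candidates)
    conflicts = set(spacing_constraints(positions, min_spacing))
    best, best_val = (), 0.0
    n = len(positions)
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            if sum(cost[i] for i in subset) > budget:
                continue                      # investment budget constraint
            if any((i, j) in conflicts for i in subset for j in subset if i < j):
                continue                      # minimum spacing constraint
            val = sum(turnover[i] for i in subset)
            if val > best_val:
                best, best_val = subset, val
    return best, best_val

positions = [0, 8, 20, 23]   # km posts of candidate sites (made up)
turnover = [5, 6, 4, 7]      # freight turnover each site would serve
cost = [1, 1, 1, 1]
layout, served = best_layout(positions, turnover, cost, budget=2, min_spacing=10)
```

Sites at km 0/8 and km 20/23 conflict under a 10 km spacing, so with budget for two stations the best feasible layout picks the higher-turnover site from each conflicting pair.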
    Data preparation for the case study includes three parts. First, static information about the planned Zibo highway network and candidate RWIS stations in 2025 is collected, and major model parameters are configured. Second, both the average annual passenger transport demand and the average annual cargo transport demand from 2021 to 2035 are predicted as nominal values. Third, the proportion of disturbance and the level of uncertainty are determined according to existing studies. Several numerical analyses are then carried out, with the following findings: (1) The proposed model is sensitive to the proportion of disturbance and the level of uncertainty of transport demand. When there is neither disturbance nor uncertainty, the optimal objective value, i.e., the total freight turnover serviced, peaks at 77141.5 million ton-kilometers over 2021 to 2035; as the level of uncertainty increases, the total freight turnover serviced decreases. (2) Under the worst-case uncertainty of transport demand, the effect of the minimum station spacing on the optimal layout plan is investigated, indicating that the former may affect the latter; this effect, however, is heterogeneous across highway types with various topography and intelligence levels. (3) Under the worst-case uncertainty of transport demand, the analysis of the investment budget's effect on the optimal objective value and the road section coverage rate shows that progressively increasing the budget from 3.0 to 3.4 million Yuan improves both; once the budget exceeds 3.4 million Yuan, both remain stable.
    The proposed model could be improved by considering other influencing factors and developing heuristic algorithms, and it is worth studying how to jointly optimize the RWIS layout problem with related problems such as meteorological station maintenance resource scheduling and highway intelligence upgrading decisions.
    Research on Safety Risk Prediction and Control Method Model for Full-line Construction of Road Engineering
    DUAN Xiaochen, XING Wenhao, DUAN Pengxin, CHEN Chaofeng
    2025, 34(10):  80-86.  DOI: 10.12005/orms.2025.0312
    The existing methods for predicting and controlling construction risks in the scope of entire road engineering projects exhibit certain limitations, such as latency, lack of comprehensive analysis, and reliance on a single-dimensional approach. These shortcomings fail to effectively address the safety demands of medium to large-scale road engineering projects. This paper aims to address these issues by exploring the influencing factors, evolution mechanisms, development trends, and management strategies of engineering risks, based on the objectives of safety risk prediction and control in the construction of road engineering projects, and supplemented with numerous real historical engineering cases.
    Initially, the paper conducts a work breakdown structure (WBS) analysis and quantifies key features of the case data to elucidate the primary influencing factors of construction activities and accidents. Subsequently, it employs a backpropagation neural network (BPNN) to jointly predict project progress and risks, and establishes a problem traceability system, a countermeasure library, and an early warning response system. Furthermore, it designs a BIM+GIS three-dimensional dynamic safety management system, integrating digital twin and virtual reality technology to construct a visual 7D model for the integrated management of safety risk prediction and control based on on-site three-dimensional modeling, engineering progress, risk, cost, and quality, demonstrated in practice on the full-line construction of the Xifu Expressway.
    The construction of the Xifu Expressway encompasses various terrains, including mountainous, hilly, plain, swamp, river, and wetland areas, with diverse weather conditions throughout the year. It involves a wide range of complex tasks, including extra-large tunnel and bridge projects, making it a representative case for extensive research. The method model proposed in this paper has significantly optimized the practical application of the full-line construction of the Xifu Expressway: achieving over 96% accuracy in identifying engineering construction progress, exceeding 97% accuracy in joint prediction of risks based on progress control, and reducing the construction accident rate by 93% compared to similar domestic projects. Additionally, the resource utilization rate of construction enterprises has increased by 12%, the average operating efficiency has improved by 9%, and the average unit construction cost has decreased by 7%. This demonstrates that effective safety risk management leads to precise prediction and control of accident precursors and risk signs, resulting in high quality, a shorter construction period, and lower investment compared to similar projects at home and abroad.
    The research results indicate that the closed-loop 7D integrated engineering three-dimensional (BIM+GIS)-progress-risk-cost-quality integrated management method model offers several advantages, including accurate analysis, information sharing, scientific control, and the combination of virtual and real aspects, thus possessing high scientific and practical value. This model, based on real case data and multi-dimensional information linkage, overcomes the subjective limitations of traditional expert experience, analytic hierarchy process (AHP), and fuzzy comprehensive evaluation methods. By jointly analyzing engineering progress and risks, it improves the predictive shortcomings of previous neural network algorithms. Through the integration of engineering quality and cost, it refines the prevention and optimization of construction safety risks and clarifies the causal relationship between project management goals and engineering safety risks.
    Furthermore, the research emphasizes the importance of rational optimization and scientific decision-making measures in reducing the probability of accidents. It underscores the necessity of supporting management systems and system construction. With the continuous progress of information technology and the development of new theories, technologies, and algorithm models, the research on safety risk prediction and control in full-line road engineering construction is expected to become more comprehensive and in-depth, better meeting engineering needs.
    Research on Stock Price Volatility Forecast Integrating Sentiment Indicators: Based on Fine-tuned Large Language Model and GAT-TCN Network
    LYU Chengshuang, WANG Tong, SUN Haoran
    2025, 34(10):  87-92.  DOI: 10.12005/orms.2025.0313
    Stock price volatility stands as a pivotal research focus within the financial sector, drawing scholars into the intricate dynamics that underpin stock market movements. Recognizing the complexities involved in forecasting stock price volatility, this research embarks on an extensive analysis using daily data spanning from January 4, 2010 to September 22, 2023. For empirical rigor, six prominent Chinese publicly traded companies—namely, Industrial and Commercial Bank of China, Kweichow Moutai, GD Power Development, Daqin Railway, Yangtze Power, and East China Pharmaceutical—are meticulously selected as the study’s focal samples. Leveraging the capabilities of fine-tuned large-scale language models, this study ventures into the realm of sentiment analysis, delving deep into investor commentary on stocks. Through this, an investor sentiment index is meticulously constructed. Complementing this, the research introduces an avant-garde stock price volatility prediction model that amalgamates graph attention mechanisms with dilated temporal convolutional networks. This innovative approach is designed to forecast stock price volatility, anchoring its predictions on a nonlinear and non-stationary indicator framework.
    Navigating this intricate landscape requires an integrative methodological approach. A salient feature is the use of fine-tuned large-scale language models for nuanced sentiment analysis of investor-generated stock commentary; the insights garnered from this analysis culminate in an investor sentiment index that provides a quantifiable measure of market sentiment. In tandem, the research unveils the GAT-TCN model, an innovative fusion of graph attention mechanisms and dilated temporal convolutional networks, crafted to enhance the precision of stock price volatility predictions within a nonlinear and non-stationary indicator milieu.
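The dilated causal convolution at the heart of the TCN component can be sketched as follows. The fixed kernel weights, the kernel shared across layers, and the absence of residual connections and learned parameters are simplifications for illustration, and the graph attention part is omitted entirely.

```python
def dilated_causal_conv(x, weights, dilation):
    # y[t] = sum_k w[k] * x[t - k*dilation], zero-padded for indices < 0:
    # each output depends only on the past (causal), and the dilation
    # spaces the taps so stacked layers see an exponentially long history.
    out = []
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(weights):
            idx = t - k * dilation
            acc += w * (x[idx] if idx >= 0 else 0.0)
        out.append(acc)
    return out

def tcn_stack(x, weights, depth):
    # stack layers with dilations 1, 2, 4, ... (one shared kernel for brevity)
    for level in range(depth):
        x = dilated_causal_conv(x, weights, 2 ** level)
    return x
```

Stacking layers with dilations 1, 2, 4, ... gives a receptive field that grows exponentially with depth, which is what lets a TCN condition each day's volatility forecast on a long history while remaining strictly causal.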
    The empirical findings of this study are significant. Firstly, the superior efficacy of fine-tuned large-scale language models in financial sentiment classification tasks is unequivocally established. Secondly, the integration of the investor sentiment index into the predictive framework markedly elevates the accuracy and reliability of stock price volatility forecasts. Notably, the GAT-TCN network emerges as a frontrunner, showcasing enhanced predictive prowess compared to established models such as BiLSTM, CNN-LSTM, and SVM. The ramifications of these findings are profound, offering actionable insights for harnessing the potential of fine-tuned language models in financial sentiment analysis and advocating for the broader adoption of the GAT-TCN framework within the financial ecosystem. Such advancements hold transformative implications, paving the way for more precise stock price predictions and fostering interdisciplinary collaborations.
    Although this study has significantly advanced our understanding of how investor sentiment helps predict stock price volatility and has shown that the GAT-TCN model outperforms the other models evaluated, several limitations point to avenues for future research. First, although the GAT-TCN model demonstrates high forecasting accuracy, future research can continue to optimize the structure of the forecasting model as technology advances. Second, the findings are based on the commentary texts of six stocks, which, although representative and broad, may not capture the entire universe of stocks; the generalizability of the findings can be enhanced in the future by including a larger sample.
    Supply Chain Decisions and Benefit Linkage Mechanism for Large-scale Agricultural Operation
    FENG Hairong, GUAN Hui, GAO Lijun, ZENG Yinlian, GONG Lei
    2025, 34(10):  93-100.  DOI: 10.12005/orms.2025.0314
    Land-scale operation and service-scale operation, the two forms of agricultural development in China, are important ways to achieve modern Chinese agriculture. Based on the latest practices across China, this article selects a land-scale operation model with land circulation as the core and a service-scale operation model represented by land trusteeship and production trusteeship, to study the production decision-making and benefit linkage mechanism of supply chain members, and analyzes the impact of parameters such as effort cost and productivity on the three agricultural operation models of land circulation, land trusteeship and production trusteeship. In particular, we answer three research questions. First, under different agricultural scale operation models, how do members of the supply chain make production decisions? Second, how can a benefit linkage mechanism be built between farmers and agricultural service enterprises to ensure farmers’ income? Third, how will the production-cost reduction and productivity improvement brought by scale operations affect the choices of farmers and agricultural enterprises among the different scale operation models?
    We start by analyzing the decentralized decisions of the supply chain. Then, we build multi-stage game models to derive the equilibrium decisions of the supply chain members under different scale-operation models and compare the performances of the supply chain under these models. The contributions of this article are as follows: (1)We take small farmers in the context of modern agriculture as the research subject, systematically studying the production decisions of supply chain members under different agricultural scale-management models. (2)We study the benefit linkage mechanism between farmers and agricultural service enterprises under different agricultural scale operation models, in order to ensure the income of farmers. (3)We compare and analyze the impact of production costs and productivity on the three agricultural scale operation models. In particular, we obtain the following results.
    (1)When productivity is relatively low, the production trusteeship model is superior to the land circulation and land trusteeship models. A lower fixed cost of outsourcing production helps promote the active development of production trusteeship services, while a higher fixed cost erodes farmers’ interests and hinders the implementation of such services. Therefore, when promoting agricultural production trusteeship models, it is necessary to strengthen the role of village collectives as organizers. Village collectives promote the concentration of cultivated land by integrating the scattered plots of farmers, which helps to increase the service scale of enterprises. Under the organization of village collectives, farmers gain bargaining power with the enterprise, which helps to reduce the fixed cost of outsourcing production.
    (2)When the agricultural service enterprise applies advanced technology to agricultural production with a high positive technological spillover effect, the land circulation model will be the optimal choice. Higher land rent improves farmers’ income but compresses the profit of leading enterprises and increases their operating difficulties. Therefore, when promoting the land circulation model, it is necessary to strengthen the coordinating role of village collectives. Village collectives effectively reduce information asymmetry in land rent by collecting supply and demand information from both sides of the land circulation market.
    (3)When farmers value their land and are not willing to transfer their land, agricultural land trusteeship can meet farmers’ demands for retaining the management right, and balance the interests of the service enterprise and farmers through a revenue-sharing mechanism. When the service enterprise applies advanced technology to agricultural production with a low positive technological spillover effect, the agricultural enterprise will face operating pressure, and the probability of land abandonment will be higher. So the government’s financial subsidies for land trusteeship services are necessary.
    The research conclusions of this article provide decision-making reference for promoting the collaboration between smallholder farmers and agricultural service enterprises. In the future, it is possible to further consider the impact of government subsidies on different agricultural scale operations and the risk preferences of supply chain members.
    Research on Production and Environmental Protection Strategies and Complex Dynamic Behavior of Automakers with Dual Credit Policy
    TANG Jinhuan, WU Qiong, ZHAO Liqiang, JIN Yuran
    2025, 34(10):  101-106.  DOI: 10.12005/orms.2025.0315
    Since the “double carbon” goals were established, environmental protection and energy transition have received widespread attention from all walks of life. New energy vehicles (NEV), which use clean energy instead of traditional fuels and feature low energy consumption, little environmental pollution, and minimal ecological damage, have become a significant trend in the growth of the automobile industry. Compared with gasoline vehicles (GV), the adoption of NEV is accelerating thanks to robust national policy support. However, the GV industry remains an important pillar of the national economy. Therefore, it is imperative to solve the pressing issue of how to achieve the coordinated growth of the automotive industry under the energy transformation while balancing economic expansion against energy conservation and emission reduction. The two premises of the short-term game, the hypotheses of the rational economic man and complete market information, rarely hold in the real economy and society. Hence, some scholars have incorporated bounded rationality into economics, revealing complex dynamic phenomena that classical economic theories cannot explain. In a single game, it is difficult for boundedly rational parties to arrive at a stable market equilibrium, so long-term repeated games are necessary. Currently, little literature on long-term repeated games in the automobile industry takes into account the impact of production competition on market stability, or addresses how to maintain a smooth energy transformation in the automobile industry.
    Based on the practical issues, this study first constructs a short-term game model for duopoly automakers that produce NEV and GV respectively under the dual credit policy. Next, it discusses the production and environmental protection strategies of automakers. Further, the nonlinear dynamic system model under the dynamic adjustment of production and environmental protection level is established, and the long-term repeated game behavior of automakers is analyzed based on the complex system theory. Finally, combined with the actual case, values are assigned to the model. The influences of product substitution coefficient, low-carbon preference of consumers and credits trading price on the production and environmental protection level of automakers are compared under the two scenarios of cooperation and non-cooperation. The system’s dynamic properties are characterized by the chaotic bifurcation and the largest Lyapunov exponents, and the decision variable adjustment approach is explored.
    The results show that: (1)The product substitution coefficient, consumer’s low-carbon preference and credits trading price have a positive promoting effect on the production of NEV, while negatively inhibiting that of GV. In addition, consumer’s low-carbon preference has a significant role in improving the environmental protection level. (2)The manufacturing strategy of automakers will become unstable and ultimately descend into a state of chaos with an increase in the adjustment speed of production and environmental protection level. Therefore, in order to ensure a good market order, automakers should treat market competition rationally and limit the adjustment speed within a reasonable range. In addition, the range of NEV production adjustment is wider than that of GV, indicating that consumers are more sensitive to GV production adjustment. (3)The increase in substitution coefficient and credits trading price will narrow the stable region of the automakers’ production strategy. It is necessary to take measures to control the chaotic behavior of manufacturers, such as delayed feedback control methods. (4)The excessive adjustment strategy of the automakers always hurts their own profit and benefit, while increasing the profit of their rivals. Once in a state of chaos, the market is unstable, and it is difficult for either automaker to make a profit. This research expands the application of nonlinear analysis method of complex system theory, which has very important practical significance and reference value for the optimal strategies of automakers and the policy formulation of government.
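The largest-Lyapunov-exponent diagnostic used above can be illustrated on a textbook 1-D map. This sketch uses the logistic map rather than the paper's duopoly adjustment system, so the dynamics and parameters are purely illustrative; the sign of the exponent is what signals the transition from a stable strategy to chaos:

```python
import math

# Illustrative only (not the paper's model): estimate the largest Lyapunov
# exponent of the 1-D map x -> r*x*(1-x) as the long-run average of
# ln|f'(x_t)| along a trajectory. Positive estimate => chaotic regime.

def lyapunov_logistic(r, x0=0.4, burn_in=500, n=5000):
    x = x0
    for _ in range(burn_in):                # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))  # ln|f'(x)|
        x = r * x * (1.0 - x)
    return total / n

print(lyapunov_logistic(2.8))  # stable fixed point: negative exponent
print(lyapunov_logistic(4.0))  # chaotic regime: positive (theory: ln 2 ≈ 0.693)
```

Sweeping the adjustment-speed parameter and watching the exponent cross zero reproduces, in miniature, the bifurcation-to-chaos route the abstract describes.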
    Cross-efficiency Evaluation of Two-stage DEA Based on Cooperative Game Theory and Overall Coordination Strategy
    LI Meiling, WANG Yingming
    2025, 34(10):  107-112.  DOI: 10.12005/orms.2025.0316
    Considering the complexity and close interconnection of real production processes, a single-stage evaluation can hardly reflect the real situation. It is necessary to open the “black box” to scientifically measure the efficiency value of each stage as well as the overall efficiency value. Related research on two-stage efficiency decomposition and evaluation falls mainly into three categories. The first reconstructs the evaluation model by combining it with other theories, such as cooperative games, non-cooperative games, bargaining models, expectation theory, and regret theory. The second derives cross-efficiency models according to the different attitudes of decision makers, such as benevolent and neutral attitudes. The third establishes two-stage efficiency evaluation models for different representations of input-output information, such as probabilistic linguistic terms and interval numbers. However, research on two-stage cross-efficiency is insufficient, especially when a shared input and independent inputs in each sub-stage coexist. In addition, it is of great significance to consider the coordination between each stage and the whole process to avoid extreme differentiation between them. At the same time, the association relationships between the evaluated subjects cannot be neglected in actual evaluation, yet most existing research ignores the impact of such factors on the evaluation results.
    Against this background, this paper establishes an evaluation and aggregation method based on global coordination and the solution concepts of cooperative games, and applies it to a practical problem to verify its feasibility and effectiveness. Firstly, from a neutral point of view, a coordination variable is introduced to avoid extreme differentiation between the whole process and the sub-stages, so as to screen among multiple optimal weight vectors. In addition, considering that the solution concepts of cooperative games are well suited to describing the interactive characteristics of decision-making units, the CIS value is chosen to describe the association among pieces of evaluation information and to realize the weighted aggregation of efficiency information. Finally, for the problem of industrial environmental efficiency evaluation, based on a two-stage evaluation index system of the industrial environment, a comprehensive evaluation of the overall and stage efficiency of the industrial environment is carried out for thirteen provinces and cities in China.
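For concreteness, the CIS value (center of the imputation set) mentioned above has a simple closed form: each player receives its stand-alone value plus an equal share of the grand coalition's surplus. The efficiency numbers in the sketch below are invented, not the paper's data:

```python
# Hedged sketch of the CIS value used to aggregate efficiency information:
# CIS_i(v) = v({i}) + (v(N) - sum_j v({j})) / n. All numbers are invented.

def cis_value(singletons, grand_value):
    """singletons: list of v({i}); grand_value: v(N)."""
    n = len(singletons)
    surplus = grand_value - sum(singletons)
    return [v_i + surplus / n for v_i in singletons]

# Three DMUs with stand-alone efficiencies 0.5, 0.7, 0.6 and joint value 2.1:
print(cis_value([0.5, 0.7, 0.6], 2.1))  # surplus 0.3 split equally: ~[0.6, 0.8, 0.7]
```

By construction the shares sum to the grand value, so the aggregation is efficient in the cooperative-game sense.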
    The proposed models consider the coordination between the local and the whole, as well as the association relationship between the evaluated subjects. By introducing the solution concept in cooperative game theory, this paper realizes the reasonable aggregation of multi-dimensional evaluation information. The contrastive analysis and practical application results of industrial environmental efficiency evaluation show that the models established in this paper can scientifically and effectively evaluate the efficiency of two-stage network structure, and can realize the full sorting of decision-making units, which provides a new modeling idea for two-stage efficiency evaluation. Meanwhile, the proposed models have strong applicability and operability in practical applications, such as supplier evaluation, energy efficiency evaluation, bank risk assessment, etc., which can provide useful and scientific reference information for managers to make relevant decisions.
    In the future, the fusion mechanism of the cross-evaluation information of decision-making units in game theory under different interactive and competitive environments will be one of the research directions that need to be deeply explored. In addition, efficiency evaluation is often carried out in complex environments in practice. Fully considering the uncertain characteristics of data such as randomness, fuzziness and roughness will further improve the practical flexibility and application scope of interactive evaluation method.
    Collaborative Protection Strategy of Privacy Security in Context of Mobile Medical Services: Based on Tripartite Evolutionary Game
    LIU Zhengmin, WANG Wenxin, LIU Weilong, ZHANG Jihao, LIU Peide
    2025, 34(10):  113-118.  DOI: 10.12005/orms.2025.0317
    The rapid advancement of information technology has led to the widespread adoption of mobile healthcare services, greatly enhancing the convenience of medical experiences. However, this progress has also posed a significant challenge to user privacy security. Addressing these challenges, this study employs evolutionary game theory to explore the behavioral strategies and their evolution for users, mobile healthcare service providers, and the government. Specifically, it aims to reveal their interactions and mutual influences in terms of privacy protection. To begin with, this study theoretically analyzes the optimal strategies of users, service providers and the government. Subsequently, numerical simulations are conducted to assess the impact of various factors on strategy selection, yielding three insights.
    Firstly, in the digital age, users must heighten their awareness of personal information and privacy security. When utilizing service platforms, users must remain vigilant to the risks of personal information leakage and take proactive measures upon privacy infringement, such as reporting to the government and assisting in governance. This proactive approach to rights protection not only safeguards individual interests but also contributes to the societal privacy protection mechanism. Furthermore, the rights protection behavior of users can inspire others to join such actions and collectively defend their lawful rights. Notably, user behavior significantly influences government regulatory strategies. If users relinquish their rights, the government may resort to passive regulation, leading to ineffective containment of privacy leakage issues. Conversely, active rights protection by users can strengthen government regulation, thereby better ensuring privacy security.
    Secondly, when service providers are inclined to adopt strategies that could result in data leakage, users become more cautious and are likely to engage in rights protection actions. The combined effects of user rights protection and government regulatory strategies increase the risk of service providers being penalized for privacy breaches. Consequently, service providers, considering the legality and compliance of their actions, may prefer the lower-risk ‘data self-use’ strategy. In the digital era, data security is a crucial component of corporate competitiveness. Service providers must ensure that their data management and protection capabilities adequately safeguard user privacy. Effectively controlling privacy leakage risks not only enhances user trust but also prevents legal and financial repercussions. Therefore, ‘data self-use’ emerges as a pivotal strategy for service providers in the competitive market.
    Lastly, when users tend to abandon their rights, the government may adopt a passive regulatory stance, allowing service providers to exploit regulatory gaps and leak data. To address this, the government should motivate users to actively protect their rights and participate in regulation and governance. One effective approach is to establish reward mechanisms that encourage users to report privacy leaks. Enhancing regulatory efficiency is crucial to constraining service provider behavior, making them more inclined to opt for the ‘data self-use’ strategy. Additionally, the government’s regulatory costs also affect its strategy choices. As regulatory mechanisms improve, the costs of active regulation decrease, prompting the government to favor active regulation. To ensure effective protection of user privacy rights, the government should continuously evaluate and adjust its regulatory strategies.
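The strategy evolution behind these insights is typically simulated with replicator dynamics. The following sketch tracks a single population of users choosing between two strategies with entirely hypothetical payoffs (the paper's model is tripartite and far richer); it shows the threshold effect whereby rights protection spreads only once enough users participate:

```python
# Illustrative replicator dynamics for one population with two strategies
# (e.g., users choosing "protect rights" vs "abandon rights"). All payoffs
# are hypothetical and not taken from the paper's tripartite model.

def replicator_step(x, payoff_a, payoff_b, dt=0.1):
    """x: share playing strategy A; discretized dx/dt = x(1-x)(payoff_a - payoff_b)."""
    return x + dt * x * (1.0 - x) * (payoff_a - payoff_b)

x = 0.4                               # initial share of users protecting rights
for _ in range(200):
    # assumed payoffs: protecting pays more once enough users join in
    payoff_protect = 1.0 + 2.0 * x    # collective reporting is more effective
    payoff_abandon = 1.6
    x = replicator_step(x, payoff_protect, payoff_abandon)
print(round(x, 3))  # -> 1.0: rights protection becomes the dominant strategy
```

With these payoffs the interior equilibrium sits at x = 0.3: populations starting above it converge to full protection, those below it to full abandonment, mirroring the "active users strengthen regulation" feedback described above.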
    Application Research
    Risk Assessment of Associated Enterprises Based on SNA and DEA Methods in Network Public Opinion Environment
    AN Qingxian, PENG Wenjing, WANG Ping, GAO Xian
    2025, 34(10):  119-126.  DOI: 10.12005/orms.2025.0318
    In market economic activities, enterprises do not exist independently. They usually become associated enterprises through interpersonal, asset or transaction relationships. A risk occurring in one enterprise tends to affect other associated enterprises that are strongly connected with it. If the risk of an associated enterprise is evaluated solely on the basis of disclosed indicator data, the enterprise may appear “safe” in its own field or as a whole, making it difficult to assess its overall risk truly and accurately. This poses a significant obstacle to risk avoidance, sustainable and healthy business operations, and the timely prevention and control of market risks by regulatory authorities. It is therefore critical to characterize the impact of such risk on associated enterprises. Online public opinion reflects negative or positive information about an enterprise, which can reveal the company’s image and business condition to some degree. Moreover, public opinion risk has a greater detrimental effect on associated enterprises. For this reason, it is necessary to take the associated impact of public opinion risk into account when conducting a comprehensive risk assessment of an enterprise. Data envelopment analysis (DEA), as an effective evaluation method, is also applicable to comprehensive enterprise risk assessment. However, few existing DEA studies have explored the association relationships between decision-making units (DMUs) or the impact of these relationships on the performance of an individual DMU. Therefore, it is of great theoretical value and practical significance to investigate how to assess the comprehensive risk of associated enterprises based on DEA in a network public opinion environment.
    Based on the above issues, this paper considers corporate public opinion risk and proposes a corporate risk assessment method based on social network analysis (SNA) and DEA that accounts for individual association relationships. Our approach aims to quantify the impact of association relationships and evaluate the overall risk status of an enterprise more accurately. In our approach, the enterprise association network is first constructed. Then, from the two aspects of individual attributes and network topology, we introduce node global importance, node similarity, and node attributes, and propose a model that quantitatively portrays public opinion risk under the influence of association relationships. After that, the quantified association influence is integrated with the modified slacks-based measure (MSBM) model to develop a comprehensive risk assessment model for enterprises. Finally, the risk data of 345 real estate enterprises are used to verify the effectiveness of the proposed method.
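The SNA side of this approach can be sketched in a few lines. Here degree centrality stands in for the paper's node global importance measure (the actual measures may differ), and the five-firm association network is invented:

```python
# Minimal sketch of the network-analysis step: build a (made-up) enterprise
# association network and compute normalized degree centrality as a simple
# stand-in for "node global importance".

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]

def degree_centrality(edges):
    neighbors = {}
    for u, v in edges:
        neighbors.setdefault(u, set()).add(v)
        neighbors.setdefault(v, set()).add(u)
    n = len(neighbors)
    # normalize by (n - 1), the maximum possible degree
    return {node: len(adj) / (n - 1) for node, adj in neighbors.items()}

centrality = degree_centrality(edges)
print(sorted(centrality.items(), key=lambda kv: -kv[1]))  # "C" is the hub
```

A firm like C, sitting between two clusters, is exactly the kind of hub whose public opinion risk spills over most strongly to its neighbors, which is why the empirical analysis focuses on such core nodes.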
    We conduct an empirical analysis of 345 real estate firms in Hunan Province, comparing the relative rankings of the overall risk of associated firms among all sample firms before and after shareholder association is considered. The findings are as follows: (1)In the enterprise association network, the overall risk rankings of all enterprises with good public opinion that are directly associated with public-opinion-risk enterprises rise to varying degrees. (2)For 10 of the 32 associated enterprises, the change in risk ranking reaches 50% or more. Furthermore, from the perspective of social networks, we analyze the reasons for the changes in the risk rankings of these 10 enterprises based on the structural characteristics of the network and the attributes of the enterprises. The results demonstrate that: (1)The close connections between individuals in the network and their impact cannot be ignored; focusing only on the risk data of the enterprises themselves leads to an “underestimation” of risk status. (2)The methodology proposed in this paper can effectively quantify the associated impact of network public opinion risks among enterprises and reveal how association effects influence the comprehensive risk of a single enterprise. Additionally, a risk ranking of enterprises that is closer to the actual situation and more explanatory can be obtained. (3)It also provides a new perspective for regulators in making risk prevention decisions, helping them focus on monitoring the core enterprises and cluster hubs in the industry and curb in time the adverse effects of enterprise risk spillover through association relationships. At the same time, it is conducive to urging enterprises to regulate their own business behaviors. On the other hand, the proposed method considers association relationships between individuals in the DEA-based assessment process, which enriches DEA theoretical research to a certain extent.
    Research on Financing Strategy of Photovoltaic Power Generation Enterprises in China under Carbon Trading Regulation
    LI Yin, SONG Yazhi, WANG Xinyu, LI Kaifeng
    2025, 34(10):  127-133.  DOI: 10.12005/orms.2025.0319
    With the implementation and deepening of the carbon peaking and carbon neutrality goals, China’s photovoltaic (PV) industry has witnessed leapfrog development in installed scale and technological innovation. However, frenzied capacity expansion has left much of China’s PV industry with production capacity far in excess of actual market demand. This “absolute excess” capacity leads to fierce competition and an excessive reshuffle of the industry. Against the backdrop of price competition and technological upgrading, PV companies need to invest more in technology development to survive. However, in recent years most PV enterprises in China have been in the initial stage, and their internal capital is not sufficient to support heavy R&D costs. In addition, the lengthy investment cycle of the PV industry and the low capital recovery rate make it difficult for PV companies to obtain conventional financing through government subsidies and bank credit. As the mechanisms of the carbon trading market mature, stable financing channels derived from emission reduction benefits will considerably alleviate the financing difficulties that PV companies face in financial markets due to unstable output and high risk. However, to expand these new financing channels, it is necessary to further study the mechanisms by which carbon trading regulation affects the subsidy policies and production decisions of governments and PV companies.
    In this paper, based on an evolutionary game model, we construct a system dynamics model with government subsidies, the production expansion of PV companies, and financing from the carbon market as the main ingredients. Specifically, we first construct an evolutionary game model to obtain the pure and hybrid strategies of governments and PV companies under carbon trading regulation. Second, we theoretically analyze the evolutionarily stable strategies of governments and PV enterprises. Third, based on numerical simulations, we obtain the mechanism by which carbon trading regulation influences the evolutionary game strategies of governments and PV enterprises. Finally, from the perspective of exploiting the emission reduction benefits of PV companies, we analyze the evolutionary game strategy between governments and PV companies under carbon trading regulation.
    The results of this paper show that, first of all, under the pure strategy, the optimal evolutionarily stable strategy is for the government to adopt the subsidy strategy and for the PV companies to adopt the expansion strategy, with the government subsidy strategy guiding PV companies toward expansion. This result shows that government policy guidance and financial support are key factors in fostering and expanding emerging industries; in the early stages of PV industry development, it is necessary to stick to policy subsidies. Second, under the hybrid strategy, the cost of government subsidies is inversely related to the government’s willingness to adopt the subsidy strategy; as subsidy costs increase, the government will be forced to adopt the non-subsidy strategy because it cannot afford them. This result suggests that when the government uses subsidies to guide the development of the PV industry, it needs to control subsidy costs reasonably to prevent the failure of the subsidy strategy due to excessive costs. Third, carbon market revenues are an effective substitute for government subsidies, improving the financing capacity of PV enterprises and realizing their genuinely market-oriented operation. Low carbon market returns need to work together with government subsidies to improve the returns of PV companies, whereas higher carbon market revenues can directly replace government subsidies and help PV companies increase their revenue, thus facilitating further production expansion. Based on these results, we propose corresponding management implications. First, the positive guiding role of policies should be given full play and modes of government-business cooperation explored. Second, enterprises should comply with market rules and enhance their core competitiveness. Third, the range of companies that can participate in the carbon market should be expanded, using carbon revenues to help finance the PV industry.
    In future research, due to the stochastic nature of carbon market prices, further consideration of the impact of market uncertainty on the carbon market financing strategies of PV companies is needed. At the same time, since the parameter assignments in this paper mainly refer to the results of existing studies, further parameter estimation is needed in future studies to improve the fitting accuracy.
    Stochastic Optimization for Vaccination Station Location Considering Equitable Allocation and Service Quality
    SHI Xiyuan, YANG Dong
    2025, 34(10):  134-141.  DOI: 10.12005/orms.2025.0320
    As people’s medical concerns gradually shift from treating diseases to preventing them, vaccination, one of the most effective ways to prevent serious infectious diseases, has attracted much attention from governments and the public. In recent years, the demand for vaccination has been increasing year by year. However, the vaccine supply chain is characterized by rather long research, development and production cycles. Its high supply-demand uncertainty and strict storage conditions make it difficult to significantly reduce vaccine supply lead times, resulting in frequent vaccine shortages. In addition, as vaccines are a medical and health resource, the fairness of vaccine allocation among different regions and groups is particularly important; ignoring this factor may lead to public dissatisfaction and disorder, which can bring about serious social problems. Furthermore, queuing at vaccination stations may reduce the number of people getting vaccinated and dampen people’s willingness to vaccinate, leading to low vaccine coverage and related problems. Therefore, with limited vaccine supply, it is essential to construct an optimal network of vaccination stations to achieve equitable vaccine allocation and improve people’s enthusiasm for vaccination.
    Regarding fairness in allocation, decision-makers may wish to adjust the level of fairness based on actual conditions, such as regional economic status, population distribution, availability of medical resources, and national vaccination policies. To handle this, the article proposes using the Gini coefficient as a constraint to ensure the fairness of vaccine allocation: by setting a Gini coefficient threshold, the level of fairness can be adjusted. Moreover, because the vaccination process is a service system, we apply the M/M/1 queuing system to model the queuing phenomenon in the vaccination service process, and ensure vaccination service quality by adding the queue length as a constraint to the model. As a consequence, the number of vaccination service points can be determined by solving the model. Additionally, when the fairness constraint of vaccine allocation cannot be met, a mechanism of transferring vaccines between vaccination stations is put forward to restore fairness. Furthermore, since the demands for multiple types of vaccines over multiple periods are uncertain, a stochastic programming method is applied to handle the uncertainty: a series of discrete scenarios is generated to simulate demand using sample average approximation (SAA).
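Both constraints have compact closed forms: the Gini coefficient is half the relative mean absolute difference of the allocations, and the M/M/1 expected queue length is Lq = ρ²/(1−ρ) with utilization ρ = λ/μ. The allocation numbers and arrival/service rates in this sketch are invented:

```python
# Sketch of the two constraint ingredients described above, with made-up data:
# the Gini coefficient of per-capita vaccine allocations, and the expected
# M/M/1 queue length at a vaccination station.

def gini(values):
    """Mean absolute difference over all pairs, divided by twice the mean."""
    n = len(values)
    mean = sum(values) / n
    diff_sum = sum(abs(a - b) for a in values for b in values)
    return diff_sum / (2.0 * n * n * mean)

def mm1_queue_length(arrival_rate, service_rate):
    rho = arrival_rate / service_rate      # utilization, must be < 1
    return rho * rho / (1.0 - rho)         # expected number waiting, Lq

allocation = [0.8, 0.9, 1.0, 1.1, 1.2]    # doses per capita by region (invented)
print(gini(allocation))                    # 0.08, well under e.g. a 0.2 threshold
print(mm1_queue_length(8, 10))             # rho = 0.8 -> Lq = 3.2
```

In the optimization model these quantities appear as constraints (Gini below a threshold, Lq below a tolerance), so tightening either one forces the model to open more stations or rebalance the allocation.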
    To sum up, for the problem of vaccination station location and fairness of vaccine allocation, we employ the Gini coefficient as a constraint to ensure the fairness of vaccine distribution, utilize the M/M/1 queuing system to model the queuing phenomenon in the vaccination service process, and construct a two-stage stochastic programming model with the objective of minimizing total cost. Because solving the expected value of the two-stage stochastic programming model involves the high-dimensional integral of random variables, directly solving the model is very time-consuming. According to the characteristics of the model, we design a Benders decomposition algorithm based on sample average approximation and a series of Benders decomposition acceleration methods to solve the model.
    Finally, a series of numerical experiments and sensitivity analyses are conducted. The results show that, compared to commercial solvers and the basic Benders decomposition algorithm, the Benders decomposition algorithm combined with the acceleration methods significantly improves solution efficiency. Moreover, decision-makers can adjust the fairness of distribution by setting Gini coefficient thresholds. When the fairness constraint of vaccine allocation cannot be met, the transfer mechanism can achieve fairness in vaccine distribution and reduce the total cost.
    Location and Deployment Optimization of Municipal Waste Separation Facilities Considering Residents’ Satisfaction
    ZHANG Yan, PEI Mengyao, JIU Song
    2025, 34(10):  142-148.  DOI: 10.12005/orms.2025.0321
    Abstract ( )   PDF (1262KB) ( )  
    References | Related Articles | Metrics
    With the increasing global awareness of environmental sustainability, waste separation management in urban areas has become increasingly critical. Recently, to enhance the accuracy of waste separation, waste source-separation facilities equipped with intelligent monitoring systems have been introduced in many residential districts in China. The location and number of intelligent source-separation facilities have a significant impact on the investment cost of the facilities, residents’ enthusiasm for participating in waste separation, and subsequent collection costs. Therefore, it is necessary to optimize the location and deployment of source-separation facilities from a systematic perspective.
    In foreign countries, waste collection is generally conducted by municipal collection vehicles using door-to-door or curbside collection methods that directly access waste drop-off points. However, in most urban residential communities in China, waste from drop-off points must first be transported by property management staff to centralized collection points before being collected by municipal vehicles. Currently, there is no research specifically addressing the location-allocation and capacity optimization problem that covers both drop-off points and centralized collection points in the context of waste source separation in China. Additionally, previous studies have generally assumed that residents’ satisfaction is inversely related to the distance to the drop-off points. However, residents exhibit a “semi-aversion” psychology towards waste disposal facilities: satisfaction suffers if the drop-off distance is either too far or too close. Moreover, previous research rarely considers the significant fluctuations in the generation rates of different types of waste on weekends and holidays.
    This paper examines the two-level location-allocation and capacity optimization problem for intelligent separation facilities in urban residential waste management. Several intelligent facilities with supervision functions need to be deployed at drop-off points and centralized collection points. Each drop-off point is equipped with bins for different categories of waste, while centralized collection points are located near main roads accessible to municipal collection vehicles. Based on the generation rates of various types of waste under different scenarios, the location and allocation of all facilities, the number of bins, and the collection frequency for each type of waste at each point must be decided. The objective is to minimize facility investment, supervision, and subsequent collection costs.
    We first construct a mixed-integer programming model to address the two-level location-allocation and capacity optimization problem with dynamic collection frequencies. The model incorporates a satisfaction function that considers both the semi-aversion psychology effect and the environmental external effect. On one hand, residents are dissatisfied if the drop-off facilities are too near or too far. On the other hand, residents are willing to bear a longer walking distance to drop-off points in the waste separation context than to traditional mixed drop-off points. To solve the model, the nonlinear constraints are transformed by introducing auxiliary variables. Then, based on the relationship between waste collection frequency and the number of waste bins, we introduce conditional inequalities as enforcement constraints to simplify the model and improve computational efficiency. Lastly, a scenario-based stochastic programming method is employed to analyze the impact of waste generation uncertainty.
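The “semi-aversion” effect described above can be sketched as a simple piecewise satisfaction function. The breakpoints and functional form below are hypothetical stand-ins; the paper’s actual satisfaction function is not reproduced here.

```python
# A hypothetical piecewise "semi-aversion" satisfaction function: residents
# dislike drop-off points that are too close (odor/noise) or too far (effort).
# All breakpoints and values below are illustrative, not from the paper.

def satisfaction(distance, d_min=30.0, d_ideal=80.0, d_max=200.0):
    """Satisfaction in [0, 1] as a function of walking distance (meters)."""
    if distance < d_min:                    # too close: aversion zone
        return distance / d_min
    if distance <= d_ideal:                 # comfortable band: full satisfaction
        return 1.0
    if distance <= d_max:                   # decays linearly with distance
        return (d_max - distance) / (d_max - d_ideal)
    return 0.0                              # beyond the tolerance distance

# satisfaction(15) -> 0.5, satisfaction(50) -> 1.0, satisfaction(140) -> 0.5
```

In a location model such a function would typically be linearized via auxiliary variables, consistent with the transformation of nonlinear constraints mentioned above.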
    This study selects seven residential communities in Dalian. The number of people per household is estimated based on the characteristics of the building types. The actual walking distances are measured according to the community road network. Four sets of instances with different scales are established. The deterministic model and the stochastic model are solved using AMPL and the Gurobi solver version 10.0 on a computer with a 2.50 GHz processor and 4GB RAM.
    The results of the four instance sets indicate that adding conditional inequalities as enhanced constraints can reduce the required solving time by 90%. When considering resident satisfaction constraints, the average walking distance for residents to dispose of waste will be reduced, thereby increasing satisfaction levels by approximately 20%. However, this requires more drop-off points, leading to higher facility investment, supervision, and collection costs. When considering variable collection frequency, the frequency at points with lower waste generation rates can be reduced (e.g., every two days). This requires more waste bins, increasing facility investment costs, but reduces collection costs, resulting in a slight decrease in total costs. If both resident satisfaction constraints and variable collection frequency are considered simultaneously, resident satisfaction can increase by an average of 13%, with a rise in total costs of less than 5%. Additionally, as the scale of the problem increases, the cost advantage of adopting variable frequency collection becomes more significant.
    In addition, when considering the fluctuations in waste generation on weekends and holidays, we find that the deterministic model can provide stable facility location and allocation solutions through flexible adjustments in bin numbers and collection frequency. The stochastic model, on the other hand, can offer a more robust solution. These research findings provide a systematic and cost-effective facility deployment solution that also considers social benefits and environmental impacts, contributing to the promotion of sustainable urban development.
    Home Health Care Routing and Scheduling Problem with Synchronized Services and Carrying Medical Supplies
    LI Yanfeng, WANG Hairui
    2025, 34(10):  149-155.  DOI: 10.12005/orms.2025.0322
    Abstract ( )   PDF (1392KB) ( )  
    References | Related Articles | Metrics
    The aging population in China is characterized by a large number of elderly people, a rapid rate of aging, and significant disparities. As a result, the medical needs of the elderly are increasing swiftly. In this context, community-based home care is garnering increasing attention due to its unparalleled advantages. It allows seniors to enjoy services at home, supporting their psychological health, and it provides professional services tailored to the complex daily needs of the elderly. However, China’s community-based home care is still in its infancy, facing challenges such as a shortage of medical staff and outdated scheduling technology. The home health care routing and scheduling problem is an important aspect of community-based home care, in which a team of medical personnel is dispatched by medical institutions to provide services to patients at home while minimizing operational costs under various constraints. This scheduling problem has profound theoretical and practical significance for the development of community-based home care. Therefore, this paper reviews the shortcomings of current research at home and abroad and proposes a home health care routing and scheduling problem considering synchronized services and the transportation of medical supplies.
    This study investigates a category of Home Health Care Routing and Scheduling Problems that integrates the cases where patients require simultaneous services and healthcare workers need to carry various types of medical supplies, thus rendering the model more congruent with real-life scenarios. Additionally, time windows and patient-caregiver matching constraints are considered, for which a Mixed Integer Programming (MIP) model is formulated. An analysis of the model characteristics is then conducted: the original problem is decomposed into master and pricing subproblems via the Dantzig-Wolfe decomposition, and a Branch-and-Price algorithm is designed for problem-solving. First, the study employs a greedy algorithm to generate a starting solution. To ensure the feasibility of the solution, a branching strategy that combines arc branching with time-window branching is utilized during the branching process. Second, a bidirectional labeling algorithm is adopted for solving the subproblems; it enhances the label dimensions and optimizes the dominance criteria to accommodate the problem’s distinctive features. The numerical experiments indicate the efficacy of the proposed algorithm.
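A minimal sketch of the dominance test at the heart of such labeling algorithms may help. The resource tuple (cost, time, load) is a simplification of the enhanced label dimensions the paper describes, and all names here are illustrative.

```python
# A minimal sketch of the dominance test used in labeling algorithms for
# pricing subproblems: label a dominates label b at the same node if it is
# no worse in every resource and strictly better in at least one. The
# resource tuple (cost, time, load) is an illustrative simplification.

from dataclasses import dataclass

@dataclass
class Label:
    cost: float   # reduced cost accumulated so far
    time: float   # earliest service-start time at the node
    load: float   # medical supplies carried on board

def dominates(a: Label, b: Label) -> bool:
    no_worse = a.cost <= b.cost and a.time <= b.time and a.load <= b.load
    strictly_better = a.cost < b.cost or a.time < b.time or a.load < b.load
    return no_worse and strictly_better

def prune(labels):
    """Keep only the non-dominated labels at a node."""
    return [b for b in labels if not any(dominates(a, b) for a in labels)]
```

Pruning dominated labels keeps the label pool small, which is what makes bidirectional labeling tractable on realistic instances.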
    In conclusion, a comparative performance analysis between the Branch-and-Price algorithm and CPLEX demonstrates that the former exhibits significant advantages across instances of varying sizes and types, substantiating the efficiency and robustness of the algorithm presented in this paper. In the sensitivity analysis, the study first examines the proportion of patients requiring synchronous services, revealing that an increase in this proportion escalates operational costs in a non-linear fashion. A sensitivity analysis is then conducted on the maximum carrying capacity for medical supplies, showing that this capacity has a considerable impact on operational costs. Healthcare providers must balance total costs against the maximum carrying capacity of medical supplies to devise a more rational scheduling plan.
    The problem model studied in this article is a static deterministic one; future research is inclined towards dynamic stochastic models. Consequently, future studies could consider uncertainties such as the total quantity of medical supplies needed by patients, the variability of service times for patients, and the unpredictability of travel times for healthcare workers. Additionally, models that account for the dynamic nature of patient demands could be considered, where new requests emerge during the scheduling process, and existing patient requests may be canceled at any moment. From an algorithmic perspective, the use of exact algorithms in this study has led to potential issues with extended runtimes and lower solution quality when solving large-scale instances. Therefore, improving the efficiency of exact algorithms is a focal point for future research.
    Optimizing Cost-Effectiveness for Fair Distribution of Relief Supplies Amid Resource Scarcity
    LIU Tongxin, WANG Xiang, WANG Xihui
    2025, 34(10):  156-162.  DOI: 10.12005/orms.2025.0323
    Abstract ( )   PDF (1471KB) ( )  
    References | Related Articles | Metrics
    Fairness, as a principle in disaster management and humanitarian relief, aims to ensure equal access to treatment for all individuals. However, achieving absolute fairness is challenging, particularly in resource-constrained scenarios such as the early stages of a disaster.
    In contrast to the absolute fairness objective, the relative fairness perspective aims to control distribution disparities within a reasonable range rather than deliberately pursuing complete equality. One common approach is to minimize the distribution disparity, for example, the variance or maximum gap among recipients. Another typical approach is based on Rawls’ theory of justice, which attempts to improve the outcome of the worst-treated individuals through max-min or min-max type optimization. In fact, fairness perception matters more than the distribution outcomes themselves. However, both approaches aim to reduce objective inequality, either directly or indirectly, without considering the beneficiaries’ subjectively acceptable range of inequity. The distribution outcomes may exceed the beneficiaries’ acceptance range or tolerance zone, leading to a perceived lack of fairness and even triggering social conflicts. To address these issues, this paper puts forward an alternative method to describe beneficiaries’ fairness requirements and ensure fair distribution in disaster relief. The proposed beneficiary-oriented model framework can integrate the relative fairness standards of disaster beneficiaries and achieve a balance between fairness and efficiency. This research contributes to deepening humanitarian organizations’ and governments’ understanding of fairness perception, and broadens the research on fairness from the beneficiary’s perspective.
    The requirement for fairness varies from person to person, so it is not easy to define a clear boundary or accurate threshold for this vague concept. To tackle the subjective uncertainty in describing this variable, fuzzy numbers are introduced to represent the beneficiaries’ acceptance range towards unequal distribution. A fuzzy chance-constrained model with a cost-effectiveness objective is then formulated to limit the probability of fairness violation while achieving maximum relief efficiency. By setting different confidence levels, the model can adjust the probability of constraint violation, reflect the risk preference of decision-makers, and adapt to different relief scenarios. For ease of computation, the model is further converted into its equivalent deterministic form, so that it can be solved by the Newton iterative method. Finally, a real case study based on the Ludian earthquake is presented to test the feasibility and flexibility of the proposed method and to analyze the impact of the fuzzy parameter range and confidence level on the optimization strategy.
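The Newton iterative method mentioned above can be sketched generically as follows. The function being solved is a placeholder; the paper’s actual deterministic equivalent is not reproduced here.

```python
# A generic Newton-iteration sketch of the kind used to solve a deterministic
# equivalent: find x with f(x) = 0, where f would encode the fairness-violation
# credibility at the chosen confidence level. The f used below is a placeholder,
# not the paper's actual equivalent form.

def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Solve f(x) = 0 by Newton's method starting from x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Placeholder equation: solve x^2 - 2 = 0 from x0 = 1.5.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```

Newton’s quadratic local convergence makes it a natural fit once the fuzzy chance constraint has been reduced to a smooth scalar equation.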
    Two important insights can be drawn from this article. First, this study provides an integrated model for balancing fairness and efficiency. As shown in the case study, when the acceptance range for unfairness widens, the allocation plan gradually favors the nodes with greater demand, leading to higher efficiency; conversely, it becomes more equity-driven. In addition, as the confidence level decreases, the allocation results get closer to equal division. The case also shows that nodes with higher confidence levels are more likely to be prioritized for delivery, while nodes with lower confidence levels may be slightly delayed and subject to greater inequality. Based on the above results, we demonstrate the adaptability and flexibility of the proposed model: by adjusting the relative fairness threshold and confidence level, the model can be applied under different allocation principles, including demand, equality, priority, and efficiency. Second, in contrast to the Gini Index, which measures objective inequality, the fairness acceptance range is derived from beneficiaries’ preferences and is therefore more suitable for describing their demands for fairness. It is easier to understand and can be quickly measured through surveys before or after disasters occur, making it more flexible and reliable in complex scenarios such as disaster relief.
    To assess the impact of key parameter changes on the allocation results, we have not specified a particular inequality acceptance range. Nevertheless, the rise of fuzzy measurement methods such as semantic differential analysis indicates that this vague variable can be measured successfully. As a reflection of subjective preferences, the acceptance range may vary with a series of factors such as the type of relief supplies and the emergency management stage. For different relief scenarios, future research could focus on developing methods to characterize the inequality acceptance range and to measure the membership function of the fuzzy number, thus making the model more in line with the feelings and psychological demands of the affected groups.
    Research on Robust Ordering Strategies for Loss-averse Omni-channel Retailers under Carbon Regulations
    BAI Qingguo, DING Yingzhen, XU Jianteng, ZHANG Yuzhong
    2025, 34(10):  163-170.  DOI: 10.12005/orms.2025.0324
    Abstract ( )   PDF (1235KB) ( )  
    References | Related Articles | Metrics
    Omni-channel retailing, a new business model and industrial form that integrates online and offline development, has gradually become a key path for the high-quality development of China’s retail industry. “Buy online and pick up in store” (BOPS) is an important strategy for traditional dual-channel retailers transforming to omni-channel. Dual-channel retailers that implement BOPS fulfil online orders through offline physical shops, and need to stock a sufficient number of products in those shops. Adequate inventory can avoid losses due to stock-outs, but uncertain market demand tends to lead to inventory backlogs. Therefore, it is necessary to explore effective ordering strategies to cope with the inventory pressure caused by BOPS. In practice, limited market demand information has been a bottleneck restricting the transformation and development of retail enterprises, owing to seasonal changes, promotional activities, and market competition. This dilemma is rooted in the fact that, under limited demand information, retailers cannot accurately know the number of consumers in each channel, which makes them prone to inter-channel supply-demand mismatch and increases operational risk. In addition, loss aversion in individual risk decision-making often drives retailers to deviate from the optimization path of rational expectation theory and adopt more conservative ordering strategies. This not only directly damages economic efficiency, but also indirectly increases warehousing and logistics costs and the environmental burden due to excessive storage.
    Motivated by the real challenges mentioned above, this paper considers the limited information of market demand and the irrational behavior of decision-makers, and investigates the impact of implementing BOPS on omni-channel retailers’ ordering strategies for brick-and-mortar shops. Combining prospect theory and the min-max regret criterion, this paper constructs robust optimization models for loss-averse retailers under the mandatory carbon emission capacity regulation and the carbon cap-and-trade regulation, respectively. With the objective of minimizing the retailer’s maximum regret value, we solve for the robust ordering quantity in the two models, and analyze the impacts of loss aversion, carbon quota, and carbon trading price on the retailer’s ordering quantity, regret value, and carbon emissions from both theoretical and numerical perspectives.
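A stripped-down min-max-regret newsvendor sketch may clarify the robust-ordering idea. It deliberately omits the paper’s loss aversion, BOPS cross-selling, and carbon terms, and all prices and demand bounds are hypothetical.

```python
# An illustrative min-max-regret newsvendor sketch (not the paper's full model):
# demand is only known to lie in [d_lo, d_hi]; choose the order quantity that
# minimizes the worst-case regret versus the perfect-information profit.
# Prices, costs, and bounds below are hypothetical.

def profit(q, d, price=10.0, cost=6.0):
    """Profit from ordering q when realized demand is d."""
    return price * min(q, d) - cost * q

def max_regret(q, d_lo, d_hi, n_grid=400):
    """Worst-case regret of ordering q over a demand grid on [d_lo, d_hi]."""
    worst = 0.0
    for i in range(n_grid + 1):
        d = d_lo + (d_hi - d_lo) * i / n_grid
        best = profit(d, d)          # profit if demand were known exactly
        worst = max(worst, best - profit(q, d))
    return worst

def robust_order(d_lo, d_hi, n_grid=400):
    """Grid search for the order quantity minimizing the maximum regret."""
    candidates = [d_lo + (d_hi - d_lo) * i / n_grid for i in range(n_grid + 1)]
    return min(candidates, key=lambda q: max_regret(q, d_lo, d_hi, n_grid))
```

With price 10 and cost 6, the worst case sits at a demand endpoint, and the robust quantity balances the underage regret at the upper bound against the overage regret at the lower bound.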
    The paper has the following results. (1) Under the mandatory carbon emission capacity regulation, retailers’ robust ordering strategies need to be flexibly adjusted in accordance with the carbon quota set by the government, taking into account the impact of loss aversion. When the government-set carbon quota is lenient, the robust ordering strategy is not restricted by the carbon regulation, but the loss-aversion characteristic may lead to over-conservative ordering and increase the likelihood of the retailer’s regret. At this point, retailers should adjust their strategies to balance environmental requirements and economic benefits and thereby reduce the risk of regret. On the contrary, if the carbon quota imposed by the government is more stringent, the impact of loss aversion on ordering volume and regret value is weakened, and the retailer should pay more attention to factors such as product price, inventory cost, and cross-selling profit to optimize the ordering strategy. (2) The robust ordering strategy under the cap-and-trade regulation is independent of the government-mandated cap. Retailers should consider a variety of factors when developing their ordering strategies, including not only economic factors such as the market demand range, unit selling price, and inventory cost, but also environmental factors such as carbon emissions. (3) Compared with the mandatory carbon emission capacity regulation, the cap-and-trade regulation not only effectively mitigates retailers’ regret due to non-optimal decision-making, but also ensures that they operate with lower carbon emissions, offering the dual advantages of risk management and environmental protection. However, in implementing the regulation, setting a strict quota may cause resistance from enterprises and affect its effective implementation. Therefore, the government needs to take into account both the emission reduction target and the adaptability of enterprises when determining carbon emission quotas.
    This paper focuses on the ordering strategy of the loss-averse retailer when the demand information is located in a certain range. Future research directions can consider the case where only the mean and variance of the demand distribution are known. This model can also be extended to consider the supply chain system.
    Impact of Delayed Retirement on Optimal Contribution Rate for Enterprise Pension Insurance: Based on Two-period OLG Model
    YAO Haixiang, ZOU Zhiwen, ZHANG Weixuan
    2025, 34(10):  171-177.  DOI: 10.12005/orms.2025.0325
    Abstract ( )   PDF (965KB) ( )  
    References | Related Articles | Metrics
    According to a report released by the National Bureau of Statistics in 2020, after the relaxation of the birth policy, the annual number of births in China significantly exceeded the average of 16.44 million births per year recorded during the Twelfth Five-Year Plan period. This undoubtedly affects the choice of the optimal contribution rate for enterprise pension insurance. When calculating the optimal contribution rate for enterprise pension insurance, it is necessary to consider the impact of the multi-child policy in order to improve the accuracy of the research. Although the multi-child policy can to some extent increase the population growth rate in China, it has not fundamentally accelerated population growth. Numerous academic studies consistently indicate that the relaxation of the birth policy can only slow the decline of the population growth rate, not reverse it.
    In this article, assuming perfect competition in the market and equilibrium in the capital market, an OLG (Overlapping Generations) model is constructed and solved, taking into account life expectancy and delayed retirement. Furthermore, population growth rates are forecasted in the context of the multi-child policy. Based on research findings from experts and the government, three population growth scenarios are set, and the current optimal contribution rate range for enterprise pension insurance is studied. Moreover, the Fourteenth Five-Year Plan, the 2035 vision, and the research of numerous scholars all indicate that China will implement a policy of delayed retirement in the future. Based on this, two scenarios are set for future retirement age: delaying retirement by 1 year every 3 years and delaying retirement by 1 year every 5 years. The optimal contribution rate for enterprise pension insurance in the future is studied using numerical simulation methods. Finally, a sensitivity analysis is conducted to explore the impact of reducing the contribution rate for enterprise pension insurance and delaying retirement on the economic system. The following conclusions are drawn:
    By constructing an OLG model and introducing a social welfare maximization condition, an explicit expression for the contribution rate of enterprise pension insurance is obtained. Numerical simulations reveal that a decrease in the population growth rate will reduce the optimal contribution rate for enterprise pension insurance, while an increase will raise it. Similarly, an increase in life expectancy will raise the optimal contribution rate, while a decrease will lower it. Comparing the elasticities, life expectancy is found to be the primary factor affecting the contribution rate of enterprise pension insurance. According to the 2019 China Health and Health Development Statistics Report, life expectancy in China in 2019 was 77.3 years, so the current optimal contribution rate range for enterprise pension insurance is 12.68% to 20.06%.
    The National Population Development Plan (2016-2030) predicts that the life expectancy in China will reach 79 years by 2030. If a conservative delayed retirement policy is implemented in the future, the optimal contribution rate for enterprise pension insurance from 2020 to 2030 will be 10.01% to 16.34%. If a moderate delayed retirement policy is implemented, the optimal contribution rate for enterprise pension insurance from 2020 to 2030 will be 7.06% to 13.43%. Therefore, it is recommended that the government control the contribution rate for enterprise pension insurance within the range of 7.06% to 16.34% from 2020 to 2030.
    The sensitivity analysis reveals that the capital income share, the subjective utility discount rate, and the social discount factor all significantly affect the optimal contribution rate for enterprise pension insurance, so the reasonableness of parameter values should be considered. Additionally, reducing the contribution rate for enterprise pension insurance has more benefits than drawbacks. While it may decrease the overall social pension fund, it will increase per capita capital, individual wage income, personal account pensions, and savings, thereby enhancing consumption during both youth and retirement and ultimately improving overall social welfare. Finally, delayed retirement has both advantages and disadvantages for the economic system. Advantages include boosting the social pension fund, increasing capital returns, and improving elderly consumption. Disadvantages include reducing per capita capital accumulation, individual wage income, personal account pensions, and youth consumption. From a social welfare perspective, delayed retirement may reduce overall social welfare. Therefore, the government should formulate delayed retirement policies carefully, while also taking residents’ leisure into account.
    Law Firms and IPO Pricing Efficiency
    CHEN Kejing, BAO Han, XIONG Xiong, YE Jing
    2025, 34(10):  178-184.  DOI: 10.12005/orms.2025.0326
    Abstract ( )   PDF (961KB) ( )  
    References | Related Articles | Metrics
    The phenomenon of high IPO underpricing and suboptimal long-term market performance has long been a focal point in financial discussions. In emerging capital markets, like those in China, the inability to fully reflect IPO companies’ idiosyncratic information results in significant deviations of stock value from intrinsic value, leading to IPO underpricing. The prolonged period between stock issuance and official listing further heightens investor risks, with high IPO underpricing acting as compensation for these risks. Given China’s current position in the “emerging and transitioning” stage of its capital market, characterized by low information efficiency, IPO underpricing consistently ranks among the highest globally. Addressing this issue is essential for promoting the healthy and stable development of China’s stock market and ensuring the effective allocation of capital market resources. Against the backdrop of ongoing reforms in the stock issuance system, this study systematically explores the impact of issuer engagement with top-tier law firms during the IPO process on underpricing.
    The legal environment and law enforcement efficiency play pivotal roles in fostering financial development and economic growth. In recent years, the efficacy of law firms has emerged as a crucial indicator, reflecting the level of law enforcement and facilitating information exchange between issuers and market participants. During an IPO, the engagement of top-tier law firms by issuers transmits a high-quality signal to market participants, enhancing investors’ understanding of the IPO company and mitigating emotional investment risks. Additionally, exemplary law firms contribute to increased corporate information disclosure, promoting information flow, enhancing transparency, and reducing the difficulty and cost of information searches. This, in turn, enables IPO companies to set reasonable prices, improving capital market pricing efficiency.
    This research focuses on issuers’ decisions to hire excellent law firms during the initial public offering process. The study utilizes OLS regression and takes companies listed on China’s Shanghai and Shenzhen stock exchanges from 2007 to 2020 as the research sample. The data on law firms are primarily sourced from the official website of the All China Lawyers Association and the China Lawyers Network. Micro-enterprise data are mainly obtained from the CSMAR database, CNRDS database, WIND database, and the Juchao Information website. The findings reveal that issuers engaging top-tier law firms during the IPO process reduce underpricing by 6.6%. Rigorous methods, including entropy balancing and instrumental variables, are employed to address endogeneity issues, ensuring the robustness of this conclusion. Furthermore, this paper conducts an in-depth analysis. The mechanism analysis indicates that top-tier law firms can enhance the quality of information disclosure for IPO companies, which in turn helps improve their pricing efficiency. The heterogeneity analysis indicates that the impact of law firms on IPO pricing efficiency is more significant where the information environment is poor or the levels of corporate governance or the rule of law are low.
    In conclusion, this article offers a valuable contribution to the understanding of China’s IPO pricing formation mechanism, emphasizing the role of excellent law firms hired by issuers. By shedding light on the impact of law firms on the capital market and expanding the scope of intermediary reputation theory, the study enriches the intersection of law and finance. Ultimately, this research contributes to the enhancement of capital market pricing efficiency, fostering its stable and healthy development.
    Measuring Status Quo in Regional Production Network Chain and Spillover Effect between Yunnan and LMB Countries
    CHENG Xiannan, XIAO Qin, WEN Shuhui, FANG Xiang
    2025, 34(10):  185-191.  DOI: 10.12005/orms.2025.0327
    Abstract ( )   PDF (956KB) ( )  
    References | Related Articles | Metrics
    With the stagnant expansion of the global value chain (GVC), most countries’ traditional embedding modes have changed dramatically. During such a period, China’s provinces, especially the inland border provinces, have had to ameliorate their trading patterns in the regional production network by expanding the scale of opening to the world. The countries in the Lancang-Mekong Basin (LMB) have also enjoyed rapid growth over the last decade, but have confronted the same internal conditions and external shocks in the GVC as China has. Thus, they have a strong willingness to cooperate with Chinese manufacturing industries to reconstruct their embedding mode in the GVC.
    Owing to geographical disparities, there are distinct differences in trading modes, development stages and organization modes of manufacturing industries between the eastern coastal provinces and the western inland provinces in China, as well as among the LMB countries themselves. Meanwhile, the western inland provinces of China and the LMB countries share similar cultures and industrial development stages because of geographical proximity, yet the manufacturing linkage between them remains weak. Yunnan province and the countries in the LMB are geographically adjacent and culturally similar. Through decades of cooperation in various industries, Yunnan province and the LMB countries have accumulated considerable experience in joint production in manufacturing sectors such as textiles, tires and construction. Owing to its weak industrial base, Yunnan would adopt a trading mode toward the LMB countries totally different from that of the eastern coastal provinces in China. As a result, cooperation between Yunnan province and the LMB countries would create a distinctive mutual spillover effect.
    In order to evaluate the prospects of industrial cooperation between Yunnan province and the LMB countries, this paper examines their dynamic status quo in the regional production network and their mutual spillover effects. To this end, this paper builds a world input-output table (WIOT) that treats Yunnan province and the LMB countries as distinct independent entities, allowing their positions to be compared within the GVC framework. With this table, the paper analyzes the dynamic status quo and spillover effects of manufacturing industries in Yunnan province and the LMB countries.
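The value-added accounting underlying any such WIOT rests on the Leontief inverse. A minimal sketch, using a made-up 3-sector direct-requirements matrix rather than the actual Yunnan-LMB table, shows how gross output and sectoral value added follow from final demand:

```python
import numpy as np

# Hypothetical 3-sector direct-requirements (technical coefficient) matrix A
A = np.array([[0.2, 0.1, 0.0],
              [0.1, 0.3, 0.2],
              [0.0, 0.1, 0.2]])
f = np.array([100.0, 50.0, 80.0])       # final demand by sector

L = np.linalg.inv(np.eye(3) - A)        # Leontief inverse (I - A)^{-1}
x = L @ f                               # gross output needed to meet final demand
v = (1.0 - A.sum(axis=0)) * x           # value added = value-added coefficient * output

# Accounting identity: total value added equals total final demand
print(np.allclose(v.sum(), f.sum()))
```

The GVC position and length measures used in the paper are built from further decompositions of this inverse (forward and backward linkages), which require the full inter-regional table.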
    Several results have been found: (1) From 2007 to 2012, the embedded positions of manufacturing industries in Yunnan province and the LMB countries evolved differently. The value-added creation capacity of manufacturing industries in Yunnan province increased while their GVC length declined, whereas in the LMB countries the value-added creation capacity and GVC length increased simultaneously. (2) Generally, the industrial structure and development stages of manufacturing industries in Yunnan province are more similar to those of the LMB countries than to those of the other provinces in China. (3) The industrial structures and development stages of manufacturing industries in Yunnan province and the LMB countries are complementary, but the spillover effects in the LMB regional production network are relatively low. To increase the spillover effect between Yunnan province and the LMB countries in the regional production network, each region should raise its embedded position in the GVC, break down the barriers to inter-regional industrial cooperation, and enhance cooperation between complementary industries. Besides, the manufacturing industries in Yunnan province should also actively absorb the industrial transfer led by the eastern coastal provinces of China.
    Impact of Media Sentiment on Total Factor Productivity of Listed Companies: Promotion or Inhibition?
    BAN Qi, FAN Xiaoyun
    2025, 34(10):  192-198.  DOI: 10.12005/orms.2025.0328
    At present, China’s economy has reached a bottleneck in its development, and it is difficult to sustain healthy and rapid growth through a development model driven only by massive inputs of production factors. As the micro-level agents of economic activity, whether enterprises can shift to a development mode oriented toward improving their own development quality has become the key to the transformation of China’s economy. In the era of the information economy, the media, as an important information intermediary in the capital market, can not only alleviate information asymmetry among market participants and improve the external governance of enterprises, but also influence the decisions of outside investors by publishing reports with distinctive positions and emotional tendencies, which in turn affects listed companies and the capital market. Therefore, it is of great significance to actively harness the function of media tone in improving corporate governance and easing corporate financing constraints, so as to promote the high-quality development of both micro-level enterprises and the macro economy.
    Many studies have found that, in addition to playing the roles of information intermediary and external monitor of enterprises, media reports with tendentious emotions may also intervene in enterprises’ investment, innovation and other activities in various ways. Therefore, in the current context of China’s economic development, this paper takes the high-quality development of enterprises as its starting point, focuses on the relationship between media sentiment (tone) and the total factor productivity (TFP) of enterprises, and attempts to uncover the positive effects of emotion-laden media reports on the high-quality development of enterprises. Specifically, based on the Chinese financial media text information obtained from the CNRDS database, we first construct the tone of news media reports on Chinese listed companies by using the Chinese financial text sentiment dictionary (CFTSD) combined with textual analysis. Secondly, using the financial data of Chinese listed companies obtained from CSMAR, we calculate the TFP of the companies with the LP method, the OP method and the fixed effects method. Finally, with tone as the explanatory variable and TFP as the dependent variable, and with a series of control variables introduced, we construct a two-way fixed-effects model to test the impact of tone on the TFP of Chinese listed firms over the period 2009-2019. We find that positive media sentiment can significantly increase firms’ TFP, mainly because rising media sentiment mitigates firms’ external financing constraints: abundant capital allows firms to improve their TFP by enhancing their own innovation capabilities. In addition, we find that positive media sentiment may exacerbate agency conflicts and reduce firms’ investment efficiency, which may dampen firms’ TFP to some extent; overall, however, rising media sentiment mainly contributes to firms’ TFP.
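The dictionary-based tone measure can be sketched as follows. The word lists here are tiny stand-ins for the CFTSD, and the simple whitespace tokenization ignores the Chinese word segmentation the actual construction requires:

```python
# Tiny stand-in word lists; the real measure uses the CFTSD dictionary
POSITIVE = {"growth", "profit", "innovation", "strong"}
NEGATIVE = {"loss", "risk", "decline", "lawsuit"}

def media_tone(text: str) -> float:
    """Tone = (pos - neg) / (pos + neg); 0.0 if no sentiment words appear."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(media_tone("strong profit growth despite some risk"))  # (3 - 1) / 4 = 0.5
```

Tone values in [-1, 1] built this way per report can then be aggregated to the firm-year level before entering the regression.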
    Further research finds that positive media sentiment helps firms obtain more bank loans at lower interest rates, which effectively alleviates their financing constraints. The heterogeneity analysis shows that the promoting effect of media sentiment on TFP is stronger for firms facing tighter financing constraints and for firms located in regions with a lower degree of marketization. This also indirectly confirms that the alleviation of firms’ external financing pressure is an important channel through which media sentiment drives firms’ TFP.
    As media reports with biased sentiment often call the impartiality and objectivity of the media into question, most of the previous literature has focused on the negative economic consequences of media sentiment. This paper, by contrast, validates the positive economic consequences of media sentiment from the perspective of the high-quality development of enterprises, which not only expands the research on the economic consequences of media sentiment for enterprises, but also provides a valuable reference for the government on how to use media tools to promote the high-quality development of enterprises in the future. In addition, although existing studies have examined the impact of media sentiment on enterprises’ innovation capability, investment efficiency and financing constraints, not all of their conclusions are positive. Therefore, the existing literature alone does not allow an effective relationship between media sentiment and the high-quality development of firms to be inferred. By examining the impact of media sentiment on corporate TFP, together with the mechanism and specific channels behind this impact, this paper not only provides direct empirical evidence on how media sentiment promotes corporate high-quality development, but also extends the research on external influences on the TFP of listed firms and the corresponding channels of action.
    The findings of this paper have the following policy implications. First, it is important to take a dialectical view of media sentiment, about which many observers have held negative preconceptions. In fact, our study shows that positive media sentiment can improve firms’ innovation performance by easing their financing constraints, which in turn increases their TFP and contributes to the transformation of China’s economy toward high-quality development. Second, China’s private enterprises, especially small and medium-sized enterprises (SMEs), still face difficult and expensive financing; if media sentiment can be used effectively within a reasonable range to alleviate the financing constraints of SMEs, it will help stimulate economic vitality and promote the high-quality development of the economy. In addition, excessively emotional reporting may also aggravate agency conflicts, reduce the investment efficiency of enterprises, and ultimately inhibit their TFP. Therefore, reasonable government regulation and guidance of media reporting is of great significance for creating a healthy and efficient market environment.
    Research on Classification and Prediction of Stock Price Trends Based on PI_RF Classification Balanced Selection Features
    WANG Zhaogang
    2025, 34(10):  199-204.  DOI: 10.12005/orms.2025.0329
    Accurately predicting the trends and directions of financial time series such as stock prices in advance has always been an important concern for investors and financial regulators. With the development of machine learning and artificial intelligence, research on predicting stock price trends with machine learning has integrated fundamental analysis with technical analysis to select input features and address the high dimensionality of input data, and has incorporated the classification model into the feature selection process to improve the match between the input features and the structure of the classifier. To predict stock price trends accurately, a variety of prediction models and feature selection methods have been proposed.
    However, the feature selection process often considers the correlation or importance between input features and the target sequence, while ignoring the differences in the impact of input features on different trend categories such as upward and downward trends. This makes it difficult to balance the preservation of information from different trend categories in the selected feature combinations, resulting in uneven prediction accuracy for different trend categories, which, to some extent, limits the improvement of average classification accuracy.
    Therefore, this article proposes a feature classification balanced selection method based on permutation importance (PI) and random forest (RF) (PI_RF), which evaluates the importance of input features for different trend types such as rising and falling using the PI_RF method, and selects the features with higher importance for different trend categories as the optimal input feature combination.
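The core idea of evaluating each feature's permutation importance separately for the "up" and "down" classes, rather than for overall accuracy, can be sketched as follows. This uses a simple linear classifier on simulated data in place of the random forest, and the feature roles and effect sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4000
X = rng.normal(size=(n, 3))             # 3 features; feature 2 is pure noise
y = (2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)

# Least-squares linear classifier standing in for the random forest
w, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), 2 * y - 1, rcond=None)

def predict(M):
    return (np.column_stack([np.ones(len(M)), M]) @ w > 0).astype(int)

def recall(y_true, y_pred, cls):
    mask = y_true == cls
    return float((y_pred[mask] == cls).mean())

base_up, base_down = recall(y, predict(X), 1), recall(y, predict(X), 0)

def per_class_pi(j):
    """Permutation importance of feature j, scored per trend class."""
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    yp = predict(Xp)
    return base_up - recall(y, yp, 1), base_down - recall(y, yp, 0)

for j in range(3):
    print(j, per_class_pi(j))           # (drop in U_Recall, drop in D_Recall)
```

Balanced selection then keeps the features that rank highest for each class separately, so that neither the upward nor the downward trend category is starved of informative inputs.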
    With data from 24 constituent stocks of the CSI 300 Index as experimental data, and basic trading data and derived technical indicators as raw input features, the PI_RF method is used for classified feature evaluation and balanced selection, an MLP is used as the classification prediction model, and the average classification accuracy (Accuracy), upward trend recall (U_Recall), and downward trend recall (D_Recall) are used as evaluation indicators to verify the effectiveness of the PI_RF based balanced feature selection method.
    The data analysis results indicate that there are significant differences in the importance of input features for different trend categories such as rising and falling. The use of PI_RF classification balanced feature selection could effectively improve the prediction accuracy of different trend categories such as rising and falling, thereby improving the average classification accuracy. Stability and reliability of the method are verified by adjusting the number of feature selections and using LSTM as the classification model. Heterogeneity analysis shows that there are significant differences in the importance of input features for stock data in different industries. The importance of the same input feature for both upward and downward trends varies among stock data in different industries in terms of both degree of importance and direction of action.
    Although the importance of input features differs across trend categories such as rising and falling, the study also finds substantial overlap among the features that are most important for different categories: features that matter most for rising trends are often equally important for falling trends. This overlap indicates that such features are important for predicting both rising and falling fluctuations, but it also suggests that they lack the ability to discriminate between trend categories. The presence of overlapping features may therefore reduce the model’s ability to distinguish between trend categories and limit the attainable prediction accuracy for each trend. A promising direction is to start from the classified importance evaluation and balanced selection, deduplicate and recombine the high-importance features of different trend categories, and then apply adaptive intelligent algorithms such as the genetic algorithm (GA), with classification accuracy as the optimization objective, to iteratively refine the feature combination within the deduplicated optimal feature range, thereby mitigating feature redundancy such as overlapping features. Whether this can further improve the prediction accuracy for different trend categories is worth exploring.
    Nash Bargaining Game with Loss Aversion for Repurchasing Traffic BOT Project
    FENG Zhongwei, LI Fangning, YANG Yuzhong
    2025, 34(10):  205-211.  DOI: 10.12005/orms.2025.0330
    BOT (Build-Operate-Transfer) projects are used to reduce the financial burden incurred by the government in improving the economy and the quality of services. Under a BOT scheme, during the concession period the enterprise undertakes the financing, construction, operation and maintenance of an infrastructure asset, while the government allows the enterprise to charge tolls to users. However, unfairness in the design of the risk allocation mechanism and irrationality in decision-making have made it enormously difficult for enterprises to recover their investment costs, which in turn has led to the early termination of transportation BOT projects. Early termination severely impacts the participating enterprises. To address this, governments and enterprises may agree on buybacks, in which the government’s compensation is critical to success. Existing compensation calculation methods, however, are irrational and unfair, causing endless disputes and heavy losses. Thus, the Nash bargaining game is adopted, as its solution satisfies axioms such as Pareto efficiency and symmetry, balancing fairness and efficiency in dividing the “bargaining pie”.
    In a real-world bargaining game, players usually regard the concessions they make as their own losses, which implies that aversion to concessions is aversion to losses. The profit-seeking nature of an enterprise implies a tendency to avoid losses, while the government may also be loss averse. In the buyback compensation negotiation of a traffic BOT project, success requires mutual concessions between the enterprise and the government, and aversion to such concessions means that both parties exhibit loss-averse behavior. Therefore, it is crucial to construct a new compensation negotiation model with loss aversion based on Nash bargaining, which is of great significance for the design of concession contracts for traffic BOT projects.
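The flavor of such a model can be shown with a minimal numerical sketch: each party's payoff from its share of a normalized buyback surplus is kinked at a reference point (a Shalev-style loss-averse utility, assumed here purely for illustration; the paper's own utilities, reference points and parameters differ), and the agreement maximizes the Nash product:

```python
import numpy as np

# Illustrative parameters (not the paper's): normalized surplus, reference shares,
# and loss-aversion coefficients for the enterprise (e) and the government (g)
pie = 1.0
r_e, r_g = 0.6, 0.6
lam_e, lam_g = 2.0, 1.2
d_e = d_g = 0.0                      # disagreement payoffs

def utility(x, r, lam):
    """Reference-dependent utility: shortfalls below r are weighted by lam."""
    return x - lam * max(r - x, 0.0)

def nash_share():
    """Enterprise share maximizing the Nash product, found by grid search."""
    grid = np.linspace(0.001, 0.999, 9999)
    products = [(utility(s, r_e, lam_e) - d_e) * (utility(pie - s, r_g, lam_g) - d_g)
                for s in grid]
    return float(grid[int(np.argmax(products))])

share = nash_share()
print(round(share, 3))               # enterprise's share of the surplus
```

How the shares move with each party's loss aversion depends on where the reference points sit relative to the feasible agreements; the paper derives these comparative statics for its specific buyback-compensation setting.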
    The results of our work are as follows: (1) The more traffic attracted by the newly built road, the smaller the buyback compensation paid by the government to the enterprise. (2) The buyback compensation paid by the government to the enterprise is positively (or negatively) correlated with the loss aversion of the government (or the enterprise). (3) When the government’s investment cost is sufficiently low, the government will build a new road. As the investment cost increases, the greater the enterprise’s (or government’s) level of loss aversion, the more likely the government is to repurchase the traffic BOT project (or build a new road). When the investment cost rises further to a higher level, the government chooses to repurchase the traffic BOT project.
    In future research, considering the variable investment cost of new roads, the compensation issue will be studied when the government repurchases the early terminating traffic BOT projects. On the other hand, the complexity and uncertainty of repurchasing early terminating traffic BOT projects hinder the government and the enterprise from obtaining sufficient information. Therefore, another future research is to investigate the buyback bargaining with incomplete information between the government and the enterprise.
    Management Science
    Private Label Introduction Strategies on E-commerce Platform Influenced by Sales Models of Green Manufacturer
    ZHANG Weisi, QIAN Jiafeng, WANG Junbin
    2025, 34(10):  212-218.  DOI: 10.12005/orms.2025.0331
    This paper constructs a two-tier supply chain comprised of a single manufacturer and a single e-commerce platform. The manufacturer’s brand products are sold to end consumers through the e-commerce platform under either the resale or the agency sales model. Consumers in the market are green-sensitive and demand higher product quality. The e-commerce platform considers whether to introduce its private label, and whether this private label should be of low or high quality. To this end, the paper examines six different scenarios, provides the specific equilibrium outcomes, and performs a comparative analysis. Additionally, numerical simulations have been conducted.
    The main findings of the paper are as follows. Firstly, under the reselling model, the introduction of low-quality private labels by the e-commerce platform is generally feasible only when relative production cost efficiency is low; for high-quality private labels, the relative production cost efficiency must not be too high. Secondly, in the agency model, since the manufacturer controls pricing and benefits from lower agency rates, introducing private labels could reduce profits for both parties; in such cases, the e-commerce platform and the manufacturer reach a consensus on not introducing private labels. Additionally, if the production cost efficiency of private labels is low, the e-commerce platform has the flexibility to introduce either high-quality or low-quality labels; in this scenario, private labels are weakly competitive and pose no threat to the manufacturer, thus usually gaining its support and helping to achieve a win-win situation. However, as production cost efficiency increases and the platform’s private labels start to genuinely threaten the manufacturer’s brands, this collaborative balance might be disrupted. Lastly, when the e-commerce platform introduces low-quality private labels of relatively low quality, the manufacturer may be prompted to lower its green level; if the platform enhances the quality of its private labels, the manufacturer typically responds by raising its green level. Yet when high-quality private labels are introduced and reach a certain high-quality threshold, the manufacturer’s green level might decrease. Moreover, the manufacturer’s green level is higher under the agency model than under the reselling model.
    This paper studies the symbiotic and competitive relationship between green manufacturers and e-commerce platforms, offering managerial insights for related businesses. First, the findings can assist manufacturers in adjusting the greenness and pricing of their products in response to the threat posed by retailer private labels. Second, regarding the choice of sales models, after the introduction of private labels, the agency model can facilitate a consensus between the two parties. In the resale model, as the production cost efficiency increases, the platform should consider introducing a high-quality private label. Finally, if the e-commerce platform introduces a private label, it needs to adjust the agency rate and product quality to reach a consensus in the agency model.
    While this paper addresses the sales model and the strategy of introducing private labels, it also presents certain limitations. Future studies could explore the following areas: First, this paper presumes that consumers perceive the quality of the manufacturer’s brand and private labels equally, yet some consumers may hold biases against private labels in reality, so future research could account for consumer heterogeneity. Second, the paper only accounts for a manufacturer-centric supply chain structure, though e-commerce platform-centric structures also exist. Third, the paper does not consider the greenness level of products from the e-commerce platform, although several platforms have joined forces with third-party manufacturers to launch green products, such as Amazon with its eco-friendly brand.
    Authorization Strategy for Closed-loop Supply Chain in Remanufacturing Based on Patent Protection
    HUANG Zuqing, AN Jinqiang, DUAN Housheng
    2025, 34(10):  219-225.  DOI: 10.12005/orms.2025.0332
    With the scarcity of resources and environmental degradation, society’s call for resource recycling and green, sustainable development is growing. Especially in the field of electronic products, rapid technological development and diverse consumer demands mean that electronic products are updated and replaced quickly, producing a large number of obsolete and discarded devices. Currently, most electronic product companies effectively recycle electronic waste and repurpose it through remanufacturing, turning it into valuable resources. However, remanufacturing raises patent authorization issues, and mishandling them may result in infringement disputes. The existing literature on patent protection focuses on manufacturers authorizing distributors and third-party remanufacturers, often neglecting the granting of patents to second-hand manufacturers with remanufacturing capabilities.
    In summary, this paper investigates licensing strategies of manufacturers and second-hand manufacturers regarding patent authorization for electronic products. From the perspective of patent protection and power structure, we construct four models: non-licensing and market without a leader, non-licensing and market with a leader, licensing and market without a leader, and licensing and market with a leader. By comparing and analyzing the optimal solution of the model, the optimal authorization strategy is determined, and the impact of various parameter changes (such as recycling price coefficient, transfer markup coefficient, and patent licensing fee) on product sales and profits of all parties is studied.
    The research findings suggest that manufacturers and second-hand manufacturers constrain each other, regardless of which holds market dominance. The decision to grant patent authorization is not absolute: if both parties seek to reach a licensing agreement, they must establish a reasonable patent authorization fee, taking into account factors such as the recycling price coefficient. Under manufacturer authorization, patent restrictions mitigate the extent to which second-hand manufacturers encroach on the manufacturer’s profits, and manufacturer authorization helps expand the electronic product market. Without manufacturer authorization, the demand for second-hand products is greater, and the demand for new products lower, than under authorization. Manufacturers and second-hand manufacturers can increase their profits by sharing resources and raising the maximum price consumers are willing to pay. Finally, the model results are further validated through numerical example analysis, and management insights are provided.
    There are still some limitations to this article. It considers only the case in which remanufactured products and new products are homogeneous in quality and price, and does not consider the impact of the remanufacturing rate. Future research could further explore the price difference between remanufactured and new products, as well as the impact of remanufacturing rates.
    Pricing Data Application Products in Duopoly Market Considering Private Data Utilization
    CHEN Xiaoyan, GENG Wei
    2025, 34(10):  226-232.  DOI: 10.12005/orms.2025.0333
    Data application products, which cater to the customized needs of client companies, hold great potential in the current era of digital industrialization. This study focuses on the pricing of data application products in a duopoly market, considering suppliers’ differentiated degrees of private data utilization, data externalities, and customer privacy concerns. We develop a game model analogous to the classical Hotelling linear city. In this model, two suppliers of data application products are positioned at the two ends of a unit line, while their client companies are uniformly distributed along it. Suppliers create their data application products from a common public dataset and their own private dataset, whose size is proportional to the respective installed base of the data application product. Thanks to data externalities, leveraging the private dataset helps suppliers enhance the quality of the data application product. Meanwhile, client companies have privacy concerns, assumed to be proportional to their distrust of each supplier. Client companies typically procure data application products from a single supplier for data security reasons; thus, we assume they are single-homing in this paper. Each client company decides which supplier to procure from based on its own utility, which is influenced by the size of the common public dataset, the sizes of the suppliers’ private datasets, the suppliers’ utilization levels of the two types of datasets, the prices of the data application products, and its own privacy concern.
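In the classical Hotelling benchmark underlying this setup, a quality advantage (such as one gained from richer private data) shifts equilibrium prices in a closed form. A minimal sketch, in which the parameter values and the quality gap dq are invented and the paper's data externalities and privacy-concern terms are omitted:

```python
import numpy as np

t, c = 1.0, 0.0        # unit transport (mismatch) cost and marginal cost
dq = 0.3               # supplier 1's quality advantage from private data (assumed)

def demand1(p1, p2):
    """Share of client firms buying from supplier 1 (indifferent-consumer rule)."""
    return np.clip(0.5 + (dq + p2 - p1) / (2 * t), 0.0, 1.0)

def best_response(p_other, i_am_1):
    grid = np.linspace(c, c + 3 * t, 30001)
    share = demand1(grid, p_other) if i_am_1 else 1.0 - demand1(p_other, grid)
    return grid[np.argmax((grid - c) * share)]

# Iterate best responses to the Nash equilibrium; closed form: p_i = c + t ± dq/3
p1 = p2 = c + t
for _ in range(50):
    p1 = best_response(p2, True)
    p2 = best_response(p1, False)

print(round(p1, 3), round(p2, 3))
```

The advantaged supplier prices above its rival by two-thirds of the quality gap; the paper's three-stage market-coverage results arise when the added dataset and privacy terms alter this baseline demand.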
    We identify three different stages of market evolution. In the initial stage, characterized by relatively low data utilization, the market is partially covered. Suppliers consistently price their data application products based on the value provided by the public dataset, but they could gain a larger market share by leveraging private data. In the second stage, with moderate degrees of data utilization, the market is almost fully covered, leading to multiple equilibrium prices. The supplier with a higher level of private data utilization prices its product based on the mutual level of private data utilization, while the other supplier prices its product based on the value of the public dataset. The former consistently sets a higher price than the latter. In the third stage, with relatively high degrees of data utilization, the market is fully covered, and both suppliers price their data application products based on the mutual level of private data utilization.
    Furthermore, suppliers may experience different outcomes in the competition to improve their degree of private data utilization, depending on whether their rival improves simultaneously. In asymmetric competitions, the supplier enhancing its degree of private data utilization gains more revenue in the first two stages but incurs a loss in the third stage. In contrast, its rival generally receives no positive outcome but remains immune from losses if they are in the first stage of market evolution. In symmetric competitions, the revenues of the two suppliers mutually increase in the first two stages but decrease in the last stage. The results suggest that improving the degree of private data utilization is not advantageous when the market has evolved to the third stage with relatively high degrees of data utilization. Additionally, we identify a Prisoner’s dilemma for the two suppliers in the competition to enhance their degree of private data utilization.
    Our findings contribute to a comprehensive understanding of pricing policies for data application products and provide valuable managerial insights. We also suggest several directions for future research, such as exploring subscription-based business models, pricing based on data application product usage, and pricing for vertically differentiated data application products.
    Research on Bank Decision-making under Blockchain-enabled Receivable Chain Platform Financing Mode
    YUAN Jijun, WU Shuang
    2025, 34(10):  233-239.  DOI: 10.12005/orms.2025.0334
    Small and medium-sized enterprises play an important role in the stability of the entire supply chain. Compared to core enterprises, they do not have enough initial funds and often face a shortage of funds during production. Due to the opacity of financial information, they are often subject to financial exclusion when financing from banks. Unable to obtain sufficient financing from banks, they will have to choose to raise funds from other channels, which makes them face higher risks and seriously affects the stability of the supply chain. To alleviate the current situation of difficult and expensive financing for them, supply chain finance has emerged as an innovative business. However, the effectiveness of supply chain finance business has always been influenced by some problems such as information asymmetry. Blockchain technology has received high attention from banks due to its ability to alleviate information asymmetry. Many banks have begun to introduce blockchain technology into traditional supply chain finance businesses, leading to changes in the traditional supply chain finance model. However, compared to traditional supply chain finance, from the perspective of bank decision-making, the issue of whether blockchain supply chain finance is necessarily a better choice has attracted attention and created discussion in the academic community.
    From the perspective of bank decision-making, this paper considers a two-echelon supply chain composed of a single capital-constrained supplier and a single core enterprise. After analyzing the income of the core enterprise, the supplier, and the bank under the traditional receivable pledge financing mode and the blockchain-enabled receivable chain platform financing mode, game models of the two financing modes are built based on market demand, financing constraints, and default penalties. The game models are analyzed under complete information and incomplete information, respectively. Based on differences in the supplier's initial funds, interest rates, and platform usage rates, the conditions under which the blockchain-enabled receivable chain platform financing mode brings higher returns to banks are derived from the game equilibrium requirements.
    A real-world example from the production and sales process is presented to verify the models. The simulation results show that: (1) when the loan interest rate is low, the blockchain-enabled receivable chain platform financing mode is more conducive to increasing the bank's profits under complete information; (2) when the default probability of the core enterprise and supplier increases, the blockchain-enabled receivable chain platform financing mode has a greater advantage over the traditional receivable pledge financing mode in terms of bank profits under incomplete information; (3) from the perspective of bank decision-making, the blockchain-enabled receivable chain platform financing mode is not unconditionally superior to the traditional receivable pledge financing mode. When the loan interest rate and platform usage rate satisfy certain constraints under low pledge rates, the blockchain-enabled receivable chain platform financing mode is the better choice for banks; in this case, the relationship between the loan interest rate and the platform usage rate cannot be determined simply, because it also depends on loan interest rates available through other channels. When the loan interest rate and platform usage rate satisfy certain constraints under high pledge rates, the blockchain-enabled receivable chain platform financing mode is again the better choice for banks; in this case, the loan interest rate is higher than the platform usage rate.
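    The trade-off described above can be sketched numerically. The following is an illustrative sketch only: the parameter names, profit expressions, and the assumed "transparency gain" are expository assumptions, not the paper's actual game model. It shows why the blockchain mode is not unconditionally superior: the platform usage fee must be weighed against the reduction in expected default loss.

    ```python
    # Stylized comparison of expected bank profit under the traditional
    # receivable pledge mode vs. a blockchain-enabled platform mode.
    # All formulas are illustrative assumptions, not the paper's model.

    def bank_profit_traditional(loan, interest_rate, default_prob, recovery_rate):
        """Expected profit: interest income minus expected default loss."""
        interest_income = loan * interest_rate
        expected_loss = default_prob * loan * (1 - recovery_rate)
        return interest_income - expected_loss

    def bank_profit_blockchain(loan, interest_rate, default_prob, recovery_rate,
                               platform_rate, transparency_gain=0.5):
        """Assumption: blockchain transparency cuts the default probability
        by `transparency_gain`, but the bank pays a platform usage fee
        proportional to the loan amount."""
        reduced_default = default_prob * (1 - transparency_gain)
        interest_income = loan * interest_rate
        expected_loss = reduced_default * loan * (1 - recovery_rate)
        platform_fee = loan * platform_rate
        return interest_income - expected_loss - platform_fee

    loan = 100.0
    trad = bank_profit_traditional(loan, 0.06, 0.10, 0.4)
    chain = bank_profit_blockchain(loan, 0.06, 0.10, 0.4, platform_rate=0.02)
    # The platform mode is preferred only when the default-risk reduction
    # outweighs the platform fee, echoing the finding that it is not
    # unconditionally superior.
    print(trad, chain, chain > trad)  # prints 0.0 1.0 True
    ```

    Under these particular numbers the risk reduction (3.0) exceeds the platform fee (2.0), so the platform mode wins; a higher platform usage rate or a lower default probability reverses the comparison, consistent with the constraint-based conditions derived in the paper.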
    The focus of this study is on whether the blockchain-enabled receivable chain platform financing mode necessarily brings higher profits to banks than the traditional receivable pledge financing mode, and under what constraints it is the better choice for banks. Future research will further optimize the bank's decision-making objectives and explore platform usage rates that maximize the bank's profit margin, given determined retail prices and market credit rates. These results will better guide banks' business management practices.