
Table of Contents

    25 March 2025, Volume 34 Issue 3
    Theory Analysis and Methodology Study
    Nonlinearly Weighted Convex Risk Measures Based on Market Regimes
    ZHUO Junhui, CHEN Zhiping
    2025, 34(3):  1-8.  DOI: 10.12005/orms.2025.0068
The concept of financial risk consists of two main components: the possibility of negative outcomes, namely loss, and the variability of expected results, known as deviation. In modern financial theory, risk measures have become the most important basis of risk management and an important tool for quantifying the size of risk. Early risk measures mainly focus on the degree of random fluctuation or dispersion of investment returns away from their mean, such as variance, lower partial moments and deviation measures. Since value at risk was proposed in 1997, almost all risk measures have been constructed based on the loss component of risk. One important reason is that a large number of empirical studies show that the random returns of financial assets do not follow the normal distribution but exhibit obvious skewness. Therefore, in order to ensure proper risk control, most scholars have turned their attention to tail risk measures, and an important task in this respect has been to propose coherent risk measures. Since then, some scholars have replaced the sub-additivity and positive homogeneity of coherent risk measures with the weaker property of convexity, and proposed a broader class of risk measures, called convex risk measures. Compared with coherent risk measures, convex risk measures can better reflect the change of risk as asset size expands, and can explain liquidity risk. However, research on convex risk measures, in contrast with that on coherent risk measures, mostly remains at a rather abstract level, and there are only a few cases in which they have been successfully applied to real financial investment problems. Considering that monotonicity and convexity are the main properties of risk measures accepted by both academia and industry, some scholars have recently put forward the concept of generalized convex risk measures. An important issue worthy of attention is therefore how to integrate the information of higher-order moments of random returns, the investor’s risk preference and changes in market macro conditions into the construction of new risk measures.
Considering these issues and the pros and cons of current risk measures, we make the following three contributions: (1)We propose a class of market regime-based nonlinearly weighted convex risk measures by combining, for the first time, generalized convex risk measures with market regime selection. We then demonstrate their theoretical properties and design a practical algorithm for market regime classification using the idea of Markov chains. (2)By considering different collections of market regime sets, we empirically demonstrate that the new risk measure performs better than related measures and can flexibly reflect the influence of different market regimes. The two-dimensional regime model is more suitable when the macro situation of the financial market is good, while the three-dimensional regime model is more suitable when the macro situation is poor; both are applicable whether the financial market is in a stable state or an extreme condition. (3)We establish a corresponding portfolio selection model based on the new risk measure, and a series of empirical tests shows the superior performance of the optimal portfolio under the new risk measure with respect to typical performance ratios. The in-sample and out-of-sample empirical results indicate the practicality, effectiveness and stability of the proposed risk measures and the corresponding portfolio selection models. Our new risk measures can therefore help investors make efficient and robust investment decisions.
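As a rough illustration of the regime-classification step, the sketch below labels two market regimes from rolling volatility and estimates a Markov transition matrix from the labeled sequence; the median-split labeling rule and all parameters are illustrative stand-ins, not the paper's actual algorithm.

```python
import numpy as np

def label_regimes(returns, window=20):
    # Illustrative two-regime labeling: high rolling volatility = regime 1.
    vol = np.array([returns[max(0, t - window):t + 1].std()
                    for t in range(len(returns))])
    return (vol > np.median(vol)).astype(int)

def transition_matrix(states, k=2):
    # Estimate Markov transition probabilities from the labeled sequence.
    counts = np.zeros((k, k))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    return counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
returns = rng.normal(0, 1, 500) * np.repeat([0.5, 2.0], 250)  # calm, then turbulent
print(transition_matrix(label_regimes(returns)))
```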
Building on the work in this paper, one can also select other newly proposed coherent risk measures or (generalized) convex risk measures to construct new risk measures based on the market regime selection method, following the approach developed here. Furthermore, the new risk measures proposed in this paper can be extended to the multi-period setting.
    Research on Airline Crew Scheduling Optimization Considering Overnight Risk
    LI Kunpeng, LI Jie, TIAN Qiannan
    2025, 34(3):  9-15.  DOI: 10.12005/orms.2025.0069
Labor costs are the second largest expense in the total operating costs of airlines (fuel comes first). Taking China Southern Airlines, China Eastern Airlines, Air China, Spring Airlines, Juneyao Airlines, and China Express Airlines as examples, the six airlines’ employee compensation costs accounted for about 20% of total operating costs in 2022, according to their publicly released annual reports. The two biggest controllable factors leading to aircraft delays and flight cancellations are the lack of crew members to connect flights and the lack of backup crew members. The cost caused by overnight crew risk is also a significant expense for airlines: some airports are susceptible to extreme weather, and at non-contracted hotels crews often check out 10 minutes past the agreed chargeable time, requiring airlines to pay for an additional half day or full day; an airline with 50 aircraft spends more than 100 million yuan per year on crew accommodation. Therefore, studying the optimization of airline crew scheduling considering overnight risk can effectively improve the current situation in which airlines earn thin profits or even lose money. The crew scheduling problem is usually decomposed into the crew pairing problem and the crew rostering problem. Since the crew pairing problem is the first stage of crew scheduling and is more important to the overall quality of the final schedule, this paper focuses on the crew pairing problem. There is still a gap in the literature on crew scheduling studies that consider overnight risk, and there is a lack of studies based on China’s civil aviation regulations on duty period limitations, flight time limitations and rest periods that simultaneously consider airlines’ operating costs and overnight risk. Therefore, we study the crew pairing problem considering overnight risk to minimize the crew pay cost, deadhead penalty cost, overnight cost, and penalty cost of overnight risk. Under the requirements of meeting all regulations (such as crew duty time and rest time), covering all flights, and ensuring the optimal utilization of all resources, a high-quality pairing plan is generated. Strengthening the management of crew members can not only effectively reduce costs but also improve flight operations. Reducing the number of crew overnight stays and the associated risk effectively safeguards the normal performance of flight tasks and has a significant impact on consumer satisfaction.
We model the crew pairing problem as a set-partitioning model (comprising a master problem and a subproblem). We design a heuristic algorithm based on column generation to solve the model. Column generation is widely used to solve large-scale integer programs: the optimal solution of the linear master problem is obtained by iteratively solving the master problem and the subproblem. The linear master problem is solved with CPLEX, and the resulting dual variables are passed to the subproblem. The subproblem is solved by a labeling algorithm, in which the label parameters, extension rules, and dominance rules are designed according to the characteristics of the problem. The purpose of solving the subproblem is to obtain columns with negative reduced cost and add them to the master problem. The master problem and the subproblem are solved iteratively until no column with negative reduced cost can be found, at which point the optimal solution of the linear master problem is obtained. If this solution is integral, it is the optimal solution of the problem. If it is fractional, we use a heuristic branching strategy to obtain a high-quality integer solution, invoking column generation at each branch node. Finally, the efficiency of the algorithm is verified by testing multiple sets of instances of different scales.
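The following minimal sketch shows the generic column generation loop described above, using a toy cutting-stock instance as a stand-in for crew pairing; the paper prices columns with a labeling algorithm over the flight network and solves the master with CPLEX, whereas here pricing is brute-force enumeration and the master is solved with SciPy's HiGHS interface.

```python
import numpy as np
from scipy.optimize import linprog

# Toy column generation loop (cutting stock standing in for crew pairing):
# rows are "flights" to cover; a column is a feasible "pairing" (pattern).
sizes = np.array([3, 5, 7])        # resource use of each row's activity
demand = np.array([30, 20, 10])    # required coverage of each row
capacity = 16                      # feasibility limit for one column

# Start with trivial single-activity columns so the master is feasible.
columns = [np.eye(3)[i] * (capacity // s) for i, s in enumerate(sizes)]

while True:
    A = np.column_stack(columns)
    # Restricted master LP: minimize number of columns used, cover all demand.
    res = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-demand,
                  bounds=(0, None), method="highs")
    duals = -res.ineqlin.marginals          # dual prices of coverage rows
    # Pricing: find a feasible column with negative reduced cost 1 - duals @ a.
    best, best_val = None, 1.0 + 1e-9
    for a in np.ndindex(*(capacity // sizes + 1)):
        a = np.array(a)
        if a @ sizes <= capacity and duals @ a > best_val:
            best, best_val = a, duals @ a
    if best is None:                        # no improving column: LP optimal
        break
    columns.append(best.astype(float))

print(f"LP bound {res.fun:.2f} using {len(columns)} columns")
```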
The experimental results show that the optimal solution or a high-quality integer solution can be obtained within 20 seconds for small-scale instances, and within 2 minutes for larger-scale instances. Analyzing the influence of overnight risk on the solutions shows that considering overnight risk effectively reduces the number of overnight stays at airports in risky areas, which not only improves flight operations but also helps airlines reduce the corresponding human resource costs, provides a scientific basis for actual operational decisions, and achieves the goal of cost reduction and efficiency improvement. Airlines should consider the risk of crews staying overnight at off-base airports when making flight schedules, and can use the algorithm proposed in this paper to assess this risk in advance and then adjust flight schedules to reduce it.
    Fare Transformation in Airline Network Revenue Management
    WU Xiang, ZENG Lishun, HUANG Yuxian
    2025, 34(3):  16-22.  DOI: 10.12005/orms.2025.0070
For the past decade, due to its great challenge and large potential gain, there has been growing interest in choice-based network revenue management in the academic community as well as the airline industry. An airline offers a set of fare products with different prices and restrictions on a single flight leg or on a connecting market with multiple legs. Many products share the same inventory, i.e. the seats of a single cabin, and a revenue management system maximizes total revenue by deciding which products should be available. Traditionally, airline revenue management systems are based on the Independent Demand Model (IDM), in which a customer considers purchasing a specific product, or leaves when the product is not available. This assumption has been questioned with the rise of restriction-free pricing introduced by low-cost carriers. Researchers and practitioners have therefore turned to the Discrete Choice Model (DCM), e.g. the well-known Multinomial Logit (MNL) model, which specifies the probability of purchase for each product as a function of the offered product set. There is extensive literature on the airline network revenue management problem. Compared with the single-leg problem, the network problem considers both non-stop and connecting customers on the same flight to maximize overall network revenue. It is widely known that the network problem is difficult to solve even under the IDM, not to mention its choice-based counterpart. Our motivation is to propose a simple but attractive way of integrating the DCM into the network problem. For the single-leg revenue management problem, a technique called fare transformation has been proposed to transform a DCM problem into an equivalent IDM problem, based on the idea of the efficient sets and the efficient frontier of a DCM.
We extend the idea of fare transformation to a specialized network problem with disjoint itineraries, where each customer segment considers only a disjoint set of products sharing the same itinerary. Based on the efficient frontier of each itinerary, we define the fare transformation in this network setting, which transforms a set of products under a DCM into a set of virtual products under the IDM whose fares are given by the slopes between consecutive efficient sets on the efficient frontier. We compare the original and the transformed problems in all three steps of the Dynamic Programming Decomposition (DPD) to show their equivalence. The DPD is a legacy solution approach to the network revenue management problem under the IDM assumption, while it is more complicated in the choice-based setting. Therefore, with fare transformation, a choice-based network problem with disjoint itineraries can be fit into a legacy revenue management system based on the IDM assumption, and the product substitution effect is taken into account without massive effort or investment.
We prove that the transformed IDM problem is equivalent to the original DCM problem, in the sense that, under the DPD approach, in any state an offered product set is the optimal control in the original problem if and only if its corresponding product set is optimal in the transformed problem. Moreover, if the underlying DCM of each itinerary is nested-by-fare-order, we can map each original product onto a new IDM product such that an original product is in the optimal offered set if and only if its corresponding product is in the optimal set of the IDM problem. Thus, we can decide whether a product should be available simply by comparing its transformed fare with the bid price, which is the legacy optimal control mechanism for the IDM problem. We also discuss the nested-by-fare-order property of a DCM, under which an efficient set must be a complete set consisting of all products with fares above some threshold value. It is proved that several typical DCMs are nested-by-fare-order: (1)the IDM, (2)the lowest open fare (LOF) model, and (3)the MNL. We further prove that a mixed model of the IDM and the LOF is also nested-by-fare-order, while a mixed model of the MNL with the IDM or the LOF does not have this property.
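To make the transformation concrete, the sketch below computes the efficient frontier and the slope-based transformed fares for a single itinerary under an MNL model; all fares and attraction values are illustrative, and the greedy frontier construction is a generic textbook procedure rather than the paper's exact algorithm.

```python
import itertools
import numpy as np

# Efficient frontier and transformed fares for one itinerary under MNL.
# Fares and attraction values are illustrative, not from the paper.
fares = np.array([100.0, 70.0, 40.0])   # product full fares
v = np.array([1.0, 1.5, 2.5])           # MNL attraction values
v0 = 1.0                                 # no-purchase attraction

def q_and_r(S):
    # Purchase probability and expected revenue of offer set S under MNL.
    idx = list(S)
    w = v[idx].sum()
    return w / (v0 + w), (v[idx] * fares[idx]).sum() / (v0 + w)

points = {S: q_and_r(S)
          for k in range(1, len(fares) + 1)
          for S in map(frozenset, itertools.combinations(range(len(fares)), k))}

# Trace the efficient frontier greedily: at each step, pick the offer set
# with the largest marginal revenue per unit of added purchase probability;
# that slope is the transformed (virtual) IDM fare of the step.
frontier, q_prev, r_prev = [], 0.0, 0.0
while True:
    best, best_slope = None, 0.0
    for S, (q, r) in points.items():
        if q > q_prev + 1e-12 and (r - r_prev) / (q - q_prev) > best_slope:
            best, best_slope = S, (r - r_prev) / (q - q_prev)
    if best is None:
        break
    frontier.append((sorted(best), best_slope))
    q_prev, r_prev = points[best]

for S, fare in frontier:
    print(f"efficient set {S}: transformed fare {fare:.2f}")
```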
Improved NSGA-II Algorithm for Multi-objective Slot Secondary Allocation Model Based on Flight Wave Operation
    CHEN Kejia, CHEN Jintao
    2025, 34(3):  23-29.  DOI: 10.12005/orms.2025.0071
With the rapid growth of air traffic flow, the air traffic network is becoming congested. Flight delays occur frequently and have become the primary issue faced by the civil aviation industry. When flight delays occur, collaborative slot secondary allocation can, to some extent, reduce the impact of delays on passenger travel and significantly reduce airlines’ operational and revenue losses. The focus of this research is to readjust the assignment between flights and slots, minimizing the total delay costs of passengers while ensuring fairness in the allocation results. This article establishes a multi-objective slot secondary allocation model under flight wave operation and designs an improved NSGA-II algorithm to solve it. Theoretically, it ensures scientific and efficient optimization of flight schedules and fills the research gap on slot secondary allocation under flight wave operation. Practically, this work provides an effective decision-making basis for the air traffic management department and airlines, and reduces the loss of benefits for passengers and airlines.
This paper establishes a multi-objective cooperative slot secondary allocation model, which takes the minimization of the total delay costs of passengers as the efficiency objective and the minimization of the Gini coefficient as the fairness objective. The objective function accounts for the delay costs of arriving and transferring passengers, as well as the equalization of airline delay times. In terms of constraints, not only are the operational characteristics of flight waves at the hub airport considered, but the maximum position shift (MPS) is also introduced to reduce the workload of airport controllers. In addition, this article develops an improved non-dominated sorting genetic algorithm to compute the Pareto optimal solution set of the model. The main process of the algorithm incorporates a duplicate-individual control strategy and a neighborhood search strategy: the former eliminates repeated individuals in the merged parent-child population, accelerating convergence, while the latter expands the search space through neighborhood search, enriching the Pareto solutions. Finally, using the flight operation data of a large hub airport on a certain day, we construct three instances of different scales for simulation experiments, and demonstrate the superiority of the improved algorithm by computing evaluation indices of the Pareto solution sets.
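As a concrete note on the fairness objective, the sketch below computes the Gini coefficient of airlines' average delay times using the standard sorted-index formula; the input numbers are made up for illustration.

```python
import numpy as np

def gini(delays):
    """Gini coefficient of average delay times across airlines;
    0 means delay is spread perfectly evenly."""
    x = np.sort(np.asarray(delays, dtype=float))
    n = len(x)
    if x.sum() == 0:
        return 0.0
    # Standard formula: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    i = np.arange(1, n + 1)
    return 2 * (i * x).sum() / (n * x.sum()) - (n + 1) / n

print(gini([10, 10, 10, 10]))   # 0.0  -> perfectly fair
print(gini([0, 0, 0, 40]))      # 0.75 -> one airline bears all the delay
```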
The solution results show that all Pareto solutions satisfy the constraints, and the Pareto front is uniform and close to the origin, indicating the applicability of the algorithm and the richness and optimality of the solution set. The total delay costs of passengers obtained by the improved NSGA-II are 14.2% lower than those of the FCFS method, while the Gini coefficient decreases by 46.3%. This demonstrates that the improved NSGA-II algorithm can effectively reduce the total delay costs of passengers while guaranteeing fairness in the average delay times across airlines. Moreover, the experiments on the series of instances show that, for both the minimum and the average values of the objective functions, the results of the improved NSGA-II algorithm are better than those of the original NSGA-II algorithm. In terms of the IGD index, the improved NSGA-II algorithm is 14%, 17%, and 31% smaller than the original on the three instances, respectively, indicating better optimization performance. In terms of the SP index, the improved NSGA-II algorithm is also smaller, demonstrating a higher-quality Pareto solution set. As for average running time, the improved NSGA-II algorithm takes little extra time, which shows that the improvement strategies are time-efficient.
    Research on Optimal Subsidy Policy to Increase Effective Supply of Elderly Services
    MA Yiling, WANG Xiaoli, GUO Qian, LIN Ruofei
    2025, 34(3):  30-36.  DOI: 10.12005/orms.2025.0072
According to China’s Seventh National Census, the percentage of the population aged over 60 had reached 18.7%, up 5.44 percentage points from the Sixth National Census. Meanwhile, the numbers of empty-nesters and disabled elderly have also been increasing. Due to the one-child policy, the population dependency ratio has risen markedly and demand for elderly services continues to grow. Thus, accelerating the development of the elderly service system is an important task of the national positive ageing strategy. Since the elderly service industry is characterized by large investment, small operating profits and a long payback period, it is necessary to attract private capital through financial subsidies. In fact, the “Opinions on Strengthening the Efforts to Tackle Population Aging in the New Era” issued by the State Council proposed to increase government spending on senior services from the central budget and lottery public welfare funds. Despite increasing government funding for senior services, the funding gap continues to grow in the face of rapid aging. Therefore, how to use financial resources more effectively to improve the quality of elderly services, raise consumption of elderly services, and promote the development of the elderly service industry has become an urgent issue.
Current subsidy policies fall into two main categories: supply-side subsidies, provided to service providers (e.g., operating or construction subsidies), and demand-side subsidies, offered directly to seniors (e.g., elderly service vouchers or cash). There is no uniform subsidy scheme in China, and the policy varies from province to province. Given this situation, the government is interested in the following questions: Which subsidy policy is preferable for achieving the above objectives? Is it better to grant seniors demand-side incentives to generate additional demand, or to allocate more resources to providers to improve service capability? Can the same subsidy scheme coordinate the system consisting of home-based, community and institutional service providers? To answer these questions, this paper constructs a three-stage dynamic game model involving the government, service providers and the elderly, and studies how to design a subsidy policy that maximizes the effective supply of elderly services and promotes a coordinated system of home-based, community and institutional elderly care under limited financial funds.
The following results are derived. The optimal subsidy policy is influenced by the fund budget, service mode, providers’ service quality and so on. Choosing an appropriate subsidy scheme can alleviate the structural contradiction between supply and demand and improve government performance. More specifically, a pure demand-side subsidy scheme can increase the purchase rate of elderly services, but it cannot influence providers’ service quality: home-based, community and institutional providers are consistent in their quality choices and all serve at the lowest process quality. Therefore, a pure demand-side subsidy is not conducive to the development of home-based providers, for whom process quality carries a high weight. A pure supply-side subsidy scheme cannot directly influence the purchase rate, but it can influence providers’ quality choices, and the quality choices of home-based, community and institutional providers differ; the government can thus design different optimal subsidy policies for the three service modes. However, pure supply-side schemes have funding thresholds, and financial subsidies beyond the threshold are ineffective, leading to a waste of financial resources. A two-sided subsidy scheme can simultaneously compensate for the shortcomings of demand-side subsidies, which cannot affect service quality, and of supply-side subsidies, which waste financial resources, but it also reduces the amount of subsidy on each side within limited financial resources. The numerical simulation results show that the supply-side subsidy is the best scheme when funds are insufficient, and its impact on government performance objectives is much higher than that of the demand-side subsidy, especially in the home-based service mode. When financial funds are sufficient, an adequate supply-side subsidy ensures that providers serve at the highest quality standard, and the priority of subsidy needs to shift from the supply side to the demand side so as to create demand-led growth; at this point, the two-sided subsidy scheme becomes optimal. The gap between the other subsidy schemes and the optimal scheme keeps widening as financial resources increase. These findings can serve as a reference for the government in formulating subsidy policies to promote the development of the elderly service industry. Subsidy policy should not be applied in a “one-size-fits-all” manner, but should be differentiated according to budget levels, service modes and the operating characteristics of service providers.
    K-means Clustering Based on Improved Equilibrium Optimization Algorithm and its Application
    ZHU Xuemin, LIU Sheng, ZHU Xuelin, YOU Xiaoming
    2025, 34(3):  37-44.  DOI: 10.12005/orms.2025.0073
Clustering classifies data by attribute similarity, so that objects within the same class are highly similar while objects in different classes differ significantly. K-means is the most classical clustering algorithm: after the number of clusters k is determined, it divides the data objects into k classes, following the principle of making within-class similarity as high as possible and between-class similarity as low as possible. Thanks to its simplicity, efficiency and ease of implementation, K-means has been widely used in logistics site selection, image segmentation, data classification and other fields, but it still has shortcomings, such as the strong randomness of the initial cluster centroids and the tendency to fall into local optima.
To address these shortcomings, scholars have improved K-means in various ways, among which swarm intelligence optimization algorithms, a popular topic in current research, are considered feasible to combine with K-means. The literature shows that combining swarm intelligence optimization with K-means clustering can yield better parameter values. The equilibrium optimizer (EO), a new swarm intelligence algorithm proposed in 2020, simulates the dynamic mass balance process in a control volume and performs better at finding optima than classical algorithms such as basic particle swarm optimization and ant colony optimization. However, like other intelligent algorithms, the EO initializes its particle concentrations with great randomness when solving optimization problems, which may cause individuals in the population to aggregate and population diversity to decrease. Moreover, its particle concentration update always depends on the concentration update equation of the equilibrium pool, which gives the population strong global exploration ability but weak local exploitation ability. To address these shortcomings, scholars have successively improved the EO, and these strategies have improved its search performance to a certain extent; however, when facing large-scale function optimization problems, the results of the improved EO are still not ideal, leaving room for further optimization.
Therefore, to solve the problem more effectively, we combine an improved EO with the K-means clustering algorithm and propose the K-means clustering algorithm based on the improved EO (IEO-K-means). First, the EO is improved by introducing a diversity measure strategy to assess population diversity; if the population diversity exceeds a threshold, the proposed hybrid backward learning mechanism combining reflection and inversion is used to re-initialize the population and enhance its diversity. Further, a nonlinear time parameter and the golden sine method are introduced to update the particle concentrations in the equilibrium pool, enhancing the population’s global search ability in early iterations and ensuring continued exploitation in late iterations. Finally, the improved EO is used to optimize the initial centroids for K-means clustering, reducing the computational overhead and mitigating problems such as sensitivity to the initial cluster centers, so as to achieve better clustering results.
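The sketch below shows the general pattern of seeding K-means with centroids found by a global optimizer; SciPy's differential evolution stands in for the paper's improved EO, and the data and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Optimizer-seeded K-means: a global optimizer picks initial centroids,
# then K-means refines them. Differential evolution stands in for IEO.
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
k, d = 4, X.shape[1]

def sse(flat_centroids):
    # Objective: within-cluster sum of squared distances.
    c = flat_centroids.reshape(k, d)
    dist = ((X[:, None, :] - c[None, :, :]) ** 2).sum(-1)
    return dist.min(axis=1).sum()

bounds = [(X[:, j].min(), X[:, j].max()) for _ in range(k) for j in range(d)]
res = differential_evolution(sse, bounds, seed=0, maxiter=50, tol=1e-6)
init = res.x.reshape(k, d)

km = KMeans(n_clusters=k, init=init, n_init=1, random_state=0).fit(X)
print(f"optimizer-seeded SSE: {km.inertia_:.1f}")
```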
Then, UCI datasets with different characteristics are tested and compared with several well-known algorithms. The simulation results show that the IEO-K-means algorithm converges faster, has a better clustering effect, and has good optimization performance. Finally, IEO-K-means is applied to customer classification, using the retail dataset of a global superstore from the Kaggle platform. The RFM model, the most classical tool in customer value analysis, is used to build customer profiles. Customers are classified into five categories: important value customers, important development customers, important retention customers, average development customers, and low value customers. We then propose corresponding management suggestions for these customer categories.
In future work, the proposed IEO-K-means clustering can be applied to other challenging optimization problems, such as logistics site selection, credit risk assessment, network intrusion detection, and smart city management. In addition, other advanced algorithms, such as the marine predator algorithm and the snake optimization algorithm, can be applied to improve the K-means clustering algorithm and further enhance its clustering effect.
    Study on Optimal Leasing Service Strategy of Electric Vehicle Supply Chain
    WU Doudou, LI Jizi
    2025, 34(3):  45-50.  DOI: 10.12005/orms.2025.0074
In recent years, new car-manufacturing forces have emerged under the guidance of the national “dual carbon” strategy. The automotive industry is accelerating towards green and low-carbon development, and battery swapping, as an efficient energy replenishment method, has received great attention from the industry. Electrification is the most promising path towards decarbonization. Carbon reduction in the automotive industry cannot be achieved without the efforts of battery suppliers, automotive companies, and consumers, among which automotive companies and battery providers, as direct suppliers of automotive products, play a pivotal role. Against this background, the article mainly addresses four issues: First, who provides the electric vehicle battery rental service? Second, what is the optimal equilibrium between electric vehicle manufacturers and battery providers under different rental service modes? Third, how do electric vehicle manufacturers and battery providers make decisions to maximize benefits? Fourth, what leasing strategies are most beneficial to consumers?
Taking a supply chain composed of an electric vehicle manufacturer (m) and a battery provider (s) as the research object, we construct a game model of rental services in the electric vehicle supply chain and examine three rental service strategies for supply chain members: strategy M (a rental service led by m), strategy S (a rental service led by s), and strategy C (m and s cooperating to lead the leasing service). We study the optimal leasing service level and pricing under the three strategies, as well as the impact of related factors, such as the number of battery rentals, the marginal cost of leasing services, and the cost-sharing ratio, on the optimal decisions.
The research results show that: (1)Under strategy M, increasing the number of battery rentals can promote m to improve its rental service level, and consumers are more willing to increase demand under high-level rental services. (2)Under strategy S, when s acts as the leader of leasing services, increasing the number of battery rentals correspondingly raises the battery rental price, which is more conducive to improving the battery provider’s profit. (3)Under strategy C, when the manufacturer and the battery provider cooperate to lead rental services, the battery provider actively shares a proportion θ of the rental service cost with the manufacturer, so the manufacturer always holds an advantageous position in terms of rental service costs. Under strategy C, the manufacturer and the battery provider work together to improve the leasing service level, enabling consumers to obtain more high-quality services; thus, strategy C attracts more consumers.
At the same time, the following managerial implications are obtained: (1)Manufacturers: when leading the rental service, vigorously improving the rental service level can win more consumers’ favor, and increasing consumer demand and the number of battery rentals benefits both manufacturers and battery providers. (2)Battery providers: as the leader of leasing services, strategy S is more conducive to improving battery providers’ profits; as a follower, the manufacturer can obtain relatively high profits only when the degree of rental service preference and the marginal cost of the rental service level meet certain conditions. (3)Consumers: under strategy C, manufacturers and battery providers work together to improve rental service levels, enabling consumers to receive more high-quality services. (4)Government: when making policies related to electric vehicles, the government should call on or guide chain members to keep the marginal cost of rental services within a reasonable range while striving to improve rental service levels.
The conclusions of this article provide theoretical guidance for m and s in implementing battery rental service strategies in a supply chain environment. However, this article only studies the choice of leasing service leader and the optimal leasing service strategies, without addressing consumer market segmentation. Dual-channel sales between electric vehicle manufacturers and retailers is also a direction for further research.
    Intelligent Optimization Models for Improving Consistency of Pairwise Comparison Matrices
    ZHANG Jiawei, LIU Fang, LIU Zulin
    2025, 34(3):  51-56.  DOI: 10.12005/orms.2025.0075
The pairwise comparison matrix (PCM) is the basic mathematical tool of the analytic hierarchy process (AHP). Ordinal consistency and acceptable consistency are two important concepts that capture the degree of consistency of a PCM, and ordinal consistency is considered the minimum requirement for rational judgments. However, in real cases the PCM provided by the decision maker usually exhibits neither ordinal nor acceptable consistency. It is therefore of great practical and theoretical significance to investigate methods for improving the consistency of the PCM.
In recent years, although many methods have been proposed to improve ordinal and acceptable consistency, some gaps remain. (1)Existing strategies for eliminating entries that violate ordinal consistency are rather limited. (2)Only a few studies make the modified PCM satisfy both ordinal and acceptable consistency. (3)Solving some of the established optimization models is complicated, and the obtained results may not be optimal due to strict constraints. (4)The deviation between the modified PCM and the original one may be large, and some revised elements may even exceed the 1/9-9 scale.
In traditional AHP, the formation of the PCM is treated as a whole. In fact, forming a PCM is a complicated process related to psychology, knowledge and so on, especially when the number of alternatives is large. Recently, the leading principal submatrices model has been proposed as a new fundamental tool for decision analysis, of which the typical AHP model is a special case. Its advantage is that it provides a systematic way to discover the decision maker’s irrational behavior when evaluating the importance of alternatives.
In this paper, the leading principal submatrices model of the PCM is used to identify entries that violate ordinal consistency, with the smallest number of modified entries chosen as the revision strategy. It is found that there may be more than one strategy for adjusting such entries, so a minimal adjustment strategy is proposed. Three optimization models are established and solved using the Gaussian quantum-behaved particle swarm optimization algorithm (GQPSO): the first considers only ordinal consistency, the second only acceptable consistency, and the third both ordinal and acceptable consistency. Finally, the feasibility and efficiency of the models are verified by comparison with existing methods.
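For reference, acceptable consistency in AHP is usually checked with Saaty's consistency ratio, which the models above target as a constraint; the sketch below computes it for a small illustrative PCM (the matrix itself is made up).

```python
import numpy as np

# Saaty's consistency check for a pairwise comparison matrix (PCM).
# Random index (RI) values for n = 1..9 from the standard AHP table.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(A):
    n = A.shape[0]
    lam_max = np.linalg.eigvals(A).real.max()   # principal eigenvalue
    ci = (lam_max - n) / (n - 1)                # consistency index
    return ci / RI[n]                           # CR < 0.1 => acceptable

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(f"CR = {consistency_ratio(A):.3f}")       # well below 0.1 here
```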
Unlike previous studies, we propose a simple method to identify entries violating ordinal consistency and solve the problem of simultaneously repairing ordinal and acceptable consistency, achieving intelligent solution of the optimization models while minimizing the adjustment of decision information. The proposed models can provide the decision maker with more accurate and more easily accepted modification recommendations.
    In the future, the proposed approach can be extended to investigate the consistency of additive reciprocal matrices, interval-valued comparison matrices, etc. Meanwhile, though the GQPSO algorithm achieves good performance in solving the established models, whether it outperforms other intelligent algorithms deserves further research.
    Research on Online Public Opinion Governance Based on Large Scale Group Decision Making-SEIR
    CUI Chunsheng, SHANG Shaoguo
    2025, 34(3):  57-62.  DOI: 10.12005/orms.2025.0076
In the new media era, the Internet has become indispensable in people’s lives, and unexpected crisis events can easily ferment rapidly in cyberspace, forming an online public opinion crisis that threatens social stability. When such a crisis occurs, if the government fails to manage online public opinion effectively and defuse opinion conflicts, public disorder and social panic will follow. The development of online public opinion is jointly influenced by multiple participants, such as official media, internet celebrities and netizens, and because of the convenience of social networking platforms, online public opinion spreads very rapidly, which is undoubtedly a challenge for the government. Therefore, in the context of modernizing the national governance system and governance capacity, in-depth research is needed on how to manage online public opinion quickly and effectively.
Based on the multi-point outbreak characteristics of sudden crisis events and the promoting effect of internet celebrities on event evolution, this paper adopts large-scale group decision-making and SEIR evolution models to study, respectively, the formation mechanism and evolution process of online public opinion. It analyzes the threshold of public opinion dissemination and the evolution trend under different conditions, and studies the selection of governance strategies under different levels of public opinion heat, verifying the effectiveness and scientific soundness of the method through case simulation. We use Python to crawl popular comments under Weibo topics, and process and compute on these real data to derive internet celebrities’ behavioral trends. We then simulate the actual development of emergencies with the SEIR model and analyze how the behavioral decisions of official media and internet celebrities affect netizens’ emotional states, as well as the formation mechanism and evolution process of online public opinion. In addition, effective strategies are provided for preventing and managing online public opinion crises.
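As a rough illustration of the epidemic-style opinion dynamics, the sketch below integrates a basic SEIR system with forward Euler steps; the states are reinterpreted for opinion spread, and all rates are illustrative rather than estimated from the paper's Weibo data.

```python
# Minimal SEIR sketch for opinion spread: S (unaware), E (exposed),
# I (actively spreading), R (no longer spreading). Rates are illustrative.
beta, sigma, gamma = 0.6, 0.3, 0.2   # exposure, activation, recovery rates
S, E, I, R = 0.99, 0.0, 0.01, 0.0    # initial population fractions
dt, T = 0.1, 60                      # Euler step and horizon

for _ in range(int(T / dt)):
    dS = -beta * S * I
    dE = beta * S * I - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt

# Spread threshold: R0 = beta / gamma; the opinion dies out if R0 < 1.
print(f"final R = {R:.2f}, R0 = {beta / gamma:.2f}")
```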
This study investigates the governance of online public opinion in sudden crisis events based on large-scale group decision-making and the SEIR evolution model, and explores the evolution of netizens’ emotional states under different heat levels. First, in the early stage of a sudden public crisis, when public opinion heat is low, it is crucial for the official media to detect and guide public opinion in a timely manner, preventing its spread and handling the event as early as possible, thus saving time and resources. Second, once public opinion heat rises to a certain level, governance becomes more difficult, so the government needs to increase the authority of official reports and regulate the speech of internet celebrities to achieve effective governance. Finally, it is necessary to strengthen netizens’ internet literacy education and improve their ability to distinguish right from wrong, so that they can correctly judge information during online public opinion events and reduce the possibility of vicious evolution.
Further research is significant because monitoring the development of online public opinion aids its governance. For example, monitoring the heat level of dissemination captures changes in heat, so that appropriate handling measures can be selected at different levels, better stopping the vicious development of online public opinion while reducing material and human resource investment. In addition, better monitoring can detect crises in a timely manner, giving the relevant departments enough time to react and manage them.
    Research on Pricing Decision of Traditional Automobile Enterprises Considering R&D Investment under “Dual-credits” Policy
    ZHENG Yanfang, ZHAO Qiaojie, DANG Yongjie
    2025, 34(3):  63-69.  DOI: 10.12005/orms.2025.0077
China implemented the “dual-credit” policy in 2018. The new version released in 2021 explicitly encourages energy saving in traditional fuel vehicles (FV) and prevents automobile enterprises from focusing so heavily on pure electric vehicles that they neglect reducing the fuel consumption of traditional FV. Since the policy was implemented, most backbone passenger automobile manufacturers have failed to meet the assessment standards, the innovativeness of the FV and new energy vehicles (NEV) produced is still not high, and R&D investment is quite low. Most previous studies have ignored the fuel consumption of FV, and even the few studies that address this problem have not considered the influence of FV R&D investment on a traditional automobile enterprise’s NEV pricing decisions. We therefore explore how R&D investment affects the decision-making of automobile enterprises, providing suggestions for the R&D investment decisions of traditional automobile enterprises.
This paper takes traditional automobile enterprises that produce both FV and NEV as the research object. Through a two-stage game model composed of a manufacturer and a retailer, we first analyze the enterprises’ pricing decisions with R&D investment under decentralized and centralized scenarios. Second, based on the optimal decisions, we conduct sensitivity analyses of key parameters on vehicle quantities and profits. Finally, we adopt the K-S method to redistribute the profits obtained by the member enterprises of the supply chain. In view of the computational complexity, numerical examples are used to discuss how certain parameters affect supply chain profit and credit trading quantity, and to verify the effectiveness of the coordination mechanism. The results show that: (1)Under the “dual-credit” policy, increasing R&D investment in a certain type of vehicle may reduce its own output and increase the output of substitute products. (2)Increasing R&D investment in FV can improve the supply chain’s profit, while increasing R&D investment in NEV may reduce it. (3)Increasing the credit trading price and reducing the proportion requirement for NEV production can improve the supply chain’s profit. The following managerial implications can be drawn. (1)When making R&D investment decisions, traditional automobile enterprises should evaluate their current level of R&D investment. (2)When investing in NEV R&D, traditional automobile enterprises should reasonably control the intensity of the investment. (3)The government should use policies to properly regulate the credit trading price, so that the “dual-credit” policy truly promotes the high-quality development of NEV.
As this paper is based on certain assumptions, it can be extended in the following directions. First, we consider only a monopoly game, yet competition among multiple automobile enterprises is common in the automobile market, so the decision-making of two or more competing enterprises considering R&D investment is worthy of future study. Second, the credit trading price is exogenous here, while in reality it is affected by the demand and supply of credits, so it is necessary to treat the credit trading price as endogenous in the future.
    A New Consensus-feedback Decision-making Model Based on Sentiment Analysis: A Case Study of COVID-19 Management
    YANG Wei, ZHANG Luxiang
    2025, 34(3):  70-75.  DOI: 10.12005/orms.2025.0078
A group consensus decision-making method is proposed to adjust expert opinions based on public opinions in a social network environment. Sentiment analysis and TF-IDF techniques are used to process public opinions on social media platforms and determine the attributes and attribute weights. Experts provide linguistic evaluation values to form decision matrices, which are transformed into intuitionistic fuzzy decision matrices. The credibility degrees of the experts are then calculated from these matrices, and the collective credibility degree matrix is computed using the attribute weights. Experts’ consensus degrees are used to calculate the decision makers’ weights, the consensus threshold and the confidence threshold. An expert with a low consensus degree is asked to revise his/her evaluation values to decrease the deviation from other experts. If the expert agrees to revise, the expert with the highest similarity to him/her among those who have reached consensus is identified as a reference, and mathematical programming models are set up to calculate the minimum adjustment cost. Experts who refuse to adjust must provide reasons for their rejection, and the other experts give their degrees of recognition. If the other experts approve the rejection reasons, they adjust their own opinions; otherwise, the weight of the refusing expert is reduced through a reduction coefficient. This process is repeated until consensus is reached. Finally, the comprehensive evaluation values of the alternatives are calculated using the intuitionistic fuzzy weighted average operator and ranked accordingly.
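For context, the final aggregation step uses the standard intuitionistic fuzzy weighted average (IFWA) operator; the sketch below implements that operator and the usual score function on made-up evaluation values and weights.

```python
import numpy as np

def ifwa(values, w):
    """IFWA operator: values is a list of (membership mu, non-membership nu),
    w is a weight vector summing to 1. Standard aggregation formulas:
    mu = 1 - prod((1 - mu_i)^w_i), nu = prod(nu_i^w_i)."""
    mu = 1 - np.prod([(1 - m) ** wi for (m, _), wi in zip(values, w)])
    nu = np.prod([n ** wi for (_, n), wi in zip(values, w)])
    return mu, nu

def score(mu, nu):
    # Standard score function used to rank intuitionistic fuzzy numbers.
    return mu - nu

vals = [(0.7, 0.2), (0.5, 0.3), (0.6, 0.3)]   # illustrative evaluations
w = [0.4, 0.35, 0.25]                          # illustrative weights
mu, nu = ifwa(vals, w)
print(f"aggregated ({mu:.3f}, {nu:.3f}), score {score(mu, nu):.3f}")
```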
The proposed method is illustrated with a COVID-19 management problem. First, public comments on the pandemic situation in Xi’an, Shanghai, Chengdu and Beijing are collected from Weibo, covering resident life, epidemic prevention measures, government management and community management. Sentiment analysis is applied to these public opinions to derive an intuitionistic fuzzy public preference matrix. The new consensus-feedback decision-making model is then used to rank the four cities. The results demonstrate that the proposed method effectively enhances the credibility of decision makers’ opinions. Furthermore, conflicting opinions acknowledged by the decision experts are preserved, making the proposed method more objective and accurate.
Compared with existing research on consensus-feedback problems, public opinions are used to derive the attributes of the evaluation problem and to provide reference information for experts in the interactive process, which improves public participation in decision making and makes the decision results easier for the public to accept. In the consensus process, experts who refuse to modify their evaluation values may give their reasons, and the other experts decide whether to accept them, which respects experts’ opinions and improves the quality of the decision results.
In the context of group consensus decision making in social networks, many issues are still worth further study, such as the impact of dynamic public opinions, the computational complexity of large-scale group decision-making problems, and the influence of different types of experts in the decision-making process. Future research will focus on extending the proposed method to dynamic decision-making problems.
    Product Pricing and Green Promotion Decisions Considering Supply Chain Dominant Structure under Information Asymmetry
    YANG Jianhua, XIE Wenqian, SUN Yiyuan, LIU Yuying
    2025, 34(3):  76-83.  DOI: 10.12005/orms.2025.0079
Carbon emission reduction information asymmetry and the dominant structure of supply chain participants play crucial roles in shaping the product pricing strategies of low-carbon manufacturers and the green promotion efforts of platforms. In practice, while platforms are often responsible for promoting sustainable products to environmentally conscious consumers, the true carbon reduction capabilities of manufacturers are not always transparent. Such information asymmetry creates strategic uncertainty, which can distort market efficiency and weaken the effectiveness of environmental incentives. This paper develops and solves four decision-making models that reflect different combinations of information symmetry and dominance structures: (1)full information with platform dominance, (2)full information with manufacturer dominance, (3)asymmetric information with platform dominance, and (4)asymmetric information with manufacturer dominance. The models combine Stackelberg game theory, in which the dominant party acts first, with signaling game theory to analyze strategic interactions under incomplete information.
    The following three key questions are answered by comparing manufacturers’ product pricing and the platform green promotion strategy under different scenarios: (1)How does the information asymmetry of carbon emission reduction affect manufacturers’ optimal product pricing and platform optimal green promotion decisions under different dominant structures? (2)What is the difference in the mechanism of the dominant firm’s first-mover advantage under different carbon reduction information structures? (3)What are the conditions for information sharing between manufacturers and the platform under different dominant structures?
The results show that: (1)Under both dominance structures, the manufacturer’s product pricing and the platform’s green promotion decisions are likely to deviate from the optimum due to carbon reduction information asymmetry. On the one hand, the dominant platform’s first-mover advantage may disappear under information asymmetry, and both the manufacturer and the platform may gain higher profits when acting as followers. On the other hand, the effect of dominance on the manufacturer’s private information advantage depends on the emission reduction technology level and market size. (2)For a manufacturer with a high emission reduction technology level under platform dominance, sharing information with the platform is more beneficial. However, for a dominant manufacturer with a low emission reduction technology level, or with a high technology level but small market size, whether to share information with the platform does not affect firm profits.
    The insights are: (1)Manufacturers must carefully align their pricing decisions and information disclosure strategies with their actual carbon reduction performance and their relative dominance in the supply chain. For example, a dominant manufacturer with advanced emission reduction technology should consider setting a higher price to prevent imitation by competitors with lower carbon reduction standards. Alternatively, such a firm may benefit from proactively disclosing its emission credentials to the platform, enabling the platform to enhance green promotion efforts and improve consumer trust. (2)Whether the platform incentivizes manufacturers to disclose low-carbon information should be judged based on the type of manufacturers present. When dealing with manufacturers with advanced carbon reduction capabilities, the platform has a strong incentive to encourage transparent disclosure, as such information enhances the effectiveness of green promotion and increases consumer engagement. In contrast, when the manufacturer’s carbon reduction technology is weak or unverifiable, the platform may find little value in promoting information sharing and should instead invest in other verification mechanisms or certification schemes.
    While this study provides a foundational understanding of the strategic interactions between platforms and manufacturers under different information and dominance settings, it is limited to the agency model of platform-manufacturer cooperation. In the agency model, the manufacturer sells products directly to consumers through the platform, and the platform earns a commission. However, an important direction for future research lies in extending this framework to the resale model, in which the platform purchases products from the manufacturer and then sells them to end consumers.
    A Newsvendor Problem Based on Target Consumers
    WANG Dandan, WU Hecheng
    2025, 34(3):  84-91.  DOI: 10.12005/orms.2025.0080
Clearance is an important marketing tool for the seller to dispose of redundant perishable products, as they easily deteriorate over time. In view of possible future discounts, strategic consumers decide whether to buy at full price or wait for the clearance, while myopic consumers either buy early or leave the market. The coexistence of strategic and myopic consumers is common in real life, and this paper studies the newsvendor problem in this setting. Given the diversity of consumer demand, the seller is better off classifying consumers and targeting the most valuable ones. Consumer targeting plays an essential role in precision marketing and customer relationship management; for example, the target consumers of high-end fashion or luxury products are obviously fewer than those of low-end fashion. Considering the difficulty of acquiring and processing consumer information, the seller may entrust a professional organization with identifying the target consumers, namely those expected to buy the products early. The seller’s decision on the full price and order quantity given the target consumers deserves attention. Moreover, capacity rationing is widely considered an efficient marketing method for alleviating the negative effect of strategic waiting behavior. Thus, a newsvendor model is constructed in which the target consumers are given and the products are rationed. Consumers with high willingness to pay are more likely to buy early, so the target consumers comprise those whose valuation of the products remains high, while consumers with low valuations are treated as un-targeted. The proportion of targeted strategic consumers among strategic consumers cannot exceed that of targeted myopic consumers among myopic consumers. On this basis, three consumer-targeting strategies are proposed, differing in whether the strategic consumers or all the myopic consumers are targeted. The Rational Expectation (RE) framework is applied to solve the newsvendor model, in which the seller sets the full price and order quantity to maximize profit while inducing the targeted consumers to buy early and the un-targeted consumers not to. Given the total number of target consumers, this paper also compares the seller’s profits under these strategies, guiding the seller toward the optimal consumer-targeting strategy. Throughout the analysis, the impact of consumers’ composition and heterogeneity on the decisions is emphasized.
    The results are as follows. (i)The seller may abandon a target consumer market comprised of only a few strategic consumers but many myopic consumers if their heterogeneity is significant. (ii)The more strategic consumers are taken as target consumers, the lower the price that can be charged, which increases the utility of buying early, or the fewer the products that might be ordered, which decreases the availability of discounted products; both levers induce early purchases. However, the seller opts to order in larger quantities when the strategic consumers' ratio exceeds a threshold, because the newly targeted strategic consumers already contribute a large decrease in the availability of discounted products, and a further decrease in order quantity could lead to inadequate supply. This means that the target customers of some fast-fashion brands, such as UNIQLO and Zara, may include a large proportion of strategic consumers, so the retailer cannot blindly reduce supply to induce early purchases, but needs to consider the scale of strategic consumers in the market. (iii)Significant consumer heterogeneity implies a low minimal valuation among target consumers, which forces a somewhat low price. Nevertheless, the seller may benefit from significant consumer heterogeneity if the proportions of target strategic and target myopic consumers are small or the unit order cost is trivial. This indicates that the seller can appropriately reduce the order cost (for example, by changing the mode of transportation or signing wholesale price contracts with suppliers) or choose a consumer-targeting strategy that includes a small number of myopic or strategic consumers, so as to alleviate the adverse effect of consumer heterogeneity. (iv)The consumer-targeting strategy that excludes strategic consumers becomes infeasible when the total number of target consumers exceeds a threshold; otherwise, the strategy that targets the strategic consumers and all the myopic consumers is infeasible. Combining the analytical and numerical studies, targeting a large number of myopic consumers, i.e., targeting all the myopic consumers or excluding the strategic consumers, is optimal if their heterogeneity is small. Targeting a few myopic consumers and a large number of strategic consumers is preferred only if their heterogeneity becomes significant and the target consumers' ratio is moderately large, where the target myopic consumers' proportion remains over a threshold or falls between two thresholds.
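    To make the trade-off concrete, here is a minimal numerical sketch of this class of model, not the paper's RE formulation: strategic consumers compare buying at full price with waiting for a rationed clearance, and the seller searches over price and order quantity. All parameter values and the simple availability rule are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper).
v, s, c = 10.0, 3.0, 2.0     # valuation, clearance price, unit order cost
N_m, N_s = 60, 40            # myopic and strategic consumer counts

def profit(p, Q):
    """Seller's profit when strategic consumers rationally compare
    buying early at price p with waiting for the clearance at price s."""
    early = N_m if v >= p else 0              # myopic: buy now or leave
    leftover = max(Q - early, 0)              # units left for the clearance
    avail = min(1.0, leftover / N_s)          # rationing: chance of getting one
    if v - p >= avail * (v - s):              # strategic: early beats waiting
        early += N_s
    sold_full = min(early, Q)
    sold_clear = min(max(Q - sold_full, 0), N_s)
    return p * sold_full + s * sold_clear - c * Q

best = max(((profit(p, Q), p, Q)
            for p in np.linspace(s, v, 60)
            for Q in range(N_m + N_s + 1)),
           key=lambda t: t[0])
print("profit=%.1f at p=%.2f, Q=%d" % best)
```

    Even this toy version reproduces the paper's basic tension: lowering the price or cutting the order quantity are the two levers for inducing strategic consumers to buy early.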
    Research on Price and Service Rate Decision Considering Aversion to Service Waiting Loss
    XIE Xiangtian
    2025, 34(3):  92-97.  DOI: 10.12005/orms.2025.0081
    Because service processing takes time, customers who join a queue must wait for service. Based on previous experience of waiting in line and on consulting artificial intelligence tools, customers form an expected waiting time before joining the queue (waiting time includes both the time spent queuing and the time being served), which serves as a reference point. Customers prefer to receive service at this expected time, but owing to external environmental factors and the service rate, the actual waiting time often deviates from the expectation: service may be delayed or early. Early service benefits customers and increases the probability that they will join the queue again, while delayed service harms the perceived service quality. Even when the loss from delayed service equals the gain from early service, customers weigh the loss more heavily; that is, they are averse to service waiting loss. In a queuing system, the waiting time is closely related to the price and the service rate: the lower the price (and hence the higher the arrival rate) and the slower the service rate, the longer the waiting time, and vice versa. How to set the price and service rate under aversion to service waiting loss, and how they relate to the model parameters, is therefore the main research subject of this paper.
    This study adopts a utility function capturing the loss and gain from service waiting to describe customers' queue-joining behavior. On this basis, an M/M/1 queuing model and a double M/M/1 queuing model are built. The analyses of the models show that: (1)In both the M/M/1 and the double M/M/1 queuing models, the optimal price increases with demand potential and decreases with the degree of aversion to service waiting loss, while the optimal service rate increases with this aversion. (2)In the double M/M/1 queuing model, the optimal price increases with channel preference. The higher-priced of the two queues has its optimal price decrease with the price-induced customer transfer rate, while the other queue's optimal price increases with it. Profits are maximized when customers strongly prefer one queue over the other and minimized when customers are indifferent between the two queues. Both the price-induced customer transfer rate and aversion to service waiting loss are detrimental to profit. The application of the model is illustrated by two cases of a medical institution providing medical services (one with an offline single-channel mode and the other with an online-offline dual-channel mode).
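    A back-of-the-envelope version of the single-queue case can be written in a few lines. The sketch below, with assumed parameter values and a loss term that applies only to delays beyond the announced reference waiting time, grid-searches the provider's price and service rate; it illustrates the structure, not the paper's exact utility function or closed-form results.

```python
import numpy as np

R, c_w, beta = 10.0, 1.0, 0.5   # reward, waiting cost, loss-aversion weight
k, Lam, W_ref = 2.0, 5.0, 1.0   # service-rate cost, demand potential, reference time

def provider_profit(p, mu):
    # Find the largest stable arrival rate at which joining still gives
    # non-negative utility; W = 1/(mu - lam) is the M/M/1 sojourn time.
    for lam in np.linspace(Lam, 0.01, 200):
        if lam >= mu:
            continue
        W = 1.0 / (mu - lam)
        loss = max(W - W_ref, 0.0)            # extra pain only when delayed
        if R - p - c_w * W - beta * loss >= 0:
            return p * lam - k * mu
    return -k * mu                            # nobody joins

best = max(((provider_profit(p, mu), p, mu)
            for p in np.linspace(0.1, R, 60)
            for mu in np.linspace(0.5, 8.0, 60)),
           key=lambda t: t[0])
print("profit=%.2f, price=%.2f, service rate=%.2f" % best)
```

    Raising beta in this sketch pushes the optimum toward a higher service rate, mirroring conclusion (1) above.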
    This study assumes that the two M/M/1 queues in the double M/M/1 queuing model are homogeneous: the utility, unit waiting cost, aversion to service waiting loss, and unit cost of service rate are the same in both queues. In practice these parameters are often heterogeneous, so future research can build on this study by allowing them to differ between the two queues.
    When constructing the utility function, this study only considers the utility of participating in the queue and ignores the utility of not participating, which is a limitation of this study. Future work can therefore consider all utilities, including that of not joining the queue, when studying price and service rate decisions.
    This paper focuses on the price and service rate decisions of a single service provider. In reality, such decisions also arise under competition between two or more service providers, so future research can study price and service rate decisions in a multi-provider game with aversion to service waiting loss.
    Research on Manufacturer’s Promotion Decision Considering Spillover Effects under Parallel Imports
    ZHANG Chong, SONG Jun, WANG Haiyan, MA Yuliang
    2025, 34(3):  98-104.  DOI: 10.12005/orms.2025.0082
    Parallel import is a form of importation in which non-authorized distributors acquire products through legitimate channels and sell them directly in the domestic market. There is often a significant price difference between asymmetric markets for the same product, and parallel imported goods can meet the demand of consumers in high-priced markets for lower-priced products. However, parallel imported goods may suffer from devaluation due to issues such as the lack of after-sales services and certification. Active parallel imports severely harm the interests of manufacturers' authorized distribution channels. In reality, the manufacturer can counter parallel import behavior through promotion efforts. Promotion activities such as advertising and sales services are believed to enhance brand image and awareness, improve consumer perception and satisfaction, and increase consumer recognition of and trust in products sold through authorized channels. However, in asymmetric markets with parallel import activities, the manufacturer's promotion investments often have spillover effects: the manufacturer's promotion actions not only increase the sales of authorized channels but also increase the number of consumers purchasing parallel imported goods.
    This study combines parallel imports and promotion efforts to investigate the impact of promotion efforts with spillover effects on parallel import behavior and supply chain performance. The following questions are addressed: (1)Can the manufacturer's promotion efforts suppress parallel import behavior? (2)Considering the spillover effects of promotion efforts, how should the manufacturer adjust its decisions? (3)Under what conditions can the manufacturer achieve optimal promotion efforts and profits when parallel imports are active? Consider a manufacturer selling the same product in two markets simultaneously. Market 1 is a low-end market whose consumers have a lower willingness to pay, while Market 2 is a high-end market whose consumers have a higher willingness to pay. Due to factors such as geographical location and information asymmetry, consumers in both markets are unable to purchase goods directly from the other market; however, parallel importers can sell products from Market 1 to Market 2. To counter parallel import behavior, the manufacturer can implement promotion efforts in Market 2. Since the manufacturer's promotion efforts have spillover effects, they not only improve its own sales but also benefit parallel importers. Based on this, this study considers five scenarios.
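    The follower-then-leader logic of such scenarios can be sketched symbolically. Below is a deliberately stylized linear-demand version, my own simplification rather than the paper's utility-based demand system: the manufacturer (leader) sets the authorized price pa and promotion effort e in Market 2, the parallel importer (follower) sets the gray-market price pg, and the spillover rate s lets promotion raise gray-market demand too.

```python
import sympy as sp

pa, pg, e, s, w, eta = sp.symbols('pa pg e s w eta')

qa = 1 - pa + sp.Rational(1, 2) * pg + e        # authorized-channel demand
qg = 1 - pg + sp.Rational(1, 2) * pa + s * e    # parallel-import demand

# Follower: the importer buys at the low-end price w and resells at pg.
pi_g = (pg - w) * qg
pg_star = sp.solve(sp.diff(pi_g, pg), pg)[0]

# Leader: the manufacturer anticipates pg_star; eta/2 * e^2 is the
# quadratic promotion cost.
pi_m = (pa * qa - eta / 2 * e**2).subs(pg, pg_star)
foc = [sp.diff(pi_m, pa), sp.diff(pi_m, e)]
sol = sp.solve(foc, [pa, e], dict=True)[0]
print(sp.simplify(sol[pa]), sp.simplify(sol[e]))
```

    Substituting s = 0 recovers the no-spillover benchmark, which is the comparison behind conclusions (2) and (3) below.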
    This paper combines parallel imports and promotion efforts to examine a supply chain model in which a single manufacturer sells a product in both high-end and low-end markets. The following conclusions are drawn: (1)Parallel import behavior erodes the manufacturer's profits but enhances its pricing power in the low-end market. (2)When parallel imports are active, promotion efforts without spillover can improve the manufacturer's profits, and sufficient promotion investment can counteract parallel import behavior. (3)When promotion efforts have spillover effects, increasing promotion investment within the threshold of the spillover rate can enhance the manufacturer's profits. Different from existing research, this study considers two asymmetric markets and characterizes market demand in the presence of parallel imported goods based on consumer utility. It proposes using the manufacturer's promotion efforts to address parallel import issues and incorporates spillover effects of promotion in a cross-market structure, thereby enriching the study of promotion efforts. The model constructed and the managerial insights derived in this study have practical implications for relevant companies. However, there are certain limitations. The paper considers a supply chain model in which a manufacturer sells products in two markets; in practice, the manufacturer may collaborate with retailers to sell products in overseas markets, so future research can consider a two-tier supply chain model involving manufacturers and retailers. Additionally, this study assumes that products in different markets have the same quality level, whereas in reality the quality of products sold in different regions may vary.
    Dynamic Flexible Flowshop Scheduling with Mixed Storage Constraints
    XUAN Hua, GENG Zhuxin, LI Bing
    2025, 34(3):  105-112.  DOI: 10.12005/orms.2025.0083
    The flexible flowshop scheduling problem (FFSP) is a common combinatorial optimization problem in production scheduling, widely encountered in industries such as chemicals, steel, and automobile manufacturing. Classical FFSP often assumes infinite storage capacity in the buffers between two adjacent production stages; however, owing to limitations of storage equipment and physical space, buffer capacity is limited. In addition, some production technologies require continuous processing without interruption at certain stages. Moreover, because of heterogeneous parallel machine configurations and machine wear, the unrelated parallel machine setting is closer to actual production. Therefore, with the objective of minimizing the total weighted completion time, this paper investigates the dynamic flexible flowshop scheduling problem with mixed storage constraints (DFFSP-MSC), including zero wait and limited buffers, under the unrelated parallel machine environment.
    Each buffer is viewed as a stage whose only function is buffering, so the DFFSP-MSC can be converted into an equivalent dynamic FFSP with mixed storage constraints including zero wait and blocking. Considering the dynamic arrival times of jobs and the transportation times between adjacent stages, an integer programming model is established for the transformed problem, and an improved discrete artificial bee colony (IDABC) algorithm is presented that integrates a genetic algorithm (GA), neighborhood search (NS), and variable neighborhood descent (VND). The algorithm encodes food sources as two-dimensional matrices and decodes them with a machine idle rule and a job right-shift strategy. The initial population is generated by the NEH heuristic and an opposition-based learning strategy. In the employed bee stage, a self-adaptive parameter adjustment strategy improves the genetic operators, with two crossover operators (single-point row crossover and single-point column crossover) and two mutation operators (single-point row mutation and reverse-order inversion mutation) used to update the food sources. In the onlooker bee stage, three neighborhood structures (first-tail row crossover, single-point row insertion, and fragment row insertion) are designed, and probability selection-based NS is applied to enhance the search capability. In the scout bee stage, the VND algorithm searches near the best solution and replaces the worst solution.
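    For orientation, the skeleton shared by ABC-type algorithms is compact. The sketch below shows only the employed/onlooker/scout loop on a toy permutation problem; the paper's IDABC adds NEH seeding, opposition-based learning, the GA crossover/mutation operators, probability-selected neighborhoods, and VND on top of this frame.

```python
import random

random.seed(0)
p_times = [random.randint(1, 9) for _ in range(12)]    # toy job durations

def cost(perm):
    # Toy surrogate objective: total completion time on a single machine.
    t = total = 0
    for j in perm:
        t += p_times[j]
        total += t
    return total

def neighbor(perm):
    a, b = random.sample(range(len(perm)), 2)          # simple swap move
    p = list(perm)
    p[a], p[b] = p[b], p[a]
    return p

SN, LIMIT, ITERS = 10, 15, 200
foods = [random.sample(range(12), 12) for _ in range(SN)]
trials = [0] * SN
for _ in range(ITERS):
    for i in range(SN):                                # employed bees
        cand = neighbor(foods[i])
        if cost(cand) < cost(foods[i]):
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1
    fits = [1.0 / (1 + cost(f)) for f in foods]        # onlooker bees
    for _ in range(SN):
        i = random.choices(range(SN), weights=fits)[0]
        cand = neighbor(foods[i])
        if cost(cand) < cost(foods[i]):
            foods[i], trials[i] = cand, 0
    for i in range(SN):                                # scout bees
        if trials[i] > LIMIT:
            foods[i] = random.sample(range(12), 12)
            trials[i] = 0
print("best objective:", min(cost(f) for f in foods))
```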
    To verify the effectiveness of the IDABC algorithm, simulation experiments compare it with the artificial bee colony (ABC) algorithm, classical GA, an improved GA combined with the NEH heuristic (NEH-IGA), and a hybrid heuristic based on GA and tabu search (HH-GA&TS). The test results on instances of different scales are as follows. For small and medium-scale problems, within an average CPU time of 196.52s, the average objective value obtained by the IDABC algorithm improves on the ABC algorithm, GA, NEH-IGA, and HH-GA&TS by 26.88%, 22.08%, 14.85%, and 5.87%, respectively. For large-scale problems, within an average CPU time of 351.97s, the average objective value improves on the four compared algorithms by 21.29%, 16.57%, 11.68%, and 8.03%, respectively. Convergence experiments show that although the IDABC algorithm converges slightly more slowly than HH-GA&TS in early iterations, it outperforms the other four algorithms as the iteration count grows and converges faster, especially as the problem scale increases; it also obtains better near-optimal solutions within the same number of iterations. Overall, the proposed IDABC algorithm has better solution performance.
    In this paper, an IDABC algorithm is developed to solve the dynamic FFSP with two intermediate storage constraints, mixing zero wait and finite buffers between stages. Future research will focus on dynamic FFSP with other mixed intermediate storage constraints and on multi-objective DFFSP-MSC.
    Covariance and τ-value of Cooperative Games
    YANG Qinle, BAI Xueting
    2025, 34(3):  113-118.  DOI: 10.12005/orms.2025.0084
    TIJS (1981) proposed the famous τ-value on quasi-balanced games, based on players' marginal contributions to the grand coalition. Significantly, this value is an effective compromise between the maximum and minimum potential payoffs of players in TU games. In real life, especially in economic activities, players may be unable to cooperate with others because of conflicts, or their cooperative relationships may be limited by coalition structures. Taking such situations into account, CASAS-MÉNDEZ (2003) extended the τ-value to cooperative games with coalition structures; LI Dengfeng and HU Xunfeng (2017) further extended it to cooperative games with level structures; and WU Meirong et al. (2014) and YANG Dianqing and LI Dengfeng (2016) studied τ-values on bicooperative quasi-balanced games and fuzzy cooperative games, respectively.
    In the study of cooperative game solutions, it is necessary not only to propose a solution concept and, where possible, give it a mathematical expression, but more importantly to characterize its fairness and rationality. Although scholars have defined the τ-value for different game models and provided axiomatizations, the axiomatic properties used in those results are all variations of the criteria of TIJS (1987), and the ideas behind the subsequent results closely follow the original. A new axiomatic approach to this value therefore has important theoretical significance. The axiomatization of classical cooperative game solutions such as the Shapley value has produced rich results, and although most axiomatization studies propose new axiomatic properties, they typically must be combined with efficiency to characterize the Shapley value uniquely. BÉAL et al. (2015) defined the covariance and invariance of solutions and used a direct-sum decomposition of the game space to axiomatize the Shapley value. The interesting aspects of their conclusion are that it abandons the classical efficiency property and provides a new algebraic basis for the space of TU games.
    In this paper, inspired by Béal's ideas, we first prove that the uniform games constitute an algebraic basis of the space of quasi-balanced games, and we give a direct-sum decomposition of this linear space. Secondly, based on individual covariance (BÉAL, 2015), a new axiomatic property of cooperative game solutions, synergistic covariance, is introduced, and we obtain a new axiomatic characterization of the τ-value by using the direct-sum decomposition theory of Euclidean space together with the classical axiom of maximum concession to proportionality (TIJS, 1987). Finally, another axiomatic characterization of the τ-value is given by using inessential games and individual covariance instead of synergistic covariance.
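    For readers unfamiliar with the value being axiomatized, the τ-value itself is easy to compute on a small game. The sketch below follows Tijs's definition directly (utopia payoffs, minimal rights, and the compromise coefficient fixed by efficiency) on an illustrative three-player quasi-balanced game.

```python
from itertools import combinations

N = (1, 2, 3)
v = {(): 0, (1,): 0, (2,): 0, (3,): 0,
     (1, 2): 4, (1, 3): 3, (2, 3): 2, (1, 2, 3): 6}

def coalitions(players):
    for r in range(len(players) + 1):
        yield from combinations(players, r)

# Utopia payoffs: each player's marginal contribution to the grand coalition.
M = {i: v[N] - v[tuple(j for j in N if j != i)] for i in N}
# Minimal rights: the best remainder a player can guarantee in some coalition
# after conceding the utopia payoff to every other member.
m = {i: max(v[S] - sum(M[j] for j in S if j != i)
            for S in coalitions(N) if i in S) for i in N}
# Efficiency pins down t in tau = m + t (M - m); the game must be
# quasi-balanced (sum m <= v(N) <= sum M) and non-degenerate.
t = (v[N] - sum(m.values())) / (sum(M.values()) - sum(m.values()))
tau = {i: m[i] + t * (M[i] - m[i]) for i in N}
print(tau)   # {1: 2.875, 2: 1.875, 3: 1.25}; payoffs sum to v(N) = 6
```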
    Critical Node Identification in Complex Hypernetwork Based on Importance Measurement Matrix
    LI Faxu, WEI Liang, XU Hui, HU Feng, GONG Yunchao
    2025, 34(3):  119-125.  DOI: 10.12005/orms.2025.0085
    Critical nodes are the small number of nodes in a complex network that have a significant impact on the network's properties, functions, and behavior. Identifying them is crucial for optimizing network structure and enabling efficient information propagation. As real-world networks have developed, the number of edges between nodes has increased dramatically, and because of the diversity of edge types and the complexity of their structures, ordinary complex networks can no longer describe real-world network features comprehensively and effectively. A hyperedge in a hypergraph can contain multiple nodes, which makes hypernetworks better able to describe complex multi-dimensional, multi-criteria systems. Identifying critical nodes in hypernetworks based on hypergraph structure has therefore become an important research direction, with high application value in information dissemination, infectious disease spread, product promotion, and so on. Traditional methods consider only the influence of nodes on their neighbors and pay little attention to the role of nodes in information transmission across the entire network or to the contribution relationships between a node and its neighbors.
    A hypernetwork is composed of nodes and hyperedges, and a node's importance depends not only on its influence on the local network but also on its position and the importance of its neighbors. Therefore, comprehensively considering the importance of a node itself and of its neighboring nodes, we propose a method for identifying critical nodes in a hypernetwork based on the importance contribution matrix, by defining the hyperdegree, the efficiency, and the node importance matrix of nodes in the hypernetwork. The importance contribution matrix reflects the share of importance that each node contributes to its neighbors; this contribution is determined by the node's efficiency and hyperdegree, and the larger these two indicators, the higher the contribution to neighboring nodes. The hyperdegree and the importance contribution values reflect the local significance of nodes within the hypernetwork, whereas node efficiency captures their global significance. By focusing on node efficiency and the contributions of neighboring nodes, the approach not only reveals differences in importance among nodes but also significantly improves the accuracy of identifying "bridge" nodes within hypernetworks.
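    One plausible reading of this pipeline can be sketched on a toy hypergraph: hyperdegree supplies the local signal, node efficiency on the 2-section (clique-expansion) graph supplies the global one, and neighbors' contributions are weighted by the product of the two. The exact matrix definitions are the paper's; the weighting below is an illustrative assumption.

```python
import networkx as nx

hyperedges = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {2, 3, 6}]
nodes = sorted(set().union(*hyperedges))

# Hyperdegree: the number of hyperedges containing each node.
hyperdeg = {u: sum(u in e for e in hyperedges) for u in nodes}

# 2-section graph: nodes adjacent iff they share a hyperedge.
G = nx.Graph()
for e in hyperedges:
    G.add_edges_from((u, v) for u in e for v in e if u < v)

def efficiency(u):
    # Average inverse shortest-path distance to all other nodes.
    d = nx.single_source_shortest_path_length(G, u)
    return sum(1.0 / l for w, l in d.items() if w != u) / (len(nodes) - 1)

eff = {u: efficiency(u) for u in nodes}

score = {}
for u in nodes:
    nbrs = list(G.neighbors(u))
    contrib = sum(hyperdeg[w] * eff[w] for w in nbrs)   # neighbors' share
    score[u] = hyperdeg[u] * eff[u] + contrib / max(len(nbrs), 1)

print(sorted(score, key=score.get, reverse=True))       # ranking of nodes
```

    On this toy example node 3 ranks first, consistent with its role as the "bridge" between the two halves of the hypergraph.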
    The advantage of this method is that it considers not only the properties of the nodes themselves but also fuses the importance contributions of neighboring nodes, using the hyperdegree and efficiency of nodes to characterize those contributions. Because it combines local and global importance, the method improves the accuracy of the node importance measure, consistent with practical needs. Furthermore, the method is validated on protein complex hypernetworks, and the experimental results show that it can effectively identify critical nodes in hypernetworks, providing a valuable reference for future research on critical nodes in hypernetworks.
    Research on Consumption Subsidy and Quality Reward and Punishment Mechanism of Home Care Service Supply Chain
    MA Yueru, CHENG Yawen, LI Hai
    2025, 34(3):  126-133.  DOI: 10.12005/orms.2025.0086
    With the acceleration of population aging, the home-based care model has become an effective way to alleviate pension pressure. In recent years, the number of home-based care service institutions of all kinds in China has continued to increase, yet many cannot operate sustainably because of high operating costs and low service levels. Owing to the elderly's limited ability to pay and the uneven quality of services, their high demand for and willingness to use home-based care services are inconsistent with their actual purchase behavior. To address these problems, the Chinese government has issued several policy documents, but the policy effects differ with the implementation targets and methods. It is therefore necessary to explore how consumption subsidies and quality rewards and punishments influence the decisions of home-based care service institutions and the elderly.
    Firstly, for the home-based care supply chain, this paper constructs game models under three mechanisms: no government intervention, a consumption subsidy, and a quality reward and punishment scheme. Secondly, through backward induction, we derive the optimal solutions for the market price of home-based care services, the actual price paid by the elderly, market demand, the profit of home-based care service institutions, consumer surplus, and social welfare. The optimal solutions of the three decision models are then compared, and the advantages and disadvantages of the different mechanisms are discussed. Finally, to illustrate the validity of the model and verify the propositions, parameter values are assigned with reference to the existing literature, and MATLAB is used to carry out numerical simulations under different conditions, analyzing how the quality improvement cost coefficient, government subsidies, and rewards and punishments affect the equilibrium results.
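    The backward-induction step can be illustrated with a stripped-down, single-institution toy model, my own simplification rather than the paper's three-mechanism setup: the institution sets price p and quality q, the elderly pay p - t under a consumption subsidy t, demand is linear, and quality costs k/2·q².

```python
import sympy as sp

p, q, t, a, b, g, k = sp.symbols('p q t a b g k')

demand = a + g * q - b * (p - t)          # subsidy t lowers the paid price
profit = p * demand - k / 2 * q**2        # institution's profit

p_star = sp.solve(sp.diff(profit, p), p)[0]                  # price stage
q_star = sp.solve(sp.diff(profit.subs(p, p_star), q), q)[0]  # quality stage
print(sp.simplify(p_star))   # (a + g*q + b*t) / (2*b): subsidy raises p*
print(sp.simplify(q_star))   # g*(a + b*t) / (2*b*k - g**2)
```

    Setting t = 0 gives the no-intervention benchmark; comparing the two already shows the flavor of result (1) below: the subsidy expands demand but also raises the posted market price.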
    The results show that: (1)The consumption subsidy mechanism outperforms the quality reward and punishment mechanism and the no-intervention benchmark in improving the purchasing power of economic consumers, but it also indirectly forces quality consumers to pay higher prices for services. (2)The quality reward and punishment mechanism provides a stronger incentive than the consumption subsidy or no-intervention mechanism for improving service quality and the purchasing demand of quality consumers, and its policy benefits are more widespread. (3)When the quality standard for home-based care services or the quality cost improvement coefficient is too high, it is difficult for institutions to recoup their costs through rewards by improving quality, so their profits under the quality reward and punishment mechanism fall below those under no intervention. (4)When the quality improvement cost coefficient is too high, and especially when the proportion of economic consumers is high, the government should implement the consumption subsidy mechanism to protect the home-based care needs of the elderly with financial difficulties and the interests of service institutions. (5)When the quality improvement cost coefficient is low, the government should implement the quality reward and punishment mechanism to promote the high-quality development of home-based care services and improve the satisfaction of the elderly. Therefore, on the premise of formulating scientific and reasonable quality standards for home-based care services, the government should flexibly adjust subsidies, rewards, and punishments.
    There are still some shortcomings in this paper, such as not considering government budget constraints, asymmetric demand information, and other factors. Follow-up research can explore these issues to provide a more comprehensive theoretical basis and decision support for the government to improve the regulation of home-based care services and promote the sustainable and healthy development of the home-based care supply chain.
    A Class of Subgraph Construction Problems with Fractional Objective Function
    DING Honglin
    2025, 34(3):  134-140.  DOI: 10.12005/orms.2025.0087
    Recently, we have noticed a practical application of subgraph construction problems. An enterprise plans to build a network with a specified subgraph structure (such as a spanning tree, path, or arborescence) using a special material of fixed length. When constructing each edge from this material, and considering the relatively high splicing cost, we agree to use several complete pieces plus at most one leftover segment, so as to minimize the number of splices. The total income of the completed network is proportional to its total length and covers three kinds of expenses: a fixed percentage is paid to shareholders as dividends, and the rest repays construction expenses and pays employee salaries. To raise employee salaries and ensure the smooth operation of the entire network, the construction cost per unit length of the network must be reduced as much as possible. Doubly edge-weighted subgraph optimization problems typically seek a set of edges forming a particular subgraph so as to minimize a ratio of two weight functions. Motivated by such problems and the above application, this paper studies the following extended problem, denoted SC-FRA. Given a weighted graph G=(V,E) equipped with a length function w: E→Z+ and a construction cost function c: E→Z+, suppose we have stock pieces of fixed length L and unit price c0. We seek an edge subset E′ that forms a specified subgraph structure S, subject to the additional constraint that every edge in E′ must be assembled from these stock pieces; the objective is to minimize (∑_{e∈E′} c(e) + c0·k(E′)) / ∑_{e∈E′} w(e), where k(E′) is the number of stock pieces of length L required to assemble all edges in E′. In the SC-FRA problem, the construction of each edge e in E′ involves two cases. (1)Case w(e)<L: construct e from a single segment of length w(e) cut from one stock piece. (2)Case w(e)≥L: use i(e)=⌈w(e)/L⌉-1 complete stock pieces of length L, cut a residual segment of length w′(e)=w(e)-i(e)·L from another stock piece, and combine these components to construct e. In either case, at most one piece of length less than L is permitted in any edge; that is, no edge may contain two or more pieces shorter than L.
    MEGIDDO (1979) established that a polynomial-time exact algorithm for minimizing a linear objective under given constraints implies a polynomial-time exact algorithm for minimizing a rational objective under the same constraints. CORREA et al. (2010) developed a generalized approximation preservation framework: any linear objective optimization problem admitting an approximation algorithm induces a rational objective counterpart with the same constraints and approximation factor. To describe our algorithmic strategy for the SC-FRA problem conveniently, we write S-LIN and S-RAT for the optimization problems that have the same subgraph structure S as SC-FRA and minimize a linear or rational objective function, respectively. We solve the SC-FRA problem with the following strategy. First, we construct an instance of the S-RAT problem from an instance of the SC-FRA problem. Then, we invoke Megiddo's or Correa's algorithm to solve the S-RAT instance, obtaining an edge subset E′. Finally, the FFD algorithm is executed to construct all the edges in E′ from material of length L.
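    The final step is plain first-fit decreasing applied to the residual segments. A small sketch of how k(E′) is computed for a chosen E′, with illustrative edge lengths:

```python
import math

L = 10
edge_lengths = [23, 7, 15, 4, 9, 31]        # w(e) for e in E', illustrative

# Each edge uses ceil(w/L) - 1 whole stock pieces plus one residual
# segment of length w - (ceil(w/L) - 1) * L (the whole edge, if w < L).
full = sum(max(math.ceil(w / L) - 1, 0) for w in edge_lengths)
residuals = sorted((w - max(math.ceil(w / L) - 1, 0) * L
                    for w in edge_lengths), reverse=True)

bins = []                                   # remaining capacity per open piece
for r in residuals:                         # first-fit decreasing
    for i, cap in enumerate(bins):
        if r <= cap:
            bins[i] = cap - r
            break
    else:
        bins.append(L - r)                  # open a new stock piece

print("k(E') =", full + len(bins))          # 6 whole pieces + 3 from packing
```

    Packing residual segments from different edges into one stock piece still leaves each edge with exactly one piece shorter than L, so the at-most-one-short-piece rule is respected automatically.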
    Our methodology yields three fundamental results. (1)When the S-LIN problem admits a polynomial-time exact algorithm, we derive an asymptotically optimal polynomial-time solution for SC-FRA, achieving a cost-to-length ratio that is guaranteed to exceed the optimum by at most c0/L. (2)For NP-hard S-LIN instances with existing α-approximation algorithms, our framework constructs an asymptotic α-approximation scheme for SC-FRA. (3)When SC-FRA’s subgraph structure corresponds to a path, we prove that both SC-FRA and the corresponding S-RAT problem (i.e. the minimum-ratio path problem) exhibit inherent inapproximability within factor f(n), where f(n) is a polynomial-time computable function.
    Defined Contribution Pension Planning Considering Interest Rate Risk under Mean-variance Criterion
    CHANG Hao, SUN Xiuxiu, LI Jiaao
    2025, 34(3):  141-148.  DOI: 10.12005/orms.2025.0088
    Pension fund management is an important public policy issue. In most countries, pension funds account for a large share of government spending, a share that will continue to grow as the "baby boom" generation retires. In addition, private pension funds are an important part of modern financial markets and significantly affect savings, investment, and economic growth. Defined-benefit (DB) and defined-contribution (DC) plans are the two most common types of pension plans. In a DB plan, the post-retirement payment level is determined in advance and is unrelated to investment income in the accumulation stage or to later inflation, with the risk borne by the fund manager; this complicates the investment management, preservation, and appreciation of pension funds and may even lead to a deficit. In a DC plan, the contribution rate is fixed in advance, the payment level is determined by the contribution rate in the accumulation stage and the corresponding investment income, and the risk is borne by the members of the pension fund. The DC plan has thus become a pension management model widely adopted by social security systems around the world, and studying the investment strategies of DC pension funds in different financial market environments is of great theoretical and practical significance.
    The interest rate is one of the most important and direct factors affecting post-retirement pension payments. This paper assumes that the instantaneous interest rate follows the CIR model. To preserve and grow the pension fund, the manager invests in the financial market, seeking an optimal strategy that maximizes investment returns while minimizing risk. We accordingly build a continuous-time mean-variance model for DC pension investment. For the mean-variance model under a stochastic interest rate, traditional linear-quadratic control theory and backward stochastic differential equation theory are difficult to apply directly. In this paper, the mean-variance model is transformed into an unconstrained optimization problem by the Lagrange multiplier method, which satisfies the conditions for stochastic optimal control. Analytical solutions for the precommitment strategies and the efficient frontier are obtained via the Hamilton-Jacobi-Bellman (HJB) equation and the Lagrange duality theorem. Numerical examples illustrate the influence of model parameters on the precommitment strategies and efficient frontier and explain their economic implications.
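    In symbols (notation assumed here, since the abstract does not fix it), the ingredients are the CIR short rate, the wealth dynamics with continuous contribution rate c, and the mean-variance problem with its Lagrangian relaxation:

```latex
\begin{align*}
  \mathrm{d}r_t &= a(b - r_t)\,\mathrm{d}t + \sigma_r\sqrt{r_t}\,\mathrm{d}W_t^r,\\
  \mathrm{d}X_t &= \bigl[r_t X_t + c + \pi_t^{\top}(\mu_t - r_t\mathbf{1})\bigr]\mathrm{d}t
                   + \pi_t^{\top}\Sigma_t\,\mathrm{d}W_t,\\
  \min_{\pi}\ \operatorname{Var}[X_T]\ \text{s.t.}\ \mathbb{E}[X_T] = d
  \quad&\Longleftrightarrow\quad
  \max_{\lambda}\,\min_{\pi}\ \mathbb{E}\bigl[(X_T - \lambda)^2\bigr] - (\lambda - d)^2.
\end{align*}
```

    The inner problem is a standard stochastic control problem solvable by the HJB equation; Lagrange duality then recovers the efficient frontier by optimizing over λ.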
    The theoretical and empirical results are summarized as follows. (i)The efficient strategy depends on the instantaneous interest rate level, whereas the risk level depends not on the instantaneous rate but on the initial rate. (ii)The capital market line in the mean-standard deviation plane remains a straight line; when the investment risk is zero, the expected return equals the value obtained by investing the discounted initial wealth plus accumulated contributions entirely in the zero-coupon bond at the initial moment. (iii)When the mean-reversion speed of the interest rate increases, fund managers should reduce holdings of the risk-free asset and invest more in stocks and zero-coupon bonds, while the risk faced by investors increases. (iv)When the volatility of interest rates increases, fund managers should reduce holdings of stocks and zero-coupon bonds and invest more in the risk-free asset, and the investment risk decreases accordingly.
    Application Research
    Analysis of Small-world, Scale-free and Evolutionary Characteristics of Funds’ Co-holding Network under Market Fluctuation
    GUO Xiaoping, WANG Jianwei
    2025, 34(3):  149-154.  DOI: 10.12005/orms.2025.0089
    As important institutional investors in the capital market, public funds' behavior and influence have always been a topic of keen interest for industry and academia. In recent years, under the policy of "vigorously developing institutional investors" by the SFC, and with the rapid expansion of the number and scale of public funds pursuing diversified investment, the phenomenon of inter-fund co-holding, that is, multiple funds holding one or more stocks in common, has become increasingly prevalent, and the linkage of holdings among funds has become more networked, with profound implications for investor behavior, asset pricing, and risk management.
    However, influenced by traditional economic thinking, existing studies on fund co-holding have largely been conducted from the perspective of the impact and economic consequences of institutional shareholding, with most treating the holdings and trades of different institutions as independent and ignoring their interconnectedness. The few studies on investor networks have mostly examined simple topology (e.g., degree or betweenness centrality) and the characteristics of micro-level individual networks from the perspective of directly related networks such as social and business relationships, and their impact on investment decisions and performance. Less attention has been paid, at the overall network level, to the topological characteristics (especially small-world and scale-free properties) and evolutionary patterns of funds' co-holding networks formed indirectly through common holdings, or to the factors that shape these overall network characteristics and the underlying group behavior of funds. Yet these are crucial for understanding information transmission and risk contagion in investor networks, and for efficient stock market risk management and investor governance. Accordingly, this paper draws on complex network methodology to construct two funds' co-shareholding networks based on fund size groupings (Network 1 and Network 2) and to compare their overall topological characteristics and evolutionary features.
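    The construction itself is a bipartite projection. A toy sketch, with made-up holdings, of how such a co-holding network is built and how its small-world ingredients are measured:

```python
import networkx as nx
from networkx.algorithms import bipartite

holdings = {                                  # fund -> stocks held (toy data)
    'F1': {'S1', 'S2'}, 'F2': {'S2', 'S3'}, 'F3': {'S1', 'S3', 'S4'},
    'F4': {'S4', 'S5'}, 'F5': {'S2', 'S5'},
}
B = nx.Graph()
for f, stocks in holdings.items():
    B.add_node(f, bipartite=0)
    B.add_edges_from((f, s) for s in stocks)

# Project onto funds: an edge links two funds that co-hold >= 1 stock,
# weighted by the number of co-held stocks.
G = bipartite.weighted_projected_graph(B, list(holdings))
print("clustering coefficient:", nx.average_clustering(G))
print("average path length:", nx.average_shortest_path_length(G))
```

    Small-world tests compare these two statistics against a degree-matched random graph, and scale-free tests fit a power law to the degree distribution; the paper applies such diagnostics to actual fund holdings data.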
    The results show that: (1)Although both networks are large sparse networks, co-holding behavior among funds is still widespread. (2)Both networks exhibit small-world and scale-free characteristics, but the degree to which they do so differs significantly. (3)The evolution of these small-world and scale-free characteristics also differs significantly between the two networks.
    This study provides a reference for understanding the influence of mutual shareholding among funds, and for regulators in managing stock market risk and governing institutional investors.
    Research on Emission Reduction Strategy of Multinational Supply Chains Considering Exchange Rate Risk, Risk Attitude and Consumer Preference
    TANG Yanqun, LIU Jian, LIU Lu, WU Xin, ZHANG Ying
    2025, 34(3):  155-162.  DOI: 10.12005/orms.2025.0090
    Low-carbon consumption has increasingly become a consensus among consumers: 64% of consumers are willing to spend more to support low-carbon environmental protection, and demand for low-carbon products is growing rapidly. Compared with purely domestic distribution, providing low-carbon products on a global scale faces a more complex environment. On the one hand, although carbon emission reduction can meet consumers' demand for low-carbon products, the added abatement cost dampens enterprises' enthusiasm for reducing emissions. On the other hand, exchange rate fluctuations create uncertainty in enterprises' exchange gains, which in turn affects the production and export of low-carbon products. For example, Heng Tai Lighting, which focuses on green environmental protection, benefited from the exchange gains generated by the depreciation of the RMB in 2022 and continuously carried out technological upgrades and production process optimization. The textile industry maintains its market share by producing environmentally friendly products that meet consumers' needs, thereby hedging the exchange rate risks brought about by the depreciation of the US dollar. In addition, most decision-makers are not completely rational when facing the business risks arising from uncertain exchange rate fluctuations. Since 2020, under the two-way swings of the RMB's significant appreciation and subsequent decline, import and export multinational enterprises have had mixed fortunes, intensifying multinational traders' risk aversion and thereby affecting their business decisions. Therefore, taking the uncertainty of exchange rate fluctuations as given, this study incorporates decision-makers' risk aversion and examines how different exchange rate settlement methods affect enterprises that produce and trade low-carbon products. It then introduces a revenue-sharing contract to explore whether cooperative emission reduction among enterprises located in different countries can improve abatement efficiency and mitigate the impact of exchange rate fluctuations on transnational supply chains, providing a reference for the carbon reduction requirements and exchange rate risks that transnational supply chains currently face.
    Based on the tensions among the cost of carbon emission reduction, the risk of exchange rate fluctuations, and the stimulation of market demand, this paper constructs two-level transnational supply chain models for settlement in the currency of the retailer's country and in that of the manufacturer's country, respectively. It explores how the settlement method affects the emission reduction decisions of transnational supply chains and introduces a revenue-sharing contract to achieve a Pareto improvement among supply chain members. Using backward induction, the subgame equilibrium results are obtained for the two settlement situations, both before and after introducing the revenue-sharing contract, and the impacts of parameters such as exchange rate risk and the risk aversion coefficient on the equilibria are analyzed. Finally, numerical analysis further explores how the various parameters affect the emission reduction per unit product and the expected utilities of both parties under the two settlement methods.
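    The way exchange-rate variance and risk aversion enter can be sketched with a one-shot mean-variance toy model, my own simplification rather than the paper's full Stackelberg game: the retailer imports at wholesale price w payable in the manufacturer's currency, so its cost w·X is exposed to an exchange rate X with E[X]=1 and Var[X]=s2.

```python
import sympy as sp

p, w, e, a, b, g, s2, rho = sp.symbols('p w e a b g s2 rho')

q = a - b * p + g * e                   # demand rising in emission reduction e
mean_profit = (p - w) * q               # E[(p - w*X) * q] with E[X] = 1
var_profit = (w * q)**2 * s2            # Var[(p - w*X) * q] = (w*q)^2 * Var[X]
U = mean_profit - rho / 2 * var_profit  # mean-variance (risk-averse) utility

p_star = sp.solve(sp.diff(U, p), p)[0]
print(sp.simplify(p_star))
```

    In this toy version the optimal price rises with rho·s2 (in the positive-margin region): the risk-averse retailer shrinks demand to cut its exchange-rate exposure w·q, which is the transmission channel behind conclusion (1) below.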
    The research shows that: (1)The impact of exchange rate fluctuations is transmitted among supply chain members. Regardless of who bears the exchange rate risk, exchange rate fluctuations and risk aversion reduce the emission reduction per unit product as well as the expected utilities of the manufacturer and the retailer; however, the revenue-sharing contract can offset part of the negative impact of exchange rate risk on the supply chain. (2)Having the other party bear the exchange rate risk is not necessarily more advantageous than bearing it oneself: when multinational enterprises are more risk averse, settling in their own currency is actually more beneficial. (3)An increase in consumers' low-carbon preference encourages the upstream manufacturer to raise the emission reduction per unit product, so the profits of both upstream and downstream enterprises trend upward. (4)When settlement is in the manufacturer's currency, the revenue-sharing contract achieves a Pareto improvement for supply chain members, whereas when settlement is in the retailer's currency, it does so only if consumers' low-carbon preference is sufficiently strong.
    A New Prediction Method for Shanghai Copper Futures Price Integrating Multi-source Data Information
    SUN Jingyun, BING Guiying
    2025, 34(3):  163-169.  DOI: 10.12005/orms.2025.0091
    Among commodities, copper is the most important industrial raw material and is widely used throughout China's national economy. In futures trading, prices are the focus of market participants. However, the price of Shanghai copper futures is affected by many factors, making its movements highly uncertain; this not only exposes speculative traders in the copper futures market to great risks but also has important effects on the production and operation of enterprises and the stability of the market. Therefore, this paper takes Shanghai copper futures as the research object and analyzes the main factors affecting its price, so as to reveal the effectiveness of the copper futures market and the laws of its price changes. This is of great significance for using futures successfully as an investment tool and ensuring stable economic development.
    Although many scholars have studied the prediction of non-ferrous metal futures prices and related financial time series in recent years, research on Shanghai copper prices that combines multiple factors is still in its infancy. In terms of indicator selection, forecasting research on Shanghai copper prices has mostly used historical price data and macroeconomic data, and has rarely considered investor attention; measuring the impact of investor attention on the price of Shanghai copper futures is thus a challenging task. In terms of feature extraction, dimensionality reduction is inevitable given the large number of exogenous variables, but directly reducing a large, heterogeneous body of exogenous information may extract it insufficiently. Clustering the information before dimensionality reduction, and then extracting features category by category, should make the extraction more effective. This paper therefore first integrates macro variables and Baidu search keyword information, applies the idea of clustering first and reducing dimensionality afterwards to extract effective auxiliary prediction information from the multi-source data, then uses a variety of machine learning methods to build Shanghai copper futures price prediction models, and evaluates them with improvement rate indicators and statistical tests.
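    The "cluster first, then reduce" idea is straightforward to prototype. The sketch below runs the pipeline on synthetic data standing in for the macro and Baidu search series; the cluster counts, component numbers, and split are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24))          # 24 exogenous series, 200 periods
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.3, size=200)   # toy target

X = StandardScaler().fit_transform(X)
# Step 1: group similar indicators by clustering the transposed series.
labels = AgglomerativeClustering(n_clusters=4).fit_predict(X.T)

# Step 2: kernel PCA within each cluster of similar indicators.
parts = []
for cl in range(4):
    Xc = X[:, labels == cl]
    kpca = KernelPCA(n_components=min(2, Xc.shape[1]), kernel='rbf')
    parts.append(kpca.fit_transform(Xc))
features = np.hstack(parts)

# Step 3: fit a learner on a chronological split (no shuffling).
n_train = 160
model = SVR().fit(features[:n_train], y[:n_train])
pred = model.predict(features[n_train:])
print("test RMSE:", float(np.sqrt(np.mean((pred - y[n_train:]) ** 2))))
```

    Swapping the SVR at step 3 for RF, ELM, or KELM gives the comparison set evaluated in the paper.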
    This paper integrates Baidu search information and macroeconomic data to propose a new model for Shanghai copper futures price forecasting. Firstly, systematic (hierarchical) clustering is used to classify and integrate the multi-source dataset; the KPCA method is then applied for dimensionality reduction and feature extraction; finally, machine learning methods produce the monthly price forecasts for Shanghai copper futures. Our research has four main conclusions: (1)Using mixed datasets as exogenous auxiliary prediction information yields better prediction accuracy than using any single dataset. (2)The approach of first clustering and then applying kernel principal component extraction to multi-source, multi-dimensional data is effective: similar information is integrated through clustering, and KPCA then extracts and reduces the dimensionality of highly similar datasets, more fully extracting the exogenous auxiliary information relevant to the Shanghai copper futures price and improving prediction accuracy. (3)Comparing the four machine learning methods SVR, RF, ELM, and KELM, the prediction model based on KELM is significantly better than the other benchmark models in both level and directional prediction accuracy. (4)The prediction method of first clustering and then extracting features shows good performance across different research objects, indicating that the framework is fairly robust in multi-source information processing.
    This paper considers the impact of investor attention on the price fluctuations of Shanghai copper futures and integrates multi-source information into a combined prediction with good results. There is still room for improvement: more exogenous information can be incorporated as auxiliary predictors. For example, unstructured text such as financial news headlines and stock forum comments related to Shanghai copper futures could be used to construct investor sentiment indexes, further improving forecast accuracy.
    An Option Pricing System Based on Predictive Volatility
    DONG Jiyang, HE Wanli
    2025, 34(3):  170-175.  DOI: 10.12005/orms.2025.0092
    Accurately describing the volatility of asset prices is necessary for pricing options. A large number of studies have shown that the traditional assumption of constant volatility is increasingly inapplicable to modern financial markets. Current research on stochastic models mainly covers stochastic volatility models and local volatility models, both of which still have shortcomings: stochastic volatility models are usually difficult to solve, and complete hedging is even harder to achieve; local volatility models usually rely on subjective experience for their functional form, and the volatility functions fitted at different times vary greatly and can even appear random. Moreover, the parameters of such models can only be estimated from option market prices, which means the resulting option prices are essentially approximations of market prices rather than theoretical prices; strictly speaking, model parameters should be obtained from the statistical properties of stock prices.
    To alleviate these shortcomings, the option pricing system designed in this paper seeks a compromise between constant volatility and fully stochastic models: the model assumes only that volatility is predictable. For determining the influencing factors in volatility modeling, existing methods generally rely on subjective manual extraction. Manually selecting influencing factors is laborious, and the accuracy of the selection depends largely on experience and understanding, lacking a theoretical basis. Determining the influencing factors automatically, scientifically, and effectively is the key to non-parametric volatility modeling, and deep learning algorithms offer an efficient way to do so. Backpropagation is the best-known deep learning algorithm, but in practice it performs poorly when learning structures with many hidden layers. The Deep Boltzmann Machine (DBM) is an efficient unsupervised deep learning model that empirically alleviates the optimization problems associated with deep architectures. This paper first uses a DBM to extract the model's influencing factors, with the k-step contrastive divergence algorithm as the DBM's learning algorithm, to establish a DBM-ANN volatility model. On this basis, stochastic differential equation and martingale methods yield a closed-form solution for European options under risk neutrality. The system does not need to assume a distributional form for volatility in order to determine model parameters from asset prices, overcoming the drawbacks of manually designed volatility models whose parameters can only be estimated from option market prices. In the computational experiments, the numerical results show that the system characterizes the movement of volatility with satisfactory accuracy. By comparison, the B-S formula often prices 50ETF stock options lower than the model in this paper, and the gap widens as the remaining time to expiration increases and as S/K→1.
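    Once a volatility forecast is in hand, the pricing step is the familiar closed form with the forecast plugged in. The sketch below prices a European call this way; it is a simplification for illustration (the paper derives its own closed-form solution under predictable volatility), and all numbers are made up.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def call_price(S, K, r, sigma, T):
    # Black-Scholes European call with volatility sigma over maturity T.
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

sigma_hat = 0.22   # stand-in for the DBM-ANN volatility forecast
print(call_price(S=2.80, K=2.75, r=0.025, sigma=sigma_hat, T=0.25))
```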
    Across the outstanding research achievements on option pricing, almost all rely on a large body of mathematical methods, with complex mathematical finance models occupying a central position. The characteristics and performance of deep learning have led to its wide application across artificial intelligence, and it also provides a solid, reliable tool for the scientific and systematic quantitative study of financial derivatives, an inevitable trend in future mathematical finance research. Although some scholars have previously applied deep learning to option problems with many excellent results, that work concerns the prediction of option prices rather than the search for theoretical prices. This paper introduces deep learning into the study of option pricing, enriching the research methods of option pricing and expanding the application boundaries of deep learning algorithms.
    Exploring Blockchain-based Measures for Mitigating Greenwashing in Green Quality Assessment and Green Lending
    YAN Xin, LI Jian, WANG Huan, LI Yongwu
    2025, 34(3):  176-182.  DOI: 10.12005/orms.2025.0093
    China is committed to achieving "peak carbon" by 2030 and "carbon neutrality" by 2060, and to support these goals the Chinese government has introduced policies promoting the improvement of green enterprises and the transformation of green supply chains. This has triggered a growing trend of "green and low-carbon" initiatives, heightening consumers' environmental awareness and motivating numerous companies to embark on green supply chain transformations. However, these transformations face a persistent challenge known as "greenwashing", which is largely fueled by information asymmetry. Blockchain technology has emerged as a promising solution for addressing information asymmetry in green supply chains: as a decentralized and tamper-resistant technology, it enables secure and transparent information sharing, enhancing collaboration and efficiency among supply chain stakeholders.
    This research focuses on a green supply chain comprising a manufacturer, a retailer, and a bank, with the retailer, as the core enterprise, acting as the leader in the Stackelberg game, while the financially constrained manufacturer acts as the follower. To bolster green quality inspection and foster consumer confidence, this study introduces a blockchain-based information sharing mechanism. Two supply chain models are developed for comparison: a conventional green supply chain without blockchain technology, and one featuring blockchain-enabled green information sharing. The findings shed light on the importance of blockchain technology in green supply chains: in its absence, retailers must adopt distinct strategies for price-sensitive and green-sensitive consumers, seeking to meet diverse demands while maximizing profits, and the retailer's quality inspection capability significantly influences the manufacturer's potential greenwashing behavior and thereby the overall greenness of the supply chain.
    The study reveals that an increase in loan interest rates raises enterprises’ operational costs and product prices, potentially deterring manufacturers from investing in green initiatives and, through consumer distrust, decreasing market demand for green products. To promote green manufacturing and foster the development of entire green supply chains, banks can offer subsidies or preferential loan rates to green borrowers, ultimately contributing to a positive brand image. Consumer distrust, however, depresses product prices, greenness, and the proportion of authentic green products in the market; adopting blockchain technology can restore consumer trust, encourage green manufacturing, and substantially strengthen green quality control. Though cost remains a hurdle for retailers, when it falls below a certain threshold, adopting blockchain technology yields substantial economic benefits. An interesting finding of this study is the impact of the retailer’s inspection capability on the effectiveness of blockchain implementation: when the retailer’s inspection capability falls below a certain threshold, adopting blockchain benefits both the retailer and the bank. Blockchain not only curtails manufacturers’ greenwashing behavior but also affects their profits. This research presents the first comparative analysis of these two dimensions of blockchain technology’s impact in green supply chains, highlighting the greater significance of green quality inspection over enhancing consumer confidence.
    The study’s conclusions provide valuable insights for managers and policymakers alike. Retailers should actively promote and guide green consumption, offer convenient green shopping experiences, and enhance supply chain quality management and efficiency. Embracing green transformation, increasing information transparency, and leveraging blockchain technology will help build a strong green brand image. Manufacturers, in turn, should prioritize environmentally friendly factors in product design, improve green manufacturing standards, and adhere to green regulations. Commercial banks have an important role to play by increasing green credit allocation, providing preferential interest rates, and strengthening pre-lending investigation and post-lending management. Governments are encouraged to advocate green values, guide consumers towards green consumption, and establish supportive policies and regulations for sustainable green development. By fostering a conducive market environment and combating corporate greenwashing, governments can champion green industries and environmental projects. Additionally, governments should incentivize enterprises to adopt blockchain technology by reducing blockchain platform costs, ultimately driving genuine green development in the industry.
    Prediction of Futures Price by Integrating Multivariate Network Information: An Empirical Study of Agricultural Corn Futures
    ZHANG Dabin, ZENG Zhimei, LING Liwen, YU Zehui
    2025, 34(3):  183-189.  DOI: 10.12005/orms.2025.0094
    Abstract ( )   PDF (1024KB) ( )  
    References | Related Articles | Metrics
    In China’s financial market, the futures market performs the important functions of risk aversion and price discovery. Futures prices embody market expectations for agricultural products and can reflect their actual supply and demand. Corn futures are a representative bulk agricultural product and one of the most active contracts in China’s futures market, consistently at the forefront of commodity futures in trading volume and investor participation. With the rising marketization of corn, market uncertainty has increased sharply, corn futures prices fluctuate continuously, the risk of investing in grain storage has intensified, and the demand for industrial hedging has grown. Effective and accurate price prediction can guide farmers in planting and trading, give spot enterprises a reference basis for production and trade, and transmit efficient information to regulators to enhance the predictability and pertinence of national policy regulation.
    Benefiting from the rapid development of the Internet, large amounts of unstructured market-related data carry an unprecedented amount of information, which affects both the investment decisions of market participants and the behavior of the market itself. On the one hand, news is passively received by the public; reports on major policies, emergencies and weather in producing regions in particular can strongly sway market sentiment and thereby drive futures price fluctuations. On the other hand, search engines are a channel through which people access information actively: each trader sends a request to the network in the form of unstructured keywords, and the engine returns content that helps the trader judge market conditions and develop trading strategies, while also recording the search frequency as structured data. In the current futures market, these two forms are also the channels through which traders and supervisors most easily obtain diverse information. Among them, news topics and sentiment tendencies are the main expressions of news information’s influence on market fluctuations, while search data mainly reflect trends in public attention. The key research issues are the determination of the number of topics, the quantification of the sentiment index and the selection of search keywords.
    In addition, many studies in the field of price forecasting show that, owing to their structural characteristics and strong learning capacity, neural network models can efficiently fit and model observed data, greatly improving the accuracy of financial time series forecasting.
    Based on the above ideas, in addition to the closing price of corn futures, the data set constructed in this paper introduces two further sources, relevant news and keyword search volumes, and proposes a corn futures price prediction method that integrates multiple kinds of network information. Since major events are mostly published as news text, and effective prior knowledge about their influence on corn futures prices is lacking, the method first uses the Kullback-Leibler (KL) divergence to determine the key parameter (the number of topics) of the Latent Dirichlet Allocation (LDA) topic model, and then analyzes the news texts and extracts a topic index. Secondly, the SnowNLP method is used to judge the sentiment tendency of news, and the text sentiment index is further refined from the perspective of the cumulative effect over time. In addition, a map of keywords related to corn futures is constructed with the help of Baidu; the Baidu search volume corresponding to each keyword is collected and, after filtering by the Spearman correlation test, synthesized into a network attention index. To integrate the most effective predictor variables and reduce information redundancy, the Recursive Feature Elimination (RFE) method is used to construct the combination of predictor variables. Since the extracted indexes are all time series, the Long Short-Term Memory (LSTM) model is used for the final multi-step prediction of the corn futures price. The empirical results show that the proposed method outperforms the benchmark models SVR, RF and BPNN, with better prediction performance over the medium and long term. Moreover, compared with the univariate LSTM model, the proposed method reduces MAE, RMSE and MAPE by 45%, 41% and 43% respectively, and shows significant performance advantages in Diebold-Mariano (DM) tests. This indicates that the proposed method can effectively exploit the value of multivariate network information for corn futures price prediction and improve the model’s accuracy.
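    A minimal sketch of the two feature-screening steps named above, under synthetic data: a Spearman correlation test filters candidate indexes, then Recursive Feature Elimination with a linear base learner prunes the survivors. The data, the 0.05 p-value cutoff and the target of three final variables are assumptions for illustration.

```python
# Spearman filter + RFE feature selection on synthetic predictor candidates.
import numpy as np
from scipy.stats import spearmanr
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
T, n_feat = 300, 12
X = rng.standard_normal((T, n_feat))   # candidate indexes (topics, sentiment, search volume)
y = X[:, :3] @ np.array([0.8, -0.5, 0.3]) + 0.1 * rng.standard_normal(T)  # price proxy

# Step 1: keep features whose Spearman correlation with the target is significant.
keep = [j for j in range(n_feat) if spearmanr(X[:, j], y).pvalue < 0.05]

# Step 2: RFE with a linear base learner prunes the survivors to 3 variables.
rfe = RFE(LinearRegression(), n_features_to_select=3).fit(X[:, keep], y)
selected = [keep[j] for j, ok in enumerate(rfe.support_) if ok]
print("selected predictor columns:", selected)
```

    The selected columns would then feed the LSTM for multi-step prediction; the LSTM itself is omitted here for brevity.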
    Since this paper focuses on the enabling effect of multiple network information on futures price prediction, it only discusses the effectiveness of the predictor variables and does not further analyze the importance of the different variables. Meanwhile, only the corn futures price is used as a financial variable in the research. In subsequent work, the relevant content will continue to be improved and expanded.
    Impact of Information Heterogeneity on Early Warning of Financial Crisis of Listed Companies
    LI Jie, WANG Wenhua, YANG Fang
    2025, 34(3):  190-197.  DOI: 10.12005/orms.2025.0095
    Abstract ( )   PDF (1140KB) ( )  
    References | Related Articles | Metrics
    The financial crisis of a listed company causes investors to suffer huge economic losses and can even have a serious negative impact on society as a whole. Scientific and accurate financial crisis prediction for listed companies can effectively reduce investment risks and related losses. Existing early warning index systems for enterprise financial crises focus on the information directly carried by quantitative financial statements, annual reports, news reports and other data. However, because these channels differ in who releases the information, for what purpose, with what content and in what form, their portrayals of a listed company’s operations may exhibit different tendencies. This paper defines such differences in tendency as the “information heterogeneity” between data from different channels; to improve the accuracy of enterprise financial crisis early warning, it is necessary to study its impact on the early warning of financial crises of listed companies. Doing so expands the economic value of information heterogeneity, improves the accuracy of financial crisis prediction, further supports investors in making investment decisions, and helps maintain financial system and social stability. It provides new ideas for corporate financial crisis early warning from a new perspective, which is of certain significance for enriching research in this field in China.
    Taking the characteristics of, and differences among, financial crisis early warning data of listed companies from different sources as its starting point, this paper sorts out the root causes of information heterogeneity and analyzes in depth the relationship between information heterogeneity and the state of enterprise crisis. On this basis, the information heterogeneity index is defined, a method for measuring information heterogeneity is proposed, and the impact of information heterogeneity on the early warning of financial crises of listed companies is verified. Firstly, the paper analyzes the content and characteristics of the different types of data, focusing on the causes and means of financial data falsification, artificial manipulation of news texts and excessive embellishment of annual report texts, thereby explaining the root causes of information heterogeneity. Secondly, a method to measure information heterogeneity is proposed: the performance of the quantitative financial statements is scored by the power coefficient method, the sentiment values of annual reports and news texts are calculated by combining the dictionary method with deep learning, and a formula for the information heterogeneity index is proposed to measure the information differences among the three data sources. Finally, given the tree-structured relationship between information heterogeneity and enterprise financial crisis status, XGBoost is selected to establish a financial crisis prediction model for listed companies, and the prediction performance of the total-sample model before and after adding the information heterogeneity index is compared, verifying the contribution of information heterogeneity to the accuracy of early warning. Furthermore, the total sample is divided into four specific sub-samples based on financial statement performance and the level of information heterogeneity; the relationship between information heterogeneity and financial crisis status is analyzed for each sub-sample; the models’ prediction performance before and after adding the information heterogeneity index is compared on the sub-samples; whether the index can effectively distinguish crisis companies from healthy ones is verified; and the relationship between the information heterogeneity index and financial crises is further analyzed from the perspective of feature importance.
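    The before/after comparison can be sketched as follows: train an XGBoost classifier on baseline channel features, then again with an added information heterogeneity column, and compare test AUC. The synthetic data and the dispersion-based construction of the index here are illustrative assumptions, not the paper's measurement method.

```python
# XGBoost with and without a (hypothetical) information heterogeneity feature.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(2)
n = 2000
fin_score = rng.normal(size=n)      # financial-statement score
news_sent = rng.normal(size=n)      # news sentiment value
report_sent = rng.normal(size=n)    # annual-report sentiment value
# Hypothetical heterogeneity index: dispersion of the three channel signals.
hetero = np.std(np.c_[fin_score, news_sent, report_sent], axis=1)
y = (hetero + 0.3 * rng.normal(size=n) > 1.0).astype(int)   # crisis label proxy

X_base = np.c_[fin_score, news_sent, report_sent]
X_full = np.c_[X_base, hetero]
for name, X in (("baseline", X_base), ("+heterogeneity", X_full)):
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss").fit(Xtr, ytr)
    print(name, "AUC =", round(roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]), 3))
```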
    The empirical results show that using the information heterogeneity index for financial crisis prediction significantly improves the Accuracy, Recall, AUC, F1 and other indicators of the machine learning models built on both the total sample and the four types of sub-samples, and that information heterogeneity ranks first in feature importance in the XGBoost model. It can be seen that information heterogeneity is a major feature for judging whether an enterprise has encountered a financial crisis, and plays an important role in distinguishing crisis companies from healthy ones.
    This paper shifts the research perspective from the information contained in each channel’s data to the information heterogeneity reflected across data from different sources, and finds that this heterogeneity has good predictive value for corporate financial crises. In future work, it could also be applied to other research fields, such as credit risk assessment and financial fraud identification. At the same time, more data, such as investor comments on social media, could be introduced so that analyzing information heterogeneity further optimizes the financial crisis early warning model for listed companies and improves its prediction performance.
    Research on Project Mixed Buffer Dynamic Monitoring Method Based on Parallel-link Trend Prediction
    WAN Dan
    2025, 34(3):  198-204.  DOI: 10.12005/orms.2025.0096
    Abstract ( )   PDF (1685KB) ( )  
    References | Related Articles | Metrics
    Today’s project management techniques have become increasingly mature, and various project management software packages provide ever more reliable analysis tools. Even so, project delays and cost overruns remain common. Against this background, the critical chain project management method, which takes into account project resource constraints and the behavioral characteristics of decision makers, was proposed, with the buffer as its core concept. By extracting the safety time of each activity and concentrating it at the end of the chain to form a buffer, the method can effectively counter the student syndrome and Parkinson’s law among project members, and reduce the waste of safety time caused by factors such as multitasking and resource constraints. Effective monitoring of the buffers in the project network can absorb various uncertainties in the project execution process, shorten the project duration while ensuring the probability of completion, and realize overall risk sharing across the project. However, traditional buffer monitoring methods mainly focus on the project buffer consumption of the critical chain and ignore the subsequent consumption information of the parallel feeding buffers, so the management efficiency and on-time completion rate of the project schedule remain difficult to guarantee.
    From the perspective of buffer management, this paper proposes a project mixed-buffer dynamic monitoring method based on parallel-link trend prediction, so as to control the project execution process comprehensively and systematically. Firstly, drawing on the idea of the gray neural network, a monitoring and forecasting model of the project buffer and feeding buffers is established, and the follow-up trend is quantitatively predicted from the buffer data of the parallel critical chain and non-critical chains. Secondly, considering the relationship between the actual remaining buffer and the predicted buffer consumption, as well as the different characteristics of the project buffer and feeding buffers, different monitoring trigger granularities are set for the two types of buffers, forming a mixed-buffer dynamic monitoring system based on parallel-link buffer prediction. Finally, the validity of the proposed method is verified by Monte Carlo simulation experiments. The results show that the duration and cost under this method are lower than under traditional buffer monitoring methods, and that the method also reduces the fluctuation of the project duration during execution, optimizing both duration and cost while ensuring the completion probability.
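    A minimal Monte Carlo sketch in this spirit: simulate lognormal activity durations along a chain, express each run's overrun as a fraction of the project buffer, and count how often hypothetical warning and action triggers would fire. Buffer sizing and trigger levels are assumptions, not the paper's settings.

```python
# Monte Carlo simulation of project-buffer consumption with two trigger levels.
import numpy as np

rng = np.random.default_rng(3)
n_runs, n_activities = 10_000, 8
planned = np.full(n_activities, 5.0)        # aggressive (50%) duration estimates
buffer = 0.5 * planned.sum()                # project buffer: half the chain length

durations = rng.lognormal(mean=np.log(5.0), sigma=0.35, size=(n_runs, n_activities))
overrun = durations.sum(axis=1) - planned.sum()
consumption = np.clip(overrun, 0, None) / buffer   # fraction of buffer consumed

print("on-time (within buffer):", np.mean(consumption <= 1.0))
print("warning trigger (>2/3 consumed):", np.mean(consumption > 2 / 3))
print("action trigger (fully consumed):", np.mean(consumption > 1.0))
```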
    According to the simulation results, the buffer monitoring method proposed in this paper takes into account the actual execution of both the critical chain and the non-critical chains of the project. It pays attention not only to project buffer information but also to feeding buffer information, and adopts different monitoring trigger granularities according to the characteristics of the different buffers, allowing decision makers to monitor the project chains in a differentiated and reasonable manner while grasping the project’s development trend. It can overcome the false warnings caused by one-sided project buffer monitoring, and makes full use of information about the future buffer development trends of the critical chain and the parallel non-critical chains. The proposed method can therefore provide managers with more comprehensive decision-making information, helping project managers to control the different buffers actively and differentially during project execution, and to reduce unnecessary schedule delays and resource consumption.
    The next research focus is to introduce a resource early warning mechanism, to consider the different impacts of bottleneck and non-bottleneck resources on buffer consumption, and to combine resource buffer early warning with the mixed-buffer monitoring of parallel links to build an integrated resource-buffer schedule control system. In addition, the information flows of different project activities will also affect resource early warning and buffer monitoring, so methods to describe the information flow of the project network, and the consideration of information factors in resource early warning and buffer monitoring, are also directions for future research.
    Estimation and Application of High Dimensional Time Varying Portfolio Model Based on DCC-MIDAS-NL Model
    LIU Liping, WANG Jiangfang, LYU Zheng
    2025, 34(3):  205-210.  DOI: 10.12005/orms.2025.0097
    Abstract ( )   PDF (1014KB) ( )  
    References | Related Articles | Metrics
    In the era of big data, with the improvement of data availability, the dimension of financial data has exploded. Because financial institutions and individuals hold ever more assets, constructing high-dimensional financial portfolios has become common. How to estimate and predict the risk of such large portfolios is therefore a major hot-spot and difficult issue in statistics and finance. Current research on the risk of such portfolios focuses primarily on two questions: first, how to effectively estimate and predict the covariance matrix between high-dimensional assets, which plays a pivotal role in the portfolio; second, how to introduce a penalty function to construct a portfolio model with constraints. Building on previous research, this paper puts forward a new model for estimating and predicting the covariance matrix between high-dimensional assets, so as to improve estimation and prediction efficiency, and further explores the influence of introducing penalty functions on portfolio efficiency. We can thus analyze and describe the risk of high-dimensional portfolios more accurately. In conclusion, the research in this paper possesses significant intellectual merit.
    The DCC-MIDAS model is an upgrade of the DCC model. Although it improves estimation efficiency to some extent, like the DCC model it suffers from the curse of dimensionality, making its estimation and prediction of the high-dimensional covariance matrix less effective. Therefore, in this paper we apply the QuEST function and the nonlinear shrinkage method to the estimation of the DCC-MIDAS model, proposing the DCC-MIDAS-NL model to overcome this deficiency. It has two main advantages. Firstly, the DCC-MIDAS-NL model can effectively resolve the curse of dimensionality, making the estimation and prediction of high-dimensional time-varying portfolios more tractable. Secondly, the DCC-MIDAS-NL model does not require the assumption that the data follow a normal distribution, which is consistent with reality, since financial asset returns typically exhibit leptokurtosis and fat tails. The DCC-MIDAS-NL model therefore has a theoretical advantage. In addition, this paper introduces a variety of penalty functions into the minimum variance portfolio model to examine the application of the DCC-MIDAS-NL model in portfolios and the influence of the penalty functions on portfolio efficiency.
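    The portfolio step can be sketched as follows: estimate a shrunk covariance matrix and form the global minimum-variance weights w ∝ S⁻¹1, normalized to sum to one. Since the QuEST-based nonlinear shrinkage is not available in common libraries, sklearn's LedoitWolf linear shrinkage is used here as a named stand-in, and the returns are synthetic.

```python
# Shrinkage covariance + global minimum-variance weights (toy data).
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(4)
T, N = 120, 60                  # few observations relative to asset dimension
returns = rng.standard_normal((T, N)) * 0.01

S = LedoitWolf().fit(returns).covariance_   # stand-in for nonlinear shrinkage
ones = np.ones(N)
w = np.linalg.solve(S, ones)
w /= w @ ones                   # normalize so the weights sum to one

print("max |weight|:", np.abs(w).max().round(4))
print("in-sample portfolio variance:", (w @ S @ w).round(8))
```

    Adding an L1 penalty as in the MVP-LASSO variant would replace the closed-form solve with a numerical optimizer; the shrinkage step stays the same.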
    The following conclusions can be drawn. (1)When the asset dimension is high, the covariance matrix estimated and predicted by the DCC-MIDAS-NL model performs better in portfolios than those estimated and predicted by the DCC-MIDAS model or the commonly used NLS model. Regardless of the type of portfolio, the DCC-MIDAS-NL model corresponds to higher returns, lower risk and higher utility values, because it better estimates the covariance matrix of financial assets exhibiting leptokurtosis and fat tails, thereby effectively improving portfolio efficiency, while also resolving the curse of dimensionality without assuming normality. (2)Compared with the original minimum variance portfolio (MVP), the MVP-C, MVP-LASSO and MVP-W portfolios, which introduce penalty functions, perform well. This shows that when the asset dimension is high, introducing a penalty function can effectively mitigate the curse of dimensionality and improve portfolio efficiency. (3)For low-dimensional assets, the estimation performance of the DCC-MIDAS model is better than that of the DCC-MIDAS-NL model. This demonstrates that the DCC-MIDAS-NL model put forward in this paper is better suited to high-dimensional assets, and the higher the asset dimension, the better its estimation performance.
    Comparison between Single Disease Payment and Fee for Service
    LI Jing
    2025, 34(3):  211-217.  DOI: 10.12005/orms.2025.0098
    Abstract ( )   PDF (1339KB) ( )  
    References | Related Articles | Metrics
    The difficulty and high cost of obtaining medical services have long been the main problems facing patients in China. The “medicine feeds the doctor” system is one of the reasons for the high cost of medical treatment. To address the issue of expensive medical treatment, the Chinese government has introduced a series of policies, including single disease payment. Compared with traditional fee for service, single disease payment caps the total payment for a given type of disease: if treatment exceeds the cap, the hospital pays for the excess; if treatment costs less than the cap, the hospital keeps the difference as profit. The purpose of single disease payment is to make hospitals control medical costs. However, many problems have arisen in its implementation, such as hospitals refusing some critically ill patients in order to control costs, or patients being required to be discharged prematurely, which greatly harms patients’ interests. Some scholars therefore believe that fee for service yields better therapeutic outcomes for patients. On this basis, a comparative study of the two payment schemes is conducted; by exploring and comparing their respective advantages and disadvantages, suggestions for improving single disease payment are given. This is of great significance for improving medical quality and reducing medical expenses.
    In this work we compare single disease payment with fee for service in four aspects: the effort the hospital makes to treat patients, the medical service pricing, the cost of the treatment plan, and the hospital’s profits. We first build a two-stage Stackelberg game model to study single disease payment. In this model, the medical insurance department first decides the medical service pricing to maximize the patients’ utility; then, given this decision, the hospital decides its effort and treatment plan to maximize its profits. Backward induction is employed to solve the model. We then build a nonlinear optimization model to study fee for service, in which the hospital decides the medical service pricing and treatment effort to maximize its profits under certain constraints. The model is transformed into a convex programming problem, the KKT method is used to solve the transformed model, and the original model’s solution is recovered from it. We use appendicitis as the example in the numerical experiments, with the data and calculation methods of some parameters drawn from the literature. In the numerical experiments, the two payment schemes are compared in terms of optimal medical service pricing, effort and hospital profits. The paper then illustrates the impact of basis demand on each payment scheme and compares the two schemes under different values of basis demand. Last, the two payment schemes are compared when the hospital’s effort under single disease payment is set equal to its optimal effort under fee for service.
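    As a toy illustration of the fee-for-service optimization, the sketch below maximizes a hypothetical hospital profit over price and effort under a minimum-effort constraint using scipy's SLSQP solver, which enforces the KKT conditions numerically. The demand and cost forms and all numbers are stand-ins, not the paper's calibrated model.

```python
# Hypothetical fee-for-service profit maximization solved via SLSQP (KKT-based).
from scipy.optimize import minimize

def neg_profit(x):
    p, e = x
    demand = 10.0 - 1.2 * p + 2.0 * e     # demand falls in price, rises with effort
    cost = 3.0 * e ** 2                   # convex cost of treatment effort
    return -((p - 2.0) * demand - cost)   # negative profit (we minimize)

# Hypothetical minimum-effort floor e >= 0.2.
cons = ({"type": "ineq", "fun": lambda x: x[1] - 0.2},)
res = minimize(neg_profit, x0=[4.0, 0.5], method="SLSQP",
               bounds=[(2.0, 8.0), (0.0, 2.0)], constraints=cons)
p_star, e_star = res.x
print(f"fee-for-service optimum: p*={p_star:.2f}, e*={e_star:.2f}, profit={-res.fun:.2f}")
```

    The profit function here is concave in (p, e), so the first-order (KKT) conditions the solver enforces are also sufficient for a global optimum.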
    Solving the models reveals the outcomes of the two payment schemes. Under single disease payment, the hospital chooses the treatment plan with minimal cost; the hospital’s effort increases with the medical service pricing set by the medical insurance department; and the medical service pricing increases with the minimal treatment cost and the lowest required treatment quality, and decreases with patients’ basis demand. Under fee for service, by contrast, the hospital chooses the treatment plan with maximal cost, and its effort increases with the average maximal profit from treating a patient and with patients’ basis demand.
    Comparing the two payment schemes shows that all four aspects are lower under single disease payment than under fee for service. Moreover, raising the minimum requirement on the hospital’s effort under single disease payment not only improves the hospital’s effort and increases its profits, but also controls treatment costs, making single disease payment superior to fee for service. The results can provide a policy reference for the medical insurance department to improve the medical quality of single disease payment. Besides, single disease payment is the basis of diagnosis related groups (DRG) payment, which is regarded as the main payment scheme of the future but is still at the exploration stage, so this research can also provide a reference for it.
    Management Science
    Impact of Supply Chain Integration on Disruptive Technological Innovation of Manufacturing Enterprises: Moderating Effect of Information Technology Level and Mediating Effect of Supply Chain Agility
    FAN Jianhong, MA Yifan, CUI Wenhui
    2025, 34(3):  218-225.  DOI: 10.12005/orms.2025.0099
    Abstract ( )   PDF (924KB) ( )  
    References | Related Articles | Metrics
    In recent years, disruptive technological innovation has been frequently mentioned and has attracted widespread attention. Manufacturing enterprises bear on the stable development of the real economy and the steady advancement of building a manufacturing powerhouse, and they are the key subjects of disruptive technological innovation. Disruptive technological innovation points the way for enterprises to carry out independent innovation and is an important means for manufacturing enterprises to enhance their competitive advantages. It is therefore of great significance to study the disruptive technological innovation of manufacturing enterprises. The existing literature focuses on impact factors such as executive compensation incentives, alliance management capability, and users’ continuance intention. However, no literature has explored the relationship between supply chain integration and the disruptive technological innovation of manufacturing enterprises, and the important question of how supply chain integration affects such innovation urgently needs an academic answer. In addition, the existing literature also fails to clarify the roles of information technology level and supply chain agility in the relationship between supply chain integration and the disruptive technological innovation of manufacturing enterprises.
    Therefore, based on 364 valid samples, this paper constructs a research framework linking supply chain integration, the information technology level of manufacturing enterprises, supply chain agility and the disruptive technological innovation of manufacturing enterprises, and analyzes the impact of supply chain integration on disruptive technological innovation, together with the moderating effect of information technology level and the mediating effect of supply chain agility.
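    The two effects can be illustrated on synthetic survey-scale data: moderation as a significant coefficient on the product term (IT level × integration), and mediation as the product of the path coefficients integration→agility and agility→innovation. The variable constructions below are assumptions, not the paper's measurement scales.

```python
# Moderation (interaction term) and simple mediation check on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 364
integ = rng.normal(size=n)     # supply chain integration
it = rng.normal(size=n)        # information technology level
agility = 0.6 * integ + rng.normal(scale=0.8, size=n)
innov = 0.3 * integ + 0.4 * agility + 0.2 * it * integ + rng.normal(size=n)

# Moderation: a significant product-term coefficient means IT level
# strengthens (or weakens) the integration -> innovation relationship.
X_mod = sm.add_constant(np.c_[integ, it, it * integ])
print(sm.OLS(innov, X_mod).fit().summary2().tables[1].round(3))

# Mediation: path a (integration -> agility) times path b (agility -> innovation,
# controlling for integration) gives the indirect effect.
a = sm.OLS(agility, sm.add_constant(integ)).fit().params[1]
b = sm.OLS(innov, sm.add_constant(np.c_[integ, agility])).fit().params[2]
print("indirect effect a*b =", round(a * b, 3))
```

    In practice the indirect effect would be tested with a bootstrap confidence interval rather than read off as a point estimate.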
    The conclusions of this paper cover three aspects. Firstly, supplier integration, internal integration and customer integration all have significant positive impacts on the disruptive technological innovation of manufacturing enterprises. Secondly, information technology level positively moderates the impact of customer integration on disruptive technological innovation, but does not significantly moderate the impacts of supplier integration and internal integration. Finally, supply chain agility plays a partial mediating role in the effects of supplier integration, internal integration and customer integration on the disruptive technological innovation of manufacturing enterprises.
    This paper makes four theoretical contributions. Firstly, based on value chain theory, it elaborates the generation mechanism of the disruptive technological innovation of manufacturing enterprises from the perspective of supply chain integration, providing a new theoretical perspective for its study. Secondly, it verifies the positive impact of supply chain integration on disruptive technological innovation, responds to the disagreement in previous studies over the causal relationship between supply chain integration and technological innovation, and provides new evidence for the view that supply chain integration is an important driver of technological innovation. Thirdly, it confirms that information technology level moderates the strength of the impact of customer integration, extending the boundary conditions for the impact of supply chain integration on disruptive technological innovation. Finally, following the logic of “integration-agility-innovation”, it verifies the mediating effect of supply chain agility in the impact of supply chain integration on disruptive technological innovation, providing new ideas for research on how supply chain integration promotes the disruptive technological innovation of manufacturing enterprises.
    The management suggestions of this paper cover three aspects. Firstly, manufacturing enterprises should strengthen supply chain integration through multiple channels to promote disruptive technological innovation. Secondly, they should raise their information technology level to provide a favorable context for customer integration and thereby promote disruptive technological innovation. Finally, they should cultivate supply chain agility and let it play a bridging role between supply chain integration and disruptive technological innovation.
    Short-term and Long-term Effects of Government Subsidies on “Specialization and Novelty” Enterprises
    YANG Tingting
    2025, 34(3):  226-231.  DOI: 10.12005/orms.2025.0100
    Abstract ( )   PDF (962KB) ( )  
    References | Related Articles | Metrics
    In July 2011, the Ministry of Industry and Information Technology proposed the concept of “specialization and novelty” for the first time, which refers to those small and medium-sized industrial enterprises with the characteristics of “specialization, refinement, peculiarity and novelty”. In 2018, the Ministry of Industry and Information Technology carried out the first cultivation of “specialization and novelty” small giant enterprises. In September 2021, the Beijing Stock Exchange was established with the aim of solving the financing problem of “specialization and novelty” companies. Therefore, supporting the development of “specialization and novelty” enterprises has become a national strategy. Based on this policy, local governments have introduced various policies to subsidize “specialization and novelty” enterprises to provide fiscal, tax and financial support for small and medium-sized enterprises. Therefore, it is of great practical significance to study the influence of government subsidies on “specialization and novelty” enterprises, especially on their innovation ability. On the one hand, the research in this paper can enrich the relevant studies on the influence of government subsidies on enterprise innovation performance; on the other hand, it can provide targeted countermeasures and suggestions for government departments to optimize government subsidy policies and promote the development of “specialization and novelty” enterprises.
    Taking “specialization and novelty” small giant listed companies as the research object, this paper first uses the event study method on stock market return data to test whether government policy announcements supporting “specialization and novelty” enterprises bring them excess returns, that is, whether the short-term effect of government subsidies is positive. Government subsidies are of three types: research and development (R&D) subsidies, non-R&D subsidies and tax incentives. Secondly, taking the “specialization and novelty” enterprises listed on the Shanghai and Shenzhen A-share markets from 2008 to 2019 as the sample, and controlling for industry and year, the paper constructs a two-way fixed effects model to empirically study the effects of the different types of government subsidies on these enterprises’ innovation input, innovation output and innovation quality. Thirdly, considering differences in the nature of enterprises, in provincial intellectual property protection and in the development level of inclusive finance, the paper analyzes the moderating effects of these factors on the long-term effects of government subsidies for “specialization and novelty” enterprises.
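    A minimal sketch of the two-way fixed effects specification on synthetic panel data: regress an innovation outcome on an R&D subsidy variable while absorbing industry and year effects with dummies. Variable names and the data-generating process are hypothetical.

```python
# Two-way fixed effects regression on a synthetic firm panel.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 1200
df = pd.DataFrame({
    "industry": rng.integers(0, 10, n),
    "year": rng.integers(2008, 2020, n),
    "rd_subsidy": rng.exponential(1.0, n),
})
ind_eff = rng.normal(size=10)    # latent industry effects
yr_eff = rng.normal(size=12)     # latent year effects
df["rd_intensity"] = (0.15 * df["rd_subsidy"]
                      + ind_eff[df["industry"]]
                      + yr_eff[df["year"] - 2008]
                      + rng.normal(scale=0.5, size=n))

# C(industry) and C(year) absorb the two sets of fixed effects.
model = smf.ols("rd_intensity ~ rd_subsidy + C(industry) + C(year)", data=df).fit()
print("subsidy coefficient:", round(model.params["rd_subsidy"], 3))  # recovers ~0.15
```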
    It is found that government notices supporting “specialization and novelty” enterprises bring them positive excess returns in the stock market; that is, the short-term effect of government subsidies is positive. The effects of the different types of government subsidies on innovation performance differ markedly: R&D subsidies have a significant positive effect on the R&D intensity, patent output and patent quality of “specialization and novelty” enterprises, while tax incentives and non-R&D subsidies have no significant effect. For private enterprises, and for enterprises located in areas with strong intellectual property protection and a high level of inclusive finance development, the impact of R&D subsidies on the innovation performance of “specialization and novelty” enterprises is significantly positive.
    Study of R&D Coopetition among Enterprises and its Consequences Based on Multi-agent
    ZHU Shanshan, LIU Fengchao
    2025, 34(3):  232-239.  DOI: 10.12005/orms.2025.0101
    Abstract ( )   PDF (1482KB) ( )  
    References | Related Articles | Metrics
    Many enterprises adopt R&D coopetition to cope with increasingly fierce market competition. R&D coopetition differs from conventional R&D cooperation in that it involves cooperation in value creation and competition in value capture. This combination brings enterprises more opportunities and more threats, making the benefits of R&D coopetition more uncertain. How to benefit from R&D coopetition has thus become an important and complex issue in enterprise innovation strategy.
    Two problems must be solved in studying enterprises’ R&D coopetition. First, the cooperation in value creation and the competition in value capture must be considered simultaneously. Second, although R&D coopetition is a local behavior between the participating parties, the coopetition itself and its innovation results interact with the industry environment in which it takes place. A game model can capture the cooperation in value creation and the competition in value capture simultaneously, while a multi-agent simulation system can reproduce the complex scene of multiple coopetitive relationships among enterprises. Therefore, this paper focuses on R&D coopetition in a non-oligopolistic market and, through a conceptual model, a quantitative model and simulations, establishes a bottom-up multi-agent simulation system of asymmetric games. The system contains a number of agents (enterprises) heterogeneous in innovation capacity and cooperation preference; these heterogeneities in turn influence the formation of multi-phase coopetition network links and the direct results of cooperation. Moreover, by using a discrete choice model, the system realizes the interaction mechanism between micro-level agent interaction (local value creation) and macro-level market redistribution (global value appropriation). The benefits of R&D coopetition depend both on the current coopetition and on the innovation results of other enterprises in the industry, so the system appropriately reflects coopetition in a highly competitive market. A series of simulation experiments is then designed to study the profits of enterprises with different cooperation strategies under diverse market environmental forces, including industry integrity, technology replacement speed and betrayal tolerance level.
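    The micro-macro loop can be sketched as follows: agents form random pairwise links in which cooperators jointly raise quality at an R&D cost while a betrayer skims its partner's knowledge, after which a logit (discrete choice) rule redistributes market share globally. All parameters, the betrayal payoff and the update rule are illustrative assumptions, not the paper's calibrated system.

```python
# Toy multi-agent coopetition loop: local value creation + logit market shares.
import numpy as np

rng = np.random.default_rng(7)
n_agents, market, rd_cost, rounds = 20, 100.0, 1.0, 50
quality = rng.normal(1.0, 0.2, n_agents)
cooperator = rng.random(n_agents) < 0.7     # fixed strategies: cooperate vs. betray
profit = np.zeros(n_agents)

for _ in range(rounds):
    # Local value creation: random pairs; mutual cooperation lifts both
    # qualities, while a betrayer skims the partner's knowledge for free.
    pairs = rng.permutation(n_agents).reshape(-1, 2)
    for i, j in pairs:
        if cooperator[i] and cooperator[j]:
            gain = 0.05 * (quality[i] + quality[j]) / 2
            quality[i] += gain
            quality[j] += gain
            profit[[i, j]] -= rd_cost
        elif cooperator[i] != cooperator[j]:
            b, v = (j, i) if cooperator[i] else (i, j)  # b betrays v
            quality[b] += 0.08 * quality[v]
            profit[v] -= rd_cost
    # Global value capture: logit (discrete choice) market share redistribution.
    share = np.exp(quality) / np.exp(quality).sum()
    profit += share * market

print("mean profit, cooperators:", profit[cooperator].mean().round(1))
print("mean profit, betrayers:  ", profit[~cooperator].mean().round(1))
```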
    The experiments show significant differences in the relative and net profits obtained by enterprises with different strategies under different industry environments. Firstly, when industry integrity is good, technological upgrading is rapid, or social tolerance of betrayal is strong, enterprises can easily obtain higher returns from betrayal in coopetition. However, the agglomeration of enterprises with this kind of opportunism quickly drags down those enterprises’ earnings and creates a vicious circle in the industry. Secondly, honest R&D coopetition realizes the integration of innovation resources and the sharing of R&D costs: enterprises with strong innovation capacities can maintain their advantages through R&D coopetition, while those with weak capacities can rapidly raise their relative capabilities through it, and these benefits are more significant in industries with slow technological upgrading. Thirdly, from the industry perspective, a large amount of R&D coopetition may not increase enterprises’ net incomes because of high R&D costs; in saturated markets, enterprises gain more market share advantages (i.e., higher relative returns) through R&D coopetition, and consumers benefit more.
    These results can guide enterprises in choosing profitable cooperation strategies in R&D coopetition. First, in a highly competitive market, even if enterprises can temporarily benefit from opportunistic behavior, they should fully consider its potential risks and long-term damage. Second, enterprises should become more aware of the importance of R&D coopetition, especially in industries where product upgrading is slow. Moreover, public institutions should actively promote R&D coopetition so as to improve social welfare. In the future, the model’s assumptions could be relaxed to fit more realistic situations, and further comparative studies across multiple scenarios could be carried out.