
Table of Contents

    25 February 2025, Volume 34 Issue 2
    Theory Analysis and Methodology Study
    Optimization Method of Integrated Flexible Production and Delivery Scheduling
    QIU Feier, GENG Na
    2025, 34(2):  1-8.  DOI: 10.12005/orms.2025.0035
    With increasingly fierce market competition, manufacturing enterprises face great pressure to survive. In order to shorten the order delivery cycle and improve customer satisfaction, manufacturing enterprises have begun to deliver finished products directly to customers or to front warehouses after production. This new mode places higher requirements on the coordination between production scheduling and logistics delivery. However, in practice, managers often make a production scheduling plan first and then formulate a delivery scheduling plan based on it. This independent, sequential optimization cannot effectively coordinate production scheduling and logistics distribution: either customer response time requirements are met at a higher cost, or costs are reduced at the expense of violating customer response time constraints, which defeats the original intention of the new mode.
    Motivated by the collaborative production and delivery service of a household appliance manufacturer, an integrated production and delivery scheduling problem is studied. The household appliance production workshop studied in this paper is a typical flexible job shop, but there is no existing research on the integrated optimization of flexible job shop scheduling and delivery routing, and the existing literature hardly considers the multi-trip vehicle routing problem in delivery. Considering flexible job shop scheduling, multi-vehicle delivery scheduling and multi-trip vehicle routing, a mixed integer programming model is developed to minimize the total cost, consisting of an order completion time cost and a delivery distance cost. The problem is NP-hard: small-sized instances can be solved directly by a commercial solver, while large-sized instances are difficult to solve to optimality with a solver.
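    As a rough illustration of the objective described above (the cost weights $\alpha$ and $\beta$ below are assumed for exposition and are not stated in the abstract), the total cost can be sketched as

$$\min \; Z = \alpha \sum_{o \in O} C_o + \beta \sum_{k \in K} \sum_{r \in R_k} D_{kr},$$

    where $C_o$ is the completion (delivery) time of order $o$, $D_{kr}$ is the travel distance of the $r$-th trip of vehicle $k$, and $\alpha$, $\beta$ convert time and distance into cost; the paper's full model additionally carries the flexible job shop and multi-trip routing constraints.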
    In order to solve large-sized instances, an improved memetic algorithm (IMA) framework is adopted with a newly designed chromosome encoding that matches the characteristics of the problem. The IMA improves search efficiency by introducing the idea of local search into the mutation operator of the genetic algorithm: local search procedures are used to educate the offspring so that they carry a large amount of problem-specific knowledge. A new parent selection method based on the Softmax function is proposed, which ensures that parents selected in early iterations are of higher quality, so the objective value decreases faster, while parents selected in later iterations are more diverse, allowing the algorithm to escape local optima. The proposed IMA also includes two crossover operators and four education operators. The algorithm evaluates the quality and diversity of chromosomes through fitness and biased fitness functions, and adopts a survivor selection method that accounts for the contribution of diversity, which effectively balances exploration and exploitation. Finally, a three-step local search is designed to fine-tune the best individual, improving the quality of the best solution and speeding up convergence.
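    A minimal sketch of a Softmax-based parent selection of the kind described above is given below; the function name, temperature schedule and fitness scaling are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def softmax_parent_selection(costs, iteration, max_iter, rng=None):
    """Pick one parent index with Softmax probabilities over solution costs.

    Early iterations use a low temperature (greedy selection, fast descent);
    later iterations use a higher temperature (more diverse parents, easier
    escape from local optima). The schedule and scaling are assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    costs = np.asarray(costs, dtype=float)
    temperature = 0.1 + 0.9 * iteration / max_iter          # assumed schedule
    scores = -(costs - costs.min()) / (costs.std() + 1e-9)  # lower cost -> higher score
    weights = np.exp(scores / temperature)
    return rng.choice(len(costs), p=weights / weights.sum())
```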
    The numerical experiments show that when the number of orders and machines exceeds 5, the Gurobi solver cannot find the global optimum within an acceptable time, while the proposed IMA finds optimal or near-optimal solutions for small-sized instances within 10 seconds. For large-sized instances, the proposed IMA outperforms the classical genetic algorithm and three variants with the improved operators removed, which verifies both the performance of the algorithm and the effectiveness of the improved operators. On the other hand, integrated scheduling is more effective than separated scheduling in both small-sized and large-sized instances. In scenarios with many distinct customer nodes, long-distance delivery, and greater emphasis on time objectives, the advantages of integrated scheduling are more pronounced.
    This research provides theoretical guidance for the decision of integrated production and delivery scheduling. Some uncertain factors, such as dynamic arrival of orders and uncertain transportation time affected by congestion, can be taken into account in future research to update the scheduling decision in real time and solve the dynamic integrated scheduling problem. It is also possible to study the integrated optimization of networked production planning for multi-workshop production and trunk distribution planning.
    Sequence-definite Vehicle Routing Problem: Models and Algorithms
    WEN Ruolin, CHEN Feng
    2025, 34(2):  9-15.  DOI: 10.12005/orms.2025.0036
    The problem studied in this paper is motivated by automobile after-sales logistics; in particular, it addresses a so-called sequence-definite route, a new operational pattern in optimizing transportation vehicle routes from consolidation centers to dealers. In the problem, a relatively fixed time window is assigned to each dealer, because a relatively fixed time window reduces waiting time and accordingly increases unloading efficiency. In addition, the volume and weight demands of dealers and the volume and weight capacities of trucks are considered. This paper formally defines the new concept of a sequence-definite vehicle route: a series of non-intersecting dealer sets is given to represent all dealers on a whole route, each dealer set has a fixed visiting sequence of its dealers, and any feasible subroute may only contain dealers from a single given dealer set. Moreover, the visiting sequence of dealers in a subroute must follow the sequence of the dealer set that contains them. The objective is to minimize the total transportation cost subject to the time windows of dealers and the volume and weight capacities of trucks. The paper makes three contributions. The first is to abstract and propose a new theoretical problem, the sequence-definite vehicle routing problem, built on a well-defined concept of the sequence-definite route. The second is to present a new mixed integer linear programming model of vehicle routing with the sequence-definite pattern and to prove its NP-hardness by a reduction from the partition problem. The third is to develop basic properties of the sequence-definite routing problem and design efficient approximation and exact algorithms, followed by comprehensive computational experiments. The study has practical significance because the proposed vehicle routing optimization problem exactly matches the practical pattern, and the corresponding models and algorithms can be applied directly to automobile after-sales logistics decisions.
    Computational intractability theory is used to show the hardness of the problem. Mixed integer linear programming is used to model the new problem. Algorithm design and analysis methods, including heuristics and branch and bound, are applied to design efficient algorithms. Case studies are also given to show the efficiency of the models and algorithms in practice.
    Motivated by real practice, the paper identifies and formulates an interesting new vehicle routing optimization problem arising in automobile after-sales logistics. The new concept of the sequence-definite route is introduced, and an optimization problem with the sequence-definite property is proposed for the first time. The problem is shown to be NP-hard. Heuristic algorithms based on bin packing characteristics and route saving properties, including sequence-definite NF, sequence-definite BF, sequence-definite FF, a sequence-definite saving algorithm and sequence-definite insertion, are constructed. A branch-and-bound algorithm with efficient lower and upper bounds is further developed to solve the problem exactly. Comprehensive numerical experiments show the effectiveness of the model and the efficiency of the proposed algorithms, and a sensitivity analysis is given as well. The branch-and-bound algorithm and the heuristic algorithms achieve 27.53% and 17.93% cost savings, respectively. Case studies based on real data from an automobile after-sales spare parts logistics enterprise, involving three whole routes with 8, 10 and 15 dealers respectively, are conducted. The case studies show that the outputs of the model and algorithms exactly meet the real decision requirements.
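    A minimal sketch of the bin-packing idea behind a sequence-definite First-Fit heuristic of the kind listed above, assuming a single whole route whose dealers are already given in their fixed visiting order; the function and variable names are hypothetical and this is not the paper's exact SD-FF.

```python
from typing import List, Tuple

def sequence_definite_first_fit(route: List[Tuple[str, float, float]],
                                cap_weight: float,
                                cap_volume: float) -> List[List[str]]:
    """Split one whole route into truck subroutes with a First-Fit rule.

    `route` lists (dealer, weight demand, volume demand) in the fixed visiting
    order, so every subroute automatically keeps the sequence-definite order.
    """
    subroutes: List[List[str]] = []
    loads: List[Tuple[float, float]] = []       # (used weight, used volume) per truck
    for dealer, w, v in route:
        for i, (uw, uv) in enumerate(loads):
            if uw + w <= cap_weight and uv + v <= cap_volume:
                subroutes[i].append(dealer)     # first subroute with enough capacity
                loads[i] = (uw + w, uv + v)
                break
        else:
            subroutes.append([dealer])          # open a new truck/subroute
            loads.append((w, v))
    return subroutes

# Usage: three dealers, trucks with 10 t weight and 30 m^3 volume capacity.
print(sequence_definite_first_fit([("D1", 6, 20), ("D2", 5, 8), ("D3", 3, 9)], 10, 30))
```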
    The concept of the sequence-definite route is of theoretical significance not only in problem formulation, model building and algorithm design, but also as a potential tool for algorithm design, particularly for vehicle routing problems. Future studies can therefore proceed in the following directions. One is to apply other advanced methodologies, including machine learning, column generation and Lagrangian relaxation, to the sequence-definite vehicle routing problem itself, in the hope of deriving more efficient mathematical models and algorithms. The other interesting direction is to develop a new search strategy in which the neighborhood of a local solution is constructed as a set of visiting nodes with a sequence-definite route, local search is performed on this neighborhood, and iterative algorithms are then derived.
    The paper is supported by the National Science Foundation of China (No.72172091). The authors thank Anji Zhixing Logistics for supporting the investigations.
    Flexible SERU System Formation Problem Based on Transferable Full-skilled Workers
    REN Yuhong, TANG Jiafu
    2025, 34(2):  16-22.  DOI: 10.12005/orms.2025.0037
    Due to fierce global market competition, the manufacturing environment is characterized by diversified customer demands, smaller batch sizes, and shorter product lifecycles and lead times. This drives manufacturing companies to transform their systems for improved flexibility and responsiveness. A SERU system consists of one or multiple serus, each of which comprises multi-skilled workers and flexible resources such as simple, movable equipment. Such a system can be rapidly and frequently constructed, reconfigured, dismantled, and reconstructed. The construction of SERU systems is a critical step in achieving flexibility and a focal point of research on SERU production. Existing research predominantly focuses on task-oriented SERU system formation (TOSF), which assumes that specific production tasks are known. However, such strategies can hardly avoid frequent reconstruction in dynamic, evolving market environments. To address this issue, some scholars have considered the dynamic nature of demand in SERU system formation, but they have not focused on flexibility performance, so the resulting SERU systems lack flexibility. Research on flexible SERU system formation is still scarce. Many related studies have shown that multi-skilled workers are an important factor in achieving flexibility: the higher the level of multi-skilling among workers within the system, the greater the system's flexibility. However, cross-training is both time-consuming and costly, and in practice training fully multi-skilled workers has become increasingly challenging. Therefore, the existing literature based on the assumption that all workers are fully skilled is difficult to apply to SERU systems. To solve this problem, many manufacturing companies have adopted transferable full-skilled worker (TFSW) strategies, in which only a small number of workers are fully cross-trained and allowed to move between serus. Under TFSW strategies, capacity can be dynamically shifted from any seru to any other by the TFSWs, which makes the system very robust to fluctuations in workloads (e.g., due to temporary shifts in product mix) or staffing levels (e.g., due to absenteeism). While some literature has examined the flexibility achieved through cross-seru worker transfers, its primary focus is on workforce scheduling within a given SERU system. Therefore, studying how to construct a flexible SERU system by considering transferable full-skilled workers is of practical significance.
    This paper addresses the Flexible SERU System Formation Problem based on Transferable Full-Skilled Workers (FSFP-TFSW). A multi-objective model (FSFP-TFSW) is established to minimize the skill training costs and the expected total completion time of orders. The complexity of the problem is analyzed and the NSGA-II algorithm is designed to solve it. The effectiveness and superiority of the FSFP-TFSW model are verified by comparing its performance with the non-transferable full-skilled workers strategy (NTS) and the fully flexible SERU system formation strategy (FULL), in which all workers are fully skilled, under different demand scenarios. The results show that the proposed FSFP-TFSW model can better balance cost and responsiveness by configuring a reasonable number of transferable full-skilled workers, making it more suitable for SERU system formation in dynamic demand environments.
    As a flexible SERU system formation strategy, the solutions obtained by the FSFP-TFSW model achieve a better balance between training costs and responsiveness. However, this article does not consider the TFSWs scheduling or the internal configuration of serus. How to simultaneously consider SERU system formation, SERU system scheduling, and TFSWs scheduling is a problem worth further research in the future.
    Equipment Production Decisions for Service-oriented Manufacturing with Cost Sharing
    ZANG Yuanji, JIANG Zhongzhong, MA Mingze
    2025, 34(2):  23-30.  DOI: 10.12005/orms.2025.0038
    Service-oriented manufacturing is a new manufacturing mode and industrial form that integrates the development of manufacturing and services, and is an important direction for the high-quality development of China's manufacturing industry. Compared with traditional sales modes, in equipment production, service-oriented manufacturers not only need to consider operators' demands for equipment, but also need to pay attention to the impact of equipment operations and maintenance. Therefore, it is crucial for service-oriented manufacturing enterprises to optimize equipment production with consideration of equipment functionality and durability. However, due to practical factors such as slow capital return, supply chain risks and high production cost, potential capital shortage has always been an important problem restricting the service-oriented manufacturing transformation of enterprises. By letting operators share the production costs of service-oriented manufacturers, cost sharing contracts can support higher-quality equipment production and offer a feasible way to relieve financial pressure. Therefore, this paper studies the production optimization of service-oriented manufacturing equipment under cost sharing contracts.
    This paper considers a supply chain composed of a service-oriented manufacturer and an operator, and builds a Stackelberg game model for two scenarios: without capital constraint (i.e., sufficient capital) and with capital constraints (i.e., scarce capital). The service-oriented manufacturer produces equipment and provides equipment usage service, and then charges the operator usage fees based on the usage time. The operator obtains utility by using equipment to produce products or provide services. As the cost sharing party, the operator first decides whether to make a cost sharing contract with the service-oriented manufacturer. If so, the operator determines the sharing proportion. Next, the service-oriented manufacturer decides the durability based on the functionality and completes equipment production. Finally, the operator decides the usage time and starts the equipment usage service.
    Through the analysis, the main results are as follows: (1)The cost sharing contract can effectively motivate the service-oriented manufacturer to improve equipment durability and cope with the lack of capital. The operator should share the production cost appropriately according to the service market price and the service-oriented manufacturer's capital status. (2)Although the cost sharing contract helps mitigate the pressure of production cost, the service-oriented manufacturer should still trade off the service revenue against the operation cost, and design appropriate equipment functionality based on the service market price environment and the operation cost parameter. (3)The strategy of introducing third-party assistance to share the cost can improve the cost sharing contract and achieve a win-win-win outcome for all three parties.
    Future research can consider how to implement a mechanism for supply chain profit increment redistribution to the service-oriented manufacturer, operator, and third-party institution. Moreover, our research assumes that equipment functionality and unit time usage fee are exogenous. The endogenous situations are also worth further research.
    An Improved Squirrel Search Algorithm for the Surgical Case Assignment Problem with Fuzzy Surgery Duration
    ZHU Lei, SU Qiang
    2025, 34(2):  31-37.  DOI: 10.12005/orms.2025.0039
    As the core of medical institutions, the operating room department involves the widest range of personnel and occupies a large amount of funds. According to incomplete statistics, surgery involves 70% of hospital departments, accounts for 9% of the annual budget, and contributes 40% of total revenue. Due to rapid population growth and the worsening aging problem, residents' medical needs continue to expand. The demand for surgery often exceeds the available medical capacity, which causes long waiting times.
    The surgical case assignment problem (SCAP) is an important part of operating room planning and has been proved to be NP-hard. It can be simply described as assigning a set of surgical cases to appropriate operating rooms within the planning period while meeting the corresponding deadline and duration constraints. In the classical SCAP, the surgery duration is predetermined. In practice, owing to differences in surgeons' skills and intraoperative emergencies, the duration usually fluctuates within a certain range, which affects the efficiency of operating room turnover.
    In this paper, an extended model of SCAP that considers fuzzy surgery duration (FSCAP) is proposed, and an improved squirrel search algorithm is designed to address this problem. The contributions of this paper are summarized as follows: (1)we extend the SCAP model to a fuzzy environment that considers uncertain surgery duration; (2)we modify the squirrel search algorithm and apply it to the proposed FSCAP. Overall, this study can optimize the surgery sequence of patients and improve the utilization efficiency of operating room resources.
    In line with the actual surgical practice of hospitals, the uncertainty of surgery duration is incorporated into SCAP, and triangular fuzzy numbers are introduced to establish a surgical case assignment model with the objective of minimizing the fuzzy operating cost.
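    For illustration, triangular fuzzy durations can be added and compared as sketched below; the graded-mean defuzzification shown is an assumption for ranking purposes, since the abstract does not state the exact rule used in the paper.

```python
from dataclasses import dataclass

@dataclass
class TriFuzzy:
    """Triangular fuzzy duration (a, m, b): optimistic, most likely, pessimistic."""
    a: float
    m: float
    b: float

    def __add__(self, other: "TriFuzzy") -> "TriFuzzy":
        # Standard triangular fuzzy addition, component by component.
        return TriFuzzy(self.a + other.a, self.m + other.m, self.b + other.b)

    def rank_value(self) -> float:
        # Graded-mean defuzzification, used here only to compare schedules.
        return (self.a + 2.0 * self.m + self.b) / 4.0

# Usage: fuzzy workload of an operating room with two assigned cases.
room_load = TriFuzzy(60, 75, 95) + TriFuzzy(40, 50, 70)
print(room_load, room_load.rank_value())   # TriFuzzy(a=100, m=125, b=165) 128.75
```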
    In this paper, an improved squirrel search algorithm (ISSA) is developed to address the problem. First, a single-list encoding scheme and a corresponding decoding method are proposed. Second, several effective heuristics are employed to improve the quality of the initial population. Third, the path relinking technique and a reverse operator are embedded into the algorithm to simulate the foraging behaviors of flying squirrels. Based on the datasets proposed by RIZK and ARNAOUT(2012), 10 instances of the surgical case assignment problem with fuzzy surgery duration are developed to evaluate the effectiveness of the proposed algorithm.
    To further evaluate the effectiveness and superiority of ISSA, several existing methods including CPLEX, discrete particle swarm optimization (DPSO), hybrid biogeography optimization (HBBO) and memetic algorithm with novel semi-constructive evolution operators (MASC) are employed for comparisons. For each instance, the proposed algorithm is performed 30 times independently.
    Table 2 presents the comparison between ISSA and CPLEX on the small-scale instances. It can be seen that ISSA always obtains the optimal solution for each instance in every run, with identical best, average and worst values; compared with the value found by CPLEX, the relative error of the average value is 0.00%. Hence, ISSA demonstrates high accuracy and stability on small-scale instances. The comparison between ISSA and other meta-heuristics on larger instances is shown in Table 3. ISSA obtains feasible solutions for all instances, while the other algorithms fail as the instance scale increases. The reason is that several effective heuristics are embedded into ISSA, which improves the quality of the initial population. Therefore, ISSA shows clear superiority over the other algorithms in solving FSCAP.
    However, there are still a few limitations of this study. The FSCAP is an idealized mathematical model that extends the classic SCAP. In actual surgical planning, medical institutions should not focus merely on operating cost; more indicators such as doctor-patient satisfaction, surgical resource utilization and patient waiting time ought to be taken into consideration. Additionally, the classic SCAP simplifies the constraints on human resources and surgical equipment, although these resource constraints critically affect operating room planning. Therefore, subsequent studies will further consider extensions of the objective functions and realistic constraints. To test the performance of the proposed ISSA, randomly generated benchmarks are used; in future work, the authors will apply the proposed algorithm to datasets collected from medical institutions.
    Sales Forecasting of New Energy Vehicles with a Decomposition-cluster-ensemble Method
    WANG Fang, ZHAO Ankun, BU Haoyue, YU Lean
    2025, 34(2):  38-43.  DOI: 10.12005/orms.2025.0040
    The monthly sales data of new energy vehicles exhibit multiple characteristics such as nonlinearity and seasonal and modal aliasing, so forecasting with a classical single model yields low prediction accuracy. To improve the accuracy of the monthly sales forecast of new energy vehicles, based on the “decomposition-ensemble” modeling idea, making full use of the advantages of each single model, and following the principle of “divide and conquer”, a comprehensive “decomposition-clustering-ensemble” prediction model is constructed to achieve high-precision prediction of monthly sales of new energy vehicles.
    Firstly, the ensemble empirical mode decomposition (EEMD) model is applied to decompose the time series of monthly sales volume of new energy vehicles. This approach effectively handles the nonlinear and non-stationary characteristics of the series and suppresses mode aliasing. Then, to improve the efficiency of prediction modeling and reduce error accumulation, sample entropy and the K-means method are used to cluster the decomposed components into three classes: high-frequency, medium-frequency and low-frequency sequences. The GM(1,1) model, which is suited to series following an exponential pattern, is used to forecast the low-frequency components. The autoregressive integrated moving average (ARIMA) model, which transforms complex non-stationary sequences into stationary ones for modeling, is used to forecast the medium-frequency components. The long short-term memory (LSTM) network, which selects and processes information through its three internal gates and is suitable for more complex series, is used to forecast the high-frequency components. Finally, the linear weighting method combines the forecasts of all components to obtain the forecast of monthly sales of new energy vehicles.
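    A simplified sketch of the sample-entropy-plus-K-means grouping step described above; the entropy parameters, cluster count and function names are assumptions, and the paper's exact settings may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) of a 1-D series; a plain O(n^2) implementation for illustration."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def match_count(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (dists <= r).sum() - len(templates)   # drop self-matches

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else 10.0   # cap when no matches found

def group_components(imfs, n_groups=3):
    """Cluster EEMD components into high/medium/low-frequency groups by sample entropy;
    each group is then forecast by LSTM, ARIMA or GM(1,1) respectively."""
    features = np.array([[sample_entropy(c)] for c in imfs])
    return KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(features)
```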
    The monthly sales volume of new energy vehicles from January 2012 to May 2022 published by the China Association of Automobile Manufacturers is used as the data set to verify the proposed “EEMD-K-LSTM/ARIMA/GM(1,1)” comprehensive forecasting model. The results show that, compared with traditional single models and the “decomposition-ensemble” model, the “decomposition-clustering-ensemble” model achieves a better forecasting effect, with MAPE values of 8.75% and 10.62% for one-step-ahead and three-step-ahead forecasts of monthly sales, respectively. Using the “EEMD-K-LSTM/ARIMA/GM(1,1)” model, the sales data of new energy vehicles in China from January 2012 to October 2022 are modeled. The predicted sales for November 2022 to January 2023 are 800,000, 830,000, and 520,000 vehicles, respectively, consistent with the overall trends in 2020 and 2021.
    It should be noted that in reality, there are multiple factors that influence the monthly sales of new energy vehicles, including national policies, seasonal factors, economic conditions, and so on. To achieve long-term trend forecasting, the next step should consider incorporating various influencing factors into the model and conducting more comprehensive predictions and discussions through methods such as scenario analysis.
    Robust Scheduling Optimization for Multi-objective Resources Constrained Projects in Uncertain Environment
    ZHANG Houkun, MA Ran, PENG Kunkun, ZHANG Yuzhong
    2025, 34(2):  44-51.  DOI: 10.12005/orms.2025.0041
    Resource-constrained project scheduling problems widely exist in construction engineering, equipment manufacturing and other industries, and have significant practical value. Most classical project scheduling problems are deterministic, assuming that the internal and external environment will not change. However, in reality, most projects face numerous uncertainties, and a schedule made during project design is likely to be delayed by interference. Time and cost are two important indicators in project scheduling, and the robustness of the schedule is key to ensuring smooth implementation under uncertainty; it is therefore particularly important to construct a scheduling plan with strong anti-interference ability in a complex, uncertain environment.
    In this paper, we study a multi-objective project scheduling optimization problem in an uncertain environment, trying to balance completion time, robustness and delay penalty cost to find a solution that meets various needs. We first define the problem and introduce the corresponding notation and formulas. This paper argues for the necessity of setting a resource buffer in addition to the time buffer. The resource buffer can absorb resource conflicts in the original schedule caused by the delay of a certain activity or by the unavailability of part of the planned resources when an activity is executed, so that the overall operation of the project is not disturbed. The time buffer and resource buffer are combined, and the effectiveness of the combination is illustrated by an example and further verified in the data simulation. Then, a multi-objective robust optimization model of the problem is constructed and introduced in detail. NSGA-II, a multi-objective genetic algorithm based on non-dominated sorting that has been widely studied and applied, is chosen to solve the problem. In the fourth part of the paper, an improved multi-objective non-dominated sorting genetic algorithm is designed, and the relevant steps are modified to better fit the problem studied here; the improvements are described in detail. The algorithm shortens the solution time by ensuring the feasibility of solutions, incorporates the uncertain environment to make solutions closer to reality, and uses a simulation environment to screen better individuals for the next iteration. Finally, numerical experiments are designed in the fifth part of the paper. To demonstrate the performance of the improved algorithm, the traditional non-dominated sorting genetic algorithm is compared with it. Using a generated standard sample set, control experiments under different numbers of activities and different duration constraints are designed, and the outputs are presented in the attached table. The effectiveness and feasibility of the algorithm are verified by a large number of experiments; the Pareto optimal solutions obtained by the algorithm are tested in the uncertain environment, and the test results further verify their performance. From the output set of optimal solutions, managers can choose an appropriate solution according to their preferences.
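    As background for the NSGA-II step described above, a minimal sketch of fast non-dominated sorting is shown below; the objective tuple encoding (all objectives minimized) is illustrative only and not the paper's exact formulation.

```python
def fast_non_dominated_sort(objectives):
    """Rank solutions into Pareto fronts (NSGA-II style).

    `objectives` is a list of tuples, e.g. (makespan, -robustness, delay_cost),
    all to be minimized.
    """
    n = len(objectives)
    dominates = lambda p, q: all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))
    S = [[] for _ in range(n)]           # S[p]: solutions dominated by p
    counts = [0] * n                     # counts[p]: how many solutions dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(objectives[p], objectives[q]):
                S[p].append(q)
            elif dominates(objectives[q], objectives[p]):
                counts[p] += 1
        if counts[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                counts[q] -= 1
                if counts[q] == 0:
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]
```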
    Finally, according to the mathematical model and the data experiments designed in this paper, the following conclusions are drawn: (1)A resource buffer can effectively deal with resource usage conflicts caused by activity delays. With the help of the proposed optimization model, the resource buffer can be reasonably allocated among activities, thereby effectively improving the robustness of the project schedule. (2)The improved NSGA-II algorithm performs better than the original and is more suitable for solving this problem. (3)The robustness of the project schedule increases as the project duration is extended. The results can provide a reference for project managers to weigh objectives and make progress plans in an uncertain environment. It should be pointed out that this research does not consider the cost of adding buffers, which needs to be discussed in future studies.
    Delivery Target Design of Aviation Complex Equipment Considering Master-slave Nature of Customers and Enterprises
    TONG Huagang, ZHU Jianjun, WU Lei, LIU Weiqiao
    2025, 34(2):  52-58.  DOI: 10.12005/orms.2025.0042
    The delivery system of aviation complex equipment is an important means of realizing corporate profits and ensuring the sustainable combat capability of the military, but such a delivery system has not yet been established, which restricts the development of complex equipment. To establish the delivery system, the primary task is to determine its objectives, such as the delivery cycle, customer satisfaction, and claims. The delivery system of complex equipment is composed of many important components, and among them the delivery objective is one of the most important. In previous works, the leader accumulates all tasks, so the whole construction process is push-type, which results in low efficiency. To avoid this shortcoming, we design a pull-type construction, which can enhance efficiency. In the pull-type construction, the objective comes first, so the objective of delivery should be defined first. The delivery concerns several groups, including the delivery team and the customer team, and the objective can only be determined after a full discussion among these groups. However, it is difficult to reach consensus between two different groups. On the one hand, the two groups play different roles, and the customer team is generally more important than the delivery team. On the other hand, the design of the objective differs from the selection of alternatives: the parameters of the objective design are continuous, unlike the discrete selection among alternatives.
    To address these issues, this study proposes the following solutions. First, a dual-layer structural model is established to characterize the team relationships: the client team is positioned in the upper layer and the delivery team in the lower layer, reflecting a client-first principle. Second, targeting the research gap in delivery parameter optimization, a data-driven group decision-making method is introduced. Considering the inefficiency of large-scale team negotiations, an intelligent evaluation method based on recurrent neural networks (RNN) is designed. Compared with traditional multi-criteria decision analysis (MCDA) methods, this approach not only handles complex nonlinear relationships between attributes but also captures cumulative effects in evaluations, significantly enhancing assessment efficiency. For model solving, an improved Grey Wolf Optimization (GWO) algorithm is proposed to address the nonlinear characteristics of the dual-layer programming model. A Lévy flight mechanism is incorporated to avoid local optima, and a gravitational search algorithm is integrated to enhance global search capability, particularly boosting local search performance; this effectively resolves the inefficiency of traditional exact algorithms. To verify the performance of the proposed method, a case study on the delivery of complex equipment is used. The results show that the prediction performance of the recurrent neural network is better than that of a standard neural network, which verifies the proposed conclusion. Meanwhile, comparisons between the improved grey wolf algorithm and other heuristic algorithms demonstrate its advantages and confirm that the proposed mechanisms are useful.
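    A minimal sketch of the Lévy-flight perturbation mentioned above (Mantegna's algorithm), assuming generic position vectors; the gravitational-search component and the exact step scaling used in the paper are not reproduced here.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """One Lévy-flight step drawn with Mantegna's algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def levy_perturb(position, leader, step_scale=0.01, rng=None):
    """Perturb a grey-wolf position with a Lévy jump scaled by its distance to the
    alpha (leader) wolf, helping the search escape local optima."""
    position, leader = np.asarray(position, float), np.asarray(leader, float)
    return position + step_scale * levy_step(position.size, rng=rng) * (position - leader)
```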
    Considering the features of fuzzy numbers, other prediction methods could be explored to predict the experts' preferences. Also, as the delivery system is composed of different subsystems, it is worthwhile to study how to assign the objective value to each subsystem.
    Generalized Game Cross Fixed Cost Allocation Model Considering Equity and Coordinated Allocation
    DONG Feng, PAN Yuling
    2025, 34(2):  59-65.  DOI: 10.12005/orms.2025.0043
    Previous research proposed the Game-Fixed Cost Allocation Model (Game-FCAM) that combined game theory and efficiency, but it overlooked the element of equity in the resources allocation process. This article develops a fixed cost allocation model that considers equity, game theory and efficiency. To achieve this goal, we first devise an enhanced general cross-efficiency algorithm that considers equity and cooperative game theory to optimize the conventional Game-FCAM. The outcome of this approach is the Game-Equity Fixed Cost Allocation Model (Game-EFCAM). Based on Game-EFCAM, we further consider multiple resources allocation and propose the Generalized Game-Equity Fixed Cost Allocation Model (Generalized Game-EFCAM) applicable to multiple resources allocation.
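    For reference, the conventional cross-efficiency score that the Game-EFCAM builds on evaluates DMU $j$ with the optimal input/output weights of DMU $d$ and then aggregates over all evaluators; in standard notation (a sketch of the classical definition, which the generalized model relaxes by allowing unequal aggregation weights $w_d$):

$$E_{dj} = \frac{\sum_{r=1}^{s} u_{rd}\, y_{rj}}{\sum_{i=1}^{m} v_{id}\, x_{ij}}, \qquad \bar{E}_j = \sum_{d=1}^{n} w_d\, E_{dj},$$

    with $w_d = 1/n$ in the classical equal-weight case.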
    Comparing the allocation results of the Generalized Game-EFCAM with those of other resources allocation schemes, we find that the former better satisfies the requirement of equity and achieves higher resources allocation efficiency. In other words, the proposed model allocates resources from the perspectives of equity and game theory while maximizing overall efficiency. Qualitative and quantitative analyses both show that the allocation results of the Game-EFCAM are superior to those of the traditional Game-FCAM. In addition, we take the coordinated allocation of carbon emission quotas and energy use quotas as an example and examine the coordination results of different allocation schemes. The results show that the correlation between carbon emission quotas and energy use quotas under the Generalized Game-EFCAM is much higher than under other fixed cost allocation models, indicating that the proposed model offers higher coordination and applicability for coordinated allocation and collaborative emission reduction.
    The unique contributions of the proposed Generalized Game-EFCAM are as follows: (1)This paper extends the applicability of cross-efficiency. In previous research, cross-efficiency was based on an equal distribution of weights across units. This paper modifies this approach, allowing cross-efficiency to be applicable to units with different weights. At the same time, the paper combines equity principles to propose a fixed cost allocation model that integrates game theory, equity, and efficiency principles. (2)This paper expands the applicability of allocation models. Previous research typically focused on the allocation of a single fixed cost, with limitations in scenarios involving the allocation of multiple fixed costs, especially synergistic allocation. The Generalized Game-EFCAM proposed in this paper is suitable for scenarios involving the allocation of multiple fixed costs, considering the characteristics of different fixed costs, particularly in the case of synergistic features. In this paper, a two-type fixed-cost allocation example is used to derive the Generalized Game-EFCAM, and the corresponding solution steps and theorems are proposed. (3)This paper uses the pressing problem of carbon emission quotas and energy use quotas allocation in China as an example and applies the Generalized Game-EFCAM to solve the coordination allocation quota of carbon emission rights and energy use rights.
    The main advantages of the model presented in this article are as follows. (1)In the process of resources allocation, it comprehensively considers efficiency and equity principles from a game perspective, and is therefore more in line with actual demand. (2)It can be applied to multi-resources allocation scenarios, allocating resources from the perspective of maximizing overall allocation efficiency, which improves both resource utilization efficiency and distribution efficiency. The model proposed in this article can also be improved in the following respects. Firstly, this article does not set specific equity standards when proposing the equity principle, but adopts per capita payment ability as the equity criterion; indicators such as the Gini coefficient and the Theil index could also be used. Secondly, the Generalized Game-EFCAM proposed in this article only considers games within one camp; if extended to games between two camps, the model could be modified by considering factors such as benevolence and confrontation. In addition, the Generalized Game-EFCAM is an extension of Game-FCAM, but the equity principle and multi-resources coordinated allocation proposed here can also be extended to other DEA models.
    MTO Supply Chain Hedging Strategy under Delivery Time Sensitive Demand
    ZHAI Yue, ZHENG Dazhao, XU Suxiu, LAI Kinkeung
    2025, 34(2):  66-72.  DOI: 10.12005/orms.2025.0044
    With the rapid development of make-to-order (MTO) supply chains, customers are becoming more and more sensitive to the service level, e.g., the delivery lead time and delivery efficiency. A short delivery time stimulates market demand, and vice versa. In order to maintain a competitive advantage in the customized market, the retailer needs to shorten the delivery time while ensuring on-time delivery. However, there are inherent uncertainties in the production process, such as machine failures, shortages of raw materials and unskilled work crews, so the MTO manufacturer often fails to meet the required delivery time. Unfortunately, the retailer must pay a tardiness penalty if the actual delivery time exceeds the promised delivery time. In order to improve the on-time delivery rate under a short delivery time, the retailer often requires the manufacturer to hedge against its production uncertainty, which is defined as production lead-time hedging in this research. Although the production lead-time hedging strategy may improve the retailer's profitability by mitigating tardy delivery, the manufacturer must spend more on the hedging, e.g., hiring more workers, leasing more production lines, or requiring workers to work overtime. Hence, the manufacturer will withdraw from the production lead-time hedging strategy if its profit is worse off. To resolve the conflict between the retailer and the manufacturer, we propose a side payment contract under which the retailer makes a direct money transfer to the manufacturer to compensate for its hedging effort. This work explores the effect of production lead-time hedging on the delivery time decision, the on-time delivery probability, and the profits of each party, and examines the effect of the proposed side payment contract on coordinating the decentralized supply chain.
    We consider three scenarios corresponding to different power settings, namely, the centralized model, the Nash game model and retailer-led Stackelberg game model. Under the centralized model, the decisions are made by a super manager who aims at maximizing the profit of entire supply chain. Under the Nash game model, the retailer decides the delivery time while the manufacturer decides the production lead-time hedging amount simultaneously. Under the retailer-led Stackelberg game model, the retailer who acts as the game leader chooses the delivery time at the first stage. Given the delivery time, the manufacturer chooses the production lead-time decision at the second stage. By analyzing the game equilibrium, we derive the optimal decision for each participant.
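    In the retailer-led setting described above, the equilibrium is found by backward induction; schematically (generic symbols, not the paper's exact notation):

$$h^*(T) = \arg\max_{h \ge 0} \; \pi_M(T, h), \qquad T^* = \arg\max_{T \ge 0} \; \pi_R\big(T, h^*(T)\big),$$

    where $T$ is the promised delivery time, $h$ the production lead-time hedging amount, and $\pi_R$, $\pi_M$ the retailer's and manufacturer's profit functions.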
    Through numerical analysis and model comparison, we find that the production lead-time hedging strategy improves the channel profit as well as promotes market demand. A win-win outcome is reached through the proposed side payment contract. The major findings and managerial implications are summarized as follows:
    First, adopting the production lead-time hedging strategy allows the retailer to choose a shorter delivery time. In particular, under the centralized model, the delivery time is the shortest and the production lead-time hedging amount is the highest. The production lead-time hedging strategy protects both the retailer's and the manufacturer's profits even when consumers' delivery time sensitivity or the tardiness penalty becomes higher. The relationship between the different game models depends on the tardiness penalty sharing rate, the production lead-time hedging cost and the unit tardiness penalty.
    Second, the retailer should actively offer the side payment contract to encourage the manufacturer to take part in the production lead-time hedging strategy. Although the retailer pays an extra cost for hedging, its profit is compensated by the reduced tardiness penalty and increased sales. Under the decentralized supply chain, each party's profit under the retailer-led Stackelberg game is higher than that under the Nash game. Hence, we suggest that the manufacturer should not blindly pursue an equal power structure, since the retailer-led Stackelberg game is the dominant strategy for both parties.
    Third, in the light of the equilibrium decisions, we suggest both the retailer and manufacturer should consider the tardiness penalty, retail price, consumer delivery time sensitivity, consumer price sensitivity and initial market size while optimizing their operation decisions. From the retailer's perspective, raising retail price while selling products to market with low sensitivity towards price/delivery time, or low tardiness penalty brings more profits. From the manufacturer's perspective, selling products to market with low or moderate delivery time sensitivity and retail price increases its profitability.
    Multi-product Pricing Model of Monopoly Retailers under Background of Blockchain
    JI Qingkai, CHEN Ruoyu, ZHAO Da, HU Xiangpei
    2025, 34(2):  73-79.  DOI: 10.12005/orms.2025.0045
    Blockchain technology is reshaping various industries with its features of decentralization, immutability, openness and transparency. In the retail industry, blockchain applications are in full swing. Walmart, an offline retail giant, has urged its suppliers to join its platform and “put” many fresh products on the blockchain. By the end of 2020, Walmart China's traceable fresh meat accounted for 50% of total packaged meat sales, traceable vegetables for 40%, and traceable seafood for 12.5%. The online Chinese retail giant JD.com has also built a blockchain platform. According to data released by JD Digits in 2020, JD's blockchain platform brought an overall sales growth of 9.97% in 2020; among the products on the chain, the sales of nutrition and health products increased by 29.4%, maternal and infant milk powder by 10.0%, and fresh products by as much as 77.6%. These data show that the application of blockchain enhances the brand image of retailers and makes consumers trust the products on the chain more, thereby increasing their willingness to purchase. However, according to the authors' survey of offline supermarkets (Walmart, Olé boutique supermarkets, etc.), some products are on the chain while others are not. Besides suppliers being unwilling to put products on the chain because of unknown on-chain costs and benefits, the retailers who have built blockchain platforms also expressed doubts during the survey: should they put as many products as possible on the blockchain platform? Moreover, these retailers often sell substitute products of different brands at the same time, so putting substitute products on the chain may affect the competition between products and eventually the retailers' sales and revenues. Therefore, retailers who have already built a blockchain platform should carefully consider which products to place on the platform and how to set retail prices. In this paper, we aim to answer the following questions: (1)Considering products with different consumer preferences, which products should the retailer choose to put on the blockchain? (2)How should the retailer price multiple products in the context of blockchain? (3)What is the value of blockchain in multi-product pricing?
    We consider a monopoly retailer who has built a blockchain platform. The retailer can choose two substitute products to put on the blockchain and decide their prices. We develop analytical models to explore the impact of blockchain on the retailer's pricing strategies and profits. We consider cases with zero or positive variable cost of deploying blockchain, and cases with putting only one product or both products on the blockchain. In sum, we have six models, including three models with zero cost (BN/NB/BB) and three models with positive cost (BNC/NBC/BBC), where “C” indicates the positive cost, “BN” indicates that the first product is put on the blockchain while the second is not, and “NB/BB” likewise. The model NN in which there is no blockchain at all is introduced as a benchmark.
    Our main findings are as follows: 1.(Zero variable cost of blockchain.) If the retailer decides to put only one product on the chain (model BN or NB), she should choose the more popular product and set a higher price for it than for the other product. Thanks to the blockchain, the more popular product, even at a higher price, still has higher demand than the less popular one. Compared with the case of no blockchain (baseline model NN), the less popular product's price also increases while its demand remains the same, so the overall profit of the retailer increases; we call this the spillover effect of blockchain value, whereby the product not on the blockchain also benefits indirectly. Moreover, the retailer should put both products on the blockchain. 2.(Positive variable cost of blockchain.) If the retailer puts only one product on the chain (model BNC or NBC), compared with the baseline model NN, counterintuitively, the product not on the chain brings more profit while the product on the chain may not. Moreover, the spillover effect of blockchain value is more evident: the demand for the product not on the chain increases. When the market potential growth brought by the blockchain is large enough and the variable cost is small enough, it is profitable for the retailer to put products on the chain. Besides, when the variable cost lies in an intermediate range, the retailer had better put the more popular product on the chain if the cost is relatively low, and the less popular product if the cost is relatively high.
    Design of Resilient Supply Chain Network Considering Customer Type and Three-level Disruption
    CUI Qing'an, JIA Xiaodi
    2025, 34(2):  80-87.  DOI: 10.12005/orms.2025.0046
    Nowadays, in the era of economic globalization, enterprises attach importance to lean production; coupled with the frequent occurrence of natural disasters such as earthquakes, this puts great pressure on supply chain nodes and exacerbates the vulnerability of the supply chain. Since 2019, the outbreak of COVID-19 has been a major public health event whose long-term course and severity cannot be predicted, causing ripple effects along the supply chain and leading to disruptions at both upstream and downstream nodes. In this context, without proper planning, enterprises will bear high maintenance and loss costs, and may even face serious consequences such as reduced customer satisfaction and distrust. Under these conditions, the resilient supply chain has emerged. Supply chain resilience is defined as the dynamic ability to respond to and recover from disruptions, so that the supply chain can effectively adapt to disruption risks and increase the competitive advantage of enterprises. On the one hand, designing a resilient supply chain can effectively alleviate the risks caused by supply chain disruption and ensure the sustainable development of enterprises. On the other hand, studying the optimal resilient supply chain for normal production operations can reduce unnecessary redundancy, which helps reduce the cost pressure on the overall operation of enterprises and the supply chain. It is therefore of research significance for enterprises to establish a resilient supply chain to reduce cost risks.
    To this end, this paper considers constructing a resilient supply chain through proactive investment and post-disruption recovery measures, and discusses demand fluctuations and different customer types when designing supply chain scenarios. A constraint programming model is established to minimize the total cost of the supply chain. An effectiveness coefficient of the resilient supply chain is designed according to the disruption time and the total cost, in order to evaluate whether the established resilient supply chain is reasonable under different scenarios. A resilience index combining disruption time and disruption degree is designed to evaluate the resilience of different supply chains in the face of disruptions. Through the analysis of these two coefficients, the state of the resilient supply chain under different scenarios is comprehensively evaluated. In the decision-making process, different demand fluctuations and customer types affect the total cost of the supply chain, and different customer types and resilience measures affect the selection of supply chain nodes, which provides a theoretical reference for enterprises facing epidemic risks in establishing a resilient supply chain.
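    As a rough illustration of how a resilience measure can fold disruption depth and duration into one number (this is a generic metric and function name, not necessarily the coefficients defined in the paper):

```python
import numpy as np

def resilience_index(performance, target=1.0):
    """Normalized average performance over the disruption-and-recovery window:
    deeper drops and slower recovery both lower the index."""
    performance = np.asarray(performance, dtype=float)
    return performance.mean() / target

# Usage: service level drops to 60% and recovers over six periods.
print(resilience_index([1.0, 0.8, 0.6, 0.7, 0.9, 1.0]))   # about 0.833
```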
    We use a genetic algorithm to solve the model, drawing on data from related supply chain studies for the example analysis. The results show that, under the risk of supply chain network disruption, although the resilience of the supply chain is strongest when new customers are selected with a high probability, the cost of maintaining normal operation of the supply chain is also the highest. Although the loss cost of choosing old customers is greater than that of choosing new customers, by comparing the three scenarios comprehensively, the total cost and λ of choosing old customers with a high probability are the smallest. Therefore, in order to maximize the economic benefits of the supply chain, supplying old customers with a high probability is the best choice. On this basis, customer types are further subdivided according to the acceptable disruption time and the shortage penalty cost, and the cost changes under different customer types and demand fluctuations are analyzed; the resilience index and the effectiveness coefficient are analyzed to further examine the influence of different customer types on the objectives. The results show that when demand fluctuation is small, choosing customers with small fixed costs and low penalty costs has the greatest economic value, and the resilient supply chain for such customers is also more effective. However, the acceptable disruption time of such customers is short and the resilience of the supply chain is poor, so supply cannot be met within a short time; this yields higher economic benefits in the early stage of a disruption, but is not conducive to the long-term development of the supply chain.
    In future studies, more uncertain factors can be considered based on the realistic background, and the algorithm in this paper can be further optimized to get more accurate results in more complex supply chain network resilience design studies.
    Equilibrium Strategy of Supply Chain with Multiple Competing Retailers under Demand Disruption
    JIANG Lining, LIU Liping
    2025, 34(2):  88-95.  DOI: 10.12005/orms.2025.0047
    The frequent occurrence of emergencies not only poses a threat to society, people's lives and property, but also greatly disturbs supply chain management. One of the serious consequences is demand disruption; if effective countermeasures are not taken in time, it is likely to cause profit losses or even the breakdown of the supply chain. Especially in a market with multiple competing retailers and demand uncertainty, since each retailer's order quantity is related not only to its own retail price but is also affected by its competitors, it is necessary to derive the retailers' equilibrium pricing from their discretionary pricing choices, which makes disruption management more complicated. This paper therefore investigates the equilibrium strategies in a supply chain with multiple competing retailers under demand uncertainty and endogenous prices when the supply chain suffers a demand disruption.
    Firstly, according to the Kuhn-Tucker theorem, the order quantity of each retailer after the demand disruption is analyzed. Then, based on supermodular game theory, a disruption management model for multiple retailers is constructed and its equilibrium strategies are analyzed. In the analysis, the supply chain profit function is expressed as the product of the expected profit generated by the deterministic demand and a comprehensive profit loss coefficient. Based on this coefficient, the combined effect of the profit reduction caused by demand uncertainty and the extra costs caused by the demand disruption is investigated. By analyzing the disruption model, the main conclusions are as follows: after the demand disruption, the pricing game between retailers is a supermodular game, and there is at least one Nash equilibrium. If there are multiple Nash equilibria, they can be ordered, and the optimal equilibrium attains the optimal expected profit. On this basis, the conditions for the existence of a unique optimal Nash equilibrium are analyzed, under which a complete set of equilibrium strategies, including interior equilibrium strategies and corner ones, is obtained. When all retailers adopt corner equilibrium strategies, the equilibrium can be reached only by changing retail prices. This shows that it is robust to arrange the production plan in advance according to the demand forecast and sell products when the selling season comes. Otherwise, retailers need to jointly adjust order quantities and retail prices to reach the equilibrium strategy and cope with the demand disruption. Moreover, a price discount sharing mechanism is designed to coordinate the decentralized supply chain. Finally, a numerical example demonstrates the effectiveness of the countermeasures, and the results show that when retailers interact with each other effectively, not only can prices be maintained, but greater system profits can also be obtained. Therefore, it is suggested that a communication platform be established to facilitate information exchange, jointly prevent and control disruptions, and even coordinate the disrupted supply chain, so that the price order can be maintained and restored as soon as possible and the effectiveness of disruption management improved.
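    To make the equilibrium notion concrete, here is a minimal numerical sketch, under an assumed linear demand with substitution, of computing the retailers' pricing equilibrium by iterating best responses, a natural procedure for supermodular pricing games; the demand form, parameters and the single wholesale price are hypothetical and are not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical linear demand with substitution: d_i = a - b*p_i + g*mean(p_-i).
a, b, g, w, n = 10.0, 2.0, 0.8, 2.0, 3            # w = wholesale cost, n retailers

def profit(p_i, p_others):
    demand = max(0.0, a - b * p_i + g * np.mean(p_others))
    return (p_i - w) * demand

p = np.full(n, 4.0)
for _ in range(200):                               # best-response (tatonnement) loop
    for i in range(n):
        others = np.delete(p, i)
        res = minimize_scalar(lambda x: -profit(x, others),
                              bounds=(w, 20.0), method="bounded")
        p[i] = res.x
print("equilibrium prices:", np.round(p, 3))
```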
    The innovation of this paper mainly lies in: (1)Expanding the research on disruption management to a supply chain with multiple competing retailers under demand uncertainty and endogenous prices, investigating the complete set of the retailers' equilibrium behaviors, and providing quantitative support and decision-making references for disruption management from the perspective of analytical solutions. (2)Most existing studies adopt a revenue-sharing contract for coordination. However, in a multi-retailer supply chain, it becomes more complex for the supplier to monitor retailers' sales revenue, so revenue sharing may not be effectively implemented. Therefore, this paper designs a price discount sharing mechanism to coordinate the supply chain. This mechanism has a risk-sharing effect: when a retailer reduces the price to regulate market demand, the supplier provides the retailer with compensation based on a percentage of the price reduction. This pricing mechanism thus gives the retailer some flexibility, which makes long-term cooperation easier to achieve.
    In this study, we have not considered retailers' behavioral preferences, the influence of the market power of large retailers, or repeated games over multiple periods, which can be explored in the future.
    Research on Operation Strategy of Manufacturers and E-commerce Platform under Donation Behavior
    GUO Jinsen, WANG Jumei, ZHOU Yongwu
    2025, 34(2):  96-103.  DOI: 10.12005/orms.2025.0048
    Charitable donations are not only an important way for companies to fulfill their social responsibilities, but also effectively influence consumers' choices and evaluations of their products, enhancing a company's social reputation and value. For example, during the “7.20” extremely heavy rainstorm in Henan Province in 2021, ERKE's “bankruptcy-style” donation to the disaster area went viral online, and its sales increased dozens of times. At the same time, with the rapid development of e-commerce, more and more manufacturers rely on e-commerce platforms to sell products through distribution or agency selling models. Under different platform sales models, on the one hand, the pricing and revenue models of manufacturers and e-commerce platforms change; on the other hand, donation behavior, which raises manufacturers' costs but increases sales, has a new impact on their pricing decisions and profits, making the operational strategies of the supply chain more complex and challenging. For example, businesses on JD's “Love Dongdong” platform support public welfare in the form of sales donations. Despite the increase in unit product cost, these businesses have received higher platform search traffic and product demand. Over 36,000 JD businesses have opened “Love Dongdong” and donated over 25 million yuan. Therefore, studying the operational decisions of manufacturers and e-commerce platforms under donation behavior is of great significance.
    This paper focuses on a supply chain system composed of two manufacturers of substitutable products and an e-commerce sales platform. The following selling models are considered: (1)The e-commerce platform distribution model, where the two manufacturers sell their products to the e-commerce platform at a certain wholesale price, and the platform determines the retail prices of the two products after obtaining product ownership. (2)The e-commerce platform agency selling model, where the e-commerce platform does not have ownership or pricing rights of the products, but helps the manufacturers sell products at the prices set by the two manufacturers and earns profits by charging a per-unit sales commission. (3)The e-commerce platform hybrid sales model, in which one manufacturer distributes products through the e-commerce platform and the other sells products on a commission basis through the platform.
    First of all, the paper derives the equilibrium solution of the game by backward induction under the different platform sales modes. Then, it analyzes the impact of donation levels and product substitutability on supply chain operation decisions and profits under each sales model. Finally, it compares the profit levels of the two manufacturers and the e-commerce platform across the sales modes, and discusses the manufacturers' preferences for the different models. The research results show that: (1)Under the different platform sales modes, the wholesale prices, retail prices, and agency prices of the two manufacturers' products increase with the donation level. However, whether the profits of the two manufacturers and the total profit of the supply chain increase or decrease is jointly determined by the level of corporate donations and consumers' sensitivity to corporate donations. (2)Under certain conditions, an increase in substitutability between the two products can increase the profits of some enterprises and the total profit of the supply chain. (3)When the sales commission charged by the e-commerce platform is relatively high, both manufacturers prefer the e-commerce platform distribution model; otherwise, they prefer the agency selling or hybrid sales model.
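    The following sympy sketch illustrates the backward-induction logic on a stripped-down, single-manufacturer version of the two selling modes; the linear demand with a donation term, the symbols and the commission structure are assumptions made only for illustration and are not the paper's two-manufacturer model.

```python
import sympy as sp

# Toy single-manufacturer version: reselling mode (wholesale w, then retail p)
# and agency mode (manufacturer sets p, platform keeps commission rate r).
# Demand q = a - p + theta*d, where d is the donation level and theta the
# consumers' sensitivity; phi*d is the donation cost (all symbols hypothetical).
a, theta, d, c, phi = sp.symbols("a theta d c phi", positive=True)
w, p = sp.symbols("w p", positive=True)
q = a - p + theta * d

# Reselling, stage 2: the platform chooses p given w.
p_star = sp.solve(sp.diff((p - w) * q, p), p)[0]
# Reselling, stage 1: the manufacturer chooses w anticipating p_star.
w_star = sp.solve(sp.diff((w - c) * q.subs(p, p_star) - phi * d, w), w)[0]
print("reselling:", sp.simplify(w_star), sp.simplify(p_star.subs(w, w_star)))

# Agency: the manufacturer sets p directly and pays commission rate r.
r = sp.symbols("r", positive=True)
p_agency = sp.solve(sp.diff(((1 - r) * p - c) * q - phi * d, p), p)[0]
print("agency retail price:", sp.simplify(p_agency))
```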
    This paper investigates the operational strategy of two manufacturers and a single e-commerce platform under donation behavior, and future research can further expand to the operational strategies of multiple manufacturers and multiple e-commerce platforms. In addition, this paper only considers the impact of upstream manufacturers' donation behavior on supply chain operation strategies. In the future, further research can be conducted on supply chain operation strategies when both upstream manufacturers and downstream e-commerce platforms make charitable donations.
    Competition or Cooperation? ——Supply Chain Outsourcing in the Presence of Spillover Effect of Consumer Awareness
    YOU Guanzong, LUO Chunlin
    2025, 34(2):  104-110.  DOI: 10.12005/orms.2025.0049
    In the context of economic globalization, as market competition intensifies, suppliers of key components may choose to produce terminal products themselves, compete with manufacturers in downstream markets, and at the same time cooperate with competitors through outsourcing. Samsung, for example, sells its phones in the consumer market in addition to supplying premium screens to competitors like Apple and Xiaomi. In this context, suppliers producing key components may face the choice, thanks to their technological advantages, of either encroaching on the retail market and competing with product manufacturers downstream, or focusing on providing outsourcing services for key components to manufacturers. At the same time, the manufacturer can choose to bear the cost of producing the components itself or outsource them to a cost-advantaged supplier. But the availability of such a critical component could trigger changes in the companies' consumer base. When the manufacturer chooses to outsource, more consumers become aware of the supplier's existence, and thus may become aware of the supplier's products. This phenomenon, driven by cooperation, promotion and other factors, is known as the spillover effect of consumer awareness. For example, when Xiaomi, Huawei and other mobile phone manufacturers release new phones, they emphasize the quality screens provided by Samsung. Some users who did not know Samsung may learn of Samsung's phones from such publicity or distinctive logos and choose to buy them. In this way, the spillover effect of consumer awareness enriches customers' purchase choices and intensifies market competition, thus affecting the strategic choices of component outsourcing and market encroachment. Although many scholars have studied the spillover effect of consumer awareness from various angles, the demand spillover resulting from outsourcing has not been investigated. In addition, most of the existing literature considers only one party's outsourcing choice. Therefore, studying how the spillover effect of consumer awareness affects suppliers' market encroachment and manufacturers' outsourcing decisions can enrich the supply chain outsourcing literature and provide a theoretical basis for supply chain management and practice.
    In view of the coopetition structure of the supply chain mentioned above and the fact that the existing outsourcing literature does not consider the spillover effect of consumer awareness, this paper explores in depth how this spillover effect affects the supplier's market encroachment and the manufacturer's outsourcing decision. Considering the spillover effect of consumer awareness and the cost advantage, we construct a game-theoretic model of a two-level supply chain consisting of one supplier and one manufacturer, and obtain the subgame equilibrium results in the four cases defined by the supplier's market encroachment and the supply chain's outsourcing decision by backward induction. The impacts of the spillover effect and the cost on the equilibrium results are further analyzed. Finally, we resort to numerical analyses to explore the supplier's encroachment strategy and the profit changes with respect to the cost.
    The research shows that, because suppliers can induce manufacturers to outsource, the spillover effect of consumer awareness is double-edged. The party in the supply chain that tends to choose outsourcing suffers the negative impact of the spillover effect. When the manufacturer is motivated to choose outsourcing, the larger the spillover effect, the more beneficial it is to the supplier, and vice versa. The spillover effect of consumer awareness also reduces the incentives of both manufacturers and suppliers to outsource. In addition, when the supplier encroaches on the market, it needs to offer a wholesale price below cost to induce the manufacturer to outsource. Under the combined influence of the double marginalization effect and the efficiency improvement effect, when the manufacturer's cost is low, the supply chain profit rises as the cost increases. When the supplier's encroachment strategy is considered and the consumer-awareness spillover effect is large, the manufacturer may obtain higher profits at a moderate cost than at a low cost because market competition is moderated, and the profit of the supply chain is also improved.
    Robust Multiple Regression Prediction Model Based on Level Dependent Choquet Integral and its Application
    GAO Xiaohui, GONG Zaiwu
    2025, 34(2):  111-117.  DOI: 10.12005/orms.2025.0050
    In the face of complex data, existing outlier detection methods struggle to meet practical needs. In particular, modeling data suffer from various kinds of interference, causing modeling results to deviate from the true model. Consistency plays an important role in modeling complex fluctuating data, and hidden inconsistencies pose a significant threat to model performance. Therefore, it is urgent to find suitable methods for correcting the data. In the field of decision-making, robust ordinal regression obtains more robust parameter results by repeatedly communicating with decision-makers. In this paper, this idea is introduced into the prediction model to identify inconsistencies in the data and improve the robustness of the model. Besides, limited data restrict model performance in the forecasting process, so fully mining the hidden information contained in the data under existing conditions and making full use of the available data is a key concern of current prediction models. The level-dependent Choquet integral refines the traditional Choquet integral through interval partitioning and obtains more data information through this finer division, which can effectively alleviate the insufficient information mining of existing data. Multiple regression is widely used in many fields, but it still has two shortcomings in dealing with multivariate prediction problems. First, the relationship between the dependent and independent variables should be considered jointly in outlier detection, that is, whether there is an anomaly in the whole rather than in a single sequence. Second, traditional decomposition techniques do not consider the interaction between variables when enriching data; it is necessary to enrich the sample data and fully mine information while taking this interaction into account. Therefore, effectively solving these two problems is of great significance for improving the performance of multiple regression models and provides new ideas for the development of predictive models.
    This article proposes a robust multiple regression model based on the level-dependent Choquet integral. Firstly, the model checks the relationship between the dependent and independent variables through 0-1 programming. If the results are all 0, there are no outliers, that is, the data are consistent. Otherwise, outliers make the data inconsistent, and they are eliminated to ensure that the data used are not disturbed. Secondly, level-dependent Choquet integral processing is carried out on the data to obtain more refined data through interval division; the purpose is to obtain richer data samples while considering the interaction between indicators and to deeply mine the information in the original data. Finally, the refined sample data are subjected to fractional-order accumulation. The multiple regression model is established using the least squares principle to obtain the parameter estimates, and the grey wolf optimization algorithm is used to optimize the fractional-order accumulation coefficient to improve model performance. The fractional-order accumulation is intended to improve the predictive performance of the multiple regression model: a multiple regression model built on the sequence obtained by the r-order accumulation operator is more flexible than traditional multiple regression, and when r equals 0 it reduces to the traditional model, so adding the r-order accumulation operator is an extension of traditional multiple regression. On this basis, the new model is applied to the prediction of carbon dioxide emissions of the Chinese fleet. The results show that the robust multiple regression model based on the level-dependent Choquet integral predicts better than other classic models. At the same time, the data mining scheme designed in this paper can also be applied to many other prediction models.
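    As a hedged illustration of two building blocks named above, the sketch below implements an r-order accumulation operator in the usual grey-system convention and then fits a least-squares regression on the accumulated series; the data, the value of r and the overall pipeline are toy assumptions, and the paper's 0-1 programming check, Choquet-integral processing and grey wolf optimization are not reproduced here.

```python
import numpy as np
from scipy.special import gammaln

def r_order_accumulation(x, r):
    """r-order accumulated generating operator (grey-system convention):
    y[k] = sum_{i<=k} C(k-i+r-1, k-i) * x[i], with the binomial coefficient
    generalized via the gamma function; r = 0 returns the original series."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for k in range(len(x)):
        for i in range(k + 1):
            m = k - i
            coef = 1.0 if m == 0 else np.exp(gammaln(m + r) - gammaln(r) - gammaln(m + 1))
            y[k] += coef * x[i]
    return y

# Toy regression on accumulated series (data and r are hypothetical).
rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(12, 2))
y = 2.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 12)
r = 0.3
Xr = np.column_stack([r_order_accumulation(X[:, j], r) for j in range(2)])
yr = r_order_accumulation(y, r)
A = np.column_stack([np.ones(len(yr)), Xr])
beta, *_ = np.linalg.lstsq(A, yr, rcond=None)     # least-squares parameter estimates
print("estimated coefficients:", np.round(beta, 3))
```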
    Owen Value, Coalition Equal Division Value and Balanced Cycle Contribution between Unions
    SHI Jilei, SHAN Erfang
    2025, 34(2):  118-124.  DOI: 10.12005/orms.2025.0051
    For cooperative games with transferable utilities, or TU-games for short, many allocation rules or values have been defined to allocate the worth of the grand coalition among all players. For instance, the Shapley value, the egalitarian value, the solidarity value and the Banzhaf value are well-known single-valued solutions. MYERSON (1980) used the balanced contributions axiom to characterize the Shapley value; the axiom means that for each pair of players, each loses (or gains) the same amount if the other leaves the game. Moreover, the Shapley value is the unique efficient value satisfying balanced contributions, but no literature characterizes the solidarity and egalitarian values by the balanced contributions axiom because the axiom is too strong.
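    For readers unfamiliar with these notions, the following small script computes the Shapley value by averaging marginal contributions over all player orderings and checks the balanced contributions property on a hypothetical three-player game (the worths are invented purely for illustration).

```python
from itertools import permutations

def shapley_value(players, v):
    """Shapley value via average marginal contributions over all orderings.
    v maps frozenset coalitions to worths, with v[frozenset()] == 0."""
    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for i in order:
            phi[i] += v[coalition | {i}] - v[coalition]
            coalition = coalition | {i}
    return {i: phi[i] / len(orders) for i in players}

# Hypothetical 3-player game, used only to illustrate balanced contributions.
N = (1, 2, 3)
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 90, frozenset({1, 3}): 80, frozenset({2, 3}): 70,
     frozenset({1, 2, 3}): 120}
phi = shapley_value(N, v)

# Balanced contributions: phi_i(N) - phi_i(N\{j}) == phi_j(N) - phi_j(N\{i}).
def subgame(v, S):
    return {T: w for T, w in v.items() if T <= frozenset(S)}

i, j = 1, 2
lhs = phi[i] - shapley_value((1, 3), subgame(v, (1, 3)))[i]
rhs = phi[j] - shapley_value((2, 3), subgame(v, (2, 3)))[j]
print(phi, round(lhs, 6) == round(rhs, 6))
```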
    KAMIJO and KONGO (2010) proposed the balanced cycle contributions property, motivated by the idea of equilibrium in economics; it states that for any order of all the players, the sum of each player's claims on his predecessor equals the sum of each player's claims on his successor. They gave a characterization of the Shapley value by invoking efficiency, balanced cycle contributions and the null player out property. Moreover, KAMIJO and KONGO (2010) found that not only the Shapley value but also other values for TU-games, such as the solidarity value, the egalitarian value and the Banzhaf value, satisfy the balanced cycle contributions axiom. Hence, this axiom is a less restrictive requirement than the balanced contributions property. In order to characterize the above values, KAMIJO and KONGO (2012) introduced invariance axioms, which state that removing a particular type of player from the game does not affect the payoffs of the other players; the type of player removed differs across values. They then gave axiomatic characterizations of the above values for TU-games. Concretely, the egalitarian value is the unique value satisfying efficiency, invariance from proportional player deletion and balanced cycle contributions. The solidarity value is the unique value satisfying efficiency, invariance from quasi-proportional player deletion and balanced cycle contributions. The Banzhaf value is the unique value on TU-games satisfying 2-efficiency, efficiency for 1-person games, balanced cycle contributions and invariance from null player deletion.
    In this paper, we extend balanced cycle contributions to TU-games with coalition structures and introduce the balanced cycle contributions between unions property. Furthermore, we characterize the Owen value by invoking efficiency, balanced cycle contributions between the unions, the null a priori union out property and balanced contributions within the unions. Moreover, we find that the difference between the Owen value and the equal division value for TU-games with coalition structures is that deleting a specific union from the game does not affect the other unions' payoffs, and the union deleted differs between the two values. Finally, we characterize the equal division value for TU-games with coalition structures by using efficiency, balanced cycle contributions between the unions, invariance of the a priori union's payoff and balanced contributions within the unions.
    Research on Driving Modes of Chinese Manufacturing Enterprise Servitization Based on fsQCA
    WANG Sixiang, HU Wenxiu, LI Lei
    2025, 34(2):  125-132.  DOI: 10.12005/orms.2025.0052
    Servitization accelerates the transformation of manufacturing enterprises from pure product manufacturers into solution providers, significantly improving their comprehensive strength and enhancing their competitive advantages. According to data from the Ministry of Industry and Information Technology, Chinese enterprises are paying increasing attention to the role of services in adding value to products. However, in view of diverse provincial environments, complex industry competition and disparate enterprise resources, a single driving mode may push enterprises in complex internal and external environments into a “servitization dilemma”. Therefore, exploring differentiated driving modes of servitization has become an urgent research problem.
    Based on the PEST analysis model and the strategic triangle framework, this paper uses induction and deduction to define seven conditions: institutional environment, market scale, human capital, informatization level, customer concentration, team stability and production efficiency. These seven conditions cover three levels: the regional business environment, the industry competition structure and the enterprise resource base. Among them, based on the institution-based view and the PEST analysis model, institutional environment, market scale, human capital and informatization level correspond to the four types of macro environmental factors: politics, economy, society and technology. Customer concentration, which reflects the bargaining power of buyers, is a meso-level industry factor emphasized by the industry-based view. Team stability and production efficiency respectively reflect an enterprise's human resources and manufacturing base, and both are micro-level enterprise factors emphasized by the resource-based view. This paper attempts to explore the combined effect of these three levels of antecedents on the servitization of manufacturing enterprises. The research contributions mainly include two aspects: on the one hand, the paper clarifies the diversified driving modes of manufacturing enterprises' servitization; on the other hand, it alleviates the contradiction between the servitization of manufacturing enterprises and its antecedents.
    This paper uses the fsQCA 3.0 software to explore the antecedent configurations of enterprise servitization and analyze their sufficiency, which enriches the understanding of the driving factors of enterprise servitization. The research data mainly come from the following channels: 1.Based on the PEST analysis model, this paper explores the regional macro-level conditions that trigger the servitization of Chinese manufacturing enterprises from the four dimensions of politics, economy, society and technology. The data are mainly from the Marketization Index of China's Provinces: NERI Report 2021 and the Statistical Bulletin of National Economic and Social Development in 2021 issued by the provinces (autonomous regions and municipalities directly under the Central Government). 2.Based on the strategic triangle framework, this paper explores the industry meso-level and enterprise micro-level conditions that drive the servitization of Chinese manufacturing enterprises from the two dimensions of industry competition structure and enterprise resource base. These data are all from the CSMAR and WIND databases.
    This paper starts from the three antecedent conditions of servitization: the regional business environment, the industry competition structure and the enterprise resource base. It uses the fsQCA method to identify the realization modes of servitization and analyzes the differences among the modes from a configurational perspective. The main research conclusions are as follows: 1.The servitization of manufacturing enterprises is jointly affected by multi-level factors at the macro regional, meso industry and micro enterprise levels, which enriches the theoretical basis for manufacturing enterprises to adopt differentiated servitization modes under diverse situations. 2.The multi-level driving factors of manufacturing enterprises' servitization exhibit a significant equifinal substitution relationship, which makes up for the limitation of existing contingency-based research that explains only the linear correlation between specific internal and external environmental factors and enterprise servitization. 3.The servitization of manufacturing enterprises is affected by the dual role of regional human capital and informatization level. The configurational analysis shows that the four modes driving high-level servitization all feature a high informatization level and low human capital, indicating that these two regional business environment factors play a universal role.
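    As a minimal illustration of the fsQCA machinery referred to above, the sketch below shows the standard direct calibration of a raw condition into fuzzy-set membership and the usual consistency measure for a sufficiency claim; the anchor values and data are hypothetical, and the paper's actual calibration thresholds and truth-table analysis are carried out in the fsQCA 3.0 software.

```python
import numpy as np

def calibrate(x, full_non, crossover, full_in):
    """Direct calibration into fuzzy-set membership via the logistic
    transformation commonly used in fsQCA: the three anchors map to
    memberships of roughly 0.05, 0.50 and 0.95."""
    x = np.asarray(x, dtype=float)
    log_odds = np.where(
        x >= crossover,
        3.0 * (x - crossover) / (full_in - crossover),
        3.0 * (x - crossover) / (crossover - full_non),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

def consistency(X, Y):
    """Consistency of 'X is sufficient for Y': sum(min(x, y)) / sum(x)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    return np.minimum(X, Y).sum() / X.sum()

# Toy usage with invented anchors and outcome memberships.
raw = np.array([2.1, 5.0, 8.7, 3.3])
condition = calibrate(raw, full_non=2.0, crossover=5.0, full_in=9.0)
outcome = np.array([0.1, 0.6, 0.9, 0.3])
print(np.round(condition, 3), round(consistency(condition, outcome), 3))
```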
    The contributions of this paper are mainly reflected in two aspects. Firstly, it clarifies the diversity of servitization driving modes in manufacturing enterprises. The paper combines the PEST analysis model with the strategic triangle framework to analyze the key factors that drive enterprise servitization, and uses fsQCA to summarize four types of configuration modes, providing effective guidance for enterprise servitization and upgrading. Secondly, it alleviates the contradiction between the servitization of manufacturing enterprises and its antecedents. Previous studies have shown that the impacts of specific internal and external environmental factors on enterprise servitization are diverse and even contradictory. This paper summarizes the driving modes of enterprise servitization from a configurational perspective, providing new evidence that multiple antecedents jointly enable the transformation.
    Decision of Manufacturer's Selling Mode and Platform's Store Brand Considering Risk Aversion
    LYU Lubing, ZHAO Haixia
    2025, 34(2):  133-139.  DOI: 10.12005/orms.2025.0053
    Since its emergence, e-commerce has achieved remarkable development. With the rapid growth of e-commerce, the platform, as an important carrier of online selling, has attracted great attention from enterprises. There are two main modes for enterprises to sell products through a platform: one is the reselling mode, in which the platform buys products from upstream enterprises at a wholesale price and then resells them to consumers at a retail price, as on JD.com and Amazon; the other is the agency mode, in which upstream enterprises set the retail price and sell products directly to consumers but pay a fixed percentage of revenue to the platform, as on Tmall. For enterprises looking to expand their business through a platform, the choice between the two selling modes is a top priority. Thus, it is necessary to analyze how different selling modes affect enterprise profits.
    Facing such a huge market, more and more platforms choose to introduce store brands and encroach on the market to improve their own profits and bargaining power. Behind the rise of e-commerce and store brands is the increase in consumer purchasing power, which makes consumers more inclined to buy diversified and personalized products and to show clear preference differences across products, increasing the uncertainty of product demand. At the same time, new brands intensify market competition. Hence, manufacturers often adopt a risk-averse attitude to cope with volatile markets, and this attitude toward risk can have a significant impact on the manufacturer's decisions. On the other hand, the platform's store brand strategy, as an additional profit channel, can also be used as a means to regulate the sales of national brands. Besides, multiple profit sources improve the platform's resilience in the face of demand fluctuations. Hence, this paper assumes that the platform is a risk-neutral enterprise.
    This paper considers a supply chain consisting of a manufacturer and a retail platform. The manufacturer is risk-averse and sells its national brand product on the platform through either the reselling mode or the agency mode. The retail platform is risk-neutral and can choose to introduce a store brand product. All decisions are divided into long-term and short-term decisions. In the long-term decision-making phase, the manufacturer first decides whether to adopt the reselling mode or the agency mode, and the platform then decides whether to introduce the store brand. In the short-term decision-making phase, the manufacturer first decides on the wholesale price or the retail price of the national brand; the platform then decides on the retail prices of the store brand and the national brand, or only the retail price of the store brand. In each scenario, the market demand for the products is derived from consumers' utility evaluations of the different products, demand uncertainty is modeled as market demand plus a random variable, the utility function of the risk-averse manufacturer is constructed with mean-variance theory, and the optimal solution in each scenario is obtained by backward induction. The main questions discussed are as follows: (1)When should the platform introduce the store brand? (2)Can the manufacturer prevent the platform from introducing the store brand by adopting a specific selling mode? (3)How do the manufacturer's risk aversion, consumers' store brand preference, and the commission rate affect the decisions of the manufacturer and the platform?
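    To illustrate the mean-variance treatment of risk aversion mentioned above, here is a minimal single-firm pricing sketch; the demand form, parameters and risk-aversion weight are hypothetical and far simpler than the paper's two-brand, two-mode model.

```python
import numpy as np

# Demand D = a - b*p + eps with Var(eps) = sigma2; profit pi = (p - c) * D,
# so E[pi] = (p - c) * (a - b*p) and Var(pi) = (p - c)**2 * sigma2.
a, b, c, sigma2, lam = 10.0, 1.0, 2.0, 4.0, 0.1   # lam = risk-aversion weight

def utility(p):
    mean = (p - c) * (a - b * p)
    var = (p - c) ** 2 * sigma2
    return mean - lam * var                        # mean-variance criterion

prices = np.linspace(c, a / b, 1001)
p_star = prices[np.argmax(utility(prices))]
print(f"risk-averse optimal price ~ {p_star:.3f}, "
      f"risk-neutral benchmark = {(a / b + c) / 2:.3f}")
```

    In this toy example the risk-averse price falls below the risk-neutral benchmark because the variance term penalizes large margins; the paper's equilibrium analysis works out the analogous trade-offs for each selling mode.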
    The results show that, for the platform, the reselling mode is conducive to the introduction of the store brand, while under the agency mode introducing the store brand is attractive only when the commission rate is low. Consumers' preference for the store brand and the manufacturer's risk aversion both discourage the platform from introducing the store brand. For the manufacturer, the higher its degree of risk aversion, the more likely it is to adopt the agency mode. When the manufacturer is close to risk-neutral and consumers' preferences for the two brands differ greatly, it is difficult for the manufacturer to prevent the platform from introducing the store brand; otherwise, as long as the commission rate is moderate, the manufacturer can prevent the platform from introducing the store brand by adopting the agency mode. Finally, this paper extends the analysis to the case where consumers prefer the store brand, and finds that the impact of consumers' store brand preference on the game equilibrium is opposite to that in the basic model.
    Effect of Risk Aversion on Sharing Strategy of Social Responsibility in Supply Chain
    NIE Jiajia, JIANG Chen, LAI Xuemei
    2025, 34(2):  140-145.  DOI: 10.12005/orms.2025.0054
    In recent years, with economic development, social issues such as food safety, product quality, and environmental pollution have frequently occurred, leading to increased attention to Corporate Social Responsibility (CSR) across all sectors of society. In order to encourage companies to take on CSR and promote social progress, many countries and governments have implemented various policy measures. For instance, in April 2015, Shenzhen introduced the Corporate Social Responsibility Requirements and Corporate Social Responsibility Evaluation Guidelines to standardize CSR norms and guide CSR work. Meanwhile, consumers are increasingly concerned about whether companies are fulfilling their CSR obligations, prompting more businesses to incorporate CSR into their market decision-making processes, aiming to enhance their image and improve profitability. A survey by The Economist revealed that approximately 85% of corporate executives and investors consider CSR an important factor in investment decisions. However, companies undertaking social responsibility in the supply chain face considerable cost pressures. As a result, collaboration between upstream and downstream partners in the supply chain has become a feasible strategy. In practice, many companies proactively shoulder CSR costs for upstream manufacturers or share them in their CSR efforts. For example, FOTILE invests in hiring consulting firms to implement on-site lean improvement projects for strategic suppliers, ensuring product quality and safety. Bayers collaborate with upstream suppliers, such as the Penglai Haorizi Fruit Professional Rural Cooperative, to provide technical guidance and jointly address food safety issues. However, as consumer demand becomes increasingly diversified, companies often struggle to accurately capture market demand, leading them to adopt risk-averse measures when faced with the potential risks of demand uncertainty. A company's risk-averse attitude can significantly impact its market strategies. Clearly, CSR-sharing strategies, as a form of supply chain cooperation, are also influenced by companies' risk-averse attitudes. However, there is limited literature comparing different CSR-sharing strategies in supply chains, and no research has yet considered the impact of corporate risk-aversion on CSR-sharing strategies between companies.
    To address these issues, this study considers a supply chain system consisting of a risk-neutral manufacturer and a risk-averse retailer. Two cooperation models, one sharing the manufacturer's CSR costs and one sharing its CSR efforts, are established, and the influence of the retailer's risk aversion on the two models is analyzed. The study finds that when the retailer is risk-neutral, CSR-sharing contracts can promote CSR adoption and supply chain optimization, and the model sharing the manufacturer's CSR efforts is superior to the model sharing CSR costs. However, when the retailer is risk-averse, CSR-sharing contracts do not necessarily lead to supply chain optimization. Under certain conditions, when risk aversion is sufficiently high, CSR-sharing cooperation can lower the expected utility of both the manufacturer and the retailer, and even reduce the overall CSR effort within the supply chain. When risk aversion is low, the effort-sharing strategy is still superior to the cost-sharing strategy.
    This paper only considers a supply chain system composed of a single manufacturer and a single retailer. Future research could explore situations involving competition among upstream manufacturers. Additionally, further studies could investigate CSR cooperation strategies under information asymmetry in the supply chain.
    Application Research
    Bank Stability, Bank Competition and Zombie Lending
    LIU Xiaomeng, LIU Xinrui, ZHOU Aimin
    2025, 34(2):  146-151.  DOI: 10.12005/orms.2025.0055
    “Zombie lending” refers to loans that sustain enterprises with low productivity and no profitability, which can survive only on subsidies and preferential interest rates from the government or financial institutions (especially commercial banks). Such enterprises have become an important obstacle to the high-quality development of China's economy. Then, in China, where bank credit is the most important means of macro-control and the main channel of real-economy financing, how does bank stability affect the formation of zombie lending? In the process of interest rate liberalization reform, what role does bank competition play? The relationships and influence mechanisms among these variables are the main topics this paper explores.
    To answer these questions, this paper first establishes a two-period commercial bank model, which takes the bank's stability level as a constraint, controls the commercial bank's scale of disposal of zombie lending, and incorporates bank competition into the constraint condition. Based on this model, three pairs of hypotheses are put forward. Then, drawing on the existing literature, zombie enterprises are identified in the industrial enterprise database by interest expenditure and debt scale. For bank stability, this paper considers three aspects, asset quality, profitability and capital adequacy, and selects the non-performing loan ratio, the provision coverage ratio and the return on total assets as proxy variables, regressing them on zombie lending. On this basis, bank competition is taken as a mediating variable in a mediating effect analysis to clarify the influence mechanism. In terms of data sources, the zombie enterprise data come from China's industrial enterprise database from 2000 to 2013. As the indicator of commercial bank competition, this paper constructs a Herfindahl-Hirschman Index (HHI) from the number of commercial bank branches in each province to represent the level of bank competition in that province.
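    For concreteness, the following sketch computes a province-level HHI from bank branch counts in the way described above (the column layout and numbers are hypothetical); a lower HHI corresponds to a higher level of bank competition.

```python
import pandas as pd

def provincial_hhi(branches: pd.DataFrame) -> pd.Series:
    """HHI per province from bank branch counts.
    Expects columns ['province', 'bank', 'n_branches'] (hypothetical layout);
    HHI = sum of squared branch shares within each province."""
    def hhi(group):
        shares = group["n_branches"] / group["n_branches"].sum()
        return (shares ** 2).sum()
    return branches.groupby("province").apply(hhi)

# Toy illustration with made-up numbers.
df = pd.DataFrame({
    "province": ["A", "A", "A", "B", "B"],
    "bank": ["b1", "b2", "b3", "b1", "b2"],
    "n_branches": [50, 30, 20, 70, 30],
})
print(provincial_hhi(df))   # A: 0.38, B: 0.58
```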
    The main findings are as follows. The stability of commercial banks can effectively reduce the amount of zombie lending. Commercial banks aim to maximize their own profits, so they have an underlying motivation to dispose of zombie lending that jeopardizes their long-term profitability; when commercial banks are more stable, they can better deal with zombie lending and improve their asset quality. Younger, non-state-owned, non-manufacturing and small and medium-sized enterprises are more affected by changes in bank stability: these enterprises are relatively small and do not have long-term, comprehensive business relationships with banks like large state-owned enterprises, so they tend to be disposed of first. The improvement of bank stability reduces the amount of zombie lending by alleviating inter-bank competition. The stable operation of banks is often accompanied by relatively conservative capital allocation and higher credit thresholds. Moreover, when commercial banks are more stable, their performance indicators tend to be better, their willingness to take risks decreases, and so does the level of market competition among commercial banks. When market competition declines, zombie enterprises with poor debt repayment ability and high risk are no longer favored by commercial banks, which reduces the financial system's support for zombie enterprises and, in turn, the amount of zombie lending. Therefore, the stability of commercial banks may help resolve the zombie lending problem.
    Data Confirmation, Government Governance and Personal Data Protection
    SUN Yong, WANG Yalin, ZHANG Yafeng
    2025, 34(2):  152-158.  DOI: 10.12005/orms.2025.0056
    The digital economy has profoundly transformed the ways in which humans work and live, with data serving as a foundational, critical and decisive production factor for its development. The conflict between personal data protection and commercial utilization presents a significant challenge in the evolution of the digital economy. Therefore, exploring the roles of data ownership and government governance in personal data protection is of great importance. However, existing research has largely focused on issues such as privacy protection, data authorization, and government governance in personal data protection, with less attention paid to the dynamic impacts of data ownership and various governance approaches on the behaviors of multiple stakeholders. To address this gap, this paper concentrates on the information security issues during the commercialization phase of personal data. It constructs an evolutionary game model involving both enterprises and users to investigate the effects of judicial protection and administrative regulation on the behaviors of enterprises and users under conditions of data ownership, revealing the mechanisms of interest interaction between individuals and enterprises.
    The results indicate that, under certain conditions, three stable equilibrium strategy combinations can be reached: E1 (0,0), corresponding to {non-compliant use, no rights protection}; E3 (0,1), corresponding to {non-compliant use, rights protection}; and E4 (1,1), corresponding to {compliant use, rights protection}. Among these, the ideal stable equilibrium is for enterprises to choose compliant use of personal data while individuals opt for rights protection. An increase in the degree of data ownership has a significantly stronger impact on individuals' willingness to pursue rights protection than on enterprises' willingness to use data compliantly. When enterprises use personal data non-compliantly, the prospect of judicial compensation to users significantly constrains such behavior; conversely, excessively high litigation costs diminish users' willingness to pursue rights protection. As the strength of judicial protection increases, both enterprises' willingness to use personal data compliantly and individuals' willingness to protect their rights strengthen. Mechanisms such as data ownership confirmation and strict enforcement can effectively promote the coordinated development of personal data protection and commercial utilization. The higher the intensity of administrative penalties and regulatory oversight imposed by the government on non-compliant enterprise behavior, the stronger the constraints on enterprises' behavioral strategies.
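    The dynamics behind such stable equilibria can be illustrated with a small two-population replicator-dynamics simulation; the payoff numbers below are invented solely to produce a trajectory that converges to the {compliant use, rights protection} corner and are not the paper's parameters.

```python
# x = share of enterprises choosing "compliant use",
# y = share of users choosing "rights protection".
def payoffs(x, y):
    # enterprise payoff of compliant vs non-compliant use, given user mix y
    u_c  = 4.0 * y + 5.0 * (1 - y)       # compliant
    u_nc = 1.0 * y + 6.0 * (1 - y)       # non-compliant (penalized when users act)
    # user payoff of protecting vs not protecting, given enterprise mix x
    v_p  = 2.0 * x + 3.0 * (1 - x)       # rights protection
    v_np = 2.0 * x + 0.0 * (1 - x)       # no rights protection
    return u_c, u_nc, v_p, v_np

x, y, dt = 0.2, 0.2, 0.01
for _ in range(20000):
    u_c, u_nc, v_p, v_np = payoffs(x, y)
    x += dt * x * (1 - x) * (u_c - u_nc)   # replicator equation, enterprises
    y += dt * y * (1 - y) * (v_p - v_np)   # replicator equation, users
print(round(x, 3), round(y, 3))            # converges toward the (1, 1) corner
```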
    Administrative regulation can effectively oversee and manage the commercialization of personal data at various stages, including prevention, supervision and post-event handling, demonstrating high effectiveness and convenience. The government can enhance individuals' awareness of rights protection through regulatory case studies and publicity, thereby increasing their willingness to protect their rights. Therefore, advancing personal data protection requires government departments to fulfill their governance responsibilities, including improving judicial remedy mechanisms, establishing data ownership systems, enforcing strict judicial practice, enhancing administrative oversight, creating unified and independent regulatory bodies, refining review processes, and imposing strict penalties. Additionally, it is essential to engage individuals actively, promote data security knowledge, and optimize the judicial remedy environment to reduce the costs of rights protection, thereby safeguarding personal data through a multifaceted approach.
    Construction of an Early Warning Model for Coal Power Overcapacity Risk Considering Expected Loss and Interpretability
    MAO Jinqi, WANG Delu, SHI Xunpeng
    2025, 34(2):  159-165.  DOI: 10.12005/orms.2025.0057
    A reliable early warning mechanism for coal power overcapacity is a necessary premise and the key to ensuring power supply security in the short term and the carbon-neutrality goal in the long term. The “double carbon” strategy has become one of China's important national strategies. Under this established strategy, coal power, the largest “contributor” to carbon emissions, faces an irreversible trend of overcapacity, and its phase-out is imperative. However, China's stage of economic development and its coal-based energy resource endowment, coupled with the volatility of renewable energy output and the immaturity of energy storage technology, require coal power to serve as the “ballast” of safe and stable power supply for a long time to come. Therefore, the exit of coal power overcapacity must be planned in advance, and its foundation lies in accurate early warning of coal power overcapacity.
    However, the existing research on early warning of overcapacity suffers from some limitations. First, existing work on constructing early warning models does not fully consider the match between data characteristics and model characteristics, which yields a non-inferior model rather than an optimal one. Second, scholars focus on accuracy when evaluating models, yet early warning of overcapacity risk is closely related to capacity regulation; it is essentially a cost-sensitive decision-making problem, and the potential loss caused by prediction errors deserves more attention. Third, existing research often pursues prediction performance with complex models, ignoring the opacity caused by model complexity, whereas management decision scenarios require not only correlation but also causality.
    Therefore, first, in view of the high dimensionality of coal power overcapacity warning indicators and the sparsity of samples, we construct an SVM model (linear kernel), which is well suited to small-sample, high-dimensional data. Second, because the economic consequences of capacity shortage and overcapacity differ, we build a total cost index to reduce the expected loss of the early warning model. Third, given the decision-making demand for “correlation + causality”, an interpretability method is constructed to reveal the model's reasoning mechanism and the driving mechanism of the factors behind the risk.
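    A minimal sketch of the cost-sensitive evaluation idea, assuming a hypothetical misclassification cost matrix and toy data rather than the paper's indicator set, is shown below; it fits a linear-kernel SVM and reports both accuracy and the total cost obtained by weighting the confusion matrix.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Hypothetical cost matrix: COST[i, j] = loss of predicting class j when the
# true class is i (under-warning a high-risk state is assumed costlier).
COST = np.array([[0, 1, 2],
                 [3, 0, 1],
                 [6, 3, 0]])

def total_cost(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
    return float((cm * COST).sum())

# X: indicator matrix, y: risk level in {0: low, 1: medium, 2: high} (toy data).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))                     # small, high-dimensional sample
y = rng.integers(0, 3, size=60)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
print("total cost:", total_cost(y_te, clf.predict(X_te)))
```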
    The results show: 1)Under the constraint of the highest accuracy, the accuracy, macro recall and macro precision of the SVM (linear kernel) are better than those of the other models, but its total cost is higher, approximately 1.5 times that of the BP neural network. 2)Under the constraint of the minimum total cost, the total cost, accuracy, macro recall and macro precision of the SVM (linear kernel) are better than those of the other models. Since a small amount of accuracy is sacrificed in exchange for a significant decrease in total cost, the SVM (linear kernel) model under the minimum total cost constraint is recommended. Furthermore, as revealed by post-hoc interpretability techniques, the evolutionary pattern of the key characterization indicators of coal power overcapacity risk (low risk→medium risk→high risk) is sensitive indicators→periodic indicators→comprehensive indicators, and the corresponding pattern of important causes is market factors→policy and transmission factors→comprehensive factors.
    To summarize, the paper contributes to the literature in two ways. First, our models improve the modeling logic of overcapacity risk early warning under high-dimensional data, expand model evaluation from achieving the highest accuracy to minimizing total cost, and overcome the opacity of machine learning models, providing comprehensive, quantitative analytical tools for the governance of overcapacity risk. Second, we reveal the primary characterization indicators and important causes of overcapacity under different risk levels, and the evolutionary law of the risk state, providing a solid decision-making foundation for preventing and controlling coal power overcapacity.
    Dual Channel Capacity Replenishment Considering Capacity Sharing
    XIAO Wei, LI Kai, FU Hong
    2025, 34(2):  166-173.  DOI: 10.12005/orms.2025.0058
    Capacity constraints and deviations in demand forecasts pose challenges for many manufacturing enterprises, preventing them from achieving a perfect match between supply and demand. The rapid development of the new generation of information technology has given rise to the “sharing economy” business model. Sharing manufacturing, as one of the most important application fields of the sharing economy, leverages industrial Internet platforms to efficiently integrate geographically dispersed idle manufacturing resources and capacity through the sharing of the right to use these resources. This innovative model can improve production efficiency in the manufacturing industry and yield notable economic and social benefits. It is worth noting that risks such as random yield and production variability present significant challenges that seriously hinder the development and scalability of sharing manufacturing. These risks arise from factors such as fluctuating production quality, equipment breakdowns and unforeseen delays, all of which contribute to the unpredictability of output. On this basis, this paper considers a widely adopted supply contract, quantity flexibility, which allows a certain level of deviation between the actual quantity supplied by the capacity supplier and the order quantity specified by the demander. To address the challenges arising from output/quality instability and the unpredictability of market demand, the capacity demander leases production capacity at a lower price through the platform on the one hand, and signs capacity reservation contracts with backup capacity suppliers with reliable supply on the other. This dual strategy aims to mitigate the risk of insufficient production capacity and safeguard smooth operations. If the capacity offered on the platform is insufficient to fulfill the demander's production requirement, the reserved capacity is utilized. Taking into account the inherent yield randomness of the platform capacity supplier, this paper investigates the impacts of key parameters on the demander's optimal dual-channel capacity replenishment strategy, the platform supplier's production decision, and the profit of each supply chain player when supply flexibility is allowed. By considering various market conditions and their interactions, we provide a more comprehensive understanding of the operational dynamics.
    This paper constructs a Stackelberg game model in which the capacity demander acts as the leader and the backup supplier and the platform supplier are the followers. The sequence of events is as follows: (i)in the first stage, the demander decides how much capacity to reserve from the backup supplier and how much to lease from the platform supplier; (ii)in the second stage, the platform supplier determines its production input. All players are independent decision-makers, each maximizing its own expected profit. We establish a two-stage analytical model and determine the subgame perfect equilibrium following a standard backward induction procedure. The paper generates several important findings. First, we show that the demander can benefit from granting the platform supplier some supply flexibility. Allowing a certain degree of deviation between the capacity demand and the actual supply reduces the platform supplier's inventory risk and enhances its motivation to share idle capacity; this finding validates that a little quantity flexibility goes a long way. Second, the backup supplier, whose capacity supply is perfectly reliable, should set the capacity reservation price at an intermediate level so as to balance the unit reservation price against the total reservation quantity. Setting the reservation price too low may attract more capacity reservations but does not necessarily yield higher returns, while setting it too high significantly weakens the demander's willingness to reserve capacity. Finally, although a higher service fee may increase the platform's revenue from each successful transaction, excessively high service fees can drive the platform supplier away from the capacity sharing market. Therefore, when determining the service fee, it is essential to balance maximizing the platform's profit against guaranteeing the profitability of the other stakeholders involved in capacity sharing. From a managerial perspective, moderate supply flexibility and balanced pricing strategies are key to effective capacity sharing: flexible supply reduces inventory risks and motivates suppliers to share idle capacity, while intermediate reservation prices balance demand against supplier returns, ensuring long-term profitability and sustainability for all parties.
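    The backward-induction logic of the two-stage game can be sketched numerically as follows; the cost, price and penalty parameters, the uniform yield distribution and the grid-search granularity are all assumptions for illustration, not the paper's model or its analytical solution.

```python
import numpy as np

# Stage 2: the platform supplier chooses production input q given random yield;
# Stage 1: the demander chooses the reservation quantity Q_r and the platform
# lease quantity Q_p by grid search, anticipating the stage-2 response.
rng = np.random.default_rng(1)
yields = rng.uniform(0.6, 1.0, size=5000)          # random yield scenarios
D, r_price, l_price, c_p, penalty = 100.0, 3.0, 2.0, 1.0, 6.0

def platform_best_response(Q_p):
    # the supplier earns l_price per delivered unit up to Q_p, pays c_p per input
    grid = np.linspace(0.0, 2.0 * Q_p, 81)
    profits = [np.mean(l_price * np.minimum(q * yields, Q_p)) - c_p * q for q in grid]
    return grid[int(np.argmax(profits))]

def demander_cost(Q_r, Q_p):
    q = platform_best_response(Q_p)
    delivered = np.minimum(q * yields, Q_p)
    shortage = np.maximum(D - delivered - Q_r, 0.0)
    return r_price * Q_r + l_price * np.mean(delivered) + penalty * np.mean(shortage)

best = min(((Q_r, Q_p) for Q_r in range(0, 101, 10) for Q_p in range(0, 101, 10)),
           key=lambda qq: demander_cost(*qq))
print("reserved, leased:", best)
```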
    Enterprise Financialization and Debt Default: From the Perspective of Short-term Loans for Long-term Investment
    WU Yongxia, HU Haiqing, WANG Xianzhu
    2025, 34(2):  174-181.  DOI: 10.12005/orms.2025.0059
    As an important source of systemic financial risk, debt default is one of the most destructive events in the operation of an enterprise. It not only has a fatal impact on the development of enterprises themselves but also poses a serious threat to financial security and social stability. Under the dual pressure of the global economic downturn and a tightening business environment, the risk of default on China's real-economy debt has significantly increased, becoming a focus of attention across society. Defaults on China's credit bonds amounted to 124.3 billion RMB in 2018 and rose to 147.6 billion RMB in 2019. In the past two years, many large enterprises have experienced frequent and serious debt defaults. Not only have defaults in the bond market increased, but the balance of commercial banks' non-performing loans has also continued to rise, from 1,774.3 billion RMB in the first quarter of 2018 to 2,982.9 billion RMB in the fourth quarter of 2022. With the gradual accumulation and exposure of debt default problems, the profit margins of real-economy enterprises continue to shrink, and it has become common for enterprises to invest their funds in financial assets to obtain short-term returns. However, excessive financialization can have serious negative effects, disrupting enterprises' capital flows and even exacerbating the risk of debt default.
    "Short-term loans for long-term investment" refers to the phenomenon of enterprises using short-term debt funds to support long-term investment activities, which is an important cause of corporate debt default. The "reservoir" effect and the "crowding out" effect of corporate financialization affect enterprises' short-term-loans-for-long-term-investment behavior and thereby the risk of corporate debt default. Studying financial asset investment behavior under this strategy is therefore of significant value for understanding corporate debt default risk. The conclusions of this article not only provide new evidence for the hypothesis of moderate corporate financialization, but also offer guidance for enterprises to allocate financial assets reasonably and choose investment and financing strategies, as well as empirical support for regulators seeking to prevent "default waves" or even "bankruptcy waves" in the real economy and to maintain financial security and stability. Based on the above discussion, this article uses A-share listed companies from 2007 to 2021 as samples to conduct in-depth research on how the overall level of corporate financialization and short-term and long-term financial investments affect debt default risk, and to verify the transmission mechanism of short-term loans for long-term investment in these relationships.
    The results show that if the “reservoir” effect dominates, corporate financialization inhibits the risk of debt default, whereas if the “crowding-out” effect dominates, corporate financialization aggravates it; that is, the relationship between the two is U-shaped. This U-shape is largely driven by long-term financial asset investment, as short-term financial asset investment mainly plays a “reservoir” role and only suppresses the risk of debt default. The analysis of the impact mechanism finds that short-term loans for long-term investment play a mediating role in the relationship between corporate financialization and debt default, exacerbating the U-shaped impact. This is mainly because short-term financial asset investment suppresses the risk of debt default by weakening short-term loans for long-term investment, while excessive long-term financial asset investment aggravates the risk of debt default by increasing short-term loans for long-term investment. Furthermore, heterogeneity analysis shows that among larger enterprises with weaker cash flow and higher financing constraints, corporate financialization, short-term financial investment and long-term financial investment have a more significant impact on debt default risk. Based on the above conclusions, we put forward targeted policy recommendations at the levels of real-economy enterprises, government regulators and financial institutions, respectively, so as to provide empirical guidance for all parties to correctly recognize the dual impact of financial asset investment and effectively prevent systemic financial risks.
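    As an illustration of the kind of U-shape and mediation tests described above, the sketch below runs the usual three-step regressions on simulated data with statsmodels; the variable names (fin, sltli, risk) and the data-generating process are hypothetical and do not reproduce the paper's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 3000
# Simulated firm-level data (hypothetical, for illustration only).
fin = rng.uniform(0, 1, n)                         # financialization level
# Mediator: short-term loans for long-term investment, U-shaped in fin.
sltli = 0.6 * (fin - 0.4) ** 2 + rng.normal(0, 0.05, n)
risk = 0.8 * sltli - 0.1 * fin + rng.normal(0, 0.05, n)  # default risk proxy
df = pd.DataFrame({"fin": fin, "sltli": sltli, "risk": risk})

# Step 1: U-shape test -- quadratic term of financialization.
m1 = smf.ols("risk ~ fin + I(fin**2)", data=df).fit()
# Step 2: financialization -> mediator.
m2 = smf.ols("sltli ~ fin + I(fin**2)", data=df).fit()
# Step 3: mediator and financialization together.
m3 = smf.ols("risk ~ fin + I(fin**2) + sltli", data=df).fit()

print(m1.params, m2.params, m3.params, sep="\n")
# A positive quadratic term in m1, a significant mediator in m3, and an
# attenuated quadratic term once the mediator is added are the pattern
# consistent with the mediation effect described in the abstract.
```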
    Linkage Effect of Technology Shock and Consumption Shock on Multi-factor Emission Reduction Behavior — Empirical Research Based on a DSGE Model
    QIU Lixin, ZHAO Yanan
    2025, 34(2):  182-187.  DOI: 10.12005/orms.2025.0060
    Carbon emission is an important factor affecting the ecological protection and high-quality development of the Yellow River basin. In order to effectively protect the ecological environment of the Yellow River basin, achieve the goals of “carbon peaking and carbon neutrality”, and actively promote the high-quality development of the basin, it is urgent to further analyze the carbon emission behaviors of the provinces and regions in the Yellow River basin. This paper takes the emission reduction behaviors of enterprises and residents in the Yellow River basin as the main research object and analyzes the main problems affecting their carbon emission behaviors under technology shocks and consumption shocks. This is of guiding significance for emission reduction in the Yellow River basin and will further promote high-quality development and the strategic layout of carbon peaking and carbon neutrality.
    On the basis of the RBC framework, this paper constructs a two-sector closed-economy model comprising enterprises and residents, which follows the principles of utility maximization and profit maximization. Some parameters in the model are obtained by Bayesian estimation based on relevant data for the Yellow River basin, and multi-variable diagnostic results show that the Bayesian estimates are robust. Other parameters are calibrated based on existing research and relevant statistical data. The DSGE model is then used to simulate the effect of technological innovation by applying a unit positive technology shock, analyzing mainly the responses of enterprises' emission reduction behaviors, carbon emissions and macroeconomic conditions to productivity. All results are reported as percentage deviations from the steady state over 20 quarters. Based on the numerical simulation results of the DSGE model, the responses of different indicators in the 9 provinces of the Yellow River basin to technology shocks and consumption shocks are studied, and the shock intensity, steady state and response trend over 20 quarters are analyzed. The factors that affect the carbon emission reduction behaviors of enterprises and residents include internal factors and external environmental factors.
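    The reporting convention above (percentage deviations from the steady state over 20 quarters) can be illustrated with a reduced-form impulse response. The sketch below traces a unit positive technology shock with assumed AR(1) persistence and pass-through coefficients; it is an illustration only, not the paper's calibrated DSGE model.

```python
import numpy as np

# Reduced-form illustration of an impulse response to a one-unit positive
# technology shock -- hypothetical coefficients, not the estimated model.
rho_a = 0.9          # assumed persistence of the technology shock
alpha = 0.35         # assumed output elasticity
phi_e = 0.5          # assumed pass-through to abatement effort

T = 20               # quarters reported in the abstract
a = np.zeros(T)
a[0] = 1.0                                   # unit shock in quarter 1
for t in range(1, T):
    a[t] = rho_a * a[t - 1]                  # AR(1) decay back to steady state

output = alpha * a                           # % deviation of output
abatement = phi_e * a                        # % deviation of abatement effort
emissions = output - abatement               # net effect on carbon emissions

for t in range(T):
    print(f"Q{t+1:2d}  tech={a[t]:6.3f}  output={output[t]:6.3f}  "
          f"emissions={emissions[t]:6.3f}")
```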
    The research results show the following. First, the technology shock has a positive impact on enterprises' emission reduction behaviors, carbon emissions and macroeconomic indicators. The expansionary effect of the economy under the consumption shock is obvious, but provinces differ in their sensitivity to it. Second, the fluctuation and cycle characteristics of the provinces in the Yellow River basin differ when facing shocks. On the one hand, provinces and regions return to the steady state at different rates. Under a unit positive shock, Shandong, Shanxi and Henan provinces respond and basically stabilize by the 8th quarter. Sichuan and Shanxi provinces do not tend directly to the steady state but continue to fluctuate positively and negatively, with a fluctuation cycle of about 12 quarters. The remaining provinces fluctuate and gradually stabilize: the fluctuation cycle of Gansu and Ningxia is about 8 quarters, and that of Inner Mongolia Autonomous Region and Qinghai Province is about 4 quarters. On the other hand, the amplitude of the responses also differs. Taking carbon emissions as an example, under technology and consumption shocks, Inner Mongolia Autonomous Region, Gansu, Qinghai and Ningxia experience the most drastic fluctuations; under the technology shock their fluctuation range is far greater than that of the other provinces, while under the consumption shock the fluctuation range is relatively small. Third, the technology shock is more persistent for Inner Mongolia Autonomous Region and Qinghai Province, where the impact of increased technology is greater than in the other provinces, whereas the consumption shock is more persistent for Shanxi Province, and improving technological influence in Shanxi Province works better than in Inner Mongolia Autonomous Region and Qinghai Province. Fourth, in the provinces of the Yellow River basin, technology shocks are less persistent than consumption shocks.
    Resources Matching Decision-making Method for Large-scale Engineering in Uncertain Scenarios Considering Psychological Preferences
    DU Qiang, HUANG Ning, PATRICK ZOU X M, GUO Xiqian
    2025, 34(2):  188-194.  DOI: 10.12005/orms.2025.0061
    With the increasing demand for infrastructure construction, large-scale engineering construction has developed rapidly, but it consumes large amounts of resources, including funds, manpower, materials and machinery. Because large-scale engineering construction tasks are complex and require a specialized division of labor, dozens of construction enterprises often operate together. If resource management is not coordinated, project delays and losses can easily occur. In practice, decision makers optimize resource allocation by considering the matching relationships among resources, which can generate incremental benefits. However, large-scale engineering resource allocation faces a complex decision-making environment, and decision makers have to consider the adaptability of schemes in various situations; even the same scheme may produce divergent or even opposite results in different situations. Moreover, large-scale engineering involves heavy resource investment, and many factors influence the implementation process. Decision makers make judgments based on accumulated experience and information, but their actual behavior is often irrational: they exhibit risk aversion when returns exceed expectations and risk seeking when losses exceed expectations. Therefore, taking decision makers' “irrational” behavioral preferences into account, choosing an effective resource matching scheme under uncertain situations is an important problem to be solved in project management.
    This research introduces the concept of matching into large-scale engineering resource management. Decision makers need to assess the superiority of matching schemes from multiple perspectives, which means the evaluation should be comprehensive. Relevant literature and expert experience are used to identify and screen multi-attribute indicators, and the psychological expectations of decision makers are considered when evaluating them. If the benefits of implementing a matching scheme are higher, or the costs lower, than the corresponding expected values, the decision maker perceives a gain; otherwise, the decision maker perceives a loss. According to prospect theory, this irrationality is captured as the decision maker's sensitivity to gains and losses, and the value function is introduced into the decision matrix. The mixed-attribute judgment information is then normalized into a unified form to eliminate scale differences. On this basis, the identification framework of multi-attribute indicators for resource matching schemes is constructed. Because decision makers have limited experience and cognition, the lower and upper limits of each indicator's weight are represented by intervals. Finally, the evidential reasoning algorithm and ER rules are used to synthesize the multi-attribute prospect decision matrix and the weight information, as evidence for determining the comprehensive prospect value of each resource matching scheme.
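    A minimal sketch of the prospect-value transformation described above is given below, using the standard Tversky-Kahneman value function. The coefficients, the attribute data and the point weights are hypothetical; in particular, the paper combines interval weights through evidential reasoning rather than the simple weighted sum used here.

```python
import numpy as np

# Prospect-theory value function (Tversky & Kahneman form); coefficients and
# attribute data are illustrative, not the paper's case data.
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def prospect_value(x, reference):
    gain = x - reference
    return np.where(gain >= 0,
                    np.abs(gain) ** ALPHA,
                    -LAMBDA * np.abs(gain) ** BETA)

# Rows: resource-matching schemes; columns: attributes, both treated as
# benefit-type here for simplicity (cost-type attributes would flip the sign).
attributes = np.array([[4.2, 7.0],
                       [3.8, 8.5],
                       [5.1, 6.2]])
references = np.array([4.5, 7.5])           # decision maker's expectations

v = prospect_value(attributes, references)  # prospect decision matrix
# Normalize each column to [0, 1] to remove scale differences.
v_norm = (v - v.min(axis=0)) / (v.max(axis=0) - v.min(axis=0))

# Point weights for illustration; the paper uses interval weights fused via
# the ER algorithm instead of this simple weighted sum.
weights = np.array([0.55, 0.45])
print("comprehensive prospect values:", v_norm @ weights)
```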
    To illustrate the effectiveness of the proposed method, a case is used to validate the evaluation model for large-scale engineering resource matching decision-making. After relevant data are obtained, a comprehensive prospect ranking of each scheme is calculated following the decision-making steps. The results show differences in scheme rankings under different situations, indicating the effectiveness of the decision-making method and making the research results more consistent with the actual behavior of decision makers. Unlike prospect theory, expected utility theory studies the choices of rational decision makers under uncertainty through strict axiomatic assumptions; it assumes that decision makers can make rational judgments when facing various risks and weight all possible outcomes to maximize expected utility. Therefore, this paper also compares the decision results based on prospect theory and expected utility theory, and verifies the feasibility and effectiveness of the large-scale engineering resource matching decision method. In conclusion, this method can integrate multi-attribute mixed decision information while taking the decision maker's psychology and situational changes into account. It provides a theoretical reference for resource decision-making in project management and engineering practice.
    Study on Measurement and Spatial-temporal Evolution in National Innovation System Efficiency
    LIU Xinxin, HAN Xianfeng
    2025, 34(2):  195-202.  DOI: 10.12005/orms.2025.0062
    Under the new development pattern, China faces huge challenges such as international instability, frequent public health incidents and a fading demographic dividend. The economic development model driven by traditional factors is not sustainable, and innovation-driven development has gradually become an important source of high-quality economic development. Effectively accelerating the construction of the innovation system is the fundamental way to improve national innovation capability and competitiveness. Since the CPC Central Committee formally put forward the resolution to comprehensively promote the construction of the national innovation system with Chinese characteristics, the construction of the national innovation system has achieved considerable results. According to the World Intellectual Property Indicators, China ranks first in the world in the total number of R&D personnel and in applications for patents, trademarks and industrial designs. However, the rapid growth of individual innovation indicators does not mean that national innovation system efficiency improves simultaneously. China still faces a series of problems, such as poor coordination among innovation entities, scattered allocation of innovation resources and low quality of innovation output. Therefore, how to evaluate national innovation system efficiency scientifically has become a practical problem to be solved urgently. Unfortunately, academic understanding of this issue is still very limited, and the current evaluation of national innovation system efficiency still lags behind the practical needs of building an innovative country. Under such circumstances, systematically constructing an index of national innovation system efficiency and accurately describing its evolution trend, spatial distribution, regional differences and convergence characteristics are of great practical significance for objectively understanding the operation of the national innovation system, systematically improving the overall efficiency of the innovation system, and accelerating the construction of an innovation-oriented country.
    Based on the “foundation-process-result” framework, this paper constructs a comprehensive index system of national innovation system efficiency from five aspects: innovation basis, innovation environment, innovation synergy, innovation openness and innovation quality. Using the global principal component analysis method and relevant data from 2009 to 2020, it measures the index of China's innovation system efficiency at the provincial level. Furthermore, the Dagum Gini coefficient and β convergence models are used to empirically analyze the spatial-temporal evolution trend, spatial differences and their sources, and the convergence characteristics and speed of innovation system efficiency in China and the three major regions. The results show the following. First, the overall level of national innovation system efficiency is not high, and its spatial distribution is seriously unbalanced, inadequate and uncoordinated, manifested in the gradient pattern of “eastern region > central region > western region”. There is still large room for improvement in national innovation system efficiency in the country and the three regions, all of which show a steadily rising trend. Second, the spatial differences in national innovation system efficiency show a fluctuating upward trend, and intra-group and inter-group differences in the three regions also increase to different degrees. Inter-group differences are always the main source of spatial differences, and the intra-group differences in the eastern region and the inter-group differences between the eastern and western regions are relatively large. Third, innovation system efficiency in China and the three regions exhibits typical absolute and conditional β convergence, and the convergence speed in the western region is higher than that in the eastern and central regions. In the conditional β convergence with control variables, the convergence rate of innovation system efficiency in the country and the three regions improves significantly, and the convergence rate of the central and western regions is clearly faster than that of the eastern region. Based on these results, this paper puts forward policy implications on strengthening the construction of the national innovation system, paying attention to the spatial imbalance of innovation system development, and harnessing the driving role of economic and social factors.
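    The absolute β convergence test mentioned above can be sketched as a regression of efficiency growth on the initial (log) efficiency level. The panel below is simulated and the variable names are hypothetical, so this only illustrates the form of the test, not the paper's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
# Simulated province-by-year efficiency index (illustrative only).
provinces, years = 30, 12
eff0 = rng.uniform(0.2, 0.8, provinces)
records = []
for i in range(provinces):
    e = eff0[i]
    for t in range(years - 1):
        growth = 0.05 - 0.04 * np.log(e) + rng.normal(0, 0.01)  # catch-up
        records.append({"prov": i, "ln_e0": np.log(e), "growth": growth})
        e = e * np.exp(growth)
df = pd.DataFrame(records)

# Absolute beta convergence: growth regressed on the initial (log) level.
model = smf.ols("growth ~ ln_e0", data=df).fit()
beta = model.params["ln_e0"]
speed = -np.log(1 + beta)       # implied convergence speed per period
print(f"beta = {beta:.3f}, convergence speed = {speed:.3f}")
# A significantly negative beta indicates absolute convergence; adding
# controls to the formula would give the conditional beta convergence test.
```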
    Improved TOPSIS Based on Combination Weight of Information Cloud and Cobweb Similarity
    HUANG Jianhua, ZHANG Xiang
    2025, 34(2):  203-209.  DOI: 10.12005/orms.2025.0063
    Scientifically formulating a multi-criteria scheme and selecting an appropriate decision-making method are the premise of tackling multi-attribute decision-making problems and crucial factors in determining the optimal solution. The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is one of the most widely used multi-criteria decision-making methods and has been applied in various fields, including natural disasters, construction engineering, and environmental safety. While TOPSIS has undoubtedly made decision-making more convenient, most problems to which it is applied involve a large amount of unclear information and subjective judgment. As a result, decision makers often operate in a fuzzy environment full of unknowns, which makes the decision-making process challenging. In addition, the results obtained via TOPSIS can be influenced by several factors, including the weighting of the indicators, the closeness algorithm, and the subjective preferences of decision makers, so the direct application of the classic TOPSIS method may be subject to certain limitations. Given these issues, this paper proposes a prospect interval TOPSIS method based on information cloud combination weighting and cobweb similarity improvement.
    The first step in the proposed approach involves differentiating between the various types of decision indicators and considering their respective fundamental properties. As a result, an improved interval-number ideal solution identification method is proposed, which aims to give decision makers a more accurate and comprehensive depiction of each indicator and thereby support more informed and reliable decisions. Furthermore, recognizing the bounded rationality present in actual decision-making behavior, a prospect interval decision matrix is constructed based on integrated prospect theory. The second step addresses the determination of the decision indicator weights, which is often a critical source of uncertainty in multi-attribute decision-making. To mitigate this uncertainty and balance the advantages of subjective and objective weighting, the indicator weights are determined by an information cloud combination weighting method based on the inverse cloud generator and the entropy weight method. By integrating subjective and objective weighting information, this step seeks to enhance the accuracy of multi-attribute decision-making in a fuzzy environment by reducing the impact of weight-related issues. Finally, by introducing a cobweb structure model, the approach calculates the similarity between the alternatives and the positive and negative ideal solutions. It replaces the traditional Euclidean distance with the “cobweb similarity” and incorporates the maximum-minimum squared sum criterion to propose a new closeness measurement method. The new algorithm mitigates the tendency of classic algorithms to produce candidate solutions that are close to both the positive and negative ideal solutions simultaneously, which leads to ambiguous and potentially misleading results.
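    To show where the paper's modifications plug in, the sketch below runs the standard entropy weight method and a classic TOPSIS closeness computation on a hypothetical decision matrix. The subjective weight vector stands in for the inverse-cloud-generator output, and the Euclidean distance stands in for the cobweb similarity that the paper actually uses.

```python
import numpy as np

# Illustrative decision matrix: rows are candidate alternatives, columns are
# benefit-type criteria (hypothetical data, not the paper's case study).
X = np.array([[0.72, 0.65, 0.80],
              [0.58, 0.90, 0.70],
              [0.85, 0.55, 0.60]])

# --- Entropy weight method (the objective half of the combination weight) ---
P = X / X.sum(axis=0)
entropy = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
objective_w = (1 - entropy) / (1 - entropy).sum()

# Subjective weights would come from the inverse cloud generator in the
# paper; a fixed vector stands in for them here.
subjective_w = np.array([0.4, 0.3, 0.3])
w = 0.5 * objective_w + 0.5 * subjective_w      # simple linear combination

# --- Classic TOPSIS closeness (the paper replaces the Euclidean distance
# with a cobweb-similarity measure at this step) ---
V = X * w
ideal_pos, ideal_neg = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal_pos, axis=1)
d_neg = np.linalg.norm(V - ideal_neg, axis=1)
closeness = d_neg / (d_pos + d_neg)
print("closeness:", np.round(closeness, 3),
      "ranking (best first):", closeness.argsort()[::-1] + 1)
```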
    To illustrate the effectiveness of this approach, it is applied to the decision-making problem of selecting a prefabricated component supplier for an office building project in Shaanxi Province. The results of the analysis demonstrate that the approach can effectively evaluate and compare the options from several perspectives, including shape similarity, area similarity, and proximity. Moreover, the method allows for appropriate adjustments based on the decision makers' risk preferences, which ensures that the final decision is both optimal and realistic. This distinguishes it from other methods and makes it a powerful tool for decision makers facing complex and uncertain scenarios. Compared with traditional algorithms, the approach presented in this paper demonstrates a significantly greater level of stability: even when uncertain factors and extreme values are present, its evaluation results remain stable and produce suitable outcomes.
    Although this approach has proven highly operable in multi-objective decision-making situations, a few points still call for further research, for example, the proportion of subjective to objective weighting methods used to determine the combination weights, as well as the influence of the sensitivity and avoidance coefficients on decision outcomes. The research team believes that with further study this approach can be strengthened and applied more widely. Finally, we would like to express our gratitude for the invaluable guidance provided by Professor Huang Jianhua and the financial support from the China Social Science Foundation (20BGL003).
    Research on Chinese Fake Product Review Detection Considering Time Burst Characteristics
    DENG Yujia, WANG Peng, FANG Xinghua, QIN Fang
    2025, 34(2):  210-217.  DOI: 10.12005/orms.2025.0064
    In the era of the digital economy, online reviews can influence consumers' consumption decisions, which in turn plays a critical role in the revenue of an organization. That is why some businesses resort to shady means to post fake reviews. However, genuine customer reviews of products or services contain a lot of useful information, which helps enterprises to further improve their offerings and obtain a better reputation and profitability. Consequently, extensive research has been conducted in recent years to identify fake reviews. Most existing studies focus on recognizing fake reviews based on the characteristics of the review text and reviewers' behavior, with a few also considering temporal burst features. In order to enhance the accuracy of fake review detection, this paper develops a comprehensive fake review recognition model that incorporates various features, including review text, reviewers' behavior, and time burst characteristics. This approach addresses the challenges posed by time bursts and class imbalance in online reviews.
    Online user reviews can be collected from e-commerce websites, such as JD.COM, using a web crawler. This paper crawls 9,141 reviews of Huawei MateX3, Nova11, and P60 mobile phones. Data cleaning is carried out by removing automatically generated system default positive reviews, duplicate comments, and invalid comments, ultimately leaving 8,075 valid reviews (referred to as Dataset 1). To label the reviews, a manual annotation process is adopted, considering factors such as the authenticity of the review object, the rationality of the reviewer's behavior, overall linguistic coherence, and consistency between image and text descriptions. Fake reviews are assigned a label value of 1, while genuine reviews are labeled 0. This paper introduces a sliding time window approach to group reviews. Additionally, the Local Outlier Factor (LOF) outlier detection algorithm is employed to determine the suspiciousness index of reviews based on a three-dimensional time series analysis, where the dimensions are the mean of the review scores, the number of reviews, and the Kullback-Leibler divergence. By combining the suspicion degree feature, the text features of the review, and the behavior features of the reviewer, a comprehensive feature set is proposed. Based on Dataset 1, seven experimental groups are established, using Convolutional Neural Network, Recurrent Neural Network, Bi-directional Long Short-Term Memory, Multilayer Perceptron, Random Forest, Support Vector Classification, and the Adaboost algorithm to construct the model; Random Forest, which yields the best classification performance, is selected. To address the issue of imbalanced training samples, an eighth experimental group is created by combining the SMOTE oversampling method with the best-performing classifier from the control groups. To analyze the influence of each feature category on the final recognition performance, this paper conducts ablation experiments by combining different categories, and sensitivity analysis is performed to explore the impact of varying time window sizes on the identification of fake reviews. Additionally, a dataset of 5,314 comments on Huawei Nova11 mobile phones is collected; after screening, 5,030 valid comments (referred to as Dataset 2) are obtained, and the proposed approach is applied to Dataset 2. To verify the robustness of the model, the statistical features of genuine and fake reviews are compared with those of Dataset 1.
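    A compact sketch of this pipeline, assuming scikit-learn and imbalanced-learn are available, is shown below. The window features, the text and behavior features, and the labels are simulated stand-ins, so it illustrates the LOF-suspicion and SMOTE-plus-Random-Forest steps rather than reproducing the paper's dataset or full feature set.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(3)

# Step 1: three time-series features of the window each review falls in
# (mean rating, review count, KL divergence) -- simulated stand-ins here.
window_features = rng.normal(size=(400, 3))
lof = LocalOutlierFactor(n_neighbors=20)
lof.fit(window_features)
suspicion = -lof.negative_outlier_factor_       # higher = more bursty window

# Step 2: combine suspicion with (simulated) text and behavior features.
text_behavior = rng.normal(size=(400, 5))
X = np.hstack([text_behavior, suspicion.reshape(-1, 1)])
y = rng.binomial(1, 0.15, size=400)             # imbalanced labels (fake = 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Step 3: SMOTE oversampling on the training set, then Random Forest (the
# "SRF" combination reported as performing best in the abstract).
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), digits=4))
```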
    The experimental results of the model comparison show that the SRF model, combining the SMOTE method with the random forest algorithm, outperforms others with a recall rate of 0.9693 and F1 score of 0.9705. The results of ablation experiments indicate that reviewer behavior features are the most effective category for identifying fake reviews, and adding suspicion degree feature can further improve recognition performance. Combining all of the three categories achieves the best classification performance. Furthermore, the sensitivity analysis experiment shows that as the time window increases, the performance of the fake review recognition model deteriorates. Thus, the model performs best when the time window is set to one day. The robustness analysis confirms the applicability and stability of the model across different datasets.
    The theoretical contribution of this paper is the construction of a comprehensive framework for detecting fake reviews, which expands previous research. The practical implication is that the approach proposed in this paper can be utilized by enterprises and platforms to eliminate fake reviews effectively, thereby enhancing consumers' trust, improving company reputation and maintaining order in the e-commerce market.
    This paper considers the multidimensional features and class imbalance commonly observed in online reviews. It provides valuable insights to assist e-commerce platforms in effectively filtering fake reviews and offering consumers more reliable review data. However, it is important to note that the SMOTE method may lead to data redundancy and impact classification accuracy. Therefore, future research should explore alternative methods to address data imbalance and improve model accuracy. Moreover, the proposed fake review recognition method in this paper focuses only on mobile phone reviews for verification. Subsequent research in other domains is necessary to validate its applicability. Additionally, enriching the multidimensional feature set of fake reviews should be undertaken to enhance identification accuracy.
    Investigating Strategy Choice of Genuine Freemium Service of Information Goods Manufacturer under Effects of Piracy
    ZHANG Wenjie, SHI Yuqi, ZHANG Rongxin, YUAN Hongping
    2025, 34(2):  218-224.  DOI: 10.12005/orms.2025.0065
    With the rapid development of mobile Internet technology, the piracy of information goods has affected the development of the industry in many countries around the world. To deal with piracy, information goods vendors typically adopt various strategies, for example, the multi-version strategy (releasing different versions of a product so that consumers can choose according to their quality preferences), the differential pricing strategy (setting different prices for different market conditions, levels of consumer copyright awareness, willingness to pay, etc.), and technical measures (such as non-standard disk hardware and activation codes, registration codes, serial numbers and other software protections). Besides these strategies, some information goods vendors also start from genuine users, improving the utility of genuine users and increasing user stickiness to cope with piracy in the market.
    This paper studies the strategy choice of Genuine Freemium Services (GFS) by a monopoly firm in the information goods market under the effects of piracy, as well as the effect of this strategy on the piracy rate and the optimal pricing of genuine goods. A consumer choice model based on Individual Rationality (IR) and Incentive Compatibility (IC) is constructed to address the consumer decision-making problem. Based on the resulting profit functions, the information goods vendor's decision problem regarding the GFS strategy is then considered. Specifically, we first build Model 1 to describe the scenario in which the vendor does not provide GFS, and then build Model 2 to describe the scenario in which the vendor provides GFS. By comparing the vendor's optimal profit in Model 1 and Model 2, we discuss and analyze the vendor's choice of the GFS strategy and the influence of this strategy on the pricing of genuine products and the market piracy rate. Finally, combined with numerical examples, we further validate the research conclusions.
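    The comparison between Model 1 and Model 2 can be illustrated numerically. The sketch below uses a stylized uniform-valuation consumer model with hypothetical parameters (c_pirate, delta_gfs, cost_gfs), not the paper's formulation, simply to show how the optimal price, profit and piracy rate can be compared with and without GFS.

```python
import numpy as np

# Stylized model, not the paper's specification: consumers with valuation
# theta ~ U[0, 1] choose among buying genuine, pirating, or leaving.
c_pirate = 0.35      # assumed piracy cost (detection risk, quality loss, etc.)
delta_gfs = 0.15     # assumed extra utility from genuine freemium services
cost_gfs = 0.03      # assumed vendor cost of providing the services per user

def outcome(price, gfs):
    theta = np.linspace(0, 1, 10001)
    u_buy = theta + (delta_gfs if gfs else 0.0) - price
    u_pirate = theta - c_pirate
    buy = (u_buy >= 0) & (u_buy >= u_pirate)
    pirate = (u_pirate > 0) & (u_pirate > u_buy)
    margin = price - (cost_gfs if gfs else 0.0)
    return margin * buy.mean(), pirate.mean()

prices = np.linspace(0.01, 1.2, 120)
for gfs in (False, True):
    best_p = max(prices, key=lambda p: outcome(p, gfs)[0])
    profit, piracy_rate = outcome(best_p, gfs)
    print(f"GFS={gfs}: price*={best_p:.2f}, profit={profit:.3f}, "
          f"piracy rate={piracy_rate:.3f}")
```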
    The study finds that whether the information goods vendor adopts the GFS strategy depends on the piracy cost. When the piracy cost is small, the optimal pricing of genuine goods is the same whether or not the manufacturer adopts the GFS strategy, and the piracy rates in the two scenarios are equal and negatively correlated with the piracy cost; in this case, the manufacturer's optimal choice is not to adopt the GFS strategy. When the piracy cost is moderate or large, if the manufacturer adopts the GFS strategy, the price of genuine goods will be higher than without it. Although the manufacturer's optimal GFS level differs across piracy costs, adopting the GFS strategy is always the manufacturer's optimal choice. In addition, the study finds that when the piracy cost is moderate, adopting the GFS strategy effectively reduces the piracy rate. These conclusions can provide a theoretical reference for the scientific decision-making of information goods manufacturers under the influence of piracy.
    Management Science
    Strategic Aggressiveness, Ownership Concentration, and Corporate Risk-taking
    ZHANG Anjun, WU Jiayu, ZHI Yixia
    2025, 34(2):  225-231.  DOI: 10.12005/orms.2025.0066
    With the accelerated transformation and upgrading of China's economic structure and the increasing degree of market internationalization, the competitive environment faced by enterprises has become increasingly fierce. In order to better survive and develop in the market, many enterprises choose aggressive strategies in the hope of gaining a competitive advantage in fierce market competition. However, whether the aggressiveness of corporate strategy can effectively enhance the risk-taking level of corporate investment projects remains a question.
    Corporate strategy refers to the medium- and long-term development goals determined by a company and the set of decisions and actions taken to achieve them. Strategy serves as the highest guideline for corporate management and operation and determines how resources are allocated. Different strategic choices imply differences in organizational structure, talent allocation, management power and characteristics, and corporate culture orientation, all of which can affect the willingness of corporate managers to undertake risky investment projects. Corporate risk-taking reflects managers' willingness to undertake high-risk but potentially high-return investment projects, and it is significantly influenced by internal governance characteristics and factors as well as the external governance environment.
    Existing literature has explored the impact of corporate strategy choices on the level of corporate risk-taking. Some scholars argue that the higher the aggressiveness of a corporate strategy, the higher the level of corporate risk-taking. However, the higher the aggressiveness of a company's strategy relative to other companies in the same industry, the greater the cost of exploring new strategies. This can lead to increased operating and financial risks for the company and greater uncertainty regarding future operating performance. Moreover, the higher the aggressiveness of a corporate strategy, the more it exacerbates the information asymmetry between shareholders, creditors, and other stakeholders and management. This can lead to weakened supervision of management by shareholders, resulting in the risk of over-investment by management. Additionally, increased information asymmetry can lead to higher external financing constraints for the company, thereby increasing the risk of tight funding needs for investment projects. Furthermore, some scholars have empirically found that differences in corporate strategy can lead to higher borrowing interest rates, shorter borrowing terms, and smaller borrowing amounts from banks. This makes it difficult for companies to obtain the funds needed for investment projects and reduces their ability to pursue high-risk, high-return projects.
    This paper takes A-share listed companies on the Shanghai and Shenzhen stock exchanges from 2007 to 2021 as the research sample and empirically examines the interactive impact of corporate strategic aggressiveness and ownership concentration on the level of corporate risk-taking. The purpose is to answer the following questions: (1) Does strategic aggressiveness contribute to enhancing the level of corporate risk-taking? (2) Does ownership concentration affect the level of corporate risk-taking? (3) Does the level of ownership concentration significantly influence the relationship between strategic aggressiveness and risk-taking?
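    The interactive effect examined here is typically tested with an interaction term between strategic aggressiveness and ownership concentration. The sketch below shows such a regression on simulated firm-year data with statsmodels; the variable names and coefficients are hypothetical, not the paper's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 5000
# Simulated firm-year observations (illustrative only).
strategy = rng.uniform(0, 1, n)      # strategic aggressiveness score
ownership = rng.uniform(0.1, 0.7, n) # ownership concentration (top-1 share)
size = rng.normal(22, 1, n)          # control: log total assets
risk = (0.5 * strategy - 0.6 * ownership - 0.8 * strategy * ownership
        + 0.02 * size + rng.normal(0, 0.1, n))   # risk-taking proxy

df = pd.DataFrame({"risk": risk, "strategy": strategy,
                   "ownership": ownership, "size": size})

# Interactive effect: a negative coefficient on strategy:ownership would mean
# ownership concentration weakens the strategy -> risk-taking relationship.
model = smf.ols("risk ~ strategy * ownership + size", data=df).fit(
    cov_type="HC1")                  # heteroskedasticity-robust standard errors
print(model.summary().tables[1])
```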
    The results show that the higher the corporate strategic aggressiveness, the higher the level of corporate risk-taking. However, this positive correlation is weakened as ownership concentration increases. The higher the ownership concentration, the lower the level of corporate risk-taking, and this negative correlation is enhanced as strategic aggressiveness increases. Further research indicates that these relationships are mainly observed in non-state-owned enterprises with combined CEO and chairman roles, companies with lower growth potential, non-high-tech enterprises, and regions with a higher degree of marketization. The conclusions of this study enrich the literature on the economic consequences of corporate strategic aggressiveness and the factors affecting corporate risk-taking. They also provide important insights for how companies can choose appropriate development strategies based on their ownership structure characteristics and risk-taking capabilities, as well as for government departments to further improve the modern corporate ownership governance structure to promote corporate strategic risk-taking and support the accelerated high-quality development of China's economy.
    Live-streaming Mode Choice of Brand Company with Consumer's Waiting Cost and Word-of-mouth Effect
    MA Jiaxin, ZHANG Depeng, LIN Qiang, FU Lihong
    2025, 34(2):  232-239.  DOI: 10.12005/orms.2025.0067
    The outbreak of the epidemic made live streaming extremely popular, and many brand companies have joined in. They not only cooperate with expert anchors in live streaming but also open their own self-operated live streaming rooms. As a result, three live streaming modes have emerged for brand companies: the brand self-live streaming mode, the expert live streaming mode, and a mixed mode combining both. However, not all live streaming modes are beneficial to a company's live streaming revenue, especially when consumers' waiting costs and word-of-mouth effects are considered; thus, when to choose which live streaming mode is the focus of this study. By examining this question, we provide a theoretical reference that helps brand companies choose the optimal live streaming mode according to customer demand, word-of-mouth effects and their actual situation.
    First, the word-of-mouth effect of the product (the word-of-mouth weight and the word-of-mouth value evaluation) and the consumer's waiting cost are incorporated into the consumer demand function. Then game theory is used to construct the brand company's decision-making model under the three live streaming modes, and the optimal solutions for each mode are derived. Finally, the company's optimal profits in the three live streaming modes are compared to determine the best choice. This paper analyzes the impact of the consumer's waiting cost and the word-of-mouth effect on the brand company's optimal live streaming mode choice. In addition, the effect of the anchor's influence on the optimal live streaming mode choice is analyzed through simulations.
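    A rough numerical version of this mode comparison is sketched below. The linear demand function, the commission rate and all parameter values are hypothetical assumptions used only to show how the three modes' optimal profits can be compared; they are not the paper's model.

```python
import numpy as np
from itertools import product

# Stylized comparison of the three live-streaming modes; the demand function
# and parameter values below are hypothetical, not the paper's model.
a = 10.0        # base market size
wom = 0.8       # word-of-mouth effect (weight times value evaluation)
t_wait = 0.6    # consumer waiting cost in the expert live-streaming room
beta = 1.0      # price sensitivity
phi = 0.25      # commission paid to the expert anchor
boost = 1.5     # extra demand pull of the expert anchor's influence

prices = np.linspace(0.1, 10, 100)

def self_mode(p):
    d = max(a + wom - beta * p, 0)
    return p * d

def expert_mode(p):
    d = max(a + wom + boost - beta * p - t_wait, 0)
    return (1 - phi) * p * d

def mixed_mode(p_self, p_expert):
    # Crude assumption: the market splits evenly between the two rooms.
    d_self = max(a + wom - beta * p_self, 0) / 2
    d_exp = max(a + wom + boost - beta * p_expert - t_wait, 0) / 2
    return p_self * d_self + (1 - phi) * p_expert * d_exp

profits = {
    "self":   max(self_mode(p) for p in prices),
    "expert": max(expert_mode(p) for p in prices),
    "mixed":  max(mixed_mode(ps, pe) for ps, pe in product(prices, prices)),
}
print(profits, "best mode:", max(profits, key=profits.get))
```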
    The following conclusions are drawn from this study. (1) A strong word-of-mouth effect (both the word-of-mouth weight and the word-of-mouth value evaluation are high) is not necessarily good for the company's live streaming profits. (2) The company's optimal live streaming mode choice is affected not only by word of mouth but also by the consumer's waiting cost. When the consumer's waiting cost is low, if the positive effect of word of mouth is weak (consumers do not value word of mouth or the value evaluation is low), the expert live streaming mode should be chosen. For example, the Zhong Xue Gao brand generated negative word of mouth due to its high ice-cream prices, resulting in a performance crisis, but achieved annual sales of 800 million with the promotion of the celebrity anchor Luo Yonghao. If the positive effect of word of mouth is moderate (consumers do not value word of mouth but the value evaluation is high, or consumers value word of mouth but the value evaluation is low), the mixed live streaming mode should be chosen. If the positive effect of word of mouth is significant (consumers pay more attention to word of mouth and the value evaluation is high), only the self-live streaming mode should be chosen. As the consumer's waiting cost increases, brand companies become more inclined toward the mixed live streaming mode; for popular products that are predicted to sell out, or where it is uncertain when the product will appear in the live streaming room, a hybrid mode should be considered, i.e., opening a brand self-live streaming room to serve customers who are unwilling to wait. (3) The anchor's influence also affects the brand company's optimal live streaming mode choice. If the consumer's waiting cost is low and the self-live streaming anchor's influence is greater, opening only the brand self-live streaming mode is better. This also explains why more and more brand companies invite influential business leaders to appear in their live streaming rooms, for example, Xiaomi's mobile phone sales department invites Lei Jun to its self-live streaming room, and L'Oreal and other brand companies gradually train internal professional anchors. However, as the consumer's waiting cost increases, the mixed live streaming mode becomes the best choice.
    In addition, this study makes two extensions. First, it further considers the firm's optimal live streaming mode choice when waiting costs have a negative effect on the word-of-mouth effect, and finds that the conclusions do not change, suggesting that the findings are robust. Second, it discusses the revenue erosion effect generated by opening the expert live streaming mode: when the customer waiting cost is below a certain threshold, the expert live streaming mode erodes part of the market demand of the self-live streaming room, so the revenue of the expert live streaming room indirectly erodes the revenue of self-live streaming.