Sponsored by: Operations Research Society of China
Organized by: Hefei University of Technology
ISSN 1007-3221  CN 34-1133/G3
ORMS
Operations Research and Management Science
(Monthly, started in 1992)
Superintendent: China Association for Science and Technology
Sponsored by: Operations Research Society of China
Co-sponsored by: Hefei University of Technology
Published by: Editorial Office of Operations Research and Management Science
Editor-in-Chief: Xiang-Sun Zhang
China Post Code: 26-191
Address: Institute of Systems Engineering, Hefei University of Technology, Hefei, Anhui, China
Zip code: 230009
Phone: (0551) 62901503
Email: ycygl@hfut.edu.cn
Chinese Core Journal
Statistical Source Journal of Chinese Science and Technology Papers
China Science and Technology Core Journal
Source journal of the Chinese Science Citation Database (CSCD)
Source journal of the Chinese Social Sciences Citation Index (CSSCI)
Class A Important Journal of Chinese Management Science
25 January 2025, Volume 34 Issue 1
Theory Analysis and Methodology Study
Improved Bipartite Graph Recommendation Algorithm Integrating Probability Matrix Factorization Model
GAN Peilu, SONG Yihao, ZHU Xiaoxiong, ZHOU Zhili
2025, 34(1):  1-7.  DOI: 10.12005/orms.2025.0001
Abstract | PDF (1178KB)
With the development of Internet technology and the diversification of personal demand, personalized recommendation has attracted wide attention. For items such as movies, goods and news, interaction data are generated as users interact with items, mainly through users' selection of items. Based on such interaction data, a recommendation algorithm can mine information about users' preference behavior and recommend further items that users are likely to favor, thereby achieving personalized recommendation. Such data mining requires adequate historical data to achieve good recommendation performance. However, interaction data are often sparse and unbalanced, as each user has selected only a few items and each item has been selected by only a few users, which greatly degrades the recommendation effect. The bipartite graph algorithm, which models the characteristics of interaction between users and items, is widely used as an effective recommendation algorithm. Although it performs well, like other recommendation algorithms it is also affected by sparse and unbalanced user-item interaction data. In the light of this challenge, this study is committed to mitigating the impact of sparse and unbalanced data on the bipartite graph recommendation algorithm and improving its recommendation performance.
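For concreteness, the classical two-step resource-allocation step (often called ProbS) that bipartite-graph recommenders are built on can be sketched as follows. This is a generic sketch of the baseline, not the authors' improved version, and all names are ours.

```python
import numpy as np

def probs_scores(A):
    """Two-step resource allocation (ProbS) on a user-item bipartite graph.

    A: binary user-item interaction matrix, shape (n_users, n_items).
    Returns per-user item scores; already-selected items are masked out.
    """
    A = np.asarray(A, dtype=float)
    k_user = A.sum(axis=1)                      # user degrees
    k_item = A.sum(axis=0)                      # item degrees
    # Step 1: each item spreads its unit resource evenly to its users;
    # Step 2: each user redistributes evenly to the items they selected.
    B = A / np.where(k_user[:, None] > 0, k_user[:, None], 1.0)
    # W[i, j]: fraction of item j's unit resource that ends up on item i.
    W = (B.T @ A) / np.where(k_item > 0, k_item, 1.0)
    scores = A @ W.T                            # final resource vector per user
    scores[A > 0] = -np.inf                     # don't re-recommend known items
    return scores

# Toy example: user 0 selected items {0, 1}, user 1 selected item {1}.
A = np.array([[1, 1],
              [0, 1]])
s = probs_scores(A)                             # item 0 scores 0.25 for user 1
```

Ranking each user's row of `scores` in descending order yields the recommendation list.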
The core idea of the traditional bipartite graph recommendation algorithm is to allocate resources to items in two steps, initial resource acquisition and resource reallocation, so that items can be ranked by their final resources and recommended to each user. Accordingly, on the one hand, this study improves the acquisition criteria for initial resources and the reallocation method for final resources to make full use of historical interaction data: first, the user scoring criteria are modified; then, time-effect factors are incorporated; lastly, the item attribute information of user scoring is expanded. On the other hand, the probability matrix factorization model transforms the high-dimensional initial matrix into two lower-dimensional matrices and estimates the initial matrix through their product, thus mitigating the problem caused by the lack of data. Therefore, in the light of the characteristics of the problem, this study predicts the scores of items not yet selected by users through a constrained probability matrix factorization model, and the predicted data are then blended into the initial interaction data with set weights so as to make full use of both real and predicted data. Afterwards, we use a benchmark dataset commonly used in recommendation algorithm research, the MovieLens dataset, to test the performance of the algorithm improved to different degrees; the data have been shown to be sparse. The experiment is conducted with 50-fold cross-validation, and each improvement step is compared with the results of the traditional bipartite graph recommendation algorithm. The experiment evaluates the algorithms with commonly used recommendation metrics, including the accuracy rate, hit rate, ranking rate and the composite F1 indicator integrating accuracy rate and hit rate, as well as precision metrics including the root mean square error and the mean absolute error.
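The factorize-then-blend idea can be illustrated with a minimal unconstrained matrix factorization fitted by gradient descent; the paper's constrained probability matrix factorization adds structure not shown here, and the rank, learning rate, regularization and blend weight `alpha` below are illustrative choices of ours.

```python
import numpy as np

def pmf_fill(R, mask, rank=2, lr=0.01, reg=0.05, epochs=2000, alpha=0.7, seed=0):
    """Fit R ~ U V^T on observed entries, then blend predictions into the gaps.

    R: rating matrix; mask: 1 where an entry is observed; alpha: weight kept
    on the real data where it exists (predictions fill the missing entries).
    """
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(epochs):
        E = mask * (R - U @ V.T)           # error on observed entries only
        U += lr * (E @ V - reg * U)        # gradient step on the MAP objective
        V += lr * (E.T @ U - reg * V)
    pred = U @ V.T
    # Observed entries: weighted mix of real and predicted data;
    # missing entries: prediction alone.
    return np.where(mask > 0, alpha * R + (1 - alpha) * pred, pred)
```

The filled matrix can then be fed to the bipartite-graph allocation step in place of the raw sparse interaction matrix.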
The experimental results show that the hit rate of all algorithms exceeds 50%, and the improved algorithm raises the hit rate to more than 52% with other factors unchanged. In addition, the accuracy rate of the algorithm also rises slowly. As the algorithm is progressively improved, the F1 value combining hit rate and accuracy rate gradually increases, which shows that the improvements to the bipartite graph recommendation algorithm in this study are effective. This suggests that there may be latent user preference information in the time factor or the item category attribute factor, and integrating further information related to user interest more comprehensively is of great significance for improving performance. Moreover, the recommendation performance of the bipartite graph algorithm improves significantly when it is merged with the probability matrix factorization model. As a result, the improvements in this study are effective and contribute to the further development of recommendation algorithms. In an era of information overload, reducing people's information filtering costs and improving the efficiency of information searching is important and meaningful. As one of the most important research directions in information filtering, recommendation algorithms still have many aspects worth exploring further, which is of great significance for both theoretical development and practical problem solving.
Although item attribute information is integrated into this study, only the category attribute of items is considered, and items carry more attribute information. Taking movies as an example, there is also attribute information such as the cast and the director of the movie. The more item attribute information is incorporated, the better the recommendation performance of the bipartite graph algorithm will be. Besides, the research on the probability matrix factorization algorithm in this study is limited to the latent eigenvectors of users and items. There are still details to improve, such as incorporating time factors and adding trust relationships, which may raise the accuracy of the data predicted by the probability matrix factorization model and thereby further improve the recommendation performance of the bipartite graph recommendation algorithm.
Owen Value and Weak Symmetries
SHAN Erfang, NIE Shanshan, CUI Zeguang
2025, 34(1):  8-11.  DOI: 10.12005/orms.2025.0002
Abstract | PDF (926KB)
A cooperative game with transferable utility (a TU-game) is a pair (N, v), where N is the finite set of players and v: 2^N → R with v(∅) = 0 is the characteristic function of the game, that is, a real-valued map that assigns to each coalition S ⊆ N the worth v(S) that its members can obtain by cooperating. The worth v(S) represents the economic possibilities of the coalition S if it is formed. A central issue is to find a method to distribute the benefits of cooperation among the players. A (single-valued) solution for TU-games is a function that assigns to every TU-game a vector with the same dimension as the size of the player set, where each component of the vector represents the payoff assigned to the corresponding player. The Shapley value (SHAPLEY, 1953) is probably the most eminent single-valued solution concept for this type of game. In 1977, Owen suggested an extension of the Shapley value to TU-games with a coalition structure, where the coalition structure is represented by a partition of the set of players. He defined and characterized the coalitional value for TU-games with coalition structures; this coalitional value is also called the Owen value. The Owen value can be seen as a two-step application of the Shapley value: first regard the unions as players and use the Shapley value for the allocation between unions, and then use the Shapley value for the allocation within each union.
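The two-step reading above corresponds to the standard permutation description of the Owen value: average each player's marginal contribution over exactly those orderings of N in which every union's members appear consecutively. A direct enumeration sketch (exponential, so toy games only; all names are ours):

```python
from itertools import permutations

def owen_value(n_players, unions, v):
    """Owen value by direct enumeration over union-consistent orderings.

    unions: a partition of range(n_players), e.g. [[0, 1], [2]];
    v: characteristic function mapping a set of players to its worth, v({}) = 0.
    """
    orders = []
    for union_order in permutations(unions):
        def extend(idx, prefix):
            if idx == len(union_order):
                orders.append(tuple(prefix))
                return
            # within each union, members may appear in any internal order
            for inner in permutations(union_order[idx]):
                extend(idx + 1, prefix + list(inner))
        extend(0, [])
    payoff = [0.0] * n_players
    for order in orders:
        s = frozenset()
        for i in order:
            payoff[i] += v(s | {i}) - v(s)   # marginal contribution of i
            s = s | {i}
    return [p / len(orders) for p in payoff]

# Unanimity game on N = {0, 1, 2} with coalition structure {{0, 1}, {2}}:
v = lambda S: 1.0 if len(S) == 3 else 0.0
print(owen_value(3, [[0, 1], [2]], v))   # -> [0.25, 0.25, 0.5]
```

In the example the quotient game splits the worth equally between the two unions (1/2 each), and the union {0, 1} then splits its half symmetrically, matching the two-step Shapley intuition.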
The Owen value (OWEN, 1977) is axiomatically characterized by efficiency, additivity, the null player axiom, symmetry across unions, and symmetry within unions. In 2007, KHMELNITSKAYA and YANOVSKAYA established another axiomatization via the four axioms of efficiency, marginality, symmetry across unions, and symmetry within unions. Symmetry across unions requires that unions with the same marginal contribution in the quotient game should receive the same total union payment, while symmetry within unions requires that two players in the same union with the same marginal contribution should receive the same payment. However, both symmetries compare the payments of players with the same marginal contribution, which is sometimes hard to achieve in reality. For this reason, the two kinds of symmetry are weakened respectively in this paper. We propose weak symmetry across unions and weak symmetry within unions as axioms for the axiomatizations of the Owen value. For unions with the same marginal contribution in the quotient game, the weak symmetry across unions axiom only requires the unions' total payments to have the same sign. Similarly, for players with the same marginal contribution in the same union, weak symmetry within unions only requires their payments to have the same sign.
First of all, using the weak symmetry across unions and weak symmetry within unions axioms, we give an axiomatic characterization of the Owen value in combination with efficiency, additivity and the null player axiom. The axiomatization is obtained by showing that a value with the above five properties must satisfy symmetry across unions and symmetry within unions. Furthermore, by replacing additivity and the null player axiom with marginality, we provide an alternative characterization of the Owen value. Marginality states that if a player has the same marginal contribution in any two games, then he receives the same payment in both games; in other words, marginality demands that a player's payoff depend only on her own productivity. This axiom plays a significant role in the axiomatizations of values for TU-games and has been successfully applied to characterize a variety of values. To give the axiomatic characterization, we propose a lemma that plays a key role in the proof. Finally, we show that the axioms involved in the two axiomatic characterizations of the Owen value presented in this paper are logically independent.
A Robust Control Chart for Monitoring High-dimensional Data Streams
DING Dong, JIANG Yalei
2025, 34(1):  12-18.  DOI: 10.12005/orms.2025.0003
Abstract | PDF (1058KB)
As technology advances quickly, the functions of products are expanding in number and their structures are becoming progressively more complicated. Therefore, it is often necessary to monitor multiple quality characteristics simultaneously during the production process. Meanwhile, with the rapid growth of data collection technology, the dimension of the data is expanding quickly, and the number of product indicators that need to be monitored during production is growing day by day. High-dimensional data streams appear more and more frequently in various industries, especially in sensor-based manufacturing and image processing. As a new type of data, high-dimensional data streams have attracted a lot of attention and have already become pervasive in daily life; examples include information returned by sensors, real-time meteorological cloud images captured by satellites, and user communication records.
However, the complexity of high-dimensional data brings many new challenges to quality monitoring. For instance, due to a large number of variables, the normality assumption of data is often invalid in high-dimensional cases, and the distribution form is usually unknown in practical applications. At the same time, the control chart that only detects mean shifts has been unable to satisfy the practical needs. Therefore, we urgently need statistical methods to monitor high-dimensional data streams.
To this end, a new robust control chart for monitoring independent high-dimensional data streams is proposed. Firstly, the local statistics for monitoring each dimension of the data streams are constructed by combining the score test statistic with the exponentially weighted moving average (EWMA) strategy. As a result, for the t-th observation X_{k,t} of the k-th data stream, the final local charting statistic is given by
$R_{k,t}=\left(\theta_{k,t}\right)^{\mathrm{T}} I_{0}^{-1} \theta_{k,t},$
where θ_{k,t} is the EWMA-type score function vector and I_0 is the in-control Fisher information matrix. Naturally, this type of statistic makes use of all data up to the current time point, and the control chart gives different observations varying weights. On this basis, the global monitoring statistics are constructed by utilizing the sum, the maximum value, and the top-r strategy. In particular, the proposed top-r control chart Z_top-r detects shifts better than the method Z_max and more efficiently than the method Z_sum, because it only needs the r largest local statistics, making it more convenient to compute and more economical. Accordingly, we advise using the method Z_top-r whether mean shifts or variance shifts are to be detected; the numerical simulations and a real case study demonstrate its effectiveness. Practically, the top-r control chart statistic can be expressed as
$Z_{\mathrm{top}-r}=\sum_{k=1}^{r} R_{(k), t}=\sum_{k=1}^{r}\left[\left(\theta_{(k), t}\right)^{\mathrm{T}} I_{0}^{-1} \theta_{(k), t}\right],$
where R(k),t denotes the kth largest local statistic. In practice, the simulation results have shown that using the Ztop-r statistics is sensitive and robust to detect process changes with suitable choices of the parameter r.
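The chart's mechanics can be sketched in a few lines. For illustration only, we take the in-control model of every stream to be N(0, 1), in which case the score of an observation is just its value and I_0 reduces to the identity; the paper's chart would plug in the score and Fisher information of whatever model is actually being monitored. All names are ours.

```python
import numpy as np

def top_r_chart(X, lam=0.2, r=5):
    """EWMA score statistics per stream and the top-r global chart statistic.

    X: (T, p) array of observations, one column per data stream.
    Returns the sequence Z_{top-r, t} for t = 0, ..., T-1.
    """
    T, p = X.shape
    theta = np.zeros(p)
    Z = np.empty(T)
    for t in range(T):
        theta = (1 - lam) * theta + lam * X[t]   # EWMA-type score vector
        R = theta ** 2                           # R_{k,t} = theta^T I0^{-1} theta
        Z[t] = np.sort(R)[-r:].sum()             # sum of the r largest locals
    return Z
```

An alarm is raised when Z_t exceeds a control limit calibrated (e.g. by simulation) to the desired in-control average run length.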
This method is appropriate for data with normal distribution or non-normal distribution. At the same time, it can detect not only shifts of mean value, but also shifts of variance, which is not available in many control charts. In order to evaluate the monitoring effect of the proposed control charts, the Monte Carlo simulation method is used. The average run length is used as an indicator to evaluate the monitoring performance of the control chart. The effectiveness and robustness of the proposed control charts are verified by the numerical simulation.
In order to illustrate the monitoring effect of the new control chart method in practical application, a case study is carried out with a set of real data. The data set contains 1,567 samples in total from a semiconductor manufacturing process, and each observation vector is composed of 590 variables. The final results show that the proposed method Z_top-r has higher computational and detection efficiency and can detect abnormal shifts well in high-dimensional data streams in practical production.
The proposed new control chart has several advantages. Firstly, it can deal with both normal and non-normal data. Secondly, it can detect not only mean shifts but also variance shifts. Finally, the method only needs to focus on the r largest local statistics; the statistics are simple in form, easy to compute, and efficient. These advantages guarantee that, in the actual production process, shifts in the data streams can be signalled quickly and effectively, so the new control chart can be used in actual production to monitor product quality effectively.
In this paper, we assume that the data streams are independent of one another, but in the actual production process, the relationship among data streams will be more complex as the dimension increases. In future research, we can consider extending the proposed method to the case of more general high-dimensional data streams.
Decentralized Multi-project Flexible Scheduling Optimization Based on Task Cutting Mechanism
LIN Xinyu, LIU Guoshan, WANG Min
2025, 34(1):  19-26.  DOI: 10.12005/orms.2025.0004
Abstract | PDF (1474KB)
With the development of the economy, the projects of enterprises are generally large, diversified and decentralized, and the decentralized multi-project management mode is attracting more and more attention. In decentralized multi-project scheduling, each project leader has considerable autonomy and often has decision-making goals that differ from those of the senior manager: a sub-project leader focuses on obtaining more global resources and completing the project as soon as possible, while the senior manager needs to take a holistic view to minimize the impact of limited resources on multiple projects. Therefore, how to coordinate global resource allocation and plan the scheduling of multiple projects in a decentralized multi-project environment has become a key concern.
However, most existing studies on decentralized multi-project scheduling assume that task durations are known and constant and that tasks are executed at a uniform rate. This is not entirely consistent with reality, causes waste of resources, and limits the flexibility and practicality of task scheduling. For projects without strict limits on task durations, the duration of a task can be chosen flexibly within a certain range according to the supply of resources, and the execution rate of a task can be adjusted to the resource supply in different time periods, so as to make full use of resources. Therefore, building on existing research, this paper proposes a flexible scheduling method for decentralized multi-projects, which provides a new approach to resolving global resource conflicts.
First, a bi-level optimization model is proposed in the paper. The upper-level model is a global resources scheduling model with the decision objective of minimizing the delay cost of multi-projects, and the lower-level model is a local sub-project scheduling model with the decision objective of minimizing the completion time of sub-projects. In the local project scheduling process, the duration and start time of tasks are jointly used as decision variables, and a nested heuristic algorithm is used. In the global resources scheduling process, the paper proposes a new coordination mechanism. That is, in addition to the method of “delaying the start time of newly arrived tasks”, the resources conflict can be solved by cutting the executing task into two subtasks at the moment of conflict, and rescheduling the incomplete part without allowing interruptions by changing its execution rate.
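The task-cutting move in the coordination mechanism can be illustrated in isolation: split an executing task at the conflict moment, keep the finished part as scheduled, and reschedule the remainder at a new execution rate. This is a standalone sketch with names of ours; in the paper the move is embedded in the bi-level nested heuristic.

```python
from dataclasses import dataclass

@dataclass
class Task:
    start: float    # scheduled start time
    rate: float     # resource units consumed per period (execution rate)
    work: float     # total work content = rate x duration

def cut_task(task, t_conflict, new_rate, resume_at):
    """Cut an executing task at a resource-conflict moment.

    The finished part keeps its original schedule; the incomplete part is
    rescheduled without interruption at a different execution rate, chosen
    to fit the resource supply after the conflict.
    """
    done = task.rate * (t_conflict - task.start)   # work finished so far
    remaining = task.work - done
    head = Task(start=task.start, rate=task.rate, work=done)
    tail = Task(start=resume_at, rate=new_rate, work=remaining)
    finish = resume_at + remaining / new_rate      # new completion time
    return head, tail, finish
```

For example, a task of 10 work units running at rate 2 from time 0 and cut at time 3 leaves 4 units; resumed immediately at rate 4 it finishes at time 4 instead of the original time 5.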
The paper conducts experimental tests on randomly generated multi-project cases. The results show that the task-cutting coordination mechanism is effective in reducing both multi-project costs and delay days, because it increases the number of ways to resolve global resource conflicts and the duration and unit resource requirements of tasks are no longer strictly limited. The performance of this flexible scheduling approach is also influenced by the intensity of global resource conflicts and by project size: the larger the multi-project or the stronger the global resource conflict, the more difficult the scheduling becomes, and the advantage of the flexible scheduling approach decreases but tends to stabilize. When global resource conflicts are intense, the marginal utility of one more way to mitigate conflicts becomes small, and it is better to increase the supply of resources. The flexible scheduling method better meets the needs of decentralized multi-project management and also improves the soundness, flexibility and practicality of multi-project scheduling decisions. Multi-skilled global resources also help mitigate resource conflicts, which can be further considered in future research.
Plasma Bank Location-allocation Problem for Large-scale Infectious Diseases and Improved Multi-objective Gray Wolf Optimization Algorithm
ZHU Yaming, ZHANG Huizhen, MA Liang, ZHANG Bo
2025, 34(1):  27-33.  DOI: 10.12005/orms.2025.0005
Abstract | PDF (1277KB)
For developing countries with uneven medical resources and a large population base, how to reasonably locate and allocate plasma banks to effectively ensure the supply of plasma for the treatment of severe infectious disease patients in the recovery period is an urgent problem. In order to better cope with the impact of large-scale infectious diseases on the health and safety system, a multi-objective location-allocation problem (LAP) optimization model for plasma banks is established, considering multiple scenarios, capacity constraints, supply chain networks, collaborative positioning and other factors, with the goals of maximizing the timeliness of emergency plasma supply and minimizing the total cost.
The gray wolf optimization algorithm uses real-number vectors to simulate the locations of wolf packs to solve continuous optimization problems, and it has excellent optimization capabilities for both functional and high-dimensional combinatorial optimization problems. According to the characteristics of the model as a multi-objective discrete optimization problem, an improved multi-objective gray wolf optimization algorithm (IMOGWO) is designed to solve it. IMOGWO makes several main changes: firstly, it uses an external population to store the current non-dominated solutions; secondly, it proposes a new head-wolf selection strategy for discrete multi-objective optimization; finally, it uses a non-dominated sorting strategy and an elite strategy. These strategies work well with the original algorithm to improve its problem-solving ability.
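The external-population idea is the standard Pareto archive used by MOGWO-style algorithms. A minimal sketch of how such an archive is maintained (minimization of all objectives assumed; names are ours):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """External archive of non-dominated objective vectors.

    A candidate dominated by any archive member is discarded; otherwise it is
    added and every member it dominates is removed, so the archive always
    holds the current approximation of the Pareto front.
    """
    if any(dominates(a, candidate) for a in archive):
        return archive
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    return archive
```

Head-wolf selection then draws leaders from this archive (the paper adds a discrete selection strategy on top of it).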
Taking into account medical conditions, convenient transportation and other conditions, the article identifies 19 candidate sites for plasma banks and 47 candidate sites for collection facilities. As India's population base and density are similar to those of China, the plasma demand is based on India, which is also a developing country; relevant Indian epidemic data can be obtained at covidinda.org. Based on the population density ratio, the possible 30-day epidemic data of China under a liberalization policy can be estimated. Blood donation data are obtained from the Health Commission. All these are used to set the data for this scenario.
In order to verify the effectiveness of the algorithm in solving the model example, it is compared with the traditional multi-objective gray wolf algorithm (MOGWO) and three other multi-objective intelligent optimization algorithms. The experimental results show that IMOGWO performs better on both total time and total cost, and it is also the best among the compared algorithms on the normalized average of the two objectives. The resulting scheme minimizes the total travel time of plasma, ensures the timely storage and supply of plasma, reduces the total cost, and quickly and effectively selects a reasonable plasma bank location and allocation scheme.
The article's research model still has many aspects that can be explored, and the article proposes the following prospects for future research: (1) Though the model is reasonable, it still cannot fully reflect the complexity of large-scale infectious disease environments, making it difficult to truly predict and implement solutions; future work can build more realistic models. (2) Uncertainty has a significant impact on the formulation of the entire scheme; future work will consider its impact to make the model more reasonable. In future research, the basic model established in this article will be improved and extended to the actual location and allocation planning of plasma banks.
Multi-modes Hybrid Operation Scheme of Trains from Loading Area Based on Railway Network with Unbalanced Block Section Carrying Capacity
LI Bing, CHENG Yan, XUAN Hua
2025, 34(1):  34-40.  DOI: 10.12005/orms.2025.0006
Abstract | PDF (1325KB)
Railway transportation is the main transport mode for bulk cargo. Wagon flows generated in the railway network should primarily be consolidated into direct cargo trains at the original loading area and then delivered to their destination stations. Wagons not consolidated into a direct cargo train at the original loading area are delivered to adjacent technical stations and collected there to organize the other type of train, i.e. the direct cargo train from technical stations. Prioritizing the consolidation of wagons at the original loading area and optimizing the cargo train operation scheme therefore reduces the reclassification work at intermediate stations, relieves the workload of technical stations, and makes the organization of cargo trains more efficient.
China's railroad network is composed of many rail block sections with various train-carrying capacities; that is to say, the allowed train traction tonnage differs across block sections. This unbalanced block section carrying capacity is caused by the railroad line level, topographic conditions and marshalling infrastructure. Owing to the disturbance caused by inconsistent train traction tonnage and the unbalanced carrying capacity of the block sections along a train's journey, two types of train operation modes are usually adopted: the constant train traction tonnage scheme and the train load transfer scheme. Either scheme brings some negative impacts. Specifically, the constant train traction tonnage scheme may lead to trains coupling insufficient wagons and running below the rated traction tonnage along some rail sections, while the train load transfer scheme is likely to cause additional train detention time at the intermediate switch station. Therefore, how to scientifically work out the train operation scheme to improve the utilization of a railway network with unbalanced block section carrying capacity is an urgent problem to be solved.
Aiming at railway networks with unbalanced block section carrying capacity, a multi-mode hybrid operation scheme for trains from the loading area is studied, which minimizes the impact of inconsistent train traction tonnage and unbalanced block section carrying capacity. Firstly, a train formation plan based on a constant traction tonnage, set according to the minimum carrying capacity of the block sections along the train running corridor, is studied. Three combined operation modes of trains from the loading area are given, and the objective function minimizes the wagon-hour consumption induced by loading/unloading work at the handling station, reclassifying trains at the middle technical station, and light-load trains wasting block section carrying capacity. Constraints on centralized loading capacity and unique wagon flow arrangement are considered, and a train combination operation model based on constant traction tonnage is developed. Secondly, a train formation plan based on dynamic traction tonnage, arising from pick-up and cut-off work at the intermediate switch station, is studied; two combined operation modes of trains from the loading area are introduced, and a train combination operation model based on train load transfer is presented. Finally, the two proposed train combination operation schemes are merged into a hybrid train operation scheme integrating constant and dynamic traction tonnage.
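The trade-off between the two basic modes can be made concrete with a toy calculation: under constant traction tonnage the train is sized to the bottleneck section, wasting capacity on stronger sections, while load transfer retunes the tonnage at switch stations at the cost of detention time. The numbers and the unit detention cost below are purely illustrative.

```python
def constant_tonnage(sections):
    """Constant-traction-tonnage scheme: the train is sized to the minimum
    (bottleneck) block-section capacity along its corridor, so capacity on
    stronger sections goes unused."""
    tonnage = min(sections)
    wasted = [c - tonnage for c in sections]     # unused capacity per section
    return tonnage, wasted

def load_transfer(sections, detention_per_switch=1.0):
    """Load-transfer scheme: retune the tonnage at each switch station to the
    next section's capacity, paying detention time for every change."""
    switches = sum(1 for a, b in zip(sections, sections[1:]) if a != b)
    return list(sections), switches * detention_per_switch
```

A corridor with section capacities [5000, 4500, 6000] gives a constant tonnage of 4500 with 2000 tonnes of capacity wasted overall, or full utilization at the cost of two tonnage changes; the paper's models weigh exactly this kind of trade-off in wagon-hours.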
A case study on the Binzhou railway corridor of the Hailar loading station is given to test the constant traction tonnage scheme and the train load transfer scheme, and goes a step further to obtain a hybrid train operation scheme by integrating the two. The testing results show that the wagon-hour consumption induced by the constant train traction tonnage scheme and the train load transfer scheme is 18817.24 and 12743.33 wagon-hours, respectively, so the train load transfer scheme is clearly more advantageous. Further testing indicates that the hybrid train operation scheme can reduce the wagon-hour consumption to 12107.44.
The current study focuses on the railroad network with the physical path of only one wagon flow between pairs of handling stations. The next research work will aim at the complex railroad network with many wagon flow physical paths between pairs of handling stations.
Hierarchical Joint Optimization for Product Line Design and Cloud Manufacturing Decisions
WU Jun, PAN Xiaotian, ZHANG Lei
2025, 34(1):  41-46.  DOI: 10.12005/orms.2025.0007
Abstract | PDF (1564KB)
Due to the penetration of new-generation information technologies such as cloud computing, big data, and the Internet of Things into the manufacturing field, the close integration of manufacturing and information technology has become an inevitable trend, and the cloud manufacturing model has become a hot research topic. As a core strategy for companies to meet the diverse needs of customers, product lines have had a broad and profound impact on industry since the concept was first introduced. This impact is reflected not only in the fact that a diversified product range within a product line allows companies to cover broader market demand, reduce the risks of dependency on a single product, satisfy the varying preferences of different consumers, and increase market share, but also in the shift from centralized to decentralized production, which has fostered the development of related industries such as logistics. In academia, product lines have also gained increasing attention in recent years, with studies encompassing design, manufacturing, logistics, and marketing; product line design, as the core link in the entire value chain, has become a central focus of researchers. However, existing research on product line design mainly focuses on the market performance of product varieties, and little literature delves into the inherent interaction between architecture design and engineering manufacturing within product lines. Thus, this paper emphasizes the inherent interaction between product line design and cloud manufacturing decisions, and proposes a hierarchical joint optimization approach.
A nonlinear bi-level programming model is formulated to reveal the complex interaction between product line design and cloud manufacturing decisions. The manufacturer in the upper-level model designs the product line architecture to maximize its expected profit. In the lower-level model, multiple service providers on the cloud manufacturing service platform simultaneously optimize the types of cloud manufacturing product modules to maximize their respective expected profits, and their decisions are mutually independent. A nested Levy-Jaya algorithm is developed to solve the model. A cloud manufacturing case study of an electric vehicle product line is presented to demonstrate the feasibility of the model and algorithm.
Our research findings have several important managerial implications: (1) There is a leader-follower joint influence between product line design and cloud manufacturing decisions. Thus, the optimization approach for them should also be a hierarchical optimization. (2) The proposed non-linear bi-level programming model exemplifies a typical hierarchical interactive optimization problem, which is easy to expand and apply. Therefore, it has universal research significance and practical value. (3) The nested Levy-Jaya algorithm has superior performance and stability in solving bi-level programming models.
Compared with previous research, the major contributions of this study are as follows. (1) By establishing an interactive decision mechanism between product line design and cloud manufacturing decisions, the leader-follower interaction and the complex decision-making process between them are analyzed in detail. (2) We establish a game-theoretic model based on Stackelberg game theory to quantitatively optimize the complex decision process of product line design and cloud manufacturing decisions, and address the technical difficulties in constructing its mathematical expressions. (3) To handle the complexity of solving the proposed non-linear bi-level programming model, an improved nested Levy-Jaya algorithm is developed, combining the global search of Levy-distribution random walks with the efficient local search of the Jaya algorithm.
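Both ingredients named above, the Jaya update rule and the Levy random walk, are standard building blocks. A minimal sketch of one such iteration for a continuous minimization problem is given below; this is not the authors' exact nested algorithm, and the 0.01 step scale and greedy acceptance are illustrative assumptions:

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Levy-distributed random-walk step via Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def jaya_levy_iterate(pop, fitness, rng=None):
    """One Jaya iteration with a Levy perturbation, for minimization."""
    rng = rng or np.random.default_rng()
    scores = np.array([fitness(x) for x in pop])
    best, worst = pop[scores.argmin()], pop[scores.argmax()]
    new_pop = []
    for x in pop:
        r1, r2 = rng.random(x.size), rng.random(x.size)
        # Jaya rule: move toward the current best, away from the current worst
        cand = x + r1 * (best - np.abs(x)) - r2 * (worst - np.abs(x))
        cand = cand + 0.01 * levy_step(x.size, rng=rng)  # Levy global-search kick
        # greedy acceptance: keep the candidate only if it improves
        new_pop.append(cand if fitness(cand) < fitness(x) else x)
    return np.array(new_pop)
```

Because of the greedy acceptance step, the best objective value in the population never deteriorates from one iteration to the next.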
A Bike Repositioning Problem on Multigraph
XU Guoxun, ZOU An, XIANG Ting, ZHAO Da
2025, 34(1):  47-53.  DOI: 10.12005/orms.2025.0008
Abstract ( )   PDF (955KB) ( )
References | Related Articles | Metrics
Nowadays, as a new form of the sharing economy, bike sharing systems (BSSs) have been widely recognized as an economical and low-carbon mode of transport, and they have effectively solved the ‘last mile’ problem in short-distance trips. One fundamental problem of bike sharing is that the number of bikes at some stations is not enough to satisfy user demand. Therefore, operators need to deploy trucks to transport bikes from surplus stations to deficit ones to meet the demand. This redistribution problem is known as the bike repositioning problem (BRP).
Due to its practical importance and unique characteristics, BRP has attracted much attention in recent years. In the literature, BRP is modeled as either static BRP (SBRP) or dynamic BRP (DBRP). The vast majority of BRP studies concern SBRP, partly because it is easier to model and the impact of repositioning may be more important at night. SBRP considers operations within a period and neglects station demand variations, which suits scenarios in which demand is low or the system is closed, so that the change in demand is negligible; DBRP considers real-time operations and takes station demand variations into account. Three types of models have been presented, including the Arc-Indexed (AI), Time-Indexed (TI), and Sequence-Indexed (SI) formulations. In view of the dynamic demand characteristic of DBRP, previous studies mainly analyzed the time-varying and stochastic characteristics of demand, and formulated two corresponding DBRP models.
However, existing BRP studies generally represent the road network as a weighted complete graph, where arcs represent the shortest paths between pairs of stations and several attributes (e.g., travel time, travel cost) are defined for each arc. The shortest path implied by such an arc is computed according to a single criterion, so alternative paths offering a different compromise between these attributes are discarded.
In this paper, a bike repositioning problem on a multigraph (BRP-MG) is proposed to minimize the total cost (i.e., the total travelling costs and the total penalty costs for deviating from the expected demands), where alternative routes are considered. Ideally, the proposed BRP-MG defines one arc between two stations for each Pareto-optimal road path according to the arc attributes in the bike sharing system, so any good road path is captured in the graph.
In order to solve the proposed BRP-MG, this study considers the different solution methods used to solve BRP, including exact methods and heuristics. However, it is quite hard to adopt exact methods to solve realistic BRP because the problem is NP-hard: exact methods obtain optimal solutions only in small BSS networks and are unsuitable for large ones. As our ultimate aim is to develop an efficient solution method for large BSS networks, this study develops a heuristic to solve BRP-MG. Tabu search is well known to be quite efficient for routing problems; in particular, it has been adopted with great success in the multigraph-based vehicle routing problem (VRP) and in BRP, obtaining high-quality solutions in a short computing time. Hence, tabu search is chosen as the backbone of our solution method.
However, in addition to determining the sequence of nodes, BRP-MG involves the loading/unloading quantities at each visited station and the arc selection between each two successive nodes. Therefore, the tabu search for routing problems cannot be directly applied to BRP-MG, and some extra operations need to be added.
For this purpose, several specific operators (i.e., insertion, deletion, and exchange operators) are developed to deal with the loading/unloading quantities at each visited station. In addition, a decision needs to be made on which parallel arc to select between each pair of successive nodes. Therefore, an efficient arc selection heuristic is proposed to handle arc selection for a fixed sequence of nodes.
To illustrate the accuracy and efficiency of our solution method, this study tests instances of different sizes and compares the results with those obtained from the genetic algorithm, the classic tabu search, and the variable neighborhood search. In addition, to compare the advantages and disadvantages of repositioning on a complete graph and on a multigraph, 10 groups of large instances are used to test repositioning on a complete graph (retaining the parallel arc with the lowest transportation cost) and on a multigraph. Although the computing time of multigraph-based repositioning is slightly longer than that on the complete graph, the quality of decision-making is significantly improved, as shown by the effective reduction in total costs and a substantial increase in the satisfied bike user demand.
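For a fixed sequence of stations on a multigraph, choosing one parallel arc per leg is itself a small optimization problem. The paper's own arc selection heuristic is not spelled out in the abstract; a hypothetical greedy sketch of the idea is given below: start from the cheapest arc on every leg and, while a time limit is violated, swap in the faster parallel arc with the smallest cost increase per unit of time saved.

```python
def select_arcs(sequence, arcs, time_limit):
    """Pick one parallel arc per leg of a fixed node sequence on a multigraph.
    arcs[(i, j)] is a list of (travel_time, travel_cost) tuples (Pareto arcs)."""
    legs = list(zip(sequence, sequence[1:]))
    # start from the cheapest parallel arc on every leg
    choice = [min(range(len(arcs[leg])), key=lambda k: arcs[leg][k][1]) for leg in legs]
    total_time = sum(arcs[leg][c][0] for leg, c in zip(legs, choice))
    while total_time > time_limit:
        best = None  # (cost increase per unit of time saved, leg index, arc index)
        for li, leg in enumerate(legs):
            t0, c0 = arcs[leg][choice[li]]
            for k, (t, c) in enumerate(arcs[leg]):
                if t < t0:
                    ratio = (c - c0) / (t0 - t)
                    if best is None or ratio < best[0]:
                        best = (ratio, li, k)
        if best is None:
            break  # no faster parallel arc left; the limit cannot be met
        _, li, k = best
        total_time -= arcs[legs[li]][choice[li]][0] - arcs[legs[li]][k][0]
        choice[li] = k
    return choice, total_time
```

On a complete graph every `arcs[(i, j)]` list has length one and the loop does nothing, which mirrors the comparison experiment described above.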
Allocation and Stability for Games with Relation Function
LI Shujin
2025, 34(1):  54-61.  DOI: 10.12005/orms.2025.0009
Abstract ( )   PDF (1082KB) ( )
References | Supplementary Material | Related Articles | Metrics
Classical cooperative games assume that any coalition can be formed. However, in real cooperative situations, this assumption often does not hold. In many cases, there may be no direct cooperative relation between two players, but an indirect relationship is established through a third player acting as a middleman. Different from classical cooperative games, this class of games is called restricted cooperative games. The coalition structure game is one branch of restricted cooperative games. Corresponding to the Shapley value for classical cooperative games, Owen established the Owen value for coalition structure games, and Winter extended the Owen value to NTU games. In the coalition structure game, the relation structures between players in a coalition cannot be expressed explicitly. Considering the impact of different relationship structures on coalition payoffs, Myerson proposed communication games and, again corresponding to the Shapley value for classical cooperative games, a new allocation rule called the Myerson value. According to Myerson’s communication games, for the player set {1,2,3}, if the connection structures among the three players are {12,13} and {12,23} respectively, then the payoffs of the player set {1,3} will differ under the two cooperation structures. But for the player set {1,2,3}, because the players are connected in both cases, their payoffs under the two structures are considered equal, both equal to the payoff of the grand coalition {1,2,3}. However, in many situations the payoffs of players {1,2,3} under different connection structures do differ, and the communication game cannot capture this difference.
Considering the payoff difference caused by different ways of cooperation among players, Jackson and Wolinsky modified Myerson’s communication game model and built the network game model, which can distinguish differences in the connection structures between players. Jackson then proposed a new allocation rule for the network game, the link-based flexible network allocation rule, which is defined with respect to non-directed graphs. Generally, a non-directed graph represents a symmetric relationship between players, but for asymmetric relationships among players the explanatory power of Jackson’s network game models is limited. As early as the 1990s, GILLES et al., DERKS and GILLES, van den BRINK and GILLES, van den BRINK, and GILLES and OWEN, among others, began to concentrate on the payoff differences generated by asymmetric relations among players and described this structure with directed graphs. Based on previous conclusions for the directed graph game, SLIKKER et al. conducted further research on this issue; in particular, they discussed the existence of allocation rules that satisfy Component Efficiency and the Hierarchical Payoff Property.
In cooperative games, the basic cooperation relation derives from the interaction between two players, which can take three forms: initiative-passive, passive-initiative, and interactive. It is therefore natural that coalitions with different relation structures yield different payoffs. For a complex group of players, different regulations and rules lead players to form different mutual relations; naturally, the payoff of the group, the allocation of that payoff, and the satisfaction degree of the allocation should take on different characterizations. Compared with a set function, a relation function has advantages in depicting the payoff differences created by different relation structures, so it should be a more powerful tool for describing cooperative games. To a degree, the directed network game has described payoff differences resulting from different directed relations, but more often than not it only discusses specific directed networks. Compared with the directed network game, the cooperative game with a relation function is a generalized game model, of which classical cooperative games, non-directed graph games, and directed network games are all special cases.
In this paper, focusing on the relation structures among players, cooperative games with a relation function are established. As an extension of the allocation rule for classical cooperative games, the Shapley value for games with a relation function is proposed, its properties are proved, and the stability of relation structures is discussed. Further, based on the concept of stability for games with a relation function and the PROMETHEE method, an approach to ranking different relation structures is proposed and verified by a numerical case.
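For reference, the classical Shapley value that the paper extends can be computed by averaging each player's marginal contribution over all join orders; this exhaustive sketch (feasible only for small player sets) uses a characteristic function `v` on frozensets, not the paper's relation-function generalization:

```python
from itertools import permutations
from math import factorial

def shapley_value(players, v):
    """Shapley value: average marginal contribution over all join orders.
    v maps a frozenset of players to the coalition's worth."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            # marginal contribution of p when joining this coalition
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition |= {p}
    n_fact = factorial(len(players))
    return {p: phi[p] / n_fact for p in players}
```

For symmetric games the rule returns equal shares; for the two-player glove game (worth 1 only when both players are present) each player receives 0.5.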
Resources Transfer Time-cost Repetitive Construction Project Scheduling Problem Based on Hybrid Strategy
ZOU Haobo, ZHOU Guohua, YANG Li
2025, 34(1):  62-68.  DOI: 10.12005/orms.2025.0010
Abstract ( )   PDF (1287KB) ( )
References | Related Articles | Metrics
A vast majority of infrastructure construction projects are called repetitive construction projects (RCP) because of their wide distribution and high repeatability. Because of dispersed locations, the large number of resources involved, and the difficulty of transportation, the time and cost of resource transfers between units significantly affect the results of RCP. However, current studies seldom consider the cost of resource transfers, and the way the resource transfer path is determined is relatively simple. In particular, most RCP studies regard the working group as the basic resource allocation unit, which either fails to carry out plans or incurs high idle costs.
This paper studies a resource-constrained multi-mode repetitive construction project scheduling problem with integrated resource transfer costs and tries to find a combination of construction sequence and resource transfer paths that minimizes the total cost. The idea of resource allocation based on the working group is changed: the resource transfer cost and the idle cost caused by resource redundancy are fully considered to satisfy the resource demand of different units, and the quantity and path of resource transfers are determined. An improved genetic algorithm (AEIAGA) is designed to solve the problem, based on a self-adaptive elite retention and mutation strategy and a crossover strategy driven by the iteration number and fitness value.
In the simulation analysis, taking the lower part of a bridge as an example, we find that: 1. The scheme that does not consider the actual resource transfer cost differs considerably from the actual situation, and the plan needs to be significantly adjusted to meet actual project needs. Comparing scenario 1 with scenario 2, the scheduling results of most units are adjusted to meet the actual project requirements: the overall construction period changes from 87 days to 131 days, an increase of 44 days or 50.57%, and the cost increases from 803,758 yuan to 922,544 yuan, by 118,786 yuan or 14.78%. 2. Introducing the resource transfer cost has a more significant impact on the construction sequence, and the resulting schedules are more inclined to transfer resources between units with a lower transfer cost. In scenario 1, the sequence of resource transfers is relatively chaotic and there is a large amount of cross-unit transfer behavior, because resource transfers have little impact; in scenario 3, which considers the influence of resource transfers, most transfer processes take place between adjacent units. 3. Considering the actual cost of resource transfers, the hybrid strategy can effectively reduce the project time and total cost. Compared with scenario 2, the number of resource transfers in scenario 3 is reduced from 40 to 35, yielding a 36-day (27.48%) reduction in duration and a 111,130-yuan (12.05%) reduction in cost, so the overall scheme is significantly optimized. 4. The improvement effect of the proposed genetic algorithm is remarkable, especially in solving large-scale problems, so the algorithm has obvious advantages. When dealing with large-scale problems, the gap between AEIAGA and the comparison algorithms begins to widen, whose mean solution values are 1.039 times and 1.028 times that of AEIAGA, respectively.
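One common way to make crossover and mutation rates depend on the iteration number and fitness value, as the abstract describes, is sketched below. The functional form and all parameter values are illustrative assumptions, not the paper's AEIAGA settings:

```python
def adaptive_rates(gen, max_gen, fit, fit_avg, fit_best,
                   pc_max=0.9, pc_min=0.5, pm_max=0.10, pm_min=0.01):
    """Self-adaptive crossover (pc) and mutation (pm) probabilities for a
    minimization GA: individuals better than the population average get
    lower rates (elite protection), and both rates taper as the iteration
    number grows. Parameter values are illustrative assumptions."""
    if fit <= fit_avg:  # better than average: scale rates by relative quality
        span = max(fit_avg - fit_best, 1e-12)
        pc = pc_min + (pc_max - pc_min) * (fit - fit_best) / span
        pm = pm_min + (pm_max - pm_min) * (fit - fit_best) / span
    else:               # below average: explore with the maximum rates
        pc, pm = pc_max, pm_max
    decay = 1.0 - 0.5 * gen / max_gen  # rates shrink over the run
    return pc * decay, pm * decay
```

Under this scheme the current best individual is disturbed least, which is the usual rationale for combining elite retention with adaptive operators.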
Research on the Method of Purchasing Decision Considering Online Reviews and Ratings
WANG Meiqiang, TU Danyang
2025, 34(1):  69-76.  DOI: 10.12005/orms.2025.0011
Abstract ( )   PDF (1114KB) ( )
References | Supplementary Material | Related Articles | Metrics
Purchasing is a complex behavioral decision-making process that often occurs in consumers’ daily lives. When consumers make product purchases, especially when they first purchase expensive, infrequently bought, or risky products, they collect extensive information to support their decisions. The rich online review information on e-commerce websites becomes a potential resource for consumers to gather product information. Therefore, it is of great significance to explore the emotional attitudes in reviews and combine them with ratings to provide consumers with recommendations that assist decision-making when shopping.
Existing studies can be divided into two categories, namely online review-based and online rating-based approaches to product purchase decisions, with the following four main problems. (1) The data type of the indicators is single. While reviews allow broader and more flexible affective expression, ratings provide a relatively clear emotional benchmark, so combining the two kinds of data can better capture the satisfaction of website users. However, most existing studies focus on only one type of data, ignoring the interdependent and complementary relationship between reviews and ratings. (2) Attribute weights are determined in an overly subjective way. On the one hand, attribute weighting by expert scoring is highly subjective, and the reasonableness of the given weights is open to question. On the other hand, it is hard for consumers to assign attribute preference weights or expectation values in advance, because it is difficult to quantify their preferences and expected needs into specific values in actual purchase decision scenarios. (3) The consumer’s regret psychology is ignored. Existing studies have focused all their attention on evaluating user satisfaction with products rather than consumer purchase psychology, and this satisfaction is hardly perceived by consumers directly. Consumers can usually only perceive regret after purchase, especially when they have missed a better choice. (4) Purchasing behavior is simplified into a selection process without considering product prices. A purchase decision contains the most basic transaction process, which means that consumers need to pay the price in currency to obtain the application value and usage experience of the products. Existing studies have considered only the choice problem and ignored the transactional attributes.
We present a novel product purchase decision method based on online reviews and ratings, considering consumer regret psychology and product prices. Firstly, we use a web crawler to obtain online evaluations of alternative products and process the reviews and ratings separately. Secondly, after pre-processing the reviews, the sentiment dictionary of the product attribute domain is constructed using an improved character sentiment value calculation method, and the sentiment intensity of alternative product attribute reviews and ratings is calculated. Thirdly, since regret will arise after consumers purchase the products, we calculate the regret-rejoice values of alternative product attribute reviews and ratings, respectively, based on regret theory. Finally, we take the regret-rejoice values of reviews and ratings as outputs and prices as inputs, and measure the efficiency values using a game cross-efficiency model with negative indicators to obtain the recommendation order of alternative products. In particular, when the efficiency values are the same, the lower-priced product is ranked higher.
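The regret-rejoice value in the third step is typically modeled in regret theory with an exponential regret function; a sketch under that standard assumption is given below, where `delta` is an assumed regret-aversion coefficient and the reference point for each alternative is taken to be the best of the other alternatives:

```python
import math

def regret_rejoice(u_x, u_ref, delta=0.3):
    """Regret-rejoice function from regret theory: negative (regret) when the
    chosen option's utility u_x is below the reference u_ref, positive
    (rejoice) otherwise. delta is an assumed regret-aversion coefficient."""
    return 1.0 - math.exp(-delta * (u_x - u_ref))

def perceived_utilities(utilities, delta=0.3):
    """Perceived utility of each alternative: its own utility plus the
    regret-rejoice value against the best of the other alternatives."""
    result = []
    for i, u in enumerate(utilities):
        ref = max(w for j, w in enumerate(utilities) if j != i)
        result.append(u + regret_rejoice(u, ref, delta))
    return result
```

Because the exponential is convex, a utility shortfall is penalized more than an equal surplus is rewarded, which is what makes the measure regret-averse.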
The innovations are in four main aspects. (1) Considering both reviews and ratings in online evaluations to maximize the retention of valid information and capture user emotion. (2) Considering the regret psychology of consumers and using regret theory to measure their regret-rejoice psychology, which better matches the subjective psychological activities of consumers in the purchase decision process. (3) Considering the unique nature of product prices, the existing choice problem is refined into a product purchase decision process, which better matches the actual objective material activity. (4) The DEA model is used in the product purchase decision, combining the base point method and the game cross-efficiency model to obtain a game cross-efficiency model that can handle negative indicators.
We demonstrate that the proposed method is clear, operable, and of practical value through an example of a purchase decision among 10 new energy vehicles from AutoHome, and it provides a new idea for further research on product purchase decision methods based on online evaluations.
A shortcoming of this paper is that the attributes given by the platform are used directly without feature re-extraction. Future studies can consider re-extracting attribute features to facilitate multi-platform data fusion. In addition, this paper, like other literature, assumes that positive sentiment words in reviews influence consumers to the same degree as negative sentiment words when identifying sentiment intensity. However, negative sentiment words of the same sentiment intensity may have a greater impact on consumers, which is a direction for future research.
Research on Structure Optimization of Urban Microcirculation Road Network Based on Key Edge Identification
HU Liwei, WANG Xingzhong, ZHAO Xueting, YANG Zhiying, HU Feiyu, YU Xianlin
2025, 34(1):  77-83.  DOI: 10.12005/orms.2025.0012
Abstract ( )   PDF (1595KB) ( )
References | Supplementary Material | Related Articles | Metrics
At present, the road network structure of most cities in China has certain drawbacks: the traffic volume of main and branch roads in some road networks is unevenly distributed, resulting in congestion on main roads and surplus capacity on branch roads. In addition, some cities emphasize main roads and neglect branch ones, making it impossible to effectively connect high-grade roads to low-grade ones within the city. To avoid similar problems in future urban development, it is necessary to introduce microcirculation theory into urban planning, which can reasonably combine roads of various grades and improve road utilization. It can also divert traffic from arterial roads, relieve traffic pressure, and increase the driving speed of vehicles in the city.
In order to study the optimization of the microcirculation organization of the urban internal road network, a bi-level planning model based on key-edge combination optimization is introduced. Firstly, Python is used to crawl the traffic flow data of a certain area in Zhengzhou from October 2020 to November 2021, including traffic volume and running speed, and the collected data are used to perform a one-sample Kolmogorov-Smirnov test in SPSS to redefine the service level of the road network. Secondly, the maximum-increase key edge is used to expand the capacity of the city’s internal branch roads, and the traffic volume of congested main-road sections is diverted to the expanded secondary branches. The SMHD mobile grid method is used to determine the specific locations of the road sections to be opened, and the optimal combination of branch-expansion key edges and road sections to be opened is obtained by simulating the vehicles in the road network with VISSIM. Finally, based on the optimal combination of key edges, an upper-level planning model maximizing the average vehicle running speed and road network smoothness and a lower-level planning model balancing the distribution of traffic flow are constructed to further optimize the road network. The case analysis shows that: (1) compared with the original road network structure, the road network re-optimized after identifying the key sections can increase the average running speed of main-road and branch-road vehicles by 20.43% and 11.29%, respectively; (2) the original E-level service level of main and branch roads is upgraded to D-level, and the original D-level service level is upgraded to C-level; (3) after optimization, the smoothness of roads at all levels is greatly improved, and vehicle operation becomes smoother.
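The one-sample Kolmogorov-Smirnov test mentioned above compares the empirical CDF of the observed data with a hypothesized CDF. Its statistic can be computed directly, as in this self-contained sketch (in practice SPSS, or `scipy.stats.kstest`, would also report the p-value):

```python
import math

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the largest vertical gap
    between the sample's empirical CDF and a hypothesized CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # check the gap just after and just before each jump of the ECDF
        d = max(d, (i + 1) / n - f, f - i / n)
    return d

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
```

Comparing the statistic with the critical value for the sample size then decides whether the hypothesized distribution (e.g., a fitted normal for running speeds) can be rejected.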
Based on the key sections of the road network, a bi-level planning model of the urban microcirculation road network structure is constructed, which effectively alleviates the traffic pressure on main roads and the congestion waiting time of vehicles, improves the smoothness and safety of vehicle driving to a certain extent, and is feasible for microcirculation in actual urban road networks. However, it is still necessary to refine the “capillaries” in the city, such as considering the control of branch-road traffic signals in different areas and adjusting some road sections to “no left turn or no right turn”, which is the focus of future road network research.
Multilevel Coverage Optimization of Mobile Emergency Facility Location for Urban Large-scale Emergencies
LI Jianxun, ZHANG Ruochen, SHANG Yanying, FU Haoxin
2025, 34(1):  84-90.  DOI: 10.12005/orms.2025.0013
Abstract ( )   PDF (1359KB) ( )
References | Related Articles | Metrics
The uncertainty and suddenness of large-scale emergencies can easily make some areas impossible to control effectively, and bring great challenges to the distribution of emergency supplies and medical rescue. Multi-stage coverage optimization of location selection for mobile emergency facilities facing large-scale emergencies requires rapid response, effective disposal, and loss reduction before emergencies occur. Comprehensive coverage of disaster-affected areas can be achieved by reasonably deploying mobile emergency facilities at multiple sites in advance.
This paper explores a mathematical model and a location heuristic algorithm for mobile emergency facility location in large-scale emergencies, which can provide new ideas and methods for research in related fields. When facing emergencies, reasonable location and effective deployment of mobile emergency facilities can improve the efficiency of emergency handling and response capability. Therefore, this study has relatively important theoretical and practical significance, providing theoretical support and methodological guidance for research in related fields and the solution of practical problems.
The model in this paper aims to maximize coverage and discusses how to reasonably deploy a certain number of mobile emergency facilities to cover multiple demand points, under the condition that mobile emergency facilities fully cover demand points. The location-allocation heuristic algorithm is usually more suitable for emergencies such as large-scale natural disasters, where each demand point only needs to be served by a nearby facility, and it can provide a good solution in a relatively short computing time. The solution steps of the location-allocation heuristic algorithm are as follows: (1) Select an initial location for each facility. (2) Determine the optimal allocation of demand points to facility locations. (3) Divide the demand points into subgroups centered on each facility, and deploy the optimal facility location for each subgroup. (4) If any position changes, repeat (2) and (3); otherwise, stop.
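The four steps above map onto a simple alternating location-allocation loop. The sketch below is a generic illustration with assumed inputs (demand points, candidate sites, and a distance function), not the paper's exact procedure:

```python
def locate_allocate(demand, candidates, dist, k, max_iter=100):
    """Location-allocation heuristic following the four steps above:
    (1) pick k initial sites, (2) assign each demand point to its nearest
    facility, (3) relocate each facility to the candidate site that best
    serves its own group, (4) repeat until no site changes."""
    sites = list(candidates[:k])                           # step (1)
    groups = [[] for _ in range(k)]
    for _ in range(max_iter):
        groups = [[] for _ in range(k)]
        for d in demand:                                   # step (2)
            j = min(range(k), key=lambda i: dist(d, sites[i]))
            groups[j].append(d)
        new_sites = [min(candidates,                       # step (3)
                         key=lambda c: sum(dist(p, c)
                                           for p in (groups[i] or [sites[i]])))
                     for i in range(k)]
        if new_sites == sites:                             # step (4)
            break
        sites = new_sites
    return sites, groups
```

Each pass can only reduce the total assignment distance, so the loop terminates at a locally optimal deployment; in the paper's setting `dist` would come from actual road distances rather than coordinates.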
This paper takes the city of Xi’an during the COVID-19 epidemic as the background, and carries out site selection for emergency facilities based on population density under the conventional epidemic prevention policy. According to population density, the centroid of each census area is regarded as representing the total population of the area, and 452 eligible alternative points are selected. The population distribution data of Xi’an are obtained from GPW-V4, the locations of the alternative points are derived from the Shaanxi Provincial Health Commission, and the actual distances between alternative points are obtained through DataMap and Amap. The number of initially deployed facilities has a significant impact on the coverage rate of emergency facilities, so policy makers should complete a certain amount of initial deployment as soon as possible in order to quickly cover areas with high population density. In response to large-scale emergencies, cross-regional coordinated emergency response can often improve the deployment rate of emergency facilities. Follow-up research can focus on how to divide the service demand of mobile facilities more reasonably and optimize emergency service routes, so as to further improve the quality of emergency services in large-scale emergencies.
Game Analysis of Directional Ability Investment Decision on Online Platform
MA Biao, LI Li
2025, 34(1):  91-97.  DOI: 10.12005/orms.2025.0014
Abstract ( )   PDF (944KB) ( )
References | Supplementary Material | Related Articles | Metrics
The development of the Internet has given rise to the advertising business of online platforms, which can deliver targeted advertisements online based on consumers’ behavior. Since online platforms serve both consumers and advertisers, and provide two operation patterns, they face bilateral competition in the consumer market and the advertiser market simultaneously. Specifically, in the consumer market, online platforms provide basic services, such as video on demand and email, while also limiting targeting technology and reducing the invasion of consumer privacy to gain a competitive advantage. In the advertiser market, on the other hand, online platforms improve their targeting ability and the effectiveness of targeted advertising through investment in targeting technology, in order to gain competitive advantages. The two-sided nature of targeting makes it important to weigh the cost of improving advertising effectiveness against the reduction of consumer privacy. This balance is difficult to coordinate and has a huge impact on the revenue and growth of online platforms.
Aiming at this prominent problem, this paper constructs a tripartite game model of the online platform, consumers, and advertisers in two common market structures (a monopoly market and a competitive market), and systematically studies the optimal investment, advertising price, and subscription price of the online platform. Based on the benefit utility profiles of online platforms, consumers, and advertisers, a mathematical model is built. In the analysis and extended models of the game stages in the two market structures, the income process of online platforms is simulated; then, through equilibrium analysis and sensitivity analysis of the benefits of all sides in each market structure, the impact of each factor on the platform’s income level is investigated. Thus, decision references are given for maximizing the online platform’s income.
The results show that there is a linear positive correlation between the online platform subscription price and platform targeting ability, and the advertising interference cost in monopoly market. Meanwhile, there is a linear positive correlation between the advertising price and targeting ability, while there is a linear negative correlation between the advertising price and advertising interference cost. The higher the advertising interference cost, the less willing consumers are to subscribe for free, and they prefer to pay subscription to avoid advertising interference. Therefore, monopolistic online platforms can charge higher prices. And as more consumers opt for paid subscriptions, they become less attractive to advertisers and advertisement prices fall. The best option for a monopoly online platform is to attract half of the advertising market, and the ratio of paying subscribers to interference cost and targeting power is linearly positive. Advertising interference costs increase platform revenue, which is why monopolistic platforms often run advertisements regardless of consumers’ feelings. In the dual-attribution symmetric competition market for advertisers, the revenue of online platforms will increase with an increase in subscription consumers and differentiation level; on the contrary, the revenue of platforms will decrease with an increase in consumer privacy concerns and targeted technology cost parameters. The decreasing advertising market makes the market competition fiercer, so the revenue of online platforms will decrease. In the asymmetric competitive market, homogenization brings more fierce market competition, and online platforms have lower returns. Platforms with low directional ability will choose to give up directional technology investment. This study proves that the advertising platform in the competitive market under the current operating model mainly derives its income from paid subscribers. 
If such a platform fails to attract enough paid subscribers, it will run at a loss, a conclusion also borne out by actual market cases.
This study can provide reference and support for the systematic decision-making of online platforms; however, how consumers’ simultaneous use of multiple online platforms affects these results requires further research.
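The reported comparative statics can be illustrated with a small numerical sketch. The linear pricing rules and all coefficients below are hypothetical (the abstract only reports the signs of the relationships), so this is a sign-check, not the paper’s model:

```python
# Hypothetical linear pricing rules for a monopoly platform, chosen only to
# match the signs reported above: subscription price rises with targeting
# ability t and interference cost c; advertising price rises with t but
# falls with c. Coefficients a1, a2, b1, b2 are illustrative, not estimated.

def subscription_price(t, c, a1=0.6, a2=0.4, base=1.0):
    return base + a1 * t + a2 * c

def ad_price(t, c, b1=0.5, b2=0.3, base=1.0):
    return base + b1 * t - b2 * c

# Sign check of the comparative statics over a small grid.
for t in (0.2, 0.5, 0.8):
    for c in (0.2, 0.5, 0.8):
        assert subscription_price(t + 0.1, c) > subscription_price(t, c)
        assert subscription_price(t, c + 0.1) > subscription_price(t, c)
        assert ad_price(t + 0.1, c) > ad_price(t, c)
        assert ad_price(t, c + 0.1) < ad_price(t, c)
```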
Quadripartite Evolutionary Game Analysis of Urban River Ecological Environment Governance from Perspective of Government Purchase of Services and ENGO Informal Environmental Regulation
ZHANG Yang, HAN Cheng, SHEN Jin, ZHAN Rui
2025, 34(1):  98-104.  DOI: 10.12005/orms.2025.0015
Abstract ( )   PDF (994KB) ( )  
References | Supplementary Material | Related Articles | Metrics
Urban inland rivers are vital components of the urban water environment, playing a crucial role in shaping the aesthetic, ecological, and social structure of cities. These rivers significantly enhance the environmental quality of urban areas, influencing green development, urban landscapes, and the health and well-being of residents. The ideal state of urban inland rivers, characterized by “smooth waterway, clear water, green bank, and scenic beauty”, represents the aspirations of city dwellers. In 2021, the proportion of black and odorous water bodies in Chinese cities was reduced by over 98% through the joint governance efforts of government, market forces, and societal organizations. This achievement demonstrates the effectiveness of multi-stakeholder governance, yet illegal wastewater discharges from industrial enterprises continue to undermine these efforts. The re-emergence of “black and odorous” water in some urban inland rivers indicates that pollution control remains an ongoing challenge. Therefore, regulating enterprises to ensure legal wastewater discharge and addressing the long-term management of urban inland rivers have become a central focus of current policy efforts. In practical terms, the role of government-purchased services in advancing ecological civilization has become increasingly important. Government procurement of services is seen as a key mechanism for guiding multiple stakeholders in controlling pollution, and it is expected to become a crucial measure for the long-term governance of urban inland river ecosystems.
This paper aims to address the potential “government failure” and “market failure” that may arise in multi-stakeholder governance by exploring how local governments incentivize environmental protection social organizations (ENGO) through service procurement. ENGO, in turn, collaborates with the public through informal environmental regulation to supervise enterprises and ensure legal wastewater discharge. To investigate this mechanism, a four-party evolutionary game model is constructed, incorporating local governments, polluting enterprises, ENGO, and the public. The equilibrium of each player in the game is analyzed, resulting in the identification of eleven stable evolutionary points. These points are further classified and analyzed for their stability. Additionally, numerical simulations are conducted to study how changes in key parameters, such as government incentives, ENGO influence, and public participation, affect the stable strategies of the different players involved.
The findings indicate that: (1)Strengthening local government efforts in purchasing services significantly promotes the active engagement of ENGO in service provision, thereby facilitating the system’s convergence toward an ideal stable state. Polluting enterprises exhibit greater sensitivity to subsidies provided by local governments compared to fines. However, the combination of subsidy and fine policies proves more effective in expediting the system’s transition toward the desired equilibrium. Furthermore, raising the environmental protection tax rate serves to incentivize enterprises to engage in lawful pollutant discharge practices. (2)Enhancing the influence and environmental enforcement capacity of ENGO can regulate the legal discharge of pollutants by enterprises to varying degrees, thereby encouraging them to assume greater environmental responsibility. (3)Increasing public reporting efforts has a relatively limited impact on system evolution, although local government incentives have led to a short-term increase in public willingness to report violations. (4)Sensitivity analysis reveals that changes in local government service purchase efforts are more likely to prompt alterations in the decision-making behavior of polluting enterprises, while modifications in ENGO influence and environmental enforcement efforts more significantly affect the decisions of both enterprises and the public. (5)The initial willingness of different stakeholders to participate affects the evolution of the system in varying ways. However, a high level of participation willingness among stakeholders plays a positive role in accelerating the system’s convergence toward the ideal stable state. Specifically, when ENGO with high participation willingness supervises the pollutant discharge behavior of enterprises through informal environmental regulation methods, these enterprises no longer attempt to engage in illegal pollutant discharge practices. 
This indicates that the participation willingness of ENGO is a decisive factor in shaping the pollution control intentions of enterprises, highlighting the urgent need for governments to enhance the participation willingness of ENGO.
In light of these findings, this paper develops a mechanism for the long-term governance of urban river ecosystems under the synergistic effect of formal and informal environmental regulation. The proposed mechanism comprises three primary components: the service purchase mechanism, the incentive and constraint mechanism, and the reputation influence mechanism. Additionally, this paper outlines effective safeguard measures tailored to each component. From the perspectives of both local governments and ENGO, policy recommendations for enhancing urban river ecosystem governance are also presented.
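The evolutionary logic above can be sketched with replicator dynamics. The sketch below simulates only a two-party slice of the four-party model (local government vs. polluting enterprise) with entirely hypothetical payoff parameters; it is meant to show how strategy shares converge toward the ideal stable state, not to reproduce the paper’s model:

```python
# Minimal replicator-dynamics sketch of one two-party slice of the game
# (government: purchase services or not; enterprise: legal or illegal
# discharge). All payoffs are hypothetical; the paper's model couples four
# populations and classifies eleven candidate stable points.

def step(x, y, dt=0.01, benefit=1.5, cost_gov=1.0, subsidy=2.0, fine=3.0,
         cost_legal=1.5):
    # x: share of governments purchasing services; y: share of compliant firms.
    du_gov = benefit - cost_gov + fine * (1 - y)   # gain of purchasing vs not
    du_firm = (subsidy + fine) * x - cost_legal    # gain of legal vs illegal
    x += dt * x * (1 - x) * du_gov
    y += dt * y * (1 - y) * du_firm
    return min(max(x, 0.0), 1.0), min(max(y, 0.0), 1.0)

x, y = 0.3, 0.2
for _ in range(6000):
    x, y = step(x, y)
# Under these payoffs the system converges to the ideal corner (x, y) = (1, 1).
```

Under these assumed payoffs, purchasing services is always worthwhile for the government, and the resulting subsidy-plus-fine exposure makes compliance the enterprise’s better reply once enough governments participate, matching the convergence pattern described above.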
Evolutionary Game Analysis and Empirical Test of Digital Financial Innovation Evolution under Dynamic Reward and Punishment Mechanism
FU Hao, CHENG Pengfei
2025, 34(1):  105-111.  DOI: 10.12005/orms.2025.0016
Abstract ( )   PDF (1491KB) ( )  
References | Supplementary Material | Related Articles | Metrics
As an innovative financial service, digital finance can provide more efficient and convenient financial services for various industries and promote the development of technology, industry and society. However, because digital finance is highly concealed, cross-border, decentralized and intelligent, its rapid development also makes financial supervision more complicated and places higher demands on the supervisory capacity of regulators. Therefore, it is of important theoretical value and practical significance to deeply analyze the dynamics, mechanisms and strategies of digital financial innovation carried out by financial institutions, and to explore how regulators can build a scientific regulatory system that effectively protects the enthusiasm for digital financial innovation while strengthening its supervision.
At present, scholars have studied digital financial innovation and regulation from different perspectives using different methods. However, few studies examine the dynamic evolutionary game process between digital financial innovation and regulation, and most adopt the framework of a general evolutionary game model without fully considering the impact of a dynamic reward and punishment mechanism on digital financial innovation and regulation. This paper builds an evolutionary game model between digital financial innovation and supervision. On the one hand, a dynamic reward and punishment mechanism is introduced to analyze in depth how the intensity of rewards and punishments shapes the dynamic evolution of digital financial innovation behavior when regulators implement such a mechanism. On the other hand, to reflect the dynamic evolution of digital financial enterprises and regulatory institutions more clearly and intuitively, simulation analysis is applied to examine the influence of various parameters on the evolutionary game equilibrium. Finally, by constructing a regression model, the paper further verifies and discusses the internal relationship between digital financial innovation behavior and government regulation through an empirical test.
The results show that when the intensity of reward and punishment is adjusted in time with the probability change of compliance innovation of digital financial enterprises, the game process of game players can reach evolutionary stability, and the effective supervision of regulatory agencies on the innovative behavior of digital financial enterprises can be realized. The intensity of rewards and punishments has different impacts on digital financial enterprises and regulators. When regulators increase the intensity of punishment, the probability of compliance innovation of digital financial enterprises will increase, while the probability of active supervision of regulators will decrease. When regulators increase incentives, the probability of compliance innovation of digital financial enterprises and active supervision by regulators will decrease, and the probability of active supervision by regulators will decrease more obviously. When the cost difference between different regulatory strategies of regulators is smaller and the cost difference between different innovation strategies of digital financial enterprises is larger, digital financial enterprises will tend to make compliance innovation. When the cost difference between different regulatory strategies, the upper limit of rewards and punishments and the cost difference between different innovation strategies of digital financial enterprises are smaller, the regulatory agencies will tend to actively regulate. The regulatory strength will have a significant impact on the digital financial innovation behavior. Within a certain range, when the regulatory strength increases, the digital financial compliance innovation behavior will also increase. When regulators increase the reward for digital financial compliance innovation and the punishment for illegal digital financial innovation, digital financial compliance innovation will be promoted.
In subsequent studies, we will further explore the evolutionary game relationship between digital financial innovation and regulation with more subjects, such as introducing the general public, social media and other actors into the study, so as to make the research results richer and more reliable.
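The stabilizing effect of a dynamic punishment can be illustrated with a one-population replicator sketch (all parameters hypothetical; the paper’s model is a two-sided game between enterprises and regulators). Here the fine is adjusted in proportion to the current violation share, so the payoff gain from compliance shrinks as compliance spreads, which pins down a stable point:

```python
def compliance_dynamics(x0=0.2, f_max=4.0, cost=1.0, steps=4000, dt=0.01):
    """Share x of enterprises innovating compliantly, under a fine that is
    dynamically scaled with the current violation share (1 - x)."""
    x = x0
    for _ in range(steps):
        fine = f_max * (1 - x)          # dynamic punishment intensity
        gain = fine - cost              # payoff gain of compliance vs violation
        x += dt * x * (1 - x) * gain    # replicator update
    return x

x_star = compliance_dynamics()
# Fixed point under these assumptions: fine(x*) = cost, i.e. x* = 1 - cost/f_max = 0.75
```

With a fixed fine instead, the gain would not depend on x and the compliance share would simply drift toward a corner; the dynamic adjustment of intensity is what creates the stable evolutionary point, echoing the stability result described above.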
Supporting the Weak and Suppressing the Strong or Otherwise: Quality Investment and Pricing Game among App Platform and Developers
ZHU Chenbo, REN Zeqiong, CAO Jian
2025, 34(1):  112-119.  DOI: 10.12005/orms.2025.0017
Abstract ( )   PDF (1161KB) ( )  
References | Supplementary Material | Related Articles | Metrics
The booming Internet economy has promoted the rapid growth of App sales, and the quality and price of Apps are two important factors in attracting App users. Besides price competition, App platforms and developers are increasingly focusing on investing in the quality of their Apps. For instance, Google is adding a number of new features to make Android App development easier for developers, which will help developers build Apps that load fast and can be released instantly. However, platforms sometimes also make negative quality investments in Apps. For example, the Apple Store has blocked some features of the Safe Kids App developed by Kaspersky Lab on the ground that they jeopardize the privacy and security of users. How should App platforms and developers optimize quality investment and pricing decisions to maximize their profits? This paper answers this question.
Considering one App platform and two App developers, each developer provides one App, and the two Apps are functionally similar but heterogeneous in quality. This paper applies the Hotelling model to capture the horizontal difference between the two Apps and studies the problem using game theory. In the game, the following events happen sequentially: (1)Both the platform and the two developers make their quality investment decisions on the Apps. (2)Each developer makes the pricing decision on its own App. We assume that both the platform and the two developers are risk-neutral and make decisions to maximize their own profits. We first identify two types of quality investment strategies for the platform: one is to support the weak and suppress the strong, and the other is to support the strong and suppress the weak. For each type of quality investment strategy, we then compare the equilibrium decisions and profits of both the platform and the developers in three scenarios, and analyze the optimal choices of the platform and developers. The three scenarios are: the platform invests in both Apps, the platform invests only in the high-quality App, and the platform invests only in the low-quality App.
The results show that: (1)When the unit quality investment cost of the platform is relatively low, the optimal quality investment strategy of the platform is to support the high-quality App. In this case, the high-quality App developer will improve the quality and raise the price of its App, while the low-quality App developer will reduce both the quality and the price of its App. Meanwhile, among the three scenarios, the profit of the high-quality App reaches its maximum and the profit of the low-quality App its minimum, and a winner-take-all market dominated by the strong App will be formed. (2)When the unit quality investment cost of the platform is relatively high, the optimal quality investment strategy of the platform is to support the low-quality App and meanwhile suppress the high-quality App. In this case, the low-quality App developer will improve the quality and raise the price of its App, while the high-quality App developer will reduce both the quality and the price of its App. Meanwhile, among the three scenarios, the profit of the low-quality App reaches its maximum and the profit of the high-quality App its minimum, and a market of evenly matched competitors will be formed. (3)In general, a platform with a higher unit quality investment cost will choose the strategy of supporting the weak and suppressing the strong. This strategy is more conducive to the development of the App market, as it promotes the coordinated development of strong and weak Apps and prevents a monopoly of the App market. (4)A large difference in the maturity of the two Apps can render the platform’s quality investment ineffective. Therefore, when differences in the maturity of Apps on a platform are small, it is easier to improve App quality and promote the healthy development of the App market.
The main contributions of this paper are: (1)For two Apps with different maturity levels and quality heterogeneity, we study the optimal quality investment strategy of the platform under different investment costs, and summarize two types of quality investment strategies for the platform, which has not been covered in the existing literature. (2)We find that it is more conducive to the healthy development of the App market to maintain a high unit quality investment cost for the platform, because a high unit quality investment cost will push the platform to choose the strategy of supporting the weak and suppressing the strong, which will lead to forming a market filled with evenly matched competitors.
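The pricing stage of such a Hotelling setup can be sketched numerically. The demand split and closed-form prices below are the textbook Hotelling-with-quality results under assumed parameter values (unit line, travel cost t, zero marginal costs); they are not the equilibrium expressions of this paper, whose game also includes the quality investment stage:

```python
def demand_high(p1, p2, q1, q2, t):
    """Market share of the higher-quality App located at 0 on a unit Hotelling
    line: the indifferent consumer sits at z* = 1/2 + ((q1-q2) - (p1-p2))/(2t)."""
    z = 0.5 + ((q1 - q2) - (p1 - p2)) / (2 * t)
    return min(max(z, 0.0), 1.0)

def equilibrium_prices(q1, q2, t):
    """Nash prices of the pricing subgame with zero marginal cost:
    p_i = t + (q_i - q_j) / 3 (textbook Hotelling result)."""
    d = q1 - q2
    return t + d / 3, t - d / 3

q1, q2, t = 1.2, 0.9, 1.0
p1, p2 = equilibrium_prices(q1, q2, t)
share = demand_high(p1, p2, q1, q2, t)

# Best-response check: a small unilateral price deviation by developer 1
# cannot raise its profit p1 * demand at the candidate equilibrium.
profit = lambda p: p * demand_high(p, p2, q1, q2, t)
assert all(profit(p1) >= profit(p1 + e) for e in (-0.05, 0.05))
```

In this toy instance the higher-quality App charges more yet still captures the larger share, illustrating the strong App’s built-in advantage before any platform investment enters the game.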
Research on Risk Management of International Development of Wind Power Enterprises in China Based on Multi-objective Optimization
YANG Yanyan, YANG Hongjuan, LI Jialian, YANG Jinlong
2025, 34(1):  120-125.  DOI: 10.12005/orms.2025.0018
Abstract ( )   PDF (934KB) ( )  
References | Supplementary Material | Related Articles | Metrics
The international development of wind power enterprises in China is one of the strategic measures to achieve enterprise growth and sustainable competitive advantage, and it is of great significance for China’s major strategic needs and for the economic transformation and upgrading required to achieve “carbon peak” and “carbon neutrality”. Risk management in the international development of wind power enterprises has become an increasingly urgent practical issue. However, existing research on the international risks of wind power enterprises is mostly qualitative, lacks a comprehensive and systematic treatment of risk factors, and offers little quantitative or empirical analysis of risk; its practical guidance for risk management and control in the cross-border operation of China’s wind power industry is therefore limited.
This article first systematically identifies, by risk source, 12 risk factors involved in the internationalization process of wind power enterprises along three dimensions: environmental risk, inherent risk, and control risk. These factors include political relations, national economic level, wind power technology development, social stability, the wind power market, wind power assets, human resources, financial resources, international management, technological innovation, marketing, and strategic management. The impact of these risk factors on the international competitiveness and performance of enterprises’ international development is then analyzed.
Secondly, the risk management problem in the international development of Chinese wind power enterprises is abstracted as a multi-objective optimization problem. The known conditions are the risk level of each risk factor, its improvement cost, and the degree of its impact on the competitiveness and international performance of the enterprise. The decision objectives are to minimize enterprise risk, maximize enterprise competitiveness, and maximize international performance. The constraint is that the total improvement cost of all risk factors does not exceed the enterprise’s budget; for some risk factors, such as local political risk or external disruptive technology risk, the enterprise may have to accept the risk and prepare responses because the improvement cost is prohibitive. The key assumptions are that the international development performance of wind power enterprises is affected by the improvement of risk factors, that the expected output of wind power enterprises is inversely proportional to the risk level, and that the competitiveness of wind power enterprises is affected by the improvement of risk factors.
Thirdly, a risk management strategy model for the international development of wind power enterprises in China is constructed. This model is a typical multi-objective 0-1 integer programming model, which uses the weighting method to combine the three objectives linearly into a single objective, thereby transforming the multi-objective problem into a single-objective optimization problem. Three algorithms, namely enumeration, a greedy algorithm, and dynamic programming, are used to solve the model, and their results are compared.
Finally, taking a domestic wind power enterprise as an example, a numerical example is constructed to verify the effectiveness of the model. Based on a comprehensive consideration of the impact of each risk factor on the enterprise’s international performance and competitiveness, managers of the enterprise and experts in the international business of wind power enterprises are invited to quantitatively score the importance and improvement cost of each risk factor; the scores are made dimensionless in a uniform way; 7 different value combinations for the weights of the three decision objectives are set; and the resulting data are substituted into the optimization model for solution. The calculation results indicate that, taking risks, competitiveness and performance in the internationalization process into account, the three risk factors of social stability, marketing, and strategic management should all be improved; the political relations factor should not be improved, and preparations for risk response should be made instead; whether to improve the other risk factors should be determined according to different decision-making preferences.
This study provides methodological support for comprehensively considering the performance, competitiveness, and risk prevention of wind power enterprises in the internationalization process, and proposes risk management improvement strategies for the international development of wind power enterprises in China. Future work can track the actual application effect and adjust the model and algorithm accordingly, making the optimization model increasingly realistic.
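Once the three objectives are folded into one composite score per risk factor by the weighting method, the model reduces to a 0-1 knapsack. The toy instance below uses entirely hypothetical costs and scores and checks that exhaustive enumeration and dynamic programming, two of the three solution methods mentioned above, agree:

```python
from itertools import product

# Hypothetical data: improvement cost c_i and composite weighted score
# s_i = w1*risk_reduction_i + w2*competitiveness_i + w3*performance_i
# (already aggregated) for five risk factors, with a budget cap.
costs  = [3, 2, 4, 1, 5]
scores = [4.0, 3.0, 5.0, 1.5, 6.0]
budget = 7

def solve_enumeration(costs, scores, budget):
    """Brute force over all 2^n improvement decisions."""
    best = 0.0
    for pick in product((0, 1), repeat=len(costs)):
        if sum(c for c, x in zip(costs, pick) if x) <= budget:
            best = max(best, sum(s for s, x in zip(scores, pick) if x))
    return best

def solve_dp(costs, scores, budget):
    """Classic 0-1 knapsack dynamic program over integer budget levels."""
    dp = [0.0] * (budget + 1)
    for c, s in zip(costs, scores):
        for b in range(budget, c - 1, -1):
            dp[b] = max(dp[b], dp[b - c] + s)
    return dp[budget]

assert solve_enumeration(costs, scores, budget) == solve_dp(costs, scores, budget)
```

Enumeration is exact but exponential in the number of factors, while the dynamic program runs in O(n·budget), which is why the two are worth comparing as the factor set grows.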
Pricing, Production and Coordination of Symbiotic Supply Chain
ZHANG Wuyi, YANG Lifan, DAI Jiansheng, WANG Yan
2025, 34(1):  126-133.  DOI: 10.12005/orms.2025.0019
Abstract ( )   PDF (1224KB) ( )  
References | Supplementary Material | Related Articles | Metrics
Industrial symbiosis is an effective way to achieve sustainable production and reduce waste emissions. Unlike traditional emission reduction technologies that focus on reducing waste generation in the production process, it pays more attention to the reuse of waste between different entities or supply chains. From the perspective of operations and supply chain management, it also differs from the traditional reverse supply chain, which focuses on recycling waste or used products within a single supply chain: a symbiotic supply chain is concerned with the reuse of waste between originally independent supply chains. In a symbiotic supply chain, the core business of the symbiotic supplier is not to provide recycled materials to the manufacturer but to provide a final product to its own end market. This makes the supply of recycled materials uncertain and thus creates decision-making difficulties for the symbiotic supplier. Additionally, the emergence of symbiotic suppliers changes the structure of the manufacturer’s supply side, which brings operational and supply chain management challenges to the manufacturer, who must consider how to choose between recycled materials and raw materials. Furthermore, designing contracts that coordinate a symbiotic supply chain so as to balance production among all parties and maximize profits for all is an important and complex issue. Unlike traditional coordination, which focuses on the intra-supply-chain level, a symbiotic supply chain essentially involves coordination between two supply chains, and little existing research addresses this coordination issue.
This paper studies the optimal pricing of recycled materials and the optimal production decisions of all parties in a symbiotic supply chain, based on the potential for symbiotic cooperation between two manufacturers, and designs contracts to coordinate the supply chain. First, we construct a symbiotic model consisting of two supply chains that reach symbiotic cooperation, with a symbiotic supplier in a dominant position, a manufacturer in the following position, and the wholesale price of raw materials as an exogenous variable. Second, we concentrate on the production decisions of the manufacturer and symbiotic supplier and the pricing of recycled materials. Finally, a revenue-sharing contract is designed to coordinate the supply chain, and the feasible region of the contract under different conditions, as well as the impact of various parameters on the feasible region, is analyzed.
The results indicate that: 1.When the price sensitivity coefficient faced by the manufacturer is relatively low, there is a shortage of recycled materials; when this coefficient increases moderately, supply and demand reach equilibrium; up to this point, the symbiotic supply chain operates in a fully symbiotic mode. When the coefficient increases further, oversupply results, and a partially symbiotic mode is formed. 2.In the decentralized decision scenario, the manufacturer chooses its optimal production quantity according to its own conditions, whereas the symbiotic supplier may not reach the optimal production quantity. 3.Under certain conditions (e.g., when the manufacturer faces a medium price sensitivity coefficient), symbiotic cooperation may have a disruptive impact on the operational decisions of the symbiotic supplier: its primary business shifts from supplying the product to its end market while “incidentally” supplying recycled materials to the manufacturer, to primarily supplying recycled materials to the manufacturer while “incidentally” providing products to the end market. In other words, for the symbiotic supplier, the utilization of recycled materials not only brings additional revenue but also fundamentally changes its operational decisions. 4.For the manufacturer, the centralized and decentralized decision-making scenarios coincide only when the manufacturer faces a very small price sensitivity coefficient. For the symbiotic supplier, the two scenarios coincide when the manufacturer faces a very small or a very large price sensitivity coefficient. In the other cases, the manufacturer’s final product quantity is smaller than that in the centralized scenario. 5.Under certain conditions (e.g., when the manufacturer faces a large price sensitivity coefficient), a revenue-sharing contract enables the symbiotic supply chain to achieve the desired results in the decentralized decision-making scenario. A revenue-sharing contract is more beneficial to the symbiotic supplier: it achieves coordination when the supply of recycled materials exceeds demand, but not otherwise.
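The coordination logic of a revenue-sharing contract can be illustrated in a deliberately stripped-down setting (linear demand, zero costs, all symbols hypothetical and far simpler than the symbiotic model above): the manufacturer pays w per unit of recycled material and keeps a share phi of its revenue, and coordination means it voluntarily chooses the centralized quantity.

```python
a, b = 10.0, 1.0   # inverse demand p(q) = a - b*q for the final product

def q_centralized():
    """Quantity maximizing total chain revenue (a - b*q) * q."""
    return a / (2 * b)

def q_decentralized(w, phi):
    """Manufacturer keeps share phi of revenue and pays w per unit of input:
    max_q  phi*(a - b*q)*q - w*q   =>   q = (phi*a - w) / (2*phi*b)."""
    return (phi * a - w) / (2 * phi * b)

# A positive input price alone distorts quantity downward (double
# marginalization); a revenue share with w = 0 restores the centralized output.
assert q_decentralized(2.0, 1.0) < q_centralized()
assert q_decentralized(0.0, 0.5) == q_centralized()
```

In richer settings such as the symbiotic model above, the share phi also redistributes profit between the parties, which is why the contract’s feasible region, and whether it can coordinate at all, depends on the supply-demand regime.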
A Complementarity Model for Supply Chain Equilibrium with All Free Gift with Purchase and its Solution Properties
WANG Hengdi, JIE Zhixin, SUN Hongchun
2025, 34(1):  134-140.  DOI: 10.12005/orms.2025.0020
Abstract ( )   PDF (1024KB) ( )  
References | Supplementary Material | Related Articles | Metrics
This paper investigates a multi-node, multi-level and interactively connected supply chain network consisting of manufacturers, retailers and consumer markets. The nodal enterprises in the network both compete with and depend on each other because of their respective benefit objectives, so a change in any nodal enterprise at one link may trigger changes throughout the whole supply chain network. Supply chain network equilibrium is built on each nodal enterprise maximizing its own interests while the interests of other members are taken into account, so that the overall supply chain network can be optimized through collaboration among all members. This approach can incorporate the internal and external factors that affect supply chain network equilibrium, and it provides a scientific basis for exploring the optimal decisions of nodal enterprises under the intervention of internal and external forces. Studying supply chain networks from the perspective of global equilibrium optimization is thus of great theoretical significance and practical value.
In recent years, the supply chain network equilibrium problem has attracted great interest in both application and academic research, covering modeling, analysis and computation. This paper aims to establish a nonlinear complementarity model for the supply chain equilibrium problem. The work differs from previous research in that it focuses on two conditions: a manufacturer cannot sell all of its products, and both manufacturers and retailers use all free gift with purchase to promote sales. On this basis, the optimizing behaviors of the various decision-makers are modeled by KKT conditions; the equilibrium conditions of the manufacturers, the retailers and the consumer markets are derived respectively; and a supply chain equilibrium nonlinear complementarity model with all free gift with purchase and unbalanced production and marketing is presented. Some properties of the equilibrium pattern, namely existence, boundedness and global uniqueness, are provided. In particular, the (strict) monotonicity of the functions involved in the model is shown. On this basis, the algorithm proposed by CHEN et al. (2021) is used to solve the model, and the global convergence and R-linear convergence rate of the algorithm are given.
Finally, we illustrate the model through several numerical examples, in which the equilibrium prices and product shipments are computed under a variety of scenarios, and the behavior of the equilibrium price and product shipment pattern is investigated with respect to changes in relevant parameters such as the number of products given away and the retailer’s management cost. The numerical results indicate that, within a certain range of gift quantities, both the equilibrium price and the product shipment quantity increase as the quantity of gifts given by manufacturers to retailers and by retailers to consumer markets increases. The validity and reliability of the model are thus verified.
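For readers unfamiliar with complementarity models, the computational pattern can be sketched on a toy instance. The affine map and basic projection iteration below are generic illustrations of solving NCP(F) (find x ≥ 0 with F(x) ≥ 0 and x·F(x) = 0) under monotonicity; they are not the algorithm of CHEN et al. (2021) used in the paper, and the data are hypothetical:

```python
def F(x):
    """A monotone affine map F(x) = M x + q with M positive definite
    (hypothetical 2x2 data standing in for the equilibrium conditions)."""
    M = [[2.0, 0.5], [0.5, 3.0]]
    q = [-4.0, 2.0]
    return [sum(M[i][j] * x[j] for j in range(2)) + q[i] for i in range(2)]

def solve_ncp(x0, gamma=0.2, iters=2000):
    """Basic projection iteration x <- max(0, x - gamma*F(x)); a contraction
    for small gamma when F is strongly monotone."""
    x = list(x0)
    for _ in range(iters):
        fx = F(x)
        x = [max(0.0, x[i] - gamma * fx[i]) for i in range(2)]
    return x

x = solve_ncp([1.0, 1.0])
fx = F(x)
# At the solution, each pair satisfies complementarity: min(x_i, F_i(x)) = 0.
```

At the computed point the first component is interior (F₁ ≈ 0, x₁ > 0) while the second sits at the bound (x₂ = 0, F₂ > 0), the typical complementarity pattern of an equilibrium with an inactive market.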
However, some factors affecting supply chain network equilibrium, such as uncertain market demand, time-varying retail prices and carbon emission policies, are not considered in this paper. A future research direction is to explore the equilibrium of a perishable low-carbon supply chain network with random demand and all free gift with purchase. In addition, this work is supported by the Natural Science Foundation of China (12071250) and the Shandong Provincial Natural Science Foundation (ZR2021MA088). The authors gratefully acknowledge the valuable comments of the editor and the anonymous reviewers.
Research on Co-opetition Strategy of Main Manufacturer and Aftermarket Vendors of Complex Products Considering Quality Awareness
DU Pengqi, CHEN Hongzhuan
2025, 34(1):  141-147.  DOI: 10.12005/orms.2025.0021
Abstract ( )   PDF (1073KB) ( )  
References | Supplementary Material | Related Articles | Metrics
In recent years, the trade friction between China and the United States, coupled with COVID-19, has had a major impact on the development of China’s manufacturing industry, and China has timely proposed building a new “double circulation” development pattern. Building a superior manufacturing industry chain and promoting it toward the high end is fundamental to implementing this new development pattern and is the key to making the two circulations reinforce each other. At the same time, the industrial chain of complex products, representative of China’s high-end manufacturing, is still imperfect: its presence is heavier in the upstream development field led by the main manufacturer than in the downstream after-sales service field, which also carries high added value. The new “double circulation” development pattern is pushing the main manufacturer to implement a “brand extension” strategy, continuing to extend toward the high end of the value chain and penetrating the aftermarket service field for complex products. It is worth noting that in the current Chinese market for aftermarket services of complex products, the main manufacturer mainly assists the aftermarket vendors in providing services; however, with the promotion of the new “double circulation” pattern and the implementation of the “brand extension” strategy, the main manufacturer has begun to consider introducing its own service brand to compete with aftermarket vendors. Obviously, whether the main manufacturer introduces its own service brand will affect the related enterprises and the supply chain system. In addition, quality awareness is the basis of quality behavior, and quality behavior is the external expression of quality awareness.
The safety incidents of complex products caused by aftermarket service quality problems have made supply chain decision makers realize that it is necessary to pay attention to the role of quality awareness in the aftermarket service quality control of complex products. Therefore, combining the characteristics of the development of collaborative services for complex products in China at the present stage, this paper discusses in depth the internal influence relationship between the main manufacturer and aftermarket vendors, and analyzes the impact of the main manufacturer’s own service brand introduction strategy and the quality awareness of aftermarket vendors on the equilibrium of the supply chain, which is important for realizing the optimization and upgrading of the “R&D-manufacturing-service” industrial chain of complex products in China.
In view of this, taking the complex product aftermarket service supply chain composed of the main manufacturer and aftermarket vendors as the research object, and considering the market situation in which the main manufacturer introduces its own service brand and the relationship with aftermarket vendors shifts from cooperation to co-opetition, two Stackelberg game models are constructed based on the quality awareness parameter of the aftermarket vendors’ advantage: (1) the aftermarket vendor is the leader and the main manufacturer introduces its own service brand; (2) the aftermarket vendor is the leader and the main manufacturer does not introduce its own service brand. By comparing the equilibrium results obtained, the influence of the quality awareness level and the introduction of the manufacturer’s own service brand on supply chain equilibrium is analyzed.
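The backward-induction logic of such a Stackelberg game can be illustrated with a deliberately simple numerical sketch. Everything below (linear demand with intercept `a` and cross-price effect `b`, two price-setting players) is a hypothetical stand-in, not the paper’s quality-awareness model: the follower’s closed-form best response is derived first, and the leader then optimizes while anticipating it.

```python
import numpy as np

a, b = 10.0, 0.5  # hypothetical demand intercept and cross-price effect

def follower_best_response(p_v):
    # Follower maximizes p_m * (a - p_m + b*p_v); the first-order condition
    # gives the closed-form reaction p_m = (a + b*p_v) / 2
    return (a + b * p_v) / 2.0

def leader_profit(p_v):
    # Leader's profit, anticipating the follower's reaction
    p_m = follower_best_response(p_v)
    return p_v * (a - p_v + b * p_m)

# Stage 1: leader optimizes over a fine price grid (backward induction)
grid = np.linspace(0.0, a, 100001)
p_v_star = grid[np.argmax(leader_profit(grid))]
p_m_star = follower_best_response(p_v_star)
```

Here the grid search recovers the analytic leader optimum p_v = a(2+b)/(2(2-b^2)); in richer models with quality decisions the same two-step structure is solved symbolically or numerically.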
The results show that when the aftermarket vendors improve their quality awareness level, although this is conducive to cooperation between the two parties and to the profit of aftermarket vendors, it causes the aftermarket vendors to have a quality cannibalization effect on the main manufacturer and is not necessarily conducive to the profit of the main manufacturer. Whether and when the main manufacturer will introduce its own service brand depends on its own quality cost coefficient and the quality awareness level of the aftermarket vendors. When the quality awareness of aftermarket vendors is at a high level, the main manufacturer’s introduction of its own service brand may be beneficial to aftermarket vendors, and at this time, the main manufacturer and aftermarket vendors can achieve a “win-win” situation. In addition, the entire supply chain system may benefit from a higher quality cost scheme for the main manufacturer, although the main manufacturer’s interests may be affected. In general, it is expected that the results of this study will provide some reference for the management practices of the main manufacturer and aftermarket vendors in a competitive game environment. There are still some limitations and shortcomings in the research process. Firstly, in this paper, the relationship between the main manufacturer and aftermarket vendors is a spontaneous and active cooperative one; subsequent research can try to design a contractual mechanism or authorization mechanism to deepen cooperation between the main manufacturer and aftermarket vendors. Secondly, after the main manufacturer introduces its own service brand, the aftermarket service will be divided between the main manufacturer and aftermarket vendors. Subsequent research can build a co-opetition game model considering the share ratio of aftermarket service to study how the share ratio affects the decision making of enterprises in the aftermarket service supply chain of complex products.
Scheduling Method and Application of Multi-mode Resources-constrained Projects with Ignorant Tardiness Duration
DU Yuanwei, YUAN Ye
2025, 34(1):  148-155.  DOI: 10.12005/orms.2025.0022
With the increasing complexity of project implementation and resource occupation, external uncertainties and risks are increasing, and the reasonable scheduling of limited resources and activities has become the key to project scheduling. In order to better guide social production practice, scholars have further extended the model of the resource-constrained project scheduling problem (RCPSP) and studied related issues. Due to the uncertainty of the project implementation environment, activities face inaccurate time estimation, insufficient resource updates and other human factors, which delay project activities. The tardiness of activities raises the uncertain risks and costs of the project, and may even have a “ripple effect” on follow-up activities, affecting the overall stable operation of the project. Therefore, experts are needed to estimate the tardiness probability in the ignorant situation. However, current research generally assumes that there is experience to follow when judging whether an activity is delayed, and that the experts who judge the tardiness probability are omniscient; when a real problem is in an ignorant situation, they cannot accurately express the tardiness probability of the project activity. In this context, addressing the challenge of scheduling under such conditions of ignorance becomes crucial for the successful management of projects. In this paper, combined with the generalized combination rule, the ignorance of expert prediction is overcome, and the scheduling method and application of multi-mode resource-constrained projects with ignorant tardiness duration are studied. This provides a new idea for solving the project scheduling problem under uncertain circumstances and a basis for the corresponding management practice.
In this paper, a robust optimization model of the multi-mode resource-constrained project scheduling problem with tardiness ignorance is constructed, and a genetic algorithm is used to optimize it. By maximizing the robustness of the project, we can reduce the influence of activity delays and ensure the stability of project operation. In the process of calculation, we find that the more resources an activity occupies, the greater the importance of the activity in the project and the higher its priority. This correlation is particularly important because it allows resources to be prioritized based on the criticality of activities, which in turn can significantly affect the overall project timeline and success. This paper assumes a positive relationship between the importance of an activity and the allocation of renewable and non-renewable resources it needs. Through the robust optimization of project tardiness, this paper explores how to reasonably determine the duration-resource mode of each project activity under the constraints of activity priority, project duration, and renewable and non-renewable resources.
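As a minimal illustration of the schedule-construction core that such a genetic algorithm searches over, the sketch below applies the serial schedule generation scheme to a toy single-mode, single-resource instance (all project data are hypothetical; the paper’s model additionally covers multiple execution modes, non-renewable resources and the robustness objective). The instance is small enough that all precedence-feasible priority lists can be enumerated in place of an evolutionary search.

```python
import itertools

# Toy project: activity -> (duration, resource demand, predecessors); hypothetical data
ACTS = {
    1: (0, 0, []),        # dummy start
    2: (3, 2, [1]),
    3: (2, 2, [1]),
    4: (2, 1, [2]),
    5: (4, 2, [2, 3]),
    6: (0, 0, [4, 5]),    # dummy end
}
CAPACITY = 3  # renewable resource availability per period

def serial_schedule(priority):
    """Serial schedule generation: place activities in priority order at the
    earliest precedence- and resource-feasible start time."""
    start, finish = {}, {}
    usage = {}  # period -> resource units in use
    for act in priority:
        dur, dem, preds = ACTS[act]
        t = max((finish[p] for p in preds), default=0)
        while any(usage.get(t + k, 0) + dem > CAPACITY for k in range(dur)):
            t += 1
        start[act], finish[act] = t, t + dur
        for k in range(dur):
            usage[t + k] = usage.get(t + k, 0) + dem
    return start, finish

def feasible_orders():
    # All activity orders in which every predecessor precedes its successors
    for perm in itertools.permutations(ACTS):
        pos = {a: i for i, a in enumerate(perm)}
        if all(pos[p] < pos[a] for a, (_, _, ps) in ACTS.items() for p in ps):
            yield perm

# Exhaustive search over priority lists; a GA would evolve them instead
best = min(feasible_orders(), key=lambda o: serial_schedule(o)[1][6])
makespan = serial_schedule(best)[1][6]
```

A GA would instead evolve the priority lists (and mode assignments) and score each chromosome with this decoding step, here replacing the makespan objective with the robustness measure.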
In order to verify the validity of the model, this paper constructs an ecological security supervision network of marine ranching. As a key measure to develop the marine economy and ecological fishery in China, marine ranching plays a significant role in conserving fishery resources, coping with environmental damage and improving ecological benefits, but in recent years ecological problems of marine ranching have arisen frequently. Given the nascent stage of marine ranching construction and the scarcity of historical experience and data, optimizing the robustness of the marine ranching ecological security supervision network under tardiness ignorance is both a theoretical challenge and a practical necessity. In this paper, the work breakdown structure method is used to divide the marine ranching ecological supervision project into four stages: the design stage, the marine ranching ecological security monitoring and evaluation project, the marine ranching ecological security early warning project and the marine ranching ecological emergency decision-making project. The final optimization results verify the effectiveness of the robust optimization model of the tardiness-ignorant MRCPSP, and provide theoretical guidance for the development of inexperienced projects.
Price Decision-making in a Gray Market Supply Chain Considering Information Traceability for Gray Market Products
FENG Ying, XUAN Biao, XU Rong, FENG Yangchao, ZHANG Yanzhi
2025, 34(1):  156-163.  DOI: 10.12005/orms.2025.0023
The price difference caused by manufacturers’ price discrimination strategies in different regions is the fundamental reason for the emergence of the gray market. In most cases, consumers cannot accurately know the source information of product channels, so some speculators may disguise gray market products as authorized products for sale, so as to obtain higher profits. As a result, consumers in the high-price market who expect to buy licensed products may end up buying gray market products at a high price, which seriously damages their interests. By introducing RFID or blockchain information-tracing technology, manufacturers can effectively curb the improper behavior of gray market speculators who pass off inferior products as superior ones, and reduce the probability that consumers who expect to buy high-priced licensed products end up buying gray market products.
This paper considers a gray market supply chain composed of a brand manufacturer and a retailer participating in gray market speculation. We build three game models dominated by the manufacturer in three cases, and explore the impact of the information-tracing technology adopted by the manufacturer on the decision-making and operation of the gray market supply chain. Firstly, taking no traceability technology as the benchmark model, we find that the probability of consumers buying high-priced products and the preference coefficient of consumers for gray market products directly or indirectly affect the sales of the three types of products, and in turn the profits of the manufacturer and the retailer. Increasing the probability of consumers buying high-priced products has a dual effect: on the one hand, it can inhibit the expansion of the gray market; on the other hand, it may also bring additional benefits to the manufacturer and improve the efficiency of the supply chain. Subsequently, considering the application of traceability technology that can trace the source of product channels, the RFID technology and blockchain technology are introduced in turn. We find that the introduction of different information traceability technologies has a significant impact on the price and demand of authorized channel products. In addition, adopting information traceability can effectively inhibit the expansion of the gray market, but the effects vary from one technology to another. Due to the cost-free “free rider” phenomenon, the retailer always benefits from the blockchain technology, but the manufacturer’s profit or loss is closely related to the fixed cost of the blockchain. When the costs of the two traceability technologies are equal, the manufacturer can obtain a higher profit by using the blockchain than RFID.
Furthermore, whether the manufacturer chooses to introduce the tracing technology is closely related to his goals. If the goal is to curb gray market expansion, he should introduce the tracing technology, and which technology he chooses depends on the marginal cost of RFID: when the cost is low (high), the inhibition effect of RFID (the blockchain) is better than that of the blockchain (RFID). If the goal is global expansion, he should either not introduce the tracing technology or introduce the blockchain technology; introducing RFID is not conducive to global expansion. If the goal is to maximize his own profit, the manufacturer should introduce (not introduce) the blockchain when the cost of the blockchain is low (high). Whether to introduce RFID is likewise closely related to its marginal cost; under the same cost expenditure, introducing the blockchain is better than introducing RFID. In practice, manufacturers should decide whether to introduce the information-tracing technology according to their own goals.
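The demand-side mechanism can be sketched with a stylized vertical-differentiation model (hypothetical prices `p_a`, `p_g` and uniform consumer valuations; the paper’s game model is considerably richer): as traceability makes channel information visible, consumers discount gray products more heavily, which shrinks gray-market demand and expands authorized demand.

```python
p_a, p_g = 0.6, 0.2  # hypothetical authorized and gray-market prices

def demands(delta):
    """Consumer valuations v ~ U[0,1]; utility v - p_a for authorized goods
    and delta*v - p_g for gray goods (delta < 1 is the gray discount)."""
    v_hi = (p_a - p_g) / (1.0 - delta)   # indifferent: authorized vs gray
    v_lo = p_g / delta                   # indifferent: gray vs not buying
    d_auth = max(0.0, 1.0 - min(1.0, v_hi))
    d_gray = max(0.0, min(1.0, v_hi) - max(v_lo, 0.0))
    return d_auth, d_gray

# Stronger traceability -> consumers discount disguised gray goods more heavily
results = [demands(delta) for delta in (0.5, 0.4, 0.3)]
```

In this toy setting, lowering the gray discount factor from 0.5 to 0.3 drives gray-market demand to zero while authorized demand rises, mirroring the inhibition effect described above.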
The conclusions of this paper provide theoretical supports for manufacturers to introduce the information-tracing technology in the gray market environment, and enrich the research on gray market supply chain pricing. The research can be further expanded. For example, we consider the incentive of information traceability to consumers’ purchase behavior, but do not involve the incentive to restrain retailers’ gray-market speculation. We study the specific case where the consumer valuation follows uniform distribution, which can be extended to the case of general probability distribution in the future. These problems will become the focus of our next research work.
Research on DEWMA-RZ Control Chart and its Application in Food Production
HU Xuelong, SUN Guan, ZHANG Jin, QIAO Yulong, LIU Wei
2025, 34(1):  164-170.  DOI: 10.12005/orms.2025.0024
With the continuous improvement of the market economy system, the focus of market competition has gradually shifted from production and scale to quality and service. Therefore, to gain an advantage in fierce market competition, it is crucial to implement a quality strategy. Quality management focuses on the effective control of the production process, and quality inspection is one of its key aspects. How to implement quality inspection strategies in production to achieve the dual goals of quality improvement and cost saving is a challenge that modern manufacturing managers need to solve. Statistical Process Control (SPC), as an important quality management technique, provides many statistical tools to monitor the production process, among which the control chart is considered one of the most widely used. The control chart is often used to monitor and analyze the quality characteristics of products in the process, improve product quality and ultimately bring down production costs for enterprises. In recent years, research on control charts for monitoring the ratio of two normal variables (RZ) has become one of the important directions of SPC, and it plays a significant part in the actual production process.
In production scenarios, when the product specification is related to the relative ratio of two components in a mixture, or when the ratio represents the quality characteristics of the product, or the difference between a product’s quality measurement before and after an operation (such as a chemical reaction), the control chart for monitoring RZ can be applied to ensure the stability of the process and make the product quality meet the production expectations.
The traditional Shewhart chart is weak in monitoring small or moderate shifts in the process, while the Exponentially Weighted Moving Average (EWMA) chart can improve on the Shewhart chart by making full use of previous samples’ information. To further improve the sensitivity of the traditional RZ chart to small or moderate process shifts, this paper puts forward several new EWMA schemes for monitoring RZ based on the traditional EWMA-RZ chart. First, by weighting the smoothing coefficient of the EWMA-RZ control chart twice, this paper puts forward the Double EWMA (DEWMA) RZ control chart. Second, the performance of a control chart can be improved by adopting the Variable Sampling Interval (VSI) strategy in the actual production process, while the traditional RZ control chart usually adopts the Fixed Sampling Interval (FSI) strategy. Therefore, this paper further introduces the VSI strategy into the DEWMA-RZ control chart, and puts forward the VSI-DEWMA-RZ control chart. Third, Monte Carlo (MC) simulation is used to simulate the run length (RL) distribution of the proposed control charts, and a bisection search algorithm is used to calculate the control limit coefficient and warning limit coefficient. Under different combinations of the chart parameters, the performances of the VSI-DEWMA-RZ, VSI-EWMA-RZ and DEWMA-RZ charts are analyzed and compared. The results show that the VSI-DEWMA-RZ chart is superior to the DEWMA-RZ chart, and superior to the existing VSI-EWMA-RZ control chart for monitoring small process shifts. Finally, this paper uses food formulation data to illustrate the practical application of the VSI-DEWMA-RZ and DEWMA-RZ control charts: the weights of “pumpkin seeds” and “flaxseeds” in food processing are monitored in an example from a food processing plant, which further illustrates the superiority of the proposed control charts.
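A minimal sketch of the DEWMA-RZ statistic under illustrative in-control parameters (the subgroup size, means and smoothing constant below are all assumptions; control limits and the bisection search for their coefficients are omitted) shows the double exponential smoothing of the ratio statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, T = 0.2, 5, 2000              # smoothing constant, subgroup size, subgroups
mu_x, mu_y, sigma = 10.0, 20.0, 1.0   # hypothetical in-control parameters
z0 = mu_x / mu_y                      # in-control target ratio

ewma, dewma = [], []
e = d = z0                            # start both charts at the target ratio
for _ in range(T):
    # Ratio of subgroup means of the two normal quality characteristics
    z_hat = rng.normal(mu_x, sigma, n).mean() / rng.normal(mu_y, sigma, n).mean()
    e = lam * z_hat + (1 - lam) * e   # EWMA of the ratio statistic
    d = lam * e + (1 - lam) * d       # second smoothing -> DEWMA
    ewma.append(e)
    dewma.append(d)

var_e, var_d = np.var(ewma), np.var(dewma)
```

The lower variance of the doubly smoothed series is what makes the DEWMA chart sensitive to small sustained shifts once suitably tight control limits are placed around it; run-length properties would then be estimated by repeating such simulations.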
The major goal of this paper is to construct an improved EWMA control chart model for monitoring RZ and to further improve the sensitivity of the traditional RZ control chart to small process shifts. Therefore, this paper has both theoretical and practical significance. Theoretically, it improves the performance of the control chart for monitoring RZ under small process shifts, enriches the theoretical study of the RZ control chart, and provides new ideas and reference bases for improving the performance of control charts for monitoring RZ. In practice, the effective continuous monitoring of RZ by improving the performance of the EWMA-RZ control chart ensures the stability of the production process. When the process is out of control, it can detect small process shifts and trigger an out-of-control signal more quickly. Then, the quality engineer can take action to find and remove the potential assignable causes, and bring the process back into control. Therefore, this study is of great practical importance for improving production efficiency and product quality for enterprises.
Future work can investigate the effect of measurement error on the monitoring process for the proposed control charts in this paper. Moreover, the current studies on RZ control charts are basically based on the assumption of independent normal observations of the two quality characteristics. However, due to the high frequency of sensor data collection, autocorrelation may exist between consecutive observations of X and Y. Therefore, subsequent studies can be centered on constructing an autocorrelated RZ control chart.
Application Research
Evolutionary Game Analysis of Over-treatment Behavior in Public Hospitals under Medical Service Income Proportion Regulation
XU Xiao, QI Yong, HOU Zemin
2025, 34(1):  171-177.  DOI: 10.12005/orms.2025.0025
Since the introduction of the New Healthcare Reform, the Chinese government has been steadfast in addressing the issues of “difficulties and high costs of medical treatment” faced by the public. To address the unreasonable income structure of public hospitals and the distorted allocation of medical industry resources, a new performance assessment indicator system was launched nationwide in 2019. The system uses indicators such as the proportion of medical service income to assess the rationality of the hospital income structure. As a result, the focus of Healthcare Reform has shifted from reducing the proportion of drug income to improving the quality of medical services provided by medical staff. In the face of sustained growth in China’s medical and healthcare costs, it is necessary to explore the root causes of doctors’ over-treatment behaviors in public hospitals and investigate the mechanism of the impact of medical service income proportion regulation on doctors’ “compensation-type over-treatment behaviors” and patients’ medical decision-making. What, then, is the fundamental cause of the over-treatment behavior exhibited by healthcare providers in Chinese public hospitals? What is the optimal and rational range of the medical service income proportion that the government should regulate to fundamentally mitigate the overuse of medical services? Which factors affect doctors’ over-treatment behavior under the constraints of medical service income proportion regulation? These questions are pivotal for fundamentally alleviating the predicament of “difficulties and high costs of medical treatment” and improving the current overutilization of medical care among physicians in public hospitals in China.
This paper presents an evolutionary game theory model to investigate whether the over-treatment behavior can be mitigated by increasing the price of medical services in public hospitals. During the process of solving the evolutionary game model, we initially examine the evolutionarily stable strategies of patients and their correlation with the variations in the strategies of doctors. By establishing the Jacobian matrix, we study the evolutionary stable strategies of the system under different initial conditions and provide an effective range of the medical service income proportion regulation. Additionally, we construct a net income indicator for doctors and examine the factors that affect the doctors’ over-treatment behavior, including medical costs, doctors’ revenue performance factors, doctors’ disguise costs, doctors’ professional ethics and altruistic psychology, a government regulation of medical service income proportion, and doctors’ diagnostic service fees. Our study results demonstrate that regulating the proportion of medical service income can reduce the doctors’ over-treatment behavior within a certain range. However, when the policy reference value exceeds this range, doctors may opt to engage in the over-treatment behavior. Taking factors such as doctors’ professional ethics and altruistic psychology and the imbalance of public hospital income structure into account, we observe that the over-treatment behavior of doctors is essentially a type of “compensatory over-treatment behaviors” caused by the gap between medical service prices and the actual labor value of medical staff. A regulation of the proportion of medical service income can address this type of over-treatment behaviors by increasing the prices of doctors’ diagnostic services and expert consultation fees. 
Moreover, the greater the degree of reward or punishment tied to the medical service income proportion, and the higher the diagnosis and service fees, the more likely doctors are to choose appropriate medical treatments and receive pay commensurate with their actual labor value. Based on these findings, we propose three recommendations for the government and relevant departments. Firstly, the government should actively promote the reform of public hospitals and establish a reasonable reference value range for the regulation of the medical service income proportion. Secondly, public hospitals should provide regular education and training for their employees and create mechanisms for hospital information disclosure and the online evaluation of doctors to enhance trust between doctors and patients. Finally, relevant departments should collaborate to encourage medical technological innovation and enhance the sense of achievement of medical staff. In summary, this study provides valuable insights into the evolution of doctor-patient relationships and the factors influencing doctors’ strategic decisions. The implications of these findings are significant for policymakers and public hospital management seeking to enhance the quality and efficiency of medical services and establish a mutually beneficial doctor-patient relationship.
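The two-population replicator dynamics underlying such a model can be sketched with purely hypothetical payoff parameters (not the paper’s calibration): here the fixed penalty is set large enough that doctors abandon over-treatment, after which patient vigilance, no longer worth its cost, also decays.

```python
# Hypothetical payoff parameters (not the paper's calibration):
r, p, f = 0.5, 1.0, 1.0   # doctor: extra revenue, fixed penalty, vigilance-dependent penalty
L, c = 2.0, 0.5           # patient: avoided loss when vigilant, cost of vigilance

def step(x, y, dt=0.01):
    """One Euler step of the two-population replicator dynamics:
    x = share of doctors over-treating, y = share of vigilant patients."""
    u_doc = (r - p) - f * y   # over-treating minus appropriate treatment
    u_pat = L * x - c         # being vigilant minus trusting
    return (x + dt * x * (1 - x) * u_doc,
            y + dt * y * (1 - y) * u_pat)

x, y = 0.6, 0.6
for _ in range(5000):         # Euler integration to t = 50
    x, y = step(x, y)
```

With these parameters the system converges to the pure state (no over-treatment, no vigilance); in the paper the corresponding stability analysis is carried out via the Jacobian matrix of the dynamics.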
Environmental Regulatory Synergy, Public Participation and Corporate Green Innovation: Evolutionary Study and Case Validation Based on Mining Enterprises
LIU Yiqing, LI Dayuan, LIANG Yanru, XU Chundong
2025, 34(1):  178-185.  DOI: 10.12005/orms.2025.0026
Mineral resources are fundamental to industrial development, but the ecological degradation and pollution externalities resulting from their exploitation have made mining a central focus of government regulatory efforts and public environmental protection initiatives. Notwithstanding the government’s ongoing reinforcement of the environmental policy framework, violations of environmental regulations by mining enterprises persist, prompting a critical evaluation of the effectiveness of green governance implementation. This study addresses two pivotal questions: (1) What is the causal mechanism through which heterogeneous environmental regulation instruments influence green innovation adoption in mining enterprises? (2) How do strategic interactions among governmental entities, civil society, and corporate actors evolve under differentiated regulatory regimes?
Current scholarship predominantly employs econometric analyses to measure average treatment effects of environmental regulations while utilizing simulation approaches to model stakeholder behavior. However, these approaches frequently overlook the contextualized behavioral responses of enterprises operating under institutional constraints. To bridge this theoretical-empirical gap, our research adopts an innovative tripartite methodology integrating “regulatory regime progression analysis, evolutionary game theory modeling, and empirical case validation”. First, we establish a comprehensive typology of regulatory scenarios: (1) coercive regulation (Stick Approach), a pure penalty-based regime; (2) mixed incentives (Carrot-and-Stick Approach), a penalty-subsidy hybrid system; (3) voluntary synergy (Sermon-Enhanced Approach), multi-stakeholder co-regulation incorporating public participation. Second, we operationalize this framework through evolutionary game theory modeling; the simulation analysis incorporates three strategic actors: local governments (as regulatory enforcers), mining enterprises (as regulated entities), and the public (as social supervisors). Through computational iterations, we examine behavioral equilibria and convergence patterns across the three regulatory scenarios. Third, we conduct longitudinal case studies (2015-2022) of NRE and Zijin Mining, employing process-tracing methodology to validate the simulation outcomes. This empirical validation examines how these firms adapted their environmental strategies during China’s regulatory evolution.
Our multi-method analysis reveals three key findings: (1) Regulatory regime progression effects: the transition from pure coercion (Scenario 1) to incentive-mixed regulation (Scenario 2) generated suboptimal green innovation outcomes due to subsidy dependency, while the synergistic regime (Scenario 3) demonstrated superior performance through its dual emphasis on formal institutions and informal social norms. (2) Stakeholder dynamics: evolutionary paths exhibited distinct patterns: governmental actors showed a U-shaped participation curve, enterprises a monotonic increase in green innovation adoption, and public participation an inverted U-curve, peaking under Scenario 2 and declining under Scenario 3. (3) Implementation paradox: despite theoretical predictions of strong public engagement in co-regulation, empirical evidence revealed declining participatory vigilance under advanced regulatory regimes, suggesting a “regulatory complacency” effect.
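A stylized three-population replicator simulation (all payoffs hypothetical and chosen only to reproduce the qualitative “regulatory complacency” pattern, not the paper’s parameterization) shows how public vigilance can decline once regulation and green adoption take hold:

```python
# Shares: g = governments regulating, e = enterprises adopting green innovation,
# s = public actively supervising. All payoff parameters below are hypothetical.
B, Cg = 0.5, 0.2                        # government: reputational benefit, regulation cost
C, SUB, PEN, REP = 1.0, 0.6, 0.8, 0.3   # enterprise: green cost, subsidy, avoided penalty, reputation
DMG, Cs = 1.0, 0.3                      # public: damage avoided by supervising, supervision cost

def step(g, e, s, dt=0.01):
    dg = B - Cg                          # regulating vs laissez-faire
    de = -C + (SUB + PEN) * g + REP * s  # green vs brown innovation strategy
    ds = DMG * (1 - e) - Cs              # supervising vs staying passive
    return (g + dt * g * (1 - g) * dg,
            e + dt * e * (1 - e) * de,
            s + dt * s * (1 - s) * ds)

g, e, s = 0.3, 0.5, 0.5
for _ in range(20000):                   # Euler integration to t = 200
    g, e, s = step(g, e, s)
```

Once green adoption approaches saturation, the public’s net payoff from supervising turns negative and the vigilance share decays toward zero, the complacency pattern described in finding (3).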
This study makes two theoretical contributions. First, it extends Porter’s hypothesis by demonstrating that voluntary environmental regulation acts as a force multiplier when synergized with command-and-control mechanisms. Second, it reveals the non-linear relationship between regulatory sophistication and public engagement, challenging assumptions in the participatory governance literature. Methodologically, our “evolutionary simulation-empirical tracing” approach advances stakeholder analysis.
Cross-listing and Corporate ESG Performance: An Empirical Study Based on Chinese A-share Listed Companies
YU Ying, WU Hecheng, YI Ronghua
2025, 34(1):  186-192.  DOI: 10.12005/orms.2025.0027
The 14th Five-Year Plan points out that China’s economic development has shifted from high-speed growth to high-quality development. The “dual carbon” goal is proposed to promote the comprehensive green transformation of economic and social development. The development of low-carbon economy has become the consensus of global economic development. At the micro level, in the process of promoting sustainable economic development, enterprises, especially listed companies as public companies, need to undertake more environmental protection and social responsibility. The concept of ESG is in line with the development trend of global low-carbon economy and the national strategy of green and sustainable economic and social development. Corporate ESG performance has been paid more and more attention in the international capital market. Many investors have incorporated corporate ESG factors into their organizational planning and investment decision-making processes. In the new development stage, how to improve ESG performance under the encouragement and supervision of policies has become an important issue.
This paper takes China’s Shanghai and Shenzhen A-share listed companies from 2012 to 2020 as the research object to empirically study the impact of cross-listing on ESG performance and its mechanism. The study finds that, first, A+H cross-listed companies have better ESG performance than A-share-only companies; the larger the company size, return on assets and free cash flow from equity, and the lower the leverage, the better the company’s ESG performance. After endogeneity and robustness testing, the conclusions remain reliable. Second, the mechanism analysis suggests that financing constraints and investor attention have a partial mediating effect between cross-listing and firms’ ESG performance: cross-listing alleviates firms’ financing constraints and increases investor attention, which in turn contributes to the improvement of firms’ ESG performance. Third, the extended analysis finds that as the number of cross-listing years increases, the ESG performance of enterprises is further strengthened. At the same time, better ESG performance enables cross-listed companies to enhance enterprise value.
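The mediating-effect decomposition used in such an analysis can be sketched on synthetic data (all variables and coefficients below are hypothetical; the paper works with real firm-level data and control variables): the total effect of the cross-listing dummy splits exactly into a direct effect plus an indirect effect through the mediator.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
a, b, c = 0.5, 0.6, 0.3   # hypothetical true paths: X->M, M->Y, and direct X->Y

x = rng.integers(0, 2, n).astype(float)   # cross-listing dummy (synthetic)
m = a * x + rng.normal(size=n)            # mediator, e.g. investor attention
y = b * m + c * x + rng.normal(size=n)    # ESG score (synthetic)

def ols(target, *cols):
    # Least-squares coefficients with an intercept; the intercept is dropped
    X = np.column_stack([np.ones(n), *cols])
    return np.linalg.lstsq(X, target, rcond=None)[0][1:]

(total,) = ols(y, x)            # total effect of cross-listing on ESG
(a_hat,) = ols(m, x)            # path X -> M
b_hat, direct = ols(y, m, x)    # paths M -> Y and direct X -> Y
indirect = a_hat * b_hat        # mediated (indirect) effect
```

In linear OLS the identity total = direct + indirect holds exactly, which is the algebra behind the partial mediating effect reported in the study.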
Cross-listing, an external governance factor, provides an important channel for companies to improve their ESG performance. Enterprises have better ESG performance through cross-listing, thus enhancing corporate value. In the context of the development of low-carbon economy, enterprises should strengthen the self-disclosure of ESG performance, fulfill their ESG responsibilities, promote the sustainable development of enterprises, meet the twin goals of carbon peak and carbon neutrality, and contribute to the national strategy. Government departments should strengthen top-level design, formulate perfect ESG disclosure policies, standardize disclosure standards for different industries, and urge enterprises to make correct ESG decisions. Investors should establish the concept of ESG investment and pay attention not only to the financial performance of enterprises, but also to their non-financial performance in environmental protection, social responsibility and corporate governance, so as to promote listed companies to fulfill their ESG responsibilities.
The main contribution of this paper is reflected in three aspects. First, it enriches the research on cross-listing and its economic consequences: by studying the impact of cross-listing on firms’ ESG performance, this paper makes an important supplement to the study of the impact of cross-listing on non-financial performance. Second, it enriches the research on the drivers of corporate ESG performance. Existing studies have examined these drivers mainly from the perspectives of internal corporate governance, digital transformation, M&A transactions, and capital market liberalization; based on the external governance factor of cross-listing, this paper examines its impact on firms’ ESG performance, providing an important channel for enterprises to improve ESG performance, and the mediating effect analysis reveals the mechanism through which cross-listing affects firms’ ESG performance. Finally, it enriches the exploration of the feedback function of the capital market to the real economy in emerging markets. The vast majority of the literature uses data from listed companies in Europe and the United States as the research sample; this paper uses Shanghai and Shenzhen A-share data to provide evidence on the ESG performance of listed companies in emerging markets.
Uncertain International Portfolio Selection with Tax Consideration
MA Di, HUANG Xiaoxia, CHOE Kwang-Il
2025, 34(1):  193-198.  DOI: 10.12005/orms.2025.0028
Abstract ( )   PDF (984KB) ( )  
References | Supplementary Material | Related Articles | Metrics
In order to get better investment returns, more and more investors pay attention to international portfolios. This paper studies how to make investment decisions in the face of uncertain securities returns, exchange rates and tax rates in a complex international investment environment.
Since it was proposed, the mean-variance model has been the foundation of modern portfolio theory and a widespread criterion for helping investors make investment decisions. However, most related studies discuss only the domestic portfolio selection problem. In fact, with rapid economic development and the liberalization of capital flows, international portfolio investment has become more common and has attracted increasing attention from practitioners and academic scholars. Some scholars point out that an international portfolio is meaningful for investors because it can bring more benefits than domestic investment alone. At the same time, exchange rate risk has a significant impact on international portfolios and cannot be ignored. Furthermore, some countries levy a capital gains tax on investment gains, so tax risk should also be a concern. There are papers that provide insightful findings on international portfolio selection or on portfolio selection with tax risk, but they all treat asset returns, exchange rates and tax rates as random variables, which presumes that future frequencies can be precisely inferred from historical data. In practice, it is difficult to obtain precise probability distributions due to fast-changing environments or unexpected events such as the spread of COVID-19. In such cases we cannot treat the indeterminate quantities as random variables, and we need to explore a tool other than probability theory to make investment decisions.
Using uncertainty theory, we propose a mean-variance-entropy model for international portfolio selection with tax consideration, under the assumption that the return rates of risky assets, the exchange rate and the tax rate are uncertain variables. Specifically, we follow the uncertain mean-variance modeling idea and apply a proportion entropy, which has been used to describe the degree of diversification in portfolio selection problems, to obtain the required investment diversification. We then deduce the general deterministic equivalent form of the model, as well as the deterministic equivalent form when the returns of risky assets, the exchange rate and the tax rate follow normal uncertainty distributions.
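A minimal sketch of this modeling idea follows, assuming four assets whose after-tax, exchange-adjusted returns are independent normal uncertain variables N(e_i, σ_i); in uncertainty theory, a non-negative weighted sum of such variables has expected value Σx_i·e_i and standard deviation Σx_i·σ_i. All numbers, the entropy weight, and the required return are hypothetical, and a generic optimizer stands in for the paper's deterministic equivalent forms:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs: 4 assets whose returns are independent normal uncertain
# variables N(e_i, sigma_i); the last asset is risk-free (sigma = 0).
e = np.array([0.08, 0.12, 0.10, 0.03])      # expected returns (assumed)
sigma = np.array([0.15, 0.25, 0.20, 0.0])   # uncertain standard deviations
r_min = 0.07                                 # required expected return (assumed)
lam = 0.05                                   # weight on the proportion entropy term

def objective(x):
    # Portfolio variance: std devs of independent normal uncertain
    # variables add linearly under non-negative weights, so V = (x . sigma)^2.
    risk = (x @ sigma) ** 2
    # Proportion entropy H(x) = -sum x_i ln x_i rewards diversification.
    entropy = -np.sum(x * np.log(np.clip(x, 1e-12, None)))
    return risk - lam * entropy

cons = [{"type": "eq", "fun": lambda x: x.sum() - 1.0},     # fully invested
        {"type": "ineq", "fun": lambda x: x @ e - r_min}]   # return floor
res = minimize(objective, x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4, constraints=cons)
weights = res.x
```

Raising `lam` pushes the solution toward equal weights; setting it to zero recovers a plain minimum-variance allocation under the return floor.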
On the basis of the model and its equivalent forms, an empirical study is carried out with a portfolio of fifteen stocks from the Nasdaq and New York Stock Exchanges and a risk-free asset. Based on the data, we give the optimal investment allocation across these stocks and the risk-free asset. We also analyze whether foreign exchange forward contracts should be used to hedge exchange risk in international portfolios, and the necessity of considering uncertain tax rates in investment decisions. We find that when investors' risk tolerance is low, they should use forward contracts to hedge exchange risk, but when their risk tolerance is high, they should choose a portfolio without forward contracts. In addition, the experiments show that uncertain tax rates must be addressed in portfolio decision-making, and that the distribution of the uncertain tax rate should be specified as accurately as possible. We also validate the model by comparing the optimal portfolio produced by the mean-variance-entropy model with an equal-weighted portfolio: the optimal portfolio obtained by the model performs better, showing that the proposed model is effective.
“Paradox of Openness” in Digital Entrepreneurship: Value Creation and Protection of Open Source Community Collaboration
HU Xiaoyu, ZHANG Baojian, LI Nana
2025, 34(1):  199-206.  DOI: 10.12005/orms.2025.0029
Abstract ( )   PDF (1269KB) ( )  
References | Supplementary Material | Related Articles | Metrics
With the development of digital technology, open-source communities that aggregate high-quality resources continue to emerge, lowering the barriers to entrepreneurship for new start-ups, and digital start-ups regard them as an important source of innovation elements. Both sides achieve openness and sharing through open-source collaboration (OSC). However, enterprises that want to use knowledge flows to achieve collaboration across organizational boundaries also face the inevitable risk of technology leakage. Openness therefore brings not only shared benefits but also an accompanying risk of infringement. Digital entrepreneurship thus faces the paradox of openness: a dilemma in the choice between value creation and value protection.
To address this dilemma in the entrepreneurial process, this paper analyzes open-source collaboration behavior and its consequences as a two-way process of outside-in (inbound) OSC and inside-out (outbound) OSC. A duopoly differential game model is constructed for the open-source community and entrepreneurial enterprises, in which the open-source community has two strategy choices, “sharing” and “not sharing”; for enterprises, inbound OSC is regarded as an opening strategy and outbound OSC as an open-source strategy. In addition, a stochastic differential game model is constructed by incorporating external disturbance factors, and the differentiated strategy choices of digital start-ups at different stages are analyzed according to enterprise life cycle theory. Finally, the equilibrium strategies obtained are verified by numerical simulation to further confirm the reliability of the conclusions.
The results of the differential game show that when the open-source community's share of total revenue is greater than one-third, the cooperative game enables both parties to achieve Pareto optimality, the Stackelberg leader-follower game ranks second, and the Nash non-cooperative game yields the lowest return. The results of the stochastic differential game show that when digital start-ups are in the conceptual and commercialization stages, protection is crucial: they should adopt a strategy of strong opening and weak open source, and pay more attention to protecting enterprise value while using external knowledge. In the growth stage, facing the core task of expanding market share, enterprises have the capability for value protection and should adopt a strong open-source strategy with inbound and outbound OSC in parallel, so as to enhance the value creation ability of new products. As digital entrepreneurship makes full use of OSC for cost control and data accumulation, this study can provide a reference for deciding on the optimal strategy at different stages.
In this paper, the development stage of digital start-ups is taken as the main disturbance factor in the stochastic differential game. In practice, however, random disturbances are complex and difficult to measure; hard-to-observe influencing factors also include the business model of venture capital institutions, attitudes toward open-source risks, and founders' ability to control open-source strategies. Case studies or survey interviews could be used to supplement the situational variables studied in this paper. Moreover, this paper mainly analyzes two large open-source communities, the foreign GitHub platform and the domestic Open-Source China. Future work can extend start-ups' selection strategies among open-source communities and further analyze their OSC activities on different platforms, which in turn can inform the top-level design of open-source community platforms.
Dynamic Evaluation Study of High-quality Economic Development Empowered by Digital Economy: Based on Time-weighted Optimal Combination Assignment Method
WANG Shaohua, YANG Zhiwei, ZHANG Wei, WANG Fei
2025, 34(1):  207-213.  DOI: 10.12005/orms.2025.0030
Abstract ( )   PDF (998KB) ( )  
References | Supplementary Material | Related Articles | Metrics
The report of the 20th CPC National Congress points out that high-quality development is the primary task of building a modern socialist country in an all-round way. This assertion shows that promoting high-quality development is the fundamental requirement for sustained economic growth in the new era. High-quality economic development is a multi-dimensional and systematic concept that should unleash new vitality by combining old and new growth drivers. In the current context of deepening supply-side structural reform and demand-side management, high-quality economic development means that, in the new historical stage, we have to introduce new dynamics while driving the transformation and upgrading of traditional factors. It is no coincidence that a series of new economies represented by the digital economy are flourishing and have become a powerful driving force for high-quality economic development by transforming traditional industries and creating new ones. In 2020, data was included as a new factor of production alongside traditional factors such as labor and capital. The report of the 20th CPC National Congress proposed the task of “accelerating the development of the digital economy, promoting the deep integration of the digital economy and the real economy, and creating an internationally competitive digital industry cluster”. It can be seen that combining the digital economy with traditional supply-side and demand-side factors can provide a new path for promoting high-quality economic development, and it is of great theoretical and practical significance to study the combination of old and new dynamics for high-quality economic development.
Therefore, based on the connotation and a theoretical analysis of high-quality economic development, this paper constructs an evaluation index system covering traditional supply-side factors, supply-side digitalization, traditional demand-side factors and demand-side digitalization. An optimal combination weighting method comprising the AHP method, the entropy weighting method, the CRITIC weighting method and the independence weighting method is applied to determine time-varying index weights, and a time-weighted vector is introduced to aggregate them across years into time-weighted combination weights. On this basis, the level of high-quality economic development in 30 Chinese provinces and municipalities from 2013 to 2019 is evaluated, and its spatial and temporal evolution is analyzed. The results show that the traditional supply and demand sides are the top two factors affecting high-quality economic development, and that demand-side digitalization gradually overtakes supply-side digitalization to become the third factor. The development level of China's digital economy has achieved an overall leap, which has greatly promoted the improvement of high-quality economic development. Regionally, China's digital economy and high-quality economic development are characterized by low levels in the Northwest and high levels in the Southeast; most regions with a rapidly developing digital economy are also those with high levels of high-quality economic development; and compared with the North, the digital economy is developing rapidly in the South, with the Southwestern region developing fastest.
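As a sketch of the objective side of such a weighting scheme, the code below computes entropy weights from each year's indicator matrix and then aggregates the yearly weight vectors with a time-weight vector that emphasises recent years. The data dimensions, the random indicator values and the time-weight scheme are all hypothetical:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an (n_regions, m_indicators) matrix of
    positively-oriented, strictly positive indicator values."""
    P = X / X.sum(axis=0, keepdims=True)                # each region's proportion
    n = X.shape[0]
    E = -np.sum(P * np.log(P), axis=0) / np.log(n)      # entropy per indicator
    d = 1.0 - E                                          # degree of divergence
    return d / d.sum()                                   # normalized weights

# Hypothetical panel: 3 years of indicator matrices, 5 regions x 4 indicators
rng = np.random.default_rng(0)
yearly = [rng.uniform(1, 10, size=(5, 4)) for _ in range(3)]
W = np.array([entropy_weights(X) for X in yearly])       # per-year weight vectors

# Time weights emphasising recent years (an assumed scheme summing to 1)
t = np.array([0.2, 0.3, 0.5])
w_combined = t @ W                                       # time-weighted combination
```

Since each yearly weight vector sums to 1 and the time weights sum to 1, the combined weights also sum to 1 and can be applied to a pooled composite score.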
There is large regional heterogeneity in high-quality economic development: the Northern coast, the Eastern coast, the Southern coast and the middle reaches of the Yangtze River are the four poles of high-quality economic development in China. The empirical results provide a reference for further promoting China's high-quality economic development. As the digital economy and society continue to develop, new technologies and production factors will keep emerging to support high-quality development. It is therefore a subject for future research to enrich and expand the meaning of high-quality economic development from a developmental and dynamic perspective, and to measure it with new measurement tools.
Servitization and Quality of Accounting Information Disclosure: Empirical Evidence from Chinese Manufacturing Listed Companies
HU Wenxiu, WANG Sixiang, LI Lei
2025, 34(1):  214-220.  DOI: 10.12005/orms.2025.0031
Abstract ( )   PDF (982KB) ( )  
References | Related Articles | Metrics
The information disclosure assessment results for main-board companies on the Shanghai and Shenzhen Stock Exchanges from 2007 to 2021 show that some enterprises fail to shoulder their information disclosure obligations. If a wrong path to strategic transformation is chosen, enterprise performance will be greatly impacted, leading to a decline in the quality of accounting information disclosure. Theoretically, servitization affects the quality of accounting information disclosure through four effects: the scale negative effect, the supervision effect, the multiple discount effect and the difference complementary effect; which effect dominates remains unknown. The contributions of this paper are reflected in three points. Firstly, it enriches the research on the factors influencing the quality of accounting information disclosure, where existing research pays insufficient attention to enterprise transformation. This paper measures the degree of servitization from the output perspective and, on this basis, studies the relationship between servitization and the quality of accounting information disclosure. Secondly, it expands the research on the economic consequences of servitization. Existing research mainly focuses on the relationship between servitization and enterprise performance; this paper combines strategic management theory and information disclosure theory to expand the research framework of strategic management and make up for the shortcomings of existing research. Thirdly, it refines the research on the boundary conditions of the relationship between servitization and the quality of accounting information disclosure.
Based on the dual perspectives of external supervision and internal incentives, this paper reveals and verifies the boundary conditions of the relationship between servitization and the quality of accounting information disclosure from the perspective of analyst follow-ups and executive compensation.
This paper selects A-share manufacturing companies listed on the Shanghai and Shenzhen Stock Exchanges from 2007 to 2021 as the initial sample; the data come from CSMAR and WIND. The screening process is as follows: (1) delete ST, *ST and PT companies; (2) eliminate samples with missing data; (3) delete companies whose main business income components are negative or sum to more than 100%. To limit the influence of extreme values, all continuous variables are winsorized at the 1% level, leaving a final sample of 11,350 observations. The data are fitted with a Logit model, and the relationship between servitization and the quality of accounting information disclosure is analyzed in Stata 16.0. A series of robustness checks is applied, including the instrumental variable method, propensity score matching and replacing the measure of the dependent variable, and the conclusions do not change substantially. The paper then verifies the moderating effects of analyst follow-ups and executive compensation on the relationship between servitization and the quality of accounting information disclosure.
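The screening and winsorization steps of this kind can be sketched in pandas on a hypothetical firm-year table; the column names and values below are invented for illustration and do not come from CSMAR or WIND:

```python
import numpy as np
import pandas as pd

def winsorize(s, p=0.01):
    """Clip a series at its p and 1-p quantiles (two-sided winsorization)."""
    lo, hi = s.quantile(p), s.quantile(1 - p)
    return s.clip(lo, hi)

# Hypothetical firm-year panel mimicking the screening steps in the text
df = pd.DataFrame({
    "code": ["000001", "ST0002", "000003", "000004"],
    "status": ["normal", "ST", "normal", "normal"],
    "service_share": [0.2, 0.5, np.nan, 1.3],   # share of service revenue
    "roa": [0.05, -0.10, 0.02, 0.30],
})

df = df[df["status"] == "normal"]           # (1) drop ST/*ST/PT firms
df = df.dropna()                             # (2) drop rows with missing data
df = df[df["service_share"].between(0, 1)]   # (3) drop implausible revenue shares
df["roa_w"] = winsorize(df["roa"])           # 1% two-sided winsorization
```

On a real sample, the winsorization step would be applied per continuous variable after pooling all surviving firm-years.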
Based on the integration of strategic management theory and information disclosure theory, this paper constructs a comprehensive research framework for servitization, analyst follow-ups, executive compensation and the quality of accounting information disclosure, and studies the relationship between servitization and disclosure quality from the dual perspectives of analyst follow-ups and executive compensation. The main conclusions are: firstly, there is a U-shaped relationship between servitization and the quality of accounting information disclosure; secondly, analyst follow-ups weaken this U-shaped relationship; thirdly, executive compensation also weakens it. The managerial implications mainly concern how enterprises should formulate servitization plans and improve the quality of accounting information disclosure through servitization.
Mean-lower Partial Deviation Portfolio Optimization with Return Prediction Using Deep Learning
ZHANG Peng, YANG Yang, HE Jiayi
2025, 34(1):  221-226.  DOI: 10.12005/orms.2025.0032
Abstract ( )   PDF (1415KB) ( )  
References | Related Articles | Metrics
The purpose of portfolio investment is to allocate funds across multiple financial assets more effectively. The Mean-Variance (MV) model proposed by Markowitz is an important foundation of modern portfolio theory. However, stock market data are complex, and it is difficult to describe asset returns and risks accurately using only the mean and variance of historical returns. Therefore, building on the MV model, numerous studies have proposed extended models to improve portfolio decisions. There are two main directions of portfolio optimization: the first is to predict asset returns, which can then represent the return of the portfolio; the second is to change the way risk is measured.
In portfolio optimization, the prediction of returns is a crucial factor. Traditional statistical methods, such as the Autoregressive Integrated Moving Average (ARIMA) model, are mainly based on hypotheses of linearity and normality, and these assumptions may not hold for stock return series. To overcome this problem, deep learning models, which can handle complex, multi-dimensional and noisy time-series data, are used to predict returns in portfolio optimization and have shown better performance than traditional statistical models. Given the superiority of deep learning models in stock market prediction, it is meaningful to investigate combining their return predictions with classic portfolio optimization models.
As a risk measure, variance has been widely criticized in both academia and practice, primarily because it punishes upside and downside deviations of returns in the same way. A promising alternative, the Lower Partial Deviation (LPD), penalizes only those outcomes that fall below the target return. In addition, the needs and risk preferences of investors can be taken into account in the investment decision process. This paper therefore proposes a novel risk measure, LPDα, which incorporates investors' demands and risk preferences into the target return.
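To make the contrast with variance concrete, the sketch below computes a second-order lower partial deviation for two hypothetical return series with the same mean: one with only upside dispersion, one with genuine downside. Only the second incurs a positive LPD; the α-parameterization of the paper's LPDα is not reproduced here:

```python
import numpy as np

def lower_partial_deviation(returns, target, order=2):
    """Lower partial moment of the given order: penalises only outcomes
    below the target return, unlike variance, which penalises both sides."""
    shortfall = np.maximum(target - np.asarray(returns, dtype=float), 0.0)
    return float(np.mean(shortfall ** order))

# Hypothetical return series with the same mean (5%) but different downside
r_upside = np.array([0.00, 0.00, 0.10, 0.10])     # dispersion above target only
r_downside = np.array([-0.10, -0.10, 0.20, 0.20])

tau = 0.0  # target return
lpd_up = lower_partial_deviation(r_upside, tau)    # 0.0: never below target
lpd_dn = lower_partial_deviation(r_downside, tau)  # positive: real shortfalls
```

Variance would penalise both series for their dispersion, while the LPD is zero for the first series because none of its outcomes fall below the target.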
In conclusion, it is important to measure the returns and risks of a portfolio reasonably and accurately, and integrating return prediction into portfolio formation can improve the performance of the original optimization model. In this paper, three deep learning methods, Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN) and Deep Neural Network (DNN), are used to predict stock returns, and the mean absolute error, mean squared error, root mean squared error and hit ratio are selected to evaluate their performance. Since LSTM outperforms CNN and DNN in forecasting, the LSTM-predicted returns are introduced into the portfolio optimization model. Considering investors' demands and preferences, transaction costs, threshold constraints and borrowing constraints, we construct an M-LPDα portfolio model, which can be solved by a sequential quadratic programming method and a pivoting algorithm. Finally, in-sample and out-of-sample tests are conducted using historical SSE 50 data obtained from Tonghuashun. The in-sample results indicate that when investors have higher target-return expectations or higher risk preferences, the efficient frontier of the LSTM-based M-LPDα model is lower; as transaction costs and borrowing constraints decrease, the frontier becomes higher; and the frontier also becomes higher as the threshold constraints increase. The out-of-sample results show that the LSTM-based M-LPDα model performs better than the Equal Weight (EW) model and the MV model.
Using deep learning to analyze stock market data can improve the ability of individual and institutional investors to deal with complex financial data and provide technical support for scientific and reasonable investment strategies. Moreover, considering the impact of investor preferences on investment decisions can help investors make more personalized choices. In future studies, we will use methods including text mining, data augmentation and feature engineering to improve the indicator and factor system, so as to enhance the predictive performance and stability of the deep learning models.
Management Science
Research on Performance Evaluation of Shared Economic Enterprises Based on Weight Combination
REN Yadan, ZHU Xianglin, XIAO Huimin, CUI Chunsheng
2025, 34(1):  227-232.  DOI: 10.12005/orms.2025.0033
Abstract ( )   PDF (933KB) ( )  
References | Related Articles | Metrics
This study focuses on performance evaluation of sharing economy enterprises and aims to provide an effective evaluation method by constructing a scientific and reasonable performance evaluation system. With the vigorous development of the sharing economy in China, the number of sharing economy enterprises has surged, covering clothing, food, housing, transportation, health, knowledge education, life services and other fields. However, some sharing economy enterprises face the risk of poor performance or even bankruptcy amid this rapid development, so it is particularly important to evaluate their performance scientifically.
In this study, we refer to Osterwalder's business model theory and combine it with the research of LIU Yanan (2017) on the performance of sharing economy enterprises to preliminarily screen multiple indicators that affect the performance of sharing economy enterprises. Through interviews and questionnaire surveys with senior executives, the indicators are further refined into two categories, product factors and enterprise quality and credit factors, with specific indicators including product life cycle, degree of environmental protection, user rating, financing ability, cost management ability, technological innovation, number of awards, executive breach-of-trust records and contract performance ability.
In determining indicator weights, this study improves on existing methods. In view of the shortcomings of traditional single-weighting methods, it proposes a new approach that combines subjective weighting methods (such as the expert investigation method, the analytic hierarchy process and the decision laboratory method) with objective weighting methods (such as the entropy weight method and the correlation function method). By calculating the similarity of the weight rankings produced by each method, the optimal combination of subjective and objective weights is determined using the angle cosine method, improving the accuracy and reliability of weight determination.
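One plausible reading of this combination step is sketched below: compute the angle cosine between each subjective/objective pair of weight vectors and combine the most consistent pair. The weight vectors and the simple averaging rule are hypothetical illustrations, not the paper's exact procedure:

```python
import numpy as np

def cosine(u, v):
    """Angle cosine between two weight vectors (1 = identical profile)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical weight vectors over 5 indicators from different methods
subjective = {"AHP": np.array([0.30, 0.25, 0.20, 0.15, 0.10]),
              "expert": np.array([0.35, 0.20, 0.20, 0.15, 0.10])}
objective = {"entropy": np.array([0.28, 0.22, 0.24, 0.16, 0.10]),
             "critic": np.array([0.10, 0.15, 0.20, 0.25, 0.30])}

# Pick the subjective/objective pair whose weights agree most (highest cosine)
s_name, o_name, sim = max(
    ((s, o, cosine(sv, ov))
     for s, sv in subjective.items()
     for o, ov in objective.items()),
    key=lambda triple: triple[2])

# Combine the chosen pair; a plain average keeps the weights summing to 1
combined = (subjective[s_name] + objective[o_name]) / 2.0
```

With these illustrative numbers, the AHP and entropy vectors are the most consistent pair and are averaged into the combined weights.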
In order to verify the feasibility of the proposed method, this study takes the “LD Technology” shared power bank enterprise in the big data credit network as an example, and conducts an empirical analysis. By collecting the index data of the enterprise and combining the fuzzy comprehensive evaluation model, the performance level of the enterprise is comprehensively evaluated. The evaluation results show that the performance level of “LD Technology” is good, but there is still some room for improvement, especially in terms of financing ability, user rating and performance ability.
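The fuzzy comprehensive evaluation step can be sketched as a weighted aggregation B = W·R of indicator weights with a membership matrix over evaluation grades. The weights, membership degrees and grade labels below are invented for illustration and are not the paper's data, though they happen to yield a "good" grade:

```python
import numpy as np

# Hypothetical combined indicator weights (4 indicators)
W = np.array([0.30, 0.25, 0.25, 0.20])

# Hypothetical membership matrix: each row gives one indicator's degree of
# membership in the grades {excellent, good, fair, poor}
R = np.array([[0.5, 0.3, 0.2, 0.0],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.6, 0.2, 0.1],
              [0.3, 0.4, 0.2, 0.1]])

B = W @ R                      # weighted-average fuzzy composition operator
grades = ["excellent", "good", "fair", "poor"]
grade = grades[int(np.argmax(B))]   # maximum-membership principle
```

The maximum-membership principle then reads the overall grade off the largest component of B.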
The contributions of this study are: first, it constructs a performance evaluation system adapted to the characteristics of sharing economy enterprises; second, it proposes an improved method for determining indicator weights, which improves the accuracy and scientific rigor of the evaluation; third, it verifies the feasibility and effectiveness of the proposed method through an empirical analysis, providing new ideas and methods for the performance evaluation of sharing economy enterprises.
However, this study has limitations: the indicator system may not be comprehensive, and the choice of weighting methods may be constrained by the sample data and research conditions. Future research can further refine the indicators, optimize the weighting method, and expand the scope of empirical analysis to improve the scientific rigor and applicability of the research. At the same time, as the sharing economy continues to develop, performance evaluation methods and systems will need to be continuously updated to suit the new market environment and enterprise needs.
Digital Transformation of GFAM Manufacturing: A Collaborative Approach Study
LU Shichang, GE Xiao, LI Dan
2025, 34(1):  233-239.  DOI: 10.12005/orms.2025.0034
Abstract ( )   PDF (1400KB) ( )  
References | Supplementary Material | Related Articles | Metrics
The rapid development of the digital economy has gradually led global manufacturing enterprises into a new wave of change. As a result, manufacturing enterprises worldwide are facing a new division of labor, and the competition within the global value chain’s high-end links is becoming increasingly fierce. Digital transformation has emerged as a vital initiative for manufacturing enterprises to enhance their core competitiveness. Moreover, digital technology innovation serves as a powerful lever for manufacturing enterprises to achieve innovation-driven growth and facilitate their digital transformation. Countries worldwide have acknowledged the positive role of digital technology innovation in the digital transformation of manufacturing enterprises.
Technology innovation is a complex process involving multiple stakeholders. Academic research on the digital transformation of manufacturing enterprises increasingly concentrates on collaborative innovation and value sharing among these stakeholders, and on establishing a new paradigm for collaborative R&D and digital technology innovation among industry, academia and research institutions. However, few studies incorporate multiple subjects, such as government, industry, academia, research and finance, into a single model simultaneously, and they have not yet analyzed, through microscopic simulation, the dynamic effects of collaborative innovation among these subjects on the digital transformation of manufacturing enterprises. To address this gap, we propose a digital manufacturing ecosystem based on the GFAM multi-subject collaborative innovation model, comprising “government (G)-financial institution (F)-academic and research institution (A)-manufacturing enterprise (M)”. We construct a three-party game model and employ MATLAB 2019b to simulate the influence of the behavioral decisions of government, industry, academia and research subjects. Our aim is to identify the key factors that influence the parties' decision-making and analyze their roles in the development of innovative digital technologies and the promotion of digital transformation in manufacturing enterprises.
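The evolutionary flavor of such a three-party game can be sketched with replicator dynamics. Here x, y and z are the shares of government, financial-institution and industry-academia-research players choosing a "collaborate" strategy; the linear payoff gaps, the cost term and all parameter values are entirely hypothetical and do not reproduce the paper's payoff structure:

```python
import numpy as np
from scipy.integrate import odeint

def replicator(state, t, a=0.6, b=0.5, c=0.7, cost=0.3):
    """Replicator dynamics for a stylized three-party game: each party's
    share of collaborators grows when collaborating with the other two
    parties pays more than its (assumed) participation cost."""
    x, y, z = state
    dx = x * (1 - x) * (a * (y + z) - cost)   # government's fitness gap
    dy = y * (1 - y) * (b * (x + z) - cost)   # financial institutions'
    dz = z * (1 - z) * (c * (x + y) - cost)   # industry-academia-research's
    return [dx, dy, dz]

t = np.linspace(0.0, 50.0, 2000)
traj = odeint(replicator, [0.5, 0.5, 0.5], t)   # start at an even split
final = traj[-1]   # with these parameters, all three shares approach 1
```

Sweeping the synergy-like parameters a, b, c or the cost term against the trajectories is the kind of sensitivity analysis the MATLAB simulations in the paper perform on the real model.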
The results indicate that the GFAM multi-subject collaborative innovation model is influenced by key factors to varying degrees. The synergy coefficient, financial drivers, government incentives and penalties are directly proportional to the system's rate of evolution, effectively promoting the digital transformation of manufacturing enterprises, while the system's innovation benefit and the R&D cost-sharing ratio exhibit an inverted U-shaped effect on the behavioral decisions of industry-university-research subjects. The marginal contributions are as follows: 1. Expanding the industry-university-research collaborative innovation model to include government and financial institutions in the analysis framework enables the study of the behavioral evolution of collaborative R&D and digital technology innovation among government, industry, university, research and funding institutions, identifies the key factors affecting their collaborative innovation, contributes to the theory of synergism, and helps address obstacles such as opportunism, externalities and financial risks in collaborative innovation. 2. The inclusion of financial drivers, supported by previous empirical analysis, further demonstrates through simulation the positive role of finance in facilitating the transformation of manufacturing enterprises; this guides financial institutions to invest continuously in technological innovation and the high-quality development of manufacturing, promoting the allocation of financial resources to the manufacturing sector and accelerating its digital transformation. 3. This research provides valuable insights for the collaborative innovation practices of manufacturing enterprises in the digital economy context.
Additionally, it offers guidance for governments in formulating policies to regulate the stability of the digital manufacturing ecosystem.
Copyright © Operations Research and Management Science
Address: Institute of Systems Engineering, Hefei University of Technology, Hefei, Anhui. Code: 230009
Tel:0551-62901503 E-mail: xts_or@hfut.edu.cn