
Table of Contents

    25 August 2025, Volume 34 Issue 8
    Theory Analysis and Methodology Study
    Channel Operation and Pricing Decisions with Power Structure Differences from Customer Experience Perspective
    KANG Kai, XU Guitao, WU Chenchen
    2025, 34(8):  1-7.  DOI: 10.12005/orms.2025.0233
    In the development of the experience economy, customer consumption concepts and purchasing patterns have changed, with greater emphasis placed on the experience of the purchasing environment, channel convenience, and price transparency. Retail enterprises that provide experiential services can stimulate customers’ desire to purchase, but they usually incur additional investment costs, which in turn affect product pricing strategies. With the iterative upgrading of retail channels, the power structure between online and offline channels has also diversified: either channel may dominate, or the two may be evenly matched. Nowadays, unlike in dual-channel and multi-channel distribution models, many retailers integrate channels to enhance customer experience value and overall operational efficiency. “Buy Online and Pick up in Store” (BOPS) is the most common omnichannel retail method; it converts user traffic between channels, shares inventory information, and meets consumers’ one-stop shopping needs.
    In the omnichannel model, considering the impact of differences in customer experience and power structure on channel integration strategies, it is urgent to address whether channel integration can continuously enhance customer experience value and bring higher profitability to retail enterprises. This paper takes a retailer that simultaneously opens traditional offline and online channels as the object of study, and establishes game models based on customer experience effects under three different power structures: offline dominance, online dominance, and Nash equilibrium. Based on different levels of channel integration, the omnichannel retail system models of “buy-online-and-pick-up-in-store” and “offline-experience-and-online-logistics-distribution” have been further established. Then, the decision-making behavior under different power structures and channel integration levels is explored for both dual channel and omnichannel scenarios, and the characteristics of equilibrium solutions for each decision-making subject are analyzed. Finally, numerical examples are utilized to analyze the impact of channel preferences on the optimal decision and performance of retail systems.
    The main research work and conclusions are summarized as follows: When consumers have a moderate preference for online purchasing, the omnichannel retail system can appropriately raise product prices, and the omnichannel model then has a greater profit advantage. When consumers have a strong preference for online channels, the pricing and experience investment of the deeply integrated omnichannel BOPS-PLUS model should be lower than those of the BOPS model. When consumers’ acceptance of online shopping is higher, the experience factor is higher, and customer-perceived value is lower, the BOPS-PLUS model has more advantages, and fully integrating channels benefits retail enterprises.
    The management implications of this paper are as follows: First, decision makers in retail enterprises should implement omnichannel strategies when consumers have a moderate level of online acceptance; in this situation, they can obtain the highest sales profit at a larger scale while effectively controlling risks. When consumers have low online acceptance, integrating channels is not the best choice for retail enterprises. Second, channel integration does not always bring profit growth, and multiple factors need to be weighed comprehensively. Under suitable market conditions, fully integrated channels can satisfy consumers’ expectations of low prices while retail enterprises also obtain higher profits, achieving a win-win for consumers and retail enterprises. Third, in retail practice, the BOPS approach is more practical than the deeply integrated BOPS-PLUS approach: because it does not require real-time monitoring of changes in consumer market information, it can fully leverage the advantages of channel integration, promote cooperation between online and offline channels, and bring new market share and profit growth.
    Financing Decisions of Capital-constrained Manufacturer Considering Process Innovation for Remanufacturing
    MA Peng, YUAN Qin, CAO Jie
    2025, 34(8):  8-14.  DOI: 10.12005/orms.2025.0234
    Driven by the carbon peak and carbon neutrality goals, remanufacturing can not only reduce carbon emissions but also cut costs and increase profits. Manufacturers therefore collect and remanufacture used products, and reduce the unit variable cost of remanufactured products through process innovation for remanufacturing to increase their economic benefits. The initial investments, design efforts, and related work that make used products easier to collect and reduce unit remanufacturing costs are collectively called process innovation for remanufacturing. For example, Fuji Xerox has intentionally added a disassembly design during the design and production of its copiers to ensure that collected products can be reused and remanufactured as much as possible, thereby significantly reducing remanufacturing costs. Such designs do not increase the production costs of new products, but bring great convenience to the collection and remanufacturing of used products. However, manufacturers that remanufacture in-house must produce new products and remanufacture used ones simultaneously, which requires significant funds and equipment investment. A manufacturer with insufficient funds may struggle to fill retailers’ orders, hampering the development of the supply chain. Therefore, our research is motivated by how a capital-constrained manufacturer chooses the optimal financing mode under process innovation for remanufacturing and different remanufacturing strategies.
    Based on process innovation for remanufacturing, we consider three manufacturer-led Stackelberg game models: one without financing, one with bank financing, and one with retailer financing. The manufacturer can adopt three remanufacturing strategies: no remanufacturing, partial remanufacturing, and full remanufacturing. We use the Kuhn-Tucker conditions of nonlinear programming to solve the three models and obtain the optimal decisions and profits of the manufacturer and the retailer under all three remanufacturing strategies. First, we consider a benchmark model in which the manufacturer faces no capital constraint. By constructing the Lagrange function of the retailer’s expected profit, we use the Kuhn-Tucker conditions to obtain the retailer’s optimal response function. Then, substituting the retailer’s response functions, we construct the Lagrange function of the manufacturer’s expected profit and obtain the optimal decisions and profits of both parties under the three remanufacturing strategies. Next, considering a capital-constrained manufacturer, we construct game models under two scenarios, bank financing and retailer financing, and again derive the optimal equilibrium solutions and profits of both supply chain parties under the three remanufacturing strategies from the Kuhn-Tucker conditions. On this basis, we analyze the manufacturer’s optimal financing strategies and the retailer’s financing preferences, verify the conclusions through numerical simulations, and analyze the impact of the innovation level of process innovation for remanufacturing on the expected profits of the manufacturer and the retailer.
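    The backward-induction procedure described above can be illustrated on a stylized linear-demand Stackelberg game. The sketch below is a minimal toy (illustrative demand intercept a, slope b, and unit cost c; no capital constraint, financing, or remanufacturing), not the paper's model: the follower's best response is derived first and then substituted into the leader's problem.

```python
# Toy Stackelberg pricing game with linear demand D(p) = a - b*p.
# Backward induction: solve the follower's (retailer's) problem first,
# then the leader's (manufacturer's), mirroring the KKT/backward-induction
# procedure. All parameters are illustrative, not the paper's model.

def retailer_best_response(w, a, b):
    # Retailer maximizes (p - w)(a - b p); the first-order condition
    # gives p*(w) = (a + b w) / (2 b).
    return (a + b * w) / (2 * b)

def manufacturer_optimum(a, b, c):
    # Substitute p*(w): manufacturer maximizes (w - c)(a - b w)/2;
    # the first-order condition gives w* = (a + b c) / (2 b).
    return (a + b * c) / (2 * b)

a, b, c = 100.0, 2.0, 10.0           # demand intercept, slope, unit cost
w_star = manufacturer_optimum(a, b, c)
p_star = retailer_best_response(w_star, a, b)
print(w_star, p_star, a - b * p_star)   # 30.0 40.0 20.0
```

In the paper, the same substitution step is carried out on Lagrange functions with the Kuhn-Tucker conditions, which handle the remanufacturing and financing constraints absent from this unconstrained toy.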
    The results indicate that: (1) Under bank financing, the bank’s optimal interest rate is the risk-free rate. (2) Under both the bank financing and retailer financing modes, the manufacturer and the retailer earn the lowest expected profits without remanufacturing and the highest with full remanufacturing; the capital-constrained manufacturer therefore has an incentive to engage in remanufacturing. (3) Overall, regardless of the remanufacturing strategy adopted, the manufacturer prefers the retailer financing mode in most cases, while the retailer prefers bank financing. (4) As the level of process innovation for remanufacturing improves, the expected profits of the manufacturer under bank financing and retailer financing decrease accordingly; under the full remanufacturing strategy, the manufacturer’s expected profits first increase and then decrease. Moreover, the manufacturer’s expected profits under retailer financing are always higher than those under bank financing.
    Research on Decision-making of E-commerce Logistics Model Considering Brand Differences and Power Structure
    ZHANG Jianxiong, MU Jing, KANG Lin
    2025, 34(8):  15-21.  DOI: 10.12005/orms.2025.0235
    The rapid development of the Internet has fueled the growth of e-commerce. With the popularity of online shopping, many suppliers have begun to sell by opening online channels or entering large e-commerce platforms. As differences between products narrow and homogeneous competition intensifies, suppliers have begun to shift their focus to improving service competitiveness. As the terminal link of e-commerce, the importance of logistics service quality has been widely recognized. Many large e-commerce platforms have established their own logistics and distribution systems to provide consumers with high-quality logistics services and thereby improve their competitiveness. Suppliers unable to build their own distribution systems can only outsource logistics services after entering an e-commerce platform. In the past, most suppliers outsourced logistics entirely to third-party logistics providers; with the full opening of platforms’ self-operated logistics, however, some suppliers now also deliver through it. To meet consumers’ growing and diversified needs, e-commerce platforms allow third-party logistics providers to penetrate the self-operated logistics system, forming a distribution model that combines third-party and self-operated logistics. After settling on an e-commerce platform, which logistics service model to choose is thus an important issue for suppliers.
    A high-quality brand image can always win more consumers’ favor, and the brand advantage of logistics companies provides strong support for them in the competition. The brand value of logistics companies will not only affect their own interests, but also have a certain impact on competitors. Suppliers will also consider the brand influence of logistics providers in the decision-making process of logistics service models, hoping to increase sales and revenue through the brand influence of logistics providers. In addition, the different decision-making powers between suppliers and logistics providers lead to different power structures between suppliers and logistics providers, and suppliers’ logistics model decisions also show obvious differences. In this context, studying supplier logistics service model decisions has important practical significance.
    This article focuses on the decision process by which third-party suppliers entering an e-commerce platform choose logistics service models, taking into account the power structure between suppliers and logistics providers and the brand differences among logistics providers, and establishes six logistics decision-making models under supplier-led and logistics-provider-led structures. The results show that, as competition among logistics providers intensifies, self-operated logistics with a brand advantage always draws customers away from third-party logistics; when the brand difference between logistics providers is large, medium, or small, the supplier’s optimal logistics decision is self-operated logistics, mixed logistics, or third-party logistics, respectively; and when the supplier is more dominant, it obtains more profit and demand, while logistics providers offer higher service levels when in the weaker position. Moreover, a change in the dominant position affects the supplier’s choice of logistics service model.
    Optimization Study of Offshore Cold Chain Logistics Network Considering Fishery Production Area Layout
    WANG Yixuan, WANG Nuo
    2025, 34(8):  22-28.  DOI: 10.12005/orms.2025.0236
    The waters of the South China Sea are vast and rich in fishery resources. However, because areas such as the Nansha Islands lie far from the mainland and fishing vessels lack the endurance for prolonged operations, these resources have not been effectively exploited. Recently, with the progress of island construction projects in the South China Sea, governments have increased their support for fishermen, enabling an integrated fishing production system covering fishing, acquisition, storage, and transportation. Against the backdrop of escalating disputes over marine resources in the South China Sea, the Chinese government has proposed policies such as “developing the Nansha Islands with fisheries as a priority” and “setting up fishery settlements to safeguard borders”, making the waters of the Nansha Islands an important direction for the development of China’s offshore fisheries. Constructing a cold chain logistics network for fisheries is crucial for developing China’s marine economy and safeguarding its maritime interests. However, the existing literature lacks a theoretical analysis of the coupling concept and operating mode of the cold chain logistics network for offshore fisheries, as well as effective modeling methods for the network planning problem, which directly hinders the in-depth development of theory and practice.
    In view of this, this paper takes the waters of the Nansha Islands as its background. Based on an analysis of the characteristics of cold chain logistics for offshore fisheries in distant waters, it combines fishery resources survey data, geographic information technology, and bio-economic production models to quantitatively assess the catchable amount and potential fishing production areas. On this basis, the paper studies the layout of fishing production areas and the siting of transit cold storage in the Nansha Islands, considering both fishing catch volume and the total operating cost of the cold chain system. Its main contributions are as follows: (1) Establishing a bi-objective optimization model that maximizes the weighted coverage rate of fishery resources and minimizes the total operating cost of the cold chain system, with primary nodes including fishing production areas, island-based transit cold storage facilities, and mainland fishing ports; the model determines the layout of fishing production areas, the selection of fishing transportation modes, the siting and capacity of transit cold storage facilities, and the configuration of refrigerated vessels. (2) Introducing GIS to obtain the quantity and spatial distribution of fishery resources for optimization analysis, establishing a collaborative interaction between fishery information and optimization information. (3) Designing an integrated algorithm with an improved NSGA-II algorithm as the external framework and an improved k-means clustering algorithm as the internal module, determining the final optimized solution through continuous interaction between the modules. (4) Conducting an optimization analysis of the real case of fishing production activities in the Nansha Islands; by analyzing the basic principles guiding decision makers in selecting the optimal solution and filtering Pareto non-dominated solutions, the effectiveness of the proposed algorithm and model is validated.
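    The external NSGA-II framework rests on non-dominated sorting of candidate solutions into Pareto fronts. A minimal sketch for bi-objective minimization, using made-up (coverage shortfall, cost) pairs rather than the paper's data or its improved implementation:

```python
# Non-dominated sorting for bi-objective minimization, the core of the
# NSGA-II framework. Each point is a tuple of objective values; lower is
# better in both. Illustrative data, not the paper's improved algorithm.

def dominates(p, q):
    # p dominates q if it is no worse in every objective and strictly
    # better in at least one.
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    fronts, remaining = [], list(points)
    while remaining:
        # The current front: points not dominated by any remaining point.
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# e.g. (coverage shortfall, cold-chain cost) for candidate layouts
pts = [(1, 5), (2, 3), (3, 1), (3, 4), (4, 2)]
print(non_dominated_sort(pts))
# first front: [(1, 5), (2, 3), (3, 1)]
```

In full NSGA-II the first fronts (plus a crowding-distance tie-break) survive to the next generation; the paper interleaves this with an improved k-means module.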
    This paper proposes an optimization method for effectively organizing and managing cold chain activities in offshore fisheries. Establishing cold storage facilities of a certain scale on islands can significantly enhance the efficiency of fishing production, providing new insights for more effective development of South China Sea fishery resources and for building an integrated fishing production system encompassing fishing, acquisition, storage, and transportation. The work therefore holds important theoretical and practical value.
    Group Consensus Decision-making Model Based on Fairness in Social Networks from Perspective of Quantum Game Theory
    CAI Mei, HU Suqiong, XIAO Jingmei
    2025, 34(8):  29-35.  DOI: 10.12005/orms.2025.0237
    Group consensus decision-making is a process of collecting and integrating the opinions of each individual in a group through effective communication, negotiation, and discussion in order to reach a consensus and make a decision. The game consensus decision-making model has received widespread attention because it takes into account the opinions and coordination of all individuals and can effectively solve consensus-reaching problems with large differences in opinions. However, existing game consensus decision-making models have several drawbacks, such as the assumption that individuals are completely independent and cognitively rational. In fact, an individual is often influenced and interfered with by the opinions of other individuals in decision-making, forming a kind of entanglement effect. Specifically, before making a final decision, an individual’s consciousness is disrupted by the superposition of different external ideas, resulting in a “superposition state”; these superposed ideas are not expressed externally as a specific action strategy until a final choice is made. During this process, an individual’s psychology and behavior contradict the “principle of certainty” and the “law of total probability”, leaving the individual unable to decide independently, which in turn affects the effectiveness and quality of consensus. In addition, existing game consensus decision-making models ignore individuals’ fairness perceptions. In many real-world decision-making problems, an individual tends to compare the benefits he obtains with those obtained by others, generating a perception of the fairness of the decision options, which may also influence subsequent decision-making behavior.
    Based on the above discussion, in order to simulate the group consensus process more accurately, this paper proposes a quantum game consensus model based on social network fairness. We design objective functions that incorporate entanglement and fairness concerns into the interests of the coordinator and the decision makers: the coordinator’s objective minimizes the total consensus cost and the difference between individual and group opinions, while each decision maker’s objective maximizes financial compensation and satisfaction. On this basis, we establish the quantum game model for both sides and analyze the existence of a quantum equilibrium strategy. Then, a quantum game consensus optimization model is constructed with the objective of minimum cost for the coordinator, yielding a solution set containing the proposed opinion, the adjusted opinion, and the unit adjustment cost.
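    For intuition, the coordinator's cost-minimization backbone can be sketched as a classical minimum-cost consensus problem, omitting the quantum entanglement and fairness terms of the full model; the opinions, unit costs, and tolerance eps below are illustrative assumptions:

```python
# Classical minimum-cost consensus backbone: choose a consensus opinion x
# minimizing total compensation cost sum_i c_i * max(0, |o_i - x| - eps),
# where each decision maker i tolerates a deviation of eps at no cost.
# The paper's quantum-game and fairness terms are omitted; data are made up.

def consensus_cost(x, opinions, costs, eps):
    return sum(c * max(0.0, abs(o - x) - eps)
               for o, c in zip(opinions, costs))

def min_cost_consensus(opinions, costs, eps):
    # The cost is piecewise linear and convex in x, so a minimizer lies
    # at one of the breakpoints o_i - eps or o_i + eps.
    candidates = [o + s * eps for o in opinions for s in (-1.0, 1.0)]
    return min(candidates,
               key=lambda x: consensus_cost(x, opinions, costs, eps))

opinions = [2.0, 4.0, 9.0]   # individual opinions
costs    = [1.0, 1.0, 3.0]   # unit adjustment costs
x_star = min_cost_consensus(opinions, costs, eps=1.0)
print(x_star, consensus_cost(x_star, opinions, costs, eps=1.0))  # 8.0 8.0
```

The consensus is pulled toward the expensive-to-move decision maker (opinion 9, cost 3), which is the kind of trade-off the coordinator's objective formalizes.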
    To evaluate the accuracy and practicality of the proposed model, its application is illustrated with the development of coordination contracts for a new energy vehicle supply chain. In addition, sensitivity and comparative analyses are performed to verify the model’s validity. The results show that the quantum game consensus model broadens the game’s strategy space and improves consensus efficiency, implying that the proposed model can accurately simulate an individual’s irrational cognitive decision-making process. On the one hand, the findings can assist coordinators in precisely identifying decision makers’ social network fairness concerns in decision-making behavior, increasing decision accuracy and better facilitating group consensus. On the other hand, they help decision makers participate actively in the group consensus process, reducing the risk of information asymmetry, maintaining market order, and improving the participants’ game consensus benefits.
    There are still certain areas where we may make improvements in the future. For instance, the coordinator can apply asymmetric unit coordination costs because it has inconsistent subsidy preferences for the direction of decision-maker opinion alteration. In addition, there are ambiguous trust relationships among decision-makers. It is also necessary to pay close attention to how to employ quantum probability to depict the interference effect between numerous trust sources and better describe the degree of interconnectedness between nodes.
    Robust Optimization Decision and Coordination of Big Data Investment in Livestreaming Supply Chain under Random Demand
    PENG Liangjun, LYU Gang, SONG Huiling, LIU Mingwu
    2025, 34(8):  36-43.  DOI: 10.12005/orms.2025.0238
    Livestreaming sales are considered an effective combination of digital promotion and manufacturing, and more and more manufacturers have opened livestreaming sales channels. Compared with traditional online shopping, consumers are more prone to unplanned, impulsive purchases during livestreaming shopping, which makes the distribution of market demand for livestreaming sales difficult to obtain. At the same time, the development of digital technology has enabled livestreaming sales to generate massive data on market demand and personalized preferences. These data, often likened to “oil”, have become important production factors for analyzing livestreaming sales behavior. By investing in big data, livestreaming retailers and data companies can accurately analyze consumers’ heterogeneous needs, increase sales, and reduce production costs. However, the costs and benefits of big data investment are difficult to estimate in the face of random demand, leading to a wait-and-see attitude toward such investment. Therefore, this paper addresses the following questions: (1) How do livestreaming supply chain members make decisions under random demand? (2) Can big data investment improve the profits of livestreaming supply chain members and the overall profit under random demand? (3) How should a coordination contract be designed to achieve perfect coordination of the three-echelon livestreaming supply chain including a data company?
    To address these questions, this paper establishes a three-echelon supply chain game model composed of a livestreaming retailer, a manufacturer, and a data company, adopts the robust optimization method to study the impact of big data investment on supply chain decisions and profits under centralized and decentralized decision-making, and designs a profit-sharing-cost-sharing contract to achieve perfect coordination of the livestreaming supply chain.
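    As one concrete instance of robust decision-making when only the mean and standard deviation of random demand are known, Scarf's classical distribution-free newsvendor rule gives a closed-form order quantity. This is a standard benchmark sketch, not the paper's three-echelon model; prices and demand moments are illustrative:

```python
import math

# Scarf's distribution-free (max-min) newsvendor order quantity: a classic
# robust-optimization benchmark when only the mean and standard deviation
# of random demand are known. Selling price p, unit cost c, zero salvage;
# all parameter values are illustrative, not the paper's model.

def scarf_order_quantity(mu, sigma, p, c):
    assert p > c > 0
    r = (p - c) / c                       # profit-to-cost ratio
    return mu + (sigma / 2.0) * (math.sqrt(r) - 1.0 / math.sqrt(r))

q = scarf_order_quantity(mu=100.0, sigma=20.0, p=15.0, c=5.0)
print(round(q, 2))   # orders above the mean when the margin is high
```

Consistent with the paper's finding (1), the worst-case-optimal quantity depends on demand only through its mean and standard deviation, and higher variability pushes the decision further from the mean.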
    It is found that: (1) The overall profit of the livestreaming supply chain is positively correlated with the mean of random demand, but negatively correlated with its standard deviation. (2) If the total cost of big data investment meets certain conditions, big data investment by both the livestreaming retailer and the data company can improve the profits of supply chain members and the overall profit under random demand. (3) Whether or not big data is invested in, the profit-sharing-cost-sharing joint contract can improve the profits of livestreaming supply chain members and the overall profit under random demand; in particular, the data company achieves perfect coordination of the livestreaming supply chain in the form of subsidies.
    The contribution of this paper has three aspects. First, against the background of random market demand in livestreaming sales, it addresses the new problem of big data investment decisions and coordination in the livestreaming supply chain, filling gaps in the current literature, which has considered neither random demand nor data companies as decision-making members. Second, it finds that big data investment by both livestreaming retailers and data companies can improve the profits of supply chain members and the overall profit, and that the profit-sharing-cost-sharing contract can achieve perfect coordination of the three-echelon livestreaming supply chain. Third, unlike the deterministic linear demand modeling adopted in prior research on livestreaming supply chain decision-making and coordination, this paper adopts a stochastic-demand, robust-optimization approach that solves optimal decision-making and coordination problems which deterministic demand models cannot handle.
    The study offers the following management implications for livestreaming supply chain members: (1) The government can formulate policies encouraging data firms and livestreaming retailers that have not yet invested in big data to do so for sustainable development. (2) The government should guide supply chain members investing in big data to sign a profit-sharing and cost-sharing joint contract, so as to achieve the supply chain performance of centralized decision-making. (3) Supply chain members who sign the joint coordination contract should keep the big data investment cost within an appropriate range according to their own cost-benefit ratio.
    Data-driven Resilient Supplier Selection and Optimal Order Allocation under Disruption Risks
    ZHAO Bing, SU Ke, WEI Yanshu, SHANG Tianyou
    2025, 34(8):  44-51.  DOI: 10.12005/orms.2025.0239
    In the fiercely competitive global market, companies are increasingly willing to entrust some business processes to external organizations to reduce costs, improve product quality, and enhance competitiveness. A typical example of such outsourcing is purchasing parts and services from global suppliers. How to select suitable suppliers and determine the best order allocation plan has therefore become a problem worthy of in-depth consideration. Traditionally, supplier selection has relied on criteria such as cost, quality, and delivery time. Recently, however, the vulnerability of global supply chains to natural and man-made disasters such as tsunamis, earthquakes, transportation accidents, and strikes has exposed suppliers to various supply disruption risks, whose harm can immediately spread downstream in the supply chain, creating what is known as a “chain reaction”. Considering supplier resilience has thus also become a key strategic decision in supplier selection and order allocation.
    In response, this paper establishes a data-driven two-stage distributionally robust optimization model that can flexibly solve supplier selection and order allocation problems under disruption risks. First, we account for the random disruptions suppliers may experience, namely a decrease in or loss of their production and supply capabilities, and address them through strategies such as fortifying suppliers, recovery measures, and contracting backup suppliers. Second, in view of the uncertainty of disruption scenarios and the limited historical data available, three models are established, based on stochastic programming, classical robust optimization, and distributionally robust optimization with a Wasserstein ambiguity set. Finally, using duality and linearization techniques, the three comparative models are transformed and solved, and the corresponding results are obtained.
    The numerical results indicate that adopting appropriate coping strategies can effectively alleviate the chain reaction caused by supply chain disruptions, and that the impact of supplier disruptions on downstream enterprises cannot be ignored. For example, after the tsunami and earthquake in Japan in 2011, suppliers of the automotive brand Toyota were unable to deliver parts at the expected quantity and speed, forcing Toyota to suspend production for several days at a loss of approximately 50000 vehicles per day. When designing the supply chain, managers should therefore take active or passive measures to prevent or mitigate disruptions. The model established in this paper also shows that such measures have a positive effect on improving supply chain resilience and reducing chain reactions between facilities.
    Supplier disruptions occur irregularly and uncertainly, and historical data may be insufficient and incomplete. To handle such uncertainty, the distributionally robust optimization model is superior to the stochastic model and the classical robust model in terms of robustness and stability, respectively. The results also show that, compared with stochastic models, distributionally robust optimization models do not require the distribution information of uncertain parameters and can cope with uncertainty in distribution information such as the mean and variance. The classical robust model is less stable in its supplier selection results and more conservative in its numerical results. Therefore, when disruption history data are limited and the distribution of uncertain probabilities is not completely known, the distributionally robust optimization method is a good choice.
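    For intuition about the Wasserstein ambiguity set, which is a ball of distributions within a chosen radius of the empirical distribution: for two one-dimensional empirical distributions with equal sample counts, the 1-Wasserstein distance reduces to the mean absolute difference of the sorted samples. A minimal sketch with made-up disruption data:

```python
# 1-Wasserstein distance between two 1-D empirical distributions with the
# same number of samples: sort both and average the absolute differences.
# The Wasserstein ambiguity set contains every distribution within a given
# radius of the empirical one. Data below are illustrative, not the paper's.

def wasserstein_1d(xs, ys):
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# e.g. observed vs hypothesized supplier capacity-loss fractions
observed  = [0.0, 0.1, 0.1, 0.4, 0.9]
candidate = [0.0, 0.0, 0.2, 0.5, 0.8]
print(wasserstein_1d(observed, candidate))   # ~0.08
```

A larger radius admits more distributions and yields more conservative decisions; shrinking it to zero recovers the sample-based stochastic program, which is why the DRO model interpolates between the stochastic and classical robust extremes.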
    The risk of disruption may be related to natural disasters or specific types of events that occur through intentional or unintentional human behavior, which are less likely to occur but have a significant impact on business operations. Adopting corresponding strategies can effectively reduce the harm of supplier disruption to the economic benefits of enterprises, and with the support of reasonable mathematical models, can assist enterprises in formulating optimized decision-making plans.
    Joint Optimization of Order Splitting and Delivery for Multi-item and Multi-warehouse
    ZHANG Yanju, CHENG Jinqian, WU Jun
    2025, 34(8):  52-59.  DOI: 10.12005/orms.2025.0240
    In recent years, e-commerce has given rise to new formats and patterns, fueling the vigorous growth of China’s online retail industry. Given the multi-item customer orders and multi-warehouse-in-one-city layouts frequently present in the actual operations of e-commerce enterprises, order splitting occurs easily. As e-commerce orders continue to shift toward small batches, many varieties, and high frequency, unreasonable order splitting inevitably leads to multiple dispersed deliveries per order, which not only raises total cost but also runs counter to green logistics. Although scholars at home and abroad have produced many valuable studies on order splitting and order delivery, shortcomings remain. In particular, order splitting and order delivery are two interrelated aspects of order fulfillment that form an organic whole, yet most existing research separates them and optimizes each in isolation, ignoring their correlation. In view of the above, and given the pressing need of e-commerce enterprises to solve the order splitting and delivery problem, it is of great theoretical and practical significance to investigate how to split and deliver orders reasonably so as to improve the overall efficiency of order fulfillment.
Driven by the aforementioned considerations, this paper treats order splitting and order delivery in a multi-item, multi-warehouse setting as a whole for joint optimization, and builds a mixed integer programming model with the objective of minimizing order fulfillment cost. Furthermore, this paper proposes an Improved Adaptive Large Neighborhood Search (IALNS) algorithm. The main contributions are the following: (1) Breaking through the constraints of existing research, this paper views order splitting and order delivery as a whole for joint optimization instead of treating them as two separate problems. (2) Based on an analysis of the problem characteristics and the idea of shrinking the solution space, this paper introduces clustering analysis and designs a 2-Hierarchically Separated Tree (2-HST) algorithm, which exploits the tree-metric property to cluster customer orders initially. (3) This paper proposes a tournament strategy that takes penalty into account. This strategy effectively reduces the probability of an operator being selected repeatedly while maintaining the convergence performance of the algorithm.
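The penalty-aware tournament idea in contribution (3) can be sketched roughly as follows; the operator names, penalty increment, and decay rule here are illustrative assumptions, not the authors' exact design:

```python
import random

def tournament_select(operators, scores, penalties, k=2):
    """Run a k-way tournament over the ALNS operators: sample k candidates,
    pick the one with the best penalized score, then raise the winner's
    penalty so it is less likely to win repeatedly (illustrative rule)."""
    candidates = random.sample(operators, k)
    best = max(candidates, key=lambda op: scores[op] - penalties[op])
    penalties[best] += 0.1                     # assumed penalty increment
    for op in operators:                       # assumed slow decay for the rest
        if op != best:
            penalties[op] = max(0.0, penalties[op] - 0.02)
    return best

ops = ["random_removal", "worst_removal", "greedy_insert"]
scores = {op: 1.0 for op in ops}               # stand-in for adaptive ALNS scores
penalties = {op: 0.0 for op in ops}
picked = [tournament_select(ops, scores, penalties) for _ in range(100)]
```

With equal scores, the penalty term alone spreads selections across operators instead of letting one operator dominate, while a genuinely high-scoring operator still wins more often once scores diverge.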
The results illustrate that, compared with CPLEX and four baseline algorithms (Ant Colony Optimization (ACO), Tabu Search (TS), Adaptive Large Neighborhood Search (ALNS), and the Product Link-based Hybrid Heuristic Large Neighborhood Search (PLBH-LNS)) in terms of order fulfillment cost and CPU running time, the proposed IALNS algorithm obtains higher-quality local optimal solutions within a reasonable time. Moreover, compared with the order splitting and delivery strategy actually adopted by an e-commerce enterprise, the strategy obtained by the IALNS algorithm decreases the order fulfillment cost by about 26% on average, which verifies the practicability of the algorithm.
Price of KS Fairness in Two-agent for Number of Minimum Weighted Tardy Jobs and Maximum Tardiness
    LIU Zhenxuan, FAN Baoqiang, WU Yongzhi, LIU Tingting, JIANG Yanjun
    2025, 34(8):  60-65.  DOI: 10.12005/orms.2025.0241
In a multi-agent scheduling problem, where each agent has its own set of jobs and competes to use the same machine, the goal is to find all Pareto optimal solutions or a schedule of jobs that maximizes system utility, defined as the weighted sum of the agents’ utilities. However, each agent has its own objective, and even a system-optimum schedule may be unfair to the worse-off agent. A schedule that incorporates some criterion of fairness may instead be more acceptable to all agents. In this regard, it is interesting and important to introduce a fairness measure for resource distribution in multi-agent scheduling.
This paper discusses the price of fairness under a fairness measure in two-agent scheduling (agents A and B), where agent A aims to minimize the weighted number of tardy jobs and agent B aims to minimize the maximum job tardiness. Note that the maximum job tardiness is determined by the job with the maximum completion time, and agent B’s job will be processed continuously in the system-optimum schedule; we therefore assume that agent B has only one long job. The jobs of agent A have different weights, and all jobs have the same processing time. The price of fairness (PoF) is the maximum relative loss in overall system utility of a fair schedule with respect to the system-optimum schedule. The problem is to find a schedule with respect to a given fairness measure and to analyse the bound of the PoF.
Firstly, we introduce the concepts of an agent’s cost, utility, normalized utility, system utility, and Kalai-Smorodinsky (KS) fairness. KS fairness arises from classical max-min fairness, with each agent’s utility replaced by its normalized utility; the idea is to maximize the normalized utility of the worst-off agent. Because Pareto optimality aims to make full use of limited resources and create maximum benefit at minimum cost, this paper considers KS fairness schedules with respect to Pareto optimal schedules. We then obtain structural characteristics and optimality properties by analyzing Pareto optimal schedules and KS fairness schedules. We show that KS fairness schedules can be found in linear time and provide a tight bound on the PoF.
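In symbols (our notation, standard in the price-of-fairness literature, not quoted from the paper): let $u_i(\sigma)$ denote agent $i$'s normalized utility under schedule $\sigma$, $\mathrm{SW}(\sigma)$ the system utility, $\sigma^{*}$ a system-optimum schedule, and $\mathcal{P}$ the set of Pareto optimal schedules. Then

```latex
\[
\sigma_{\mathrm{KS}} \in \arg\max_{\sigma \in \mathcal{P}} \ \min_{i \in \{A,B\}} u_i(\sigma),
\qquad
\mathrm{PoF} \;=\; \frac{\mathrm{SW}(\sigma^{*}) - \mathrm{SW}(\sigma_{\mathrm{KS}})}{\mathrm{SW}(\sigma^{*})}.
\]
```

The bound analysis then asks how large this relative loss can be over all problem instances.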
For further research, it will be interesting to extend the two-agent scheduling model to more general cases, such as different job processing times and multiple due dates. Another direction is to study the price of fairness under proportional fairness and under different agent objectives, for example when both agents want to minimize the maximum tardiness.
Note on Single-machine Scheduling Problems with Learning Effect and Resource-dependence
    MAO Rongrong, WANG Yichun, FENG Wei, WANG Jibo
    2025, 34(8):  66-69.  DOI: 10.12005/orms.2025.0242
Scheduling problems with learning effects are common in actual production environments. For instance, when a worker assembles or processes a product, the time required depends on his or her knowledge, skill, and experience; as the learning effect takes hold, products processed toward the end of the schedule usually have shorter processing times. In the chemical industry, the processing time of a compound can be varied by increasing the amount of catalyst used. Similarly, in steel production, the length of the preheating time depends on the amount of fuel used: with sufficient fuel, the processing time is reduced. All of the examples above are influenced both by the learning effect and by the resources available during processing. Scheduling problems involving learning effects and resource allocation have received significant attention from scholars in recent years; their efficient use can improve production and processing efficiency, leading to increased economic benefits. This note considers single-machine scheduling problems with learning effects and resource-dependent processing times, in which the actual processing time of a job is a decreasing function of its position in the sequence and a linearly decreasing function of its resource consumption.
YU and CHENG (2008) discussed a single-machine scheduling problem in which the actual processing time of a job is affected by both a learning effect and the allocated resources. A set of jobs is to be processed on a single machine; all jobs arrive at time 0, and neither the machine nor the jobs can be interrupted. The objective is to determine the optimal job sequence and resource allocation such that the weighted sum of the total (weighted) completion time and the total resource consumption cost is minimized. The problem assumes that the learning effect of a job is an exponential function of the sum of the normal processing times of the previously processed jobs, while the actual processing time of a job decreases linearly with the resources allocated to it. The published results claimed that minimizing the sum of the total completion time and the total resource consumption cost is solvable in polynomial time, and that minimizing the sum of the total weighted completion time and the resource consumption cost is polynomially solvable in a special case (i.e., when the normal processing times and weights of the jobs satisfy an agreeable condition). This note first lists the published methods for obtaining the optimal job sequence and the optimal resource allocation. Then, three counter-examples are given to show that the published results are incorrect. Finally, the main reason for the error is presented, namely the treatment of the portion of the objective function that corresponds to the assignment problem. Future research may consider the time complexity of scheduling problems with learning effects and resource dependence, i.e., whether polynomial-time algorithms exist for these problems.
Energy Consumption Minimization Method of UAV Based on Joint Multi-variable Optimization
    ZHANG Lihua
    2025, 34(8):  70-76.  DOI: 10.12005/orms.2025.0243
Due to their flexibility and high line-of-sight link probability, unmanned aerial vehicles (UAVs) have been widely used in wireless communication systems. Acting as an aerial base station, a UAV can improve the coverage of the wireless communication system for ground users and enhance user rates. UAVs are usually powered by onboard batteries; given the weight and size constraints of a UAV, battery capacity is very limited, which severely restricts endurance. Therefore, to sustain service to ground users, reducing UAV energy consumption is one of the key issues in optimizing UAV-based applications. In addition, a UAV often serves multiple users simultaneously, and these users differ individually, for example in traffic demand. How to allocate resources to users reasonably according to their traffic, and thereby reduce energy consumption while meeting the users’ minimum rate requirements, is an urgent problem in UAV-based applications. Solving this problem improves the UAV’s energy utilization, extends its endurance, and improves service continuity for users.
To this end, for scenarios where a UAV serves multiple users, a power minimization algorithm based on user traffic prediction (PMTP) is proposed. The PMTP algorithm controls UAV energy consumption from three aspects: resource allocation, transmission power control, and UAV position. Firstly, user traffic is predicted by the Gaussian process regression method, and the minimum user rate is calculated. The resulting model is solved by jointly optimizing UAV resource allocation, transmission power, and position. This optimization problem is a mixed-integer nonlinear program coupling the three variables, and solving it directly involves heavy computation and high complexity.
Based on this, a suboptimal solution is obtained using the block coordinate descent method. The original problem is divided into three sub-problems that are solved in turn. Specifically, a sub-problem optimizing resource allocation for a given transmission power and UAV location is first established; it has a concave objective and linear constraints and can be solved directly with the CVXPY toolkit in Python. Then, taking the resource allocation solution as given, a sub-problem optimizing transmission power for a given UAV location is established; it is a convex optimization problem and can be solved directly. Finally, the obtained resource allocation and transmission power are taken as given to establish a sub-problem optimizing the UAV position; this sub-problem is a convex optimization problem, solved by the sequential deduction method.
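The block coordinate descent loop can be illustrated with a toy sketch (pure Python; the quadratic objective and the grid-search block minimizer are stand-ins for the paper's convex sub-problems, not its actual model):

```python
def block_coordinate_descent(f, blocks, minimize_block, iters=20):
    """Alternately minimize f over each block of variables while holding
    the others fixed, mirroring the three sub-problems (resource
    allocation, transmission power, UAV position)."""
    x = dict(blocks)
    for _ in range(iters):
        for name in list(x):
            x[name] = minimize_block(f, x, name)
    return x

# Toy separable objective: minimized at alloc=1, power=2, pos=3.
def f(x):
    return (x["alloc"] - 1) ** 2 + (x["power"] - 2) ** 2 + (x["pos"] - 3) ** 2

def minimize_block(f, x, name):
    # 1-D minimizer via a coarse grid search over [0, 5); each
    # sub-problem here is convex, so the grid minimum is near-optimal.
    return min((v * 0.01 for v in range(500)),
               key=lambda v: f({**x, name: v}))

sol = block_coordinate_descent(f, {"alloc": 0.0, "power": 0.0, "pos": 0.0},
                               minimize_block)
```

Because each block update can only decrease the objective, the iterates converge to a (block-wise) stationary point, which for the paper's convex sub-problems yields the reported suboptimal solution.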
Finally, the performance of the PMTP algorithm in traffic prediction accuracy, average user transmission rate, and UAV energy consumption is analyzed through simulation experiments. The simulation results show that the Gaussian process regression method can accurately predict user traffic, and that the PMTP algorithm improves the average user transmission rate: with a bandwidth of 50MHz, the average transmission rate for 10 users reaches 250Mbps. Compared with the maximum-power and random algorithms, the PMTP algorithm reduces UAV energy consumption.
The PMTP algorithm uses Gaussian process regression to predict user traffic. Then, with the reliability of data transmission as the constraint, an optimization problem over UAV position, transmission power, and resource allocation is established and solved by the block coordinate descent method. However, only the single-UAV scenario is considered; extending the PMTP algorithm to multi-UAV communication scenarios is the next research direction of this paper.
Research on Text Feature Selection Method Based on Fixed Initial Population Genetic Algorithm
    WANG Zhaogang
    2025, 34(8):  77-82.  DOI: 10.12005/orms.2025.0244
Selecting text features to reduce feature dimensionality, improve classification accuracy, and cut the time consumed by classification is a problem that text classification tasks inevitably face in the context of big data. In existing text feature selection methods, genetic algorithms (GAs) and other optimization algorithms are often used to transform text feature selection into a feature-combination optimization problem with classification accuracy as the goal.
Research on GA-based text feature selection has achieved significant results in reducing the search range of feature selection, reducing dependence on classification models, and mitigating the adverse effects of genetic randomness in selection, crossover, and mutation. However, existing research has largely overlooked the randomness of the GA’s initial population, which negatively affects iterative feature selection. A random initial population ignores the contribution of feature words to classification, which to some extent makes it harder for the algorithm to converge quickly near the global optimum.
Therefore, this article proposes CHI_FIPGA, a text feature selection method that combines the chi-square (CHI) test with a fixed-initial-population GA. The initial population is built from feature words with higher CHI values; by selecting different numbers of feature words, the individuals in the initial population remain distinct. The classification accuracy of the classification model is used as the fitness, and selection, crossover, and mutation operations iteratively optimize over the full set of feature words.
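A minimal sketch of the fixed-initial-population construction (pure Python; the scores below stand in for real chi-square statistics computed from a corpus, and the sizing rule for individuals is an assumption):

```python
def fixed_initial_population(chi_scores, pop_size):
    """Build a GA population of bit masks over the feature words.
    Instead of random masks, individual i keeps the top-k features by
    chi-square score, with k growing across individuals, so every
    chromosome starts from classification-relevant words while the
    differing sizes keep individuals distinct."""
    ranked = sorted(chi_scores, key=chi_scores.get, reverse=True)
    n = len(ranked)
    vocab = sorted(chi_scores)          # fixed gene order: alphabetical vocab
    population = []
    for i in range(pop_size):
        k = max(1, (i + 1) * n // pop_size)   # assumed sizing rule
        chosen = set(ranked[:k])
        population.append([1 if w in chosen else 0 for w in vocab])
    return population

# Toy chi-square scores for five feature words.
scores = {"price": 9.1, "goal": 7.4, "court": 5.2, "the": 0.3, "vote": 8.8}
pop = fixed_initial_population(scores, pop_size=4)
```

The GA then applies selection, crossover, and mutation to these masks as usual; only the starting point changes from random bits to CHI-informed bits.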
Multiple Chinese text classification datasets are selected and preprocessed, including word segmentation, stop-word removal, matrix transformation, and feature-word weighting. Different classification models, such as multi-layer perceptron neural networks, random forests, naive Bayes, K-nearest neighbors, and decision trees, are used to compare the optimal solutions of CHI_FIPGA with those of GA, CHI_GA, PSO, and CHI_PSO.
The experimental results indicate that, compared with the random-initial-population GA, CHI_FIPGA improves classification accuracy by 14% on average and reduces the number of feature words by 66% on average. Compared with CHI_GA, it improves average classification accuracy by 9% and decreases the average number of feature words by 36%. Compared with PSO, it improves classification accuracy by 7% on average and reduces the number of feature words by 61% on average. Compared with CHI_PSO, it improves classification accuracy by 3.4% on average and reduces the number of feature words by 25%.
CHI_FIPGA currently relies solely on the CHI method for feature-word evaluation and ranking. Future work could use various filtering methods such as IG, TF-IDF, and the F-value to evaluate and rank feature words comprehensively, analyze the differing contributions of feature words to different categories, and account for the structural characteristics of classification models during evaluation and ranking. This is a feasible path toward further improving the quality of the fixed initial population.
Reputation Certification Strategy of Hybrid E-commerce Platforms under Online Retailer Competition
    DUAN Yulan, ZHANG Lei
    2025, 34(8):  83-90.  DOI: 10.12005/orms.2025.0245
In recent years, hybrid e-commerce platforms such as Amazon and eBay in the United States, Flipkart in India, and JD.com and Suning in China have grown strongly, and the role of marketplace channels has become increasingly prominent. According to Amazon’s financial report for the fourth quarter of 2022, sellers in marketplace channels accounted for 59% of total sales. However, the rapid development of marketplace channels has also brought a series of problems. Because ownership of marketplace products belongs to the online retailers and platform supervision is limited, problems such as delayed delivery and poor logistics service are common among online retailers in marketplace channels. If these problems continue to trouble consumers, the further development of e-commerce platforms will be seriously restricted.
In response to the above phenomena, some e-commerce platforms (such as JD.com and Suning) have launched a good-store reputation certification for online retailers in their marketplace channels. Specifically, the e-commerce platform evaluates the credibility of online retailers in the marketplace channel through a number of indicators (opening time, store status and integrity, etc.), and adds an additional “good store” mark to online retailers that meet the certification standard. According to signal transmission theory, the reputation certification strategy can transmit signals of a store’s product quality and service to consumers and help consumers select high-quality products and services. After a platform launches reputation certification, some online retailers actively improve their store operations to meet the certification standards, while others maintain the status quo. Therefore, this paper discusses the following questions: (1) Launching reputation certification may help the platform obtain higher commission fees from online retailers but reduce the revenue of its self-operated products; should the e-commerce platform launch reputation certification for online retailers? (2) When the platform launches reputation certification, completing it helps an online retailer increase revenue but incurs a cost; how should competing online retailers decide? (3) Can reputation certification enable all participants in the online sales system to achieve a win-win outcome, and if so, under what conditions?
To answer these questions, this paper constructs an online sales system composed of a hybrid e-commerce platform and two competing online retailers. Depending on the reputation certification strategies of the platform and the two retailers, three scenarios may occur: the platform does not launch reputation certification (scenario NN); the platform launches reputation certification but only one online retailer completes it (scenario CN or NC); and the platform launches reputation certification and both competing retailers complete it (scenario CC). Solving the three scenarios with game theory and comparing the optimal solutions yields the main conclusions: only when the commission rate and consumers’ acceptance of the marketplace channel are both high will the e-commerce platform launch reputation certification. When the platform launches reputation certification, both online retailers complete it if the certification cost is low; when the cost is in a medium range, only one retailer completes it. In addition, an increase in consumers’ acceptance of uncertified products or in the commission rate helps competing online retailers achieve differentiated reputation certification, while an increase in consumers’ acceptance of the marketplace channel helps them complete certification at the same time. These conclusions can provide decision-making references for the reputation certification of e-commerce platforms and for online retailers’ service investment strategies.
Research on Trade-off between Present Value of Program Owner Costs and Robustness Thresholds under Different Payment Methods
    FENG Hui, ZHANG Yan, NIE Ruiqi
    2025, 34(8):  91-98.  DOI: 10.12005/orms.2025.0246
Firstly, we define the research question and analyze the mechanism through which uncertainty affects the present value of owner costs under different payment methods. Secondly, we analyze the connotation and characteristics of program robustness and propose the concept of a program robustness threshold. Thirdly, we construct a trade-off model between the present value of owner costs and the robustness threshold under different payment methods, and analyze the impact of different parameters and levels on the present value of owner costs. Fourthly, we study the robustness thresholds, buffer settings, and robustness constraints of contracted projects under different payment methods. Finally, a case study based on the Yangtze River Protection Water Environment Governance Program Z is conducted and management insights are proposed. The results indicate a trade-off between the present value of costs under different degrees of delay and the robustness threshold under different payment methods. The discount rate is the most sensitive factor, followed by the buffer cost coefficient, robustness threshold, claim rate, and contracted project duration compression rate. The sensitivity of these five factors to the present value of delay costs is greater than their sensitivity to the present value of costs, the former being 30 to 40 times the latter. The results provide a basis for owners to develop a robust program baseline schedule and effectively control the present value of costs.
(1) Programs are the fundamental units for implementing large-scale projects. Compared with the single-stakeholder case, delays of contracted projects under multiple stakeholders have a more significant adverse impact on programs, i.e., a more pronounced cascading effect. Under a single-stakeholder system, when activities are delayed, a contractor can take remedial measures to expedite work without affecting other contractors, so the linkage effect is very limited. In contrast, delays of contracted projects under multiple stakeholders cause significant cascading effects on other contractors. In the implementation of programs, different payment methods therefore have different impacts on the owner’s cost flow distribution and present value of costs in uncertain environments, so minimizing the owner’s present value of costs under different payment methods while ensuring program schedule robustness, by adding time buffers and setting robustness thresholds, is both necessary and complex.
(2) According to current rules on compensation and claims for delay damages (such as FIDIC), losses (in cost and/or duration) caused to other contractors by delays of a contracted project are borne by the owner, and different payment methods (one-time, monthly, milestone, etc.) affect the present value of program owner costs. The owner therefore hopes to assess its risks accurately and to reduce the losses caused by contracted project delays by setting robustness thresholds for contracted projects. At the same time, to match the owner’s decision preferences, the trade-off between the present value of owner costs and the robustness threshold must be clarified.
Part 1: Defining the robustness and robustness threshold of water environment governance programs. First, a water environment governance program refers to a group of interrelated and centrally managed water environment governance contracted projects in a contractual environment. Second, the robustness of a large-river water environment governance program reflects the program’s ability to resist interference and its adaptability to complex and changing environments. Third, the robustness threshold for large-river water environment governance is a reasonable value set in an uncertain environment to reduce adverse interactions between stakeholders, avoid frequent adjustments of program plans due to uncertainty, and ensure robust execution of program plans; program plans need a certain degree of robustness, and the threshold is set accordingly.
Part 2: Analyzing the impact mechanism of three payment methods (one-time, monthly, and milestone payments) on the present value of program owner costs. First, the impact of the one-time payment method depends on the completion time of each contracted project: as long as the completion time remains unchanged, its impact on the present value of costs does not change. Second, the impact of the milestone payment method is related not only to the program schedule and its execution status, but also to the milestone schedule of each contracted project and its execution status. Third, under the monthly payment method, each month’s payment is calculated from the quantity of work the contractor actually completed that month in compliance with the measurement rules, and the owner pays the contractor the corresponding amount.
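In standard discounted-cash-flow notation (ours, not the paper's), with discount rate $r$ per period, the three payment methods yield present values

```latex
\[
\mathrm{PV}_{\text{one-time}} = \frac{C}{(1+r)^{T}},
\qquad
\mathrm{PV}_{\text{monthly}} = \sum_{t=1}^{T} \frac{C_t}{(1+r)^{t}},
\qquad
\mathrm{PV}_{\text{milestone}} = \sum_{m=1}^{M} \frac{C_m}{(1+r)^{t_m}},
\]
```

where $C$ is the lump sum paid at completion time $T$, $C_t$ is the work measured in month $t$, and $C_m$ is the amount tied to milestone $m$ reached at time $t_m$. A delay pushes $T$ or $t_m$ outward and so reshapes the owner's cost present value differently under each scheme, which is the mechanism the trade-off model exploits.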
Part 3: For delays arising during the implementation of the program’s contracted projects, with the goal of minimizing the present value of owner costs, the relationship between the present value of program costs and the robustness threshold is studied under the one-time and monthly payment methods. First, a trade-off model between the present value of program costs and the robustness threshold is constructed under both payment methods. Second, the sensitivity of the relevant parameters under different payment methods to the optimized present value of owner costs is analyzed.
The calculation results indicate a trade-off between the present value of costs and the robustness threshold under different payment methods. The discount rate is the most sensitive factor, followed by the buffer cost coefficient, robustness threshold, claim rate, and contracted project duration compression rate; the sensitivity of these five factors to the present value of delay costs is 30 to 40 times their sensitivity to the present value of costs.
A future research direction is a cost present value optimization model based on the robustness threshold under mixed payment methods.
Dynamic Energy-saving Scheduling Method Based on Deep Reinforcement Learning for Flexible Job Shop
    LU Xinyi, HAN Xiaolong
    2025, 34(8):  99-104.  DOI: 10.12005/orms.2025.0247
The accelerating pace of economic globalization has exposed the manufacturing industry to fierce market competition and a volatile production environment. Responding effectively to the challenges posed by various dynamic events has therefore become a key issue for the survival of businesses. In recent years, many enterprises have focused on flexible manufacturing models. In this context, the dynamic flexible job-shop scheduling problem (DFJSP) has attracted extensive attention from industry and academia.
Random events associated with workpieces are one driver of dynamic scheduling. A typical problem is the dynamic flexible job-shop scheduling problem with random job arrivals (DFJSP-RJA). To address it, this paper develops a mixed-integer programming model aligned with green manufacturing goals. The optimization objective is to simultaneously minimize the total production delay and the energy consumption, which includes machine idling and processing energy consumption.
Since deep reinforcement learning (DRL) can achieve both high solution quality and fast response in dynamic environments, and scheduling rules are widely used as an immediate response method in previous studies of dynamic scheduling, this paper combines the two and proposes DDQN-ST9, an algorithm based on composite scheduling rules and DRL. First, based on the objectives of on-time completion and energy saving, six production state feature indicators with values in [0,1] are defined, and three scheduling rules are designed for operation and machine selection respectively; these are used to construct the feature vectors and action space of the algorithm. Then, prioritized experience replay based on a Sum Tree is introduced on top of the DDQN algorithm to accelerate convergence and improve training efficiency. DFJSP-RJA can be regarded as a Markov decision process (MDP) in which, after a perturbation occurs, the agent integrates the current production state and selects the most suitable of the nine composite scheduling rules to schedule both the original and the newly arriving jobs.
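The Sum Tree structure underlying the prioritized experience replay can be sketched as follows (a generic textbook-style implementation, not the authors' code):

```python
import random

class SumTree:
    """Binary tree whose leaves store transition priorities and whose
    internal nodes store the sum of their children, so a transition can
    be sampled proportionally to its priority in O(log n)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity)    # 1-based; leaves at [capacity, 2*capacity)
        self.data = [None] * capacity
        self.ptr = 0

    def add(self, priority, transition):
        idx = self.ptr + self.capacity
        self.data[self.ptr] = transition
        self.update(idx, priority)
        self.ptr = (self.ptr + 1) % self.capacity   # overwrite oldest when full

    def update(self, idx, priority):
        change = priority - self.tree[idx]
        while idx >= 1:                       # propagate the change to the root
            self.tree[idx] += change
            idx //= 2

    def sample(self):
        s = random.uniform(0.0, self.tree[1])   # tree[1] holds the total priority
        idx = 1
        while idx < self.capacity:              # descend toward the chosen leaf
            left = 2 * idx
            if s <= self.tree[left]:
                idx = left
            else:
                s -= self.tree[left]
                idx = left + 1
        return self.data[idx - self.capacity]

tree = SumTree(capacity=4)
for i, p in enumerate([1.0, 2.0, 3.0, 4.0]):
    tree.add(p, f"transition-{i}")
```

High-priority (high TD-error) transitions are drawn more often, which is what accelerates DDQN convergence relative to uniform replay.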
To comprehensively test the performance of DDQN-ST9 on the dynamic energy-saving scheduling problem for flexible job shops, the algorithm is simulated on several benchmark instances from the Kacem and Brandimarte series. Firstly, the nine composite scheduling rules proposed in this paper are compared with five classical scheduling rules from the literature on Kacem and Brandimarte instances of different sizes, verifying the superiority of DDQN-ST9 in scheduling rule design, scheduling algorithm design, and algorithm structure improvement. Secondly, by varying both the delivery urgency factor and the exponential distribution governing random job arrivals, several Brandimarte instances with different delivery requirements and market demands are solved, verifying that DDQN-ST9 can effectively cope with a variety of production environment configurations.
This paper combines DRL with scheduling rules for the dynamic energy-saving scheduling problem in a flexible job shop. The approach can be extended to other shop environments in the future and to other dynamic events affecting production scheduling, such as processing time changes and machine failures. How to better optimize the solution of complex dynamic scheduling problems with deep reinforcement learning can also be investigated.
    Application Research
    Small Enterprise Default Discriminant Model Based on Weight of Maximization Profit
    WANG Shanshan, ZHOU Ying, CHI Guotai, DONG Yanru
    2025, 34(8):  105-112.  DOI: 10.12005/orms.2025.0248
    In recent years, the number of small enterprises in China has continued to grow; they have become the main force in economic development and an important force in creating jobs and promoting innovation and entrepreneurship. However, owing to characteristics of small enterprises such as incomplete financial information, high operating risk, scarce collateral, and low credit ratings, loan financing has long been difficult, which constrains their development. Solving the financing difficulties of small enterprises is of great significance for developing the national economy and promoting sustainable economic growth. Given cost-benefit considerations, it is difficult for banks to accurately distinguish the default risk of small enterprises. How to establish a reasonable default discriminant model to help alleviate small enterprises' financing difficulties has therefore become an urgent problem. Establishing such a model is of great significance to commercial credit and to credit decisions between enterprises and financial institutions, including banks.
    In the domain of credit risk, because enterprise datasets are high-dimensional and contain redundant and irrelevant features, selecting the optimal feature subset helps reduce both the dimensionality of the data and computation costs, and enhances the predictive capability of the classifier. Determining the optimal feature subset that effectively identifies the default status of enterprises is therefore worth considering. This paper selects the optimal feature set from enterprise data covering internal financial factors, non-financial factors, and external macroeconomic factors. Firstly, Bootstrap samples are generated by repeatedly resampling the training data. In each subsample, we select the features that contribute to default discrimination by computing feature importance scores measured by information gain using extreme gradient boosting (XGBoost); candidate feature subsets are then generated by taking the intersection of the features selected in each subsample; finally, we select the optimal feature subset using a support vector machine (SVM) with the objective of maximizing the AUC.
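    The bootstrap-and-intersect step described above can be sketched as follows. For brevity, this illustration scores binary features by information gain directly rather than via XGBoost, and the function names, parameters, and top-k rule are hypothetical:

```python
import math
import random


def info_gain(xs, ys):
    """Information gain of a binary feature `xs` with respect to binary labels `ys`."""
    def entropy(labels):
        n = len(labels)
        if n == 0:
            return 0.0
        p = sum(labels) / n
        return -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)

    n = len(ys)
    h = entropy(ys)
    for v in (0, 1):                                  # subtract conditional entropy
        sub = [yv for xv, yv in zip(xs, ys) if xv == v]
        h -= len(sub) / n * entropy(sub)
    return h


def candidate_features(X, y, n_boot=20, top_k=3, seed=0):
    """Intersect the top-k features (by information gain) across bootstrap samples."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    selected = None
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]    # one bootstrap resample
        Xb = [X[i] for i in idx]
        yb = [y[i] for i in idx]
        gains = [info_gain([row[j] for row in Xb], yb) for j in range(d)]
        top = set(sorted(range(d), key=lambda j: -gains[j])[:top_k])
        selected = top if selected is None else selected & top
    return sorted(selected)
```

In the paper's pipeline the resulting candidate subsets would then be screened by an SVM that maximizes the AUC; that step is omitted here.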
    Weights are a key factor affecting the accuracy of a default discriminant model, and how to set them reasonably has received widespread attention. In model building, weights reflect the importance of features; assigning different weights to the same features yields different, sometimes completely opposite, results. How to determine feature weights is therefore the crucial problem discussed in this article. This paper adopts a profit-driven approach to determining the feature weight vector. The profit comprises the benefits associated with correctly classified non-default enterprises minus the losses associated with misclassified non-default enterprises. An objective function is established by relating the discriminant result of the SVM to the profit obtained from distinguishing the two groups of enterprises. A nonlinear programming model that maximizes this objective function is solved to find the optimal penalty coefficient, and the SVM model built with this coefficient yields the optimal weight vector.
    This article uses the credit data of small enterprises from a regional commercial bank as the research sample to validate the effectiveness of the proposed model. The empirical study shows that the optimal feature set covers the 5C principles of credit evaluation. Meanwhile, the profit and comprehensive accuracy of the proposed default discriminant model are higher than those of six benchmark models, including logistic regression. Furthermore, the findings illustrate that non-financial features have the greatest impact on default discrimination, with a weight of 0.475, and that per capita disposable income of urban residents is the single most important indicator, with a weight of 0.12. This study provides a reference for the credit decisions of commercial banks and new insights into credit risk assessment for small enterprises.
    The data of small enterprises is characterized by a highly imbalanced class distribution between default samples (the minority class) and non-default samples (the majority class). In imbalanced classification, minority samples are often ignored or misclassified. Future work can consider a suitable sampling technique to improve the identification rate of default enterprises and thus strengthen the profit-maximization effect.
    Construction of Judicial Mediation Model: Taking Judicial Mediation in Cases Involving Statutory Inheritance and Support Disputes as Example
    WU Daoxia, JIANG Chunying, LIN Xinyu
    2025, 34(8):  113-119.  DOI: 10.12005/orms.2025.0249
    Judicial mediation in China has undergone three developmental stages, evolving from an initial focus on mediation to a later emphasis on adjudication; now, in response to a sharp increase in pending cases and the frequent escalation of social conflicts, China has adopted a "combination of mediation and adjudication." The positive value of judicial mediation in "reducing disharmony and promoting social harmony" is particularly prominent at present. At its core, the institutional value of judicial mediation lies in achieving a Pareto improvement in social welfare through innovative models of interest distribution: transcending the zero-sum game of litigation and establishing a positive-sum game framework that promotes social harmony. However, the current judicial mediation model in China still exhibits strongly rigid characteristics, resolving disputes strictly according to the letter of the law and thereby excluding the possibility of seeking appropriate solutions from a holistic view of the dispute, which is inconsistent with the aforementioned institutional value. To address this dilemma, this paper identifies the characteristics of cases suitable for judicial mediation and explores how to construct a corresponding judicial mediation model that enables parties to achieve a positive-sum game and increases the factors contributing to social harmony.
    This paper builds on Lindblom's three-element theory of authority, exchange, and persuasion, combined with nonlinear programming, to construct a value-creation model. Specifically, it quantifies the distribution of property resources (divisible/indivisible assets) and non-property benefits (level of apology, visit frequency, etc.), and, under the constraints of total resource quantity and pre-set acceptance thresholds, establishes a total-benefit-maximization objective function solved by optimization algorithms. A simplified Shapley value algorithm is employed to calculate the cooperation surplus by comparing the net benefit difference between the cooperative alliance and the non-cooperative state, and to allocate the value added according to marginal-contribution weights.
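    The Shapley-style allocation of a cooperation surplus can be illustrated on a toy two-party game; the payoff numbers below are hypothetical and are not taken from the cases studied in the paper:

```python
from itertools import permutations


def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += value(with_p) - value(coalition)  # marginal contribution
            coalition = with_p
    return {p: t / len(orders) for p, t in totals.items()}


# Hypothetical two-heir game: each heir acting alone realizes 3 units of value;
# cooperating (e.g., jointly selling an indivisible house) realizes 10.
v = lambda S: {0: 0, 1: 3, 2: 10}[len(S)]
surplus = v({"A", "B"}) - (v({"A"}) + v({"B"}))  # cooperation surplus = 4
phi = shapley_values(["A", "B"], v)              # symmetric players split 10 evenly
```

Here each heir's Shapley value is 5: the standalone payoff of 3 plus an equal share of the surplus of 4, which is the kind of value-added split the mediation model distributes.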
    The article analyzes successful mediation cases involving statutory inheritance and defamation disputes from the Peking University Law Database, extracting two dimensions of value creation. By deconstructing the dual dimensions of property income appreciation and non-property income optimization, the paper proposes three criteria for case selection: cases must possess dual-dimensional interest-appreciation potential, feature a composite interest-appreciation exchange structure, and meet the equilibrium conditions of dynamic game theory. On this basis, it identifies four common types of cases suitable for mediation: family relationship restoration cases, property rights restructuring cases, cooperative surplus development cases, and social capital regeneration cases. Using cases involving statutory inheritance and support disputes as examples, the study demonstrates how judicial mediation can be implemented by quantifying value-added distribution with a nonlinear programming model. The research indicates that such judicial mediation can achieve dual optimization of legal outcomes and social governance efficiency, providing theoretical support and operational pathways for the paradigm shift of judicial mediation from "resolving disputes" to "creating value."
    Future research could further explore quantification methods for non-property gains, such as developing assessment tools based on dimensions like emotional accounts and reputational capital, combined with intelligent technologies like natural language processing and big data analysis to extract non-material gain indicators. Additionally, interdisciplinary collaboration should be deepened, incorporating assessment methods from psychology and sociology to establish a more comprehensive quantification system for non-material gains.
    Evaluation and Selection of New Energy Vehicles Based on Online Consumer Reviews
    LU Xiaoxue, XU Haiyan, HU Limei, Yangzi Jiang
    2025, 34(8):  120-126.  DOI: 10.12005/orms.2025.0250
    New energy vehicles have become one of the emerging trends in the development of the automobile industry by virtue of their clean and environmentally friendly advantages. Over the past decade, the Chinese government has implemented a series of financial policies and tax benefits to promote the development of China's new energy vehicle industry. At the end of 2022, the government announced the termination of the purchase subsidy for new energy vehicles. Rapid industry development and policy changes have shifted the sector from policy-driven to market-oriented. With the growing maturity of e-commerce and third-party evaluation websites, consumers face the onerous task of extracting useful information from massive online reviews to make informed purchase decisions. In this paper, we introduce a novel model for evaluating and selecting new energy vehicles based on online consumer reviews in the Internet big-data environment. Based on the evaluation outcomes, the study provides recommendations and references for governmental bodies, enterprises, and consumers involved in the design, recommendation, and selection of new energy vehicles, thereby contributing to the "dual-carbon" goal, promoting sustainability, and fostering the robust development of the new energy vehicle industry.
    First, this paper establishes a comprehensive evaluation index system for new energy vehicles. Leveraging data mining and text analysis techniques, it crawls, processes, and restructures online consumer review data about new energy vehicles from third-party websites. Using the Latent Dirichlet Allocation (LDA) topic model, it identifies consumer preference information, which serves as the basis for decision makers to identify evaluation attributes. This information, together with market analysis and the specific characteristics of new energy vehicles, informs the construction of a pertinent set of evaluation indices, thereby establishing a robust evaluation index framework. Next, we develop an evaluation model based on multi-attribute decision making. Combining sentiment lexicon analysis with probabilistic linguistic term sets, the sentiment preferences in textual comments are fully converted into a probabilistic linguistic evaluation matrix to avoid loss of information. To account for information sharing and attribute correlation in the evaluation information, we introduce the λ fuzzy measure and the Shapley function to derive the weights of the interrelated attributes. Considering boundedly rational behavior in product evaluation and choice, we further integrate the generalized TODIM method to construct a new ranking model that fully portrays consumers' loss-aversion psychology.
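    One way to picture the sentiment-to-probabilistic-linguistic conversion mentioned above: review-level sentiment scores for an attribute can be binned into linguistic terms whose relative frequencies form a probabilistic linguistic term set. The term names and the equal-width bins below are illustrative assumptions, not the paper's exact scheme:

```python
def to_plts(scores, terms=("poor", "fair", "good")):
    """Map sentiment scores in [0,1] to a probabilistic linguistic term set:
    each score is binned to one linguistic term, and the probabilities are the
    relative frequencies of the terms over all reviews of one attribute."""
    counts = {t: 0 for t in terms}
    width = 1.0 / len(terms)
    for s in scores:
        k = min(int(s / width), len(terms) - 1)  # bin index; clamp handles s == 1.0
        counts[terms[k]] += 1
    n = len(scores)
    return {t: c / n for t, c in counts.items() if c > 0}
```

For instance, four reviews of one attribute with scores 0.1, 0.9, 0.95, and 0.5 would yield the set {poor: 0.25, fair: 0.25, good: 0.5}, one entry of the probabilistic linguistic evaluation matrix.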
    This paper takes four new energy vehicles as the evaluation objects and applies the constructed model to their evaluation and selection. Sensitivity analysis shows that the ranking results are robust to changes in the degree of risk aversion in the present model. A comparative analysis further verifies that the results obtained with the present model are more accurate and effective than those of existing models.
    The study shows that a data-driven product evaluation model requires a scientific and reasonable evaluation index system, and that transforming the multiple emotional preferences found in textual comments into actionable insights is paramount. Additionally, our paper considers information sharing among online comments and the correlation between evaluation attributes, while accommodating consumers' loss-aversion tendencies. The outcome of this comprehensive approach aligns more closely with the genuine evaluation of new energy vehicles, carrying substantial theoretical and practical significance for the selection and assessment of these vehicles.
    Option Pricing Based on Non-affine Double Heston Stochastic Volatility Jump-diffusion Model
    SUN Youfa, CHEN Jiaqi, GONG Yishan, LIU Caiyan
    2025, 34(8):  127-133.  DOI: 10.12005/orms.2025.0251
    Out-of-the-money (OTM) options with short expiration are quite popular in the market due to their low prices and lottery-like payoffs. However, accurately pricing this type of option has always been a great challenge for the financial industry because of the strong influence of both market illiquidity and sentiment. Existing extensions of the Heston stochastic volatility model, such as relaxing the square-root specification of the volatility diffusion, introducing a jump-diffusion process, and adding stochastic factors, have their own applicable scenarios but generally fail to significantly improve the pricing accuracy of short-term vanilla options. For this reason, this paper integrates several feature structures that have proved effective and are widely used, namely the non-affine volatility structure, the Poisson jump structure, and the two-factor volatility structure, into the Heston model, yielding a non-affine double Heston stochastic volatility model with jumps (NDHJ).
    Given this comprehensive model, obtaining an analytic option pricing formula for practical use in real markets is urgent but quite challenging. In this paper, we apply the local perturbation method and the Fourier-Sinc approach to derive an approximate explicit formula for the European option price. In detail, we decompose the conditional characteristic function into a deterministic component and an undetermined part, which is further expanded into a series of terms by the perturbation method. Provided with a tractable characteristic function, the Fourier-Sinc method allows massive numbers of option prices to be computed numerically and efficiently in parallel. In this way, our approach achieves a good tradeoff between computational efficiency and accuracy.
    Numerical experiments and empirical evidences show that:
    (1)The price path simulated by the NDHJ model exhibits more volatility than that of its reduced version, the non-affine one-factor stochastic volatility model with jumps (NHJ), even under the same parameter settings. This performance of the NDHJ model is attributable to its two volatility components, which drive very stable and quite steep paths, respectively. As a result, the NDHJ model exceeds the NHJ model in capturing statistical characteristics such as the excess kurtosis, fat tails, and skewness empirically observed in the probability density function of asset returns in real markets.
    (2)Compared with the classical Heston model, the introduction of the non-affine structure and the jump-diffusion process produces steeper implied volatility (IV) curves. The NHJ model cannot simultaneously fit both the steep short-maturity IV curve and the smooth long-maturity IV curve, and is therefore unable to characterize high-volatility markets. The NDHJ model is thus needed to capture the overall characteristics of option IV surfaces, especially for short-term OTM options, across different dimensions.
    (3)Numerical experiments show that the pricing formula provided in this paper has high accuracy, and the NDHJ model outperforms the alternatives in pricing short-term OTM options.
    (4)The empirical study validates the good performance of the NDHJ model in forecasting SSE 50ETF option prices. We select the close prices of SSE 50ETF options from January to June 2023 as the sample, excluding options with fewer than 5 remaining trading days and those violating the no-arbitrage condition. The empirical evidence shows that, compared with the Heston model and the NHJ model, the NDHJ model has the smallest in-sample fitting and out-of-sample prediction errors, and its overall pricing accuracy for options of different maturities and moneyness is improved, especially for short-term OTM options.
    The marginal contribution of this paper is twofold: first, it extends the application of the local perturbation method from one-dimensional volatility models to two-dimensional volatility models with jumps; second, it provides a universal framework for option valuation under a generalized stochastic volatility model.
    A Decomposition-integration Model with Staged Granularity Reconstruction for Daily Water Supply Forecasting
    BAI Yun, YAN Zhengjie, ZENG Bo, CHEN Guoqiang, XIE Jingjing
    2025, 34(8):  134-140.  DOI: 10.12005/orms.2025.0252
    A stable water supply is the cornerstone of social stability and economic development, and accurately forecasting the water supply enables cities to allocate resources more effectively, so as to avoid waste and unnecessary costs in the water supply planning and management.
    Inspired by the ideas of "divide and conquer" and "granularity reconstruction", this paper proposes a decomposition-integration forecasting model with staged granularity reconstruction. First, we decompose the original time series into multiple intrinsic mode functions (IMFs). Then, we perform staged granularity reconstruction on the IMFs: a first reconstruction based on time-frequency features (identifying granularity information at different scales) and a second reconstruction based on complexity evaluation (improving the representativeness of high-frequency granularity). Finally, we construct deep reservoir computing networks (DeepLiESN) on each granule after staged granularity reconstruction and integrate the results as the final forecast. The staged granularity reconstruction method proposed in this paper improves local feature extraction (especially of high-frequency features), thereby improving the accuracy of the decomposition-integration model.
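    The two reconstruction stages can be sketched as follows, using zero-crossing rate as the time-frequency feature and permutation entropy as the complexity measure; both proxies and all thresholds are illustrative assumptions, since the abstract does not specify the exact criteria:

```python
import math


def zero_cross_rate(x):
    """Fraction of adjacent sample pairs with a sign change (frequency proxy)."""
    return sum(1 for a, b in zip(x, x[1:]) if a * b < 0) / (len(x) - 1)


def permutation_entropy(x, m=3):
    """Complexity proxy: entropy of ordinal patterns of length m."""
    counts = {}
    for i in range(len(x) - m + 1):
        pattern = tuple(sorted(range(m), key=lambda k: x[i + k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    n = sum(counts.values())
    return -sum(c / n * math.log(c / n) for c in counts.values())


def staged_reconstruction(imfs, freq_th=0.2, cx_th=1.0):
    """Stage 1: split IMFs into high/low frequency groups by zero-crossing rate.
    Stage 2: within the high-frequency group, pool the most complex IMFs
    (permutation entropy above cx_th) into one granule to curb random noise."""
    high = [f for f in imfs if zero_cross_rate(f) >= freq_th]
    low = [f for f in imfs if zero_cross_rate(f) < freq_th]
    noisy = [f for f in high if permutation_entropy(f) > cx_th]
    clean = [f for f in high if permutation_entropy(f) <= cx_th]
    pooled = [sum(vals) for vals in zip(*noisy)] if noisy else []
    return ([pooled] if pooled else []) + clean + low
```

Each returned granule would then be forecast by its own reservoir network and the forecasts summed, mirroring the decomposition-integration pipeline above.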
    To verify the effectiveness of the proposed model, this work conducts two types of tests (i.e., against four independent models and two reconstruction patterns). The comparison with independent models reveals that DeepLiESN, by introducing leaky integrator neurons and a deep learning framework, improves its ability to describe nonlinear systems and temporal features, enhances short-term memory capacity, captures rich inherent information from the data, and better tracks the dynamic evolution of daily urban water supply. The comparison of reconstruction patterns reveals that staged reconstruction enhances feature representation, particularly in overcoming mixed-frequency features: it improves the effective use of high-frequency granularity information while reducing random interference in high-frequency granules. The proposed model comprehensively combines the advantages of single-model forecasting and staged granularity reconstruction. As a result, it achieves the best forecasting performance, with (1)global errors of MAE = 1053 m³/d, RMSE = 1397 m³/d, and MAPE = 0.577%; and (2)an individual error distribution in which relative errors within (0,1%) account for 81.74%, within (0,2%) for 97%, and within (0,3%) for 100%.
    In summary, the proposed decomposition-integration model with staged granularity reconstruction is enhanced from three perspectives: (1)converting one-dimensional mixed information into multidimensional components to reduce temporal complexity, (2)learning time-frequency features and entropy probability features of multidimensional components for granularity reconstruction, particularly for mitigating high-frequency noise interference, and (3)integrating deep reservoir computing networks to improve the learning of complex nonlinear system characteristics. This model can offer accurate decision-making support for urban daily water supply management.
    Research on Algorithmic Price Discrimination Supervision under Dynamic Rewards and Punishments Mechanism
    ZHANG Jincan, YANG Jinhang, ZHANG Juntao
    2025, 34(8):  141-147.  DOI: 10.12005/orms.2025.0253
    In the new form of the digital economy, the deep connection between algorithm technology based on massive data and application scenarios such as travel, online shopping, and on-demand delivery has injected strong momentum into the high-quality development of the digital economy and has become an important lever for countries to build new competitive advantages. However, owing to the deep commercialization and extensive marketization of algorithm technology and its application scenarios, its negative sides, such as "big data killing" (algorithmic price discrimination against existing customers), inducing users to become addicted to the internet, and encouraging excessive consumption, have deeply impacted the market competition order and the social management order. Among them, algorithmic price discrimination infringes on the interests of a large number of consumers and draws constant complaints from all walks of life.
    Algorithmic price discrimination in this paper refers to digital service platform operators using data mining technology to set higher prices for existing customers; its economic essence is approximately first-degree price discrimination realized through data analysis technology. How to regulate algorithmic price discrimination effectively in the digital economy era is a theoretical and practical problem that needs to be solved. To address the regulatory dilemma of algorithmic price discrimination on digital service platforms from the perspective of government regulation, this paper constructs an evolutionary game model between digital service platforms and consumers under four combinations of government reward and punishment policies, and analyzes the evolutionary paths and the factors influencing the strategic choices of platforms and consumers.
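    A platform-consumer game under dynamic punishment can be sketched with two-population replicator dynamics, where the expected fine scales with the share of discriminating platforms. All payoff parameters below are hypothetical, chosen only to illustrate how a state-dependent penalty can stabilize the system; they are not the paper's calibration:

```python
def simulate(x0, y0, steps=20000, dt=0.01,
             r_extra=2.0, reward=1.0, fine_max=4.0, loss=1.5):
    """Two-population replicator dynamics (hypothetical payoffs).
    x: share of platforms choosing fair pricing; y: share of consumers buying.
    Dynamic punishment: the expected fine grows with the share (1 - x) of
    discriminating platforms, which damps the oscillation of the static case."""
    x, y = x0, y0
    for _ in range(steps):
        fine = fine_max * (1 - x)               # dynamic penalty level
        u_fair = reward + 1.0 * y               # fair platform: reward + sales margin
        u_disc = (1.0 + r_extra) * y - fine     # discriminator: extra margin - fine
        u_buy = 1.0 - loss * (1 - x)            # consumer surplus net of overcharging
        u_not = 0.0
        dx = x * (1 - x) * (u_fair - u_disc)    # replicator equations (Euler step)
        dy = y * (1 - y) * (u_buy - u_not)
        x = min(1.0, max(0.0, x + dt * dx))
        y = min(1.0, max(0.0, y + dt * dy))
    return x, y
```

With these toy parameters the trajectory settles at roughly three quarters of platforms pricing fairly while nearly all consumers keep purchasing, the kind of evolutionarily stable outcome the abstract attributes to dynamic punishment.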
    The results show that no evolutionarily stable strategy exists when the government adopts the static reward and punishment mechanism or the dynamic reward and static punishment mechanism. Adopting the dynamic punishment and static reward mechanism can effectively make up for the shortcomings of the first two mechanisms and reach an evolutionarily stable state, and the dynamic reward and punishment mechanism performs even better in regulating algorithmic price discrimination by digital service platforms. In the final stable strategy combinations under the dynamic punishment and static reward mechanism and the dynamic reward and punishment mechanism, the probability that a digital service platform chooses fair pricing is positively correlated with the capability of data mining technology to predict consumers' reservation willingness to pay, while the probability that consumers choose to purchase is positively correlated with the incentive coefficient and the fine amount, and negatively correlated with that predictive capability. Although this study provides new perspectives and policy recommendations for the regulation of digital platforms at both the theoretical and practical levels, it has some limitations. First, the game model is based on specific assumptions and a simplified reality; future studies can improve the applicability of the model by introducing more realistic factors. Second, the research focuses mainly on the government's static and dynamic reward and punishment policies, without considering other potential market players such as regulators and third-party organizations; future research can explore evolutionary game models in more complex market environments with multi-participant, multi-level strategic interaction.
    Impact of Lead Independent Directors on Stock Price Crash Risk
    LI Weian, ZHOU Ning, LI Ding, ZHANG Xiaofei
    2025, 34(8):  148-153.  DOI: 10.12005/orms.2025.0254
    In order to strengthen the role of independent directors in corporate governance, the "Opinions on Reforming the Independent Directors System of Listed Companies" issued by the General Office of the State Council in April 2023 clearly stated: "A mechanism for special meetings attended entirely by independent directors should be established. Potential major conflicts of interest such as related-party transactions should be subject to prior approval by a special meeting of independent directors before being submitted to the board of directors for review." In practice, the director convening and presiding over this meeting becomes the de facto lead independent director. Notably, 96% of large U.S. listed companies have established this role to mitigate risks such as stock price crashes through enhanced oversight. Similarly, some Chinese firms, such as Zijin Mining, have appointed lead independent directors to improve information transparency and management-board communication, thereby reducing stock price crash risk. Against this background, from the perspective of a focal firm, this study asks: (1)Can establishing a lead independent director reduce stock price crash risk? (2)Through what mechanism do lead independent directors affect stock price crash risk? (3)How does the effect of lead independent directors on stock price crash risk vary across internal and external environments?
    This study uses non-financial A-share listed companies in China from 2010 to 2023 as the sample to empirically test the relationship between lead independent directors and stock price crash risk. We collect data on lead independent directors from listed companies' websites and announcements, and other data from CSMAR. The study finds that establishing a lead independent director reduces agency costs and enhances corporate transparency, thereby reducing stock price crash risk. Heterogeneity analysis shows that the inhibitory effect of lead independent directors on stock price crash risk is more significant for companies with stronger industry competition and digital transformation, higher executive shareholding ratios, and higher institutional investor ownership.
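    The abstract does not state which crash-risk proxy is used; a standard choice in this literature is the negative coefficient of skewness (NCSKEW) of firm-specific weekly returns (Chen, Hong and Stein, 2001), sketched below as one plausible measure:

```python
def ncskew(weekly_returns):
    """Negative coefficient of skewness of firm-specific weekly returns;
    larger values indicate higher stock price crash risk
    (Chen, Hong and Stein, 2001)."""
    n = len(weekly_returns)
    m = sum(weekly_returns) / n
    d = [r - m for r in weekly_returns]          # demeaned returns
    s2 = sum(r ** 2 for r in d)
    s3 = sum(r ** 3 for r in d)
    # Sample-skewness formula with the sign flipped so crashes score high.
    return -(n * (n - 1) ** 1.5 * s3) / ((n - 1) * (n - 2) * s2 ** 1.5)
```

A symmetric return series scores near zero, while a series containing one large negative return (a crash week) scores strongly positive.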
    The contributions of this paper are threefold. First, it advances the literature on board governance and stock price crash risk by foregrounding the governance significance of lead independent directors. Prior research largely centers on board structures, neglecting the pivotal leadership and monitoring functions unique to lead independent directors. By providing empirical evidence, this study establishes their influence on crash risk, thereby demonstrating their governance value and offering theoretical support for institutional design and policy formulation.
    Second, this paper deepens the understanding of the mechanisms through which lead independent directors curtail crash risk. Despite growing recognition of their importance, the micro-level channels of their governance effects remain insufficiently identified. The findings indicate that lead independent directors effectively curb crash risk by reducing agency costs and enhancing information transparency, underscoring their supervisory role in preventing opportunistic managerial behavior.
    Third, this study expands the institutional context of research on lead independent directors. Existing evidence predominantly derives from developed economies, where the role is designed to counter CEO dominance and is accompanied by substantial compensation incentives. In contrast, China's lead independent directors mainly chair independent directors' meetings and maintain independence, and the position is neither mandatory nor financially privileged. By examining their governance effects in an emerging-market setting, this paper provides context-specific evidence and enriches the global understanding of the lead independent director.
    Research on Decarbonization of Rural Energy Structures from Complex Network Perspective: Social Structure and Individual Selection
    CHEN Yalin, MOU Yaqing, DAI Lincheng, WANG Xianjia
    2025, 34(8):  154-159.  DOI: 10.12005/orms.2025.0255
    The transition of rural energy structure towards “decarbonization” is an essential component in achieving the goal of “dual carbon”. Since the era of economic reform and opening up, rural cohabitation culture has undergone significant changes, resulting in two distinct social structures: “traditional rural” and “atomized rural”. These social structures have been internalized to form social norms, which subsequently influence the participation of rural households in the upgrading of energy structure. Currently, there is limited research on the impact of social structures on the upgrading of rural energy structure.
    This study, from the perspective of complex system dynamics, explores the influence of rural households' social structures on their choices of decarbonization behaviors. Specifically, it introduces scale-free and small-world networks to formalize the social network structures of traditional and atomized rural areas, respectively, and constructs the corresponding household behavioral strategies of "imitation" and "self-reflection". Using interactive Markov chains, a dynamic equation for the upgrading of the rural energy structure is established. Furthermore, multi-agent simulation of household behavioral choices reveals the emergent characteristics of rural household behavior: (1)In the traditional rural social structure, the energy structure upgrading process is affected by factors such as the initial support rate, village size, and opinion leaders, rather than the cumulative net income of an individual household. When the initial support rate reaches 45% and there are more than two opinion leaders, collective support for decarbonized energy emerges among rural households. (2)In the atomized rural social structure, support for energy structure upgrading emerges when the cumulative net income of individual households is greater than 0.
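    The two network formalizations mentioned above correspond to standard generative models: preferential attachment for the scale-free "traditional rural" structure (a few highly connected opinion leaders) and a rewired ring lattice for the small-world "atomized rural" structure (mostly local ties with a few long-range links). A standard-library-only sketch, with illustrative parameters rather than the paper's simulation settings:

```python
import random


def barabasi_albert(n, m, seed=0):
    """Scale-free network via preferential attachment (Barabási-Albert)."""
    rng = random.Random(seed)
    edges = set()
    targets = list(range(m))                 # first new node attaches to nodes 0..m-1
    repeated = []                            # node list weighted by current degree
    for new in range(m, n):
        for t in set(targets):
            edges.add((min(new, t), max(new, t)))
            repeated += [new, t]
        targets = set()                      # pick m distinct targets, degree-weighted
        while len(targets) < m:
            targets.add(rng.choice(repeated))
        targets = list(targets)
    return edges


def watts_strogatz(n, k, p, seed=0):
    """Small-world network via ring lattice plus random rewiring (Watts-Strogatz)."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):       # each node links to k/2 right neighbors
            a, b = i, (i + j) % n
            if rng.random() < p:             # rewire this edge to a random node
                b = rng.randrange(n)
                while b == a or (min(a, b), max(a, b)) in edges:
                    b = rng.randrange(n)
            edges.add((min(a, b), max(a, b)))
    return edges


def degrees(edges):
    """Degree of every node appearing in an edge set."""
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg
```

In the scale-free graph a handful of hubs (the opinion leaders) accumulate far more ties than the average node, whereas the small-world graph keeps degrees nearly uniform, matching the paper's contrast between imitation-driven and self-reflection-driven villages.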
    This study deeply investigates the influence of social structures, as the logical foundation for household behavioral choices, on the upgrading of rural energy structure. The following recommendations are put forth: (1)In traditional rural areas, it is advised to consider implementing a graduated subsidy system based on the size of the village, which would influence individuals’ income composition and enhance the level of endorsement among farmers. In atomized rural areas, farmers tend to act rationally, and an energy structure modernization project that brings mutual benefits can foster a higher level of endorsement among farmers. (2)In traditional rural areas, a low initial level of endorsement would impede the convergence of collective behavior towards endorsing the energy structure upgrade. It is crucial to promote the dissemination of knowledge and technology regarding renewable energy, advocate and facilitate sustainable rural development models, establish a solid foundation among the masses, and implement measures to enhance the initial level of endorsement. (3)The role of influential individuals should be leveraged effectively. In traditional rural areas, social networks grounded in kinship and geographical connections are deeply ingrained. Augmenting the number of influential individuals who support the energy structure upgrade will expedite the convergence of regional farmers toward endorsing behavior.
    The research findings clearly identify the incentivizing factors and guiding factors for improving the level of cooperation among households, providing a theoretical basis for optimizing incentive policies for rural energy structure and mobilizing the participation of individuals in social networks. Moreover, this research is of great significance in enhancing the level of rural governance.
    Research on Student Group Consumption Preferences for New Energy Vehicles Based on Online Review Characteristics
    HE Jinmei
    2025, 34(8):  160-166.  DOI: 10.12005/orms.2025.0256
    In recent years, global climate change and environmental issues have received widespread attention. With the promotion and publicity of low-carbon and energy-saving awareness, consumers’ awareness of low-carbon and environmental protection has been strengthened, and their consumption choices for transportation have also changed, showing a certain preference for green products. Traditional gasoline vehicles, which have high energy consumption and pollution, can no longer adapt to the current social development. Consumers are turning their attention to the consumption of new energy vehicles. Governments around the world have introduced policies to support the new energy vehicle industry and promote its rapid development. As of the end of 2021, global sales of electric vehicles reached 6.75 million units, an increase of 108% compared to 2020. This marks the coming of the era of new energy vehicles.
    On September 22, 2020, China proposed the “dual carbon” goal at the 75th United Nations General Assembly, striving to achieve carbon peak by 2030 and carbon neutrality by 2060. The development of new energy vehicles brings new opportunities to achieve this goal. By the end of 2023, China’s production and sales of new energy vehicles had reached 9.587 million units, accounting for over 60% of global sales and leading the world for nine consecutive years. New energy vehicles not only represent the future trend of the automotive industry, but also become a key force in achieving carbon neutrality goals.
    In this transformation, as consumers of the new era, the consumption preferences of the student group have special significance. The student group is the successor of future socialist construction and also the main force of ecological civilization construction. Integrating the concept of green development into ideological and political education in universities is beneficial for students to establish awareness of green, low-carbon, energy-saving, and environmental protection. After receiving certain ideological and political education and professional knowledge education, students have a certain ability to evaluate the consumption of new energy vehicles. They grew up in the information age, have a profound understanding of environmental protection, and are easily influenced by new media. The ideological and political education, innovation education, and socialist core values education in schools have further strengthened their green consumption concept.
    Online comments, as an important channel for consumers to obtain product information, have a direct impact on their purchase intention in terms of quantity and quality. Students lack social experience and sufficient judgment ability, and rely more on online reviews to evaluate products. Positive reviews can enhance purchasing confidence, while negative ones may lead to purchase hesitation or abandonment. Therefore, online comments have become a key factor in the formation of student preferences for new energy vehicle consumption. Meanwhile, the student population is in a critical period of value shaping and personality development, and their consumption preferences are influenced by various factors. Against the backdrop of the current prevalence of green environmental protection concepts, new energy vehicles have become a popular choice among students due to their environmental attributes and innovative technology. To win the preference of new energy vehicle consumers, enterprises need to improve product quality and performance, promote environmental protection concepts, and fulfill social responsibilities.
    This article investigates the impact of online comment features on student preference for new energy vehicle consumption using data from 656 questionnaire surveys. Meanwhile, factor analysis, difference analysis, and hierarchical regression analysis are used to validate the research hypotheses. The research has shown that both the quantity and quality of online comments have a significant positive promoting effect on student preference for new energy vehicle consumption. In addition, based on the ideological and political education and socialist core values education of the student group during their school years, a moderation effect model is established to analyze the positive moderating effect of a series of higher education empowerment factors on “online comments-new energy vehicle consumption preferences”. The research results help to reveal the impact of online comments on the consumption preference of new energy vehicles of student groups, and provide some inspiration for relevant departments to regulate online comments on Internet platforms.
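A moderation effect of the kind tested here can be illustrated by comparing the x-to-y slope in high- and low-moderator subgroups, which is equivalent in sign to a positive interaction term in a hierarchical regression. This is a toy sketch with synthetic data, not the paper's survey analysis.

```python
def slope(x, y):
    """OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def moderation_gap(x, y, moderator):
    """Difference between the x->y slope in the high-moderator and
    low-moderator subgroups; a positive gap indicates positive
    moderation (same sign as the interaction coefficient)."""
    hi = [(a, b) for a, b, m in zip(x, y, moderator) if m]
    lo = [(a, b) for a, b, m in zip(x, y, moderator) if not m]
    return slope(*zip(*hi)) - slope(*zip(*lo))
```

For instance, if the effect of comment quality on preference is three times stronger for the high-education-empowerment subgroup, the gap comes out positive.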
    Rethinking of Policies of Green Technology Innovation:Stochastic Evolutionary Game Approach Based on Moran Process
    MA Ming
    2025, 34(8):  167-172.  DOI: 10.12005/orms.2025.0257
    Green technology, as an emerging environmental protection technology, promotes sustainable development by reducing carbon emissions and environmental pollution, serving as a key element for enterprises to optimize their energy structure, reduce carbon emissions, and enhance competitiveness, and becoming a new engine for high-quality development in China. In reality, on the one hand, decision-making for green technology innovation involves multiple stakeholders with characteristics of multi-subject relationships. The analysis of these relationships can provide a more specific and accurate framework for studying the driving forces of enterprise green technology innovation behavior; on the other hand, green technology innovation, with its characteristics of large investment, high risk, low short-term returns, and dual externalities, exhibits strong uncertainty in its returns. It is noteworthy that, at the present stage, decision-making for enterprise green technology innovation faces a more turbulent internal and external environment, bringing a high degree of uncertainty to the environment for enterprise green innovation.
    Existing literature has extensively discussed the selection of green innovation strategies under multiple subjects using deterministic game models. Deterministic evolutionary game theory is based on bounded rationality, allowing game participants to continuously experiment and learn, making it more realistic than traditional game theory. Building on this, and based on the Moran process, this paper starts from reality. In a context where a finite number of pollution enterprises face highly uncertain random factors, it considers the long-term evolutionary characteristics of pollution enterprises’ green technology innovation behavior, incorporates the interaction between the government and public in green innovation, constructs a stochastic evolutionary model for pollution enterprises’ green technology innovation decision-making, and analyzes, solves, and simulates the impact mechanisms of relevant important variables on the evolutionary dynamics of pollution enterprises’ green technology innovation decisions under different selection intensities. The results show that: (1)When the number of enterprises in the market is small, the driving conditions for green technology innovation strategies are the same under strong and weak selection. However, when the number of enterprises in the market is large, the driving conditions for green technology innovation strategies are more stringent under strong selection.
(2)Under weak selection, the effects of different types of incentives on the selection of green technology innovation strategies vary with different numbers of enterprises: first, when the number of pollution enterprises exceeds a certain level, positive incentives are more likely to influence the selection of green technology innovation strategies compared to negative incentives, and vice versa when the number of pollution enterprises is below a certain level; second, regardless of the number of enterprises, public participation is more likely to have a positive effect on the selection of green technology innovation strategies compared to government actions, but when the number of pollution enterprises is below a certain level, public participation has a greater impact on the selection of green technology innovation strategies. Furthermore, this paper provides policy suggestions for promoting green technology innovation in the market with different numbers of enterprises.
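The Moran process at the core of this model can be sketched as a birth-death simulation estimating the fixation probability of a single green-innovating firm; payoff values and population size below are illustrative, not the paper's calibration.

```python
import random

def moran_fixation(N, a, b, c, d, w, trials=2000, seed=1):
    """Estimate the fixation probability of one A-strategist (green
    innovator) among N firms under selection intensity w.
    2x2 payoffs: A vs A -> a, A vs B -> b, B vs A -> c, B vs B -> d."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        i = 1  # current number of A players
        while 0 < i < N:
            # average payoffs, excluding self-interaction
            pa = (a * (i - 1) + b * (N - i)) / (N - 1)
            pb = (c * i + d * (N - i - 1)) / (N - 1)
            fa, fb = 1 - w + w * pa, 1 - w + w * pb
            # reproduce proportional to fitness, replace uniformly
            if rng.random() < i * fa / (i * fa + (N - i) * fb):
                i += 1 if rng.random() < (N - i) / N else 0
            else:
                i -= 1 if rng.random() < i / N else 0
        fixed += (i == N)
    return fixed / trials
```

Under neutral drift (w = 0) the estimate should be close to 1/N, which is the standard sanity check for this kind of simulation; raising w toward 1 corresponds to the strong-selection regime discussed above.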
    Future research could further incorporate the impact of government market incentives on green technology innovation. Additionally, the influence of digitization and green finance on the strategy selection of pollution enterprises could be considered.
    Modelling Volatility of Financial Assets with Non-stationary Zero Return Process
    LIU Yifei, YANG Aijun, CHEN Lina, LIU Xiaoxing
    2025, 34(8):  173-178.  DOI: 10.12005/orms.2025.0258
    The volatility of financial asset returns is a key variable in financial risk management, so constructing a suitable model to estimate and predict volatility more accurately has important theoretical and practical significance. Zero returns often result from liquidity problems, price dispersion or rounding errors, data problems, market closures and other specific characteristics of financial markets. In order to estimate volatility more accurately, we think it necessary to construct a reasonable model for data containing zero returns, so as to study the impact of zero returns on volatility estimation.
    The causes of the emergence of zero returns have been examined in the literature. The first type of the literature argues that zero returns arise in a continuous time frame because the underlying price transformation process is not observed. The second one argues that zero returns arise naturally because of the discrete nature of price changes. The third one argues that price changes beyond zero returns are continuous. The fourth one argues that as long as the residuals of the GARCH model can be zero, the GARCH model can be used to study the zero returns problem. On the basis of the fourth one, this paper studies the problem of modelling financial data containing non-stationary zero return processes.
    While there is literature on the causes and modelling of zero return generation, little attention has been paid to the fact that the zero returns process is a non-stationary case. Zero returns processes in practice are usually non-stationary, so the probability of zero returns may be time-varying or periodic. For inter-day data, a downward (upward) trend in the zero probability may be due to an upward (downward) trend in liquidity or an upward (downward) trend in the level of stock prices. For intra-day data, the zero probability tends to be non-stationary and cyclical: it will be lower when liquidity is low and higher when liquidity is high. It is therefore necessary to extend the existing GARCH family of models for financial data containing non-stationary zero returns processes.
    This paper proposes a zero-inflated GARCH model to model financial data containing a non-stationary zero return process, where the zero probability may be trending or cyclical, or both. Then, an improved QMLE method, i.e. 0-adj QMLE method, is proposed for parameter estimation. Then, six major global exchange rate markets are selected as the research object, and the daily returns data of the exchange rate market and the 5-minute high-frequency returns data of the US dollar to offshore RMB exchange rate are modelled using the zero-inflated GARCH model in this paper, and the two methods of 0-adj QMLE and standard QMLE are applied to estimate the parameters and make a comparative analysis of the estimation results; meanwhile, the impact of zero returns on the volatility of the exchange rate market is discussed.
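The data-generating process of such a zero-inflated GARCH model can be sketched as a GARCH(1,1) recursion in which each observation is replaced by zero with a time-varying probability; the trending form of p_t and all parameter values below are illustrative choices, not the paper's estimates.

```python
import math
import random

def simulate_zi_garch(T, omega=0.05, alpha=0.1, beta=0.85, seed=0):
    """Simulate a GARCH(1,1) return series with a non-stationary zero
    process: the zero probability p_t trends downward over the sample
    (illustrative), and a 'zero day' replaces the return with 0."""
    rng = random.Random(seed)
    returns = []
    sig2 = omega / (1 - alpha - beta)   # start at unconditional variance
    prev_r = 0.0
    for t in range(T):
        sig2 = omega + alpha * prev_r ** 2 + beta * sig2
        p_t = 0.3 * (1 - t / T)         # trending zero probability
        if rng.random() < p_t:
            prev_r = 0.0                # zero return (liquidity, rounding, ...)
        else:
            prev_r = math.sqrt(sig2) * rng.gauss(0, 1)
        returns.append(prev_r)
    return returns
```

Because p_t trends downward, zeros cluster early in the sample; a periodic p_t would instead mimic the intraday liquidity cycle described above.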
    The main findings are: (1)The zero probability is trending in the daily data and cyclical in the high frequency data. (2)The zero-inflated GARCH model is a better fit to exchange rate data containing a non-stationary zero return process. (3)In the daily data, the previous day’s zero returns have a significant negative impact on next-day volatility in the USD/CNY market, and a significant positive impact on next-day volatility in the USD/CNH market. (4)For high frequency data, zero returns in the first five minutes tend to significantly increase volatility in the second five minutes.
    Research on Online Strategy of VRPSDP for Two Types of Goods Awaiting Retrieval with Different Importance and Unpredictable Quantity
    SU Bing, SHI Xiaoxuan, ZHANG Meng, JI Hao, XU Yaning, LIN Guohui
    2025, 34(8):  179-184.  DOI: 10.12005/orms.2025.0259
    With the changing consumer attitudes of residents and the popularization of online shopping, the express industry has achieved vigorous development, and its business volume continues to rise. In the process of express service, the company not only dispatches trucks to deliver goods to specific demand points, but also undertakes the collection of goods awaiting retrieval. The quantity and variety of goods awaiting retrieval at each demand point vary. Besides, consumers may issue new retrieval orders or cancel existing ones during the process of delivery and pickup, resulting in an inability to accurately anticipate the quantity of goods awaiting retrieval at each demand point before the truck reaches it. The research on the vehicle routing problem with simultaneous delivery and pickup (VRPSDP) mainly focuses on two scenarios: demands that are completely known and demands that are randomized. There is comparatively less research on the scenario of unpredictable demands, and multiple types of goods have not been considered. To address the above gaps, this study explores the VRPSDP with two types of goods awaiting retrieval with different importance and unpredictable quantity (VRPSDP-2TDUQ). This investigation holds significant implications for enhancing the efficiency of delivery and pickup operations for express companies.
    In VRPSDP-2TDUQ, the quantities of goods awaiting retrieval at each demand point are unpredictable and appear sequentially, which necessitates real-time decision-making, with each decision affecting the subsequent steps in the entire process. The uncertainty of demand information and the sequential nature of decisions indicate that this problem is an online problem, which can be addressed by developing an online strategy. Based on the theory and methods of online problems and competitive strategies, this paper develops an online strategy, denoted as Strategy T, that addresses VRPSDP-2TDUQ by maximizing the total importance of retrieved goods as much as possible, given the characteristics of two types of goods with unpredictable quantities to be retrieved. Based on the principle that when the truck arrives at a particular demand point it should have the maximum possible space to accommodate a greater quantity of goods, Strategy T determines the service sequence before the truck departs, serving demand points in descending order of delivery demand. When the truck reaches any demand point, real-time decisions are made on how to retrieve the two types of goods, with an emphasis on retrieving as many units of the higher-importance goods as possible.
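The two ingredients of Strategy T described above, a route fixed in descending order of delivery demand and a greedy higher-importance-first pickup rule, can be sketched for a single truck as follows. This is a simplified illustration; the function name, the tuple encoding of demand points, and the capacity handling are assumptions, not the paper's formulation.

```python
def strategy_t(points, capacity, w_high, w_low):
    """Simplified one-truck sketch of Strategy T.
    points: list of (delivery_demand, high_qty, low_qty); pickup
    quantities are only revealed on arrival, so the route is fixed
    up front by descending delivery demand and pickups are decided
    greedily, higher-importance goods first."""
    route = sorted(range(len(points)), key=lambda i: -points[i][0])
    load = sum(p[0] for p in points)       # truck departs fully loaded
    assert load <= capacity, "deliveries exceed capacity"
    total_importance = 0.0
    for i in route:
        d, high, low = points[i]
        load -= d                          # drop this point's deliveries
        free = capacity - load
        take_h = min(high, free)           # higher-importance goods first
        take_l = min(low, free - take_h)   # then lower-importance goods
        load += take_h + take_l
        total_importance += w_high * take_h + w_low * take_l
    return total_importance
```

Serving large-delivery points first frees cargo space as early as possible, which is exactly the maximum-space principle stated above.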
    The competition ratio of Strategy T is analyzed, and the impact of parameter variations on the competition ratio is discussed. The results indicate that the smaller the difference in importance between the two types of goods, the greater the number of demand points and the larger the cargo lower limit for the lower-importance goods, the better the performance of Strategy T. Strategy T is subsequently validated by using a road network of 30 express delivery sites in Xi’an city, and the results suggest that the execution performance of Strategy T is relatively good. A further analysis of the variation in the unit importance ratio between the two types of goods reveals that as the ratio decreases, the total importance of retrieved goods diminishes. However, the competition ratio decreases simultaneously, which means the deviation between the online solution and the offline optimal solution decreases. This aligns with the conclusions drawn from the theoretical analysis.
    Future research can be approached from the following three aspects: First, this study focuses on the scenario where the quantities of two types of goods awaiting retrieval are unpredictable; exploring scenarios where the quantities of more than two types of goods are unpredictable is worthwhile. Second, this study has not yet considered the scenario where the quantities of goods are partially known. Further research is required to address this scenario. Third, designing strategies with better competition ratios is also a direction for future research.
    A Reinforcement Learning Approach for Joint Optimization ofContinuous Berth Allocation and Quay Crane Scheduling
    WANG Ling, WANG Yu, LIANG Chengji
    2025, 34(8):  185-191.  DOI: 10.12005/orms.2025.0260
    Container ports are important maritime hubs that connect trade between countries. In daily operations, ports need to allocate limited berth and quay crane resources to ships within a planning period based on multiple factors such as arrival time, ship type, workload, and departure plans, to ensure that all ships complete their operations as soon as possible. The decisions of berth allocation and quay crane scheduling usually rely on the intuition and experience of port staff, which can easily lead to a prolonged vessel stay in port or a waste of limited resources. Moreover, with the trend of larger ships, increasing port throughput, and the gradual popularization of intelligent equipment and systems, a large amount of operational data has been accumulated during daily operations. More scientific and intelligent decision-making methods are urgently required for berth and crane scheduling to further improve port operation efficiency and resource utilization, especially in complex large-scale dynamic environments.
    The problem of port resource allocation and equipment scheduling has attracted the attention of many scholars, but existing research on berth allocation usually discretizes the coastline into multiple berths, which cannot truly depict the actual coastline. Existing research on crane scheduling mostly considers static task sets and fails to meet the real-time scheduling needs of ports in complex environments. At present, many scholars have proposed various solutions to berth allocation and crane scheduling problems, but most of them consider the two related decisions separately, and those using machine learning techniques only optimize the parameters of traditional algorithms without taking full advantage of the adaptive and efficient characteristics of reinforcement learning.
    In order to meet the needs of intelligent decision-making for port berth and crane scheduling in a large-scale complex dynamic environment, this paper considers a joint optimization problem of continuous berth allocation and quay crane dynamic scheduling. The problem is considered to be a sequencing one and is described as a Markov Decision Process (MDP) with carefully designed state space, action space, and reward functions. An efficient reinforcement learning method is proposed by combining an A2C (Advantage Actor-Critic) neural network with Proximal Policy Optimization (PPO).
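The PPO policy update referred to above is built on the clipped surrogate objective; the policy part of that objective can be sketched as follows (the value loss and entropy bonus of the full method are omitted, and `eps=0.2` is the common default, not necessarily the paper's setting).

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective used by PPO (policy part only).
    Returns the loss to minimise: the negative mean of
    min(r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t),
    where r_t = pi_new(a|s) / pi_old(a|s)."""
    total = 0.0
    for ln, lo, a in zip(logp_new, logp_old, advantages):
        ratio = math.exp(ln - lo)                    # probability ratio r_t
        clipped = max(1 - eps, min(1 + eps, ratio))  # clip to [1-eps, 1+eps]
        total += min(ratio * a, clipped * a)         # pessimistic bound
    return -total / len(advantages)
```

The clipping keeps each policy update close to the behavior policy, which is what makes PPO stable enough for long training runs like the 60-thousand-episode process described below.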
    The PPO-based method is trained and tested based on real data from a port in Shanghai, China. Each episode of the training process contains 72 steps of interaction with the environment, and the agent updates the model every 100 steps. During the training process, the stage training results are saved every 1000 episodes. The training process stabilizes and converges after 60 thousand episodes. Based on test datasets of different scales, the proposed method can quickly generate optimal decisions for continuous berth allocation and quay crane dynamic scheduling. To show the performance of the proposed method, comparative experiments with the DDPG algorithm and three classic heuristics are conducted. Compared with DDPG, the proposed method can reduce the total working time of the ships by 3 to 10 hours, and has better convergence performance and efficiency in the training process. For 30 ships, compared with the genetic algorithm, the PSO algorithm, and a first-come-first-served heuristic, the proposed method reduces the total working time objective by 15.7%, 20.3%, and 11%, respectively, and reduces decision time by about 93% compared to the two metaheuristics.
    The PPO-based method proposed in this article can fully utilize historical data, obtain and update the best strategy through interactive training in a dynamic environment, and make effective decisions for continuous berth allocation and quay crane dynamic scheduling. It has more advantages in decision-making efficiency and objective optimization compared to traditional learning methods and is thus more in line with the current intelligent development requirements of ports. Future research could consider using reinforcement learning for multi-level equipment joint scheduling, and the cooperative game of multiple agents.
    Evaluation of Urban Carrying Capacity in Natech Based on Cloud Bayesian Model
    WANG Qiuhan, PU Xujin
    2025, 34(8):  192-198.  DOI: 10.12005/orms.2025.0261
    The rapid changes in global climate have heightened the frequency and intensity of natural disasters, posing a significant threat to industrial equipment exposed to such events. This vulnerability can lead to secondary industrial accidents, known as natural hazard triggered technological accidents (Natech), whose occurrence is steadily increasing. Accurately assessing the urban carrying capacity of industrial cities is a fundamental challenge in exploring the inherent development of cities. This assessment aims to enhance urban carrying capacity while maintaining the city’s intrinsic advantages. Currently, existing assessment methods for Natech often overlook the dynamic flow of risks within the Natech disaster chain, resulting in a lack of clarity in understanding the interactive relationships between different disaster events. Furthermore, research on carrying capacity primarily focuses on equipment and building structures, with limited emphasis on the urban system’s carrying capacity. Additionally, methods for evaluating urban or regional carrying capacity lack a systematic approach, with shortcomings such as imperfect calculations of the relevance of evaluation indicators, subjective weightings, and challenges in accurately quantifying multiple indicators. In this context, improving the evaluation model for urban carrying capacity in industrial cities and exploring key factors hold significant theoretical importance and practical value for the economic development and promotion of high-quality industrial growth in these cities.
    In response to the prevailing issues of subjectivity, data scarcity, and uncertainties in indicator correlations within current evaluation models, this paper proposes a cloud Bayesian network approach integrating the coefficient of variation method. Utilizing a cloud generator to generate cloud model data, the model calculates the relative weights of unsupported nodes and obtains a conditional probability table. By objectively discretizing the states of each input node into three categories based on existing data within the Bayesian network, this model overcomes the subjective reliance on expert opinions prevalent in most urban carrying capacity studies. Simultaneously, the transformation of the fuzzy description of region numbers into a cloud map with specific numerical values addresses the challenge of lacking data support for intermediate nodes in the Bayesian network, completing the qualitative-to-quantitative conversion. Furthermore, by utilizing the cloud feature values of input nodes and their relative weights, the cloud feature values for relative nodes can be determined, establishing relative quantitative relationships based on weights. Finally, we select the Pearl River Delta industrial base as the sample region. Three key indicators and twelve four-level assessment indicators are chosen to establish an index system for urban carrying capacity in the industrial area. The analysis spans 2009 to 2020, evaluating both the overall urban carrying capacity and individual indicators. The results indicate that the urban carrying capacity in the Pearl River Delta is consistently lower than the overall situation in Guangdong province. Moreover, the urban development in the Pearl River Delta demonstrates a significant dependence on population carrying capacity.
The inherent social development model in the region proves ineffective in coping with the rapid growth of industry, highlighting an urgent need for improvements in waste disposal practices under large-scale industrial production. Furthermore, substantial disparities in the urban carrying capacity within the Pearl River Delta are observed, revealing instances where carrying capacity contradicts urban development trends.
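The coefficient of variation weighting used in this approach can be sketched in a few lines: each indicator's weight is proportional to its dispersion relative to its mean, so indicators that discriminate more strongly between regions receive more weight. The function name and the toy data are illustrative.

```python
import statistics

def cv_weights(indicator_columns):
    """Coefficient-of-variation weights for a list of indicator columns:
    weight_i proportional to pstdev(col_i) / mean(col_i), normalised to
    sum to 1. Assumes strictly positive indicator values."""
    cvs = [statistics.pstdev(col) / statistics.mean(col)
           for col in indicator_columns]
    total = sum(cvs)
    return [c / total for c in cvs]
```

A constant indicator gets zero weight, which is the objective counterpart of dropping an indicator that an expert panel would judge uninformative.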
    Study on Carbon Emission Reduction Effect of New Energy Demonstration City Construction
    HAN Xianfeng, ZHENG Zhuoji, LI Boxin
    2025, 34(8):  199-205.  DOI: 10.12005/orms.2025.0262
    The new energy demonstration city pilot is an important measure for the government to implement the low-carbon development strategy. It helps to improve the urban energy structure, encourage the development of emerging energy industries and drive the construction of new energy systems. It is of great significance for achieving carbon peak and carbon neutrality. However, research on the impact of new energy demonstration city construction on carbon emission intensity is still rare. In 2014, the National Energy Administration announced the Notice on Publishing the List of New Energy Demonstration Cities (Industrial Parks) (First Batch), which clearly identified 81 cities such as Shenzhen and 8 industrial parks as the first batch of new energy demonstration cities and industrial parks. It is pointed out that the construction of new energy demonstration cities should aim at promoting sustainable development, constantly innovating renewable energy development methods and serving the green, low-carbon and sustainable development of the economy.
    This paper takes the implementation of the new energy demonstration city pilot in 2014 as a quasi-natural experiment. Based on the panel data of 278 prefecture-level cities in China from 2006 to 2020, this paper systematically examines the carbon emission reduction effect, spatial spillover effect and dynamic heterogeneity mechanism of new energy demonstration city construction by using the difference-in-differences method, spatial econometric model and triple difference model. The research data of this paper mainly comes from China Statistical Yearbook, China City Statistical Yearbook and so on. It is worth noting that the carbon emission scale data is calculated by referring to the China Carbon Emission Accounting Database, and the energy consumption data is calculated by using the DMSP/OLS nighttime light data of each prefecture-level city.
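The canonical 2x2 difference-in-differences comparison at the heart of this quasi-natural experiment can be sketched on group means; the paper's actual estimation uses regressions with controls and fixed effects, and the city identifiers and numbers below are purely hypothetical.

```python
def did_estimate(panel, treated, post):
    """Canonical 2x2 difference-in-differences on group means:
    (treated_post - treated_pre) - (control_post - control_pre).
    panel: list of (city, year, outcome); treated: set of city ids;
    post: predicate on year (here, the pilot announced in 2014)."""
    def mean(rows):
        vals = [y for (_, _, y) in rows]
        return sum(vals) / len(vals)
    tp = mean([r for r in panel if r[0] in treated and post(r[1])])
    t0 = mean([r for r in panel if r[0] in treated and not post(r[1])])
    cp = mean([r for r in panel if r[0] not in treated and post(r[1])])
    c0 = mean([r for r in panel if r[0] not in treated and not post(r[1])])
    return (tp - t0) - (cp - c0)
```

A negative estimate on carbon emission intensity would correspond to the inhibiting effect reported below; the parallel trend and placebo tests mentioned there are what justify reading this difference causally.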
    The study finds that the construction of new energy demonstration cities can effectively inhibit urban carbon emission intensity. This conclusion has successfully passed a series of robustness and endogeneity tests, such as the parallel trend test, placebo test, instrumental variable method test, PSM-DID test, changing the research window period, eliminating the samples of autonomous regions, excluding other policies, eliminating the interference of outliers and controlling the province-time fixed effect. The analysis of the spatial spillover effect shows that the new energy demonstration city can not only effectively drive local carbon emission reduction, but also effectively suppress the carbon emission intensity of neighboring areas. The policy has an obvious neighboring demonstration effect. The mechanism test shows that the construction of new energy demonstration cities indirectly achieves urban carbon emission reduction mainly through multi-dimensional transmission paths: promoting the development of innovation and entrepreneurship, attracting foreign investment, strengthening government support, increasing investment in science and technology, enhancing energy efficiency and improving the energy structure. Heterogeneity analysis shows that in areas with higher government environmental protection attention and better market-oriented development, the carbon emission reduction effect of new energy demonstration city construction is more obvious. The conclusions of this paper provide some policy implications for the Chinese government to further deepen the pilot of new energy demonstration cities, further promote the energy revolution and achieve the “dual carbon” goal.
    Metro Ridership Prediction Model by GCN and Improved Informer
    CHEN Wanzhi, CUI Daiyu
    2025, 34(8):  206-211.  DOI: 10.12005/orms.2025.0263
    Abstract ( )   PDF (1124KB) ( )  
    References | Related Articles | Metrics
    Given its advantages of safety, convenience, punctuality and comfort, the subway has become a primary mode of transportation in people’s daily lives. As cities develop, passenger flow pressure on subway systems keeps growing. Oversaturated passenger flow, caused by various factors, creates safety hazards such as station congestion, reduced operational efficiency, and even crowd crush incidents. These issues significantly affect passenger safety and the operational management of transportation departments. Accurate passenger flow prediction is therefore crucial for supporting reasonable travel arrangements and informed decision-making by subway operation and management departments.
    Passenger flow data exhibits temporal dependence, strong spatial correlations, and clear periodic patterns. However, existing prediction models struggle to balance long-term and short-term dependencies while handling spatial-temporal correlations and periodicity. Consequently, this paper develops a combined prediction model for subway passenger flow that integrates temporal and spatial information while accounting for periodic characteristics. Theoretically, the study aims to predict station-level passenger flows more accurately than current models. Practically, the findings can support governmental decisions on subway construction, assist transportation managers with resource allocation, minimize resource wastage, and provide the public with a reference for travel planning.
    First, existing passenger flow forecasting techniques are reviewed, their advantages and disadvantages are analyzed, and the insights gained inform the design of our research plan. Next, the card-swiping records of the Hangzhou Metro and the adjacency matrix of the Hangzhou Metro network, provided by the Hangzhou Municipal Public Security Bureau, are selected as the dataset for this study. Various factors affecting passenger flow are identified, and relevant data are collected from three aspects (travel mode, weather conditions, and historical data at corresponding time points) to construct features. Finally, to mitigate any negative impact of the constructed features on the model’s predictions, LightGBM importance analysis and Pearson correlation analysis are employed for feature selection, thereby enhancing the model’s generalization ability.
    In this paper, we propose a combined model, GCN-DFInformer, for subway passenger flow prediction, which integrates a graph convolutional network (GCN) and an improved Informer model (DCC-FECAM-Informer, DFInformer) in parallel. First, a dilated causal convolutional self-attention mechanism is introduced to compensate for the Informer’s insensitivity to local information and to enhance the model’s ability to capture trend changes and local fluctuations in passenger flow over short periods. Second, given the pronounced periodicity and seasonality of subway passenger flow data, the frequency enhanced channel attention mechanism (FECAM) is introduced to improve the model’s ability to identify and exploit the inherent features of the data series. Finally, prediction is performed with the parallel graph convolutional network to fuse temporal and spatial information. Experiments comparing the model’s predictions with actual values demonstrate that GCN-DFInformer exhibits strong predictive performance and robustness. On the test dataset, the proposed model yields smaller errors and a higher coefficient of determination (R2) than competing models, validating its superior predictive performance and its effectiveness in improving the accuracy of subway passenger flow prediction.
    The proposed model significantly enhances prediction accuracy by integrating the Informer, dilated causal convolutional self-attention mechanism, FECAM module, and graph convolutional network. This model leverages the strengths of both GCN and DFInformer, and their parallel structure helps maintain the independence of spatial and temporal information. In the next phase, additional influencing factors related to the station will be incorporated, such as the functional area where the station is located, and other auxiliary information like the subway operation timetable, to further improve the model’s prediction accuracy.
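The spatial half of the proposed model relies on graph convolution over the metro network's adjacency matrix. As a rough sketch of what a single GCN propagation step does, here is the standard normalized form H' = ReLU(D^(-1/2)(A+I)D^(-1/2)HW) on a toy four-station line; this is the textbook GCN layer, not the paper's exact architecture, and the features and weights are random placeholders.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)       # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy 4-station line graph 0-1-2-3 (stands in for the metro adjacency matrix)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 8))     # per-station features (e.g. recent inflows)
W = rng.normal(size=(8, 16))    # learnable weights (random here)
out = gcn_layer(A, H, W)
print(out.shape)
```

Each station's output row mixes its own features with those of adjacent stations, which is how the model injects spatial correlation before fusing with the DFInformer's temporal branch.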
    Management Science
    Optimal Staffing for Online Education Operation with Free Trials
    LI Jing, ZHU Wanshan, ZHUO Ao
    2025, 34(8):  212-217.  DOI: 10.12005/orms.2025.0264
    Abstract ( )   PDF (1000KB) ( )  
    References | Related Articles | Metrics
    Online education has become an important form of extracurricular training, driven by the impact of the COVID-19 pandemic and the development of Internet technology. In this environment, many companies selling online courses have arisen in China, and one business model has become the industry norm: customers can experience a free course before deciding whether to buy it. In this try-before-you-pay model, operational costs are mostly workers’ salaries. The workers involved in selling an online education product are usually of two types: the sales force, responsible for attracting potential customers to the free trial, and the service staff, who teach courses to both free-trial and paid customers. Because customers try the courses before they purchase, their purchase decision is affected by their assessment of the service staff’s teaching quality. The number of workers of each type determines operational income (i.e., the number of paid customers) and operational costs (i.e., workers’ salaries). The free trial directly affects the required numbers of sales force and service staff, and hence the profitability of a company’s operations. We study the optimal staffing problem for online education companies because the industry has been greatly affected by stricter regulation of K12 education institutions, making companies more concerned with operating costs and profits.
    The optimal staffing problem is subject to two important constraints: a financial budget constraint, and a capacity constraint, that is, an upper limit on the number of simultaneous online users the live delivery platform can accommodate. Our study investigates how the free trial duration affects the optimal staffing of sales and service operations, the shadow prices of the capacity and budget constraints, and the choice of the optimal sales channel under such constraints. Specifically, we answer three research questions. First, how does the length of the free trial affect the optimal allocation of sales force and service staff? Second, when more capital is available, should the company invest it in hiring more workers, or in buying more of the live delivery platform’s capacity to accommodate more trainees at the same time? Third, when faced with multiple channels, how should online education companies select efficient promotion channels?
    This paper develops a dynamic system model and provides the optimal staffing solution by characterizing the system equilibrium and applying linear programming theory. We find that as the free trial duration increases, the optimal number of service staff weakly increases (is non-decreasing) while the optimal number of sales staff weakly decreases (is non-increasing); moreover, the shadow prices of the capacity and budget constraints decrease. We also study a case with multiple sales channels and find that the effect of free trial duration is consistent with the single-channel case. The findings are not limited to online education: they also provide guidance for the operation and management of other service products with free-trial features. In particular, the model can offer insights into the staffing and marketing channel selection of other free-trial services, helping such companies allocate staff reasonably and choose the optimal marketing channel. Future studies may consider the impact of factors such as the trial length on the conversion rate, or introduce competitive scenarios. Customer choice and uncertainty in the online course’s value to customers could also be taken into account in further research.
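The staffing trade-off described above (leads generated by the sales force versus teaching capacity of the service staff, under budget and platform-concurrency constraints) can be sketched as a small integer search. The profit model and every parameter value below are illustrative assumptions, not the paper's dynamic system model or calibration.

```python
import itertools

def optimal_staffing(budget, capacity, trial_rate, class_size, conv, price,
                     wage_sales, wage_service, max_staff=200):
    """Brute-force search over integer staffing levels (x sales, y service).
    Paid conversions are limited by both the trial leads generated and the
    teaching capacity; trial users must fit on the live platform."""
    best = (0.0, 0, 0)
    for x, y in itertools.product(range(max_staff + 1), repeat=2):
        if wage_sales * x + wage_service * y > budget:   # budget constraint
            continue
        if trial_rate * x > capacity:                    # concurrency cap
            continue
        paid = conv * min(trial_rate * x, class_size * y)
        profit = price * paid - wage_sales * x - wage_service * y
        if profit > best[0]:
            best = (profit, x, y)
    return best

profit, sales, service = optimal_staffing(
    budget=10000, capacity=500, trial_rate=20, class_size=30,
    conv=0.2, price=200, wage_sales=300, wage_service=400)
print(round(profit, 1))
```

With these numbers the budget constraint binds and the optimum balances lead generation against teaching capacity; the abstract's shadow-price analysis corresponds to how this optimum shifts as `budget` or `capacity` is relaxed.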
    E-commerce Platform Operation Strategy under Production Diseconomies
    JIANG Tanfei, SHI Chunlai, XIE Yongping, NIE Jiajia
    2025, 34(8):  218-225.  DOI: 10.12005/orms.2025.0265
    Asbtract ( )   PDF (1179KB) ( )  
    References | Related Articles | Metrics
    With the rapid development of the platform economy, a novel e-commerce channel has appeared: the marketplace, in which manufacturers can sell products to consumers directly through the platform (the platform channel) besides the traditional reseller channel (the indirect one), paying a commission fee to e-commerce platforms such as Amazon, JD.com and Flipkart. In reality, however, not all e-commerce platforms introduce a marketplace; platforms such as Everlane, Vancl, and Jumei Youpin continue to serve solely as retailers. The introduction of the marketplace has been a subject of interest for researchers and industry experts. Previous research has revealed two distinct effects: the revenue effect and the encroachment effect. Introducing the marketplace expands the platform channel and thereby increases overall product demand, from which the e-commerce platform earns commission fees; this is the revenue effect. However, the marketplace inevitably intensifies competition between the platform channel and the retail channel, potentially encroaching on retail-channel demand and diminishing retail-channel profits; this is the encroachment effect. In particular, under manufacturer production diseconomies, an increase in product sales rapidly escalates the manufacturer’s production costs, making it possible to raise the wholesale price, which exacerbates the erosion of the e-commerce platform’s retail profits and may alter its operation strategy.
    It is essential to note that while these effects are widely recognized, existing research has often overlooked the impact of production diseconomies. This represents a critical gap in the current body of knowledge, and addressing it is a key objective of this research. This paper examines how e-commerce platforms make operational decisions under production diseconomies. Motivated by these observations, we raise the key questions of whether, and under what conditions, the marketplace channel should be introduced in addition to the reseller channel. We employ a Stackelberg game in a two-level supply chain model consisting of a manufacturer and an e-commerce platform to explore the platform’s channel decisions under production diseconomies.
    Backward induction is employed to address this question: we analyze how the e-commerce platform’s profits change when the marketplace is introduced or not, thereby deriving the platform’s optimal channel operation strategy. Our research reveals the following key findings. (1) The manufacturer always has an incentive to join the marketplace channel. (2) For e-commerce platforms, some counterintuitive results emerge: whether the platform introduces the marketplace channel depends on the manufacturer’s production cost and the commission fee. Specifically, when manufacturers have fixed unit production costs, the decision depends on the commission rate: a relatively high commission rate makes introducing the marketplace profitable for the platform, while a low rate makes it unattractive. When manufacturers face production diseconomies, however, the decision depends not only on the commission rate but also on the degree of production diseconomies. When the degree of production diseconomies is large, introducing the marketplace reduces the platform’s profits, making the platform less inclined to do so. When production diseconomies are limited, the conclusions align with the fixed-unit-cost case: introducing the marketplace reduces platform profits under a low commission rate and enhances them under a high one. (3) From the supply chain perspective, introducing the marketplace consistently increases the profitability of the entire supply chain.
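The backward-induction logic can be illustrated with a toy reseller-only Stackelberg game: the platform prices last, and the manufacturer sets the wholesale price anticipating that response. The linear demand q = a - p and the quadratic cost cq + kq^2 (the diseconomy term) are illustrative assumptions, not the paper's model.

```python
import numpy as np

def platform_price(w, a):
    """Platform best response in the reseller channel: max (p - w)(a - p)."""
    return (a + w) / 2.0

def optimal_wholesale(a, c, k):
    """Manufacturer moves first, anticipating the platform's repricing.
    Cost c*q + k*q**2 captures production diseconomies (k > 0)."""
    grid = np.linspace(c, a, 100001)           # candidate wholesale prices
    q = (a - grid) / 2.0                       # demand after platform reprices
    profit = (grid - c) * q - k * q ** 2
    return grid[np.argmax(profit)]

a, c = 10.0, 2.0
w_no_dis = optimal_wholesale(a, c, k=0.0)      # constant unit cost
w_dis = optimal_wholesale(a, c, k=1.0)         # strong diseconomies
print(round(w_no_dis, 2), round(w_dis, 2))
```

In this toy model the optimal wholesale price rises with the diseconomy parameter k, consistent with the abstract's observation that diseconomies let the manufacturer raise the wholesale price and squeeze the platform's retail margin.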
    The conclusions drawn in this paper not only elucidate the varying operational strategies of e-commerce platforms, but also provide a certain scientific foundation for optimal decision-making of both e-commerce platforms and manufacturers. Nevertheless, the paper does not consider the impact of uncertain factors such as channel spillover effects and consumer preferences on the operational strategies of e-commerce platforms. Additionally, further research avenues include investigating the choices of operational models for e-commerce platforms in scenarios where manufacturers operate through multiple sales channels simultaneously and in the context of carbon cap-and-trade.
    Online Retailer’s Research When Introducing Showroom Considering Consumers’ Opportunistic and Anticipated Regret Behavior under Return Guarantee
    CHEN Sijia, WANG Hua, ZHAO Na
    2025, 34(8):  226-232.  DOI: 10.12005/orms.2025.0266
    Abstract ( )   PDF (1391KB) ( )  
    References | Related Articles | Metrics
    In recent years, online shopping has been favored by more and more consumers. However, consumers cannot physically examine a product before buying online, and may discover only after receiving it that the product is unsuitable. For experiential products such as clothing in particular, consumers can tell whether a product meets their expectations only through actual contact. To some extent this reduces consumers’ willingness to purchase. To boost consumer confidence and stimulate purchases, online retailers led by JD have, on top of the “no reason for return” guarantee, introduced a full-refund return policy that allows consumers to return products within a certain period and receive a full refund.
    With intensifying market competition and faster product updates, online retailers often run discount promotions to attract consumers, and advances in information technology have intensified consumers’ strategic purchasing behavior. Under a full-refund return guarantee, when the return cost is negligible, the lenient policy may breed opportunistic behavior: a consumer buys the product at full price, learns its value, returns it, and waits until the discount period to buy it again at a low price, which reduces online retailers’ profits to some extent. In a two-stage sales environment, consumers’ strategic choice of purchase timing is also affected by high-price regret and stockout regret, and such anticipated regret shapes when they choose to buy. Therefore, when consumer speculation is also considered, how will anticipated regret behavior affect consumers’ purchase and return decisions, and how will it affect online retailers?
    In view of this kind of deliberate purchase-and-return speculation, and considering consumers’ speculation and anticipated regret under the return guarantee, this paper examines how introducing a showroom mechanism influences consumers’ purchase decisions, and then derives retailers’ pricing strategies and operational decisions according to consumers’ different maximum reserve prices for products.
    The first part divides strategic consumers into two types: (1) opportunistic consumers, who compare the economic utility of keeping the product with that of an opportunistic return at the time of purchase; and (2) opportunistic regret consumers, whose purchase is affected not only by this comparison of economic utility but also by the psychological utility of anticipated regret. We consider the impact of anticipated regret on purchase timing.
    The second part calculates, based on consumers’ purchasing behavior, the reserve prices of opportunistic consumers and opportunistic regret consumers under different purchasing situations, compares these reserve prices according to the product matching rate and the degree of regret, distinguishes high-value and low-value consumers, and derives the online retailer’s pricing strategy.
    The third part uses the rational expectations hypothesis to analyze the equilibrium between the retailer and consumers, builds the online retailer’s profit model under different purchase situations, and solves for the optimal retail price, optimal order quantity, and optimal expected profit under different pricing strategies through numerical analysis.
    Based on the above research, the following conclusions can be drawn: (1) Although the return guarantee may breed deliberate purchase-and-return opportunism, the showroom mechanism can eliminate such behavior to a certain extent. (2) Consumers choose to visit the physical showroom only when the inconvenience cost is low. (3) When strategic consumers behave opportunistically, the purchase decision preferences of opportunistic regret consumers and opportunistic consumers can diverge. (4) A higher retail price is not always better: the retailer’s optimal order quantity and optimal profit under the penetration pricing strategy are not always better than under the skimming pricing strategy, and the product matching rate and consumer proportions directly affect the retailer’s price setting and choice of pricing strategy.
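The reserve-price comparison in the second part can be illustrated with a stylized indifference calculation: a consumer either buys now at full price (incurring anticipated high-price regret if a discount follows) or waits for a discount that remains available only with some probability (incurring stockout regret otherwise). The linear utility specification below is an assumption for illustration, not the paper's model.

```python
def reserve_price(p1, p2, s, beta, gamma):
    """Valuation at which a consumer is indifferent between buying now at p1,
    with anticipated high-price regret weight beta, and waiting for discount
    price p2, available only with probability s (stockout regret weight gamma).
    Solves v - p1 - beta*(p1 - p2) = s*(v - p2) - (1 - s)*gamma*v for v."""
    return (p1 + beta * (p1 - p2) - s * p2) / ((1 - s) * (1 + gamma))

base = reserve_price(p1=100, p2=60, s=0.5, beta=0.0, gamma=0.0)
more_hp_regret = reserve_price(100, 60, 0.5, beta=0.5, gamma=0.0)
more_so_regret = reserve_price(100, 60, 0.5, beta=0.0, gamma=0.5)
print(round(base, 1), round(more_hp_regret, 1), round(more_so_regret, 1))
```

In this sketch, anticipated high-price regret raises the reserve price (a higher valuation is needed before buying early), while stockout regret lowers it, which is the kind of ordering across consumer types that the paper's reserve-price comparison examines.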
    Price and Service Strategies in Two Supply Chains with Competition and Cooperation
    WU Xiaoyong, NAN Jiangxia, ZHANG Maojun
    2025, 34(8):  233-239.  DOI: 10.12005/orms.2025.0267
    Abstract ( )   PDF (1443KB) ( )  
    References | Related Articles | Metrics
    With the development of information technology and the gradual implementation of digital technology application scenarios, the form of competition is evolving from competition between enterprises to one between supply chains. In addition, facing the complex market environment, in order to achieve better development, enterprises from different supply chains have also begun to seek cooperation. Therefore, in an era of digital economy, competition and cooperation among enterprises have been transformed into competition and cooperation among supply chains. Price is the main factor in the supply chain. However, with the continuous transformation of consumers’ consumption concept, service has gradually become an important factor affecting consumers’ decisions. Thus, it is important to study the price and service level strategies simultaneously in the supply chain.
    This paper considers two supply chains, each consisting of a manufacturer and a retailer; the retailers determine prices and service levels, and the manufacturers determine wholesale prices. To explore retailers’ price and service strategies and the optimal supply chain profits under various structures, this paper constructs four models: total integration (TI), horizontal competition with vertical integration (HCVI), horizontal integration with vertical competition (HIVC), and total competition (TC). First, the manufacturers, as leaders, set wholesale prices, and the retailers, as followers, decide prices and service levels. Then, the optimal price, service level, market demand, and supply chain profit are obtained by backward induction for the four structures. Second, the optimal strategies under the four structures are compared to identify the optimal supply chain structure. Finally, the paper investigates the relationship between the optimal price and service level across structures in symmetric and asymmetric markets. These findings provide a decision-making basis for strategy selection.
    Furthermore, this paper investigates how the intensity of price competition and service competition affects the optimal price, service level, demand, and supply chain profit. The findings include: (1) Of the four supply chain structures, the TI structure earns the highest profits. In the HIVC structure, retailers have the highest decision-making efficiency and respond quickly to the market; as price competition intensifies, retailers shift from price competition to service competition. The TC structure earns higher profits than the partial competition structures (HCVI, HIVC), which indicates that supply chain profits under total competition are not always the worst. (2) In the HIVC structure, less intense price competition leads to a higher price and a lower service level. Although this may increase supply chain profits, it harms consumers; therefore, collusion between retailers will decrease supply chain profits. (3) In the HCVI structure, vertical integration mitigates double marginalization, leading to a lower price and lower supply chain profit. In addition, the optimal strategies of manufacturers and retailers in the asymmetric market are analyzed through numerical examples, exploring the effects of the intensity of price and service competition on the optimal price, service level, and supply chain profit.
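The retail-stage equilibrium under chain-to-chain competition can be sketched with a best-response iteration for two symmetric retailers, each choosing a price and a service level while taking the wholesale price as given. The linear demand with cross-price and cross-service terms and the quadratic service cost are illustrative assumptions, not the paper's exact model.

```python
def best_response(p_rival, s_rival, a, b, th, ga, eta, w):
    """Retailer first-order conditions given the rival's price and service,
    for profit (p - w)*q - eta*s**2/2 with demand
    q = a - b*p + th*p_rival + s - ga*s_rival (requires 2*b*eta > 1)."""
    p = (a + th * p_rival - ga * s_rival + b * w - w / eta) / (2 * b - 1 / eta)
    s = (p - w) / eta          # service FOC: (p - w) = eta * s
    return p, s

# Illustrative parameters: th, ga are price/service competition intensities.
a, b, th, ga, eta, w = 10.0, 1.0, 0.3, 0.2, 2.0, 2.0
p, s = w, 0.0                  # symmetric starting point
for _ in range(200):           # iterate simultaneous best responses
    p, s = best_response(p, s, a, b, th, ga, eta, w)
print(round(p, 3), round(s, 3))
```

The iteration is a contraction here, so it converges to the symmetric Nash equilibrium at the retail stage; varying `th` and `ga` shows how price and service competition intensities move the equilibrium, which is the comparative-statics exercise the abstract describes.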
    This paper mainly studies retailers’ price and service level strategies under different supply chain structures. In this study, the manufacturer leads and the retailer follows in each supply chain; however, retailers such as Suning, Walmart, and JD are often the supply chain leaders. Furthermore, this paper does not consider consumer preferences: some consumers may care more about the cost-effectiveness of products, while others may value products’ value-added services. These are issues we will study in the future.