A Reinforcement Learning Approach to Robust Scheduling of Semiconductor Manufacturing Facilities
Published in IEEE Transactions on Automation Science and Engineering, 2020
As semiconductor manufacturers have recently focused on producing multichip products (MCPs), scheduling semiconductor manufacturing operations has become complicated due to constraints related to reentrant production flows, sequence-dependent setups, and alternative machines. At the same time, the scheduling problems need to be solved frequently to effectively manage the variabilities in production requirements, available machines, and initial setup status. To minimize the makespan for an MCP scheduling problem, we propose a setup change scheduling method using reinforcement learning (RL) in which each agent makes setup decisions in a decentralized manner while learning a centralized policy through a neural network shared among the agents, which allows the method to cope with changes in the number of machines. Furthermore, novel definitions of state, action, and reward are proposed to address the variabilities in production requirements and initial setup status. Numerical experiments demonstrate that the proposed approach outperforms rule-based, metaheuristic, and other RL methods in terms of the makespan while incurring shorter computation time than the metaheuristics considered.
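The key structural idea — every machine agent acting decentrally while sharing one set of policy weights, so the policy transfers unchanged when the machine count varies — can be illustrated with a minimal sketch. All names here (`SharedSetupPolicy`, `schedule_step`, the linear scoring rule, and the feature layout) are hypothetical stand-ins, not the paper's actual network or state definition:

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedSetupPolicy:
    """A single weight matrix shared by every machine agent, so the same
    policy applies no matter how many machines are present (an illustrative
    stand-in for the paper's shared neural network)."""
    def __init__(self, n_features, n_setups):
        self.W = rng.normal(scale=0.1, size=(n_features, n_setups))

    def act(self, state):
        # Greedy setup choice from a linear score; the paper learns a
        # neural network policy with RL instead.
        return int(np.argmax(state @ self.W))

def schedule_step(policy, machine_states):
    """Each machine picks its next setup decentrally using the shared policy."""
    return [policy.act(s) for s in machine_states]

# Hypothetical per-machine features: e.g. remaining demand per setup type.
policy = SharedSetupPolicy(n_features=3, n_setups=3)
for n_machines in (4, 6):  # the machine count can change between problems
    states = rng.random((n_machines, 3))
    print(n_machines, schedule_step(policy, states))
```

Because the decision function depends only on each agent's local state and the shared weights, adding or removing machines requires no retraining of a machine-count-specific model — the robustness property the abstract highlights.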
Recommended citation: Park, I., Huh, J.*, Kim, J., & Park, J. (2020), A Reinforcement Learning Approach to Robust Scheduling of Semiconductor Manufacturing Facilities, IEEE Transactions on Automation Science and Engineering, 17(3), 1420-1431. (SCIE)
Download Paper
