Time: 02:00 p.m.
Place: IC 02-718
Richard Otis, Engineering and Science Directorate, California Institute of Technology, Pasadena, CA, USA
Machine learning and artificial intelligence have been investigated by numerous authors within the materials domain for the purposes of building advanced materials models, accelerating the use of models for design via surrogate modeling, and uncertainty quantification and propagation. Recent efforts in all of these areas, both by the presenter and by others, will be briefly reviewed for context.
Most work to date has focused on the optimization of design and model spaces involving continuous variables; however, many materials design and model-building problems involve discrete or mutually exclusive decision-making, e.g., the choice of model formalism or turning certain physics on or off. Here we take inspiration from recent high-profile successes, outside of materials, in game-playing artificial intelligences using reinforcement learning techniques. We show that certain model-building tasks can be reformulated as a single-player "game," with decision-making represented as a tree of discrete actions. Synthetic, physically representative training datasets can be generated to efficiently teach the latent correlations between the derived simulation model outputs (e.g., a phase diagram) and the model inputs (e.g., a thermodynamic model). Once a policy function is learned, it encodes essential physics learned from mastering the model-building "game," which can then be applied to an entire class of real materials modeling problems.
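To make the "game" framing concrete, the following is a minimal sketch (not from the talk) of model selection as a single-player game over discrete actions. The toy setting, model form, parameter values, and scoring rule are all illustrative assumptions: each action includes or excludes one Redlich-Kister term in a binary excess-energy model, synthetic data are generated from a known two-term model, and the reward is the negative of a fit error plus a complexity penalty. A full tree-search or learned policy is replaced here by exhaustive enumeration, since the discrete action space is tiny.

```python
# Hypothetical sketch: thermodynamic model selection as a single-player "game".
# All names, parameter values, and the scoring rule are illustrative assumptions.
import itertools

TRUE_PARAMS = (8000.0, -2000.0)  # synthetic "ground truth": L0, L1 (J/mol, say)

def excess_energy(x, params):
    """Redlich-Kister excess energy for a binary mixture at mole fraction x."""
    return sum(L * x * (1 - x) * (1 - 2 * x) ** k for k, L in enumerate(params))

# Synthetic, physically representative training data from the true model.
xs = [i / 20 for i in range(1, 20)]
data = [(x, excess_energy(x, TRUE_PARAMS)) for x in xs]

def fit_and_score(include):
    """Play one 'game': fit the included terms, return (reward, params)."""
    params = [0.0] * len(include)
    # Crude coordinate-descent least squares (stdlib only); the model is
    # linear in its parameters, so repeated 1-D exact updates converge.
    for _ in range(200):
        for k, on in enumerate(include):
            if not on:
                continue
            basis = [x * (1 - x) * (1 - 2 * x) ** k for x, _ in data]
            resid = [y - excess_energy(x, params) + params[k] * b
                     for (x, y), b in zip(data, basis)]
            den = sum(b * b for b in basis)
            params[k] = sum(b * r for b, r in zip(basis, resid)) / den
    mse = sum((y - excess_energy(x, params)) ** 2 for x, y in data) / len(data)
    penalty = 100.0 * sum(include)  # prefer simpler models
    return -(mse + penalty), params

# Exhaustive "tree search" over the (small) discrete action space:
# each 0/1 flag decides whether the corresponding RK term is included.
best = max(itertools.product([0, 1], repeat=3),
           key=lambda inc: fit_and_score(inc)[0])
print(best)  # → (1, 1, 0): the two-term model matching the generating physics
```

In a realistic setting the action space is far too large to enumerate, which is where a learned policy function over the action tree earns its keep; the reward structure and synthetic-data generation, however, follow the same pattern as this sketch.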
With this approach to handling complex interdependencies between continuous and discrete action spaces, we expand the class of materials challenges that artificial intelligence can address.