**Abstract:** A digital twin of a system is a high-precision numerical simulation built by integrating system models with observational data, representing the pinnacle of understanding of that system. I will discuss the importance and feasibility of establishing a digital twin for the Chinese economic system, as well as the requirements for high-spatiotemporal-resolution economic datasets and the development of large-scale econometric models.

Dr. Songxi Chen is an Academician of the Chinese Academy of Sciences. He is currently serving as the President of the Chinese Society for Probability and Statistics for the term 2023-2026. Dr. Chen earned his Ph.D. in Statistics from the Australian National University in 1993. Prior to his full-time return to China, he held faculty positions at the National University of Singapore and Iowa State University. From 2010 to 2019, Dr. Chen served as the Founding Director of the Center for Statistical Science at Peking University. His research interests are diverse and include high-dimensional data inference, environmental modeling and assessment, empirical likelihood, statistical and machine learning, and stochastic process inference. Notably, his recent work on air quality assessment and epidemiology has had a significant impact on environmental and public health in China. Dr. Chen is a Fellow of the Institute of Mathematical Statistics, the American Statistical Association, and the American Association for the Advancement of Science. He is also an elected member of the International Statistical Institute.

**Abstract:** This era of big data is fascinating for statistics in general and data analysis in particular. More than ever, it has also revealed differing scientific attitudes toward data analysis and statistical research. As statisticians, we see both challenges and a responsibility for foundational developments in statistical inference and scientific modeling. This talk introduces a new principle, called the prediction principle. We argue that this principle can serve as a first principle for valid and efficient inference by exploring its implications in three key research directions:

- how the prediction principle can be used to refine both the principle of maximum likelihood and the likelihood principle,
- how statistical inference should be formalized, as the required reasoning is deductive, and
- how a general theory of scientific modeling might be achievable, despite the inherent challenges of inductive reasoning.

These discussions are illustrated using seemingly simple but unsolved problems in high-dimensional statistics and deep learning models. To prompt deeper reflections, the talk concludes with a few challenging problems.

Chuanhai Liu earned a correspondence diploma from Central China Normal University in 1985, a master's degree in Probability and Statistics from Wuhan University in 1987, and a PhD in Statistics from Harvard University in 1994. He worked at Bell Laboratories for ten years starting in 1995 and at Texas A&M as an Associate Professor in Spring 2004. Since 2005, he has been a Professor of Statistics at Purdue University. His research interests include the foundations of statistical inference, statistical computing, and applied statistics. Much of his work on iterative algorithms, such as Quasi-Newton, EM, and MCMC methods, is discussed in his book "Advanced Markov Chain Monte Carlo Methods" (2010), co-authored with F. Liang and R. J. Carroll. His work on the foundations of statistical inference, developing a new framework for prior-free probabilistic inference, is presented in his book "Inferential Models: Reasoning with Uncertainty" (2015), co-authored with R. Martin. In his research on statistical computing, he spent several years experimenting with SupR, a multi-threaded and distributed R software system for big data analysis. He is currently working on topics for a potential new book titled "Scientific Modeling: Principles, Methods, and Examples."