Big Universe, Big Data: emerging challenges in Astrophysics
June 25-26
Astrophysics is the stuff of dreams. Finding stardust, exploring potentially habitable new worlds, and looking back in time: these are just some of the things we do every day. In return for this huge coolness factor, we don’t get to make stars or any other cosmically relevant object in the lab, depriving us of the experimental verification tools that are so important in the physical sciences. This poses a unique set of challenges, which I will try to introduce in these lectures.
I spend a lot of time thinking about distant galaxies asking, for example: How far away are they? How many stars do they host? How and when did they assemble their stars, and what is their chemical enrichment history? Our time together will be a journey in open problems (many) and possible solutions (few, but exciting) in understanding the physical properties of galaxies, touching upon three main aspects:
1. The Universe is big. Like, really big, and ever-expanding to boot. This wasn’t much of a problem until recently, because we didn’t have the means to truly explore it, so we would just sort of ignore anything too far away. But this has changed dramatically in the last decade; we’ll soon have data for billions of galaxies, and the focus has shifted from how to get more data to what to do with the data we have. I will review some of the outstanding issues and the most promising and creative tools to address them.
2. We now have alternative approaches to building mathematical models of galaxies (“let’s suppose that this galaxy is a sphere…”). This comes from tremendous advances in data analysis and mining techniques and—perhaps even more importantly—in their availability to anyone with a little computing prowess and access to a cloud server. This provides amazing opportunities to create a richer view of the Universe, but it really changes the job description, and it opens the door to a whole new class of potential mistakes. I will try to explain how we can, in my opinion, stay true to the scientific method despite the shift in methodology.
3. Validation is not easy; in many cases, we don’t know what the truth is. So how can we make sure that our inferences make sense? Thankfully, large-scale computer simulations can come to our rescue: if we manage to simulate a Universe that looks like ours, we can probably believe that the ingredients we put in, and the interactions we prescribed, were reasonably correct. This tool is so powerful that I think it will become increasingly important; I will talk about what it takes to believe that what we learn from simulations helps us understand reality.
first lecture
slides / video
second lecture
slides / video + Q&A at 1:23:15
https://github.com/vacquaviva/ITS_SummerSchool_June2020
about the lecturer
Viviana Acquaviva is an astrophysicist using data science techniques to study the Universe. She is an Associate Professor in the Physics Department at the NYC College of Technology, CUNY and at the Graduate Center, CUNY; an associate member of the Center for Computational Astrophysics of the Flatiron Institute; a visiting research scientist at the American Museum of Natural History; and a Harlow Shapley Visiting Lecturer for the American Astronomical Society. She is passionate about cross-disciplinary research, supporting minorities in STEM, and silly puns, which she refuses to call "dad jokes". Currently, she is writing a textbook on Machine Learning methods for Physics and Astronomy, to be published by Princeton University Press.