CMU’s Zico Kolter shapes new paths for AI safety and security
Nov 04, 2024
If you harbor concerns about the ethical use of artificial intelligence in our world, you will be pleased to learn that industry leader OpenAI, the maker of ChatGPT, has recently instituted a Safety & Security Committee to study the potential impacts of new product development. And you'll be immensely pleased that Carnegie Mellon University computer science professor Dr. Zico Kolter is on that committee.

Kolter was appointed to OpenAI's nine-person board of directors this fall. He is the only AI researcher on the board and is director of CMU's Machine Learning Department.

OpenAI board Chair Bret Taylor said in a welcoming statement, "Zico adds deep technical understanding and perspective in AI safety and robustness that will help us ensure general artificial intelligence benefits all of humanity."

"The Safety and Security Committee will be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed," according to a September update on the committee from OpenAI.

Debate about the degree to which AI benefits or harms any segment of humanity has intensified in the three years since OpenAI introduced ChatGPT and DALL-E, cutting-edge tools capable of engaging in human-like conversations and creating sophisticated images based on text prompts.

Kolter's academic research in AI safety and security has dovetailed with corporate positions as chief data scientist at C3.ai, chief expert at the Bosch Center for AI and chief technical adviser at Gray Swan.
He joined the CMU faculty in 2012 following a postdoctoral fellowship at the Massachusetts Institute of Technology. NEXTpittsburgh asked him for his thoughts on preparing for new discoveries in AI development.

* * *

NEXTpittsburgh: It seems that AI, more than any other technological invention in history, is forcing us to reconsider what we believe to be the essential definition of humanity. As someone who's spent more than 20 years working with machine learning, how do you view this breakthrough moment?

Photo by Alexis Wary.

Kolter: Speaking just from my own perspective, I think there are two really interesting aspects to this. One is the very practical question of what we as humanity are going to do with these systems we're building. Regardless of your feelings about how capable these systems are currently and how capable they will evolve to be, they clearly already have the potential for massive impact. It says a lot about us how we deploy these systems, what we use them for and how we go about building them.

That's number one. Number two, getting more metaphysical now: there are fundamental questions these systems start to raise as to what intelligence is. What does it mean to be intelligent? So far, every sort of intelligence we've seen has been tied up with life, with humans or, to a lesser degree, animals. It's hard for us to imagine general intelligence being separate from living things.

But we are likely approaching a time where these things are going to be separate, a time where we will have systems that are undeniably intelligent. We as humans need to reckon with that. We need to understand what that means for our philosophies, our way of life, our work. It will ultimately pose very deep questions about our ethics.
NEXTpittsburgh: When you begin participating in the Safety & Security Committee on the OpenAI board, what will you look to accomplish?

Kolter: I want to emphasize that the beliefs I'm expressing here are entirely independent of my role with the Safety & Security Committee. Of course, I will bring these beliefs and perspectives and share them with the board of OpenAI and others. But these are my views as someone who's been in the field for quite a while now.

Whenever we release technology like this, it is going to have massive effects on the world. It is very important that we, as developers of these systems, think about those effects both from a very technical standpoint and from a slightly broader perspective. When we release these tools, what do we envision users and society as a whole doing with them? It should not just be AI researchers who think about these things. It should be all of society.

NEXTpittsburgh: For the various marginalized groups in our society who aren't tech-savvy or economically well-resourced, will AI technology prove to be an asset?

Dr. Zico Kolter and his team have published 27 articles in peer-reviewed science journals this year. Photo by Alexis Wary.

Kolter: As with any technology, there are massive issues of access and availability, both of the tools and of training. How do we think about who uses these, and for what purposes? My hope for AI broadly is that in many cases, certainly not all, a lot of work can be done to democratize access to these tools.

One thing that is very nice about current systems is that they're actually quite accessible compared with a lot of other computer systems. Users can interact with them more easily.
Most people intuitively grasp how to talk and communicate with these systems in a way that has traditionally been very hard with a lot of older computer systems. If used properly, these tools give a massive amount of capability and computing power to a very broad set of users who would otherwise be unable to accomplish certain things they now can accomplish with AI.

NEXTpittsburgh: So far this year you and your team have published 27 articles in peer-reviewed science journals. Are there any specific areas you're exploring?

Kolter: This work is done mainly by the Ph.D. students in my group here at CMU. I have a large group, and we're fortunately able to do a lot of really, really amazing research. The things I'm working on most right now fall into the buckets of AI safety and robustness: trying to understand how the data we use to train these models, the data we use to fine-tune them or control them, affects the performance of machine learning and AI models. We also do a lot of work on new architectures, or new methods, for AI systems.

What really determines how AI methods work is the data used to shape and build them. It's fascinating how these models absorb data and then generate brand-new content that is explicitly not the same as anything in the training set. I think we don't really understand that process fully. If AI seems like magic to you, that's because it seems a little bit like magic to us as well. We really need to better understand, from a scientific perspective, how this process functions.

NEXTpittsburgh: What advice would you offer on how to prepare for AI's impact in the next few years?

Kolter: The single best thing everyone can do is to start using these systems as they exist right now. I'm surprised sometimes by how infrequently people use these systems.
I'm shocked by how rarely people in my own profession, AI researchers, use these systems beyond the lab.

All of us need to become natives of generative AI tools. Use them for everything you do. Use them to draft ideas, do research, write code, create initial reports. Use them for fun, too. I use ChatGPT to write stories with my daughters. It's a ton of fun, and my kids love working with these systems when they can.

In a lot of cases, there's no manual for these systems. You just have a blank text box. It's kind of daunting the first time you see it. Start getting familiar with what these systems can do right now and what they can do at their absolute frontier. They're going to become more and more a part of our lives. Use them productively and for fun. You will be much better equipped to handle the progression of these systems that's likely coming.

For more ethical concerns about AI to consider, we recommend this guide to using generative AI from the University of Alberta in Canada, which touches upon environmental concerns, potential copyright implications, bias, privacy and accuracy issues.