Nov 25, 2024
HONOLULU (KHON2) -- Artificial intelligence (AI) is becoming a part of our daily lives, from chatbots answering customer service queries to powerful algorithms powering the apps on our phones. But despite AI’s rapid growth and increasing capabilities, there’s something fundamental that most of us don’t fully understand: how AI learns.

Nicole Cacal, AI expert and founder of Forbes Ignite, a former consultancy for Fortune 500 companies, has been at the forefront of this field for years. Her work brings together business, technology and design; and she has observed how AI has evolved in a way that is both exciting and, in many cases, confusing. Cacal believes we are just starting to scratch the surface of AI’s potential while, at the same time, grappling with its unknowns.

Have you seen the docuseries What's Next? The Future with Bill Gates on Netflix? In the first episode, Gates meets with today's greatest innovators in AI technology and discovers that AI has "woken up" and that humans have no idea how it is learning. So, KHON2.com explored this discovery with Cacal.

Cacal highlighted a crucial issue: the reality that humans don’t fully understand how AI learns.

“The problem is that AI has woken up, so to speak; and we don't understand how it learned that way,” Cacal explained. “It’s like we’re sitting here with this incredibly powerful technology, but we don’t know exactly how it got to where it is today.”

The awakening of AI

For years, AI flew under the radar, with slow progress happening in the background. It wasn’t until recently, thanks to consumer-facing technologies like chatbots and ChatGPT, that AI gained public attention. Cacal noted that these tools have made AI more accessible, which has led to rapid development and a sudden leap in its abilities.
“We’ve been in the AI space for decades; but it wasn’t until recent years, thanks to things like chatbots, that AI really woke up,” Cacal said. “It’s no longer just a niche technology used by researchers; now, it’s something we interact with daily. And that’s part of the reason it feels like AI is evolving so quickly.”

However, the more accessible AI becomes, the more questions arise about how it learns, what it knows and whether we can trust it. In the docuseries, Gates interviews experts who admit that AI’s learning process is still largely a mystery.

“It’s a powerful tool,” one of Gates’ colleagues said in the episode. “But we don’t truly understand how it’s learning. We don’t know what inputs are shaping its decisions.”

This lack of understanding, Cacal argued, is a big deal.

“It’s one thing to use AI for basic tasks, like typing out a message or asking a question,” she said. “But when it comes to more complex decisions -- like who gets a loan or which job applicants are chosen -- it’s essential that we understand how these systems are learning.”

The implications of not understanding AI’s learning process

The fact that we don’t fully understand how AI learns has profound implications, especially in terms of its influence on society. Cacal explained that one of the biggest risks of AI’s current trajectory is the potential for it to reinforce existing inequalities.

“AI is a tool that can either exacerbate existing social problems or help solve them,” she said. “It all depends on how we develop it.”

Cacal believes that AI’s ability to influence consumer behavior and social dynamics is a double-edged sword. On one hand, AI could democratize access to information and wealth-building opportunities. On the other hand, if left unchecked, it could widen existing socioeconomic gaps. The technology is powerful; but without intentional development, AI could lead to unintended consequences.

“We need to be intentional about how we develop AI,” Cacal stressed.
“It’s not just about creating something cool or new. It’s about creating something that serves humanity in a responsible way.”

A key part of AI development is understanding biases. But Cacal doesn’t believe we should aim for “bias-free” AI. Instead, she advocates for “explicit bias” in AI: knowing exactly what the AI is looking at and why.

“Bias is inevitable in AI because humans are the ones programming it,” she explained. “But we need to be clear about what those biases are. We need an AI that we can control, one that agrees with our standards of ethics, morality and law.”

The need for AI transparency

Cacal is a strong advocate for AI transparency, but transparency alone isn’t enough. To guide the future development of AI, we need to understand how our inputs are training it.

“Before we even talk about transparency, we need to understand how our prompts, our questions and our actions are shaping AI’s learning,” she said. “If we don’t know that, then we’re essentially playing with a time bomb.”

She compared AI development to design. In her work, Cacal often talks about design as a process of creating solutions within constraints.

“If you design something without constraints, you’re just guessing,” she said. “When you approach AI with design thinking, you can better understand what you’re designing for, what the goals are and how to achieve them.”

By thinking about AI as design, we can start asking the right questions and ensure that we’re creating systems that are not only functional but ethical.

“We need a co-creative process,” Cacal added. “It can’t just be a top-down approach where only a few people are making decisions about how AI should work. The community should have input.”

Cacal believes that when more people understand how AI works and how it’s being developed, we can demand better, more inclusive solutions.

“If we involve more people in the process, we’re more likely to create AI that benefits everyone, not just a select few,” she said.
The role of education in AI interaction

One of the biggest challenges, Cacal said, is ensuring that we teach people how to interact with AI in a productive way.

“The more we interact with AI, the more we’re going to have to learn to communicate with it effectively,” she explained. “AI is like a tool, but the way we use it depends on how we ask questions and how we frame our inputs.”

Cacal’s point about the future of education is crucial. As AI becomes more integrated into our lives, we will need new skills to work alongside it.

“Just like learning to communicate with people, we need to learn to communicate with AI,” Cacal said. “It’s not about how AI learns; it’s about how we learn to interface with it.”

Cacal foresees a future where education systems evolve to teach these new skills.

“We may see new majors and courses focused on how to work with AI,” she said. “It’s not just about technical skills anymore. It’s about learning how to ask the right questions, how to design systems and how to think critically about the tools we’re using.”

A co-creative future with AI

While some are worried about the rapid development of AI, Cacal is optimistic about its potential to improve lives, provided it’s used correctly.

“I truly believe that AI has the power to elevate human capabilities,” she said. “If we learn to use AI responsibly, we can enhance our critical thinking and creativity.”

However, there’s a catch. If we want AI to be a force for good, we need to ensure that it’s developed with a clear, intentional purpose.

“AI is not a cure-all,” Cacal cautioned. “It’s a tool. And like any tool, it can be used for good or ill. It’s up to us to decide how we wield it.”

As AI continues to grow and develop, our relationship with it will evolve as well. Cacal emphasized that this relationship must be built on understanding, intention and collaboration.

“We have to create a future where humans and AI work together,” she said. “It’s not about controlling AI.
It’s about guiding it in a way that benefits all of us.”

Shaping the future together

As AI continues to “wake up,” it’s clear that its development will have profound implications for society. But as Cacal pointed out, understanding how AI learns is just the first step. The real challenge lies in how we interact with this technology and how we design it to reflect our values.

“We’re at a crossroads,” Cacal added. “We can either let AI develop on its own, with little understanding of how it works; or we can take an active role in shaping its future. The choice is ours.”

You can click here to learn more about Cacal and her work.

For now, the key is to approach AI development with intention, transparency and inclusivity. If we can do that, we just might create a future where AI doesn’t just learn on its own. It will learn to work with us, for the betterment of humanity.