Hey there! Let’s dive into the fascinating world of artificial intelligence. It’s a topic that’s sparking a lot of excitement, fear, and debate these days. Is AI a force for good or bad, or perhaps something we’re still trying to fully grasp? Today, we’re sitting down with the renowned computer scientist and AI researcher, Mária Bieliková, to explore these pressing issues surrounding AI, its impact on humanity, and the broader ethical dilemmas and questions of trust it raises.
Congratulations on winning the prestigious ESET Science Award! How does it feel to receive this recognition?
Thank you! It’s truly a moment of immense gratitude and happiness. Receiving the award from Emmanuelle Charpentier herself was an unforgettable experience, filled with intense emotions. This award isn’t just for me – it belongs to all the incredible people who have been part of this journey with me. In the world of IT and technology, achievements are a result of team efforts, not individual endeavors.
I’m particularly thrilled that this award has recognized the field of IT and AI for the first time. It’s also telling that the 2024 Nobel Prizes honored AI-related work, with both the Physics and Chemistry prizes going to advances rooted in AI.
I’m also proud of the Kempelen Institute of Intelligent Technologies, which has established itself as a key player in the AI ecosystem of Central Europe.
Mária Bieliková, a leading Slovak computer scientist, has made significant contributions to human-computer interaction analysis, user modeling, and personalization. Her work also delves into data analysis and modeling of antisocial behavior on the web. She is a prominent voice in discussions about trustworthy AI, combating disinformation, and leveraging AI for societal good. Ms. Bieliková is also the co-founder and head of the Kempelen Institute of Intelligent Technologies (KInIT), where ESET serves as a mentor and partner. She recently won the Outstanding Scientist in Slovakia category of the ESET Science Award.
Author and historian Yuval Noah Harari has observed that the future is so uncertain we can no longer be sure what to teach today’s children. As someone deeply engaged in AI research, how do you envision the world two decades from now, especially in terms of technology and AI? What skills will be essential for the children of today?
The world has always been complex and unpredictable. Technology today amplifies these challenges in ways that demand quick adaptation. AI not only streamlines tasks and replaces human labor but also introduces new structures and synthetic entities that could lead to unforeseen crises.
Technology, consciously or not, can drive societal divisions. It’s not just about digital threats to infrastructure; it’s also about manipulating human thoughts through lightning-speed propaganda dissemination, a phenomenon unimaginable a few decades ago.
Envisioning the society of the future is a daunting task. We might even see our meritocratic system shift from one built on evaluating knowledge toward a more inclusive model that unifies society. The way we handle data and information may also change as it becomes harder to trust what we see and hear.
In the future, children will likely prioritize practical knowledge over rote learning and test scores. The energy they invest in meaningful action, together with social and emotional skills, will matter more than cognitive prowess alone.
As AI advances, it challenges traditional notions of human uniqueness. Do you believe René Descartes’ assertion, “I think, therefore I am”, needs reevaluation in an era where machines can “think”? How close are we to AI systems that could redefine human consciousness and intelligence?
AI systems, particularly large foundation models, are reshaping the AI landscape with continuous improvements. OpenAI’s recent models such as o3 and o3-mini show significant progress, but true artificial general intelligence (AGI) remains out of reach with current technology.
While AI excels in specific tasks and even outperforms humans in certain domains, it lacks genuine comprehension. Machines may reason to some extent, but they lack consciousness and emotional depth. Whether this dynamic will evolve or if our understanding of intelligence will transform is uncertain.

The idea of “to create is human” faces new challenges as AI systems generate art, music, and literature. How does the emergence of generative AI impact human creativity? Does it enhance or diminish our identity as creators?
The debate on creativity and AI is ongoing. While AI can produce art, music, and literature, the essence of human creativity remains distinct. AI-generated content may offer novelty, but it lacks the profound human touch that resonates with our emotions and relationships.
Art plays a vital role in our society, serving as a conduit for human connections and narratives. While AI-generated art has its place, it may not evoke the same depth of emotion and meaning as human-created art supported by technology.
AI’s rapid progress in machine learning and generative AI has surprised many. How fast is too fast? Is this pace sustainable and desirable? Should we pause AI innovation to grasp its societal impacts, or does slowing down risk impeding valuable breakthroughs?
The unprecedented speed of AI advancements is driven by global competition and technological breakthroughs. Balancing progress with ethical considerations is crucial, especially as AI’s capabilities expand beyond our comprehension.
Slowing down AI innovation may not be feasible in our current societal framework. Instead, a paradigm shift is needed to ensure that AI development aligns with ethical standards and addresses societal implications proactively.
Investing in understanding AI’s consequences and evaluating its societal impact is essential. Research initiatives like those at the Kempelen Institute, exploring the ethical dimensions of AI, are instrumental in shaping responsible AI practices.
AI holds promise in addressing global challenges like healthcare and climate change. Where do you see AI making the most significant impact, both ethically and practically? Can AI be a solution to humanity’s pressing issues, or do we risk overestimating its capabilities?
AI presents a double-edged sword in tackling critical challenges while introducing new risks. Its applications in healthcare, such as drug development and protein structure prediction, showcase its potential. However, AI’s capacity to create synthetic organisms poses unforeseen threats.
While AI aids in raising awareness about issues like climate change, it also fuels disinformation and societal divisions. Recognizing the dual nature of AI is crucial in leveraging its potential while mitigating its negative impacts.
Concerns have been raised about AI becoming a threat to humanity. How can we balance responsible AI development with innovation without succumbing to alarmism?
Navigating the risks posed by AI requires a delicate balance between exploration and ethical considerations. It’s essential to invest in understanding AI’s implications on society and individuals to ensure responsible development.
Collaboration between multidisciplinary teams is key to navigating the ethical challenges of AI. By fostering dialogue and research on AI’s societal impact, we can strike a balance between innovation and responsible development.
Building trust in AI is crucial for its acceptance globally. How can the AI research community cultivate trust in AI technologies and ensure their ethical use across diverse societies?
Multidisciplinary research is vital for evaluating AI’s impact on individuals and society. With deep neural networks now dominating AI methods, a holistic approach to how systems are developed and deployed is more important than ever.
Collaboration between the public and private sectors is essential in fostering trust in AI technologies. By prioritizing ethics and transparency in AI research and development, we can build a foundation of trust and credibility in AI systems.
AI regulation lags behind technological advancements. How can AI researchers contribute to policies that ensure ethical AI development? Should they play a more active role in shaping AI regulations?
Ethical considerations must be integral to AI research and product development. Engaging experts in ethics and regulations from the outset is crucial in navigating the ethical dilemmas posed by AI innovation.
AI researchers can contribute to policy by prioritizing transparency and ethics in their own work. By collaborating with policymakers and ethicists, they can help craft regulations that promote responsible AI development.
As AI researchers navigate ethical dilemmas, how do you balance the imperatives of AI development with ethical considerations, particularly in personalized AI systems and data privacy?
Building transparency and ethics in from the outset is essential. Close collaboration with experts in ethics and data privacy helps keep the pace of AI development in balance with those imperatives, particularly where personalization and personal data are involved.
Large technology companies play a significant role in shaping the future of AI. How vital is it for these corporations to lead by example in promoting ethical AI, inclusivity, and sustainability?
The collaboration between academia and industry is crucial in shaping a future where AI aligns with societal values. Initiatives like the AI Awards highlight the importance of trustworthy AI practices and ethical innovation.
Large companies can lead by example in promoting ethical AI practices and inclusivity. By prioritizing ethical considerations and sustainability in AI development, corporations can set a precedent for responsible AI innovation.
Thank you for joining us in this insightful conversation!