An 8-question interview with David Van Bruwaene, Co-Founder and CEO of Fairly AI, through the eyes of Liza Rizzardi, a Gen-Z student entering her final year at university.
As I enter my final year at university, I'm increasingly intrigued by the advances of AI in the educational sector. While I, like many Gen Z students, grew up surrounded by technology, the introduction of generative AI tools in our learning environments is something relatively new. I started my university life right in the middle of Covid-19, jumping from one Zoom lecture to another, and now I'm getting a taste of innovative new AI tools that are flipping the script on how I learn; the pace of progress is clearly relentless. And if I'm being honest, it's got me both excited and a bit anxious about what this all means for us students transitioning from academic life to the professional world. So, I thought, why not go straight to the source? I had the privilege of interviewing David, Co-founder of the innovative startup Fairly AI, to delve deeper into this fascinating confluence of AI and education.
1. Do you think there's a risk that relying too heavily on AI tools like ChatGPT could hinder the critical thinking and research skills students have traditionally been taught in universities?
David: Definitely. While tools like ChatGPT are revolutionary, they're not without drawbacks. Relying excessively on such technologies can indeed jeopardize essential skills like critical thinking. It's crucial to strike a balance. While these tools can aid in structuring sentences and phrases, over-reliance might mean students don't fully develop their conceptual content and expression abilities. This can impact their real-time conversational skills. At the same time, interactive use of these tools can provide learning opportunities, much like when the alphabet was first introduced and writing on papyrus became common: people's memory skills suffered, but on the flip side, information could now be transferred across generations. Every new technology has its benefits and drawbacks.
2. Given the rise of platforms like ChatGPT and other generative AI tools, how do you see them shaping the learning experience for incoming freshmen versus those in their final year?
David: Both incoming freshmen and graduating students are adapting to new AI technologies like ChatGPT. The distinction lies in foundational skills. Established university students have honed their literacy skills over time, enabling them to leverage AI more effectively in their final years. They can think abstractly, carefully construct prompts, and rapidly assimilate diverse information, potentially supercharging their capabilities. On the other hand, there's curiosity surrounding incoming freshmen who use these tools throughout their college journey. Will they lack certain literacy foundations, or might their extensive exposure to AI offer unique advantages? Ideally, students would blend traditional language-focused learning, such as poetry writing, with the high-level analytical opportunities that AI prompts provide. The true impact remains speculative and will be clearer as the AI-exposed generation graduates.
3. With notable institutions like Harvard integrating AI into introductory computer science courses, what's your stance on the widespread adoption of generative AI in academic settings?
David: I believe it's a double-edged sword. While AI can enhance learning, especially in tech courses, we must be wary of diluting the human element in education, which fosters creativity and individual expression. Incorporating AI in courses at renowned institutions like Harvard is a progressive step, acknowledging the evolving technological landscape. As technology advances, there are shifts in what's foundational. For instance, programming languages like C++ or Python represent a higher level of abstraction than earlier languages, much like the move from compasses to GPS in navigation. It's essential for education to keep pace with these advancements. Students are meant to leverage existing knowledge and be at the cutting edge. Ignoring AI's presence or pretending it isn't influencing academic outcomes can lead to inequities. Those who adhere to prohibitions due to moral convictions could be disadvantaged. Recognizing and integrating AI in education is essential, though finding the optimal approach will be an evolving process.
4. Since you have taught Ethics at Cornell and the University of Waterloo and with the rapid advancement of AI, what are some key philosophical or ethical challenges you foresee in introducing generative AI to the classroom?
David: The main challenge lies in representation and fairness. AI often mirrors societal biases, so unchecked use in education can perpetuate stereotypes, subtly influencing young minds. We need to ensure that AI in classrooms respects diversity and fosters inclusivity. Introducing generative AI like ChatGPT to the classroom raises several ethical and philosophical issues.
To begin with, consider the Prisoner's Dilemma. If all students collectively decide not to use generative AI to assist in their assignments, valuing the integrity of their education, everyone would benefit from an even playing field. However, the temptation for an individual student to break this pact for personal gain—achieving higher grades with less effort—echoes the dilemma prisoners face in the classical scenario. If one believes their peers are capitalizing on AI's capabilities, they too might be driven to do the same, fearing disadvantage otherwise. This mutual distrust can lead to a situation where most students rely on AI, thus diluting the value of their education.
This behavior parallels the Tragedy of the Commons. Just as herdsmen might be tempted to overgraze a shared pasture for personal benefit, jeopardizing it for everyone, students could collectively "overgraze" the benefits of generative AI, ultimately devaluing the education they receive. The resource here isn't grass, but the integrity and value of education. Over-reliance on AI might yield short-term individual gains but can diminish the collective worth of the educational experience.
In conclusion, while the ethical quandaries of AI in education might seem novel, they echo age-old dilemmas of individual vs. collective benefit. And as with all ethical challenges, solutions require careful consideration and a balancing of interests.
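David's Prisoner's Dilemma framing can be made concrete with a tiny payoff matrix. The numbers below are hypothetical, chosen purely to illustrate the structure he describes (they do not come from the interview): whatever a peer does, using AI is individually tempting, yet mutual use leaves everyone worse off than mutual restraint.

```python
# A minimal sketch of the Prisoner's Dilemma applied to student AI use.
# Payoffs are illustrative stand-ins for "grade benefit minus lost learning".

# (student_a, student_b) payoffs for each pair of choices.
# "abstain" = do the work unaided; "use_ai" = lean on generative AI.
PAYOFFS = {
    ("abstain", "abstain"): (3, 3),  # level playing field, real learning
    ("abstain", "use_ai"):  (0, 4),  # the abstainer is outcompeted
    ("use_ai", "abstain"):  (4, 0),
    ("use_ai", "use_ai"):   (1, 1),  # everyone's degree is devalued
}

def best_response(opponent_choice):
    """Return the choice that maximizes a student's own payoff,
    given what they believe the other student will do."""
    return max(
        ("abstain", "use_ai"),
        key=lambda mine: PAYOFFS[(mine, opponent_choice)][0],
    )

# Defecting dominates: it is the best response to either peer choice...
print(best_response("abstain"))  # -> use_ai
print(best_response("use_ai"))   # -> use_ai
# ...yet mutual use (1, 1) is worse for both than mutual abstention (3, 3).
```

The same payoff structure underlies the Tragedy of the Commons parallel: each individual "grazing" decision is rational in isolation, but the equilibrium degrades the shared resource.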
5. As someone at the intersection of AI, philosophy, and education, how do you think the job market will evolve for students graduating in the next 5 years? What skills will be most crucial?
David: While many fear that emerging technologies will drastically alter everything, in reality, many things will remain unchanged. As history shows, with the introduction of new technologies, there are initial adjustments. However, there's a point of saturation—much like an asymptotic curve that rises sharply and then levels off. Hence, while some areas might undergo disruption, the majority will stay the same. The job market will increasingly favor those adept at collaborating with AI. Apart from technical skills, soft skills like adaptability, critical thinking, and emotional intelligence will be invaluable.
The main shifts we can anticipate in the next five years are a decrease in jobs centered on routine writing, image production, and video production. Those in such roles will face heightened productivity expectations and will need to embrace new technologies to remain competitive, much as transitioning from pen and paper to word processors once became a necessity.
Younger entrants to the workforce might find themselves at an advantage as they can easily adapt to newer tools and methods without the baggage of old habits. Engaging with new technologies, like ChatGPT, with an imaginative and creative approach can lead to the development of unique, in-depth skills. As these individuals master the art of generating high-quality content efficiently, they will position themselves as valuable assets in the evolving job market. Furthermore, as technology reduces business operational costs, we might see a surge in startups, offering additional employment opportunities.
6. How can AI help students who find traditional learning tough or have disabilities?
David: AI can be transformative for such students. For instance, speech-to-text tools can assist those with visual impairments, and personalized AI tutors can cater to individual learning paces, making education more inclusive.
Moreover, AI can help bridge gaps in professional settings. For individuals who may face challenges due to cultural differences, limited education, or other factors, tools like ChatGPT can refine their ideas into professional communications, leveling the playing field.
Lastly, emerging AI technologies promise to aid those with visual or auditory impairments. Consider innovations like glasses equipped with cameras, which can identify and verbally convey potential hazards. Such advancements will be game-changers, further integrating and empowering individuals with disabilities in various life scenarios.
7. From your dual lens of an educator and AI entrepreneur, where do you believe generative AI falls short in educational settings?
David: Generative AI, while robust, tends to generalize knowledge, so using it without careful and intentional application can result in minimal educational value. Generative AI also typically produces outputs that lack individuality and style. For instance, the difference between a good poem and a great one is not just structure, but the imperfection, creativity, and unique style that resonates with readers. That quality can't be reduced to a formula; a great poem works on its own merits and makes sense in the very way it is expressed. Think of autocorrect and tools like ChatGPT that aim for precision but often overlook the nuance of words. Take, for example, the word "fairy." In modern spelling, it's "F-A-I-R-Y," but in older English, it's "F-A-E-R-I-E." Such subtleties can be lost with automation, which is truly tragic. It's essential for students to appreciate these imperfections and understand the diverse ways of expression that vary from one person to the next, one place to the next, and one generation or century to the next. Think of the unique spellings and stylistic choices of poets like John Milton: their deviations from the norm convey a depth of emotion and meaning that AI might overlook.
8. Your company's name, "Fairly AI", suggests a focus on equitable AI. How do you envision ensuring educational AI tools benefit all students equally, such as those with disabilities?
David: Fairly AI underscores our dedication to equitable AI, especially within educational tools. The utilization of AI technologies can grant equal opportunities, especially for students with disabilities, allowing them to communicate and engage more effectively. However, challenges remain. Digital access disparities can leave certain students behind. Additionally, there's a pressing issue with representational harms. If automated content continuously perpetuates cultural biases, like gender stereotypes, these seemingly minor biases can accumulate, echoing the societal injustices faced by marginalized groups in history. Such unintentional, frequent biases can gradually erode one's sense of worth and contribute to systemic discrimination. We aim to actively combat this by ensuring our tools don't simply amplify existing prejudices and by investing in equality-focused optimizations.
As my conversation with David concluded, I felt both enlightened and empowered. Generative AI holds massive promise for the classroom, but it's up to us, the Gen-Z torchbearers, to ensure that it's harnessed ethically and equitably.