I admit that when it comes to generative artificial intelligence (AI) tools, like ChatGPT, I have been a late adopter. As a writer who makes a living connecting ideas, the thought of society outsourcing such tasks to machines makes me reluctant to embrace these seemingly magical tools. I have written many stories about the positive ways people in various areas of science are harnessing the power of AI—including in biopharmaceuticals, materials science, and ecology. Yet I have remained hesitant when it comes to my own day-to-day work.
Against this backdrop, I read Salman Khan’s book Brave New Words: How AI Will Revolutionize Education to see how AI is changing the landscape in the classroom, specifically with adult basic education (ABE) teachers and students in mind. Khan puts forward a sweeping vision of how generative AI can become a partner to teachers and students alike. The examples he gives throughout the book of use cases in STEM fields, as well as the humanities, are creative and wide-ranging. But while I was impressed, my skepticism largely persists.
Khan is transparent from the outset that he and his team at Khan Academy were given early access to OpenAI's large language model, GPT-4, to test its capabilities in the education space. As CEO of Khan Academy, Khan wanted to see how his team could incorporate the technology into a new type of tutor and classroom helper, Khanmigo.
Throughout the book, Khan therefore discusses the capabilities they were able to build for Khanmigo using AI. It was fascinating to read about their journey and see the results. At the same time, Khan seemed so eager to onboard these new tools that he devoted little substantive discussion to the technology's limits or to the broader context in which it will be used. The result is that much of the book feels like an ad for Khanmigo and ChatGPT.
While AI has the power to democratize learning across many dimensions, that can only happen with consistent access for educators and their students to the same platforms. Khan rightly points out how teachers can work with students to show them how best to use AI on their assignments—with an emphasis on proactive, collaborative learning (like interactive brainstorming and lesson planning) rather than simply policing cheating.
To succeed, however, the same best practices will have to be more universally adopted. Students working on an assignment today face many uneven options for how AI can help them—from which AI tool to use to how best to integrate the results into their work while still learning. We are thus at an awkward transition point where many people are using the technology without a foundation of best practices.
Setting aside these and other socio-ethical issues related to generative AI, the book includes some examples that ABE teachers will likely appreciate:
- Khan shares an example in which he prompted Khanmigo to explain how the drug Ozempic works. The result was a two-way exchange, in which the AI tool both answered questions and then asked questions back. For example, when explaining what it is used for, Khanmigo said, “Ozempic is used for diabetes. It’s also used for weight loss. Do you know how it helps with these things?” Khan as the user then said, “On the diabetes side, I assume it helps regulate blood sugar in some way. I don’t know how it helps with weight loss.” And the conversation continued in that back-and-forth manner. Seeing how the model pushed back on the user to answer questions and generate their own hypotheses was refreshing, and the conversation was livelier than I expected it would be.
- In an example of how AI might help direct students to accurate information amidst a sea of information and misinformation, Khan shared a prompt of a student asking Khanmigo to engage in a debate about whether Earth is flat. Khanmigo said: “I’m sorry, but I cannot take the counterargument that the Earth is flat, because it is a scientifically established fact that the Earth is an oblate spheroid, or in simpler terms, round.” It then went on to share various pieces of supporting evidence. In another example, the AI model presented information about global warming, including from skeptics, but distinguished fact from misinformation, laying out the evidence.
- There were also many examples throughout the book of ways the model could help an educator brainstorm new ideas in support of learning. Sticking with the global warming example, Khan asked the AI tool to come up with ideas for creative projects to help students better grasp the concepts. It suggested a demonstration of the greenhouse effect, laying out how to do so. The model also generated lesson plans around the topic, as well as quizzes and assessments (reminiscent of how one ABE site coordinator is using the technology).
Among the challenges Khan discusses around these and other examples is that the models may fabricate sources for the information they provide. I wanted to test how AI handles citations, so I used DuckDuckGo’s AI-driven chat to see what would happen if I asked it for information about Darwin’s voyage on the Beagle. Consistently, the model said it did not have the ability to give me the sources of the information it provided and instead pointed me toward general types of potential sources—which appears to be how platforms are handling the models’ earlier tendency toward false sourcing. One question that remains for me, therefore, is how students can learn essential research skills like source-based evidence while using this technology.
One last example I will provide from the book that is arguably one of the most fun is how Khan and his team used Khanmigo to interview or debate real-world figures, like George Washington or Rembrandt. Undoubtedly, students can learn a lot about these figures in a more interactive way using this feature than from reading a static story online.
As someone who interviews people almost every day, however, I noticed that the language in the examples does not represent how real people talk (setting aside historical differences in language). At least in works of fiction, writers try to approximate natural conversational styles. Without that, I wonder how such technology could change kids’ expectations of real-world communication and the true nature of people's stories. I also wonder how it handles conversations on behalf of people whose stories have been told less often and are less visible, potentially widening the storytelling gap even further, especially for those underrepresented in science.
If nothing else, Brave New Words is a conversation starter—forcing us all to think about our and our children’s and students’ relationships with technology as we enter further into the age of generative AI. Although I remain concerned about the potential of AI to bypass some essential critical-thinking skills taught in the science classroom, the book did inspire me to start testing how the tool might be used. I now feel as though I have a better understanding of both its limits and its strengths. In the meantime, my own children are already racing ahead, prompting AI on everything from fall recipe ideas to questions about the Big Bang. It’s time to catch up, and, as Khan put it, go forth with “educated bravery” to take the next steps.