Two weeks ago, I began my collaboration with the Bok Center on their chatbot design initiative. When shopping for projects at the beginning of the semester, I knew I wanted to find something with more of a technical bent. Since AI was going to be the focus, I wanted to use this more scaffolded environment as an opportunity to look under the hood and see how these models operate. The project at the Bok Center particularly stood out to me for its focus on the more granular details of how chatbots work and how they can be developed for clients' specific needs.
My work thus far has confirmed this and more. First, working in the Bok Center itself has been an eye-opening experience in its own right. Thoroughly committed to embracing the multimodal potential of education in the 21st century, the center's space exists as one large studio. It's hard to find a corner of the office that doesn't have a camera, microphone, or green screen set up so content can be created with ease. This existing relationship with technology naturally carries over into the center's approach to the AI tools that have cropped up over the past half decade.
The work I have found myself performing, though, has been on the back end, which here has meant taking advantage of the API capabilities of different GPT models. The Bok Center has particularly leaned into this use case to supplement its existing content creation abilities. Routinely, audio is converted into text, which is then attached to pictures, completely translating the medium of an interaction in an instant. The applications of this workflow to some of my own academic and professional interests, particularly the world of philosophy for children, have made me all the more excited for the work.
Thus far, I have been learning the process behind each of these steps when performing an API call, whether it be text-to-text, text-to-image, or voice-to-text. This work has been done in Google Colab, a coding environment I had not previously been acquainted with but which is fantastic in its ability to seamlessly weave together code and its documentation.
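To make the idea of chaining these calls a bit more concrete, here is a minimal sketch of what such a sequence might look like in a Colab cell using the OpenAI Python SDK. This is my own illustrative example rather than the Bok Center's actual pipeline: the file name and model choices are placeholders, and it assumes an API key is already set in the environment.

```python
# A minimal sketch (not the Bok Center's actual pipeline): chaining voice-to-text,
# text-to-text, and text-to-image calls with the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; the audio file name and
# model choices are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# 1. Voice-to-text: transcribe a short audio clip.
with open("lecture_clip.mp3", "rb") as audio_file:  # hypothetical file
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Text-to-text: condense the transcript into a single descriptive prompt.
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Turn this transcript into one vivid, image-ready sentence:\n\n"
                   + transcript.text,
    }],
)
image_prompt = summary.choices[0].message.content

# 3. Text-to-image: generate a picture from that prompt.
image = client.images.generate(
    model="dall-e-3",
    prompt=image_prompt,
    n=1,
    size="1024x1024",
)
print(image.data[0].url)  # link to the generated image
```

Each step simply hands its output to the next, which is exactly the kind of instant translation of medium described above.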
Our current goal for this Bok Center project is to create an installation where students and teachers can see these unique applications of AI in education spaces. Carried out in collaboration with another T127 group, this project will display both custom GPTs made through OpenAI's and Anthropic's more user-friendly front ends and interlinked API calls performed in Google Colab and Jupyter notebooks. We want viewers to walk away from this installation with a greater understanding of how GPTs can be tinkered with quite simply to unlock a gamut of more personalized functionalities, and how those functionalities can be specifically leveraged in education settings.
This project with the Bok Center has already proven extremely fruitful, and I'm excited to see how our work toward the installation plays out in tandem with the new skills we develop.