
The Progression of Ubiquity


 Last week I attended an event hosted by WMUX in which instructors discussed how they were using AI in their courses, and how they were writing policies into their syllabi in an attempt to guide students whom they assume will be using AI anyway.

I think, like many things, AI is being deployed, used, and expanded at a rate far out of proportion to what those of us in academic realms assume, and to what those in the corporate world assume as well. We are making assumptions about the upcoming generations and their understanding of various technologies. AI risks becoming an embedded technology before the children who are under ten today have any chance to form an opinion or have a say about it.

When it comes to how to bring AI into the classroom - whether as a tool to be used (by educators and students), an enemy to be grappled with, or a savior to put outsized hopes in - I have opinions, but they quickly slip into the foolishness of prophesying. Perhaps I do this in an attempt to comfort myself, brace myself, or avoid considering what is coming. In this way I recognize that I am probably not so very different from those who learned about global warming and climate change in the 80s and 90s: people who did what they thought they could and then allowed themselves to be swept along in what came next, because at the end of the day the kind of commitment it takes to go up against what feels inevitable demands a lot of emotional and physical sacrifice.

When I do indulge in dreaming up scenarios for a future in which AI becomes embedded into our world in ways that magnify our humanity, I always seem to wind up dreaming up ways in which that could be bent or overapplied to be harmful as well. In this way I do not know how people do it - the people who invent such things, who develop dreams and plans and marketing campaigns for them, who develop storylines and publish books and talk about the progression of the stories we use to reflect our own thoughts, fears, and dreams. How does one empower oneself to that extent? Or is it foolish to think there is any point at which people feel like their hand is on the pulse of the movement of it all?

Was there a handful of CEOs, PR reps, or marketing gurus who stood in a room and planned that as many citizens as possible in this country would have a smartphone of some sort in their hand for some percentage of their every day? And were they believed? Were they the same people who successfully deployed that dream? Or did they fail (maybe the creator of the BlackBerry) while their dream became a domino that ultimately made it come true, just not in the way they imagined?

I wonder that about AI. I wonder if there are people who are thinking: one day there are going to be AI bots in every single instructional institution in this country. Perhaps these bots will be programmed to the learning style of every new batch of students they interact with. Such a bot would be a teacher's aide and learning assistant, capable of asking children the kind of inquiry-minded questions that help them become flexible and generative learners, while also seeing immediately where learning experiences could be transitioned into collaborative sessions of student connection. Could having such a being in class result in classrooms that didn't have to feel so meritocracy-based? If a bot could avoid the human emotional struggles that come with identifying some kids as easier than others, some kids as having parents who care more, or some kids as hopeless causes - and with responding to those kids over time, unconsciously or consciously, with differing levels of responsiveness and belief in their abilities and futures - what could that look like? And how would that impact the long-term development of the students coming out of that environment?

If I were instructing a course right now, I think I would take the tack of trying to help students establish some guidelines for AI use. I would also be using AI to help me brainstorm an identity inventory for my students, to help them contextualize themselves as users of AI and consider how their identities could impact their use of AI in my class and beyond it.

Or look at the collegiate level. If getting into a certain level of schooling at certain colleges afforded you a research bot of your very own - if it meant a level of support and protection for your work, your time, and your emotional state, so that you were able to work, and not just work, but work from a state of mind more often afforded to people with certain privileges that others have less of - could that be of benefit? Or would it just be another way to exploit people? Just another way to contribute to a glut of information without application? Could a bot be programmed to connect to bots out in the field? Could there be safeguards in place for an agreement that translated research into practice in ways that ensured the creator of that research would reach a point of sustained support in their field - support that did not require them to continue working, but instead allowed them to lead a self-directed, well-resourced life: retirement of a sort, without the decades-long anxiety it takes to get there?

But then also: if we make bots to do so much of the work now done by those who don't have access to, or a desire to work in, jobs that 'couldn't be done by an AI bot' - or who have enough people with enough privilege (time and resources specifically) to rally or connect with one another and become a class of worker protected from AI encroachment - what happens to the people who are left out? We have, historically and now, a very poor track record of showing up for people who don't fit the standards of 'who deserves rights and comfort' in our environments.

I kind of don't doubt - unless climate change makes it impossible - that AI bot ubiquity will follow a similar arc to any other technology. It is already becoming a given in the way Googling something became a given. People who talk openly about using AI for this or that task - professional, personal, interpersonal, practical - are already embedded in the life I am living, at least. I have not polled all my friends on their experiences or use, but it would certainly be interesting to do so.

In the realm of instructional design, and of imagined futures, I dream of leading a class in which every learner has a topic they are seeking to change their mind on, or to better understand on an emotionally authentic level. How we emotionally connect to the opinions and ideas we hold is an important thing for us to be aware of.

AI in combination with VR tech seems like a powerful way to guide people through learning experiences generated immediately from a person's own conceptions and perceptions of the world. Guided by a method of mindfulness and purpose, provided within the structure of a class intended to help people learn how to connect with themselves (and then maybe with others too), those could be powerful experiences: maybe in helping people find their place in the world, maybe in connecting them with ideas and paradigms they have never encountered before, maybe in helping them develop more emotional mindfulness about how they learn and how emotions impact what they learn.

