ai Thinking

So… I’m in a spot that feels uncomfortable, especially with generative ai in education, or anywhere. Normally, I’m an early adopter, excited about new technology, tools, software, etc. Not being in that same spot right now with generative ai feels really off. I did write a tiny bit here in December of 2022, but I had not thoroughly thought things through when I posted those musings. I was initially intrigued when I first tried some generative ai tools. Full disclosure: I’ve used it some. I even used it to help write a published journal article. I now wish I had not. “Using with revulsion…” I think I heard that on the Search Engine podcast.

First, some great resources

I’ve been reading, listening, and learning quite a bit. The following is a list of professionals, resources, and podcasts I’ve found very helpful.

  • Tom Mullaney – Check out his website and Substack. Threads profile. Bluesky. Tom has a great Bluesky Starter Pack list of folks thinking critically about ai.
  • Emily M. Bender – Professor in the Department of Linguistics, University of Washington. Website. Bluesky
  • Per Axbom – Awesome resources on their website, especially AI Ethics… Follow them on Bluesky, too.
  • Reid Southen – Film concept artist and illustrator. Website. (also on Bluesky)
  • Kristen Mattson – Website link. (also on Bluesky)
  • Karla Ortiz – Puerto Rican artist. Website. Instagram. (also on Bluesky)
  • Ed Newton-Rex – CEO of Fairly Trained. Website. (also on Bluesky)
  • Neil Turkewitz – Medium. (also on Twitter)
  • Copyright Alliance – Website. Bluesky
  • Benjamin Riley – Founder of Cognitive Resonance. Website. (also on Bluesky)
  • Ian Krietzberg – Editor-in-Chief of The Deep View. (also on Bluesky)
  • Charles Logan – Learning Sciences PhD Candidate at Northwestern University. (also on Bluesky)
  • Mystery AI Hype Theater 3000 Podcast – “Artificial Intelligence has too much hype. In this stream, linguist Prof. Emily M. Bender and sociologist Dr. Alex Hanna break down the AI hype, separate fact from fiction, and science from bloviation.” Podcast link. Also, on all the platforms. (occasional explicit language)
  • Distributed AI Research Institute – “AI is not inevitable. We DAIR to imagine, build & use AI deliberately.” Website. LinkedIn.
  • Search Engine Podcast – “Search Engine is the podcast that tries to answer the questions that keep you up at night. A podcast made by humans that provides the answers that neither artificial intelligence nor actual search engines really can.” Podcast link. Also, on all the platforms. (occasional explicit language)
  • Wanda Terral – Director of Technology for the Lakeland School System. Wanda shares lots of great edtech-y information. I see many of her shares on Facebook, but she also has a great website. (also on Bluesky)
  • Elissa Malespina – Elissa created a great group on Facebook, The AI School Librarian. I learn a ton there, especially in the comments. She also has a newsletter with the same name on Substack. (also on Bluesky)

Some of my updated thinking

In no particular order…

Define ai

I’m pretty much talking about “generative” ai. I constantly get the argument: “ai has been around forever,” “you’ve been using it for a long time.” I’m not talking about spellcheck, Google Maps, or important cancer research. I’m mostly concerned with technology that creates content that sounds/looks/reads like it was made by a human.

From our KDE AI Guidance Brief, “GenAI is only one type of AI that can create new content, such as text, images, or music, based on patterns it has learned from its training data and/or language models (e.g., large language models (LLM) are huge deep learning models that are pre-trained on vast amounts of data).”

Demand from “industry”

One place where I’m stuck: I keep seeing messages like, “Employers are demanding workers with generative AI skills.” I’m not convinced they really are. If they do have this demand, is it for a good reason, or did they just see an article or a commercial about ai somewhere? It probably will be part of our lives, but not in the way the ai companies and tech hypists are claiming. We are a whopping two, almost three, years into general access to generative AI. Do “industry” folks demand instant training? Do they expect k-12 educators to train students? Post-secondary educators? Who is teaching the educators these “skills”?

So… as I was typing this, I had a quick Facebook discussion. It was proposed that workers are already being replaced by workers who can use ai, and that corporations use ai so they need fewer workers. Is it computer science skills or machine learning skills that they need? I still don’t really know what “ai skills” are. If people are unemployable not because of their lack of ai knowledge but because there are fewer jobs… I don’t see much of a solution.

“One recent graduate told Handshake they were ‘hesitant to use generative AI because it doesn’t seem “officially” accepted or commonplace yet, and I would feel lazy and guilty for using it to do work for me.’” – Business Insider

I think students need to be critical, ethical, and responsible, but not necessarily users of generative ai at all at this point. I’m getting along relatively fine in life, and the internet in its current form didn’t even exist for most of my time as a k-12 and undergraduate student. How was I able to adapt? To continue to learn? Those are the skills I believe we should continue to focus on.

Fair use

I often get copyright questions in my role in education, and I immediately think about fair use exceptions. I’m going to go through the four factors a judge would weigh in a case where a copyright holder believes a use was infringing and not fair use. (I’m copying and pasting these factors from the U.S. Copyright Office website.) For this exercise, I’m going to choose OpenAI’s ChatGPT product.

  • 1. Purpose and character of the use: The use is of a commercial nature or is for nonprofit educational purposes, and the issue of transformative use: I feel like in this example, ChatGPT is definitely a commercial product, and on a vast scale. The transformative part is a little more complex. I can see arguments going both ways. Overall, not a good argument for fair use.
  • 2. Nature of the copyrighted work: creative/imaginative versus factual work: This factor is difficult to call. There is not a ton of transparency about which “publicly available” works were used for training. For the image generators, the scraped works were certainly creative and imaginative. Overall, not a good argument for fair use.
  • 3. Amount and substantiality of the portion used in relation to the copyrighted work as a whole: This one is rough. I’m assuming they used entire works, the whole article, the whole image, etc. Overall, it is not a good argument for fair use.
  • 4. Effect of the use upon the potential market for or value of the copyrighted work: This factor is so important and complicated. Huge effects on the market for and value of creative works are a real possibility.

If… big if. If things develop over the next months and years, courts make fair use and copyright decisions, we can avoid the environmental impact and the trauma-inducing need for humans to tag terrible images, real privacy issues are resolved, plus a few other things… then I could start to see a path where we need to teach the “use” of ai. Computer science, machine learning, data analysis, coding… I’m all for that, but using generative ai?

I’ve grown up enjoying every new technology. Love learning new tools. Love sharing new tools. I could not wrap my head around a stereotypical “Luddite” mindset. I’m now starting to understand it, though. Why are we here on this planet? What are we trying to do? What are we rushing to get to? I keep seeing things that say, “AI will save you so much time…” What are we doing with that time? Are we getting paid less? Working more? Are we just rushing to the next project? The costs just seem huge, and the benefits… I’m not sure there even are any benefits.

Loneliness

Last-second addition… (which I also shared on the AI School Librarian Facebook group). This article named another thing I think I’ve been feeling about generative AI. I keep using the word “icky,” but there is also a nagging and depressing feeling of loneliness. Along with the loneliness, so many tools are designed as deception machines. This is worth the five-minute read: “I Created an A.I. Voice Clone to Prank Telemarketers. But the Joke’s on Us.” by Evan Ratliff.

Ok… end of this capturing of a moment… 😅
