Response w/out AI

Answering Mike’s concerns.

Bits inline….

I Wonder:

How would you cite an AI-derived source? I can imagine my research methods professor saying that we couldn’t use that as a valid research tool. Something that would not be accepted as an academic source. In the early days of the digital age, when we were both in college, we could only use printed peer-reviewed academic journal articles, magazine articles, and other publications on microfiche. Only academic sources that we could put our hands on physically. Internet citations were a new thing, and APA, MLA, and the Chicago Manual of Style had not spelled out how to document a source from the World Wide Web. The internet was a “Brave New World” full of information that some in the academic world did not accept as valid or academic. Including my research methods professor. How would you cite an AI source if it pulled its answer from multiple sources to generate the answer? Will there be a custom URL generated by that AI to copy and paste into the footnotes to direct you to its own sources of reference?

I think you’d cite it similarly to how you cited early Intertubes articles: URL, date retrieved.

To add further support, I’d recommend making a copy (PDF? PNG?) that shows exactly what you saw, as it was when you saw it.

In addition to the links all being broken when I link to something like Twitter, I want something more permanent.

(Some of this is self-reflective since I periodically delete all of my Tweets. As I wrote recently, whatever I might have spat out years ago likely isn’t representative of how I behave or think now. Sorry Ms. SHRMP with your five-figure Federal student loans…)

AI has the potential to slowly erode the ability of future scholars to actually do the footwork and research of the subjects they are studying.

I generally agree. But that’s why you need to have something in mind before you turn to the AI to test your premises.

Will the questions the researcher asks the AI generate only a certain kind of response, reinforcing the researcher’s worldview of the subject?

Maybe? At the same time, it shows that the researcher is actually thinking enough to ask relevant questions.

We already live in a world where academics and politicians only look to source material that reinforces the ideas they have or the ideas they want the general public to have. A person’s individual “bias” would lead the researcher to use language in their questions that gets a certain type of answer.

It’s not nice to pigeonhole MSNBC viewers like that, Mike.

AI in the wrong hands would tend to make the general public more ignorant of the true state of the world and the issues we deal with today. All the while leaving everyone feeling empowered, because they believe that AI has the power to present them with more information and a broader perspective on the issues than they would be able to see left to their own devices. In reality, the information presented would be shaped to push a certain viewpoint and create a “groupthink” mentality that doesn’t have a clue or see the whole picture in the end. That would be by design of whoever controls the algorithm.

I see where you’re coming from. At the same time, I think you have to have a general idea of where you’re going before you engage the tools. Then you use those tools either to help confirm your suspicions or to tell you that you’re full of shit.

I like it when someone proves me wrong about something and helps me figure out where I was led astray.

But that’s not good for the dying media platforms.

So. Whatever.

This kinda fits with the controversy surrounding RFK Jr.’s potential debate on Rogan.

Whatever. I’m pretty confident that there’s not going to be anything worthwhile discussed.

So I just don’t listen. Whatever. /GenX