Issues inherent in Artificial Intelligence
When using AI, stay skeptical and keep your critical thinking on high alert.
- Hallucinations: An overload of input or a lack of data to mine can lead AI to generate nonsensical, incorrect, or made-up information, including fake citations.
- Being Flat-Out Wrong: AI can reproduce whatever bad information it mines, or conflate closely related ideas that are frequently mentioned together.
- Theft of Intellectual Property: AI programs mine material from online sources that are not behind a paywall, without the permission of the authors or creators; the output, particularly when unattributed, may therefore constitute intellectual property theft. A number of well-known authors and entertainers have filed lawsuits against OpenAI (the creator of ChatGPT), alleging copyright infringement because their protected works were used to train ChatGPT without permission or compensation.
- Propaganda: AI output can be tainted by misinformation and propaganda, such as doctored images and "deepfake" videos. For example, until at least 2018, a website with "martinlutherking" in its URL was run by neo-Nazi propagandists.
- Lack of Transparency: Because AI developers do not disclose their algorithms, researchers cannot analyze the output for inherent bias or assess the accuracy of the source material. Nobody but the company knows what information went into the AI, so we cannot form accurate judgments about it.
- Security: Like any other technology, artificial intelligence is susceptible to malware, spam, phishing, and other forms of privacy breach and data theft. For more, see Artificial Intelligence and Personal Security. Also remember that your questions may be used to further train the AI and could be revealed to other users in their answers, so do not share personal information with AI platforms.
- Data Quality and Accuracy: AI draws on data it can find by scouring the free Internet, where data sets and statistics may be incomplete, incorrect, or open to misinterpretation, and these errors can be repeated in an AI response. Per ChatGPT: "Inequities in data collection can lead to underrepresentation or misrepresentation of certain groups, making it difficult for the models to provide fair and unbiased information or support for those groups" (ChatGPT, September 2023).
- Algorithmic Bias: Bias can creep into algorithms for many reasons, including bias in the original data, in the design of the algorithm, or in the evaluation of its output. You can read more about algorithmic bias in this IBM white paper.
- Language and Cultural Bias: AI tends to favor the dominant languages and cultures prevalent in the data used to train the program. This can result in less accurate and less comprehensive responses for users from marginalized linguistic or cultural backgrounds (ChatGPT, August 2023).